Instruction: Insensitivity to scope in contingent valuation studies: reason for dismissal of valuations? Abstracts: abstract_id: PUBMED:22963163 Insensitivity to scope in contingent valuation studies: reason for dismissal of valuations? Background: The credibility of contingent valuation studies has been questioned because of the potential occurrence of scope insensitivity, i.e. that respondents do not react to higher quantities or qualities of a good. Objective: The aim of this study was to examine the extent of scope insensitivity and to assess the relevance of potential explanations that may help to shed light on how to appropriately handle this problem in contingent valuation studies. Methods: We surveyed a sample of 2004 men invited for cardiovascular disease screening. Each respondent had three contingent valuation tasks from which their sensitivity to larger risk reductions (test 1) and to change in travel costs associated with participation (test 2) could be assessed. Participants were surveyed while waiting for their screening session. Non-participants were surveyed by postal questionnaire. Results: The sample was overall found to be sensitive to scope, testing at the conventional sample-mean level. At the individual respondent level, however, more than half of the respondents failed the tests. Potential determinants for failing the tests were examined in alternative regression models but few consistent relationships were identified. One exception was the influence of more detailed information, which was positively associated with willingness to pay and negatively associated with scope sensitivity. Conclusion: Possible explanations for scope insensitivity are discussed; if cognitive limitations, emotional load and mental budgeting explain scope insensitivity there are grounds for rejecting valuations, whereas other factors such as the alternative theoretical framework of regret theory may render insensitivity to scope a result of rational thinking. It is concluded that future contingent valuation studies should focus more on extracting the underlying motives for the stated preferences in order to appropriately deal with responses that are seemingly irrational, and which may lead to imprecise welfare estimates. abstract_id: PUBMED:15531390 Scope insensitivity in contingent valuation of complex environmental amenities. It has been argued that respondents in contingent valuation (CV) surveys, asked to value complex environmental amenities, will state willingness to pay (WTP) independently of the scope of the project. Such insensitivity to scope would be at odds with rational choice, and could therefore imply that CV is not a theoretically valid method for biodiversity valuation. The scope test in the present CV study was applied to endangered species preservation. Respondents were split in four sub-samples facing different scopes of endangered species preservation. The design allowed for both external and internal scope tests. Furthermore, the tests were split according to elicitation format. Of four external tests of insensitivity to scope, one was rejected, two gave mixed results, depending on either the type of test or elicitation format, and for the last one the null hypothesis could not be rejected. Of five internal tests, insensitivity to scope was rejected in three cases, one test gave mixed results, and one could not be rejected. 
Survey design features of the CV study, especially an unfamiliar sub-group of endangered species, could explain the apparent insensitivity to scope observed. abstract_id: PUBMED:28436139 Improving scope sensitivity in contingent valuation: Joint and separate evaluation of health states. We present data of a contingent valuation survey, testing the effect of evaluation mode on the monetary valuation of preventing road accidents. Half of the interviewees was asked to state their willingness to pay (WTP) to reduce the risk of having only 1 type of injury (separate evaluation, SE), and the other half of the sample was asked to state their WTP for 4 types of injuries evaluated simultaneously (joint evaluation, JE). In the SE group, we observed lack of sensitivity to scope while in the JE group WTP increased with the severity of the injury prevented. However, WTP values in this group were subject to context effects. Our results suggest that the traditional explanation of the disparity between SE and JE, namely, the so-called "evaluability," does not apply here. The paper presents new explanations based on the role of preference imprecision. abstract_id: PUBMED:29288935 The contingent valuation study of Heiðmörk, Iceland - Willingness to pay for its preservation. The decision-making and policy formation context in Iceland has been largely devoid of total economic valuations in cost-benefit assessments. Using an internet survey and applying the double bounded dichotomous choice methodology, this contingent valuation study sets out an estimate of the total economic value pertaining to Heiðmörk, a popular recreational area of urban open space located on the fringes of Reykjavík, Garðabær and Kópavogur. In so doing, this case study advances the practice of using non-market valuation techniques in the country. The welfare estimates provide evidence that Icelanders consider Heiðmörk to possess considerable total economic value, with taxpayers willing to pay a mean lump-sum tax in the range 17,039 to 24,790 ISK per payment to secure its preservation, equating to an estimated total economic value of between 5.87 and 35.47 billion ISK. In the light of possible competitive land management demands among Heiðmörk's three owners and many recreational users in the future, the establishment of these values and their potential use in cost-benefit assessments informs the debate concerning whether the area should be preserved or further developed to satisfy economic objectives. Additionally, a body of experimental evidence is formed suggesting that the increased duration of a fixed payment vehicle is associated with much higher total economic valuations compared to a one-year payment period. abstract_id: PUBMED:31082756 Inferred valuation versus conventional contingent valuation: A salinity intrusion case study. People's willingness-to-pay values may be inflated by a variety of influences (e.g. hypothetical bias), which means that stated preference validity tests remain relevant. Recently developed inferred valuation approaches may serve to identify and/or reduce inflated stated preference values. However, economic applications of inferred valuation approaches are relatively limited in the literature, and the evidence remains mixed. This paper examines farmers' willingness-to-pay for salinity intrusion mitigation programs in the Mekong River Delta of Vietnam using both conventional contingent and inferred valuation approaches. 
Inferred valuation estimates were as much as 31 per cent lower than conventional estimates of willingness-to-pay, and averaged about 24 per cent lower across the groups. We discuss these findings, and the role that commitment costs and provision point mechanism payment vehicles may play. Public policy implications for any future salinity intrusion mitigation program are also outlined. abstract_id: PUBMED:23457024 A note on the expected biases in conventional iterative health state valuation protocols. Background: Typical health state valuation exercises use tradeoff methods, such as the time tradeoff or the standard gamble, involving a series of iterated questions so that a value for each health state by each individual respondent is elicited. This iterative process is a source of potential biases, but this has not received much attention in the health state valuation literature. The issue has been researched widely in the contingent valuation (CV) literature, which elicits the monetary value of hypothetical outcomes. Methods: The lessons learned in the CV literature are revisited in the context of the design and administration of health state valuations. The article introduces the main known biases in the CV literature and then examines how each might affect conventional iterative health state valuations. Results: Of the 8 main types of biases, starting point bias, range bias, and incentive incompatibility bias are found to be potentially relevant. Furthermore, the magnitude and direction of the biases are unlikely to be uniform and depend on the range of the value (e.g., between 0 and 0.5). Limitation: This is an overview article, and the conclusions drawn need to be tested empirically. Conclusions: Health state valuation studies, like CV studies, are susceptible to a number of possible biases that affect the resulting values. Their magnitude and direction are unlikely to be uniform, and thus empirical studies are needed to diagnose the problem and, if necessary, to address it. abstract_id: PUBMED:30258339 Contingent Valuation Studies in Orthopaedic Surgery: A Health Economic Review. Background: A greater emphasis on providing high-value orthopaedic interventions has resulted in increased health economic reporting. The contingent-valuation method (CVM) is used to determine consumer valuation of the benefits provided by healthcare interventions. CVM is an important value-based health economic tool that is underutilized in orthopaedic surgery. Questions/purposes: The purpose of this study was to (1) identify previously published CVM studies in the orthopaedic literature, (2) assess the methodologies used for CVM research, and (3) understand how CVM has been used in the orthopaedic cost-benefit analysis framework. Methods: A systematic review of the literature using the MEDLINE database was performed to compile CVM studies. Search terms incorporated the phrase willingness to pay (WTP) or willingness to accept (WTA) in combination with orthopaedic clinical key terms. Study methodology was appraised using previously defined empirical and conceptual criteria for CVM studies. Results: Of the 160 studies retrieved, 22 (13.8%) met our inclusion criteria. The economics of joint arthroplasty (n = 6, 27.3%) and non-operative osteoarthritis care (n = 4, 18.2%) were the most common topics. Most studies used CVM for pricing and/or demand forecasting (n = 16, 72.7%); very few studies used CVM for program evaluation (n = 6). WTP was used in all included studies, and one study used both WTP and WTA.
Otherwise, there was little consistency among included studies in terms of CVM methodology. Open-ended questioning was used by only ten studies (45.5%), a significant number of studies did not perform a sensitivity analysis (n = 9, 40.9%), and none of the studies accounted for the risk preference of subjects. Only two of the included studies applied CVM within a cost-benefit analysis framework. Conclusion: CVM is not commonly reported in orthopaedic surgery and is seldom used in the context of cost-benefit analysis. There is wide variability in the methods used to perform CVM. We propose that CVM is an appropriate and underappreciated method for understanding the value of orthopaedic interventions. Increased attention should be paid to consumer valuations for orthopaedic interventions. abstract_id: PUBMED:15250748 The measurement of contingent valuation for health economics. In health economics, contingent valuation is a method that elicits an individual's monetary valuations of health programmes or health states. This article reviews the theory and conduct of contingent valuation studies, with suggestions for improving the future measurement of contingent valuation for health economics applications. Contingent valuation questions can be targeted to any of the following groups: the general population, to value health insurance premiums for programmes; users of a health programme, to value the associated programme costs; or individuals with a disease, to evaluate health states. The questions can be framed to ask individuals how much they would pay to obtain positive changes in health status or avoid negative changes in health status ('willingness to pay'; WTP) or how much they would need to be paid to compensate for a decrease in health status or for foregoing an improvement in health status ('willingness to accept'; WTA). In general WTP questions yield more accurate and precise valuations than WTA questions. Payment card techniques, with follow-up bidding using direct interviews with visual aids, are well suited for small contingent valuation studies. Several biases may be operative when assessing contingent valuation, including biases in the way participants are selected, the way in which the questions are posed, the way in which individuals interpret probabilities and value gains relative to losses, and the way in which missing or extreme responses are interpreted. An important aspect of all contingent valuation studies is an assessment of respondents' understanding of the evaluation method and the valuation task. Contingent valuation studies should measure the potential influence of biases, the validity of contingent valuation tests as measures of QOL, and the reliability and responsiveness of responses. Future research should address equity concerns associated with using contingent valuation and explore contingent valuation as a measure of utility for health states, particularly those that are minor or temporary. abstract_id: PUBMED:23775623 Timing effects in health valuations. This paper analyzes the impact of external sources of information, conveyed by the frequency of risky events that vary across time, on the individual willingness to pay (WTP) for a reduction of mortality risk. We collected data from a contingent valuation (CV) exercise conducted in two waves (fall and winter) to examine whether individual WTP varied across periods that differed in the predominance of fatal accidents.
Risk valuations were based on fatal snow avalanche accidents, that is, a type of risk with seasonal differences in occurrence. We found slightly lower but statistically significant mean WTP figures in the winter than in the fall sample because of time-varying individual risk attitudes and, therefore, recommend controlling for these factors in risk assessment CV surveys. abstract_id: PUBMED:32654269 Altruistic and Private Values For Saving Lives With an Oyster Consumption Safety Program. We use data from an Internet-based survey and estimate the benefits of an oyster consumption safety policy with the contingent valuation method. In addition to providing a context-specific estimate of willingness-to-pay for oyster safety, we consider an important issue in the contingent valuation mortality risk reduction literature. A number of studies find that willingness-to-pay for mortality risk reduction is not sensitive to the scope of the risk change. We present the scope test as a difference in the number of lives saved by the program, instead of small changes in risk, and find that referendum votes are responsive to scope. A third feature of this article is that we identify those at-risk respondents who would most benefit from the policy and decompose willingness-to-pay into use values and altruistic nonuse values. We find that willingness-to-pay per life saved ranges from $3.95 million to $7.69 million for the private good of lives saved when the respondent is at risk (i.e., use values). Willingness-to-pay per life saved including both use and altruistic nonuse values ranges from $6.89 million to $12.87 million. Answer: Insensitivity to scope in contingent valuation (CV) studies refers to the phenomenon where respondents do not adjust their willingness to pay (WTP) according to the quantity or quality of the good being valued, which is counterintuitive to economic theory that suggests WTP should increase with the scope of the good (PUBMED:22963163). This issue raises concerns about the credibility of CV studies and whether the valuations obtained are reliable for informing policy decisions. The literature presents mixed findings on scope insensitivity. Some studies have found that respondents are sensitive to scope at a group level, but individual responses often fail to reflect scope sensitivity (PUBMED:22963163). Other studies have shown that scope insensitivity can be rejected in some cases, while in others, the null hypothesis of insensitivity cannot be rejected, suggesting that the phenomenon is not consistent across different CV applications (PUBMED:15531390). Several potential explanations for scope insensitivity have been proposed, including cognitive limitations, emotional load, mental budgeting, and the theoretical framework of regret theory (PUBMED:22963163). Additionally, survey design features, such as unfamiliarity with the good being valued, may contribute to apparent insensitivity to scope (PUBMED:15531390). Improving scope sensitivity in CV studies has been a focus of research. For example, joint evaluation of health states has been shown to increase WTP with the severity of the injury prevented, although context effects may influence the results (PUBMED:28436139). Furthermore, the timing of the survey and external information can impact WTP, as seen in studies where WTP varied with the seasonality of risks (PUBMED:23775623). In conclusion, while scope insensitivity is a concern in CV studies, it is not a definitive reason for dismissing valuations outright. 
Instead, researchers should focus on understanding the underlying motives for stated preferences and improving survey design to address potential biases and enhance the reliability of WTP estimates (PUBMED:22963163). It is also important to consider the context and specific application of CV, as different settings may yield different levels of scope sensitivity (PUBMED:15531390; PUBMED:28436139; PUBMED:23775623).
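The distinction drawn above between sample-mean and individual-level scope sensitivity (PUBMED:22963163) can be made concrete with a small numerical sketch. The Python snippet below uses entirely invented willingness-to-pay figures, not data from any of the cited studies, to show how a sample can pass an aggregate scope test (here a paired t-test) even though a sizeable share of individual respondents state no higher WTP for the larger good.

# Hypothetical illustration of a scope test: does stated willingness to pay (WTP)
# rise when the good on offer is larger (here, a bigger risk reduction)?
# All numbers are invented and are NOT data from the cited studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # hypothetical number of respondents

# Within-subject design: each respondent states WTP for a small and a large risk reduction.
wtp_small = rng.gamma(shape=2.0, scale=50.0, size=n)
# On average the large-reduction WTP is higher, but the noise is large enough that
# many individual respondents do not state a higher amount.
wtp_large = wtp_small + rng.normal(loc=15.0, scale=40.0, size=n)

# Aggregate (sample-mean) scope test: paired t-test on the two WTP series.
t_stat, p_value = stats.ttest_rel(wtp_large, wtp_small)
print(f"mean WTP small: {wtp_small.mean():.1f}, large: {wtp_large.mean():.1f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Individual-level scope test: share of respondents whose stated WTP actually
# increases for the larger risk reduction.
share_sensitive = np.mean(wtp_large > wtp_small)
print(f"respondents passing the individual-level scope test: {share_sensitive:.0%}")

With these invented parameters the aggregate test is passed comfortably while roughly a third of respondents still fail the individual-level test, which is the qualitative pattern described in the first abstract.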
Instruction: Can a prediction model combining self-reported symptoms, sociodemographic and clinical features serve as a reliable first screening method for sleep apnea syndrome in patients with stroke? Abstracts: abstract_id: PUBMED:24378806 Can a prediction model combining self-reported symptoms, sociodemographic and clinical features serve as a reliable first screening method for sleep apnea syndrome in patients with stroke? Objective: To determine whether a prediction model combining self-reported symptoms, sociodemographic and clinical parameters could serve as a reliable first screening method in a step-by-step diagnostic approach to sleep apnea syndrome (SAS) in stroke rehabilitation. Design: Retrospective study. Setting: Rehabilitation center. Participants: Consecutive sample of patients with stroke (N=620) admitted between May 2007 and July 2012. Of these, 533 patients underwent SAS screening. In total, 438 patients met the inclusion and exclusion criteria. Interventions: Not applicable. Main Outcome Measures: We administered an SAS questionnaire consisting of self-reported symptoms and sociodemographic and clinical parameters. We performed nocturnal oximetry to determine the oxygen desaturation index (ODI). We classified patients with an ODI ≥15 as having a high likelihood of SAS. We built a prediction model using backward multivariate logistic regression and evaluated diagnostic accuracy using receiver operating characteristic analysis. We calculated sensitivity, specificity, and predictive values for different probability cutoffs. Results: Thirty-one percent of patients had a high likelihood of SAS. The prediction model consisted of the following variables: sex, age, body mass index, and self-reported apneas and falling asleep during daytime. The diagnostic accuracy was .76. Using a low probability cutoff (0.1), the model was very sensitive (95%) but not specific (21%). At a high cutoff (0.6), the specificity increased to 97%, but the sensitivity dropped to 24%. A cutoff of 0.3 yielded almost equal sensitivity and specificity of 72% and 69%, respectively. Depending on the cutoff, positive predictive values ranged from 35% to 75%. Conclusions: The prediction model shows acceptable diagnostic accuracy for a high likelihood of SAS. Therefore, we conclude that the prediction model can serve as a reasonable first screening method in a stepped diagnostic approach to SAS in stroke rehabilitation. abstract_id: PUBMED:33211156 Prospective multicentric validation of a novel prediction model for paroxysmal atrial fibrillation. Background: The early recognition of paroxysmal atrial fibrillation (pAF) is a major clinical challenge for preventing thromboembolic events. In this prospective and multicentric study we evaluated prediction scores for the presence of pAF, calculated from non-invasive medical history and echocardiographic parameters, in patients with unknown AF status. Methods: The 12-parameter score with parameters age, LA diameter, aortic root diameter, LV,ESD, TDI A', heart frequency, sleep apnea, hyperlipidemia, type II diabetes, smoker, ß-blocker, catheter ablation, and the 4-parameter score with parameters age, LA diameter, aortic root diameter and TDI A' were tested. Presence of pAF was verified by continuous electrocardiogram (ECG) monitoring for up to 21 days in 305 patients. Results: The 12-parameter score correctly predicted pAF in all 34 patients, in which pAF was newly detected by ECG monitoring. 
The 12- and 4-parameter scores showed sensitivities of 100% and 82% (95%-CI 65%, 93%), specificities of 75% (95%-CI 70%, 80%) and 67% (95%-CI 61%, 73%), and areas under the receiver operating characteristic (ROC) curves of 0.84 (95%-CI 0.80, 0.88) and 0.81 (95%-CI 0.74, 0.87). Furthermore, properties of AF episodes and durations of ECG monitoring necessary to detect pAF were analysed. Conclusions: The prediction scores adequately detected pAF using variables readily available during routine cardiac assessment and echocardiography. The model scores, denoted as ECHO-AF scores, represent simple, highly sensitive and non-invasive tools for detecting pAF that can be easily implemented in the clinical practice and might serve as screening test to initiate further diagnostic investigations for validating the presence of pAF. Prospective validation of a novel prediction model for paroxysmal atrial fibrillation based on echocardiography and medical history parameters by long-term Holter ECG. abstract_id: PUBMED:31604670 Self-Reported Daytime Sleepiness and Sleep-Disordered Breathing in Patients With Atrial Fibrillation: SNOozE-AF. Background: Atrial fibrillation (AF) management guidelines recommend screening for symptoms of sleep-disordered breathing (SDB). We aimed to assess the role of self-reported daytime sleepiness in detection of patients with SDB and AF. Methods: A total of 442 consecutive ambulatory patients with AF who were considered candidates for rhythm control and underwent polysomnography comprised the study population. The utility of daytime sleepiness (quantified by the Epworth Sleepiness Scale [ESS]) to predict any (apnea-hypopnea index [AHI] ≥ 5), moderate-to-severe (AHI ≥ 15), and severe (AHI ≥ 30) SDB on polysomnography was tested. Results: Mean age was 60 ± 11 years and 69% patients were men. SDB was present in two-thirds of the population with 33% having moderate-to-severe SDB. Daytime sleepiness was low (median ESS = 8/24) and the ESS poorly predicted SDB, regardless of the degree of SDB tested (area under the curve: 0.48-0.56). Excessive daytime sleepiness (ESS ≥ 11) was present in 11.9% of the SDB population and had a negative predictive value of 43.1% and a positive predictive value of 67.5% to detect moderate-to-severe SDB. Male gender (odds ratio [OR]: 2.3, 95% confidence interval [CI]: 1.4-3.8, P = 0.001), obesity (OR: 3.5, 95% CI: 2.3-5.5, P < 0.001), diabetes (OR: 2.3, 95% CI: 1.2-4.4, P = 0.08), and stroke (OR: 4.6, 95% CI: 1.7-12.3, P = 0.002) were independently associated with an increased likelihood of moderate-to-severe SDB. Conclusions: In an ambulatory AF population, SDB was common but most patients reported low daytime sleepiness levels. Clinical features, rather than daytime sleepiness, were predictive of patients with moderate-to-severe SDB. Lack of excessive daytime sleepiness should not preclude patients from being investigated for the potential presence of concomitant SDB. abstract_id: PUBMED:31957653 Sleep-related symptoms in patients with mild stroke. Study Objectives: Treatable sleep-related conditions are frequent in stroke patients, although their prevalence across stroke types and ideal method for screening is not clear. The objectives of this study were to evaluate the prevalence of sleep disturbance across different stroke types and identify approaches to the collection of sleep-related measures in clinical practice. 
Methods: We performed an observational cohort study of 2,213 patients with ischemic stroke, intracerebral hemorrhage (ICH), subarachnoid hemorrhage (SAH), or transient ischemic attack seen in a cerebrovascular clinic from February 17, 2015 through July 5, 2017 who completed at least one of the following sleep-related questionnaires: Patient-Reported Outcomes Measurement Information System (PROMIS) sleep disturbance, Insomnia Severity Index (ISI), Sleep Apnea Probability Scale (SAPS), and sleep duration. Prevalence of abnormal scores was calculated using the following thresholds: PROMIS sleep disturbance ≥ 55, ISI ≥ 15, SAPS score ≥ 0.50, and sleep duration fewer than 6 or more than 9 hours. Sensitivity, specificity, and positive and negative predictive values of PROMIS sleep disturbance T-score ≥ 55 to identify patients with moderate-severe insomnia (ISI ≥ 15) were computed. Results: In the cohort, 28.6% of patients (624/2183) had PROMIS sleep disturbance score ≥ 55, 17.6% (142/808) had ISI ≥ 15, and 61.3% (761/1241) had a positive SAPS screen. The frequency of abnormal sleep scale scores was similar across time periods and stroke types. The sensitivity and specificity of PROMIS sleep disturbance T-score ≥ 55 to identify patients with ISI ≥ 15 were 0.89 (95% confidence interval 0.83-0.94) and 0.81 (95% confidence interval 0.78-0.84), respectively. Conclusions: The prevalence of sleep-related symptoms in patients with mild stroke is similar across stroke types and time periods after stroke. Potential approaches to screening for sleep disturbance in stroke patients are provided. abstract_id: PUBMED:34943449 A Prediction Model of Incident Cardiovascular Disease in Patients with Sleep-Disordered Breathing. (1) Purpose: This study proposes a method of prediction of cardiovascular diseases (CVDs) that can develop within ten years in patients with sleep-disordered breathing (SDB). (2) Methods: For the design and evaluation of the algorithm, the Sleep Heart Health Study (SHHS) data from the 3367 participants were divided into a training set, validation set, and test set in the ratio of 5:3:2. From the data during a baseline period when patients did not have any CVD, we extracted 18 electrocardiography (ECG) features based on signal processing methods, 30 ECG features based on artificial intelligence (AI), and ten clinical risk factors for CVD. We trained the model and evaluated it using CVD outcome results monitored during follow-up. The optimal feature vectors were selected through statistical analysis and support vector machine recursive feature elimination (SVM-RFE) of the extracted feature vectors. Features based on AI, a novel proposal from this study, showed excellent performance out of all selected feature vectors. In addition, new parameters based on AI were possibly meaningful predictors for CVD, when used in addition to the predictors for CVD that are already known. The selected features were used as inputs to the prediction model based on SVM for CVD, determining the development of CVD-free, coronary heart disease (CHD), heart failure (HF), or stroke within ten years. (3) Results: As a result, the respective recall and precision values were 82.9% and 87.5% for CVD-free; 71.9% and 63.8% for CVD; 57.2% and 55.4% for CHD; 52.6% and 40.8% for HF; 52.4% and 44.6% for stroke. The F1-score between CVD and CVD-free was 76.5%, and it was 59.1% in class four.
(4) Conclusion: In conclusion, our results confirm the excellence of the prediction model for CVD in patients with SDB and verify the possibility of prediction within ten years of the CVDs that may occur in patients with SDB. abstract_id: PUBMED:37970972 Predictors of self-care behaviors in individuals with heart failure in Brazil. Objective: To identify the predictors of self-care behaviors in individuals with heart failure. Method: A cross-sectional study including 405 patients with heart failure. Self-care behaviors were assessed by the Self-Care of Heart Failure Index. Sociodemographic and clinical characteristics were investigated as predictors of self-care maintenance, management and confidence through logistic regressions. Results: The predictors of self-care maintenance were number of children (p<0.01), left ventricular ejection fraction (p<0.01), positive feeling about disease (p=0.03), obesity (p=0.02) and dialytic chronic kidney disease (p<0.01). The predictors of self-care management were having married children (p<0.01) and sleep apnea (p<0.01). The predictors of self-care confidence were family income (p<0.01), number of hospitalizations in the previous 12 months (p=0.01), number of daily medication doses (p<0.01) and sedentarism (p<0.01). Conclusion: Some predictors related to the self-care behaviors were found, so some intensified education and social aid should be aimed at patients with these specific characteristics. abstract_id: PUBMED:27053028 Self-Reported Sleep Disordered Breathing as Risk Factor for Mortality in the Elderly. Background: This study aimed to examine the association between self-reported sleep disordered breathing (SDB) ("awaken short of breath or with a headache") and mortality in a large and ethnically diverse group of community-dwelling elderly people. Methods: A total of 1288 participants, 65 years and older, were examined longitudinally. Sleep problems were estimated using the Medical Outcomes Study Sleep Scale examining sleep disturbance, snoring, awaken short of breath or with a headache, sleep adequacy, and sleep somnolence. Cox regression analysis was used to examine the association between sleep problems and mortality. Age, gender, education, ethnicity, and body mass index were included as covariates. In further analyses we included hypertension, diabetes, heart disease, and stroke as additional covariates. Results: The participants were followed for up to 6 years (mean = 2.9, standard deviation = 1.1), and 239 (18.6%) participants died during the follow-up. In unadjusted models, SDB at the initial visit was associated with mortality (hazard ratio [HR] = 1.37; 95% confidence interval [CI] 1.21-1.55; P < .0001). After adjusting for all the covariates, the relationship between SDB and mortality remained significant (HR = 1.48; 95% CI 1.29-1.70; P < .0001). Participants with Caribbean-Hispanic ancestry have higher risk for mortality. Conclusions: Our results suggest that SDB is a risk factor for mortality in a large and ethnically diverse group of older adults, independent of demographic and clinical factors. Further research is needed to examine the underlying mechanisms of this association. abstract_id: PUBMED:15829165 Capnography screening for sleep apnea in patients with acute stroke. Sleep apnea syndrome (SAS) is a prominent clinical feature in acute stroke patients. Diagnosis is usually established by polysomnography or cardio-respiratory polygraphy (CRP). 
Both diagnostic procedures produce high costs, are dependent on the access to a specialized sleep laboratory, and are poorly tolerated by patients with acute stroke. In this study we therefore investigated whether capnography may work as a simple screening tool in this context. In addition to conventional CRP, 27 patients with acute stroke were studied with capnography provided by our standard monitoring system. The trend graphs of the end-tidal CO(2) values (EtCO(2)) were used to determine the capnography-based estimate of the apnea-hypopnea index (AHI(CO2)). Index events were scored when the EtCO(2) value dropped for > 50% of the previous baseline value. We found that the AHI(CO2) correlated significantly with the apnea-hypopnea index measured with conventional CRP (AHI(CRP)) (r = 0.94; p < 0.001). An AHI(CO2) > 5 turned out to be highly predictive of an AHI(CRP) > 10. According to our findings, routinely acquired capnography may provide a reliable estimate of the AHI(CRP). The equipment needed for this screening procedure is provided by the monitoring systems of most intensive care units and stroke units where stroke patients are regularly treated during the first days of their illness. Therefore, early diagnosis of SAS in these patients is made substantially easier. abstract_id: PUBMED:30482619 Sleep apnea screening is uncommon after stroke. Objective/background: To assess (1) pre and post-stroke screening for sleep apnea (SA) within a population-based study without an academic medical center, and (2) ethnic differences in post-stroke sleep apnea screening among Mexican Americans (MAs) and non-Hispanic whites (NHWs). Patients/methods: MAs and NHWs with stroke in the Brain Attack Surveillance in Corpus Christi project (2011-2015) were interviewed shortly after stroke about the pre-stroke period, and again at approximately 90 days after stroke in reference to the post-stroke period. Questions included whether any clinical provider directly asked about snoring or daytime sleepiness or had offered polysomnography. Logistic regression tested the association between these outcomes and ethnicity both unadjusted and adjusted for potential confounders. Results: Among 981 participants, 63% were MA. MAs in comparison to NHWs were younger, had a higher prevalence of hypertension, diabetes, and never smoking, a higher body mass index, and a lower prevalence of atrial fibrillation. Only 17% reported having been offered SA diagnostic testing pre-stroke, without a difference by ethnicity. In the post-stroke period, only 50 (5%) participants reported being directly queried about snoring; 86 (9%) reported being directly queried about sleepiness; and 55 (6%) reported having been offered polysomnography. No ethnic differences were found for these three outcomes, in unadjusted or adjusted analyses. Conclusions: Screening for classic symptoms of SA, and formal testing for SA, are rare within the first 90 days after stroke, for both MAs and NHWs. Provider education is needed to raise awareness that SA affects most patients after stroke and is associated with poor outcomes. abstract_id: PUBMED:27448475 A screening tool for obstructive sleep apnea in cerebrovascular patients. Background: A majority of stroke patients suffer from obstructive sleep apnea (OSA), which can go unrecognized as the current OSA screens do not perform well in stroke patients. The objective of this study is to modify the existing OSA screening tools for use in stroke patients. 
Methods: The cohort study consisted of patients who completed the validated OSA STOP screen and underwent polysomnography within one year. Six prediction models were created and sensitivity and specificity of various cut points were calculated. Results: There were 208 patients with mean age of 55.4 years; 61.0% had sleep apnea. Models with the highest c-statistics included the STOP items plus BMI, age, and sex (STOP-BAG). Addition of neck circumference and other variables did not significantly improve the models. The STOP-BAG2 model, using continuous variables, had a greater sensitivity of 0.94 (95% CI 0.89-0.98) and specificity 0.60 (95% CI 0.49-0.71) compared to the STOP-BAG model, which used dichotomous variables, and had a sensitivity of 0.91 (95% CI 0.85-0.96) and specificity of 0.48 (95% CI 0.37-0.60). Conclusions: The STOP-BAG screen can be used to identify cerebrovascular patients at an increased risk of OSA. The use of continuous variables (STOP-BAG2) is preferable if automated score calculation is available. It can improve the efficiency of evaluation for OSA and lead to improved outcomes of patients with cerebrovascular disease. Answer: Yes, a prediction model combining self-reported symptoms, sociodemographic, and clinical features can serve as a reliable first screening method for sleep apnea syndrome (SAS) in patients with stroke. A retrospective study conducted in a rehabilitation center on a consecutive sample of patients with stroke (N=620) found that a prediction model consisting of variables such as sex, age, body mass index, and self-reported apneas and falling asleep during daytime showed acceptable diagnostic accuracy for a high likelihood of SAS. The model's diagnostic accuracy was .76, and depending on the probability cutoff used, it could achieve a sensitivity and specificity of up to 95% and 97%, respectively. This suggests that such a model can be a reasonable first screening method in a stepped diagnostic approach to SAS in stroke rehabilitation (PUBMED:24378806). Moreover, other studies have also indicated the potential for prediction models in identifying sleep-related conditions in patients with stroke. For instance, a study on mild stroke patients found that sleep-related symptoms were prevalent across different stroke types and time periods after stroke, suggesting that screening for sleep disturbance could be integrated into clinical practice (PUBMED:31957653). Additionally, capnography has been proposed as a simple screening tool for SAS in patients with acute stroke, showing a significant correlation with the apnea-hypopnea index measured with conventional cardio-respiratory polygraphy (PUBMED:15829165). However, it is important to note that while prediction models can be useful, they are not without limitations. For example, a study on atrial fibrillation patients found that self-reported daytime sleepiness was a poor predictor of sleep-disordered breathing (SDB), and clinical features were more predictive of moderate-to-severe SDB (PUBMED:31604670). This suggests that while self-reported symptoms can be part of a prediction model, they should be combined with other clinical and sociodemographic factors for a more accurate assessment. In conclusion, a prediction model that combines self-reported symptoms with sociodemographic and clinical features can be an effective first screening method for SAS in stroke patients, but it should be used as part of a comprehensive diagnostic approach that may include further testing and clinical evaluation.
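As a rough illustration of the screening logic summarised above, the Python sketch below shows how moving the probability cutoff of a logistic-style screening model trades sensitivity against specificity and positive predictive value. The data, risk score, and model are hypothetical stand-ins rather than the published model from PUBMED:24378806; only the cutoffs (0.1, 0.3, 0.6) echo the abstract.

# Minimal sketch of the cutoff trade-off behind a stepped screening approach.
# The cohort size, risk score, and prevalence below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 438  # roughly the size of the cited stroke cohort, used only for flavour

# Hypothetical linear risk score (standing in for age, BMI, reported apneas, etc.)
# and true high-likelihood-of-SAS status generated from it (roughly one-third prevalence).
risk_score = rng.normal(size=n)
prob_true = 1.0 / (1.0 + np.exp(-(risk_score - 0.8)))
has_sas = rng.random(n) < prob_true

# Predicted probability from an imperfect screening model: the true score plus noise.
predicted = 1.0 / (1.0 + np.exp(-(risk_score + rng.normal(scale=1.0, size=n) - 0.8)))

for cutoff in (0.1, 0.3, 0.6):
    flagged = predicted >= cutoff
    tp = np.sum(flagged & has_sas)
    fn = np.sum(~flagged & has_sas)
    tn = np.sum(~flagged & ~has_sas)
    fp = np.sum(flagged & ~has_sas)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    print(f"cutoff {cutoff:.1f}: sensitivity {sensitivity:.0%}, "
          f"specificity {specificity:.0%}, PPV {ppv:.0%}")

A low cutoff keeps nearly every likely SAS case in the pathway to confirmatory oximetry at the cost of many unnecessary referrals, while a high cutoff does the reverse; this is the trade-off that a stepped diagnostic approach exploits.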
Instruction: Is anastomotic biopsy necessary before radiotherapy after radical prostatectomy? Abstracts: abstract_id: PUBMED:11435834 Is anastomotic biopsy necessary before radiotherapy after radical prostatectomy? Purpose: External beam radiotherapy may be given after radical prostatectomy as adjuvant (immediate) or therapeutic (delayed) treatment, the latter in response to evidence of disease recurrence. In patients receiving delayed radiotherapy the necessity of a positive anastomotic biopsy before treatment remains unclear. We determined whether a positive anastomotic biopsy predicted the response to radiation in this setting. Materials And Methods: We reviewed the records of 67 patients who received radiotherapy for biochemical or biopsy proved recurrent prostate cancer after radical prostatectomy. Patients underwent surgery at our institution or its affiliated hospitals, or were referred to our institution for radiotherapy. All patients had a negative metastatic evaluation before receiving radiotherapy. Biochemical failure after radiotherapy was defined as serum prostate specific antigen (PSA) 0.2 ng./ml. or greater on 2 or more consecutive occasions. Biochemical recurrence-free survival was calculated using the Kaplan-Meier method. Independent predictors of PSA failure after radiotherapy were identified using the multivariate Cox proportional hazards model. Results: Of the 67 patients evaluated, 33 and 34 received radiotherapy for biochemical failure and biopsy proved local recurrence, respectively. The 3-year recurrence-free survival rate was 49% in patients treated for biochemical failure and 39% in those with biopsy proved local recurrence. There was no significant difference in PSA-free survival in these 2 groups. Only pre-radiotherapy PSA 1 ng./ml. or greater (p = 0.02) and seminal vesicle invasion (p = 0.02) were significant independent predictors of biochemical failure. Conclusions: A positive anastomotic biopsy did not predict an improved outcome after radiotherapy following radical prostatectomy. Anastomotic biopsy was associated with a longer time to salvage radiotherapy. However, this delay did not translate into worse disease-free outcomes in patients who underwent anastomotic biopsy. High pre-radiotherapy PSA greater than 1 ng./ml. was the most significant predictor of biochemical failure after therapeutic radiotherapy. Decisions regarding local radiation therapy after radical prostatectomy may be made without documenting recurrent local disease. abstract_id: PUBMED:26963663 Anastomotic complications after robot-assisted laparoscopic and open radical prostatectomy. Objective Anastomotic complications are well known after radical prostatectomy (RP). The vesicourethral anastomotic technique is handled differently between open and robotic RP. The aim of the study was to investigate whether the frequency of anastomotic leakages and strictures differed between patients undergoing retropubic radical prostatectomy (RRP) and robot-assisted radical prostatectomy (RARP) and to identify risk factors associated with these complications. Materials and methods The study included 735 consecutive patients who underwent RRP (n = 499) or RARP (n = 236) at the Department of Urology, Rigshospitalet, Denmark, in a complete 3-year period from 2010 to 2012. Univariate and multivariate logistic regression analysis was used to analyse associations between surgical procedure (RRP vs RARP) and anastomotic complications.
Analyses included age, smoking status, diabetes, hypertension, surgeon, prostate volume and anastomotic leakage as variables. Owing to a low number of events, multivariable analyses only included smoking status, diabetes and prostate volume for anastomotic leakage, and age, smoking status, prostate volume and anastomotic leakage for anastomotic strictures. Results The frequency of anastomotic leakage was 2.9%. Anastomotic stricture was seen in 4.9% of patients during follow-up. No differences were found in the frequency of anastomotic leakage (p = 0.35) or strictures (p = 0.35) between RRP and RARP. Univariate analysis demonstrated an association between surgeon and the risk of anastomotic strictures in RRP patients (p = 0.02). No other independent risk factors were identified. Conclusion Overall, the anastomotic complication rate in this cohort is similar to other published reports. No obvious risk factors for anastomotic complications could be identified, which in part was due to the low number of events. abstract_id: PUBMED:21392853 Vesicourethral anastomotic stricture following radical prostatectomy with or without postoperative radiotherapy Objective: To know the incidence of vesicourethral anastomotic stricture in patients with prostate cancer treated with radical prostatectomy. Our secondary aim was to verify if postoperative radiotherapy increases the risk of presenting anastomotic stricture. Materials And Methods: We retrospectively checked the clinical records of patients that had undergone radical prostatectomy as their primary treatment between January 2000 and December 2008, with a minimum clinical follow-up of 12 months. Of the total patients, 258 met the foregoing requirements. Of them, 25 (9.6%) received postoperative radiotherapy, 12 (48%) received adjuvant radiotherapy and 13 (52%) received salvage radiotherapy. The mean age of the patients that received radiotherapy was 64 (46-77) years. The mean pre-radiotherapy PSA was 2.3 (0.04-26.1) ng/ ml. The mean time between surgery and radiotherapy was 17.4 (3-72) months. The mean dosage administered was 68 (58-70) Gy. The mean follow-up was 50.5 (15-177) months. Results: Of 25 prostatectomized patients that received radiotherapy, four (16%) developed vesicourethral anastomotic stricture. The mean time from the completion of the radiotherapy until the appearance of the stricture was 4 months (1-22). On the other hand, 36 (15.4%) of the prostatectomized patients that did not receive postoperative radiotherapy presented the same complication. Comparatively, we did not note significant differences between both groups (p=0.599). Conclusions: In our retrospective review, postoperative radiotherapy did not significantly increase the incidence of vesicourethral anastomotic stricture. abstract_id: PUBMED:22318180 Radical prostatectomy after radiotherapy Radical prostatectomy is an excellent salvage method for patients with prostatic cancer when radical radiotherapy or brachytherapy fail. To define local failure is not always reliable; nevertheless, performing a prostatic biopsy two years after treatment could reach an early diagnosis. Another accepted attitude is to perform the biopsy after biochemical recurrence, but sometimes the pathological stage is already locally advanced tumor. 
It is also difficult to determine which patients are suitable for this rescue treatment, probably those with locally confined tumors and favorable PSA kinetics (PSA velocity below 2.0 or a PSA doubling time over 12 months), and in whom detectable PSA is reached 2 years after treatment. These patients are suitable for radical prostatectomy if they have a life expectancy of more than 10 years. Although rescue radical prostatectomy has a higher rate of complications and worse functional results, cancer-specific survival rates are high, and remain high after 15 years of follow-up. Currently, new surgical improvements and new radiotherapy technology are diminishing surgical complications and improving functional results. In summary, radical prostatectomy is a feasible rescue procedure after radiotherapy failure, although the complication rate remains higher than for prostatectomy as initial therapy. abstract_id: PUBMED:33655148 Use of Disposable Punch Biopsy Device to Add Foley Catheter Fenestration to Improve Drainage of Post Radical Prostatectomy Anastomotic Leak. Context: Radical prostatectomy (RP) is a major oncologic urological surgery that can have high morbidity if complications arise. Bladder-urethral urine anastomotic leaks (AL) are one of the most common complications and can greatly increase morbidity. To date, there are few resources to manage AL. One management technique is using a Foley catheter with an additional auxiliary drainage port, also known as a fenestrated catheter. This type of auxiliary drainage port allows a low-pressure drainage source that is located near the anastomosis to increase urine drainage from the catheter rather than from the AL site. The optimal size and location of this additional drainage port is currently unknown. This experiment evaluated the optimal auxiliary drainage port size and an inexpensive technique to easily construct such a catheter. Methods: Utilizing different-size punch biopsies, auxiliary drainage ports were placed in different-size Foley catheters, and drainage rates and the structural integrity of the catheter were assessed. Results: A 3.0 mm punch biopsy located 1.0 cm proximal to the Foley balloon in an 18 French (Fr) catheter was determined to be the optimal size. A 2.0 mm punch biopsy provided significantly less drainage. The 4.0 mm punch biopsy compromised the structural integrity of the catheter. Conclusions: Based on these experimental results, we recommend using a 3.0 mm punch biopsy in an 18 Fr catheter 1.0 cm proximal to the balloon for an auxiliary drain site in a Foley catheter when the anastomosis is not watertight or the surgeon has reason to believe the patient is at higher risk for an AL. Factors such as history of pelvic radiation, abnormal anatomy, large prostate, post-surgical hematoma formation, obesity, previous prostatic surgery, difficult anastomosis, blood loss and postoperative urinary tract infection may make use of this type of device more attractive. abstract_id: PUBMED:26515118 Surgical approach to vesicourethral anastomotic stricture following radical prostatectomy. Introduction: Vesicourethral anastomotic stricture following prostatectomy is uncommon but represents a challenge for reconstructive surgery and has a significant impact on quality of life. The aim of this study was to relate our experience in managing vesicourethral anastomotic strictures and present the treatment algorithm used in our institution.
Patients And Methods: We performed a descriptive, retrospective study in which we assessed the medical records of 45 patients with a diagnosis of vesicourethral anastomotic stricture following radical prostatectomy. The patients were treated in the same healthcare centre between January 2002 and March 2015. Six patients were excluded for meeting the exclusion criteria. The stricture was assessed using cystoscopy and urethrocystography. The patients with patent urethral lumens were initially treated with minimally invasive procedures. Open surgery was indicated for the presence of urethral lumen obliteration or when faced with failure of endoscopic treatment. Urinary continence following the prostatectomy was determinant in selecting the surgical approach (abdominal or perineal). Results: Thirty-nine patients treated for vesicourethral anastomotic stricture were recorded. The mean age was 64.4 years, and the mean follow-up was 40.3 months. Thirty-three patients were initially treated endoscopically. Seventy-five percent progressed free of restenosis following 1 to 4 procedures. Twelve patients underwent open surgery, 6 initially due to obliterative stricture and 6 after endoscopic failure. All patients progressed favourably after a mean follow-up of 29.7 months. Conclusions: Endoscopic surgery is the initial treatment option for patients with vesicourethral anastomotic strictures with patent urethral lumens. Open reanastomosis is warranted when faced with recalcitrant or initially obliterative strictures and provides good results. abstract_id: PUBMED:30514243 The transverse and vertical distribution of prostate cancer in biopsy and radical prostatectomy specimens. Background: Prostate biopsy is the most common method for the diagnosis of prostate cancer and the basis for further treatment. Confirmation using radical prostatectomy specimens is the most reliable method for verifying the accuracy of template-guided transperineal prostate biopsy. The study aimed to reveal the spatial distribution of prostate cancer in template-guided transperineal saturation biopsy and radical prostatectomy specimens. Methods: Between December 2012 and December 2016, 171 patients were diagnosed with prostate cancer via template-guided transperineal prostate biopsy and subsequently underwent laparoscopic radical prostatectomy. The spatial distributions of prostate cancer were analyzed and the consistency of the tumor distribution between biopsy and radical prostatectomy specimens was compared. Results: The positive rate of biopsy in the apex region was significantly higher than that of the other biopsy regions (43% vs 28%, P < 0.01). In radical prostatectomy specimens, the positive rate was highest at the region 0.9-1.3 cm above the apex, and it had a tendency to decrease towards the base. There was a significant difference in the positive rate between the cephalic and caudal half of the prostate (68% vs 99%, P < 0.01). There were no significant differences between the anterior and posterior zones for either biopsy or radical prostatectomy specimens. Conclusion: The tumor spatial distribution generated by template-guided transperineal prostate biopsy was consistent with that of radical prostatectomy specimens in general. The positive rate was consistent between anterior and posterior zones. The caudal half of the prostate, especially the vicinity of the apex, was the most frequent site of the tumor.
abstract_id: PUBMED:32594901 Can we define reliable risk factors for anastomotic strictures following radical prostatectomy? Background: To identify risk factors for anastomotic strictures in patients after radical prostatectomy. Methods: In all, 140 prostate cancer patients with one or more postoperative anastomotic strictures after radical prostatectomy were included. All patients underwent transurethral anastomotic resection at the University Hospital of Munich between January 2009 and May 2016. Clinical data and follow-up information were retrieved from patients' records. Statistical analysis was done using Kaplan-Meier curves and log rank-test with time to first transurethral anastomotic resection as endpoint, Chi-square-test, and Mann-Whitney-U test. Results: In all, 140 patients with a median age of 67 years (IQR: 61-71 years) underwent radical prostatectomy. Median age at time of transurethral anastomotic resection was 68 years (IQR: 62-72). Patients needed 2 surgical interventions in median (range: 1-15). Median time from radical prostatectomy to transurethral anastomotic resection was 6 months (IQR: 3.9-17.4). Median duration of catheterization after radical prostatectomy was 10 days (IQR: 8-13). In all, 26% (36/140) received additional radiotherapy. Regarding time to first transurethral anastomotic resection, age and longer duration of catheterization after radical prostatectomy with a cutoff of 7 days showed no statistically significant differences (p = 0.392 and p = 0.141, respectively). Tumor stage was no predictor for development of anastomotic strictures (p = 0.892), and neither was prior adjuvant radiation (p = 0.162). Potential risk factors were compared between patients with up to 2 strictures (low-risk) and patients developing > 2 strictures (high-risk): high-risk patients had more often injection of cortisone during surgery (14% vs 0%, p < 0.001) and more frequently advanced tumor stage pT > 2 (54% vs 38%, p = 0.055), respectively. Other risk factors did not show any significant difference compared to number of prior transurethral anastomotic strictures. Conclusions: We could not identify a reliable risk factor to predict development of anastomotic strictures following radical prostatectomy. abstract_id: PUBMED:18645267 Value of immediate anastomotic biopsy following biochemical failure after radical prostatectomy. Aims: To study the value, in diagnostic terms, of performing transrectal ultrasound (TRUS)-guided anastomotic biopsy immediately following the diagnosis of biochemical failure in patients treated by radical retropubic prostatectomy. Methods: We report on 50 sessions of TRUS-guided biopsy obtained during post-radical retropubic prostatectomy follow-up, immediately after the diagnosis of biochemical failure. No patient had received either adjuvant or further treatment due to biochemical failure status prior to the biopsy session. In each case, tissue sampling involved cores taken by a standard protocol (random) as well as TRUS-guided biopsy to sonographically suspicious areas. Statistical analysis focused on identifying the statistical importance of various pre- and post-treatment variables in predicting biopsy outcome. Results: 10/50 cases with local evidence of malignancy (1 case harboring both benign and malignant glands) were identified and in another 10/50 cases benign prostatic tissue was diagnosed. The remaining (30) cases have shown fibromuscular tissue only. 
Statistical analysis of various parameters has demonstrated a significant correlation between local evidence of cancer and the finding of sonographically remarkable areas. Conclusion: Performing TRUS-guided anastomotic biopsy close to the onset of biochemical failure can frequently identify but also may easily miss the suspected local presence of prostate tissue. We believe that, with minimal risks of morbidity, there is merit in exploring its possibilities to trace local prostatic tissue, since it may provide useful information for further treatment actions also in an earlier time. abstract_id: PUBMED:23490232 Accuracy of post-radiotherapy biopsy before salvage radical prostatectomy. Objective: To determine whether post-radiotherapy (RT) biopsy (PRB) adequately predicts the presence, location, and histological features of cancer in the salvage radical prostatectomy (SRP) specimen. Before salvage treatment, a PRB is required to confirm the presence of locally recurrent or persistent cancer and to determine the extent and location of the prostate cancer. Patients And Methods: SRP was performed between 1998 and 2011 on 198 patients. All patients underwent a PRB. PRB and SRP specimens were evaluated by a genitourinary pathologist. Patients had external-beam RT alone (EBRT; 71%) or brachytherapy with or without EBRT (29%). Results: Of the men undergoing SRP, 26 (14%) were clinical stage ≥T3, with 13% of PRBs with Gleason score ≥8. Cancer was unilateral in 120 (61%) biopsies, with contralateral or bilateral prostate cancer at SRP in 49%. In the SRP specimen, cancer was multifocal in 57%. Cancer was upgraded at SRP in 58% of men, with 20% having an increase in primary Gleason grade. The accuracy of PRB varied by region from 62% to 76%, with undetected cancers ranging from 12% to 26% and most likely to occur at the mid-gland. Conclusions: Radiation-recurrent prostate cancers were often multifocal, and biopsy missed up to 20% of tumours. More than half of the cancers were upgraded at SRP, and many that were unilateral on PRB were bilateral at SRP. Answer: Anastomotic biopsy before radiotherapy after radical prostatectomy is not necessarily required for all patients. A study reviewed the records of 67 patients who received radiotherapy for biochemical or biopsy-proven recurrent prostate cancer after radical prostatectomy and found that a positive anastomotic biopsy did not predict an improved outcome after radiotherapy. The study concluded that decisions regarding local radiation therapy after radical prostatectomy may be made without documenting recurrent local disease, as high pre-radiotherapy PSA greater than 1 ng./ml. was the most significant predictor of biochemical failure after therapeutic radiotherapy (PUBMED:11435834). Additionally, another study aimed to determine whether post-radiotherapy biopsy adequately predicts the presence, location, and histological features of cancer in the salvage radical prostatectomy specimen. The study found that radiation-recurrent prostate cancers were often multifocal, and biopsy missed up to 20% of tumors. More than half of the cancers were upgraded at salvage radical prostatectomy, and many that were unilateral on post-radiotherapy biopsy were bilateral at salvage radical prostatectomy. This suggests that the accuracy of post-radiotherapy biopsy varied and that it may not always be a reliable indicator of the extent or presence of recurrent cancer (PUBMED:23490232). 
Therefore, based on these findings, anastomotic biopsy may not be necessary before radiotherapy after radical prostatectomy, especially considering that it did not predict the response to radiation in the setting of biochemical failure or biopsy-proven local recurrence (PUBMED:11435834). Instead, other factors such as pre-radiotherapy PSA levels may be more significant predictors of treatment outcomes.
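As an illustrative aside: the risk-factor study above (PUBMED:32594901) compares time to first transurethral anastomotic resection between groups using Kaplan-Meier curves and a log-rank test. The following minimal Python sketch shows that type of analysis on synthetic data using the lifelines library; the group labels, sample sizes and distributions are hypothetical and are not taken from the study.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical follow-up times (months) to first re-resection and event indicators
t_low = rng.exponential(scale=12, size=60)   # "low-risk" group: longer times
t_high = rng.exponential(scale=6, size=40)   # "high-risk" group: shorter times
e_low = rng.binomial(1, 0.8, size=60)        # 1 = event observed, 0 = censored
e_high = rng.binomial(1, 0.8, size=40)

# Kaplan-Meier estimate for one group
km = KaplanMeierFitter()
km.fit(t_low, event_observed=e_low, label="low risk")
print("median time to event (low risk):", km.median_survival_time_)

# Log-rank test for a difference in the time-to-event distributions
res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print("log-rank p-value:", res.p_value)
```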
Instruction: Chronic fatigue syndrome in adolescents: do parental expectations of their child's intellectual ability match the child's ability? Abstracts: abstract_id: PUBMED:19616144 Chronic fatigue syndrome in adolescents: do parental expectations of their child's intellectual ability match the child's ability? Objective: This cross-sectional study aimed to measure the discrepancy between actual and perceived IQ in a sample of adolescents with CFS compared to healthy controls. We hypothesized that adolescents with CFS and their parent would have higher expectations of the adolescent's intellectual ability than healthy adolescents and their parent. Methods: The sample was 28 CFS patients and 29 healthy controls aged 11-19 years and the parent of each participant. IQ was assessed using the AH4 group test of general intelligence and a self-rating scale which measured perceived IQ. Results: Parents' perceptions of their children's IQ were significantly higher for individuals with CFS than healthy controls. Conclusions: High expectations may need to be addressed within the context of treatment. abstract_id: PUBMED:38201629 From a Clustering of Adverse Symptoms after Colorectal Cancer Therapy to Chronic Fatigue and Low Ability to Work: A Cohort Study Analysis with 3 Months of Follow-Up. In colorectal cancer (CRC) patients, apart from fatigue, psychological and physical symptoms often converge, affecting their quality of life and ability to work. Our objective was to ascertain symptom clusters within a year following CRC treatment and their longitudinal association with persistent fatigue and reduced work ability at the 3-month follow-up. We used data from MIRANDA, a multicenter cohort study enrolling adult CRC patients who are starting a 3-week in-patient rehabilitation within a year post-curative CRC treatment. Participants completed questionnaires evaluating symptoms at the start of rehabilitation (baseline) and after three months. We performed an exploratory factor analysis to analyze the clustering of symptoms at baseline. Longitudinal analysis was performed using a multivariable linear regression model with dichotomized symptoms at baseline as independent variables, and the change in fatigue and ability to work from baseline to 3-month-follow-up as separate outcomes, adjusted for covariates. We identified six symptom clusters: fatigue, gastrointestinal symptoms, pain, psychosocial symptoms, urinary symptoms, and chemotherapy side effects. At least one symptom from each factor was associated with higher fatigue or reduced ability to work at the 3-month follow-up. This study highlights the interplay of multiple symptoms in influencing fatigue and work ability among CRC patients post-rehabilitation. abstract_id: PUBMED:26983007 Child abuse and physical health in adulthood. Background: A large literature exists on the association between child abuse and mental health, but less is known about associations with physical health. The study objective was to determine if several types of child abuse were related to an increased likelihood of negative physical health outcomes in a nationally representative sample of Canadian adults. Data And Methods: Data are from the 2012 Canadian Community Health Survey-Mental Health (n = 23,395). The study sample was representative of the Canadian population aged 18 or older. 
Child physical abuse, sexual abuse, and exposure to intimate partner violence were assessed in relation to self-perceived general health and 13 self-reported, physician-diagnosed physical conditions. Results: All child abuse types were associated with having a physical condition (odds ratios = 1.4 to 2.0) and increased odds of obesity (odds ratios = 1.2 to 1.4). Abuse in childhood was associated with arthritis, back problems, high blood pressure, migraine headaches, chronic bronchitis/emphysema/COPD, cancer, stroke, bowel disease, and chronic fatigue syndrome in adulthood, even when sociodemographic characteristics, smoking, and obesity were taken into account (odds ratios = 1.1 to 2.6). Child abuse remained significantly associated with back problems, migraine headaches, and bowel disease when further adjusting for mental conditions and other physical conditions (odds ratios = 1.2 to 1.5). Sex was a significant moderator between child abuse and back problems, chronic bronchitis/emphysema/COPD, cancer, and chronic fatigue syndrome, with slightly stronger effects for women than men. Interpretation: Abuse in childhood was associated with increased odds of having 9 of the 13 physical conditions assessed in this study and reduced self-perceived general health in adulthood. Awareness of associations between child abuse and physical conditions is important in the provision of health care. abstract_id: PUBMED:15684437 Development of a functional ability scale for children and young people with myalgic encephalopathy (ME)/chronic fatigue syndrome (CFS). The numerous symptoms and unpredictable pattern of myalgic encephalopathy (ME) make it difficult to describe, especially for children. It was left to carers to guess what the child could achieve each day, often leading to over/underestimates. A functional ability scale was needed, which measured from 0 to 100 percent able and that children and young people themselves designed. A new scale was developed from the Moss Ability Scale using the critique of 251 children and young people from the Association of Young People with ME (AYME). Responding to the shift in emphasis towards patients taking an active role in their own care, it was felt these young people would know whether the scale measured what it had set out to measure, and were asked questions on the face and content validity of the scale. There was a 99 percent agreement between the young people that the final scale was 'workable' or better. abstract_id: PUBMED:34499948 The importance of child abuse and neglect in adult medicine. The risk for adverse consequences and disease due to the trauma of child abuse or neglect is easily assessed using the self-administered modified ACEs questionnaire. Exposure to child maltreatment is endemic and common. At least one out of every ten USA adults has a significant history of childhood maltreatment. This is a review of the literature documenting that a past history of childhood abuse and neglect (CAN) makes substantial contributions to physical disease in adults, including asthma, chronic obstructive pulmonary disease, lung cancer, hypertension, stroke, kidney disease, hepatitis, obesity, diabetes, coronary artery disease, pelvic pain, endometriosis, chronic fatigue syndrome, irritable bowel syndrome, fibromyalgia, and auto immune diseases. Adults who have experienced child maltreatment have a shortened life expectancy. 
The contribution of CAN trauma to these many pathologies remains largely underappreciated and neglected compared to the attention given to the array of mental illnesses associated with child maltreatment. Specific pathophysiolologic pathways have yet to be defined. Clinical recognition of the impact of past CAN trauma will contribute to the healing process in any disease but identifying specific effective therapies based on this insight remains to be accomplished. Recommendations are made for managing these patients in the clinic. It is important to incorporate screening for CAN throughout adult medical practice now. abstract_id: PUBMED:24022814 The prospective association between childhood cognitive ability and somatic symptoms and syndromes in adulthood: the 1958 British birth cohort. Background: Cognitive ability is negatively associated with functional somatic symptoms (FSS) in childhood. Lower childhood cognitive ability might also predict FSS and functional somatic syndromes in adulthood. However, it is unknown whether this association would be modified by subjective and objective measures of parental academic expectations. Methods: 14 068 participants from the 1958 British birth cohort, whose cognitive ability was assessed at 11 years. Outcomes were somatic symptoms at 23, 33 and 42 years. Self-reported irritable bowel syndrome (IBS), chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME) and operationally defined CFS-like illness were measured at 42 years. Results: Lower cognitive ability at age 11 years was associated with somatic symptoms at ages 23, 33 and 42 years. Adjusting for sex, childhood internalising problems, previous somatic symptoms and concurrent psychological symptoms, childhood cognitive ability remained negatively associated with somatic symptoms at age 23 years (β=-0.060, 95% CI -0.081 to -0.039, p<0.01), 33 years (β = -0.031, 95% CI -0.050 to -0.011, p<0.01), but not with somatic symptoms at 42 years. Overall, we found no clear association between lower childhood cognitive ability and CFS/ME, CFS-like illness and IBS. Associations between cognitive ability and somatic symptoms at 23 years were moderated by low parental social class, but not by subjective indicators of parental academic expectations. Conclusions: Lower childhood cognitive ability predicted somatic symptoms, but not CFS/ME, CFS-like illness and IBS in adulthood. While earlier research indicated an important role for high parental academic expectations in the development of early-life FSS, these expectations do not seem relevant for somatic symptoms or functional somatic syndromes in later adulthood. abstract_id: PUBMED:16740850 Mirrored symptoms in mother and child with chronic fatigue syndrome. Objective: Our aim with this study was to assess the relation between chronic fatigue syndrome in adolescents and fatigue and associated symptoms in their fathers and mothers, more specifically the presence of chronic fatigue syndrome-like symptoms and psychologic distress. Method: In this cross-sectional study, 40 adolescents with chronic fatigue syndrome according to the Centers for Disease Control and Prevention criteria were compared with 36 healthy control subjects and their respective parents. Questionnaires regarding fatigue (Checklist Individual Strength), fatigue-associated symptoms, and psychopathology (Symptom Checklist-90) were applied to the children and their parents. 
Results: Psychologic distress in the mother corresponds with an adjusted odds ratio of 5.6 for the presence of CFS in the child. The presence of fatigue in the mother and dimensional assessment of fatigue with the Checklist Individual Strength revealed odds ratios of, respectively, 5.29 and 2.86 for the presence of chronic fatigue syndrome in the child. An increase of 1 SD in the hours spent by the working mother outside the home reduced the risk for chronic fatigue syndrome in her child by 61%. The fathers did not show any risk indicator for chronic fatigue syndrome in their child. Conclusions: Mothers of adolescents with chronic fatigue syndrome exhibit fatigue and psychologic symptoms similar to their child, in contrast with the fathers. The striking difference between the absent association in fathers and the evident association in mothers suggests that the shared symptom complex of mother and child is the result of an interplay between genetic vulnerability and environmental factors. abstract_id: PUBMED:33892206 Anticipation of and response to exercise in adolescents with CFS: An experimental study. Background: Using a laboratory-based exercise task, this study investigated objective exercise performance as well as expectations, anxiety and perceived task performance ratings in adolescents with CFS compared to healthy controls and illness controls. Method: Trials of a sit-stand exercise task (SST) were undertaken (CFS: n = 61, asthma (AS): n = 31, healthy adolescents (HC): n = 78). Adolescents rated their expectations, pre- and post-task anxiety, and perceived task difficulty. Their parents independently rated their performance expectations of their child. Results: The CFS group took significantly longer to complete the SST than the AS group (MD 3.71, 95% CI [2.41, 5.01] p < .001) and HC (MD 3.61, 95% CI [2.41, 4.81], p < .001). Adolescents with CFS had lower expectations for their performance on the exercise task than AS participants (MD -11.79, 95% CI [-22.17, -1.42] p = .022) and HC (MD -15.08, 95% CI [-23.01, -7.14] p < .001). They rated their perceived exertion as significantly greater than AS (MD 3.04, 95% CI [1.86, 4.21] p < .001) and HC (MD 2.98, 95% CI [1.99, 3.98], p < .001). The CFS group reported greater anxiety pre-task than AS (MD 14.11, 95% CI [5.57, 22.65] p < .001) and HC (MD 11.19, 95% CI [2.64, 19.75], p = .007). Parental group differences showed similar patterns to the adolescents'. Conclusions: Lower expectations and greater anxiety regarding exercise may reflect learning from previous difficult experiences, which could impact future exercise performance. Further examination of pre-exercise expectations and post-exercise appraisals could improve our understanding of the mechanisms by which fatigue is maintained. abstract_id: PUBMED:10451339 The effects on siblings in families with a child with chronic fatigue syndrome. Paediatric CFS/ME is a stressor, which affects not only the sufferer but also the whole family. The sibling bond exerts a great influence on all the children in the family. Healthy siblings are often overlooked as attention is focused on the child with CFS/ME or other chronic illness. Individual children react in different ways to serious illness in another sibling by adopting a variety of coping mechanisms. There is a need for health and education professionals to consider the whole family when caring for and working with a child with CFS/ME.
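Several of these abstracts report adjusted odds ratios (for example, the adjusted odds ratio of 5.6 for maternal psychologic distress in PUBMED:16740850). As a hedged illustration of how such estimates are usually obtained, the sketch below fits a logistic regression on synthetic data with statsmodels and exponentiates the coefficients; the covariates and effect sizes are placeholders, not the study's actual adjustment set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300

# Synthetic data: child CFS status, maternal distress and one adjustment covariate
df = pd.DataFrame({
    "maternal_distress": rng.binomial(1, 0.3, n),
    "child_age": rng.integers(11, 19, n),
})
true_logit = -2.0 + 1.7 * df["maternal_distress"] + 0.02 * (df["child_age"] - 15)
df["child_cfs"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(df[["maternal_distress", "child_age"]])
fit = sm.Logit(df["child_cfs"], X).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals
print(np.exp(fit.params))      # exponentiated coefficients = odds ratios
print(np.exp(fit.conf_int()))  # 95% CI on the odds-ratio scale
```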
abstract_id: PUBMED:34191221 Parent-child discrepancies in health-related quality of life of children and adolescents with myalgic encephalomyelitis/chronic fatigue syndrome. Purpose: Few studies have examined parent-child discrepancies on self-report measures of myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) symptomatology and health-related quality of life (HRQOL). The aim of this study was to investigate parent-child reporting discrepancies between a pediatric sample of diagnosed patients with ME/CFS and controls to better understand the role of children and adolescent reporting. Method: Data for this study were drawn from a community-based epidemiological study of pediatric ME/CFS in the Chicagoland area. A total of 147 parent-child dyads (75 pairs with ME/CFS and 72 control pairs) completed measures assessing HRQOL and ME/CFS symptomatology. At the individual level, agreement was assessed using intra-class correlation coefficient (ICC) scores. Agreement was measured at the group level by a comparison of means using paired-sample t-tests. Results: Intra-class correlations revealed varied agreement in both parent-child pairs of children who met at least one case definition of ME/CFS and in parent-child pairs in the control group. Conclusion: The current study provides support for the existence of discrepancies between parent-child reports of ME/CFS symptomatology and HRQOL measures. Limitations and future directions are discussed. Answer: The study described in abstract PUBMED:19616144 investigated whether parental expectations of their child's intellectual ability matched the child's actual ability, specifically in the context of adolescents with Chronic Fatigue Syndrome (CFS) compared to healthy controls. The results indicated that parents of children with CFS had significantly higher perceptions of their children's IQ than parents of healthy controls. This suggests that there may be a discrepancy between parental expectations and the actual intellectual abilities of adolescents with CFS. The study concludes that addressing high expectations may be an important aspect of treatment for adolescents with CFS.
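The parent-child discrepancy study above (PUBMED:34191221) quantifies agreement at the individual level with intra-class correlation coefficients and at the group level with paired-sample t-tests. A minimal sketch of both computations on synthetic dyad data follows; it assumes the pingouin package for the ICC, and all numbers are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
import pingouin as pg  # assumed available; provides intraclass_corr

rng = np.random.default_rng(2)
n_dyads = 75

# Hypothetical HRQOL scores (0-100) for parent-child dyads
child = rng.normal(55, 12, n_dyads)
parent = child + rng.normal(-4, 8, n_dyads)   # parents rate slightly lower on average

# Group-level agreement: paired-sample t-test on the dyad scores
t_stat, p_value = ttest_rel(child, parent)
print("paired t-test:", t_stat, p_value)

# Individual-level agreement: intra-class correlation (long-format data)
df = pd.DataFrame({
    "dyad": np.repeat(np.arange(n_dyads), 2),
    "rater": ["child", "parent"] * n_dyads,
    "score": np.column_stack([child, parent]).ravel(),
})
icc = pg.intraclass_corr(data=df, targets="dyad", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```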
Instruction: Do proton-pump inhibitors confer additional gastrointestinal protection in patients given celecoxib? Abstracts: abstract_id: PUBMED:17530673 Do proton-pump inhibitors confer additional gastrointestinal protection in patients given celecoxib? Objective: Celecoxib has a superior upper-gastrointestinal (GI) safety profile compared with nonselective nonsteroidal antiinflammatory drugs (NS-NSAIDs). It is unclear whether the utilization of a proton-pump inhibitor (PPI) with celecoxib confers additional protection in elderly patients. We assessed the association between GI hospitalizations and use of celecoxib with a PPI versus celecoxib alone, and NS-NSAIDs with a PPI or NS-NSAIDs alone in elderly patients. Methods: We conducted a population-based retrospective cohort study using the government of Quebec health services administrative databases. Elderly patients were included at their first dispensing date for celecoxib or an NS-NSAID between April 1999 and December 2002. Prescriptions were separated into 4 groups: celecoxib, celecoxib plus PPI, NS-NSAIDs, and NS-NSAIDs plus PPI. Cox regression models with time-dependent exposure were used to compare the hazard rates of GI hospitalization between the 4 groups adjusting for patient characteristics at baseline. Results: There were 1,161,508 prescriptions for celecoxib, 360,799 for celecoxib plus PPI, 715,176 for NS-NSAIDs, and 148,470 for NS-NSAIDs plus PPI. The adjusted hazard ratios (HRs; 95% confidence intervals [95% CIs]) were 0.69 (0.52-0.93) for celecoxib plus PPI versus celecoxib, 0.98 (0.67-1.45) for NS-NSAIDs plus PPI versus celecoxib, and 2.18 (1.82-2.61) for NS-NSAIDs versus celecoxib. Subgroup analyses showed that use of a PPI with celecoxib may be beneficial in patients ages >/=75 years but was not better than celecoxib alone among those ages 66-74 years (HR 0.98, 95% CI 0.63-1.52). Conclusion: Addition of a PPI to celecoxib conferred extra protection for patients ages >/=75 years. PPI did not seem necessary with celecoxib for patients ages 66-74 years. abstract_id: PUBMED:26520197 Gastrointestinal bleeding In the Digestive Disease Week in 2015 there have been some new contributions in the field of gastrointestinal bleeding that deserve to be highlighted. Treatment of celecoxib with a proton pump inhibitor is safer than treatment with nonselective NSAID and a proton pump inhibitor in high risk gastrointestinal and cardiovascular patients who mostly also take acetylsalicylic acid. Several studies confirm the need to restart the antiplatelet or anticoagulant therapy at an early stage after a gastrointestinal hemorrhage. The need for urgent endoscopy before 6-12 h after the onset of upper gastrointestinal bleeding episode may be beneficial in patients with hemodynamic instability and high risk for comorbidity. It is confirmed that in Western but not in Japanese populations, gastrointestinal bleeding episodes admitted to hospital during weekend days are associated with a worse prognosis associated with delays in the clinical management of the events. The strategy of a restrictive policy on blood transfusions during an upper GI bleeding event has been challenged. Several studies have shown the benefit of identifying the bleeding vessel in non varicose underlying gastric lesions by Doppler ultrasound which allows direct endoscopic therapy in the patient with upper GI bleeding. 
Finally, it has been reported that lower gastrointestinal bleeding diverticula band ligation or hemoclipping are both safe and have the same long-term outcomes. abstract_id: PUBMED:27888865 Advances in gastrointestinal bleeding. The main innovations of the latest meeting of the Gastroenterological Association (2016) concerning upper gastrointestinal bleeding from the clinician's perspective can be summarised as follows: a) The Glasgow-Blatchford scale has the best accuracy in predicting the need for surgical intervention and hospital mortality; b) Prognostic scales for non-variceal upper gastrointestinal bleeding are also useful for lower gastrointestinal bleeding; c) Preliminary data suggest that treatment with hemospray does not seem to be superior to current standard treatment in controlling active peptic ulcer bleeding; d) Either famotidine or a proton pump inhibitor may be effective in preventing haemorrhagic recurrence in patients taking aspirin, but this finding needs to be confirmed in further studies; e) There was confirmation of the need to re-introduce antiplatelet therapy as early as possible in patients with antiplatelet-associated gastrointestinal bleeding in order to prevent cardiovascular mortality; f) Routine clinical practice suggests that gastrointestinal or cardiovascular complications with celecoxib or traditional NSAIDs are very low; g) Dabigatran is associated with an increased incidence of gastrointestinal bleeding compared with apixaban or warfarin. At least half of the episodes are located in the lower gastrointestinal tract; h) Implant devices for external ventricular circulatory support are associated with early gastrointestinal bleeding in up to one third of patients; the bleeding is often secondary to arteriovenous malformations. abstract_id: PUBMED:12705062 Cyclooxygenase 2 selective antirheumatic analgesics Because of Cyclooxygenase-2 selective non steroidal anti-inflammatory drugs (NSAIDs), the therapy of articular pain has become safer and more convenient. Currently, two highly Cyclooxygenase-2 selective drugs, celecoxib and rofecoxib, are available. Both are effective for patients with osteoarthritis (at daily dosages of 200 mg and 12.5 mg, respectively) and rheumatoid arthritis (RA) (at twice the above dosages). At higher daily dosages of 800 mg and 50 mg these substances still appear safe with regard to life-threatening gastrointestinal complications (perforation, obstruction, bleeding), if not given with concomitant aspirin. However, Cyclooxygenase-2 selective non steroidal anti-inflammatory drugs do not confer protection against platelet aggregation and aspirin must be given where required for cardiovascular prophylaxis. Most patients will then routinely need gastroprotective agents such as proton pump inhibitors or misoprostol; it is unclear whether coxibs confer any benefit under such circumstances. Although not a coxib, Meloxicam does not appear to cause serious gastrointestinal complications if the low daily dosage of 7.5 mg is sufficient for the control of less pronounced pain and thus not exceeded. The gastrointestinal safety of nimesulide can not be sufficiently evaluated based on the available clinical data. 
abstract_id: PUBMED:17469317 Further definition of the role of COX-2 inhibitors and NSAIDs in patients with nociceptive pain. New information has been reported regarding the effects of cyclo-oxygenase (COX)-2 inhibitors on renal function and cardiac arrhythmia, indicating that the incidence of peripheral oedema, hypertension and renal failure is different for the different selective COX-2 inhibitors. The estimated renal risk due to valdecoxib/parecoxib, etoricoxib and lumiracoxib is essentially unchanged, the risk due to rofecoxib is increased, while the risk due to celecoxib in low dosage is decreased. New data have also been reported on the cardiovascular risk due to cyclo-oxygenase inhibition, indicating that the relative risk due to naproxen, piroxicam, ibuprofen, celecoxib and meloxicam is essentially unchanged while the risk due to indomethacin, diclofenac and rofecoxib is increased. Recent studies show that the cardiovascular risk of etoricoxib is comparable to that of diclofenac. For daily practice, the following actions should be taken: (a) determine whether a prostaglandin synthetase inhibitor is needed; (b) consider the gastrointestinal as well as the cardiovascular risk profile of the patient; (c) if the gastrointestinal risk is above normal, a selective COX-2 inhibitor or a classical NSAID with a proton-pump inhibitor may be used; (d) in patients with renal disease, heart failure or hypertension without arteriosclerosis, the choice is between a classical NSAID, notably naproxen and ibuprofen, and low-dose celecoxib (200 mg per day); (e) in patients with arteriosclerosis in whom secondary cardiovascular prophylaxis with low-dose aspirin is indicated, celecoxib has no added value. abstract_id: PUBMED:17552414 Cardiovascular and gastrointestinal risks associated with selective and non-selective NSAIDs. There has been much discussion regarding the cardiovascular and gastrointestinal safety of traditional and COX-2 selective NSAIDs. The national and international guidelines differ in their recommendations. Selective COX-2 inhibitors seem to have a diminished risk for severe gastrointestinal complications in the short term, but the long-term benefit has not yet been proven. In various studies, COX-2 selective NSAIDs have been associated with an increased risk of cardiovascular complications. This connection has been clearly demonstrated only for rofecoxib. Celecoxib seems to lead to an increased risk only at high dosages. However, more patients will have to be followed for a longer period to confirm these results. There is insufficient evidence that the COX-2 selective agents lead to more frequent cardiovascular complications than the traditional NSAIDs. In patients with an increased risk of gastrointestinal complications and no cardiovascular risk, there is no preference for either COX-2 selective NSAIDs or the combination of traditional NSAIDs and a proton pump inhibitor. If dyspepsia develops during the use of a traditional NSAID, then it seems more effective to add a proton-pump inhibitor to the traditional NSAID rather than replacing it by a COX-2 selective NSAID. abstract_id: PUBMED:20662999 Are COX-2 inhibitors preferable to combined NSAID and PPI in countries with moderate health service expenditures?
Rationale: In developed countries, cyclooxygenase 2 (COX-2) inhibitors were shown to be less costly than the combination of non-steroidal anti-inflammatory drugs (NSAIDs) and proton pump inhibitors (PPIs) in treatment of patients with high risk of serious gastrointestinal (GI) adverse effects. It is questionable if such results apply to developing countries where health service costs are lower and there is high discrepancy between generic and patent protected drug prices. We analysed the direct cost of treatment with generic NSAIDs in combination with PPIs versus branded COX-2 inhibitors in patients with high risk of serious GI adverse effects from the perspective of the public health service in Serbia. Methods: Total cost of treatment of serious GI complications and the use of NSAID+PPI versus COX-2 inhibitors were calculated. A model for estimation of cost of treatment of NSAID+PPI versus COX-2 inhibitors which included the probability of developing serious GI adverse effects was developed. Results: Total cost of treatment of serious GI adverse effects resulted in an average of $814/patient. Considering the relative risk of such adverse effects for patients with four or more risk factors, the least costly treatment over 6 months was the use of celecoxib ($487). Compared with diclofenac+omeprazole, cost savings were estimated at $59 and $22 per patient with celecoxib and etoricoxib, respectively. Conclusion: Cost savings may be achieved by using COX-2 inhibitors in patients at high risk of GI adverse effects even in countries with moderate health care service expenditures. Such possibility requires further investigation. abstract_id: PUBMED:16393277 Systematic review: coxibs, non-steroidal anti-inflammatory drugs or no cyclooxygenase inhibitors in gastroenterological high-risk patients? Selective cyclooxygenase-2 inhibitors have been marketed as alternatives of conventional, non-steroidal anti-inflammatory drugs with the purpose of reducing/eliminating the risk of ulcer complications. Unexpectedly, randomized-controlled trials revealed that long-term use of coxibs, such as rofecoxib, significantly increased the risk of myocardial infarction and stroke, while the use of valdecoxib was associated with potentially life-threatening skin reactions. Subsequently, rofecoxib and valdecoxib were withdrawn from the market. Although more strict precautions for other coxibs, such as celecoxib, etoricoxib, lumiracoxib and parecoxib, may be accepted/recommended by regulatory agencies, a critical review of published data suggests that their use may not be justified - even in high-risk patients - taking benefits, costs and risks into consideration. Clinicians should, therefore, never prescribe coxibs to patients with cardiovascular risk factors, and should only reluctantly prescribe coxibs to patients with a history of ulcer disease or dyspepsia to overcome persistent pain due to, e.g. rheumatoid arthritis or osteoarthritis. Instead, they should consider using conventional non-steroidal anti-inflammatory drugs in combination with a proton pump inhibitor or a prostaglandin analogue, especially for patients with increased cardiovascular risks, i.e. established ischaemic heart disease, cerebrovascular disease and/or peripheral arterial disease, or alternatively acetaminophen. An evidence-based algorithm for treatment of a chronic arthritis patient with one or more gastrointestinal risk factors is presented. 
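The Serbian analysis above (PUBMED:20662999) compares expected direct costs by weighting the cost of treating a serious GI complication (reported as $814 per patient on average) by the probability of such an event under each strategy. A small worked example of that kind of calculation follows; the drug costs and event probabilities are entirely hypothetical placeholders, not figures from the study.

```python
# Expected 6-month cost per patient for two strategies (illustrative values only)
cost_gi_event = 814.0  # average cost of treating a serious GI complication ($, from the abstract)

strategies = {
    "generic NSAID + PPI":     {"drug_cost": 60.0,  "p_gi_event": 0.10},
    "branded COX-2 inhibitor": {"drug_cost": 110.0, "p_gi_event": 0.03},
}

for name, s in strategies.items():
    expected_cost = s["drug_cost"] + s["p_gi_event"] * cost_gi_event
    print(f"{name}: expected cost per patient = ${expected_cost:.2f}")
```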
abstract_id: PUBMED:19602530 Cost effectiveness of COX 2 selective inhibitors and traditional NSAIDs alone or in combination with a proton pump inhibitor for people with osteoarthritis. Objectives: To investigate the cost effectiveness of cyclo-oxygenase-2 (COX 2) selective inhibitors and traditional non-steroidal anti-inflammatory drugs (NSAIDs), and the addition of proton pump inhibitors to these treatments, for people with osteoarthritis. Design: An economic evaluation using a Markov model and data from a systematic review was conducted. Estimates of cardiovascular and gastrointestinal adverse events were based on data from three large randomised controlled trials, and observational data were used for sensitivity analyses. Efficacy benefits from treatment were estimated from a meta-analysis of trials reporting total Western Ontario and McMaster Universities (WOMAC) osteoarthritis index score. Other model inputs were obtained from the relevant literature. The model was run for a hypothetical population of people with osteoarthritis. Subgroup analyses were conducted for people at high risk of gastrointestinal or cardiovascular adverse events. Comparators: Licensed COX 2 selective inhibitors (celecoxib and etoricoxib) and traditional NSAIDs (diclofenac, ibuprofen, and naproxen) for which suitable data were available were compared. Paracetamol was also included, as was the possibility of adding a proton pump inhibitor (omeprazole) to each treatment. Main Outcome Measures: The main outcome measure was cost effectiveness, which was based on quality adjusted life years gained. Quality adjusted life year scores were calculated from pooled estimates of efficacy and major adverse events (that is, dyspepsia; symptomatic ulcer; complicated gastrointestinal perforation, ulcer, or bleed; myocardial infarction; stroke; and heart failure). Results: Addition of a proton pump inhibitor to both COX 2 selective inhibitors and traditional NSAIDs was highly cost effective for all patient groups considered (incremental cost effectiveness ratio less than £1000 (€1175, $1650)). This finding was robust across a wide range of effectiveness estimates if the cheapest proton pump inhibitor was used. In our base case analysis, adding a proton pump inhibitor to a COX 2 selective inhibitor (used at the lowest licensed dose) was a cost effective option, even for patients at low risk of gastrointestinal adverse events (incremental cost effectiveness ratio approximately £10 000). Uncertainties around relative adverse event rates meant relative cost effectiveness for individual COX 2 selective inhibitors and traditional NSAIDs was difficult to determine. Conclusions: Prescribing a proton pump inhibitor for people with osteoarthritis who are taking a traditional NSAID or COX 2 selective inhibitor is cost effective. The cost effectiveness analysis was sensitive to adverse event data and the specific choice of COX 2 selective inhibitor or NSAID agent should, therefore, take into account individual cardiovascular and gastrointestinal risks. abstract_id: PUBMED:12772009 Motion--Cyclo-oxygenase-2 selective nonsteroidal anti-inflammatory drugs are as safe as placebo for the stomach: arguments against the motion. Cyclo-oxygenase (COX) exists in two isoforms, COX-1 and COX-2, that direct the synthesis of prostaglandins, prostacyclin and thromboxane. Traditional nonsteroidal anti-inflammatory drugs (NSAIDs) inhibit both isoenzymes, resulting in damage to the mucosa of the stomach and duodenum, but also in cardioprotection.
Selective COX-2 inhibitors are less likely to damage the upper gastrointestinal tract, as has been shown by large, randomized, controlled trials. Specifically, the newer agents are superior to ibuprofen and naproxen in this regard, but celecoxib and diclofenac were not significantly different in patients who were not also taking low-dose acetylsalicylic acid. These studies did not include a placebo arm, however, and controlled comparisons of COX-2 inhibitors with placebo have not enlisted enough subjects to demonstrate conclusively that they are equally safe. Selectivity for the COX-2 isoform affords protection against upper gastrointestinal toxicity possibly at the expense of the cardioprotective effect of traditional NSAIDs. This might explain the higher rate of nonfatal myocardial infarction in patients who are given rofecoxib compared with naproxen. A traditional NSAID, combined with either misoprostol or a proton pump inhibitor, is still a suitable alternative to selective COX-2 inhibitors for the treatment of arthritis. Answer: Yes, proton pump inhibitors (PPIs) do confer additional gastrointestinal protection in patients given celecoxib, particularly in certain populations. A population-based retrospective cohort study indicated that the addition of a PPI to celecoxib conferred extra protection against gastrointestinal (GI) hospitalizations for patients aged 75 years and older. However, for patients aged 66-74 years, the use of a PPI with celecoxib did not show a significant benefit over celecoxib alone (PUBMED:17530673). Furthermore, treatment with celecoxib and a PPI has been reported to be safer than treatment with nonselective NSAIDs and a PPI in high-risk GI and cardiovascular patients, many of whom also take acetylsalicylic acid (PUBMED:26520197). This suggests that in patients at high risk of GI complications, the combination of a PPI with celecoxib may be a preferable treatment option. In the context of cost-effectiveness, adding a PPI to both COX-2 selective inhibitors like celecoxib and traditional NSAIDs has been found to be highly cost-effective for all patient groups considered, which supports the use of PPIs in combination with these medications (PUBMED:19602530). However, it is important to note that the use of COX-2 selective inhibitors such as celecoxib should be carefully considered in patients with cardiovascular risk factors, and the specific choice of COX-2 selective inhibitor or NSAID should take into account individual cardiovascular and gastrointestinal risks (PUBMED:17469317, PUBMED:17552414). In summary, the evidence suggests that PPIs do offer additional GI protection when used in conjunction with celecoxib, especially in elderly patients and those at high risk of GI complications. However, the decision to use a PPI should be individualized based on the patient's overall risk profile.
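The osteoarthritis model cited above (PUBMED:19602530) expresses results as incremental cost-effectiveness ratios (ICERs) per quality-adjusted life year gained. The ICER is simply the cost difference between two strategies divided by their QALY difference; the sketch below uses made-up numbers purely to show the arithmetic.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per additional QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: COX-2 inhibitor + PPI vs COX-2 inhibitor alone
print(icer(cost_new=1250.0, qaly_new=1.82, cost_old=1200.0, qaly_old=1.77))
# -> 1000.0 per QALY gained, i.e. well below commonly used willingness-to-pay thresholds
```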
Instruction: High versus standard-volume haemofiltration in hyperdynamic porcine peritonitis: effects beyond haemodynamics? Abstracts: abstract_id: PUBMED:18853140 High versus standard-volume haemofiltration in hyperdynamic porcine peritonitis: effects beyond haemodynamics? Objective: The role of haemofiltration as an adjunctive treatment of sepsis remains a contentious issue. To address the role of dose and to explore the biological effects of haemofiltration we compared the effects of standard and high-volume haemofiltration (HVHF) in a peritonitis-induced model of porcine septic shock. Design And Setting: Randomized, controlled experimental study. Subjects: Twenty-one anesthetized and mechanically ventilated pigs. Interventions: After 12 h of hyperdynamic peritonitis, animals were randomized to receive either supportive treatment (Control, n = 7) or standard haemofiltration (HF 35 ml/kg per h, n = 7) or HVHF (100 ml/kg per hour, n = 7). Measurements And Results: Systemic and hepatosplanchnic haemodynamics, oxygen exchange, energy metabolism (lactate/pyruvate, ketone body ratios), ileal and renal cortex microcirculation and systemic inflammation (TNF-alpha, IL-6), nitrosative/oxidative stress (TBARS, nitrates, GSH/GSSG) and endothelial/coagulation dysfunction (von Willebrand factor, asymmetric dimethylarginine, platelet count) were assessed before and at 12, 18, and 22 h of peritonitis. Although fewer haemofiltration-treated animals required noradrenaline support (86%, 43% and 29% of animals in the control, HF and HVHF groups, respectively), neither haemofiltration dose reversed the hyperdynamic circulation or lung dysfunction, nor ameliorated alterations in gut and kidney microvascular perfusion. Both HF and HVHF failed to attenuate sepsis-induced alterations in surrogate markers of cellular energetics, nitrosative/oxidative stress, endothelial injury or systemic inflammation. Conclusions: In this porcine model of septic shock early HVHF proved superior in preventing the development of septic hypotension. However, neither haemofiltration dose was capable of reversing the progressive disturbances in microvascular, metabolic, endothelial and lung function, at least within the timeframe of the study and severity of the model. abstract_id: PUBMED:18084982 Continuous veno-venous haemofiltration attenuates myocardial mitochondrial respiratory chain complexes activity in porcine septic shock. Increasing evidence indicates that mitochondrial dysfunction plays an important role in modulating the development of septic shock. In the present study, we investigated whether high-volume continuous veno-venous haemofiltration (CVVH) might improve myocardial mitochondrial dysfunction in a porcine model of peritonitis-induced septic shock. Sixteen male Landrace pigs weighing 31 +/- 5 kg were randomly assigned to a normal control group (n = 4), a peritonitis group (n = 6) and a peritonitis plus CVVH group (n = 6). All animals were anaesthetised and mechanically ventilated. After baseline examinations, the peritonitis group and the peritonitis plus CVVH group underwent induction of peritonitis. One hour later, the animals in the peritonitis plus CVVH group received treatment with high-volume CVVH. Twelve hours after treatment, the animals were sacrificed. Animals in the peritonitis group were killed 13 hours after induction of peritonitis. The peritonitis challenge induced septic shock associated with increased blood lactate, and high-volume CVVH improved the lactic acidosis.
Compared with the peritonitis group, cardiac output, stroke volume and mean arterial pressure were better maintained in peritonitis plus CVVH group. More importantly, high-volume CVVH improved myocardial mitochondrial complex I activity (0.22 +/- 0.03 vs. 0.15 +/- 0.04, P = 0.04). These results suggest that high-volume CVVH improves haemodynamics and heart dysfunction in septic shock and the improvement may be attributed to amelioration of myocardial mitochondrial dysfunction. abstract_id: PUBMED:25437577 The current status of emergency operations at a high-volume cancer center. This study aimed to assess the pathogenic causes, clinical conditions, surgical procedures, in-hospital mortality, and operative death associated with emergency operations at a high-volume cancer center. Although many reports have described the contents, operative procedures, and prognosis of elective surgeries in high-volume cancer centers, emergency operations have not been studied in sufficient detail. We retrospectively enrolled 28 consecutive patients who underwent emergency surgery. Cases involving operative complications were excluded. The following surgical procedures were performed during emergency operations: closure in 3 cases (10.7%), diversion in 22 cases (78.6%), ileus treatment in 2 cases (7.1%), and hemostasis in 1 case (3.6%). Closure alone was performed only once for peritonitis. Diversion was performed in 17 cases (77.3%) of peritonitis, 4 cases (18.2%) of stenosis of the gastrointestinal tract, and 1 case (4.5%) of bleeding. There was a significant overall difference (P = 0.001). The frequency of emergency operations was very low at a high-volume cancer center. However, the recent shift in treatment approaches toward nonoperative techniques may enhance the status of emergency surgical procedures. The results presented in this study will help prepare for emergency situations and resolve them as quickly and efficiently as possible. abstract_id: PUBMED:18936693 Continuous hemofiltration in pigs with hyperdynamic septic shock affects cardiac repolarization. Objective: Sepsis has been defined as the systemic host response to infection with an overwhelming systemic production of both proinflammatory and anti-inflammatory mediators. Continuous hemofiltration has been suggested as possible therapeutic option that may remove the inflammatory mediators. However, hemodialysis and hemofiltration were reported to influence cardiac electrophysiologic parameters and to increase the arrhythmogenic risk. We hypothesize that sepsis affects electrophysiologic properties of the pig heart and that the effects of sepsis are modified by hemofiltration. Design: Laboratory animal experiments. Setting: Animal research laboratory at university medical school. Subjects: Forty domestic pigs of either gender. Interventions: In anesthetized, mechanically ventilated, and instrumented pigs sepsis was induced by fecal peritonitis and continued for 22 hours. Conventional or high-volume hemofiltration was applied for the last 10 hours of this period. Measurements And Main Results: Electrocardiogram was recorded before and 22 hours after induction of peritonitis. RR, QT, and QTc intervals were significantly shortened by sepsis. The plasma levels of interleukin-6 and tumor necrosis factor-alpha were increased in sepsis. High-volume hemofiltration blunted the sepsis-induced increase in tumor necrosis factor-alpha. Action potentials were recorded in isolated ventricular tissues obtained at the end of in vivo experiments. 
Action potential durations were significantly shortened in septic preparations at all stimulation cycle lengths tested. Both conventional and high-volume hemofiltration led to further shortening of action potential durations measured afterward in vitro. This action potential duration shortening was reversed by septic hemofiltrates obtained previously by conventional or high-volume hemofiltration. Tumor necrosis factor-alpha (500 ng/L) had no effect on action potential durations in vitro. Conclusions: In a clinically relevant porcine model of hyperdynamic septic shock, both sepsis and continuous hemofiltration shortened the duration of cardiac repolarization. The continuous hemofiltration was not associated with an increased prevalence of ventricular arrhythmias. Tumor necrosis factor-alpha or interleukin-6 did not contribute to the observed changes in action potential durations. abstract_id: PUBMED:26695620 A systematic review of the impact of center volume in dialysis. Background: A significant relationship exists between the volume of surgical procedures that a given center performs and subsequent outcomes. It seems plausible that such a volume-outcome relationship is also present in dialysis. Methods: MEDLINE and EMBASE were searched in November 2014 for non-experimental studies evaluating the association between center volume and patient outcomes [mortality, morbidity, peritonitis, switch to hemodialysis (HD) or any other treatment], without language restrictions or other limits. Selection of relevant studies, data extraction and critical appraisal were performed by two independent reviewers. We did not perform meta-analysis due to clinical and methodological heterogeneity (e.g. different volume categories). Results: 16 studies met our inclusion criteria. Most studies were performed in the US. The study quality ranged from fair to good. Only a few items were judged to have a high risk of bias, while many items were judged to have an unclear risk of bias due to insufficient reporting. All 10 studies that analyzed peritoneal dialysis (PD) technique survival by modeling switch to HD or any other treatment as an outcome showed a statistically significant effect. The relative effect measures ranged from 0.25 to 0.94 (median 0.73) in favor of high volume centers. All nine studies indicated a lower mortality for PD in high volume centers, but only one study was statistically significant. Conclusions: This systematic review supports a volume-outcome relationship in peritoneal dialysis with respect to switch to HD or any other treatment. An effect on mortality is probably present in HD. Further research is needed to identify and understand the associations of center volume that are causally related to patient benefit. abstract_id: PUBMED:7984315 Hemodynamic and metabolic effects of noradrenaline infusion in a case of hyperdynamic septic shock. Aim: To evaluate the effect of noradrenaline infusion in a case of hyperdynamic septic shock refractory to volume loading, dopamine and dobutamine, on hemodynamic parameters, oxygen transport, lactate and pyruvate levels. Design: Description of a clinical case. Setting: Postsurgical Intensive Care Unit in a University Hospital. Patient: A 48-year-old woman with symptoms of peritonitis due to Enterobacter agglomerans and refractory hyperdynamic septic shock. Interventions: Administration of noradrenaline in doses ranging from 0.03 to 0.14 micrograms/kg/min.
Measurements And Results: Before and after noradrenaline infusion the following were evaluated: hemodynamic parameters, oxygen transport, acid-base status, arterial blood levels of lactate and pyruvate, and the lactate/pyruvate ratio. During the administration of noradrenaline an increase was observed over time in oxygen consumption (from 110 +/- 16 to 164 +/- 19 mL/min/m2; p < 0.01), peripheral vascular resistance (from 509 +/- 95 to 1172 +/- 384 dynes.sec.cm-5, p < 0.01) and the oxygen extraction index (from 12.9 +/- 2.1 to 21.2 +/- 2.9%, p < 0.01), together with reduced lactate (from 24.4 +/- 1.5 to 4.9 +/- 5.1 mmol/L) and pyruvate levels (from 945 +/- 62 to 357 +/- 174 mumol/L; p < 0.01) and a reduced lactate/pyruvate ratio (from 26.2 +/- 1.2 to 11.8 +/- 5.9, p < 0.01). No significant increases were found in cardiac output and oxygen delivery. Conclusions: In the case observed here the infusion of noradrenaline induced an increase in oxygen consumption and the oxygen extraction index associated with a reduction in the lactate/pyruvate ratio and the normalisation of the acid-base status. These changes were not associated with an increase in oxygen delivery, which remained > or = 600 mL/min/m2. abstract_id: PUBMED:4032510 Skeletal muscle insulin unresponsiveness during chronic hyperdynamic sepsis in the dog. Recent reports from our laboratory and others have documented changes in insulin unresponsiveness and electrolyte and hormonal changes characteristic of hypodynamic shock states in anesthetized animals. Since most acute shock protocols do not adequately mimic the clinical profile of sepsis, the present study was undertaken to document the hemodynamic and metabolic changes associated with chronic hyperdynamic peritonitis in dogs. Mongrel dogs of either sex weighing 20 +/- 2 kg were surgically instrumented with an electromagnetic aortic flow probe for monitoring cardiac output determinations, and aortic and central venous catheters for withdrawing blood for blood pressure and chemical analyses. Following a recovery period (7-10 days) control hemodynamic and metabolic measurements were made. Sepsis was induced (peritoneal abscess) by implanting a 4" X 4" gauze sponge, previously inoculated with human fecal bacteria, amid the intestines. Experimental (N = 18) and pair-fed control (N = 6) animals were monitored daily for 21 days or until death. During the septic protocol, cardiac index increased from a control value of 3.4 L/min/m2 to 5.5 L/min/m2 by the end of the experimental period. Mean arterial blood pressure, total peripheral resistance index, body weight, and plasma Ca++ fell below control values during the experimental period. Body temperature, plasma glucose, insulin, glucagon, and Mg++ were all elevated with sepsis. At the end of the chronic experimental period, skeletal muscle insulin responsiveness was assessed in the isolated, innervated, constantly perfused gracilis muscle preparation. Pair-fed control animals responded to various concentrations of local insulin infusion by increasing glucose uptake by the gracilis muscle. However, septic animals had a blunted response to local insulin infusion resulting in a decrease in the maximum dose response effect. These data demonstrate that: chronic, hyperdynamic peritonitis in the dog more closely mimics the human clinical profile of sepsis; and hyperdynamic sepsis is associated with a state of skeletal muscle insulin unresponsiveness which results from a post-receptor defect.
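The case report above (PUBMED:7984315) tracks derived indices such as the oxygen extraction index, systemic vascular resistance and the lactate/pyruvate ratio. The underlying formulas are simple; the sketch below shows them with illustrative input values that are not the patient's data.

```python
def oxygen_extraction_index(vo2, do2):
    """Oxygen extraction = consumption / delivery, expressed as a percentage."""
    return 100.0 * vo2 / do2

def systemic_vascular_resistance(map_mmhg, cvp_mmhg, cardiac_output_l_min):
    """SVR in dyn*s*cm^-5 = 80 * (MAP - CVP) / cardiac output."""
    return 80.0 * (map_mmhg - cvp_mmhg) / cardiac_output_l_min

def lactate_pyruvate_ratio(lactate_mmol_l, pyruvate_umol_l):
    """Convert lactate to umol/L so both terms share the same unit."""
    return (lactate_mmol_l * 1000.0) / pyruvate_umol_l

# Illustrative values only
print(oxygen_extraction_index(vo2=150, do2=700))    # ~21.4 %
print(systemic_vascular_resistance(75, 8, 6.0))     # ~893 dyn*s*cm^-5
print(lactate_pyruvate_ratio(3.0, 250.0))           # 12.0
```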
abstract_id: PUBMED:22440906 Effects of biocompatible versus standard fluid on peritoneal dialysis outcomes. The clinical benefits of using "biocompatible" neutral pH solutions containing low levels of glucose degradation products for peritoneal dialysis compared with standard solutions are uncertain. In this multicenter, open-label, parallel-group, randomized controlled trial, we randomly assigned 185 incident adult peritoneal dialysis patients with residual renal function to use either biocompatible or conventional solution for 2 years. The primary outcome measure was slope of renal function decline. Secondary outcome measures comprised time to anuria, fluid volume status, peritonitis-free survival, technique survival, patient survival, and adverse events. We did not detect a statistically significant difference in the rate of decline of renal function between the two groups as measured by the slopes of GFR: -0.22 and -0.28 ml/min per 1.73 m(2) per month (P=0.17) in the first year in the biocompatible and conventional groups, respectively, and, -0.09 and -0.10 ml/min per 1.73 m(2) per month (P=0.9) in the second year. The biocompatible group exhibited significantly longer times to anuria (P=0.009) and to the first peritonitis episode (P=0.01). This group also had fewer patients develop peritonitis (30% versus 49%) and had lower rates of peritonitis (0.30 versus 0.49 episodes per year, P=0.01). In conclusion, this trial does not support a role for biocompatible fluid in slowing the rate of GFR decline, but it does suggest that biocompatible fluid may delay the onset of anuria and reduce the incidence of peritonitis compared with conventional fluid in peritoneal dialysis. abstract_id: PUBMED:31920228 Low tidal volume ventilation strategy and organ functions in patients with pre-existing systemic inflammatory response. Background And Aims: Ventilation can induce increase in inflammatory mediators that may contribute to systemic organ dysfunction. Ventilation-induced organ dysfunction is likely to be accentuated if there is a pre-existing systemic inflammatory response. Material And Methods: Adult patients suffering from intestinal perforation peritonitis-induced systemic inflammatory response syndrome and scheduled for emergency laparotomy were randomized to receive intraoperative ventilation with 10 ml.kg-1 tidal volume (Group H) versus lower tidal volume of 6 ml.kg-1 along with positive end-expiratory pressure (PEEP) of 10 cmH2O (Group L), (n = 45 each). The primary outcome was postoperative organ dysfunction evaluated using the aggregate Sepsis-related Organ Failure Assessment (SOFA) score. The secondary outcomes were, inflammatory mediators viz. interleukin-6, tumor necrosis factor-α, procalcitonin, and C-reactive protein, assessed prior to (basal) and 1 h after initiation of mechanical ventilation, and 18 h postoperatively. Results: The aggregate SOFA score (3[1-3] vs. 1[1-3]); and that on the first postoperative day (2[1-3] vs. 1[0-3]) were higher for group L as compared to group H (P < 0.05). All inflammatory mediators were statistically similar between both groups at all time intervals (P > 0.05). Conclusions: Mechanical ventilation with low tidal volume of 6 ml/kg-1 along with PEEP of 10 cmH2O is associated with significantly worse postoperative organ functions as compared to high tidal volume of 10 ml.kg-1 in patients of perforation peritonitis-induced systemic inflammation undergoing emergency laparotomy. 
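The biocompatible-fluid trial above (PUBMED:22440906) compares treatment arms on the slope of GFR decline, which is typically estimated per patient by regressing GFR on time and then comparing the mean slopes between groups. A minimal sketch of that approach on synthetic data follows; the visit schedule, group sizes and noise levels are hypothetical, and only the target mean slopes (-0.22 and -0.28 ml/min per 1.73 m2 per month) are taken from the abstract.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
months = np.arange(0, 25, 3)  # hypothetical visits every 3 months over 2 years

def gfr_slope(times, gfr):
    """Per-patient rate of GFR change: least-squares slope of GFR on time."""
    return np.polyfit(times, gfr, 1)[0]

def simulate_arm(n_patients, mean_slope):
    slopes = []
    for _ in range(n_patients):
        true_slope = rng.normal(mean_slope, 0.10)
        gfr = 8.0 + true_slope * months + rng.normal(0, 0.5, months.size)
        slopes.append(gfr_slope(months, gfr))
    return np.array(slopes)

biocompatible = simulate_arm(90, mean_slope=-0.22)
conventional = simulate_arm(90, mean_slope=-0.28)

# Compare the mean decline between arms (the trial found no significant difference)
t_stat, p_value = ttest_ind(biocompatible, conventional)
print(biocompatible.mean(), conventional.mean(), p_value)
```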
abstract_id: PUBMED:34925058 Early Hyperdynamic Sepsis Alters Coronary Blood Flow Regulation in Porcine Fecal Peritonitis. Background: Sepsis is a common condition known to impair blood flow regulation and microcirculation, which can ultimately lead to organ dysfunction but such contribution of the coronary circulation remains to be clarified. We investigated coronary blood flow regulatory mechanisms, including autoregulation, metabolic regulation, and endothelial vasodilatory response, in an experimental porcine model of early hyperdynamic sepsis. Methods: Fourteen pigs were randomized to sham (n = 7) or fecal peritonitis-induced sepsis (n = 7) procedures. At baseline, 6 and 12 h after peritonitis induction, the animals underwent general and coronary hemodynamic evaluation, including determination of autoregulatory breakpoint pressure and adenosine-induced maximal coronary vasodilation for coronary flow reserve and hyperemic microvascular resistance calculation. Endothelial-derived vasodilatory response was assessed both in vivo and ex vivo using bradykinin. Coronary arteries were sampled for pathobiological evaluation. Results: Sepsis resulted in a right shift of the autoregulatory breakpoint pressure, decreased coronary blood flow reserve and increased hyperemic microvascular resistance from the 6th h after peritonitis induction. In vivo and ex vivo endothelial vasomotor function was preserved. Sepsis increased coronary arteries expressions of nitric oxide synthases, prostaglandin I2 receptor, and prostaglandin F2α receptor. Conclusion: Autoregulation and metabolic blood flow regulation were both impaired in the coronary circulation during experimental hyperdynamic sepsis, although endothelial vasodilatory response was preserved. Answer: In a study comparing high-volume haemofiltration (HVHF) to standard haemofiltration (HF) in a porcine model of septic shock induced by peritonitis, it was found that early HVHF was superior in preventing the development of septic hypotension. However, neither HF nor HVHF doses were capable of reversing the progressive disturbances in microvascular, metabolic, endothelial, and lung function within the timeframe and severity of the model used in the study. Despite fewer animals in the haemofiltration-treated groups requiring noradrenaline support, there was no reversal of hyperdynamic circulation, lung dysfunction, or amelioration of alterations in gut and kidney microvascular perfusion. Additionally, both HF and HVHF failed to attenuate sepsis-induced alterations in surrogate markers of cellular energetics, nitrosative/oxidative stress, endothelial injury, or systemic inflammation (PUBMED:18853140). In another study, continuous veno-venous haemofiltration (CVVH) with high-volume was shown to improve myocardial mitochondrial complex I activity in a porcine model of peritonitis-induced septic shock. High-volume CVVH improved haemodynamics and heart dysfunction in septic shock, which may be attributed to the amelioration of myocardial mitochondrial dysfunction (PUBMED:18084982). These findings suggest that while high-volume haemofiltration may have some beneficial effects on certain aspects of sepsis, such as preventing septic hypotension and improving myocardial mitochondrial function, it does not appear to significantly reverse other systemic effects of sepsis, including disturbances in microvascular, metabolic, endothelial, and lung function, at least within the limitations of the experimental models used in these studies.
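As a side note on the noradrenaline finding cited in the answer above: with seven animals per group, the reported 86%, 43% and 29% correspond to roughly 6, 3 and 2 animals needing vasopressor support. Differences in such small proportions would typically be examined with a chi-square or Fisher exact test; the sketch below reconstructs approximate counts from the percentages purely for illustration and does not reproduce the study's actual analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Animals needing noradrenaline vs not, per group (counts reconstructed from 86/43/29%)
table = np.array([
    [6, 1],   # Control (n = 7)
    [3, 4],   # HF 35 ml/kg per h (n = 7)
    [2, 5],   # HVHF 100 ml/kg per h (n = 7)
])

chi2, p, dof, expected = chi2_contingency(table)
print("overall chi-square:", chi2, "p =", p)

# Pairwise control vs HVHF comparison; Fisher's exact test suits these small counts
odds_ratio, p_pair = fisher_exact(table[[0, 2]])
print("control vs HVHF:", odds_ratio, p_pair)
```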
Instruction: Optimal lesion assessment following acute radio frequency ablation of porcine kidney: cellular viability or histopathology? Abstracts: abstract_id: PUBMED:14501771 Optimal lesion assessment following acute radio frequency ablation of porcine kidney: cellular viability or histopathology? Purpose: Radio frequency ablation (RFA) has been used as a minimally invasive alternative to nephrectomy for small renal tumors. Questions have arisen regarding the accuracy of cell viability determination on standard hematoxylin and eosin (H & E) staining. We investigated and compared the histological characteristics of RF ablated renal tissue using nicotinamide adenine dinucleotide (NADH) and H & E staining. Materials And Methods: Ten porcine kidneys underwent laparoscopic RFA of the upper and lower poles using a 2 (8) or 3 cm (2) protocol with 2 cycles of 90 W, target temperature 105C and treatment time 5.5 minutes per cycle. Following tract ablation the kidneys were immediately harvested, gross lesion size was measured and tissue was processed for standard H & E and NADH staining. Results: H & E staining of ablated tissue revealed a number of alterations in renal tubular histology. However, all of these findings were focal with areas of parenchyma that appeared well preserved. Corresponding areas on NADH processed sections showed the complete absence of staining, indicating the lack of cellular viability. There were no skip areas noted on NADH processed sections and treated portions demonstrated a well demarcated border of ablation. Conclusions: While RFA produces discernible histological changes acutely on H & E, these alterations are variable and patchy, and they alternate with areas of well preserved tissue. Therefore, NADH staining should always be used to assess and verify cellular death in RFA lesions. In this study no skip areas of viable cells were noted within areas of ablated tissue on NADH staining. abstract_id: PUBMED:30358349 Progress of Frequency Ablation Technique on Atrial Fibrillation Treatment Atrial Fibrillation(AF) and its complications are serious threat to human health and the radio frequency ablation (RFA) becomes one of the main therapies of AF. Conventionally, the RFA is performed by unipolar ablation mode. Because the unipolar ablation mode is point-to-point and incomplete linear lesion formation, the success rates of treatment on AF decline and the procedures are time consuming. In order to solve these shortcomings, the bipolar ablation mode and the multichannel frequency ablation method that facilitate the easy creation of linear lesion are presented, especially kinds of multichannel frequency ablation technique and applications are introduced in this paper. abstract_id: PUBMED:37587256 Power-Controlled, Irrigated Radio-Frequency Ablation of Gastric Tissue: A Biophysical Analysis of Lesion Formation. Background: Radio-frequency ablation of gastric tissue is in its infancy compared to its extensive history and use in the cardiac field. Aims: We employed power-controlled, irrigated radio-frequency ablation to create lesions on the serosal surface of the stomach to examine the impact of ablation power, irrigation, temperature, and impedance on lesion formation and tissue damage. 
Methods: A total of 160 lesions were created in vivo in female weaner pigs (n = 5) using a combination of four power levels (10, 15, 20, 30 W) at two irrigation rates (2, 5 mL min-1) and with one temperature-controlled (65 °C) reference setting previously validated for electrophysiological intervention in the stomach. Results: Power and irrigation rate combinations above 15 W resulted in lesions with significantly higher surface area and depth than the temperature-controlled setting. Irrigation resulted in significantly lower temperature (p < 0.001) and impedance (p < 0.001) compared to the temperature-controlled setting. No instances of perforation or tissue pop were recorded for any ablation sequence. Conclusion: Power-controlled, irrigated radio-frequency ablation of gastric tissue is effective in creating larger and deeper lesions at reduced temperatures than previously investigated temperature-controlled radio-frequency ablation, highlighting a substantial improvement. These data define the biophysical impact of ablation parameters in gastric tissue, and they will guide future translation toward clinical application and in silico gastric ablation modeling. Combination of ablation settings (10-30 W power, 2-5 mL min-1 irrigation) were used to create serosal spot lesions. Histological analysis of lesions quantified localized tissue damage. abstract_id: PUBMED:25135987 Delayed bronchobiliary fistula and cholangiolithiasis following percutaneous radio frequency ablation for hepatocellular carcinoma. Although percutaneous radio frequency ablation for hepatocellular carcinoma is a minimally invasive therapy, there are some complications reported; major complications include hemorrhage (0.477%), hepatic injuries (1.690%), and extrahepatic organ injuries (0.691%). We, for the first time, described a rare complication of delayed bronchobiliary fistula and cholangiolithiasis in common bile duct following radio frequency ablation and the salvage treatment in a patient with chronic hepatitis B virus infection. Surgeons should be aware of severe and rare complications before deciding the ablation area and when performing radio frequency ablation, and should be aware of the relevant salvage treatment. abstract_id: PUBMED:32333809 Spectroscopic characterization of tissue and liquids during arthroscopic radio-frequency ablation. Purpose: Radio-frequency ablation devices generating a local plasma are widely used as a safe and precise tool for tissue removal in arthroscopic surgeries. During this process, specific light emissions are generated. The aim of this study was to investigate the diagnostic potential of optical emission spectrum analysis for liquid and tissue characterization. Methods: The emissions in different saline solutions and during porcine tendon, muscle, and bone tissue ablation were recorded and analyzed in the range of 200-1000 nm. Results: Specific atomic lines (Na, K, Ca, H, O, W) and molecular bands (OH, CN, C2) were identified, originating from compounds in the liquids and tissues in contact with the probe. A linear correlation between the concentration of both Na and K in solution with the intensities of their spectral lines was observed (Na: R2 = 0.986, P < 0.001; K: R2 = 0.963, P < 0.001). According to the Wilcoxon rank-sum test, the Ca- and K-peak intensities between all three tissue samples and the CN-peak intensities between muscle and bone and tendon and bone differed significantly (P < 0.05). 
Conclusions: These findings prove the general feasibility of spectroscopic analysis as a tool for characterization of liquids and tissues ablated during radio-frequency ablation. This method can potentially be further developed into an intraoperative, real-time diagnostic feature aiding the surgical team in further optimizing the procedure. abstract_id: PUBMED:10612898 Temperature-controlled and constant-power radio-frequency ablation: what affects lesion growth? Radio-frequency (RF) catheter ablation is the primary interventional therapy for the treatment of many cardiac tachyarrhythmias. Three-dimensional finite element analysis of constant-power (CPRFA) and temperature-controlled RF ablation (TCRFA) of the endocardium is performed. The objectives are to study: 1) the lesion growth with time and 2) the effect of ground electrode location on lesion dimensions and ablation efficiency. The results indicate that: a) for TCRFA: i) lesion growth was fastest during the first 20 s, subsequently the lesion growth slowed reaching a steady state after 100 s, ii) positioning the ground electrode directly opposite the catheter tip (optimal) produced a larger lesion, and iii) a constant tip temperature maintained a constant maximum tissue temperature; b) for CPRFA: i) the lesion growth was fastest during the first 20 s and then the lesion growth slowed; however, the lesion size did not reach steady state even after 600 s suggesting that longer durations of energy delivery may result in wider and deeper lesions, ii) the temperature-dependent electrical conductivity of the tissue is responsible for this continuous lesion growth, and iii) an optimal ground electrode location resulted in a slightly larger lesion and higher ablation efficiency. abstract_id: PUBMED:12814243 Assessment of myocardial lesion size during in vitro radio frequency catheter ablation. We report our experience with a system that utilizes changes in several biophysical characteristics of cardiac tissue to determine lesion formation and to estimate lesion size both on and off-line in vitro during radio frequency (RF) energy delivery. We analyzed the reactive and resistive components of tissue impedance and tracked the change of phase angle during RF ablation. We correlated the amount of tissue damage with these and other biophysical parameters and compared them with off-line analysis. We found that there are irreversible changes in the reactive and resistive components of impedance that occurred during tissue ablation. The irreversible changes of these components are greater in magnitude, and correlate better with the size of lesions than that of impedance alone that is currently used. Numerically, the best single on-line and off-line correlation for combined perpendicular and parallel electrode orientation was with phase angle. On-line and off-line capacitance and susceptance correlations were essentially similar suggesting that they may be useful as lesion size predictors, given these parameter's persistent change without temperature sensitivity. This study indicates that it is technically feasible to assess lesion formation using biophysical parameters. abstract_id: PUBMED:31741592 Computed tomography guided radio-frequency ablation of osteoid osteomas in atypical locations. Purpose: Percutaneous radio-frequency ablation is a minimally invasive treatment option for osteoid osteomas. The ablation process is straightforward in the more common locations like the femur/tibia. 
Surgery has historically been the gold standard, but it is currently reserved for lesions that may not be effectively and safely ablated, i.e. those close to skin or nerve. Radio-frequency ablation can still be used in such cases along with additional techniques/strategies to protect the sensitive structures and hence improve the outcomes. The authors describe their experience with four challenging osteoid osteoma ablation cases. Methods: We retrospectively reviewed radio-frequency ablations of four osteoid osteomas in rather atypical locations, the protective techniques/strategies employed, and the adequacy and safety of the radio-frequency ablation with the use of these techniques. Results: All patients had complete resolution of pain with no recurrence in the follow-up period. No complications were reported. Conclusion: RFA has been proven to be an effective and safe option for treatment of osteoid osteomas (OOs) in the common locations. It is generally recommended to have a 1 cm safety margin between the RF probe and any critical structures in the vicinity. However, with OOs in atypical locations this may not always be possible, and hence additional techniques may be needed to ensure protection of the surrounding sensitive structures and also allow for effective ablation. abstract_id: PUBMED:36620693 Follow-up study of depressive state on patients with atrial fibrillation 1 year after radio-frequency ablation. Objective: To analyze the effect of depression on the recurrence of atrial fibrillation (AF) 1 year after radio-frequency ablation. Methods: A total of 91 patients with AF admitted to our hospital from January 2020 to July 2021 were studied. All patients were followed up 1 year after radio-frequency ablation. The 91 subjects were divided into a recurrence group (n = 30) and a no-recurrence group (n = 61) according to recurrence status 1 year after radio-frequency ablation. Age, disease course, body mass index (BMI), gender, echocardiography (left atrial diameter), blood inflammatory indicators (neutrophil count, lymphocyte count, and monocyte count), and Self-rating Depression Scale (SDS) scores were compared between the two groups. Logistic multivariate regression analysis was used to analyze the effect of SDS score and other indexes on the recurrence of AF 1 year after radio-frequency ablation. Results: The age of patients in the recurrence group was higher than that in the no-recurrence group (P < 0.05), and the course of disease was longer than that of the no-recurrence group (P < 0.05). The BMI was higher than that of the no-recurrence group (P < 0.05), and the left atrial diameter was greater than that of the no-recurrence group (P < 0.05). Neutrophil count and monocyte count were significantly higher than those in the no-recurrence group (P < 0.05), and the lymphocyte count was significantly lower than that in the no-recurrence group (P < 0.05). There were significant differences in SDS score composition between the two groups (P < 0.05): the proportion of patients with moderate and major depression in the recurrence group was significantly higher than that in the no-recurrence group, while the proportion of patients without depression in the recurrence group was significantly lower than that in the no-recurrence group. Multivariate analysis showed that age, disease course, BMI, left atrial diameter, neutrophil count, lymphocyte count, monocyte count, and SDS score were all independent factors affecting the recurrence of AF 1 year after radio-frequency ablation (P < 0.05).
Compared with patients without depression, patients with mild, moderate, and major depression had an increased risk of recurrence of 74.0%, 98.2%, and 151.2%, respectively, 1 year after radio-frequency ablation (OR = 1.740, 1.982, and 2.512). Conclusion: There is a high rate of depression in patients with AF. Depression is an important factor affecting early recurrence in patients with AF after radio-frequency ablation. abstract_id: PUBMED:31898034 Radio frequency-induced superoxide accumulation affected the growth and viability of Saccharomyces cerevisiae. With the development of electric technologies, the biological effects of electromagnetic fields (EMF) have been widely studied. However, the results remain controversial and the biophysical mechanisms are still unknown. To our knowledge, few studies have paid attention to radio frequencies (RF) of 2.6-5 MHz. In the present study, we investigated the effect of these radio frequencies on the growth and cell viability of Saccharomyces cerevisiae at very low magnetic flux densities below 0.1 mT. The effects appeared to be time-dependent. The growth of the yeast cells was markedly affected by the RF-EMF, with a 43.5% increase after 30 h of exposure; the growth-promoting effect decreased with exposure time and eventually turned into an inhibitory effect, retarding growth by 20.7% at 89 h. Cell viability was improved to 70.1% at 8 h and reduced by 33.5% at 28 h. Superoxide accumulated in exposed cells as exposure time increased, which may lead to the inhibition of cell viability and growth. However, the effective frequency, power density, and exposure dosage await further investigation. Nevertheless, the waveband studied in this research is effective in producing biological effects, and it may therefore provide a new radio-frequency option of value for development and use in therapeutic and medical applications. Answer: Optimal lesion assessment following acute radio frequency ablation (RFA) of porcine kidney should include both cellular viability and histopathology. However, the study by PUBMED:14501771 suggests that while hematoxylin and eosin (H&E) staining can reveal histological changes in ablated tissue, these alterations are variable and patchy, alternating with areas of well-preserved tissue. In contrast, nicotinamide adenine dinucleotide (NADH) staining showed a complete absence of staining in the treated areas, indicating the lack of cellular viability. The study concluded that NADH staining should be used to assess and verify cellular death in RFA lesions, as it provides a clear demarcation of the border of ablation without skip areas of viable cells within the ablated tissue. Therefore, for accurate assessment of lesions following RFA, it is important to use NADH staining to confirm cellular death, in addition to histopathological examination with H&E staining to observe tissue changes.
Instruction: Does school bullying affect adult health? Abstracts: abstract_id: PUBMED:28329924 Reduction and control of school bullying is urgently needed. School bullying and campus violence are a widespread social problem worldwide. School bullying is characterized by its repeatability and suddenness, which can leave victims suffering both psychological and physical harm and can even affect their personality development. Government should pay close attention to the reduction and control of school bullying and campus violence by establishing emergency response systems and preparedness plans for school bullying. The roles and legal responsibilities of schools and teachers in school services and management should be clarified and defined. It is necessary to help teachers detect and intervene in school bullying early; to provide moral, mental health, and legal education so that students act within the law, protect themselves lawfully, and learn to identify and avoid risks; to encourage non-governmental organizations to establish rescue facilities and anti-bullying networks; and to set up hotlines for school bullying incidents in order to reduce the incidence of school bullying. abstract_id: PUBMED:37865437 Bullying and School Violence. Rates of traditional bullying have remained stable (30%) but rates of cyberbullying are increasing rapidly (46% of youth). There are significant long-term physical and mental health consequences of bullying, especially for vulnerable youth. Multi-component school-based prevention programs that include caring adults, positive school climate, and supportive services for involved youth can effectively reduce bullying. While bullying has emerged as a legitimate concern, studies of surviving perpetrators to date suggest bullying is not the most significant risk factor for mass school shootings. Pediatricians play a critical role in identification, intervention, awareness, and advocacy. abstract_id: PUBMED:35627743 Negative Parenting Style and Perceived Non-Physical Bullying at School: The Mediating Role of Negative Affect Experiences and Coping Styles. At present, school bullying incidents frequently occur, attracting increased attention from researchers. In this study, we attempt to explore the impact of parenting styles on perceived school non-physical bullying. Four hundred ninety-two students in the fifth and sixth grades of eight primary schools in Zhejiang province were surveyed. To control any potential confounding factors, a randomized sampling survey method was used to distribute questionnaires. The results showed that negative affect experiences, negative coping styles, negative family parenting styles, and perceived school non-physical bullying were all positively correlated with each other (p < 0.05). Perceived verbal bullying differed significantly by gender, grade, and only/non-only children (p < 0.05). Perceived relationship bullying differed significantly between grades (p < 0.05). The gender difference in perceived cyberbullying also reached a significant level (p < 0.05). The rejection parenting style was shown to be an important factor that may be associated with students’ perceived school non-physical bullying; it was observed to be directly associated with students’ perceived school non-physical bullying and indirectly associated with it by influencing negative affect experiences and negative coping styles.
In conclusion, negative affect experiences and coping styles may have a chain-like mediating effect between the rejection parenting style and students’ perceived school verbal bullying. Moreover, negative affect experiences may have a partial mediating effect between the rejection parenting style and students’ perceived school cyberbullying, relationship bullying, and non-physical bullying total scores. This study provides first-hand empirical data to support schools, families, and education authorities in guiding and managing non-physical bullying incidents in schools. The findings also provide a theoretical basis for subsequent related research in the field of non-physical bullying. abstract_id: PUBMED:26060720 Understanding of School Related Factors Associated with Emotional Health and Bullying Behavior among Jordanian Adolescents. Background: Students' emotional health and bullying behavior are receiving greater attention worldwide due to their long-term effects on students' health. The purpose of this study was to examine the relationships between perceived school climate, peer support, teacher support, school pressure, and emotional health and bullying among adolescent school students in Jordan. Methods: A cross-sectional descriptive design was used to recruit a sample of 1166 in-school adolescents in Amman between November 2013 and January 2014. A multi-stage cluster sampling technique was used to select respondents, and the Health Behavior in School Aged Children questionnaire was used to collect the data. Data were analyzed using Pearson correlation to detect relationships among study variables. Results: Significant correlations (P ≤ .05) were found between school climate, including teacher and peer support, and students' emotional health and bullying behavior. School pressure was not correlated significantly with emotional health and bullying. Conclusion: Study findings emphasize the importance of school-related factors in influencing students' emotional health and bullying behavior. This indicates that the issue of bullying and the emotional health of students in Jordanian schools requires further attention, both for future research and preventive intervention. abstract_id: PUBMED:36498416 The Associations between Sibling Victimization, Sibling Bullying, Parental Acceptance-Rejection, and School Bullying. Bullying has been identified as the most common form of aggression experienced by school-age youth. However, the family's influence on school bullying remains unclear. Therefore, the current study aimed to explore the associations between sibling bullying and school bullying, sibling victimization and school victimization, and parental acceptance-rejection and school bullying victimization. The study was cross-sectional and conducted on a sample of students aged between 11 and 20 years recruited from middle schools in Algeria. Data were collected using a survey adapted from the Sibling Bullying scale, the Student Survey of Bullying Behavior-Revised 2, and a survey of parental acceptance-rejection. The model's results assessing the association between sibling bullying and school bullying demonstrated that the effect of sibling physical and verbal victimization on school victimization was statistically significant. Despite the non-significant effect of sibling emotional victimization on school victimization, the effect of sibling physical and verbal bullying on school bullying was statistically significant.
However, the effect of sibling emotional bullying on school bullying was not statistically significant. The direct effect of parental acceptance on school victimization was not statistically significant, whereas the effect of parental rejection on school victimization was statistically significant. The direct effect of parental acceptance on school bullying was not statistically significant, while the effect of parental rejection on school bullying was statistically significant. Based on the results, this study provides insights into the understanding of how the family and siblings contribute to school bullying. In particular, sibling victimization, sibling bullying, and parental acceptance-rejection are predictive factors of school bullying among adolescents. Future research should take into account factors based on family to explore the risks of school bullying. abstract_id: PUBMED:28954145 The practice of bullying among Brazilian schoolchildren and associated factors, National School Health Survey 2015. This study explored associations between bullying and sociodemographic, mental health and risk behavior variables in school age children. This cross-sectional survey analyzed data from the National School Health Survey (PeNSE 2015). A multiple logistic regression analysis checked for factors associated with bullying. Nineteen point eight percent (95%CI 10.5 - 20.0) of the students claimed they practiced bullying. The practice of bullying was more common among students enrolled in private schools, those living with their parents, and those whose mothers have more years of schooling and are gainfully employed (28.1% CI 27.3-28.8). In terms of mental health characteristics, bullying was more common among those feeling alone, suffering from insomnia and with no friends. Looking at family characteristics, those reporting they are physically punished by family members (33.09% CI 33.1-34.6) and miss school without telling their family (28.4% 95% CI 27.9-29.0) are more likely to practice bullying. Bullying was more frequent among those reporting tobacco, alcohol and drug use, and among students claiming to have had sexual relations. The data shows that bullying is significant and interferes in school children's health and the teaching-learning process. This must be addressed looking at youth as protagonists and in an inter-sectoral context. abstract_id: PUBMED:34906157 Bullying at school and mental health problems among adolescents: a repeated cross-sectional study. Objective: To examine recent trends in bullying and mental health problems among adolescents and the association between them. Method: A questionnaire measuring mental health problems, bullying at school, socio-economic status, and the school environment was distributed to all secondary school students aged 15 (school-year 9) and 18 (school-year 11) in Stockholm during 2014, 2018, and 2020 (n = 32,722). Associations between bullying and mental health problems were assessed using logistic regression analyses adjusting for relevant demographic, socio-economic, and school-related factors. Results: The prevalence of bullying remained stable and was highest among girls in year 9; range = 4.9% to 16.9%. Mental health problems increased; range = + 1.2% (year 9 boys) to + 4.6% (year 11 girls) and were consistently higher among girls (17.2% in year 11, 2020). In adjusted models, having been bullied was detrimentally associated with mental health (OR = 2.57 [2.24-2.96]). 
Reports of mental health problems were four times higher among boys who had been bullied compared to those not bullied. The corresponding figure for girls was 2.4 times higher. Conclusions: Exposure to bullying at school was associated with higher odds of mental health problems. Boys appear to be more vulnerable to the deleterious effects of bullying than girls. abstract_id: PUBMED:33433013 The Importance of Pedagogical and Social School Climate to Bullying: A Cross-Sectional Multilevel Study of 94 Swedish Schools. Background: Bullying is a public health issue with long-term effects for victims. This study investigated if there was an association between pedagogical and social school climate and student-reported bullying victimization, which dimensions of pedagogical and social school climate were associated with bullying, and if these associations were modified by individual-level social factors. Methods: The study had a cross-sectional multilevel design with individual-level data on bullying from 3311 students nested in 94 schools over 3 consecutive school years. School climate was measured with student and teacher questionnaires, aggregated at the school level. The association between school climate and bullying victimization was estimated with multilevel mixed-model logistic regression. Results: In schools with the most favorable school climate, fewer students reported being bullied. This was especially evident when school climate was measured with the student instrument. Students in schools with favorable climate had an adjusted odds ratio of bullying of 0.74 (95% CI: 0.55-1.00) compared to students in schools with the worst climate. Results from the teacher instrument were in the same direction, but less consistent. Conclusions: Improvement in school climate has the potential to affect students both academically, and socially, as well as decrease the prevalence of bullying. abstract_id: PUBMED:37998700 An Analysis of the Association between School Bullying Prevention and Control Measures and Secondary School Students' Bullying Behavior in Jiangsu Province. (1) Background: China released regulations on school bullying prevention and control in 2017; however, current research on school bullying in China focuses on exploring influencing factors and lacks empirical research on the effectiveness of anti-bullying policies in schools. The objective of this study was to use an empirical model to explore the association between bullying prevention and control measures and secondary school students' bullying victimization and multiple bullying victimization in Chinese schools. (2) Methods: Data were derived from the 2019 Surveillance of Common Diseases and Health Influencing Factors among Students in Jiangsu Province. The school's bullying prevention and control measures, which was the independent variable, were obtained in the form of a self-report questionnaire and consisted of five measures: the establishment of bullying governance committees, thematic education for students, thematic training for parents, special investigations on bullying, and a bullying disposal process. Bullying victimization and multiple bullying victimization, which was the dependent variable, were obtained through a modified version of the Olweus bullying victimization questionnaire. 
In order to better explain the differences in the results, this study constructed multilevel logistic regression models to test the association between school bullying prevention and control measures and the rates of bullying victimization and multiple bullying victimization among secondary school students at both the school level and the student level. Meanwhile, this study constructed five models based on the null model by sequentially incorporating demographic variables, physical and mental health variables, lifestyle variables, and bullying prevention and control measures in schools to verify this association. (3) Results: A total of 25,739 students were included in the analysis. The range of bullying victimization rates for students in the different secondary schools in this study was between 6.8% and 37.3%, and the range of multiple bullying victimization rates was between 0.9% and 14.8%. The establishment of bullying disposal procedures was strongly associated with a reduction in bullying victimization (OR = 0.83, 95%CI: 0.71-0.99, p < 0.05). Establishing bullying disposal procedures was not significantly associated with multiple bullying victimization rates (OR = 0.89, 95%CI: 0.73-1.09, p > 0.05). The establishment of a bullying governance committee, thematic education for students, thematic training for parents, and special surveys on bullying were not significantly associated with bullying victimization rates or multiple bullying victimization rates (all p > 0.05). (4) Conclusions: Among the current bullying prevention and control measures for secondary school students in China, the establishment of a bullying disposal process was conducive to reducing the rate of bullying victimization, but it was ineffective in reducing the rate of multiple bullying victimization, and the other preventive and control measures did not achieve the purpose of anti-bullying in schools. abstract_id: PUBMED:35742383 Bullying Victimization and Mental Health among Migrant Children in Urban China: A Moderated Mediation Model of School Belonging and Resilience. School bullying victimization among children is a significant public health issue that may negatively influence their mental health. However, few studies have been conducted on the bullying of migrant children in urban China. A positive psychological perspective has rarely been adopted in examining the mechanisms through which bullying victimization influences mental health, and the protective factors remain understudied. This research investigates the factors that may contribute to reducing the negative effects of bullying victimization on mental health, focusing on the protective roles of school belonging and resilience in the association between bullying victimization and mental health. Data were collected from 1087 school-aged migrant children in Shanghai and Nanjing, China. The PROCESS macro was used to conduct moderated mediation analyses to test the hypothesized models. The results of moderated mediation modeling revealed that bullying victimization (β = −0.386, p < 0.001) was negatively linked with mental health through decreased school belonging (β = 0.398, p < 0.001). Moreover, resilience buffered the indirect negative effects of bullying victimization on migrant children’s mental health via school belonging (β = −0.460, p < 0.01). Specifically, lower resilience was clearly associated with stronger indirect effects. 
Our findings suggest that school belonging and resilience must be incorporated into mental health prevention and intervention programs targeting migrant children with bullying victimization experiences. Answer: The provided abstracts do not directly address the long-term effects of school bullying on adult health. They focus primarily on the prevalence, factors, and immediate consequences of school bullying among children and adolescents, as well as the effectiveness of interventions and the role of the school environment in mitigating bullying behavior (PUBMED:28329924, PUBMED:37865437, PUBMED:35627743, PUBMED:26060720, PUBMED:36498416, PUBMED:28954145, PUBMED:34906157, PUBMED:33433013, PUBMED:37998700, PUBMED:35742383). However, one of the abstracts does mention significant long-term physical and mental health consequences of bullying, especially for vulnerable youth (PUBMED:37865437). This suggests that the impact of bullying can extend beyond the immediate school environment and potentially affect individuals into adulthood. The abstract does not provide specific details on how these long-term consequences manifest in adult health. To answer the question of whether school bullying affects adult health, one would need to look at longitudinal studies that follow individuals from childhood into adulthood to assess the long-term health outcomes associated with experiences of bullying during school years. Such studies would provide evidence on the potential psychological and physical health issues that may arise later in life as a result of being bullied in school.
Instruction: Are antibiotics related to oral combination contraceptive failures in the Netherlands? Abstracts: abstract_id: PUBMED:22553004 Are antibiotics related to oral combination contraceptive failures in the Netherlands? A case-crossover study. Purpose: To investigate whether there is an association between use of antibiotics and breakthrough pregnancy. Methods: The study was performed in a population-based prescription database (IADB.nl). We computed case-crossover odds ratios of 397 cases of defined breakthrough pregnancy comparing the use of antibiotics in the exposure window with the use of antibiotics in two control windows. We defined a control group consisting of 29 022 other pregnancies. We computed case-control odds ratios of the use of antibiotics in cases as compared with controls in the different time windows. Results: The case-crossover odds ratios comparing the use of antibiotics in the exposure window with both control windows were 2.21 (95%CI = 1.03-4.75) and 1.65 (95%CI = 0.78-3.48), respectively. The traditional case-control odds ratios after adjustment for age were 1.71 (95%CI = 1.09-2.66) in the exposure window, 0.81 (95%CI = 0.44-1.47) 2 months before the exposure window, and 1.04 (95%CI = 0.61-1.78) 12 months before the exposure window. Conclusions: We did find a relationship between the use of antibiotics and breakthrough pregnancy in a population-based prescription database. The results did not hold for broad-spectrum antibiotics or in a sensitivity analysis. The results are partly not the same as those found in a pharmacoepidemiological study with a similar design using two US pregnancy databases. Both studies can suffer from bias and confounding, but these will be different because of the use of different databases. abstract_id: PUBMED:32918267 Oral Antibiotics for Acne. Oral antibiotics are integral for treating inflammatory acne based on what is understood about the pathogenesis as well as the role of Cutibacterium acnes. However, rising concerns of antibiotic resistance and the perception of "antibiotic phobia" create potential limitations on their integration in an acne treatment regimen. When prescribing oral antibiotics, dermatologists need to consider dosage, duration, and frequency, and to avoid their use as monotherapy. These considerations are important, along with the use of newer strategies and compounds, to reduce adverse-event profiles, antibiotic resistance, and to optimize outcomes. Aside from concomitant medications, allergies, and disease severity, costs and patient demographics can influence variability in prescribing plans. There are multiple published guidelines and consensus statements for the USA and Europe to promote safe antibiotic use by dermatologists. However, there is a lack of head-to-head studies and evidence for comparative superiority of any individual antibiotic, as well as any evidence to support the use of agents other than tetracyclines. Although oral antibiotics are one of the main options for moderate to severe acne, non-antibiotic therapy such as isotretinoin and hormonal therapies should be considered. As newer therapies and more outcomes data emerge, so will improved management of antibiotic therapy to foster patient safety. abstract_id: PUBMED:24880665 Meta-analysis comparing efficacy of antibiotics versus oral contraceptives in acne vulgaris. Background: Both antibiotics and oral contraceptive pills (OCPs) have been found to be effective in managing acne vulgaris. 
Despite widespread use, few direct comparisons of efficacy between the 2 modalities have been published. Objective: We compared the efficacy of antibiotics and OCPs in managing acne. Methods: A meta-analysis was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses and Cochrane collaboration guidelines. Results: A review of 226 publications yielded 32 randomized controlled trials that met our inclusion criteria. At 3 and 6 months, compared with placebo, both antibiotics and OCPs effected greater percent reduction in inflammatory, noninflammatory, and total lesions; the 2 modalities at each time point demonstrated statistical parity, except that antibiotics were superior to OCPs in percent reduction of total lesions at 3 months (weighted mean inflammatory lesion reduction: 3-month course of oral antibiotic treatment = 53.2%, 3-month course of OCPs = 35.6%, 3-month course of placebo treatment = 26.4%, 6-month course of oral antibiotic treatment = 57.9%, 6-month course of OCPs = 61.9%, 6-month course of placebo treatment = 34.2%; weighted mean noninflammatory lesion reduction: 3-month course of oral antibiotic treatment = 41.9%, 3-month course of OCPs = 32.6%, 3-month course of placebo treatment = 17.1%, 6-month course of oral antibiotic treatment = 56.4%, 6-month course of OCPs = 49.1%, 6-month course of placebo treatment = 23.4%; weighted mean total lesion reduction: 3-month course of oral antibiotic treatment = 48.0%, 3-month course of OCPs = 37.3%, 3-month course of placebo treatment = 24.5%, 6-month course of oral antibiotic treatment = 52.8%, 6-month course of OCPs = 55.0%, 6-month course of placebo treatment = 28.6%). Limitations: Investigative treatment heterogeneity and publication bias are limitations. Conclusions: Although antibiotics may be superior at 3 months, OCPs are equivalent to antibiotics at 6 months in reducing acne lesions and, thus, may be a better first-line alternative to systemic antibiotics for long-term acne management in women. abstract_id: PUBMED:9146531 Oral contraceptive failure rates and oral antibiotics. Background: Despite anecdotal evidence of a possibility of decreased effectiveness of oral contraceptives (OCs) with some antibiotics, it is not known whether antibiotic use in dermatologic practices engenders any increased risk of accidental pregnancy. Objective: Our purpose was to examine the effect of commonly prescribed oral antibiotics (tetracyclines, penicillins, cephalosporins) on the failure rate of OCs. Methods: The records from three dermatology practices were reviewed, and 356 patients with a history of combined oral antibiotic/OC use were surveyed retrospectively. Of these patients, 263 also provided "control" data (during the times they used OCs alone). An additional 162 patients provided control data only. Results: Five pregnancies occurred in 311 woman-years of combined antibiotic/OC exposure (1.6% per year failure rate) compared with 12 pregnancies in 1245 woman-years of exposure (0.96% per year) for the 425 control patients. This difference was not significant (p = 0.4), and the 95% confidence interval on the difference (-0.81, 2.1) ruled out a substantial difference (> 2.1% per year). There was also no significant difference between OC failure rates for the women who provided data under both conditions, nor between the two control groups. All our data groups had failure rates below the 3% or higher per year, which are typically found in the United States. 
Conclusion: The difference in failure rates of OCs when taken concurrently with antibiotics commonly used in dermatology versus OC use alone suggests that these antibiotics do not increase the risk of pregnancy. Physicians and patients need to recognize that the expected OC failure rate, regardless of antibiotic use, is at least 1% per year and it is not yet possible to predict in whom OCs may fail. abstract_id: PUBMED:7863842 The dynamics of oral contraceptive use in The Netherlands 1990-1993. Data from an ongoing series of surveys on contraceptive use in the Netherlands were analyzed with respect to the percentages of oral contraceptive (OC) users who annually started use, discontinued use or switched to another OC type. The surveys had been conducted between 1990 and 1993 among samples of women aged 15-49 who belonged to a survey panel. Response rates of the surveys were 89-90% and the sample sizes ranged from 4560 to 4621 women. The assessed OC use rates reflected those of the Dutch population reasonably well. Of all respondents who had used OCs during the 12 months prior to the surveys, 12-15% discontinued use within this period, mainly in order to get pregnant, 12-16% were starters and 9-14% switchers. Of all starters 37% switched to another OC type within the first 12 months after starting. Switching was mainly related to the experience of perceived side-effects and wishes for better cycle control. The results highlighted the relevance of closely monitoring the individual woman's satisfaction with her OC. Since OC use appeared in many cases to be characterized by an active seeking for the most acceptable OC type, a wide range of OC types available and the development and introduction of new types is highly relevant for tailoring contraceptive use to individual needs. abstract_id: PUBMED:7803151 Oral contraceptives and antibiotics: important considerations for dental practice. This paper considers the possible interactions between oral contraceptive pills and antibiotics, in the context of modern dental practice. A review of the literature on such interactions leads to the conclusion that current national guidelines on the use of alternative contraceptive measures during a course of broad spectrum antibiotics in women also using the oral contraceptive pill should be emphasised and encouraged as part of good clinical practice. A patient information leaflet may be considered as a useful way of presenting such advice to female patients. abstract_id: PUBMED:3383573 The relative reliability of oral contraceptives; findings of an epidemiological study. A study performed in The Netherlands shows that an undesired pregnancy, claimed to be due to "method failure" with oral contraceptive use and resulting in a request for induced abortion, occurs approximately twice per 10,000 woman-years. For these claimed method failures the type of pill used has been recorded during the years 1982-1984. From those data it is clear that the usage of sequential and triphasic OCs was found to be significantly more often amongst women who requested abortion than would have been expected on the basis of their usage in the general population. This over-representation of sequential and triphasic OCs could not be explained by gastro-intestinal disorders or by drug-interaction. abstract_id: PUBMED:3470667 Pill method failures. This is a study over a four year period from November 1981 to December 1985 documenting 163 cases of pill method failures in reliable pill takers. 
In over one third of these cases (36%) there were no predisposing factors. Significant factors which were associated with failure were diarrhoea and/or vomiting in 35%, and breakthrough bleeding on the combined pill in 21%. While there is controversy in the literature about the role of antibiotics in causing pill failure, this study found 23% of failures associated with antibiotic use. Two failures occurred on anticonvulsant medication. Recommendations are made for improved instructions to patients. abstract_id: PUBMED:8752245 Ovarian cancer incidence (1989-1991) and mortality (1954-1993) in The Netherlands. Objective: To examine ovarian cancer incidence and mortality in the Netherlands, and to relate trends in mortality to changes in parity and use of oral contraceptives. Methods: Age-standardized and age-specific incidence and mortality rates are presented using incidence data from the Netherlands Cancer Registry, 1989-1991, and mortality data from the Netherlands Central Bureau of Statistics, 1954-1993. Results: In the period 1989-1991, age-standardized incidence of ovarian cancer was 14.9 per 10(5) woman-years. The majority (89%) of these tumors had an epithelial origin. Two-thirds of all newly diagnosed ovarian cancers already showed extension to the pelvis or beyond at diagnosis. From the period 1954-1958 to 1969-1973, age-standardized mortality rates increased from 10.6 to 13.1 per 10(5) woman-years. Thereafter, a decline was noted to 11.4 per 10(5) woman-years in the period 1989-1993. Age-specific mortality rates showed a pattern of rising mortality in the elderly, whereas mortality in the younger age categories was declining. The number of live births has declined gradually, and oral contraceptive use has increased. Conclusion: Incidence of ovarian cancer is high in the Netherlands, but comparable to other countries in north-western Europe and North America. Mortality rates are rising in the elderly and declining in the young. Further research is needed concerning the effects of oral contraceptives, fertility drugs, and hormone replacement therapy on the incidence and mortality of ovarian cancer. abstract_id: PUBMED:22627223 Antibiotics and oral contraceptives: new considerations for dental practice. The aim of this paper is to highlight a change in guidance relating to possible interactions between antibiotics and oral contraceptives. Until recently, dentists have been advised to warn women taking the combined oral contraceptive pill of the routine need to use additional contraceptive measures while taking courses of broad spectrum antibiotics. Recent guidance relating to this issue has changed and dentists may not be aware of this. This paper reminds dentists of the previous guidelines and related evidence, reviews the pharmacokinetics of hormonal contraception and presents them with the latest evidence-based guidance. This should change their clinical practice. Answer: Yes, antibiotics are related to oral combination contraceptive failures in the Netherlands. A study performed in a population-based prescription database (IADB.nl) found a relationship between the use of antibiotics and breakthrough pregnancy. The case-crossover odds ratios comparing the use of antibiotics in the exposure window with both control windows were 2.21 and 1.65, respectively. The traditional case-control odds ratios after adjustment for age were 1.71 in the exposure window. However, the results did not hold for broad-spectrum antibiotics or in a sensitivity analysis. 
The study concluded that there was an association between the use of antibiotics and breakthrough pregnancy, although the results are partly not the same as those found in a similar study using two US pregnancy databases. Both studies can suffer from bias and confounding, but these will be different because of the use of different databases (PUBMED:22553004). Additionally, another study from the Netherlands shows that an undesired pregnancy, claimed to be due to "method failure" with oral contraceptive use and resulting in a request for induced abortion, occurs approximately twice per 10,000 woman-years. The over-representation of sequential and triphasic OCs in these cases could not be explained by gastro-intestinal disorders or by drug-interaction (PUBMED:3383573). Furthermore, a study over a four-year period documenting pill method failures in reliable pill takers found that 23% of failures were associated with antibiotic use (PUBMED:3470667). These findings suggest that there is a connection between antibiotic use and the failure of oral contraceptives, leading to breakthrough pregnancies in the Netherlands.
Instruction: Is Neurofibromatosis Type 1-Noonan Syndrome a Phenotypic Result of Combined Genetic and Epigenetic Factors? Abstracts: abstract_id: PUBMED:27107091 Is Neurofibromatosis Type 1-Noonan Syndrome a Phenotypic Result of Combined Genetic and Epigenetic Factors? Background/aim: Neurofibromatosis 1-Noonan syndrome (NFNS) presents combined characteristics of both autosomal dominant disorders: NF1 and Noonan syndrome (NS). The genes causing NF1 and NS are located on different chromosomes, making it uncertain whether NFNS is a separate entity as previously suggested, or rather a clinical variation. Patients And Methods: We present a four-membered Greek family. The father was diagnosed with familial NF1 and the mother with generalized epilepsy, being under hydantoin treatment since the age of 18 years. Their two male children exhibited NFNS characteristics. Results: The father and his sons shared R1947X mutation in the NF1 gene. The two children with NFNS phenotype presented with NF1 signs inherited from their father and fetal hydantoin syndrome-like phenotype due to exposure to that anticonvulsant during fetal development. Conclusion: The NFNS phenotype may be the result of both a genetic factor (mutation in the NF1 gene) and an epigenetic/environmental factor (e.g. hydantoin). abstract_id: PUBMED:28971455 Neurofibromatosis-Noonan Syndrome: A Possible Paradigm of the Combination of Genetic and Epigenetic Factors. Neurofibromatosis-Noonan syndrome (NFNS) is a clinical entity possessing traits of autosomal dominant disorders neurofibromatosis type 1 (NF1) and Noonan syndrome (NS). Germline mutations that disrupt the RAS/MAPK pathway are involved in the pathogenesis of both NS and NF1. In light of a studied Greek family, a new theory for etiological pathogenesis of NFNS is suggested. The NFNS phenotype may be the final result of a combination of a genetic factor (a mutation in the NF1 gene) and an environmental factor with the epigenetic effects of muscle hypotonia (such as hydantoin in the reported Greek family), causing hypoplasia of the face and micrognathia. abstract_id: PUBMED:7501563 Neurofibromatosis-Noonan syndrome. Type I neurofibromatosis (NF-1) and Noonan syndrome (NS) are two fairly common genetic disorders. Patients with features of both disorders have been described, but considerable variability of phenotypic expression occurs. As a result, the correct nosology of this syndrome is uncertain. We present a patient with full expression of both NF-1 and NS phenotypes, and discuss the debate regarding the genetics of the combined syndrome. abstract_id: PUBMED:21549079 Neurofibromatosis-Noonan syndrome: case report and clinicopathogenic review of the Neurofibromatosis-Noonan syndrome and RAS-MAPK pathway. Neurofibromatosis-Noonan syndrome is an entity that combines both features of Noonan syndrome and Neurofibromatosis type 1. This phenotypic overlap can be explained by the involvement of the RAS-MAPK pathway (mitogen-activated protein kinase) in both disorders. We report the case of a 17-year-old boy with Neurofibromatosis 1 with Noonan-like features, who complained of the progressive appearance of blue-gray lesions on his back. abstract_id: PUBMED:12661943 Neurofibromatosis--Noonan's syndrome with associated rhabdomyosarcoma of the urinary bladder in an infant: case report. Neurofibromatosis 1 is an autosomal dominant disorder. Noonan's syndrome is known to be associated with neurofibromatoses. Patients with neurofibromatosis are predisposed to developing malignant tumors. 
The relationship between the genetic changes in the neurofibromin gene and mechanisms associated with tumor development in neurofibromatosis has been investigated. A non-sense mutation C2446T --> R816X of the neurofibromin gene has been detected in some patients with the neurofibromatosis 1-Noonan's syndrome phenotype. We describe a case of an infant with the overlapping features of neurofibromatosis 1 and Noonan's syndrome who presented with rhabdomyosarcoma of the urinary bladder. The genetic analysis of our patient revealed neither mutation in the neurofibromatosis 1-guanosine triphosphatase-activating protein-related domain nor the R816X nonsense mutation. The phenotypic and genotypic features of neurofibromatosis, Noonan's syndrome, and cases with the overlapping features of both syndromes have been reviewed. The presentation of our case underlines the importance of careful examination for the clinical features of neurofibromatosis and phenotypic traits of associated diseases, especially in patients with malignant tumors. abstract_id: PUBMED:3103548 Noonan's syndrome and neurofibromatosis. A child with Noonan syndrome and multiple cafe au lait spots, compatible in size and number with von Recklinghausen's neurofibromatosis, is presented. These features may represent a distinct genetic entity rather than the coincidence of two diseases. abstract_id: PUBMED:21567923 Lethal presentation of neurofibromatosis and Noonan syndrome. Neurofibromatosis type 1 and Noonan syndrome are both common genetic disorders with autosomal dominant inheritance. Similarities between neurofibromatosis type 1 and Noonan syndrome have been noted for over 20 years and patients who share symptoms of both conditions are often given the diagnosis of neurofibromatosis-Noonan syndrome (NFNS). The molecular basis of these combined phenotypes was poorly understood and controversially discussed over several decades until the discovery that the syndromes are related through disturbances of the Ras pathway. We present an infant male with coarse facial features, severe supravalvar pulmonic stenosis, automated atrial tachycardia, hypertrophic cardiomyopathy, airway compression, severe neurological involvement, and multiple complications that lead to death during early infancy. The severity of clinical presentation and significant dysmorphic features suggested the possibility of a double genetic disorder in the Ras pathway instead of NFNS. Molecular analysis showed a missense mutation in exon 25 of the NF1 gene (4288A>G, p.N1430D) and a pathogenic mutation on exon 8 (922A>G, p.N308D) of the PTPN11 gene. Cardiovascular disease has been well described in patients with Noonan syndrome with PTPN11 mutations but the role of haploinsufficiency for neurofibromin in the heart development and function is not yet well understood. Our case suggests that a double genetic defect resulting in the hypersignaling of the Ras pathway may lead to complex cardiovascular abnormalities, cardiomyopathy, refractory arrhythmia, severe neurological phenotype, and early death. abstract_id: PUBMED:31088041 A Neurofibromatosis Noonan Syndrome Patient Presenting with Abnormal External Genitalia Neurofibromatosis Noonan syndrome (NFNS) is a rare RASopathy syndrome, resulting from NF1 gene mutations. NFNS is characterized by phenotypic features of both neurofibromatosis type 1 (NF1) and Noonan syndrome. Plexiform neurofibromas (PNFs) are an unusual finding in NFNS. 
A seven-year-old girl with typical clinical features of NF1 was referred to our clinic due to short stature and abnormal genital appearance. Due to dysmorphic features, a clinical diagnosis of NFNS was considered, and molecular analysis revealed a novel heterozygous c.3052_3056delTTAGT (p.L1018X) variant in the NF1 gene. Although evaluation for genital virilization, including karyotype and hormonal studies, was normal, imaging studies revealed a diffuse genital PNF. Although PNFs are seen rarely in NFNS, this should be considered in the differential diagnosis of genital virilization in these patients to prevent unnecessary testing. abstract_id: PUBMED:8985499 Exclusion of allelism of Noonan syndrome and neurofibromatosis-type 1 in a large family with Noonan syndrome-neurofibromatosis association. A large four-generation family with Noonan syndrome (NS) and neurofibromatosis-type 1 (NF1) was studied for clinical association between the two diseases and for linkage analysis with polymorphic DNA markers of the NF1 region in 17q11.2. Nonrandom segregation between NS and NF1 phenotypes was observed. Neurofibromatosis was tightly linked to NF1 markers, whereas Noonan syndrome was found not to be allelic to NF1. These results suggest that two mutations at two independent but closely linked loci are the cause of neurofibromatosis-Noonan syndrome (NF-NS) association in this family. abstract_id: PUBMED:3135755 Neurofibromatosis with fully expressed Noonan syndrome. We present an 18-year-old man with neurofibromatosis (NF) and classic manifestations of the Noonan syndrome (NS), including the cardiac findings. His father also has neurofibromatosis but only some of the characteristics of Noonan syndrome. This case lends further support to the notion that the neurofibromatosis-Noonan syndrome (NF-NS) is a discrete entity and demonstrates that the NF-NS can be inherited, with variable expression of the Noonan phenotype within a family. Answer: Yes, Neurofibromatosis Type 1-Noonan Syndrome (NFNS) is considered to be a phenotypic result of combined genetic and epigenetic factors. NFNS presents characteristics of both Neurofibromatosis type 1 (NF1) and Noonan syndrome (NS), which are autosomal dominant disorders caused by mutations that disrupt the RAS/MAPK pathway (PUBMED:21549079). The genetic component is evident in the presence of mutations in the NF1 gene, as seen in a Greek family where the father and his sons shared the R1947X mutation in the NF1 gene (PUBMED:27107091). Additionally, the NFNS phenotype may also be influenced by environmental factors with epigenetic effects, such as exposure to anticonvulsants like hydantoin during fetal development, which can result in a fetal hydantoin syndrome-like phenotype (PUBMED:27107091). This suggests that the NFNS phenotype can arise from a combination of a genetic mutation and an environmental factor that has epigenetic effects, leading to the clinical variation observed in NFNS (PUBMED:28971455). Moreover, the variability in phenotypic expression and the debate regarding the genetics of the combined syndrome further support the idea that NFNS may not be a separate entity but rather a clinical variation influenced by both genetic and epigenetic factors (PUBMED:7501563). The involvement of the RAS-MAPK pathway in both NF1 and NS and the presence of overlapping features in patients with NFNS underscore the complex interplay between genetic mutations and other factors that contribute to the phenotype (PUBMED:21549079).
In summary, NFNS is likely a result of both genetic mutations in the NF1 gene and epigenetic or environmental influences that together contribute to the clinical presentation of the syndrome.
Instruction: Is stereotactic radiosurgery a rational treatment option for brain metastases from small cell lung cancer? Abstracts: abstract_id: PUBMED:36964529 SAFESTEREO: phase II randomized trial to compare stereotactic radiosurgery with fractionated stereotactic radiosurgery for brain metastases. Background: Stereotactic radiosurgery (SRS) is a frequently chosen treatment for patients with brain metastases and the number of long-term survivors is increasing. Brain necrosis (e.g. radionecrosis) is the most important long-term side effect of the treatment. Retrospective studies show a lower risk of radionecrosis and local tumor recurrence after fractionated stereotactic radiosurgery (fSRS, e.g. five fractions) compared with stereotactic radiosurgery in one or three fractions. This is especially true for patients with large brain metastases. As such, the 2022 ASTRO guideline of radiotherapy for brain metastases recommends more research into fSRS to reduce the risk of radionecrosis. This multicenter prospective randomized study aims to determine whether the incidence of adverse local events (either local failure or radionecrosis) can be reduced using fSRS versus SRS in one or three fractions in patients with brain metastases. Methods: Patients are eligible with one or more brain metastases from a solid primary tumor, age of 18 years or older, and a Karnofsky Performance Status ≥ 70. Exclusion criteria include patients with small cell lung cancer, germinoma or lymphoma, leptomeningeal metastases, a contraindication for MRI, prior inclusion in this study, prior surgery for brain metastases, prior radiotherapy for the same brain metastases (in-field re-irradiation). Participants will be randomized between SRS with a dose of 15-24 Gy in 1 or 3 fractions (standard arm) or fSRS 35 Gy in five fractions (experimental arm). The primary endpoint is the incidence of a local adverse event (local tumor failure or radionecrosis identified on MRI scans) at two years after treatment. Secondary endpoints are salvage treatment and the use of corticosteroids, bevacizumab, or antiepileptic drugs, survival, distant brain recurrences, toxicity, and quality of life. Discussion: Currently, limiting the risk of adverse events such as radionecrosis is a major challenge in the treatment of brain metastases. fSRS potentially reduces this risk of radionecrosis and local tumor failure. Trial Registration: ClinicalTrials.gov, trial registration number: NCT05346367, trial registration date: 26 April 2022. abstract_id: PUBMED:27583180 Bevacizumab for the treatment of post-stereotactic radiosurgery adverse radiation effect. Background: Adverse radiation effect (ARE) is one of the complications of stereotactic radiosurgery. Its treatment with conventional medications, such as corticosteroids, vitamin E, and pentoxifylline, carries a high risk of failure, with up to 20% of lesions refractory to such medications. In addition, deep lesions and those occurring in patients with significant medical comorbidities may not be suitable for surgical resection. Bevacizumab is an antiangiogenic monoclonal antibody against vascular endothelial growth factor, a known mediator of cerebral edema. It can be used to successfully treat ARE. Case Description: An 85-year-old man with a history of small-cell lung cancer presented with metastatic disease to the brain. He underwent stereotactic radiosurgery to a brain metastasis involving the right external capsule.
Three months later, the lesion had increased in size, with significant surrounding edema. The patient developed an adverse reaction to steroid treatment and had a poor response to treatment with pentoxifylline and vitamin E. He was deemed a poor surgical candidate because of his medical comorbidities. He was eventually treated with 3 doses of bevacizumab, and the treatment resulted in significant clinical improvement. Magnetic resonance imaging showed some decrease in the size of the lesion and significant decrease in the surrounding edema. Conclusions: Bevacizumab can be successfully used to treat ARE induced by stereotactic radiosurgery in patients with cerebral metastases. It is of particular benefit in patients considered unsuitable for surgical decompression. It is also beneficial in patients with poor tolerance to corticosteroids and in patients who do not respond to other medications. abstract_id: PUBMED:30775073 Gamma Knife radiosurgery for brain metastases from small-cell lung cancer: Institutional experience over more than a decade and review of the literature. Introduction: In the present study, we reviewed the efficacy of stereotactic radiosurgery (SRS) alone or in combination with WBRT, for the treatment of patients with BM secondary to SCLC. We further identified patient and treatment specific factors that correlated with improved survival. Methods: Forty-one patients treated with GKRS for BM secondary to SCLC from 2004 to 2017 at the University of Virginia were identified with histopathologically proven SCLC and included in the study. Results: Following the first GKRS treatment, the median survival was 6 months (1-41 months). There was no statistical difference in overall survival and tumor control between the patients who had PCI, WBRT or upfront GKRS. The only factor associated with decreased OS after the diagnosis of BM from SCLC was active extracranial disease (P=0.045, HR=2.354). Conclusion: Stereotactic radiosurgery is a reasonable treatment option for patients with brain metastases of SCLC who had PCI or WBRT failure. abstract_id: PUBMED:31385968 Outcomes of stereotactic radiosurgery of brain metastases from neuroendocrine tumors. Background: Stereotactic radiosurgery (SRS) is an established treatment for brain metastases, yet little is known about SRS for neuroendocrine tumors given their unique natural history. Objective: To determine outcomes and toxicity from SRS in patients with brain metastases arising from neuroendocrine tumors. Methods: Thirty-three patients with brain metastases from neuroendocrine tumors who underwent SRS were retrospectively reviewed. Median age was 61 years and median Karnofsky performance status was 80. Primary sites were lung (87.9%), cervix (6.1%), esophagus (3%), and prostate (3%). Ten patients (30.3%) received upfront SRS, 7 of whom had neuroendocrine tumors other than small cell lung carcinoma. Kaplan-Meier survival and Cox regression analyses were performed to determine prognostic factors for survival. Results: With median follow-up after SRS of 5.3 months, local and distant brain recurrence developed in 5 patients (16.7%) and 20 patients (66.7%), respectively. Median overall survival (OS) after SRS was 6.9 months. 
Patients with progressive disease per Response Assessment in Neuro-Oncology-Brain Metastases (RANO-BM) criteria at 4 to 6 weeks after SRS had shorter median time to developing recurrence at a distant site in the brain and shorter OS than patients without progressive disease: 1.4 months and 3.3 months vs 11.4 months and 12 months, respectively (both P < .001). Toxicity was more likely in lesions of small cell histology than in lesions of other neuroendocrine tumor histology, 15.7% vs 3.3% (P = .021). No cases of grade 3 to 5 necrosis occurred. Conclusions: SRS is an effective treatment option for patients with brain metastases from neuroendocrine tumors with excellent local control despite slightly higher toxicity rates than expected. Progressive disease at 4 to 6 weeks after SRS portends a poor prognosis. abstract_id: PUBMED:36176847 Stereotactic Radiosurgery in a Small Cell Lung Cancer Patient With Numerous Brain Metastases. Small cell lung cancer (SCLC) is an aggressive form of lung cancer characterized by its propensity to metastasize to the brain. When SCLC patients develop brain metastasis, the standard-of-care treatment is whole-brain radiotherapy (WBRT), with the goal of treating both macroscopic and microscopic tumors. However, WBRT is found to be associated with significant morbidity including cognitive impairment. An emerging alternative to WBRT for SCLC is stereotactic radiosurgery (SRS), supported by a recent multi-institutional series and meta-analysis. However, there is limited evidence on the use of SRS when there are greater than 15 lesions from any histology, much less SCLC, where the risk of microscopic disease is felt to be even higher. Here, we present the case of an adult female with extensive-stage SCLC who developed 23 brain metastases. Due to patient preference, these were treated with SRS to a total dose of 20 Gy in one fraction. The patient did not experience any radiation-induced toxicity, including radionecrosis, and had overall favorable intracranial control using SRS alone at the time of her death, which was due to extracranial disease progression. This case adds to the literature suggesting that SRS could be a reasonable option for patients with SCLC. It illustrates that it might be reasonable to seek to expand on who might be considered a candidate for SRS treatment, with a high number of lesions not necessarily representing imminent widespread intracranial disease progression. abstract_id: PUBMED:25879433 Is stereotactic radiosurgery a rational treatment option for brain metastases from small cell lung cancer? A retrospective analysis of 70 consecutive patients. Background: Because of the high likelihood of multiple brain metastases (BM) from small cell lung cancer (SCLC), the role of focal treatment using stereotactic radiosurgery (SRS) has yet to be determined. We aimed to evaluate the efficacy and limitations of upfront and salvage SRS for patients with BM from SCLC. Methods: This was a retrospective and observational study analyzing 70 consecutive patients with BM from SCLC who received SRS. The median age was 68 years, and the median Karnofsky performance status (KPS) was 90. Forty-six (66%) and 24 (34%) patients underwent SRS as the upfront and salvage treatment after prophylactic or therapeutic whole brain radiotherapy (WBRT), respectively. Overall survival (OS), neurological death-free survival, remote and local tumor recurrence rates were analyzed. Results: None of our patients were lost to follow-up and the median follow-up was 7.8 months. 
One- and 2-year OS rates were 43% and 15%, respectively. The median OS time was 7.8 months. One- and 2-year neurological death-free survival rates were 94% and 84%, respectively. In total, 219/292 tumors (75%) in 60 patients (86%) with sufficient radiological follow-up data were evaluated. Six- and 12-month rates of remote BM relapse were 25% and 47%, respectively. Six- and 12-month rates of local control failure were 4% and 23%, respectively. Repeat SRS, salvage WBRT and microsurgery were subsequently required in 30, 8 and one patient, respectively. Symptomatic radiation injury, treated conservatively, developed in 3 patients. Conclusions: The present study suggested SRS to be a potentially effective and minimally invasive treatment option for BM from SCLC either alone or after failed WBRT. Although repeat salvage treatment was needed in nearly half of patients to achieve control of distant BM, such continuation of radiotherapeutic management might contribute to reducing the rate of neurological death. abstract_id: PUBMED:37223122 Single fraction stereotactic radiosurgery and fractionated stereotactic radiotherapy provide equal prognosis with overall survival in patients with brain metastases at diagnosis without surgery at primary site. Background And Purpose: Stereotactic radiosurgery (SRS) and fractionated stereotactic radiation therapy (SRT) are both treatments shown to be effective in treating brain metastases (BMs). However, it is unknown how these treatments compare in effectiveness and safety in cancer patients with BMs regardless of the primary cancer. The main objective of this study is to investigate the SRS and SRT treatments' associations with the overall survival (OS) of patients diagnosed with BMs using the National Cancer Database (NCDB). Materials And Methods: Patients in the NCDB with breast cancer, non-small cell lung cancer, small cell lung cancer, other lung cancers, melanoma, colorectal cancer, or kidney cancer who had BMs at the time of their primary cancer diagnosis and received either SRS or SRT as treatment for their BMs were included in the study. We analyzed OS with a Cox proportional hazard analysis that adjusted for variables associated with improved OS during univariable analysis. Results: Of the total 6,961 patients that fit the criteria for the study, 5,423 (77.9%) received SRS and 1,538 (22.1%) received SRT. Patients who received SRS treatment had a median survival time of 10.9 months (95% CI [10.5-11.3]), and those who received SRT treatment had a median survival time of 11.3 months (95% CI [10.4-12.3]). This difference was not found to be significant (Log-rank P = 0.31). Multivariable Cox proportional hazard analysis did not yield a significant difference between the treatments' associations with OS (SRS vs. SRT: Hazard Ratio 0.942, 95% CI [0.882-1.006]; P = .08). Conclusions: In this analysis, SRS and SRT did not show a significant difference in their associations with OS. Future studies investigating the neurotoxicity risks of SRS as compared to SRT are warranted. abstract_id: PUBMED:33440723 MRI Texture Analysis for the Prediction of Stereotactic Radiosurgery Outcomes in Brain Metastases from Lung Cancer. This study aims to evaluate the utility of texture analysis in predicting the outcome of stereotactic radiosurgery (SRS) for brain metastases from lung cancer. From 83 patients with lung cancer who underwent SRS for brain metastasis, a total of 118 metastatic lesions were included.
Two neuroradiologists independently performed magnetic resonance imaging (MRI)-based texture analysis using the Imaging Biomarker Explorer software. Inter-reader reliability as well as univariable and multivariable analyses were performed for texture features and clinical parameters to determine independent predictors for local progression-free survival (PFS) and overall survival (OS). Furthermore, Harrell's concordance index (C-index) was used to assess the performance of the independent texture features. The primary tumor histology of small cell lung cancer (SCLC) was the only clinical parameter significantly associated with local PFS in multivariable analysis. Run-length non-uniformity (RLN) and short-run emphasis were the independent texture features associated with local PFS. In the non-SCLC (NSCLC) subgroup analysis, RLN and local range mean were associated with local PFS. The C-index of independent texture features was 0.79 for the all-patients group and 0.73 for the NSCLC subgroup. In conclusion, texture analysis on pre-treatment MRI of lung cancer patients with brain metastases may have a role in predicting SRS response. abstract_id: PUBMED:37330054 Stereotactic radiosurgery versus whole-brain radiotherapy in patients with 4-10 brain metastases: A nonrandomized controlled trial. Background And Purpose: There is no randomized evidence comparing whole-brain radiotherapy (WBRT) and stereotactic radiosurgery (SRS) in the treatment of multiple brain metastases. This prospective nonrandomized controlled single arm trial attempts to reduce the gap until prospective randomized controlled trial results are available. Material And Methods: We included patients with 4-10 brain metastases and ECOG performance status ≤ 2 from all histologies except small-cell lung cancer, germ cell tumors, and lymphoma. The retrospective WBRT-cohort was selected 2:1 from consecutive patients treated within 2012-2017. Propensity-score matching was performed to adjust for confounding factors such as sex, age, primary tumor histology, dsGPA score, and systemic therapy. SRS was performed using a LINAC-based single-isocenter technique employing prescription doses from 15-20Gyx1 at the 80% isodose line. The historical control consisted of equivalent WBRT dose regimens of either 3Gyx10 or 2.5Gyx14. Results: Patients were recruited from 2017-2020, end of follow-up was July 1st, 2021. 40 patients were recruited to the SRS-cohort and 70 patients were eligible as controls in the WBRT-cohort. Median OS, and iPFS were 10.4 months (95%-CI 9.3-NA) and 7.1 months (95%-CI 3.9-14.2) for the SRS-cohort, and 6.5 months (95%-CI 4.9-10.4), and 5.9 months (95%-CI 4.1-8.8) for the WBRT-cohort, respectively. Differences were non-significant for OS (HR: 0.65; 95%-CI 0.40-1.05; P =.074) and iPFS (P =.28). No grade III toxicities were observed in the SRS-cohort. Conclusion: This trial did not meet its primary endpoint as the OS-improvement of SRS compared to WBRT was non-significant and thus superiority could not be proven. Prospective randomized trials in the era of immunotherapy and targeted therapies are warranted. abstract_id: PUBMED:29296400 Stereotactic radiosurgery to the resection cavity for brain metastases: prognostic factors and outcomes. Background: Adjuvant stereotactic radiosurgery (SRS) alone after surgical resection is increasingly being used to provide excellent local control while avoiding the side effects of whole brain radiation therapy (WBRT). We report our ten year experience using this treatment scheme. 
Purpose/objectives: To determine the rates and any correlates of local control, distant brain failure, and overall survival using SRS alone to the resection cavity. Materials/methods: We performed a retrospective analysis of 509 patients with brain metastasis who underwent Gamma Knife SRS at our institution between 2003 and 2013. Of this group, 85 patients were identified who had resection of the metastasis and subsequent SRS to the cavity. Mean dose to the resection cavity was 17.3 Gy (range 14-20) to an average volume of 12 cc (range 0.3-83 cc). Multiple patient-, tumor-, and treatment-specific factors were collected for analysis. Vital statistics were provided by our institution's tumor registry. The primary endpoint of our analyses was recurrence-free survival (RFS), defined as the time between the date of SRS and any local or distant brain tumor recurrence. Results: With a median follow-up of 16.4 months, the overall local and distant brain failure rates at 12 months were 13% (95% CI 5%-21%) and 51% (95% CI 37%-64%), respectively. RPA was class 1 (5%), 2 (75%), and 3 (20%). The median overall survival (OS) was 20 months. The median RFS was 24 months with radiosensitive tumors: non-small cell lung cancer (n=12), breast (n=16), gastrointestinal (n=7), small cell lung cancer (n=1), and other (n=9), compared to 5.6 months (p=0.006) in radioresistant tumors: melanoma (n=33), sarcoma (n=1), and renal cell carcinoma (n=6). Median OS for radioresistant and radiosensitive patients was 12 vs 25 months, respectively (p=0.11). Additionally, significantly improved survival was seen among those who had a gross total resection (GTR, n=46) compared with a subtotal resection (n=39), with median OS of 27 vs 16 months (p=0.020), respectively. Radiographic changes suggestive of radiation necrosis were present in 6 patients, 2 of which were determined histopathologically after surgical intervention. Due to the limited number of local recurrence events (n=10), there was insufficient power to analyze prognostic factors for local recurrence. Conclusions: Our results compare favorably with multiple other institutions' experiences showing excellent local control with SRS to the resection cavity following resection. Radioresistant histologies were associated with a worse RFS. Patients undergoing GTR had a significantly longer OS than those with STR. At our institution we continue to offer SRS to the resection cavity for patients with good performance status and limited brain metastases. Answer: Yes, stereotactic radiosurgery (SRS) is considered a rational treatment option for brain metastases from small cell lung cancer (SCLC). Several studies have evaluated the efficacy of SRS for treating brain metastases secondary to SCLC and have provided insights into its potential benefits and limitations. A study by PUBMED:30775073 reviewed the efficacy of SRS alone or in combination with whole-brain radiotherapy (WBRT) for the treatment of patients with brain metastases secondary to SCLC. The results indicated that SRS is a reasonable treatment option for patients with brain metastases of SCLC who had failed prior prophylactic cranial irradiation (PCI) or WBRT. Another study by PUBMED:25879433 retrospectively analyzed 70 consecutive patients with brain metastases from SCLC who received SRS. The study suggested that SRS could be an effective and minimally invasive treatment option for BM from SCLC either alone or after failed WBRT.
The study also noted that nearly half of the patients required repeat salvage treatment to achieve control of distant brain metastases, but this continuation of radiotherapeutic management might contribute to reducing the rate of neurological death. Additionally, PUBMED:36176847 presented a case of an extensive-stage SCLC patient with 23 brain metastases treated with SRS, which resulted in favorable intracranial control and no radiation-induced toxicity, including radionecrosis, at the time of the patient's death due to extracranial disease progression. This case adds to the literature suggesting that SRS could be a reasonable option for patients with SCLC and that a high number of lesions might not necessarily represent imminent widespread intracranial disease progression. However, it is important to note that SCLC patients with brain metastases are typically excluded from certain trials due to the high likelihood of multiple brain metastases and the aggressive nature of the disease. For instance, the SAFESTEREO trial (PUBMED:36964529) excludes patients with small cell lung cancer from its study comparing stereotactic radiosurgery with fractionated stereotactic radiosurgery for brain metastases. In conclusion, while SRS is a rational treatment option for brain metastases from SCLC, the decision to use SRS should be individualized based on factors such as the number of brain metastases, the patient's overall health, and the presence of extracranial disease.
Instruction: Alliance for a Healthier Generation's competitive beverage and food guidelines: do elementary school administrators know about them and do they report implementing them? Abstracts: abstract_id: PUBMED:22954166 Alliance for a Healthier Generation's competitive beverage and food guidelines: do elementary school administrators know about them and do they report implementing them? Background: The availability of competitive foods in schools is a modifiable factor in efforts to prevent childhood obesity. The Alliance for a Healthier Generation launched the Healthy Schools Program in 2006 to encourage schools to create healthier food environments, including the adoption of nutritional guidelines for competitive beverages and foods. This study examines nationwide awareness and implementation of the guidelines in US public elementary schools. Methods: Data were collected from a nationally representative sample of elementary schools using mail-back surveys in 2006-2007, 2007-2008, 2008-2009, and 2009-2010. Results: From 2006-2007 to 2009-2010, awareness of the Alliance's beverage guidelines increased from 35.0% to 51.8% among school administrators (p < .01); awareness of the food guidelines increased from 29.4% to 40.2% (p < .01). By 2009-2010, almost one third of the schools that sold competitive beverages and foods reported having implemented or being in the process of implementing the guidelines. Implementation was higher among schools from Southern states. Schools with a majority of Black or Latino students were less likely to implement the guidelines. Conclusions: Awareness and implementation of the Alliance's beverage and food guidelines has significantly increased since the 2006-2007 school year, indicating successful diffusion of the guidelines. However, many administrators at schools who sold competitive products were not aware of the guidelines, indicating a need for continued efforts. In addition, lower implementation among schools serving minority students suggests that the Alliance's targeted efforts to provide intensive technical assistance to such schools is warranted and necessary. abstract_id: PUBMED:26210085 Implementation of Competitive Food and Beverage Standards in a Sample of Massachusetts Schools: The NOURISH Study (Nutrition Opportunities to Understand Reforms Involving Student Health). Background: During 2012, Massachusetts adopted comprehensive school competitive food and beverage standards that closely align with Institute of Medicine recommendations and Smart Snacks in School national standards. Objective: We examined the extent to which a sample of Massachusetts middle schools and high schools sold foods and beverages that were compliant with the state competitive food and beverage standards after the first year of implementation, and complied with four additional aspects of the regulations. Design: Observational cohort study with data collected before implementation (Spring 2012) and 1 year after implementation (Spring 2013). Participants/setting: School districts (N=37) with at least one middle school and one high school participated. Main Outcome Measures: Percent of competitive foods and beverages that were compliant with Massachusetts standards and compliance with four additional aspects of the regulations. Data were collected via school site visits and a foodservice director questionnaire. Statistical Analyses Performed: Multilevel models were used to examine change in food and beverage compliance over time. 
Results: More products were available in high schools than middle schools at both time points. The number of competitive beverages and several categories of competitive food products sold in the sample of Massachusetts schools decreased following the implementation of the standards. Multilevel models demonstrated a 47-percentage-point increase in food and 46-percentage-point increase in beverage compliance in Massachusetts schools from 2012 to 2013. Overall, total compliance was higher for beverages than foods. Conclusions: This study of a group of Massachusetts schools demonstrated the feasibility of schools making substantial changes in response to requirements for healthier competitive foods, even in the first year of implementation. abstract_id: PUBMED:26201754 Socioeconomic Differences in the Association Between Competitive Food Laws and the School Food Environment. Background: Schools of low socioeconomic status (SES) tend to sell fewer healthy competitive foods/beverages. This study examined whether state competitive food laws may reduce such disparities. Methods: School administrators for fifth- and eighth grade reported foods and beverages sold in school. Index measures of the food/beverage environments were constructed from these data. Schools were classified into SES tertiles based on median household income of students' postal zip code. Regression models were used to estimate SES differences in (1) Healthy School Food Environment Index (HSFEI) score, Healthy School Beverage Environment Index (HSBEI) score, and specific food/beverage sales, and (2) associations between state competitive food/beverage laws and HSFEI score, HSBEI score, and specific food/beverage sales. Results: Strong competitive food laws were positively associated with HSFEI in eighth grade, regardless of SES. Strong competitive beverage laws were positively associated with HSBEI particularly in low-SES schools in eighth grade. These associations were attributable to schools selling fewer unhealthy items, not providing healthy alternatives. High-SES schools sold more healthy items than low-SES schools regardless of state laws. Conclusions: Strong competitive food laws may reduce access to unhealthy foods/beverages in middle schools, but additional initiatives are needed to provide students with healthy options, particularly in low-SES areas. abstract_id: PUBMED:21087254 Implementation of California state school competitive food and beverage standards. Background: Competitive foods and beverages are available on most US school campuses. States and school districts are adopting nutrition standards to regulate these products, but few studies have reported on the extent to which schools are able to adhere to competitive regulations. The purpose of this study was to describe the extent to which schools in disadvantaged communities were able to implement California competitive food and beverage standards. Methods: Data on the competitive foods (n = 1019) and beverages (n = 572) offered for sale on 19 school campuses were collected in 2005 and 2008. Descriptive statistics were generated on overall adherence rates to school nutrition standards and adherence rates by venue and school level. Logistic regression models tested predictors of adherence by continuous and categorical variables (eg, venue, item selling price). Results: Data show an increase from 2005 to 2008 in average adherence to the California standards. Several predictors had statistically significant associations with adherence or nonadherence. 
Adherence was higher for competitive foods sold in school stores than foods sold in vending machines. Higher selling price was associated with lower adherence. Competitive foods classified as entrees were more likely to adhere than snack items, and larger total size (in fluid ounces) beverages were associated with higher adherence. Conclusions: Schools have begun to implement competitive food and beverage policies. However, school environments, particularly in secondary schools, are not 100% compliant with school nutrition standards. These findings can inform policymakers and school officials about the feasibility of implementing competitive food standards in schools. abstract_id: PUBMED:24889082 Profits, commercial food supplier involvement, and school vending machine snack food availability: implications for implementing the new competitive foods rule. Background: The 2013-2014 school year involved preparation for implementing the new US Department of Agriculture (USDA) competitive foods nutrition standards. An awareness of associations between commercial supplier involvement, food vending practices, and food vending item availability may assist schools in preparing for the new standards. Methods: Analyses used 2007-2012 questionnaire data from administrators of 814 middle and 801 high schools in the nationally representative Youth, Education, and Society study to examine prevalence of profit from and commercial involvement with vending machine food sales, and associations between such measures and food availability. Results: Profits for the school district were associated with decreased low-nutrient, energy-dense (LNED) food availability and increased fruit/vegetable availability. Profits for the school and use of company suppliers were associated with increased LNED availability; company suppliers also were associated with decreased fruit/vegetable availability. Supplier "say" in vending food selection was associated with increased LNED availability and decreased fruit/vegetable availability. Conclusions: Results support (1) increased district involvement with school vending policies and practices, and (2) limited supplier "say" as to what items are made available in student-accessed vending machines. Schools and districts should pay close attention to which food items replace vending machine LNED foods following implementation of the new nutrition standards. abstract_id: PUBMED:32157974 Examining school-level implementation of British Columbia, Canada's school food and beverage sales policy: a realist evaluation. Objective: To identify key school-level contexts and mechanisms associated with implementing a provincial school food and beverage policy. Design: Realist evaluation. Data collection included semi-structured interviews (n 23), structured questionnaires (n 62), participant observation at public events (n 3) and scans of school, school district and health authority websites (n 67). The realist heuristic, context + mechanism → outcome configuration was used to conduct the analysis. Setting: Public schools in five British Columbia (BC), Canada school districts. Participants: Provincial and regional health and education staff, private food vendors and school-level stakeholders. Results: We identified four mechanisms influencing the implementation of BC's school food and beverage sales policy. First, the mandatory nature of the policy triggered some actors' implementation efforts, influenced by their normative acceptance of the educational governance system. 
Second, some expected implementers had an opposite response to the mandate where they ignored or 'skirted' the policy, influenced by values and beliefs about the role of government and school food. A third mechanism related to economics demonstrated ways vendors' responses to school demand for compliance with nutritional Guidelines were mediated by beliefs about food preferences of children, health and food. The last mechanism demonstrated how resource constraints and lack of capacity led otherwise motivated stakeholders to not implement the mandatory policy. Conclusion: Implementation of the food and beverage sales policy at the school level is shaped by interactions between administrators, staff, parent volunteers and vendors with contextual factors such as varied motivations, responsibilities and capacities. abstract_id: PUBMED:30213618 The Impact of 1 Year of Healthier School Food Policies on Students' Diets During and Outside of the School Day. Background: In 2012, Massachusetts implemented both the updated national school meal standards and comprehensive competitive food/beverage standards that closely align with current national requirements for school snacks. Objectives: This study examines the impact of these combined standards on school meal and snack food selections, as well as food choices outside of school. In addition, this study examines the impact of these standards on nutrients consumed. Design: The NOURISH (Nutrition Opportunities to Understand Reforms Involving Student Health) Study was an observational cohort study conducted among students from spring 2012 to spring 2013. Participants/setting: One hundred sixty students in 12 middle schools and high schools in Massachusetts completed two 24-hour recalls before (spring 2012) and after implementation (spring 2013) of the updated standards. Main Outcome Measures: Changes in school meals, competitive food, and after-school snack selection, as well as nutrients consumed outside of school were examined. Statistical Analyses Performed: Logistic regression and mixed-model analysis of variance were used to examine food selection and consumption. Results: After implementation, 13.6% more students chose a school meal (70.1% vs 56.5%; P=0.02). There were no differences in competitive food purchases but a significant decrease in the number of after-school unhealthy snacks consumed (0.69 [standard error=0.08] vs 1.02 [standard error=0.10]; P=0.009). During the entire day, students consumed, on average, 22 fewer grams of sugar daily after implementation compared with before implementation (86 g vs 108 g; P=0.002). Conclusions: With the reduction in the number of unhealthy school snacks, significantly more students selected school meals. Students did not compensate for lack of unhealthy snacks in school by increased consumption of unhealthy snacks outside of school. This provides important new evidence that both national school meal and snack policies may improve daily diet quality and should remain strong. abstract_id: PUBMED:29262875 Product reformulation and nutritional improvements after new competitive food standards in schools. Objective: In 2012, Massachusetts enacted school competitive food and beverage standards similar to national Smart Snacks. These standards aim to improve the nutritional quality of competitive snacks. It was previously demonstrated that a majority of foods and beverages were compliant with the standards, but it was unknown whether food manufacturers reformulated products in response to the standards. 
The present study assessed whether products were reformulated after standards were implemented; the availability of reformulated products outside schools; and whether compliance with the standards improved the nutrient composition of competitive snacks. Design: An observational cohort study documenting all competitive snacks sold before (2012) and after (2013 and 2014) the standards were implemented. Setting: The sample included thirty-six school districts with both a middle and high school. Results: After 2012, energy, saturated fat, Na and sugar decreased and fibre increased among all competitive foods. By 2013, 8 % of foods were reformulated, as were an additional 9 % by 2014. Nearly 15 % of reformulated foods were look-alike products that could not be purchased at supermarkets. Energy and Na in beverages decreased after 2012, in part facilitated by smaller package sizes. Conclusions: Massachusetts' law was effective in improving the nutritional content of snacks and product reformulation helped schools adhere to the law. This suggests fully implementing Smart Snacks standards may similarly improve the foods available in schools nationally. However, only some healthier reformulated foods were available outside schools. abstract_id: PUBMED:22491007 Limited evidence that competitive food and beverage practices affect adolescent consumption behaviors. Childhood obesity is emerging as a considerable public health problem with no clear antidote. The school food environment is a potential intervention point for policy makers, with competitive food and beverage regulation as a possible policy lever. This research examines the link between competitive food and beverage availability in school and adolescent consumption patterns using data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999. Results from value-added multivariate regression models reveal limited evidence that competitive food policy affects fruit and vegetable consumption. Findings suggest a stronger link between competitive beverage policy and consumption of sweetened beverages for population subgroups. abstract_id: PUBMED:28895921 Food and Beverage Marketing in Schools: A Review of the Evidence. Despite growing interest from government agencies, non-governmental organizations and school boards in restricting or regulating unhealthy food and beverage marketing to children, limited research has examined the emerging knowledge base regarding school-based food and beverage marketing in high-income countries. This review examined current approaches for measuring school food and beverage marketing practices, and evidence regarding the extent of exposure and hypothesized associations with children's diet-related outcomes. Five databases (MEDLINE, Web of Science, CINAHL, Embase, and PsycINFO) and six grey literature sources were searched for papers that explicitly examined school-based food and beverage marketing policies or practices. Twenty-seven papers, across four high-income countries including Canada (n = 2), Ireland (n = 1), Poland (n = 1) and United States (n = 23) were identified and reviewed. Results showed that three main methodological approaches have been used: direct observation, self-report surveys, and in-person/telephone interviews, but few studies reported on the validity or reliability of measures. Findings suggest that students in the U.S. 
are commonly exposed to a broad array of food and beverage marketing approaches including direct and indirect advertising, although the extent of exposure varies widely across studies. More pervasive marketing exposure was found among secondary or high schools compared with elementary/middle schools and among schools with lower compared with higher socio-economic status. Three of five studies examining diet-related outcomes found that exposure to school-based food and beverage marketing was associated with food purchasing or consumption, particularly for minimally nutritious items. There remains a need for a core set of standard and universal measures that are sufficiently rigorous and comprehensive to assess the totality of school food and beverage marketing practices that can be used to compare exposure between study contexts and over time. Future research should examine the validity of school food and beverage marketing assessments and the impacts of exposure (and emerging policies that reduce exposure) on children's purchasing and diet-related knowledge, attitudes and behaviors in school settings. Answer: Yes, elementary school administrators are aware of the Alliance for a Healthier Generation's competitive beverage and food guidelines, and some report implementing them. From 2006-2007 to 2009-2010, awareness of the Alliance's beverage guidelines increased from 35.0% to 51.8% among school administrators, and awareness of the food guidelines increased from 29.4% to 40.2%. By the 2009-2010 school year, almost one-third of the schools that sold competitive beverages and foods reported having implemented or being in the process of implementing the guidelines. However, many administrators at schools who sold competitive products were not aware of the guidelines, indicating a need for continued efforts. Additionally, schools with a majority of Black or Latino students were less likely to implement the guidelines, suggesting that targeted efforts to provide intensive technical assistance to such schools are warranted and necessary (PUBMED:22954166).
Instruction: Episiotomy in the United States: has anything changed? Abstracts: abstract_id: PUBMED:19243733 Episiotomy in the United States: has anything changed? Objective: The objective of the study was to describe episiotomy rates in the United States following recommended changes in clinical practice. Study Design: The National Hospital Discharge Survey, a federal data set sampling inpatient hospitals, was used to obtain data based on International Classification of Diseases, Clinical Modification, 9th revision, diagnosis and procedure codes from 1979 to 2004. Age-adjusted rates of term, singleton, vertex, live-born spontaneous vaginal delivery, operative vaginal delivery, cesarean delivery, episiotomy, and anal sphincter laceration were calculated. Census data for 1990 for women 15-44 years of age was used for age adjustment. Regression analysis was used to evaluate trends in episiotomy. Results: The rate of episiotomy with all vaginal deliveries decreased from 60.9% in 1979 to 24.5% in 2004. Anal sphincter laceration with spontaneous vaginal delivery declined from 5% in 1979 to 3.5% in 2004. Rates of anal sphincter laceration with operative delivery increased from 7.7% in 1979 to 15.3% in 2004. The age-adjusted rate of operative vaginal delivery declined from 8.7 in 1979 to 4.6 in 2004, whereas cesarean delivery rates increased from 8.3 in 1979 to 17.2 per 1000 women in 2004. Conclusion: Routine episiotomy has declined since liberal usage has been discouraged. Anal sphincter laceration rates with spontaneous vaginal delivery have decreased, likely reflecting the decreased usage of episiotomy. The decline in operative vaginal delivery corresponds to a sharp increase in cesarean delivery, which may indicate that practitioners are favoring cesarean delivery for difficult births. abstract_id: PUBMED:11552962 Trends in the use of episiotomy in the United States: 1980-1998. Background: Despite a relative paucity of clinical evidence justifying its routine use, approximately 40 percent of all vaginal deliveries include an episiotomy. The purpose of this study is to examine trends in episiotomy in the United States from 1980 through 1998, a period during which calls increased to abandon routine episiotomy. Methods: Data were obtained from the National Hospital Discharge Survey, which is conducted annually and based on a nationally representative sample of discharges from short-stay non-Federal hospitals. Results: From 1980 through 1998 the episiotomy rate in the United States dropped by 39 percent. Rates decreased for all age and racial groups investigated, in all four geographic regions, and for all sources of payment. Significant differences remained between groups in 1998, including a higher rate for white women than for black women, and a higher rate for women with private insurance than for women with Medicaid or in the self-pay category. The incidence of first- and second-degree lacerations to the perineum increased for women without episiotomies, but the more severe third- and fourth-degree lacerations remained more frequent for women with episiotomies. Women with episiotomies were more likely to have forceps-assisted deliveries or vacuum extractions. Conclusions: Despite dramatic declines in the use of episiotomy during the last two decades, it remains one of the most frequent surgical procedures performed on women in the United States, and it continues to be performed at a higher rate for certain groups of women. 
abstract_id: PUBMED:12468160 Episiotomy use in the United States, 1979-1997. Objective: To describe episiotomy usage at vaginal delivery in the United States from 1979-1997. Methods: We used the National Hospital Discharge Survey, a federal database of a national sample of inpatient hospitals. Data from 1979 to 1997 were analyzed using International Classification of Diseases, Ninth Revision, Clinical Modification codes for diagnoses and procedures. Rates per 1000 women were calculated using the 1990 census population for women aged 15-44 years. We calculated the number of episiotomies per 100 vaginal deliveries. Rates and percentages were compared using the score test for linear trend. Results: The number of episiotomies ranged from a high of 2,015,000 in 1981 to a low of 1,128,000 in 1997. The age-adjusted annual rate for episiotomy with vaginal deliveries varied from 32.7 in 1979 to 18.7 in 1997 per 1000 women aged 15-44 years. The percentage of episiotomy with vaginal deliveries ranged from 65.3% in 1979 to 38.6% in 1997 (P <.001). Episiotomy with operative deliveries decreased over time (87.0% to 70.8%, P <.001), as did episiotomy with spontaneous deliveries (60.1% to 32.8%, P <.001). Women undergoing episiotomy were slightly younger (mean +/- standard deviation, 25.7 +/- 5.5 years) than women without episiotomy (26.2 +/- 5.7 years, P <.001). Black women (39%) were less likely to receive episiotomy than white women (60%, P <.001). More women with private insurance (62%) had episiotomy performed than women with government insurance (43%, P <.001). Conclusion: Although episiotomy use has decreased over time, the most recent rate of 39 per 100 vaginal deliveries remains higher than evidence-based recommendations for optimal patient care. abstract_id: PUBMED:32222098 Mediolateral Episiotomy: Technique, Practice, and Training. Episiotomy is one of the most common obstetric procedures. However, restrictive use of episiotomy has led to a decrease in its use in the United States. Historically, mediolateral episiotomy has been performed less often than median episiotomy in the United States, but both have purported advantages and disadvantages. Emerging research on episiotomy and obstetric anal sphincter injuries has led to an examination of the effects of mediolateral episiotomy. This article describes performance of a mediolateral episiotomy in a situation of fetal bradycardia. Technical aspects of the incision and repair are described, and outcome data and knowledge gaps are summarized. Implications for practice, clinical competency, and education are reviewed. abstract_id: PUBMED:10426673 Changed pattern in the use of episiotomy in Sweden. Objective: To study changes in the use of episiotomy since 1989, controlling for variables such as severe tears, epidural anaesthesia, duration of the second stage of labour, instrumental deliveries, birthweight and maternal position at delivery. Design: Retrospective study. Data were obtained from original birth records and questionnaires. Setting: Huddinge University Hospital and all labour wards (n = 62) in Sweden. Population: 10,661 women who were delivered vaginally (4575 nulliparae, 6086 multiparae) between 1992 and 1994, and 3366 nulliparae delivered in all Swedish hospitals during the month of March 1995. Main Outcome Measures: Episiotomy rates, severe tears and instrumental deliveries. 
Results: The rate of episiotomy was 1% and of severe tears 0.6% among multiparae delivered vaginally (including instrumental deliveries) at Huddinge University Hospital between 1992 and 1994. The rate of episiotomy was 6.6% and of severe tears 2.3% among nulliparae. Vacuum extraction and epidural anaesthesia were more commonly associated with episiotomy. Factors significantly associated with severe tears were infant birthweight > or = 4000 g, vacuum extraction and episiotomy. In all Swedish labour wards in 1995 the mean incidence of episiotomy in nulliparae was 24.5%, a significant decrease from 33.7% in 1989. Wide variations occurred between hospitals (4%-50%). Conclusion: The use of episiotomy was much reduced at Huddinge University Hospital, with a consistently low rate of severe tears. This supports the growing evidence for individualised and restrictive use of episiotomy at childbirth. abstract_id: PUBMED:17459568 Operative vaginal delivery and the use of episiotomy--a survey of practice in the United Kingdom and Ireland. Objective: To establish the views and current practice of obstetricians with regard to operative vaginal delivery and the use of episiotomy. Study Design: A national survey of consultant obstetricians and specialist registrars practising in the United Kingdom and Ireland registered with the Royal College of Obstetricians and Gynaecologists (RCOG), London. A postal questionnaire was sent to all obstetricians with two subsequent reminders to non-responders. The choice of procedure for specific circumstances, instrument preference, use of episiotomy and views on the relationship between episiotomy use and anal sphincter tears at operative vaginal delivery were explored. Results: The response rate was 80.4%. Instrument preference varied according to the fetal position and station and the grade of operator. Vacuum and forceps were both used for mid-cavity non-rotational deliveries (64% and 56% reported frequent use respectively). Rotational vacuum was preferred for a mid-cavity mal-position (69%) followed by equal numbers using rotational forceps or manual rotation and forceps (34% and 36%, respectively). Inexperienced operators were more likely to proceed directly to caesarean section (35%). A restrictive approach to use of episiotomy was preferred for vacuum delivery (72%) and a routine approach for forceps (73%). Obstetricians varied greatly in their perception of the relationship between episiotomy use and anal sphincter tears at operative vaginal delivery. Conclusion: There is wide variation in the use of episiotomy at operative vaginal delivery with uncertainty about its role in preventing anal sphincter tears. A randomised controlled trial would address this important aspect of obstetric care. abstract_id: PUBMED:9355272 Episiotomy counts: trends and prevalence in Canada, 1981/1982 to 1993/1994. Background: The purpose of this study was to produce a minimum estimate of the prevalence of episiotomy use in Canada, and to investigate the trend in its use between 1981/1982 and 1993/1994. Method: A retrospective population case series study was conducted using hospital discharge abstracts. Outcome measures were the count of episiotomies performed during a 12-month period and the episiotomy rate per 100 vaginal births. Results: For more than a decade, official statistics have significantly underreported episiotomy use by as much as 50 percent. In 1993/1994 at least 37.7 percent of women giving birth vaginally in Canada are known to have received an episiotomy. 
Between 1981/1982 and 1993/1994 its prevalence declined 29.1 percent, with the greatest decline occurring during the 1990s. This decline did not result from changes in parity in the population. The decrease in episiotomy use during this 13-year period is more than twice that found in the United States (a decline of only 13.6%). Conclusions: The reporting of official statistics on obstetric procedures in Canada should be modified to include all known cases of episiotomy. The observed downward trend in the rate of this procedure is encouraging, and is in the direction of evidence-based recommendations advocating its restrictive use. abstract_id: PUBMED:28334585 Uptake and Utilization of Practice Guidelines in Hospitals in the United States: the Case of Routine Episiotomy. Background: The gap between publishing and implementing guidelines differs based on practice setting, including hospital geography and teaching status. On March 31, 2006, a Practice Bulletin published by the American College of Obstetricians and Gynecologists (ACOG) recommended against the routine use of episiotomy and urged clinicians to make judicious decisions to restrict the use of the procedure. Objective: This study investigated changes in trends of episiotomy use before and after the ACOG Practice Guideline was issued in 2006, focusing on differences by hospital geographic location (rural/urban) and teaching status. Methods: In a retrospective analysis of discharge data from the Nationwide Inpatient Sample (NIS)-a 20% sample of US hospitals-5,779,781 hospital-based births from 2002 to 2011 (weighted N = 28,067,939) were analyzed using multivariable logistic regression analysis to measure odds of episiotomy and trends in episiotomy use in vaginal deliveries. Results: The overall episiotomy rate decreased from 20.3% in 2002 to 9.4% in 2011. Across all settings, a comparatively larger decline in episiotomy rates preceded the issuance of the ACOG Practice Guideline (34.0% decline), rather than following it (23.9% decline). The episiotomy rate discrepancies between rural, urban teaching, and urban nonteaching hospitals remained steady prior to the guideline's release; however, differences between urban nonteaching and urban teaching hospitals narrowed between 2007 and 2011 after the guideline was issued. Conclusion: Teaching status was a strong predictor of odds of episiotomy, with urban nonteaching hospitals having the highest rates of noncompliance with evidence-based practice. Issuance of clinical guidelines precipitated a narrowing of this discrepancy. abstract_id: PUBMED:2658595 Episiotomy: can its routine use be defended? Episiotomy is routinely used before delivery in the United States. The rationale for routine episiotomy is based on two foundation arguments: that episiotomy reduces perineal trauma and that it prevents subsequent pelvic relaxation. A careful review finds little evidence to support these arguments. Episiotomy may predispose the gravid woman to third- and fourth-degree lacerations. Efforts should be directed to determine whether episiotomy is beneficial. abstract_id: PUBMED:19889394 Episiotomy in the United States: has anything changed? N/A Answer: Episiotomy rates in the United States have indeed changed over time. The rate of episiotomy with all vaginal deliveries decreased significantly from 60.9% in 1979 to 24.5% in 2004 (PUBMED:19243733). This decline in routine episiotomy usage is likely a reflection of the recommended changes in clinical practice that discourage liberal use of the procedure. 
Additionally, anal sphincter laceration rates with spontaneous vaginal delivery have decreased, which may also be associated with the decreased usage of episiotomy (PUBMED:19243733). Further analysis of trends from 1980 through 1998 showed that the episiotomy rate dropped by 39 percent during that period (PUBMED:11552962). The decline was observed across all age and racial groups, geographic regions, and sources of payment. However, episiotomy remained one of the most frequent surgical procedures performed on women in the United States, with certain groups of women, such as white women and those with private insurance, more likely to undergo the procedure (PUBMED:11552962). From 1979 to 1997, the percentage of episiotomy with vaginal deliveries ranged from 65.3% to 38.6%, indicating a significant decrease over time (PUBMED:12468160). Despite this decrease, the rate of episiotomy was still higher than evidence-based recommendations for optimal patient care (PUBMED:12468160). The use of mediolateral episiotomy, one of the types of episiotomy, has also been examined, with technical aspects of the incision and repair described, and implications for practice, clinical competency, and education reviewed (PUBMED:32222098). In summary, episiotomy rates in the United States have declined over the years, reflecting a shift towards more restrictive use in line with evidence-based recommendations. However, episiotomy continues to be a common obstetric procedure, and its use varies among different demographic groups and hospital settings.
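A small arithmetic note on the rate changes quoted in this record: the abstracts and answer report both percentage-point changes (e.g., 20.3% to 9.4% of deliveries) and relative declines (e.g., a 39 percent drop), and the two are easy to conflate. The minimal Python sketch below uses only the 1979 and 2004 figures cited in the answer above, not data from any study, to show the difference.

def decline(rate_start: float, rate_end: float) -> tuple[float, float]:
    """Return (absolute drop in percentage points, relative decline in percent)."""
    absolute = rate_start - rate_end
    relative = 100.0 * absolute / rate_start
    return absolute, relative

# Figures quoted in the answer above (PUBMED:19243733): 60.9% of vaginal
# deliveries involved episiotomy in 1979 vs 24.5% in 2004.
abs_drop, rel_drop = decline(60.9, 24.5)
print(f"{abs_drop:.1f} percentage points; {rel_drop:.1f}% relative decline")
# -> 36.4 percentage points; 59.8% relative decline

Statements such as "dropped by 39 percent" in the abstracts are relative declines of this second kind.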
Instruction: Does social isolation and low societal participation predict disability pension? Abstracts: abstract_id: PUBMED:24223229 Does social isolation and low societal participation predict disability pension? A population based study. Purpose: The aim was to examine the potential influence of social isolation and low societal participation on the future risk of receiving disability pension among individuals in Sweden. A specific aim was to describe differences depending on disability pension diagnoses, and how the results were modified by sex and age. Method: The study comprised representative samples of Swedish women and men, who had been interviewed in any of the annual Swedish Surveys of Living Conditions between 1990 and 2007. Information on disability pension and diagnoses was added from the Swedish Social Insurance Agency's database (1991-2011). The mean number of years of follow-up for the 53920 women and men was twelve years (SD 5.5), and the study base was restricted to the ages 20 to 64 years of age. The predictors were related to disability pension by Cox's proportional hazards regression. Results: Social isolation and low societal participation were associated with future disability pension also after control for age, year of interview, socio demographic conditions and self reported longstanding illness. Lone individuals were at increased risk of disability pension, and the effect of living without children was modified by sex and age. An increase in risk was particularly noticeable among younger women who reported that they had sparse contacts with others, and no close friend. Both women and men who reported that they did not participate in political discussions and who could not appeal on a decision by a public authority were also at increased risk. The effects of social isolation were mainly attributed to disability pension with mental diagnoses, and to younger individuals. Conclusions: The study suggests that social isolation and low societal participation are predictors of future disability pension. Social isolation and low societal participation increased particularly the risk of future disability pension in mental diagnoses among younger individuals. abstract_id: PUBMED:16449043 Neighborhood social participation, use of anxiolytic-hypnotic drugs, and women's propensity for disability pension: a multilevel analysis. Aims: The increasing number of people on disability pension in Sweden is of concern for Swedish policy-makers, and there is a need for a better understanding of the mechanisms behind disability pension. We investigated (i) whether women living in the same neighborhood have a similar propensity for disability pension that relates to neighborhood social participation, and (ii) whether there is an association between anxiolytic-hypnotic drug (AHD) use and disability pension in women that is modified by the neighborhood context. Methods: We used multilevel logistic regression with 12,156 women aged 45 to 64 (first level) residing in 95 neighborhoods (second level) in the city of Malmö (250,000 inhabitants), Sweden, who participated in the Malmö Diet and Cancer Study (1991-96). Results: Both AHD use (OR = 2.09, 95% CI 1.65, 2.65) and neighborhood rate of low social participation (OR = 11.85, 95% CI 5.09, 27.58) were associated with higher propensity for disability pension. The interval odds ratio indicated that the influence of neighborhood social participation was large compared with the unexplained variance between the neighborhoods. 
The association between AHD use and disability pension was not modified by the neighborhood context. The median odds ratio was 1.44 after adjusting for individual characteristics and 1.27 after the additional adjusting for neighborhood social participation. Conclusions: Women living in the same neighborhood appear to have a similar propensity for disability pension, beyond individual characteristics, and this contextual effect seems largely explained by neighborhood social participation. In addition, AHD use might increase the propensity for disability pension in women. abstract_id: PUBMED:37945420 The link between disability and social participation revisited: Heterogeneity by type of social participation and by socioeconomic status. Background: While prior literature explores the impact of disability on social participation, the distinct characteristics of diverse social activities could further complicate this relationship. Furthermore, this relationship may exhibit heterogeneity when considering socioeconomic status (SES). Objective: This study aims to investigate whether the relationship between disability and social participation differs depending on the type of social participation, and to what extent this relationship is moderated by SES. Methods: Data from seven waves of the Korean Longitudinal Study of Ageing were analyzed. Various types of social participation, including socializing, leisure, volunteer, political, and religious activities, were considered. Individual fixed effects models were employed to account for unobserved individual-level heterogeneity. To investigate the potential moderating role of SES, an interaction term between disability and SES was included. Results: Disability was associated with a decrease in social participation (b = -0.088). When differentiating types of social participation, the associations were negative for socializing and leisure activities (b = -0.092 and b = -0.012, respectively) and positive for volunteer activities (b = 0.012). The negative association between disability and social participation was generally stronger among higher-SES groups than lower-SES groups. Specifically, the negative association with leisure activities was more pronounced among the high-education groups. In contrast, the positive association with volunteer activities was more evident among the low-education group. Conclusions: Disability has a negative association with engagement in socializing and leisure activities and a positive association with engagement in volunteer activities. Policymakers should consider the role of SES in complicating the relationship between disability and social participation. abstract_id: PUBMED:27492313 Disability studies: social exclusion a research subject The article presents disability studies and elaborates, as their central feature, the distinction between societal disability and impairment which can be described on an individual and medical level. Disability studies define disability as socially caused exclusion. Participation and inclusion, seen as sociopolitical control and counter-terms, do, in fact, have a different content, depending on usage and context. Using the example of the International Classification of Functioning (ICF) and the United Nations Convention on the Rights of Persons with Disabilities (UN CRPD), the respective understanding of disability is depicted. Against this background, the deficits of implementation of the UN CRPD, as criticized by the responsible UN Committee, are shown. 
Finally, a research agenda for disability studies is outlined, that deals with, among other things, implementation strategies and conflicts of interest in terms of inclusion, furthering widely unquestioned economic conditions and especially the negative impact of European austerity politics. abstract_id: PUBMED:23988324 Participation and social participation: are they distinct concepts? Introduction: The concept of participation has been extensively used in health and social care literature since the World Health Organization introduced its description in the International Classification of Functioning, Disability and Health (ICF) in 2001. More recently, the concept of social participation is frequently used in research articles and policy reports. However, in the ICF, no specific definition exists for social participation, and an explanation of differences between the concepts is not available. Aim: The central question in this discussion article is whether participation, as defined by the ICF, and social participation are distinct concepts. This article illustrates the concepts of participation and social participation, presents a critical discussion of their definitions, followed by implications for rehabilitation and possible future directions. Discussion: A clear definition for participation or social participation does not yet exist. Definitions for social participation differ from each other and are not sufficiently distinct from the ICF definition of participation. Although the ICF is regarded an important conceptual framework, it is criticised for not being comprehensive. The relevance of societal involvement of clients is evident for rehabilitation, but the current ICF definition of participation does not sufficiently capture societal involvement. Conclusion: Changing the ICF's definition of participation towards social roles would overcome a number of its shortcomings. Societal involvement would then be understood in the light of social roles. Consequently, there would be no need to make a distinction between social participation and participation. abstract_id: PUBMED:28666397 The association of mobility disability and weight status with risk of disability pension: A prospective cohort study. Background/aims: Mobility disability (MD) and obesity are conditions which have been associated with weaker labour market attachment. This study investigates whether the combined burden of MD and obesity increase the risk of disability pension compared with having only one of these conditions (the reference group). Methods: A nationwide cohort study, based on national surveys made between 1996 and 2011, was conducted including 50,015 individuals aged 19-64 years who were followed-up in a large database in terms of attainment of disability pension until 31 December 2012 (at the latest). Proportional hazards regression models were used to analyse the risk of all-cause and diagnosis-specific disability pension with six exposure groups, established by mobility and weight status (BMI) obtained through self-reports. Results: A total of 2296 participants had received disability pension after a mean follow-up period of 7.2 years (SD 4.6). People with MD, regardless of weight, had 4-8 times higher risk of disability pension (for any reason) compared with the reference group (individuals with normal weight and no MD). Conclusions: No evidence of a double burden of MD and obesity with disability pension was observed in this study. 
MD seemed to contribute more to the risk of disability pension than weight status. In a long-term perspective, society and also people at risk of these disabling conditions would benefit from reallocation of resources from disability pensions to health-promoting and preventive policies, not least targeting MD. abstract_id: PUBMED:36905805 Effect of social participation on the association between frailty and disability. Objectives: To examine whether social participation affects the association between frailty and disability. Methods: A baseline survey conducted from December 1 to 15, 2006, included 11,992 participants who were classified based on the Kihon Checklist into three categories and based on the number of activities in which they socially participated into four categories. The study outcome, incident functional disability, was defined as in Long-Term Care Insurance certification. A Cox proportional hazards model was used to calculate hazard ratios (HRs) for incident functional disability according to frailty and social participation categories. Combination analysis was performed between the nine groups using the above-mentioned Cox proportional hazards model. Results: During the 13-year follow-up (107,170 person-years), 5,732 incident cases of functional disability were certified. Compared with the robust group, the other groups had significantly higher incident functional disability. However, the HRs for those participating in social activities were lower than that for those not participating in any activity [1.52 (pre-frail + none group); 1.31 (pre-frail + one activity group); 1.42 (pre-frail + two activities group); 1.37 (pre-frail + three activities group); 2.35 (frail + none group); 1.87 (frail + one activity group); 1.85 (frail + two activities group); and 1.71 (frail + three activities group)]. Conclusions: The risk of functional disability for those participating in social activities was lower than that for those not participating in any activity, irrespective of being pre-frail or frail. Comprehensive social systems for disability prevention need to focus on social participation in frail older adults. abstract_id: PUBMED:32260520 Unemployment Trajectories and the Early Risk of Disability Pension among Young People with and without Autism Spectrum Disorder: A Nationwide Study in Sweden. Depression and anxiety are associated with unemployment and disability pension, while autism spectrum disorder (ASD) is less studied. We aimed to first identify unemployment trajectories among young adults with and without ASD, and then to examine their social determinants. Finally, we used the trajectories as determinants for subsequent disability pension. We used a population-based cohort, including 814 people who were 19-35 years old, not on disability pension, and who had their ASD diagnosis between 2001 and 2009. A matched reference population included 22,013 people with no record of mental disorders. Unemployment follow-up was the inclusion year and four years after. Disability pension follow-up started after the unemployment follow-up and continued through 2013. We identified three distinctive trajectories of unemployment during the follow-up: (1) low, then sharply increasing (9%,) (2) low (reference, 67%), and (3) high then slowly decreasing (24%). People with ASD had higher odds of belonging belong to the trajectory groups 1 (OR 2.53, 95% CI 2.02-3.18) and 3 (OR 3.60, 95% CI 3.08-4.19). However, the mean number of unemployment days was relatively low in all groups. 
A disability pension was a rare event in the cohort, although memberships to groups 1 and 3 were associated with the risk of a future disability pension. More knowledge is needed about factors facilitating participation in paid employment among people with ASD. abstract_id: PUBMED:29471760 Working while on a disability pension in Finland: Association of diagnosis and financial factors to employment. Aims: The aim of this study was to find out whether health and financial factors are associated with engagement in paid work during a disability pension. Methods: The data included a 10 per cent sample of Finns aged 20-62 years who were drawing earnings-related full or partial disability pension in 2012 ( n = 14,418). Logistic regression analysis was used to estimate odds ratios for working while on a full or partial disability pension. Results: Fourteen per cent of full disability pensioners and 76 per cent of partial disability pensioners were engaged in paid work. Full disability pensioners due to mental disorders were working less often than full disability pensioners due to other diseases. Partial disability pensioners due to cardiovascular diseases were working more than partial disability pensioners due to other diseases. More recent timing of disability pension was associated with working for both partial and full disability pensioners. Working while on disability pension was more common among those with higher education. Partial disability pensioners with average pension worked more often than those with high pension. Conclusions: By knowing the factors associated with working while on a disability pension, policies could be more efficiently allocated to encourage disability pensioners to take up work. One way would be to support disability pensioners with low education to work more. Another way to increase work among disability pensioners is to support the recently retired in working longer. abstract_id: PUBMED:29366393 Impact of changes in welfare legislation on the incidence of disability pension. A cohort study of construction workers. Aims: Study objectives were to investigate how changes in social insurance legislation influenced the incidence of disability pension. Methods: The study included 295,636 male construction workers who attended health examinations between 1971 and 1993, aged 20-60 years and without previous disability pension. Via the Swedish National Insurance Agency national register we identified 66,046 subjects who were granted disability pension up until 2010. The incidence rates were calculated and stratified according to age and diagnosis. Results: The incidence rate of disability pension was fairly stable until the 1990s when large variations occurred, followed by a strong decreasing trend from the early 2000s to 2010. Trends in incidence rates, stratified by age and diagnosis, showed a consistent decrease in cardiovascular disease for all age groups. In subjects aged 30-49 years there was a high peak around 2003 for musculoskeletal diseases and psychiatric diseases. For the age group 50-59 years, musculoskeletal diagnosis, the most common cause of disability pension, had a sharp peak around 1993 and then a decreasing trend. In the 60-64 age group, the incidence rate for psychiatric diagnosis was stable, while incidence rates for musculoskeletal diagnosis varied during the 1990s. Conclusions: There are considerable variations in the incidence rate of disability pension over time, with different patterns depending on age and diagnosis. 
Changes in social insurance legislation, as well as in administration processes, seem to influence the variation. Answer: Yes, social isolation and low societal participation are associated with an increased risk of receiving a disability pension. Studies have found that individuals who experience social isolation or have low levels of societal participation are more likely to receive disability pensions in the future. This association persists even after controlling for factors such as age, socio-demographic conditions, and self-reported longstanding illness (PUBMED:24223229). For example, one study showed that women living in neighborhoods with low social participation had a higher propensity for disability pension, suggesting that the neighborhood context can influence individual outcomes related to disability pension (PUBMED:16449043). Another study indicated that the negative association between disability and social participation was generally stronger among higher socioeconomic status (SES) groups, with different types of social participation being affected differently by disability (PUBMED:37945420). Moreover, the concept of social participation is closely linked to the broader understanding of disability as socially caused exclusion, as outlined in disability studies. The distinction between societal disability and impairment is crucial, with participation and inclusion being seen as sociopolitical control and counter-terms that have different meanings depending on usage and context (PUBMED:27492313). The association between mobility disability, weight status, and the risk of disability pension has also been studied, with findings suggesting that mobility disability contributes more to the risk of disability pension than weight status (PUBMED:28666397). Additionally, social participation has been found to affect the association between frailty and disability, with those participating in social activities having a lower risk of functional disability, irrespective of being pre-frail or frail (PUBMED:36905805). In summary, the evidence suggests that social isolation and low societal participation are predictors of future disability pension, with the effects being particularly noticeable for disability pensions with mental diagnoses among younger individuals (PUBMED:24223229).
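Several abstracts in this record (PUBMED:24223229, PUBMED:28666397, PUBMED:36905805) relate baseline predictors to subsequent disability pension using Cox proportional hazards regression and report the results as hazard ratios. For readers unfamiliar with that model, the following is a minimal illustrative sketch using the third-party lifelines library on synthetic data with hypothetical column names; it is not a re-analysis of any cited study.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "socially_isolated": rng.integers(0, 2, n),       # hypothetical: 1 = sparse contacts, no close friend
    "age": rng.integers(20, 65, n),
    "followup_years": rng.exponential(10, n).clip(0.1, 20),
    "disability_pension": rng.integers(0, 2, n),       # hypothetical: 1 = event observed during follow-up
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="disability_pension")
# exp(coef) is the hazard ratio, the quantity reported as HR/RR in the abstracts above.
print(cph.summary[["coef", "exp(coef)", "p"]])

Because the covariates here are random, the fitted hazard ratios will hover around 1; the point is only to show the shape of the analysis the abstracts describe.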
Instruction: Do emergency physicians attribute drug-related emergency department visits to medication-related problems? Abstracts: abstract_id: PUBMED:20005010 Do emergency physicians attribute drug-related emergency department visits to medication-related problems? Study Objective: Adverse drug events represent the most common cause of preventable nonsurgical adverse events in medicine but may remain undetected. Our objective is to determine the proportion of drug-related visits emergency physicians attribute to medication-related problems. Methods: This prospective observational study enrolled adults presenting to a tertiary care emergency department (ED) during 12 weeks. Drug-related visits were defined as ED visits caused by adverse drug events. The definition of adverse drug event was varied to examine both narrow and broad adverse drug event classification systems. Clinical pharmacists evaluated all patients for drug-related visits, using standardized assessment algorithms, and then followed patients until hospital discharge. Interrater agreement for the clinical pharmacist diagnosis of drug-related visit was assessed. Emergency physicians, blinded to the clinical pharmacist opinion, were interviewed at the end of each shift to determine whether they attributed the visit to a medication-related problem. An independent committee reviewed and adjudicated all cases in which the emergency physicians' and clinical pharmacists' assessments were discordant, or either the emergency physician or clinical pharmacist was uncertain. The primary outcome was the proportion of drug-related visits attributed to a medication-related problem by emergency physicians. Results: Nine hundred forty-four patients were enrolled, of whom 44 patients received a diagnosis of the narrowest definition of an adverse drug event, an adverse drug reaction (4.7%; 95% confidence interval [CI] 3.5% to 6.2%). Twenty-seven of these were categorized as medication-related by emergency physicians (61.4%; 95% CI 46.5% to 74.3%), 10 were categorized as uncertain (22.7%; 95% CI 12.9% to 37.1%), and 7 categorized as a non-medication-related problem (15.9%; 95% CI 8.0% to 29.5%). Seventy-eight patients (8.3%; 95% CI 6.7% to 10.2%) received a diagnosis of an adverse drug event caused by an adverse drug reaction, a drug interaction, drug withdrawal, a medication error, or noncompliance. Emergency physicians attributed 49 of these to a medication-related problem (62.8%; 95% CI 51.7% to 72.7%), were uncertain about 15 (19.2%; 95% CI 12.0% to 29.4%), and attributed 14 to non-medication-related problems (17.9%; 95% CI 11.0% to 27.9%). Twenty-five of 29 (86.2%; 95% CI 69.3% to 94.4%) adverse drug events not considered medication related by emergency physicians were rated at least moderate in severity. Conclusion: A significant proportion of drug-related visits are not deemed medication related by emergency physicians. Drug-related visits not attributed to medication-related problems by emergency physicians may be missed in ongoing outpatient adverse drug event surveillance programs intended to develop strategies to enhance drug safety. Further research is needed to determine what the effect may be of not attributing adverse drug events to medication-related problems. abstract_id: PUBMED:28610634 Medication-related visits in a pediatric emergency department: an 8-years retrospective analysis. 
Background: There are limited data on the characterization of medication-related visits (MRVs) to the emergency department (ED) in pediatric patients in Italy. We have estimated the frequency, severity, and classification of MRVs to the ED in pediatric patients. Methods: We retrospectively analyzed data for children seeking medical evaluation for an MRV over an 8-year period. A medication-related ED visit was identified by using a random pharmacist assessment, emergency physician assessment, and in case of conflicting events, by a third investigator's random assessment. Results: In this study, conducted at a single tertiary center in Italy, 497 medication-related visits were found among a total of 147,643 patients aged 0 to 14 years; 54% of these visits occurred in children from 0 to 2 years of age. Severity was classified as mild in 21.6% of cases, moderate in 67.2% of cases, and severe in 11.2% of cases. The most common events were related to drug use without indication (51%), adverse drug reactions (30.3%), supratherapeutic dosage (13.2%) and improper drug selection (4.5%). The medication classes most frequently implicated in an ADE were anti-infective drugs for systemic use (28.9%), central nervous system agents (22.3%) and respiratory system drugs (10.8%). The most common symptom manifestations were dermatologic conditions (46.1%), general disorder and administration site conditions (29.7%) and gastrointestinal symptoms (16.0%). Conclusions: To our knowledge, this is the first study in Italy evaluating the epidemiologic characteristics of MRVs, confirming them as a significant cause of healthcare contact resulting in ED visits and hospital admissions with associated resource utilization. Our results suggest that further prospective, large-sample, multicenter research is necessary to better understand the impact of MRVs and to develop strategies to provide care plans and monitor patients to prevent medication-related visits. Trial Registration: Not applicable. abstract_id: PUBMED:38256662 Medication-Related Hospital Admissions and Emergency Department Visits in Older People with Diabetes: A Systematic Review. Limited data are available regarding adverse drug reactions (ADRs) and medication-related hospitalisations or emergency department (ED) visits in older adults with diabetes, especially since the emergence of newer antidiabetic agents. This systematic review aimed to explore the nature of hospital admissions and ED visits that are medication-related in older adults with diabetes. The review was conducted according to the PRISMA guidelines. Studies in English that reported on older adults (mean age ≥ 60 years) with diabetes admitted to the hospital or presenting to ED due to medication-related problems and published between January 2000 and October 2023 were identified using Medline, Embase, and International Pharmaceutical Abstracts databases. Thirty-five studies were included. Medication-related hospital admissions and ED visits were all reported as episodes of hypoglycaemia and were most frequently associated with insulins and sulfonylureas. The studies indicated a decline in hypoglycaemia-related hospitalisations or ED presentations in older adults with diabetes since 2015. However, the associated medications remain the same. This finding suggests that older patients on insulin or secretagogue agents should be closely monitored to prevent potential adverse events, and newer agents should be used whenever clinically appropriate.
abstract_id: PUBMED:12126224 Drug-related visits to the emergency department: how big is the problem? Objectives: To review the literature concerning drug-related problems that result in emergency department visits, estimate the frequency of these problems and the rates of hospital admissions, and identify patient risk factors and drugs that are associated with the greatest risk. Methods: A systematic search of MEDLINE (January 1966-December 2001), EMBASE (January 1980-December 2001), and PubMed (January 1966-December 2001) databases for full reports published in English was performed. The Ottawa Valley Regional Drug Information Service database of nonindexed pharmacy journals also was searched. Results: Data from eight retrospective and four prospective trials retrieved indicated that as many as 28% of all emergency department visits were drug related. Of these, 70% were preventable, and as many as 24% resulted in hospital admission. Drug classes often implicated in drug-related visits to an emergency department were nonsteroidal antiinflammatory drugs, anticonvulsants, antidiabetic drugs, antibiotics, respiratory drugs, hormones, central nervous system drugs, and cardiovascular drugs. Common drug-related problems resulting in emergency department visits were adverse drug reactions, noncompliance, and inappropriate prescribing. Conclusion: Drug-related problems are a significant cause of emergency department visits and subsequent resource use. Primary caregivers, such as family physicians and pharmacists, should collaborate more closely to provide and reinforce care plans and monitor patients to prevent drug-related visits to the emergency department and subsequent morbidity and mortality. abstract_id: PUBMED:37343666 Emergency department physicians' experiences and perceptions with medication-related work tasks and the potential role of clinical pharmacists. Purpose: Medication-related problems are frequent among emergency department patients. Clinical pharmacists play an important role in identifying, solving, and preventing these problems, but are not present in emergency departments worldwide. We aimed to explore how Norwegian physicians experience medication-related work tasks in emergency departments without pharmacists present, and how they perceive future introduction of a clinical pharmacist in the interprofessional team. Methods: We interviewed 27 physicians in three emergency departments in Norway. Interviews were audio-recorded, transcribed, and analysed using qualitative content analysis. Results: Our informants' experience with medication-related work tasks mainly concerned medication reconciliation, and few other tasks were systematically performed to ensure medication safety. The informants were welcoming of clinical pharmacists and expressed a need and wish for assistance with compiling patient's medication lists. Simultaneously they expressed concerns regarding e.g., responsibility sharing, priorities in the emergency department and logistics. These concerns need to be addressed before implementing the clinical pharmacist in the interprofessional team in the emergency department. Conclusions: Physicians in Norwegian emergency departments welcome assistance from clinical pharmacists, but the identified professional, structural, and legislative barriers for this collaboration need to be addressed before implementation. 
abstract_id: PUBMED:36313329 Prevalence and predictors of medication-related emergency department visit in older adults: A multicenter study linking national claim database and hospital medical records. Objectives: Older adults are more likely to experience drug-related problems (DRP), which could lead to medication-related emergency department visits (MRED). To properly evaluate MRED, the entire history of drug use should be evaluated in a structured manner. However, limited studies have identified MRED with complete prescription records. We aimed to evaluate the prevalence and risk factors of MRED among community-dwelling older patients by linking national claims data and electronic medical records using a standardized medication-related admission identification method. Methods: We included older patients who visited the emergency departments of four participating hospitals in 2019. Among the 54,034 emergency department (ED) visitors, we randomly selected 6,000 patients and structurally reviewed their medical records using a standardized MRED identification method after linking national claims data and electronic medical records. We defined and categorized MRED as ED visits associated with adverse drug events and those caused by the underuse of medication, including treatment omission and noncompliance, and assessed as having probable or higher causality. We assessed preventability using Schumock and Thornton criteria. Results: MRED was observed in 14.3% of ED visits, of which 76% were preventable. In addition, 32.5% of MRED cases were related to underuse or noncompliance, and the rest were related to adverse drug events. Use of antipsychotics, benzodiazepines, anticoagulants, traditional nonsteroidal anti-inflammatory drugs without the use of proton pump inhibitors, P2Y12 inhibitors, insulin, diuretics, and multiple strong anticholinergic drugs were identified as predictors of MRED. Conclusion: One in seven ED visits by older adults was medication related and over three-quarters of these were preventable. These findings suggest that DRPs need to be systematically screened for and intervened upon in older adults who visit the ED. abstract_id: PUBMED:35129789 Drug-related emergency department visits: prevalence and risk factors. The study aimed to investigate the prevalence of drug-related emergency department (ED) visits and associated risk factors. This retrospective cohort study was conducted in the ED, Diakonhjemmet Hospital, Oslo, Norway. From April 2017 to May 2018, 402 patients allocated to the intervention group in a randomized controlled trial were included in this sub-study. During their ED visit, these patients received medication reconciliation and medication review conducted by study pharmacists, in addition to standard care. Retrospectively, an interdisciplinary team assessed the reconciled drug list and identified drug-related issues alongside demographics, final diagnosis, and laboratory tests for all patients to determine whether their ED visit was drug-related. The study population's median age was 67 years (IQR 27, range 19-96), and patients used a median of 4 regular drugs (IQR 6, range 0-19). In total, 79 (19.7%) patients had a drug-related ED visit, and identified risk factors were increasing age, increasing number of regular drugs and medical referral reason. Adverse effects (72.2%) and non-adherence (16.5%) were the most common causes of drug-related ED visits.
Antithrombotic agents were most frequently involved in drug-related ED visits, while immunosuppressants had the highest relative frequency. Only 11.4% of the identified drug-related ED visits were documented by physicians during ED/hospital stay. In the investigated population, 19.7% had a drug-related ED visit, indicating that drug-related ED visits are a major concern. If not recognized and handled, this could be a threat against patient safety. Identified risk factors can be used to identify patients in need of additional attention regarding their drug list during the ED visit. abstract_id: PUBMED:23465404 Medication-related emergency department visits and hospital admissions in pediatric patients: a qualitative systematic review. Objective: To review and describe the current literature pertaining to the incidence, classification, severity, preventability, and impact of medication-related emergency department (ED) and hospital admissions in pediatric patients. Study Design: A systematic search of PubMED, Embase, and Web of Science was performed using the following terms: drug toxicity, adverse drug event, medication error, emergency department, ambulatory care, and outpatient clinic. Additional articles were identified by a manual search of cited references. English language, full-reports of pediatric (≤18 years) patients that required an ED visit or hospital admission secondary to an adverse drug event (ADE) were included. Results: We included 11 studies that reported medication-related ED visit or hospital admission in pediatric patients. Incidence of medication-related ED visits and hospital admissions ranged from 0.5%-3.3% and 0.16%-4.3%, respectively, of which 20.3%-66.7% were deemed preventable. Among ED visits, 5.1%-22.1% of patients were admitted to hospital, with a length of stay of 24-72 hours. The majority of ADEs were deemed moderate in severity. Types of ADEs included adverse drug reactions, allergic reactions, overdose, medication use with no indication, wrong drug prescribed, and patient not receiving a drug for an indication. Common causative agents included respiratory drugs, antimicrobials, central nervous system drugs, analgesics, hormones, cardiovascular drugs, and vaccines. Conclusion: Medication-related ED visits and hospital admissions are common in pediatric patients, many of which are preventable. These ADEs result in significant healthcare utilization. abstract_id: PUBMED:34100393 Medication - A boon or bane: Emergencies due to medication-related visits. Background: Medication-related visits (MRV) to the Emergency Department (ED) are substantial though weakly recognized and intervened. Data from developing countries on the prevalence of MRV-related ED admissions are scanty. This study is first of its kind in India to estimate the prevalence of MRV, its severity and the factors contributing to these visits. Methodology: This prospective observational study was done in the ED of an apex tertiary care center in August 2018. A convenient cross-sectional sample of patients presenting with emergencies regarding drug use or ill-use were included and a questionnaire filled after obtaining a written informed consent. Results: During the study period, a cross-sectional sample of 443 patients was studied and the prevalence of MRV was 27.1% (120/443). The mean age was 55 (standard deviation: 15) years with a male preponderance (60.8%). Triage priority I patients comprised 39.1%. 
Common presenting complaints included vomiting (25%), seizure (20.8%), giddiness (20%), and abdomen pain (17.5%). Less than ½ (43.3%) were compliant to prescribed medication. The most common reasons for MRV were failure to receive drugs/noncompliance (47.5%), subtherapeutic dosage (25%), and adverse drug reaction (16.7%). Severity of MRV was classified as mild (50%), moderate (38.3%), and severe (11.7%). Out of these visits, 71 (59.2%) were deemed preventable. Three-fourths (73.3%) were stabilized and discharged from the ED. Conclusion: The fact that a quarter of the ED visits are due to MRV and that more than half of them are preventable is quite alarming. Diligent patient education by the treating physicians may perhaps help in decreasing the incidence of this deleterious event. abstract_id: PUBMED:10630823 Medication-related visits to the emergency department: a prospective study. Objective: To assess the prevalence, preventability, category, and severity of drug-related problems; medications involved; common causes of preventable medication-related visits (MRVs); and cost of MRVs. Methods: In this prospective, observational study of emergency department visits, patients were identified as having MRVs by use of a questionnaire. The patients or their representatives were interviewed to assess whether the emergency department visit was caused by a medication-related problem. Patient selection was consecutive. A blinded panel of pharmacists and one physician was used to assess study objectives. Results: Of the 253 patients interviewed, 71 patients (28.1%) presenting to the emergency department had a medication-related reason for their visits. Of the 71 MRVs, 50 (70.4%) were judged to be preventable and 21 (29.6%) nonpreventable. The data showed that MRVs are most often caused by an adverse drug reaction, followed by overprescribing of the correct medication. The severity of MRVs was most often classified as moderate. The most common medications involved were cardiovascular agents. The preventable MRVs were mostly due to noncompliance issues, inappropriate prescribing, inappropriate monitoring, and lack of patient education. The average cost to the institution was approximately $1444 per preventable MRV. Conclusions: In our study, the prevalence of MRVs was higher than in previous studies due to the prospective nature of the study and the assistance of drug experts in identifying and classifying the incidents. Areas identified for improvement included compliance issues with patients, education of healthcare workers regarding prescribing and monitoring of medications, and patient education. Answer: Yes, emergency physicians do attribute drug-related emergency department visits to medication-related problems, but a significant proportion of these visits are not deemed medication-related by emergency physicians. A study found that out of 944 patients enrolled, 44 patients received a diagnosis of an adverse drug event, and 27 of these were categorized as medication-related by emergency physicians (61.4%). However, 10 were categorized as uncertain (22.7%), and 7 categorized as non-medication-related problems (15.9%). When the definition of an adverse drug event was broadened to include drug interactions, drug withdrawal, medication errors, or noncompliance, 78 patients (8.3%) received such a diagnosis. Of these, emergency physicians attributed 49 to a medication-related problem (62.8%), were uncertain about 15 (19.2%), and attributed 14 to non-medication-related problems (17.9%). 
This indicates that while emergency physicians do recognize a substantial number of drug-related visits as medication-related, there is still a portion that may be missed or not attributed to medication-related problems (PUBMED:20005010).
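A brief note on the confidence intervals quoted in this record: the proportions in PUBMED:20005010, such as 27 of 44 visits attributed to a medication-related problem (61.4%; 95% CI 46.5% to 74.3%), are consistent with a Wilson score interval for a binomial proportion. The minimal sketch below applies that standard formula; it is illustrative only and not a claim about the exact method the study authors used.

from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (z = 1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

low, high = wilson_ci(27, 44)
print(f"27/44 = {27/44:.1%}, 95% CI {low:.1%} to {high:.1%}")
# -> 27/44 = 61.4%, 95% CI 46.6% to 74.3% (matching the reported interval up to rounding)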
Instruction: Evening continuity clinic: preserving primary care education in the face of duty hour limitations? Abstracts: abstract_id: PUBMED:15264939 Evening continuity clinic: preserving primary care education in the face of duty hour limitations? Objective: Residency programs with postcall afternoon continuity clinics violate the new Accreditation Council for Graduate Medical Education (ACGME) limitations on resident duty hours. We evaluated housestaff experience with a pilot intervention that replaced postcall continuity clinics with evening continuity clinics. Methods: We began this pilot program at one continuity clinic site for pediatric residents. Instead of postcall clinics, residents had evening continuity clinic added to a regular clinic day when they were neither postcall nor on call. At 5 and 11 months, we surveyed housestaff satisfaction and experience with the evening clinics, particularly in comparison to postcall clinics. Results: Nineteen of 23 pediatric residents participated in the pilot program. Twenty-two and 17 residents completed the 5- and 11-month follow-up surveys, respectively. A significantly greater proportion of residents rated their overall satisfaction with evening clinic as good/outstanding (16/18, 89%) compared with postcall clinic (2/19, 11%) at the 5-month survey (P<.01). Resident preference for evening clinic over postcall clinic persisted but was not statistically significant at 11 months (P =.05), and overall satisfaction with evening clinic was unchanged from the 5- and 11-month surveys (P =.64). All areas of patient care, medical education, and clinic infrastructure were better or equal in evening clinic in comparison to postcall clinic except for continuity of preceptors and access to medical services. Conclusion: Housestaff had greater satisfaction and a better clinic experience with evening clinic versus postcall clinic. Evening continuity clinic is a viable solution to meeting the ACGME work hour limitations while preserving housestaff primary care education. abstract_id: PUBMED:26384223 An observational pre-post study of re-structuring Medicine inpatient teaching service: Improved continuity of care within constraint of 2011 duty hours. Background: Implementation of more stringent regulations on duty hours and supervision by the Accreditation Council for Graduate Medical Education in July 2011 makes it challenging to design inpatient Medicine teaching service that complies with the duty hour restrictions while optimizing continuity of patient care. Objective: To prospectively compare two inpatient Medicine teaching service structures with respect to residents' impression of continuity of patient care (primary outcome), time available for teaching, resident satisfaction and length-of-stay (secondary endpoints). Design: Observational pre-post study. Methods: Surveys were conducted both before and after Conventional Medicine teaching service was changed to a novel model (MegaTeam). Settings: Academic General Medicine inpatient teaching service. Results: Surveys before and after MegaTeam implementation were completed by 68.5% and 72.2% of internal medicine residents, respectively. 
Comparing conventional with MegaTeam, the % of residents who agreed or strongly agreed that the (i) ability to care for majority of patients from admission to discharge increased from 29.7% to 86.6% (p<0.01); (ii) the concern that number of handoffs was too many decreased from 91.9% to 18.2% (p<0.01); (iii) ability to provide appropriate supervision to interns increased from 38.1% to 70.7% (p<0.01); (iv) overall resident satisfaction with inpatient Medicine teaching service increased from 24.7% to 56.4% (p<0.01); and (v) length-of-stay on inpatient Medicine service decreased from 5.3±6.2 to 4.9±6.8 days (p<0.03). Conclusions: According to our residents, the MegaTeam structure promotes continuity of patient care, decreases number of handoffs, provides adequate supervision and teaching of interns and medical students, increases resident overall satisfaction and decreases length-of-stay. abstract_id: PUBMED:37355401 Defining the Duty Hour for Surgical Trainees. Objective: Being mindful of duty hours has become an integral part of surgical training. Violations can lead to disciplinary action by the American Council for Graduate Medical Education (ACGME), including probation or even withdrawal of accreditation. It is therefore crucial to ensure these hours are accurately reported. However, as these are often self-reported by the resident, what counts as a duty hour is at the discretion of the reporter. The goal of this study is to identify what trainees and faculty include in their definitions of a duty hour. We hypothesized that there would be discrepancies in faculty versus trainee definitions of the duty hour, and that there remains an unclear understanding of which nonclinical activities contribute to surgical trainee duty hours. Design: An anonymous, voluntary survey was conducted at a single institution. The survey contained 14 scenarios, and participants answered either "yes" or "no" as to if they believed the scenario should be counted within duty hour reporting. Analysis of the results included evaluating overall responses to determine which scenarios were more controversial, as well as chi square analysis comparing trainee (residents and fellows) versus faculty responses to each scenario. Setting: This survey was performed within the Department of Surgery at the University of Texas Southwestern Medical Center, a large academic institution in Dallas, TX. Participants: There were 91 total faculty and trainee responses to the voluntary survey within the General Surgery Department and associated subspecialties, including 50 residents (54.9%), 4 clinical fellows (4.4%) and 37 faculty (40.7%). Results: When analyzing total responses, the most controversial scenarios were taking a short period of home call (50.6% of all respondents included this as a duty hour), making a presentation for resident education (48.4%), making a presentation related to patient care (57.1%), and making a monthly call schedule (44.0%). The least controversial topic was transit to and from work (91.2% of all respondents did not include this as a duty hour). Additionally, there were statistically significant differences between trainee and faculty perceptions when it came to attending departmental curricula (96.2% trainees included as a duty hour v 81.6% faculty, p =0.02), participating in nonmandatory journal club (5.7% trainees v 23.7% faculty, p =0.01), and attending mentorship meetings (30.2% trainees v 52.6% faculty, p =0.03). 
Conclusions: There is no consensus as to what nonclinical activities formally count towards a duty hour. There are also significant differences identified between faculty and trainee definitions, which could have implications for duty hour reporting and ACGME violations. Further research is required to obtain a clearer picture of the surgical opinion on defining the duty hour, and hopefully this will reduce duty hour violations and better optimize surgical trainee education. abstract_id: PUBMED:29073389 Impact of extended duty hours on medical trainees. Many studies on resident physicians have demonstrated that extended work hours are associated with a negative impact on well-being, education, and patient care. However, the relationship between the work schedule and the degree of impairment remains unclear. In recent years, because of concerns for patient safety, national minimum standards for duty hours have been instituted (2003) and revised (2011). These changes were based on studies of the effects of sleep deprivation on human performance and specifically on the effect of extended shifts on resident performance. These requirements necessitated significant restructuring of resident schedules. Concerns were raised that these changes have impaired continuity of care, resident education and supervision, and patient safety. We review the studies on the effect of extended work hours on resident well-being, education, and patient care as well as those assessing the effect of work hour restrictions. Although many studies support the adverse effects of extended shifts, there are some conflicting results due to factors such as heterogeneity of protocols, schedules, subjects, and environments. Assessment of the effect of work hour restrictions has been even more difficult. Recent data demonstrating that work hour limitations have not been associated with improvement in patient outcomes or resident education and well-being have been interpreted as support for lifting restrictions in some specialties. However, these studies have significant limitations and should be interpreted with caution. Until future research clarifies duty hours that optimize patient outcomes, resident education, and well-being, it is recommended that current regulations be followed. abstract_id: PUBMED:26073714 Effect of 2011 Accreditation Council for Graduate Medical Education Duty-Hour Regulations on Objective Measures of Surgical Training. Objective: In July 2011, new Accreditation Council for Graduate Medical Education duty-hour regulations were implemented in surgical residency programs. We examined whether differences in objective measures of surgical training exist at our institution since implementation. Design: Retrospective reviews of the American Board of Surgery In-Training Examination performance and surgical case volume were collected for 5 academic years. Data were separated into 2 groups, Period 1: July 2008 through June 2011 and Period 2: July 2011 through June 2013. Setting: Single-institution study conducted at the Mount Sinai Hospital, New York, NY, a tertiary-care academic center. Participants: All general surgery residents, levels postgraduate year 1 through 5, from July 2008 through June 2013. Results: No significant differences in the American Board of Surgery In-Training Examination total correct score or overall test percentile were noted between periods for any levels. Intern case volume increased significantly in Period 2 (90 vs 77, p = 0.036). 
For chief residents graduating in Period 2, there was a significant increase in total major cases (1062 vs 945, p = 0.002) and total chief cases (305 vs 267, p = 0.02). Conclusions: The duty-hour regulations did not negatively affect objective measures of surgical training in our program. Compliance with the Accreditation Council for Graduate Medical Education duty-hour regulations correlated with an increase in case volume. Adaptations made by our institution, such as maximizing daytime duty hours and increasing physician extenders, likely contributed to our findings. abstract_id: PUBMED:26473789 On resident duty hour restrictions and neurosurgical training: review of the literature. Within neurosurgery, the national mandate of the 2003 duty hour restrictions (DHR) by the Accreditation Council for Graduate Medical Education (ACGME) has been controversial. Ensuring the proper education and psychological well-being of residents while fulfilling the primary purpose of patient care has generated much debate. Most medical disciplines have developed strategies that address service needs while meeting educational goals. Additionally, there are numerous studies from those disciplines; however, they are not specifically relevant to the needs of a neurosurgical residency. The recent implementation of the 2011 DHR specifically aimed at limiting interns to 16-hour duty shifts has proven controversial and challenging across the nation for neurosurgical residencies--again bringing education and service needs into conflict. In this report the current literature on DHR is reviewed, with special attention paid to neurosurgical residencies, discussing resident fatigue, technical training, and patient safety. Where appropriate, other specialty studies have been included. The authors believe that a one-size-fits-all approach to residency training mandated by the ACGME is not appropriate for the training of neurosurgical residents. In the authors' opinion, an arbitrary timeline designed to limit resident fatigue limits patient care and technical training, and has not improved patient safety. abstract_id: PUBMED:17646602 Effect of residency duty-hour limits: views of key clinical faculty. Background: To determine the effect of duty-hour limitations, it is important to consider the views of faculty who have the most contact with residents. Method: We conducted a national survey of key clinical faculty (KCF) at 39 internal medicine residency programs affiliated with US medical schools selected by random sample stratified by federal research funding and program size to elicit their views on the effect of duty-hour limitations on residents' patient care, education, professionalism, and well-being and on faculty workload and satisfaction. Results: Of 154 KCF surveyed, 111 (72%) responded. The KCF reported worsening in residents' continuity of care (87%) and the physician-patient relationship (75%). Faculty believed that residents' education (66%) and professionalism, including accountability to patients (73%) and ability to place patient needs above self-interests (57%), worsened, yet 50% thought residents' well-being improved. The KCF reported spending more time providing inpatient services (47%). Faculty noted decreased satisfaction with teaching (56%), ability to develop relationships with residents (40%), and overall career satisfaction (31%). 
In multivariate analysis, KCF with 5 years of teaching experience or more were more likely to perceive a negative effect of duty hours on residents' education (odds ratio, 2.84; 95% confidence interval, 1.15-7.00). Conclusions: Key clinical faculty believe that duty-hour limitations have adversely affected important aspects of residents' patient care, education, and professionalism, as well as faculty workload and satisfaction. Residency programs should continue to look for ways to optimize experiences for residents and faculty within the confines of the duty-hour requirements. abstract_id: PUBMED:26520873 Of duty hour violations and shift work: changing the educational paradigm. Background: Successful surgical education balances learning opportunities with Accreditation Council on Graduate Medical Education (ACGME) duty hour requirements. We instituted a night shift system and hypothesized that implementation would decrease duty hour violations while maintaining quality education. Methods: A system of alternating teams working 12-hour shifts was instituted and was assessed via an electronic survey distributed at 2, 6, and 12 months after implementation. Resident duty hour violations and resident case volume were evaluated for 1 year before and 2 years after implementation of the night shift system. Results: Survey data revealed a decrease in the perception that residents had problems meeting duty hour restrictions from 44% to 14% at 12 months (P = .012). Total violations increased 26% in the 1st year, subsequently decreasing by 62%, with shift length violations decreasing by 90%. Resident availability for didactics was improved, and average operative cases per academic year increased by 65%. Conclusions: Night shift systems are feasible and help meet duty hour requirements. Our program decreased violations while increasing operative volume and didactic time. abstract_id: PUBMED:27411835 Resident duty hour modification affects perceptions in medical education, general wellness, and ability to provide patient care. Background: Resident duty hours have recently been under criticism, with concerns for resident and patient well-being. Historically, call shifts have been long, and some residency training programs have now restricted shift lengths. Data and opinions about the effects of such restrictions are conflicting. The Internal Medicine Residency Program at Dalhousie University recently moved from a traditional call structure to a day float/night float system. This study evaluated how this change in duty hours affected resident perceptions in several key domains. Methods: Senior residents from an internal medicine training program in Canada responded to an anonymous online survey immediately before and 6 months after the implementation of duty hour reform. The survey contained questions relating to three major domains: resident wellness, ability to deliver quality health care, and medical education experience. Mean pre- and post-intervention scores were compared using the t-test for paired samples. Results: Twenty-three of 27 (85 %) senior residents completed both pre- and post-reform surveys. Residents perceived significant changes in many domains with duty hour reform. These included improved general wellness, less exposure to personal harm, fewer feelings of isolation, less potential for error, improvement in clinical skills expertise, increased work efficiency, more successful teaching, increased proficiency in medical skills, more successful learning, and fewer rotation disruptions. 
Conclusions: Senior residents in a Canadian internal medicine training program perceived significant benefits in medical education experience, ability to deliver healthcare, and resident wellness after implementation of duty hour reform. abstract_id: PUBMED:27234632 Program Director Perceptions of Surgical Resident Training and Patient Care under Flexible Duty Hour Requirements. Background: The Flexibility in Duty Hour Requirements for Surgical Trainees (FIRST) trial was a national, cluster-randomized, pragmatic, noninferiority trial of 117 general surgery programs, comparing standard ACGME resident duty hour requirements ("Standard Policy") to flexible, less-restrictive policies ("Flexible Policy"). Participating program directors (PDs) were surveyed to assess their perceptions of patient care, resident education, and resident well-being during the study period. Study Design: A survey was sent to all PDs of the general surgery residency programs participating in the FIRST trial (N = 117 [100% response rate]) in June and July 2015. The survey compared PDs' perceptions of the duty hour requirements in their arm of the FIRST trial during the study period from July 1, 2014 to June 30, 2015. Results: One hundred percent of PDs in the Flexible Policy arm indicated that residents used their additional flexibility in duty hours to complete operations they started or to stabilize a critically ill patient. Compared with the Standard Policy arm, PDs in the Flexible Policy arm perceived a more positive effect of duty hours on the safety of patient care (68.9% vs 0%; p < 0.001), continuity of care (98.3% vs 0%; p < 0.001), and resident ability to attend educational activities (74.1% vs 3.4%; p < 0.001). Most PDs in both arms reported that safety of patient care (71.8%), continuity of care (94.0%), quality of resident education (83.8%), and resident well-being (55.6%) would be improved with a hypothetical permanent adoption of more flexible duty hours. Conclusions: Program directors involved in the FIRST trial perceived improvements in patient safety, continuity of care, and multiple aspects of resident education and well-being with flexible duty hours. Answer: The abstract from PUBMED:15264939 discusses a pilot program that replaced postcall continuity clinics with evening continuity clinics for pediatric residents. This change was made in response to the new Accreditation Council for Graduate Medical Education (ACGME) limitations on resident duty hours. The study aimed to evaluate housestaff experience with the evening clinics compared to the postcall clinics. The results showed that a significantly greater proportion of residents rated their overall satisfaction with evening clinic as good/outstanding compared to postcall clinic at the 5-month survey. Resident preference for evening clinic over postcall clinic persisted at 11 months, and overall satisfaction with evening clinic was unchanged from the 5- and 11-month surveys. The study concluded that housestaff had greater satisfaction and a better clinic experience with evening clinic versus postcall clinic, suggesting that evening continuity clinic is a viable solution to meeting ACGME work hour limitations while preserving housestaff primary care education.
Instruction: Do baseline characteristics accurately discriminate between patients likely versus unlikely to benefit from implantable defibrillator therapy? Abstracts: abstract_id: PUBMED:11136493 Do baseline characteristics accurately discriminate between patients likely versus unlikely to benefit from implantable defibrillator therapy? Evaluation of the Canadian implantable defibrillator study implantable cardioverter defibrillator efficacy score in the antiarrhythmics versus implantable defibrillators trial. Objective: Our purpose was to evaluate whether baseline characteristics predictive of implantable cardioverter defibrillator (ICD) efficacy in the Canadian Implantable Defibrillator Study (CIDS) are predictive in the Antiarrhythmics Versus Implantable Defibrillators (AVID) Trial. Background: ICD therapy is superior to antiarrhythmic drug use in patients with life-threatening arrhythmias. However, identification of subgroups most likely to benefit from ICD therapy may be useful. Data from CIDS suggest that 3 characteristics (age ≥70 years, ejection fraction [EF] ≤0.35, and New York Heart Association class >II) can be combined to reliably categorize patients as likely (≥2 characteristics) versus unlikely to benefit (<2 characteristics) from ICD therapy. Methods: The utility of the CIDS categorization of ICD efficacy was assessed by Kaplan-Meier analysis and Cox hazards modeling. The accuracy of the CIDS score was formally tested by evaluating for interaction between categorization of benefit and treatment in a Cox model. Results: ICD therapy was associated with a significantly lower risk of death in the 320 patients categorized as likely to benefit (relative risk [RR] 0.57, 95% confidence interval [CI] 0.37-0.88, P =.01) and a trend toward a lower risk of death in the 689 patients categorized as unlikely to benefit (RR 0.70, 95% CI 0.48-1.03, P =.07). Categorization of benefit was imperfect, as evidenced by a lack of statistical interaction (P =.5). Although 32 of the 42 deaths prevented by ICD therapy in AVID were in patients categorized as likely to benefit, all 42 of these patients had EF values ≤0.35. Neither advanced age nor poorer functional class predicted ICD efficacy in AVID. Conclusion: Of the 3 characteristics identified to predict ICD efficacy in CIDS, only depressed EF predicted ICD efficacy in AVID. Thus physicians faced with limited resources might elect to consider ICD therapy over antiarrhythmic drug use in patients with severely depressed EF values. abstract_id: PUBMED:37964443 Benefit of primary and secondary prophylactic implantable cardioverter defibrillator in elderly patients. Background: The benefit of implantable cardioverter-defibrillator (ICD) therapy in elderly patients has been questioned. In the present study, we aimed to analyse the outcome of patients of different age groups with ICD implantation. Methods: We included all patients who received an ICD in our hospital from 2011 to 2020. Primary endpoints were (1) death from any cause and (2) appropriate ICD therapy (antitachycardia pacing/shock). A "benefit of ICD implantation" was defined as appropriate ICD therapy before death from any cause, or survival. "No benefit of ICD implantation" was defined as death from any cause without prior appropriate ICD therapy. Results: A total of 422 patients received an ICD (primary prophylaxis n = 323, secondary prophylaxis n = 99). At the time of implantation, 35 patients (8%) were >80 years and 106 patients were >75 years (25%).
During the study period of 4.2 ± 3 years, benefit of ICD occurred in 89 patients (21%) and no benefit in 84 patients (20%). In primary prevention, the proportion of patients who had a benefit from ICD implantation decreased with increasing age, and there were no patients who benefited from ICD therapy in the group of patients >80 years. In secondary prophylaxis, the proportion of patients with a benefit from ICD implantation ranged from 20% to 30% in all age groups. Conclusion: Our study suggests that the indication of primary prophylactic ICD in elderly and very old patients should be critically assessed. On the other hand, no patient should be denied secondary prophylactic ICD implantation because of age. abstract_id: PUBMED:10758047 Identification of patients most likely to benefit from implantable cardioverter-defibrillator therapy: the Canadian Implantable Defibrillator Study. Background: Patients with resuscitated ventricular tachyarrhythmias (ventricular tachycardia/ventricular fibrillation) benefit from implantable cardioverter-defibrillators (ICDs) compared with medical therapy. We hypothesized that the patients who benefit most from an ICD are those at greatest risk of death. Methods and Results: In the Canadian Implantable Defibrillator Study (CIDS), 659 patients with resuscitated ventricular tachyarrhythmias were randomly assigned to receive an ICD or amiodarone and were then followed for a mean of 3 years. There were 98 and 83 deaths in the amiodarone and ICD groups, respectively. We used multivariate Cox analysis to assess the impact of baseline parameters on the mortality in the amiodarone group. Reduced left ventricular ejection fraction, advanced age, and poor NYHA status identified high-risk patients (P=0.0001 to 0.0009). Quartiles of risk were constructed, and the mortality reduction associated with ICD treatment in each quartile was assessed. There was a significant interaction between risk quartile and the ICD treatment effect (P=0.011). In the highest risk quartile, there was a 50% relative risk reduction (95% CI 21% to 68%) of death in the ICD group, whereas in the 3 lower quartiles, there was no benefit. Patients who are most likely to benefit from an ICD can be identified with a simple risk score (≥2 of the following factors: age ≥70 years, left ventricular ejection fraction ≤35%, and NYHA class III or IV). Thirteen of 15 deaths that were prevented by the ICD occurred in patients with ≥2 risk factors. Conclusions: In CIDS, patients at highest risk of death benefited most from ICD therapy. These can be identified easily on the basis of age, poor ventricular function, and poor functional status. abstract_id: PUBMED:29759827 Subcutaneous Versus Transvenous Implantable Defibrillator Therapy: A Meta-Analysis of Case-Control Studies. Objectives: This study aims to conduct a meta-analysis comparing efficacy and safety outcomes between subcutaneous implantable cardioverter-defibrillator (S-ICD) and transvenous implantable cardioverter-defibrillator (TV-ICD). Background: The S-ICD was developed to minimize complications related to the conventional TV-ICD. Direct comparison of clinical outcomes between the 2 devices has been limited by varying patient characteristics and definitions of complications with no randomized trials completed comparing these systems. Methods: Studies in the PubMed and Embase databases and secondary referencing sources were systematically reviewed. Studies meeting criteria were included in the meta-analysis.
Baseline characteristics and outcome data of the S-ICD and TV-ICD groups were appraised and analyzed. A random-effects model was used to derive odds ratio (OR) with 95% confidence interval (CI). Results: Five studies met inclusion criteria. Baseline characteristics were similar between the S-ICD and TV-ICD groups. Fewer lead complications occurred in the S-ICD group compared to the TV-ICD group (OR: 0.13; 95% CI: 0.05 to 0.38). The infection rate was similar between the S-ICD and TV-ICD groups (OR: 0.75; 95% CI: 0.30 to 1.89). There were no differences in system or device failures between groups (OR: 1.13; 95% CI: 0.43 to 3.02). Overall, inappropriate therapy (T-wave oversensing, supraventricular tachycardia, episodes of inappropriate sensing) was similar between the 2 groups (OR: 0.87; 95% CI: 0.51 to 1.49). However, the nature of inappropriate therapy was different between the S-ICD and TV-ICD groups. Both devices appear to perform equally well with respect to appropriate shocks. Conclusions: S-ICD reduced lead-related complications but was similar to TV-ICD with regard to non-lead-related complications, including inappropriate therapy. These results support the concept that S-ICD is a safe and effective alternative to TV-ICD in appropriate patients. abstract_id: PUBMED:33417692 Predicted benefit of an implantable cardioverter-defibrillator: the MADIT-ICD benefit score. Aims: The benefit of prophylactic implantable cardioverter-defibrillator (ICD) is not uniform due to differences in the risk of life-threatening ventricular tachycardia (VT)/ventricular fibrillation (VF) and non-arrhythmic mortality. We aimed to develop an ICD benefit prediction score that integrates the competing risks. Methods and Results: The study population comprised all 4531 patients enrolled in the MADIT trials. Best-subsets Fine and Gray regression analysis was used to develop prognostic models for VT (≥200 b.p.m.)/VF vs. non-arrhythmic mortality (defined as death without prior sustained VT/VF). Eight predictors of VT/VF (male, age < 75 years, prior non-sustained VT, heart rate > 75 b.p.m., systolic blood pressure < 140 mmHg, ejection fraction ≤ 25%, myocardial infarction, and atrial arrhythmia) and 7 predictors of non-arrhythmic mortality (age ≥ 75 years, diabetes mellitus, body mass index < 23 kg/m2, ejection fraction ≤ 25%, New York Heart Association ≥II, ICD vs. cardiac resynchronization therapy with defibrillator, and atrial arrhythmia) were identified. The two scores were combined to create three MADIT-ICD benefit groups. In the highest benefit group, the 3-year predicted risk of VT/VF was three-fold higher than the risk of non-arrhythmic mortality (20% vs. 7%, P < 0.001). In the intermediate benefit group, the difference in the corresponding predicted risks was attenuated (15% vs. 9%, P < 0.01). In the lowest benefit group, the 3-year predicted risk of VT/VF was similar to the risk of non-arrhythmic mortality (11% vs. 12%, P = 0.41). A personalized ICD benefit score was developed based on the distribution of the two competing risks scores in the study population (https://is.gd/madit). Internal and external validation confirmed model stability. Conclusions: We propose the novel MADIT-ICD benefit score that predicts the likelihood of prophylactic ICD benefit through personalized assessment of the risk of VT/VF weighed against the risk of non-arrhythmic mortality.
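To make the structure of the MADIT-ICD benefit score concrete, the following Python sketch simply counts the eight VT/VF predictors and the seven non-arrhythmic mortality predictors listed in the abstract above and compares the two counts. It is only an illustration of the competing-risks idea: the published score is built from Fine and Gray regression coefficients and validated group cut-offs that are not reproduced in the abstract, so the equal weighting, the thresholds in benefit_group, and all names below are assumptions rather than the actual algorithm (the validated calculator is available at https://is.gd/madit).

from dataclasses import dataclass

@dataclass
class Patient:
    male: bool
    age: float                  # years
    prior_nsvt: bool            # prior non-sustained VT
    heart_rate: float           # b.p.m.
    systolic_bp: float          # mmHg
    ef: float                   # ejection fraction, %
    prior_mi: bool              # prior myocardial infarction
    atrial_arrhythmia: bool
    diabetes: bool
    bmi: float                  # kg/m2
    nyha_class: int             # 1-4
    device_is_icd: bool         # True = ICD, False = CRT-D

def vt_vf_flags(p: Patient) -> int:
    # Count the eight VT/VF predictors named in the abstract (equal weights assumed).
    return sum([
        p.male,
        p.age < 75,
        p.prior_nsvt,
        p.heart_rate > 75,
        p.systolic_bp < 140,
        p.ef <= 25,
        p.prior_mi,
        p.atrial_arrhythmia,
    ])

def non_arrhythmic_mortality_flags(p: Patient) -> int:
    # Count the seven non-arrhythmic mortality predictors named in the abstract.
    return sum([
        p.age >= 75,
        p.diabetes,
        p.bmi < 23,
        p.ef <= 25,
        p.nyha_class >= 2,
        p.device_is_icd,
        p.atrial_arrhythmia,
    ])

def benefit_group(p: Patient) -> str:
    # Hypothetical grouping rule for illustration only: compare the two counts.
    diff = vt_vf_flags(p) - non_arrhythmic_mortality_flags(p)
    if diff >= 2:
        return "higher predicted ICD benefit"
    if diff <= 0:
        return "lower predicted ICD benefit"
    return "intermediate predicted ICD benefit"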
abstract_id: PUBMED:24333490 The effect of intermittent atrial tachyarrhythmia on heart failure or death in cardiac resynchronization therapy with defibrillator versus implantable cardioverter-defibrillator patients: a MADIT-CRT substudy (Multicenter Automatic Defibrillator Implantation Trial With Cardiac Resynchronization Therapy). Objectives: This study aimed to investigate the effect of both history of intermittent atrial tachyarrhythmias (IAT) and in-trial IAT on the risk of heart failure (HF) or death comparing cardiac resynchronization therapy with defibrillator (CRT-D) to implantable cardioverter-defibrillator (ICD) treatment in mildly symptomatic HF patients with left bundle branch block (LBBB). Background: Limited data exist regarding the benefit of CRT-D in patients with IAT. Methods: The benefit of CRT-D in reducing the risk of HF/death was evaluated using multivariate Cox models incorporating the presence of, respectively, a history of IAT at baseline and time-dependent development of in-trial IAT during follow-up in 1,264 patients with LBBB enrolled in the MADIT-CRT (Multicenter Automatic Defibrillator Implantation Trial With Cardiac Resynchronization Therapy) study. Results: The overall beneficial effect of CRT-D versus ICD on the risk of HF/death was not significantly different between LBBB patients with or without history of IAT (HR: 0.50, p = 0.028, and HR: 0.46, p &lt; 0.001, respectively; p for interaction = 0.79). Among patients who had in-trial IAT, CRT-D was associated with a significant 57% reduction in the risk of HF/death compared with ICD-only therapy (HR: 0.43, p = 0.047), similar to the effect of the device among patients who did not have IAT (HR: 0.47, p &lt; 0.001; p for interaction = 0.85). The percentage of patients with biventricular pacing ≥92% was similar in both groups (p = 0.43). Consistent results were shown for the benefit of CRT-D among patients who had in-trial atrial fibrillation/flutter (HR: 0.30, p = 0.027; p for interaction = 0.41). Conclusions: In the MADIT-CRT study, the clinical benefit of CRT-D in LBBB patients was not attenuated by prior history of IAT or by the development of in-trial atrial tachyarrhythmias. (MADIT-CRT: Multicenter Automatic Defibrillator Implantation Trial With Cardiac Resynchronization Therapy; NCT00180271). abstract_id: PUBMED:35038570 Subcutaneous versus transvenous implantable defibrillator in patients with hypertrophic cardiomyopathy. Background: Hypertrophic cardiomyopathy (HCM) is the most prevalent inherited cardiomyopathy. The implantable cardioverter-defibrillator (ICD) is important for prevention of sudden cardiac death (SCD) in patients at high risk. In recent years, the subcutaneous implantable cardioverter-defibrillator (S-ICD) has emerged as a viable alternative to the transvenous implantable cardioverter-defibrillator (TV-ICD). The S-ICD does not require intravascular access; however, it cannot provide antitachycardia pacing (ATP) therapy. Objective: The purpose of this study was to assess the real-world incidence of ICD therapy in patients with HCM implanted with TV-ICD vs S-ICD. Methods: We compared the incidence of ATP and shock therapies among all HCM patients with S-ICD and TV-ICD enrolled in the Boston Scientific ALTITUDE database. Cumulative Kaplan-Meier incidence was used to compare therapy-free survival, and Cox proportional hazard ratios were calculated. We performed unmatched as well as propensity match analyses. 
Results: We included 2047 patients with TV-ICD and 626 patients with S-ICD, followed for an average of 1650.5 ± 1038.5 days and 933.4 ± 550.6 days, respectively. Patients with HCM and TV-ICD had a significantly higher rate of device therapy compared to those with S-ICD (32.7 vs 14.5 therapies per 100 patient-years, respectively; P &lt;.001), driven by a high incidence of ATP therapy in the TV-ICD group, which accounted for &gt;67% of therapies delivered. Shock incidence was similar between groups, both in the general and the matched cohorts. Conclusion: Patients with HCM and S-ICD had a significantly lower therapy rate than patients with TV-ICD without difference in shock therapy, suggesting potentially unnecessary ATP therapy. Empirical ATP programming in patients with HCM may be unbeneficial. abstract_id: PUBMED:11843460 Cost-effectiveness of implantable cardioverter defibrillator therapy. Cost-efficacy assessment of implantable cardioverter defibrillator (ICD) therapy has proved contentious and may have limited uptake of ICD therapy, particularly in Europe. Published modeling assessments are too inaccurate to determine clinical practice, and assessments based on clinical studies are incomplete (from the cost-efficacy viewpoint). Although ICD therapy seems certain to be most cost-effective in patients who are likely to have good longevity if their risk of sudden cardiac death is countered, the benefit of ICD therapy is not necessarily limited to such groups. Physicians and health economists need to develop a better understanding of how to assess high-technology therapy costs so that uptake of such therapy is appropriately expedited with due regard to ethical and cost constraints. abstract_id: PUBMED:22231644 Dual- versus single-coil implantable defibrillator leads: review of the literature. The preferred use of dual-coil implantable defibrillator lead systems in current implantable defibrillator therapy is likely based on data showing statistically lower defibrillation thresholds with dual-coil defibrillator lead systems. The following review will summarize the clinical data for dual- versus single-coil defibrillator leads in the left and right pectoral implant locations, and will then discuss the clinical implications of single- versus dual-coil usage for atrial defibrillation, venous complications, and the risks associated with lead extraction. It will be noted that there are no comparative clinical studies on the use and outcomes of single- versus dual-coil lead systems in implantable defibrillator therapy over a long-term follow-up. The limited long-term reliability of defibrillator leads is a major concern in implantable defibrillator and cardiac resynchronization therapy. A simpler single-coil defibrillator lead system may improve the long-term performance of implanted leads. Furthermore, the superior vena cava coil is suspected to increase interventional risk in transvenous lead extraction. Therefore, the need for objective data on extractions and complications will be emphasized. abstract_id: PUBMED:28678000 Implantable cardioverter defibrillator knowledge and end-of-life device deactivation: A cross-sectional survey. Background: End-of-life implantable cardioverter defibrillator deactivation discussions should commence before device implantation and be ongoing, yet many implantable cardioverter defibrillators remain active in patients' last days. 
Aim: To examine associations among implantable cardioverter defibrillator knowledge, patient characteristics and attitudes to implantable cardioverter defibrillator deactivation. Design: Cross-sectional survey using the Experiences, Attitudes and Knowledge of End-of-Life Issues in Implantable Cardioverter Defibrillator Patients Questionnaire. Participants were classified as insufficient or sufficient implantable cardioverter defibrillator knowledge and the two groups were compared. Setting/participants: Implantable cardioverter defibrillator recipients (n = 270, mean age 61 ± 14 years; 73% male) were recruited from cardiology and implantable cardioverter defibrillator clinics attached to two tertiary hospitals in Melbourne, Australia, and two in Kentucky, the United States. Results: Participants with insufficient implantable cardioverter defibrillator knowledge (n = 77, 29%) were significantly older (mean age 66 vs 60 years, p = 0.001), less likely to be Caucasian (77% vs 87%, p = 0.047), less likely to have received implantable cardioverter defibrillator shocks (26% vs 40%, p = 0.031), and more likely to have indications of mild cognitive impairment (Montreal Cognitive Assessment score <24: 44% vs 16%, p < 0.001). Insufficient implantable cardioverter defibrillator knowledge was associated with attitudes suggesting unwillingness to discuss implantable cardioverter defibrillator deactivation, even during the last days towards end of life (p < 0.05). Conclusion: Implantable cardioverter defibrillator recipients, especially those who are older or have mild cognitive impairment, often have limited knowledge about implantable cardioverter defibrillator deactivation. This study identified several potential teachable moments throughout the patients' treatment trajectory. An interdisciplinary approach is required to ensure that discussions about implantable cardioverter defibrillator deactivation issues are initiated at appropriate time points, with family members ideally also included. Answer: Baseline characteristics can provide some discrimination between patients likely versus unlikely to benefit from implantable defibrillator therapy, but the accuracy is not perfect.
In this study, 13 of 15 deaths prevented by ICD occurred in patients with ≥2 of these risk factors (PUBMED:10758047). The MADIT-ICD benefit score was developed to predict the likelihood of prophylactic ICD benefit by assessing the risk of life-threatening ventricular tachyarrhythmias against the risk of non-arrhythmic mortality. This score integrates multiple factors and provides a personalized assessment of the potential benefit from ICD therapy (PUBMED:33417692). In summary, while certain baseline characteristics can help identify patients who are more likely to benefit from ICD therapy, the prediction is not absolute, and other factors such as the type of prophylaxis (primary vs. secondary) and individual risk assessments (such as the MADIT-ICD benefit score) may also be important in decision-making.
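As a minimal illustration of the CIDS categorization discussed above, the short Python sketch below counts the three baseline criteria (age ≥70 years, EF ≤0.35, NYHA class >II) and labels a patient as likely to benefit when at least two are present. The rule itself is taken from the CIDS analysis (PUBMED:10758047, PUBMED:11136493); the function names and the worked example are illustrative additions, and, as noted above, the AVID data supported only the ejection fraction component of this rule.

def cids_risk_factor_count(age_years: float, ejection_fraction: float, nyha_class: int) -> int:
    # Count the three CIDS criteria: age >= 70 years, EF <= 0.35, NYHA class > II.
    return sum([
        age_years >= 70,
        ejection_fraction <= 0.35,
        nyha_class > 2,
    ])

def cids_category(age_years: float, ejection_fraction: float, nyha_class: int) -> str:
    # "Likely to benefit" requires at least two of the three criteria.
    n = cids_risk_factor_count(age_years, ejection_fraction, nyha_class)
    return "likely to benefit" if n >= 2 else "unlikely to benefit"

# Worked example: a 72-year-old with an EF of 0.30 in NYHA class II meets two
# of the three criteria (age and EF) and is therefore classed as likely to benefit.
print(cids_category(72, 0.30, 2))   # -> likely to benefit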
Instruction: Is one testicular specimen sufficient for quantitative evaluation of spermatogenesis? Abstracts: abstract_id: PUBMED:7615120 Is one testicular specimen sufficient for quantitative evaluation of spermatogenesis? Objective: To investigate whether quantitative analysis performed on one testicular specimen is adequate for quantitative evaluation of spermatogenic process. Design: Comparison of quantitative analysis of spermatogenic cell types in testicular cytologic aspirates of various sites of each testis. Setting: In each aspirate, a total of 500 Sertoli cells and cells at each of the spermatogenic stages were identified, counted, and grouped according to cell type. A quantitative cell type index was calculated for each type of cell in each aspirate. Mean cell type indexes then were calculated for each of the cell types in the three aspirates of each patient, and variations of a given sample from its mean were compared. Patients: Azoospermic or severely oligospermic infertile men. Interventions: Fine needle aspiration performed on the upper, middle, and lower poles of each testis. Results: Each of the aspirates showed wide deviations from the mean of the three aspirates for that patient. The deviation ranges of the cell type indexes of each of the spermatogenic stages were as follows: spermatogonia, 0.8% to 200%; spermatocytes, 1.4% to 94.3%; spermatids, 2.9% to 200%; and spermatozoa, 0.7% to 128%. In the majority of the patients, at least one of the three aspirates showed a cell type index score that was statistically different from the others. Conclusions: These results suggest that more than one testicular specimen is needed to evaluate quantitatively the spermatogenic process. abstract_id: PUBMED:1380990 Influence of testicular carcinoma on ipsilateral spermatogenesis. A histological review of radical orchiectomy specimens was performed to assess the impact of testicular cancer on spermatogenesis. Slides from 28 patients with testicular cancer were available for review, consisting of 14 pure seminomas, 12 embryonal carcinomas and 2 mixed tumors. For each specimen tubules adjacent (less than 3 mm.) to the tumor and distant (more than 3 mm.) from the tumor were evaluated. This study indicates that marked impairment of ipsilateral spermatogenesis is associated with testicular carcinoma, particularly in the vicinity of the tumor. The quality of distant spermatogenesis appears to be influenced by tumor type and not by elevation of known serum tumor markers, such as human chorionic gonadotropin and alpha-fetoprotein, nor by the presence of carcinoma in situ. abstract_id: PUBMED:11056119 Microsurgical TESE and the distribution of spermatogenesis in non-obstructive azoospermia. We wished to map the distribution of spermatogenesis in different regions of the testis in 58 men with non-obstructive azoospermia, and to develop a rational microsurgical strategy for the testicular sperm extraction (TESE) procedure. One goal was to maximize the chances for retrieving spermatozoa from such men, to minimize tissue loss and pain, and to preserve the chance for successful future procedures. Another goal was to expand upon the previously reported quantitative histological analysis of testicular tissue in 45 azoospermic men undergoing conventional TESE, this time using microsurgical as well as histological mapping. Tubular fullness observed at microsurgery and the presence of spermatozoa in the TESE specimen was compared with the quantitative histological analysis of spermatogenesis. 
Thus, our conclusions about the distribution of spermatogenesis are based on our experience with TESE in 103 consecutive cases of non-obstructive azoospermia. It was confirmed that men with non-obstructive azoospermia caused by germinal failure have a mean of 0 to 3 mature spermatids per seminiferous tubule in contrast to 17-35 mature spermatids per tubule in men with normal spermatogenesis and obstructive azoospermia. The former represented the threshold of quantitative spermatogenesis which must be exceeded in order for spermatozoa to 'spill over' into the ejaculate. Both testicular 'mapping' by multiple biopsy (n = 15) and microsurgical removal of contiguous strips of testicular tissue (n = 43) revealed a diffuse, rather than regional, quantitative distribution of spermatogenesis. A microsurgical approach resulted in the minimal amount of tissue loss and minimal-to-no pain (compared with the original 45 cases already reported). By this means it is often possible to immediately locate the few tubules with spermatogenesis at microsurgery, under local anaesthesia. But even in cases where greater amounts of tissue must be removed in order to find spermatozoa, the microsurgical TESE procedure prevents secondary testicular damage by protecting blood supply and preventing pain and atrophy from increased testicular pressure. Thus, future attempts at TESE-ICSI need not be compromised. abstract_id: PUBMED:35211802 Quantitative evaluation of spermatogenesis by fluorescent histochemistry. Identifying the types of spermatogenic cells that compose seminiferous tubules, as well as qualitative confirmation of the presence or absence of disorders, has been regarded as crucial in spermatogenesis. Sperm count and fertilizing capacity, both of which depend on the quality as well as quantity of spermatogenesis, are factors critical to fertilization. However, the quantitative assessment of spermatogenesis is not commonly practiced. Spermatogenesis has species-specific stages; when the specific stage in the seminiferous tubules is precisely determined, the types of spermatogenic cells in each stage can be spontaneously identified. Thereafter, a unique marker is used to classify the cells observed in each stage. Quantitative assessment of spermatogenesis has the potential to detect inapparent spermatogenesis disorders or numerically indicate the degree of the disorder. To this end, a histochemical approach using unique markers is indispensable for the quantitative assessment of spermatogenesis. Future developments in techniques to measure cell populations using computer software will further facilitate the establishment of quantitative assessment of spermatogenesis as a standard analysis method that can contribute significantly to advance our understanding of spermatogenesis. abstract_id: PUBMED:11489712 Suppression of spermatogenesis in ipsilateral and contralateral testicular tissues in patients with seminoma by human chorionic gonadotropin beta subunit. Objectives: The pathologic complexity of the testicular tumor makes it difficult to demonstrate exactly the relationship between the impaired spermatogenesis in patients with a testicular tumor and the serum level of the human chorionic gonadotropin beta subunit (beta-hCG). Therefore, we performed quantitative evaluation of spermatogenesis in ipsilateral and contralateral testicular tissues of seminoma to simplify the relation pathologically and endocrinologically and to demonstrate the exact correlation between spermatogenesis and serum beta-hCG levels. 
Methods: Fifty-three biopsy specimens from ipsilateral and contralateral testicular tissues of seminoma were analyzed histologically. The quantitative evaluation of spermatogenesis was performed by the mean Johnsen's score count (MJSC). Beta-hCG expression in seminoma was examined immunohistochemically. Serum beta-hCG, testosterone, estradiol, luteinizing hormone, and follicle-stimulating hormone levels were analyzed before orchiectomy. Results: A significant linear relationship (r = -0.82; P &lt;0.005) was found between the serum level of beta-hCG and the MJSC in contralateral testicular tissues but not in ipsilateral ones, although the suppression of spermatogenesis was observed in both sides without suppression of luteinizing hormone and/or follicle-stimulating hormone production. Conclusions: A clearcut fall in the MJSC with an associated rise in the serum level of beta-hCG was demonstrated in the contralateral testicular tissues but not in the ipsilateral ones of seminoma. It seems most likely that serum beta-hCG suppresses spermatogenesis in both ipsilateral and contralateral testicular tissues without the suppression occurring through the hypothalamus-pituitary-gonadal system, and also that some less well recognized factors affect spermatogenesis, making the relation between serum beta-hCG and MJSC obscure in ipsilateral testicular tissues. abstract_id: PUBMED:7911790 The value of quantitative DNA flow cytometry of testicular fine-needle aspirates in assessment of spermatogenesis: a study of 137 previously maldescended human testes. In order to assess the suitability of DNA flow cytometry of fine-needle aspirates for quantifying spermatogenesis, the results from DNA flow cytometry were compared to histological evaluation of testicular biopsies taken concomitantly from 171 previously maldescended testes. In 137 of 171 cases, sufficient material for flow cytometric as well as histological evaluation was obtained. Histological analysis of surgical biopsy specimens revealed spermatogenesis including the spermatid stage in 117 of the 137 gonads. In six of the 117 gonads no haploid cells were found using flow cytometry. On the other hand, surgical biopsies failed to reveal spermatogenesis in five cases in which the corresponding aspirates contained haploid cells. Both methods therefore seem equally sensitive in detection of spermatogenesis. Other types of histological patterns also corresponded to distinct DNA histograms. Thus, in 11 of 12 cases with Sertoli-cell-only pattern in all tubules, at least 95% of the cells had a diploid DNA content. Furthermore, predominance of tubules with maturation arrest at the primary spermatocyte level corresponded to an increased proportion of tetraploid cells. When compared to surgical biopsy, DNA flow cytometry of testicular fine-needle aspirates is a more objective, easy and rapid method, which is more convenient for the patient. This study has indicated that DNA flow cytometry is a suitable method of quantitative assessment of spermatogenesis. One of the first target groups might be men with azoospermia. In such men, DNA flow cytometric analysis of fine-needle aspirates and surgical biopsy are apparently of equal sensitivity in detecting gonads with spermatogenesis. We conclude that DNA flow cytometry may become an alternative method for the quantification of spermatogenesis. abstract_id: PUBMED:26221246 Shear wave elastography (SWE) is reliable method for testicular spermatogenesis evaluation after torsion. 
This study aims to investigate the effect of torsion on testicular stiffness alteration in the affected and the concomitant testis using the improved ultrasound method of shear wave elastography (SWE). We compared the morphology of testicular spermatogenesis, assessed with Johnsen's scale on histology specimens, with the mean stiffness measured by SWE. A total of 18 New Zealand white male rabbits were divided into two groups (group A and group B); animals from group A were subjected to an operation inducing right testicular torsion, while the left testicle remained intact. In group B both testicles were normal and the right testicle was subjected to a sham operation. Per protocol, the mean stiffness value was calculated from three elastographic images obtained from each testicle. A significant difference in mean stiffness value and Johnsen scaling was observed in both groups (A and B), as well as between the normal and the torted testicle in group A. The mean stiffness correlated positively with histologic grade in the testicles of both sides in group B and in the left-sided testicles in group A (P=0.045, r=0.43; group B; P=0.001, r=0.98), while histologic grade correlated negatively with mean stiffness in the torted testicle of group A (torsion P=0.012, r=-0.76). In this study, testicular torsion, with a consequently higher mean stiffness value determined by SWE, qualitatively and quantitatively decreased spermatogenesis. A gradual morphology change in the testicle unaffected by torsion has not been previously reported. This study confirmed that quantitative changes in testicular tissue stiffness, as well as changes in testicular spermatogenesis, can be reliably evaluated with SWE. abstract_id: PUBMED:34481709 The absence of spermatogenesis in radical orchiectomy specimen is associated with advanced-stage nonseminomatous testicular cancer. Background: To assess if clinical, pathological, and spermatogenesis factors are associated with clinical staging in patients with testicular germ cell tumors. Patients and Methods: We retrospectively reviewed the pathology reports and slides from 267 men who underwent radical orchiectomy for testicular cancer at our institution during 1998-2019. Histologic slides were reviewed and the presence of mature spermatozoa was documented. Clinical, laboratory and radiographic characteristics were recorded. Logistic regression analyses were used to identify factors associated with advanced disease stage at diagnosis. Results: Of 267 male patients, 115 (43%) patients had testicular non-seminomatous germ cell tumors (NSGCT) and 152 (57%) seminomatous germ cell tumors (SGCT). Among NSGCT patients, those presenting with metastatic disease had a higher proportion of predominant (>50%) embryonal carcinoma (64% vs. 43%, respectively, P = 0.03), and lymphovascular invasion (45.8% vs. 26.6%, respectively, P = 0.03) than non-metastatic patients. Spermatogenesis was observed in 56/65 (86.2%) and 36/49 (73.5%) of non-metastatic and metastatic NSGCT patients, respectively (P = 0.09). On semen analysis, severe oligospermia (<5 million/ml) was more common in metastatic than in non-metastatic NSGCT (26.5% vs. 8.3%, respectively, P = 0.04). On multivariate analysis, predominant embryonal carcinoma and lack of spermatogenesis in pathological specimens were associated with metastatic disease. Conclusion: The absence of spermatogenesis and a high proportion of embryonal carcinoma were associated with advanced disease in patients with NSGCT. Whether it may also translate as a predictor of oncologic outcome needs further evaluation.
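Returning to the shear wave elastography protocol described in PUBMED:26221246 above, the snippet below sketches its two quantitative steps: averaging the three elastographic acquisitions per testicle into a mean stiffness, and correlating that mean with the histologic Johnsen grade. The use of a plain Pearson correlation and the function names are assumptions made for illustration; the abstract does not state which correlation variant or software was used.

from statistics import mean
from typing import Sequence

def mean_stiffness(acquisitions: Sequence[float]) -> float:
    # Average of the three SWE stiffness readings obtained from one testicle.
    if len(acquisitions) != 3:
        raise ValueError("the protocol expects exactly three elastographic images per testicle")
    return mean(acquisitions)

def pearson_r(x: Sequence[float], y: Sequence[float]) -> float:
    # Correlation between per-testicle mean stiffness and Johnsen histologic grade.
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5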
abstract_id: PUBMED:8924963 Treatments of testicular cancer and protection of spermatogenesis. The deleterious effects of chemotherapy or radiotherapy on the spermatogenesis of men treated for testicular cancer are well known. Retroperitoneal lymphadenectomy poses a risk to the ejaculation process. Semen cryopreservation seems to be obligatory before any deleterious treatment. Prophylactic measures do exist, such as lead protection during X-ray therapy, drugs with less toxicity but similar efficacy, and selective lymphadenectomy. Moreover, positive results have been reported from studies performed in animals to protect spermatogenesis. In men treated for testicular cancer, 4 protective studies were reported using either GnRH analogs (3 studies) or medroxyprogesterone (one study). No protective effects were obtained in these trials, and the reasons for such failure are discussed. Finally, new potential possibilities for protecting spermatogenesis are suggested, supported by the results of animal experiments or by the availability of new drugs, such as the use of steroids, GnRH antagonists, or testicular hypothermia. abstract_id: PUBMED:32594086 Testicular Microlithiasis Is Associated with Impaired Spermatogenesis in Patients with Unexplained Infertility. Introduction: Testicular microlithiasis (TML) was shown to be associated with an increased risk of infertility. However, the association of TML with spermatogenesis in patients with unexplained infertility is still unknown. In this study, we therefore investigated the effect of TML on hormones and sperm parameters in a large cohort of infertile men without major factors for impaired fertility and azoospermic men serving for comparison. Methods: Over a period of 10 years, we retrospectively analyzed 2,914 patients who attended our centre with the diagnosis of unexplained infertility and sperm count >1 million/ejaculate, as well as 281 patients with unexplained azoospermia. From the 2,914 patients, we identified 218 patients with TML as revealed by ultrasound imaging. Further, 26 out of 281 azoospermic patients showed TML. Subsequently, we performed a thorough analysis of reproductive parameters and their association with TML. Results: The overall incidence of TML in patients with unexplained infertility and in unexplained azoospermic men was 7.5 and 9.3%, respectively. Patients with unexplained infertility and TML showed significantly smaller testicular volume, elevated FSH level, and lower sperm count and motility. Impaired spermatogenesis was not associated with the amount of microlithiasis, considered after classification into subgroups (<5 vs. ≥5 microliths/testis), and instead was associated with presence or absence of TML. TML in unexplained infertile azoospermic patients was not significantly associated with either andrological reproductive parameters or the sperm retrieval rate in microsurgical testicular sperm extraction. Discussion/conclusion: TML itself, and not the number of microliths, is associated with impaired spermatogenesis in patients with unexplained infertility. The parameter TML alone is not sufficient to predict spermatogenic impairment in azoospermic patients. This study highlights the importance of ultrasound imaging in the clinical evaluation of infertile men, taking into account that TML is a negative co-factor for male fertility. Answer: No, one testicular specimen is not sufficient for quantitative evaluation of spermatogenesis.
A study comparing quantitative analysis of spermatogenic cell types in testicular cytologic aspirates from various sites of each testis in azoospermic or severely oligospermic infertile men found that each of the aspirates showed wide deviations from the mean of the three aspirates for that patient. The deviation ranges of the cell type indexes for each of the spermatogenic stages were significant, suggesting that more than one testicular specimen is needed to evaluate quantitatively the spermatogenic process (PUBMED:7615120).
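The per-aspirate comparison described in PUBMED:7615120 can be written out as a short calculation: derive a cell type index for each aspirate, average the three aspirates of a patient, and express each aspirate's deviation from that mean. The sketch below assumes the index is simply the percentage of the counted cells (about 500 per aspirate) belonging to each cell type, since the abstract does not give the exact definition; the variable and function names are illustrative, and every aspirate is assumed to report the same set of cell types.

from typing import Dict, List

CellCounts = Dict[str, int]   # e.g. {"spermatogonia": 40, "spermatocytes": 120, ...}

def cell_type_index(counts: CellCounts) -> Dict[str, float]:
    # Index for each cell type, expressed as a percentage of all cells counted in the aspirate.
    total = sum(counts.values())
    return {cell: 100.0 * n / total for cell, n in counts.items()}

def deviation_from_patient_mean(aspirates: List[CellCounts]) -> List[Dict[str, float]]:
    # Percent deviation of each aspirate's index from the patient's mean index across aspirates.
    indexes = [cell_type_index(c) for c in aspirates]
    cells = indexes[0].keys()
    mean_index = {cell: sum(ix[cell] for ix in indexes) / len(indexes) for cell in cells}
    return [
        {cell: (100.0 * abs(ix[cell] - mean_index[cell]) / mean_index[cell])
               if mean_index[cell] else 0.0
         for cell in cells}
        for ix in indexes
    ]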
Instruction: Can left ventricular diastolic stiffness be measured noninvasively? Abstracts: abstract_id: PUBMED:12221410 Can left ventricular diastolic stiffness be measured noninvasively? Background: A noninvasive estimation of left ventricular (LV) diastolic chamber stiffness (K(LV)) is still a challenge. Experimental data suggest that K(lv) can be obtained by using Doppler mitral flow deceleration time (DT) as the only variable: K(lv) = (70/[DT-20])² mm Hg/mL. We assessed the accuracy of this noninvasive estimate of K(lv) by comparing it with invasive measurement of K(lv) in intact patients with a wide range of LV size and function under varying loading conditions. Methods: Twenty-five patients (age 54 ± 12 years) with ischemic heart disease (n = 19) or primary LV dysfunction (n = 6), with a wide range of DT (79-324 ms) and ejection fraction (8%-57%), underwent simultaneous assessment of LV pressure by micromanometer and volume by 2-dimensional (2D) echocardiography-guided Doppler mitral flow velocity (where volume = mitral flow velocity integral × annular area) calibrated to 2D echocardiography stroke volume. Invasive K(lv) [delta pressure (from minimum to end-diastolic)/delta volume (during the same time interval)] was obtained at baseline and in 23 patients after LV unloading by prostaglandin E1 (30-60 ng/kg/min) (n = 12), nitroglycerin (0.2 mg) (n = 9) or magnesium (1 g) (n = 2). Noninvasive K(lv) was estimated according to the above formula. Results: In this set of patients with normal mitral annular area (3.9 ± 1.1 cm²/m²), multivariate analysis showed that DT is inversely related to K(lv) (P <.001) but not to left atrial chamber stiffness, LV volume, relaxation time constant, mitral valve opening pressure, or area. The relation between noninvasively calculated and directly measured K(lv) was close to the line of identity under all conditions (y = 0.93x + 0.05, r = 0.67, n = 48, P <.001), although with a wide standard error of the estimate (0.26 mm Hg/mL). Conclusion: We conclude that K(lv) can be calculated ± 0.5 mm Hg/mL from noninvasively measured DT in patients. abstract_id: PUBMED:26924139 Left ventricular hypertrophy and arterial stiffness in essential hypertension. Aim: The aim of this study was to evaluate the association between an increase in arterial stiffness and the development of left ventricular hypertrophy in essential hypertension patients. Materials and Methods: One hundred forty essential hypertension patients were included in the study. Patients were divided into two groups based on echocardiographic measurements: with left ventricular hypertrophy (n=70) and without left ventricular hypertrophy (n=70). The criterion for hypertrophy was accepted as an intraventricular septum and posterior wall thickness in diastole of 11 mm or above. Aortic stiffness values of the patient groups were measured noninvasively by arteriography through the brachial artery. Pulse wave velocity (PWV) measurements were used as indicators of arterial stiffness. Results: When compared to the group without left ventricular hypertrophy, elevated systolic blood pressure, mean blood pressure, and pulse pressure were found in the left ventricular hypertrophy group at a significant level (p > 0.01). A statistically significant difference was not observed in the diastolic blood pressure and pulse measurements of the groups.
Pulse wave velocity, the indicator of arterial stiffness, was elevated to a significant degree in the left ventricular hypertrophy group (p &gt; 0.01). While a positive correlation was found between pulse wave velocity and left ventricle mass index, microalbuminuria, high sensitive C-reactive protein (Hs-CRP), and left ventricle end-diastolic volume, a negative correlation was found between pulse wave propagation velocity and left ventricle E/A. Conclusions: In conclusion, pulse wave analysis is a valuable method for predicting cardiac hypertrophy in essential hypertension (Tab. 6, Ref. 25). abstract_id: PUBMED:37661275 Relationship between arterial stiffness, left ventricular diastolic function, and renal function in chronic kidney disease. Aim: In chronic kidney disease, IgA nephropathy, and left ventricular diastolic dysfunction have prognostic significance as well. However, the relationship between diastolic dysfunction, arterial stiffness, and renal function has not been fully elucidated. Methods: 79 IgA nephropathy patients (aged 46 ± 11 years) and 50 controls were investigated. Tissue Doppler imaging was used to measure early (Ea) and late (Aa) diastolic velocities. Arterial stiffness was measured by a photoplethysmographic (stiffness index (SI)) and an oscillometric method (aortic pulse wave velocity (PWVao)). Results: We compared the IgAN patients to a similar cardiovascular risk group with a preserved eGFR. A strong correlation was found between Ea/Aa and SI (p &lt; 0.001), also with PWVao (p &lt; 0.001), just in IgAN, and with eGFR (p &lt; 0.001) in both groups. IgAN patients were divided into groups CKD1-2 vs. CKD3-5. In the CKD 3-5 group, the incidence of diastolic dysfunction increased significantly: 39% vs. 72% (p = 0.003). Left ventricle rigidity (LVR) was calculated, which showed a close correlation with SI (p = 0.009) and eGFR (p = 0.038). By linear regression analysis, the independent predictors of SI were age, E/A, and E/Ea; SI was the predictor of LVR; and E/A and hypertension were the predictors of eGFR. Conclusion: In chronic kidney disease, increased cardiac rigidity and vascular stiffness coexist with decreased renal function, which is directly connected to diastolic dysfunction and vascular stiffness. On the basis of comparing the CKD group to the control group, vascular alterations in very early CKD can be identified. abstract_id: PUBMED:36533620 Cross-Sectional Relationships of Proximal Aortic Stiffness and Left Ventricular Diastolic Function in Adults in the Community. Background Stiffness of the proximal aorta may play a critical role in adverse left ventricular (LV)-vascular interactions and associated LV diastolic dysfunction. In a community-based sample, we sought to determine the association between proximal aortic stiffness measured by cardiovascular magnetic resonance (CMR) and several clinical measures of LV diastolic mechanics. Methods and Results Framingham Heart Study Offspring adults (n=1502 participants, mean 67±9 years, 54% women) with available 1.5T CMR and transthoracic echocardiographic measures were included. Measures included proximal descending aortic strain and aortic arch pulse wave velocity by CMR (2002-2006) and diastolic function (mitral Doppler E and A wave velocity, E wave area, and LV tissue Doppler e' velocity) by echocardiography (2005-2008). Multivariable linear regression analysis was used to relate CMR aortic stiffness measures to measures of echocardiographic LV diastolic function. All continuous variables were standardized. 
In multivariable-adjusted regression analyses, aortic strain was inversely associated with E wave deceleration time (estimated β=-0.10±0.032, P=0.001), whereas aortic arch pulse wave velocity was inversely associated with E/A ratio (estimated β=-0.094±0.027, P=0.0006), E wave area (estimated β=-0.070±0.027, P=0.010), and e' (estimated β=-0.061±0.027, P=0.022), all indicating associations of higher aortic stiffness by CMR with less favorable LV diastolic function. Compared with men, women had a larger inverse relationship between pulse wave velocity and E/A ratio (interaction β=-0.085±0.031, P=0.0064). There was no significant effect modification by age or a U-shaped (quadratic) relation between aortic stiffness and LV diastolic function measures. Conclusions Higher proximal aortic stiffness is associated with less favorable LV diastolic function. Future studies may clarify temporal relations of aortic stiffness with varying patterns and progression of LV diastolic dysfunction. abstract_id: PUBMED:33176384 Arterial Stiffness and Left Ventricular Diastolic Function in Endurance Athletes. The present study investigated the relationship between arterial stiffness and left ventricular diastolic function in endurance-trained athletes. Sixteen young male endurance-trained athletes and nine sedentary men of similar age participated in this study. Resting measures of carotid-femoral pulse wave velocity were obtained to assess arterial stiffness. Left ventricular diastolic function was assessed using 2-dimensional echocardiography. The athletes tended to have lower arterial stiffness than the controls (P=0.071). Transmitral A-waves in the athletes were significantly lower (P=0.018) than in the controls, and left ventricular mass (P=0.034), transmitral E-wave/A-wave (P=0.005) and peak early diastolic mitral annular velocity at the septal site (P=0.005) in the athletes were significantly greater than in the controls. A significant correlation was found between arterial stiffness and left ventricular diastolic function (E-wave: r=-0.682, P=0.003, E-wave/A-wave: r=-0.712, P=0.002, peak early diastolic mitral annular velocity at the septal site: r=-0.557, P=0.025) in the athletes, whereas no correlation was found in controls. These results suggest that lower arterial stiffness is associated with higher left ventricular diastolic function in endurance-trained athletes. abstract_id: PUBMED:33859484 Significant Association Between Left Ventricular Diastolic Dysfunction, Left Atrial Performance and Liver Stiffness in Patients with Metabolic Syndrome and Non-Alcoholic Fatty Liver Disease. Purpose: The constitutive elements of the metabolic syndrome (MetS) are linked with both non-alcoholic fatty liver disease (NAFLD) and cardiovascular disease. Controlled attenuation parameter (CAP) and vibration controlled transient elastography (VCTE) are able to detect and quantify NAFLD, while conventional and two-dimensional speckle tracking echocardiography (2D-STE) is capable of identifying subclinical changes in cardiac function. We wanted to evaluate whether there is any correspondence between left ventricular (LV) diastolic dysfunction and different degrees of liver steatosis and fibrosis in MetS subjects with NAFLD. Patients and Methods: A total of 150 adult subjects having MetS and a normal left ventricular (LV) systolic function were included in the study, while 150 age- and sex-matched adults without MetS were enrolled as controls. NAFLD was established by VCTE and CAP.
The left heart systolic and diastolic function was evaluated by conventional and 2D-ST echocardiography. Left atrial (LA) stiffness was calculated as the ratio between the E/A ratio and the LA reservoir-strain. Results: In univariate regression analysis, the variables associated with LV diastolic dysfunction in MetS patients were: liver steatosis grade ≥2, liver fibrosis grade ≥2, the longitudinal LA peak strain during the reservoir phase, the LA strain rate during ventricular contraction and the LA stiffness. In multivariate logistic regression, two variables were selected as independent predictors of LV diastolic dysfunction, namely the liver stiffness (P=0.0003) and the LA stiffness (P&lt;0.0001). LA stiffness predicted subclinical LV diastolic dysfunction in MetS patients with a sensitivity of 45% and a specificity of 96% when using a cut-off value &gt;0.38, and was significantly correlated with liver steatosis stage ≥2 and liver fibrosis stage ≥2. Conclusion: The present study confirms the association between liver stiffness, LA stiffness and LV diastolic dysfunction in MetS patients. Our study suggests that liver elastography and 2D-STE should become habitual assessments in MetS patients. abstract_id: PUBMED:22093561 The relationship between left ventricular diastolic function and arterial stiffness in diabetic coronary heart disease Objectives: By measuring left ventricular diastolic function and arterial stiffness, this study aims to probe into the effect of diabetes mellitus (DM) on left ventricular diastolic function and arterial stiffness, and evaluate the correlation between left ventricular diastolic function and arterial stiffness. Methods: Seventy-six inpatients were enrolled. According to their coronary angiography, OGTT test results and past history of DM, patients were divided into controlled, CHD (coronary heart disease with no DM), and CHD + DM groups. Through invasive hemodynamic monitoring during left ventricular angiography, left ventricular end-diastolic pressure (LVEDP) and tau index were collected. Carotid-femoral pulse wave velocity (c-f PWV), reflected wave augmentation index (AIx@75) and other data reflecting the degree of arterial stiffness were collected bedside with non-invasive means. SPSS 18.0 was used for statistical analysis. Results: No significant difference was found between groups for LVEDP, tau index, and AIx@75. In terms of c-f PMV, The CHD + DM group (8.79 ± 1.59) cm/s differed significantly from the CHD group (7.43 ± 1.42) cm/s and the controlled group (6.83 ± 1.14) cm/s. No correlations were found between c-f PMV and LVEDP or tau index. A positive correlation was found between AIx@75 and tau index. Conclusions: Compared with the controlled group and CHD patients with no DM, CHD + DM patients show worse arterial stiffness with no difference in ventricular diastolic function. There is a positive correlation between arterial stiffness and diastolic dysfunction. abstract_id: PUBMED:24046515 Association of increased arterial stiffness and p wave dispersion with left ventricular diastolic dysfunction. Background: The association between increased arterial stiffness and left ventricular diastolic dysfunction (LVDD) may be influenced by left ventricular performance. P wave dispersion is not only a significant determinant of left ventricular performance, but is also correlated with LVDD. 
This study is designed to compare left ventricular diastolic function among patients divided by brachial-ankle pulse wave velocity (baPWV) and corrected P wave dispersion (PWDC) and assess whether the combination of baPWV and PWDC can predict LVDD more accurately. Methods: This cross-sectional study enrolled 270 patients and classified them into four groups according to the median values of baPWV and PWDC. LVDD was defined as impaired relaxation and pseudonormal/restrictive mitral inflow patterns. Results: The ratio of transmitral E wave velocity to early diastolic mitral annulus velocity (E/Ea) was higher in the group with higher baPWV and PWDC than in the other groups (all p < 0.001). The prevalence of LVDD was higher in the group with higher baPWV and PWDC than in the two groups with lower baPWV (p ≤ 0.001). The baPWV and PWDC were correlated with E/Ea and LVDD in multivariate analysis (p ≤ 0.030). The addition of baPWV and PWDC to a clinical model could significantly improve the R square in the prediction of E/Ea, and the C statistic and integrated discrimination index in the prediction of LVDD (p ≤ 0.010). Conclusions: This study showed increased baPWV and PWDC were correlated with high E/Ea and LVDD. The addition of baPWV and PWDC to a clinical model improved the prediction of high E/Ea and LVDD. Screening patients by means of baPWV and PWDC might help identify the group at high risk of elevated left ventricular filling pressure and LVDD. abstract_id: PUBMED:38284673 Evaluation of left ventricular stiffness with echocardiography. Half of patients with heart failure present with preserved ejection fraction (HFpEF). The pathophysiology of these patients is complex, but increased left ventricular (LV) stiffness has been proven to play a key role. However, the application of this parameter is limited due to the requirement for invasive catheterization for its measurement. With advances in ultrasound technology, significant progress has been made in the noninvasive assessment of LV chamber or myocardial stiffness using echocardiography. Therefore, this review aims to summarize the pathophysiological mechanisms, correlations with invasive LV stiffness constants, applications in different populations, as well as the limitations of echocardiography-derived indices for the assessment of both LV chamber and myocardial stiffness. Indices of LV chamber stiffness, such as the ratio of E/e' divided by left ventricular end-diastolic volume (E/e'/LVEDV), the ratio of E/SRe (early diastolic strain rates)/LVEDV, and diastolic pressure-volume quotient (DPVQ), are derived from the relationship between echocardiographic parameters of LV filling pressure (LVFP) and LV size. However, these methods are surrogate and lumped measurements, relying on E/e' or E/SRe for evaluating LVFP. The limitations of E/e' or E/SRe in the assessment of LVFP may contribute to the moderate correlation between E/e'/LVEDV or E/SRe/LVEDV and LV stiffness constants. Even the most validated measurement (DPVQ) is considered unreliable in individual patients. In comparison to E/e'/LVEDV and E/SRe/LVEDV, indices like time-velocity integral (TVI) measurements of pulmonary venous and transmitral flows may demonstrate better performance in assessing LV chamber stiffness, as evidenced by their higher correlation with LV stiffness constants. However, only one study has been conducted on the exploration and application of TVI in the literature, and the accuracy of assessing LV chamber stiffness remains to be confirmed.
Regarding echocardiographic indices for LV myocardial stiffness evaluation, parameters such as epicardial movement index (EMI)/diastolic wall strain (DWS), intrinsic velocity propagation of myocardial stretch (iVP), and shear wave imaging (SWI) have been proposed. While the alteration of DWS and its predictive value for adverse outcomes in various populations have been widely validated, it has been found that DWS may be better considered as an overall marker of cardiac function performance rather than pure myocardial stiffness. Although the effectiveness of iVP and SWI in assessing left ventricular myocardial stiffness has been demonstrated in animal models and clinical studies, both indices have their limitations. Overall, it seems that currently no echocardiography-derived indices can reliably and accurately assess LV stiffness, despite the development of several parameters. Therefore, a comprehensive evaluation of LV stiffness using all available parameters may be more accurate and enable earlier detection of alterations in LV stiffness. abstract_id: PUBMED:8565019 Left-ventricular diastolic function and hypertension. Left-ventricular diastolic function is influenced by multiple factors in hypertension. These factors include age, left-ventricular hypertrophy, afterload, left-atrial pressure, blood pressure, and myocardial stiffness. The influences of and interactions among these factors change the filling characteristics of the left ventricle and characterize diastolic function. Despite the complex nature of these interactions, important clinical information can be noninvasively derived by Doppler echocardiography and nuclear scintigraphy. The entire spectrum of hypertension is involved with and may alter left-ventricular diastolic function. Answer: Yes, left ventricular diastolic stiffness (K(LV)) can be measured noninvasively. A study has shown that K(LV) can be calculated from noninvasively measured Doppler mitral flow deceleration time (DT) using the formula K(LV) = (70/[DT-20])² mm Hg/mL. The accuracy of this noninvasive estimate was assessed by comparing it with invasive measurements in patients with a wide range of left ventricular size and function under varying loading conditions. The relationship between noninvasively calculated and directly measured K(LV) was close to the line of identity under all conditions, although with a wide standard error of the estimate (PUBMED:12221410). Additionally, with advancements in ultrasound technology, significant progress has been made in the noninvasive assessment of left ventricular chamber or myocardial stiffness using echocardiography. Indices of left ventricular chamber stiffness, such as the ratio of E/e' divided by left ventricular end-diastolic volume (E/e'/LVEDV), the ratio of E/SRe (early diastolic strain rates)/LVEDV, and diastolic pressure-volume quotient (DPVQ), are derived from the relationship between echocardiographic parameters of left ventricular filling pressure (LVFP) and left ventricular size. However, these methods are surrogate and lumped measurements, relying on E/e' or E/SRe for evaluating LVFP, and may have limitations in the assessment of LVFP, contributing to the moderate correlation between these indices and left ventricular stiffness constants (PUBMED:38284673). Despite the development of several parameters, it seems that currently no echocardiography-derived indices can reliably and accurately assess left ventricular stiffness on their own.
A comprehensive evaluation using all available parameters may be more accurate and enable earlier detection of alterations in left ventricular stiffness (PUBMED:38284673).
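As a rough illustrative calculation (the numbers here are hypothetical and not drawn from the cited studies): a measured deceleration time of DT = 160 ms would give K(LV) = (70/[160-20])² = (0.5)² = 0.25 mm Hg/mL, whereas a shorter DT of 90 ms would give (70/70)² = 1.0 mm Hg/mL, illustrating how shorter deceleration times translate into higher estimated diastolic stiffness.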
Instruction: Does Wrist Arthrodesis With Structural Iliac Crest Bone Graft After Wide Resection of Distal Radius Giant Cell Tumor Result in Satisfactory Function and Local Control? Abstracts: abstract_id: PUBMED:26728519 Does Wrist Arthrodesis With Structural Iliac Crest Bone Graft After Wide Resection of Distal Radius Giant Cell Tumor Result in Satisfactory Function and Local Control? Background: Many techniques have been described for reconstruction after distal radius resection for giant cell tumor with none being clearly superior. The favored technique at our institution is total wrist fusion with autogenous nonvascularized structural iliac crest bone graft because it is structurally robust, avoids the complications associated with obtaining autologous fibula graft, and is useful in areas where bone banks are not available. However, the success of arthrodesis and the functional outcomes with this approach, to our knowledge, have only been limitedly reported. Questions/purposes: (1) What is the success of union of these grafts and how long does it take? (2) How effective is the technique in achieving tumor control? (3) What complications occur with this type of arthrodesis? (4) What are the functional results of wrist arthrodesis by this technique for treating giant cell tumor of the distal radius? Methods: Between 2005 and 2013, 48 patients were treated for biopsy-confirmed Campanacci Grade III giant cell tumor of the distal radius. Of those, 39 (81% [39 of 48]) were treated with wrist arthrodesis using autogenous nonvascularized iliac crest bone graft. Of those, 27 (69% [27 of 39]) were available for followup at a minimum of 24 months (mean, 45 months; range, 24-103 months). During that period, the general indications for this approach were Campanacci Grade III and estimated resection length of 8 cm or less. Followup included clinical and radiographic assessment and functional assessment using the Disabilities of the Arm, Shoulder and Hand (DASH) score, the Musculoskeletal Tumor Society (MSTS) score, grip strength, and range of motion at every followup by the treating surgeon and his team. All functional results were from the latest followup of each patient. Results: Union of the distal junction occurred at a mean of 4 months (± 2 months) and union of the proximal junction occurred at a mean of 9 months (± 5 months). Accounting for competing events, at 12 months, the rate of proximal junction union was 56% (95% confidence interval [CI], 35%-72%), whereas it was 67% (95% CI, 45%-82%) at 18 months. In total, 11 of the 27 patients (41%) underwent repeat surgery on the distal radius, including eight patients (30%) who had complications and three (11%) who had local recurrence. The mean DASH score was 9 (± 7) (value range, 0-100, with lower scores representing better function), and the mean MSTS 1987 score was 29 (± 1) (value range, 0-30, with higher scores representing better function) as well as 96% (± 4%) of mean MSTS 1993 score (value range, 0%-100%, with higher scores representing better function). The mean grip strength was 51% (± 23%) of the uninvolved side, whereas the mean arc of forearm rotation was 113° (± 49°). Conclusions: Reconstruction of defects after resection of giant cell tumor of the distal radius with autogenous structural iliac crest bone graft is a facile technique that can be used to achieve favorable functional results with complications and recurrences comparable to those of other reported techniques. 
We cannot show that this technique is superior to other options, but it seems to be a reasonable option to consider when other reconstruction options such as allografts are not available. Level Of Evidence: Level IV, therapeutic study. abstract_id: PUBMED:34150385 Giant Cell Tumor of the Distal Radius: Wide Resection, Ulna Translocation With Wrist Arthrodesis. Giant cell tumor (GCT) of the bone is a locally aggressive neoplasm and is usually managed with extended curettage and adjuvant therapy, which is associated with reduced risk of recurrence. The juxta-articular distal radius giant cell tumor is challenging due to the destruction of subchondral bone and articular cartilage, making it difficult to salvage the wrist joint anatomy and function. Various methods described include wide resection and reconstruction with allograft or centralization of the ulna with wrist arthrodesis. We present the functional outcome of distal end radius GCT, which was successfully managed with wide local excision, ulna translocation, and wrist arthrodesis. At the two-year follow-up, the patient shows excellent functional outcome with supination and pronation movements and no local recurrence. abstract_id: PUBMED:34046295 Operative technique of distal radius resection and wrist fusion with iliac crest bone graft. Malignant lesions of the distal radius and appropriately selected cases of benign aggressive lesions (giant cell tumor) of the distal radius require resection for limb salvage. Post resection, reconstruction of the defect can be accomplished by either arthrodesis or arthroplasty, both having their own pros and cons. In cases undergoing arthrodesis as the modality of reconstruction, small defects (≤6 cm) can be reconstructed using autologous iliac crest bone graft, which results in good cosmetic appearance and functional outcome. We have described in detail the preoperative planning, surgical steps and rehabilitation of wrist fusion with iliac crest bone grafting post distal radius resection. abstract_id: PUBMED:31736610 Outcomes of short segment distal radius resections and wrist fusion with iliac crest bone grafting for giant cell tumor. Background: The distal radius is the third most common site for the occurrence of giant cell tumor (GCT) of bone. Most Campanacci grade II & III cases require resection. Reconstruction of these defects is challenging. Though fibular arthroplasty provides mobility at the wrist, it is fraught with complications of donor site morbidity and instability with wrist pain. Wrist arthrodesis with ulna translocation provides stable reconstruction but is cosmetically less appealing. We present a series of 12 cases of GCT of the distal radius treated with short segment (6 cm or less) resections and wrist fusion with iliac crest grafting. We evaluated donor site morbidity, functional and oncological outcomes. Objectives: To assess time to union, donor site morbidity, functional and oncological outcomes after wrist fusion with iliac crest bone grafting for distal radius resection (≤6 cm). Methods: Retrospective analysis was performed from a prospectively maintained database; between January 2011 and December 2017, 12 patients (7 male and 5 female; 9 primary and 3 recurrent; all Campanacci grade III) were included. Mean age was 29 years (15-41 years) with mean resection length of 5.1 cm (4.5-6 cm). The dominant hand was involved in 6 patients. Time to union, donor site morbidity, functional and oncological outcomes were evaluated.
Functional outcomes were evaluated using the Musculo-Skeletal Tumor Society (MSTS) score and the Patient Rated Wrist Evaluation (PRWE) score. Grip strength and arc of forearm rotation were also evaluated. Results: All patients were available for analysis. No symptomatic donor site morbidity was observed. One patient had a prominent implant following a fall and delayed union. Mean time to union for 22 osteotomy sites in the remaining 11 patients (both proximal and distal) was 6 months (4-11 months). At a median follow-up of 45 months (18-78 months), 2 patients had soft tissue recurrence and 1 had a stable pulmonary metastasis. The local recurrence rate was 17%. All patients returned to their pre-surgery activity. The mean MSTS score was 25 (19-29) and the PRWE score was 12 (6-28). Grip strength and prono-supination measurements were available in 10 patients. Grip strength was 69% of the non-operated limb. Mean supination was 53° (0° to 80°) and mean pronation was 73° (40° to 80°). Mean arc of rotation was 126° (80° to 160°). Conclusion: Reconstruction of distal radius bone defects with iliac crest bone grafting and wrist arthrodesis retains prono-supination while maintaining wrist girth (cosmesis). The oncologic and functional outcomes make it an acceptable modality in selected cases of distal radius tumours with short resection length (≤6 cm). abstract_id: PUBMED:33480185 Wrist Reconstruction after En bloc Resection of Bone Tumors of the Distal Radius. Wrist reconstruction after en bloc resection of bone tumors of the distal radius has been a great challenge. Although many techniques have been used for the reconstruction of long bone defects following en bloc resection of the distal radius, the optimal reconstruction method remains controversial. This is the first review to systematically describe various reconstruction techniques. We not only discuss the indications, functional outcomes, and complications of these reconstruction techniques but also review the technical refinement strategies for improving the stability of the wrist joint. En bloc resection should be performed for Campanacci grade III giant cell tumors (GCT) as well as malignant tumors of the distal radius. However, wrist reconstruction after en bloc resection of the distal radius represents a great challenge. Although several surgical techniques, either achieving a stable wrist by arthrodesis or reconstructing a flexible wrist by arthroplasty, have been reported, the optimal reconstruction procedure remains controversial. The purpose of this review was to investigate which reconstruction methods might be the best option by analyzing the indications, techniques, limitations, and problems of different reconstruction methods. With the advancement of imaging, surgical techniques and materials, some reconstruction techniques have been further refined. Each of the techniques discussed in this review has its advantages and disadvantages. Wrist arthrodesis seems to be preferred over wrist arthroplasty in terms of grip strength and long-term complications, while wrist arthroplasty seems to be superior to wrist arthrodesis in terms of wrist motion. All things considered, wrist arthroplasty with a vascularized fibular head autograft might be a good option because of better wrist function, acceptable grip strength, and a relatively lower complication rate. Moreover, wrist arthrodesis is still an option if the fibular head autograft reconstruction fails.
Orthopaedic oncologists should familiarize themselves with the characteristics of each technique to select the most appropriate reconstruction method depending on each patient's situation. abstract_id: PUBMED:12087240 Vascularized fibular graft after excision of giant-cell tumor of the distal radius: wrist arthroplasty versus partial wrist arthrodesis. Several reconstructive procedures have been described for the complete defect of the distal radius that is created after a wide excision of a giant-cell tumor of bone, including hemiarthroplasty using the vascularized fibular head and partial wrist arthrodesis between a vascularized fibula and the scapholunate portion of the proximal carpal row. The objectives of this study are to compare clinical and radiographic results between the partial wrist arthrodesis and the wrist arthroplasty, and to discuss which procedure is superior. Four patients with giant-cell tumors involving the distal end of the radius were treated with en bloc resection and reconstruction with a free vascularized fibular graft. The wrists in two patients were reconstructed with an articular fibular head graft and the remaining two patients underwent partial wrist arthrodesis using a fibular shaft transfer. There was radiographic evidence of bone union at the host-graft junctions in all cases. In the newly reconstructed wrist joint, there were palmar subluxation of the carpal bones and degenerative changes in both patients. Local recurrence was seen in one patient. According to the functional results described by Enneking et al., the mean functional score was 67 percent. The functional scores including wrist/forearm range of motion in the cases with partial wrist arthrodesis were superior to those with wrist arthroplasty. A partial wrist arthrodesis using a vascularized fibular shaft graft appears to be a more useful and reliable procedure for reconstruction of the wrist after excision of the giant-cell tumor of the distal end of the radius than a wrist arthroplasty using the vascularized fibular head, although our study includes only a small number of patients. abstract_id: PUBMED:29388486 Distal Radius Allograft Reconstruction Utilizing a Step-Cut Technique After En Bloc Tumor Resection. Background: En bloc resection of the distal radius is a common treatment for advanced and recurrent giant cell tumors and less commonly for sarcoma. Various reconstructive options exist, including ulnar transposition, osteoarticular autograft and allograft, and allograft arthrodesis. We present a technique of reconstruction using a distal radius bulk allograft with a step-cut to allow for precise restoration of proper length and to promote bony union. Methods: Preoperative templating is performed with affected and contralateral radiographs to assess the size of the expected bony defect, location of the step-cut, and the optimal size of the distal radius allograft required. A standard dorsal approach to the distal radius is utilized, and the tumor is resected. A proximal row carpectomy is performed, and the plate/allograft construct is applied to the remaining host bone. Iliac crest bone graft is harvested and introduced at the graft-bone interface and radiocarpal arthrodesis sites. Results: We have previously reported outstanding union rates with the step-cut technique compared with a standard transverse cut.
Conclusions: The technique described provides reproducible union and stabilization of the wrist and forearm with adequate function following en bloc resection of the distal radius for tumor. abstract_id: PUBMED:29734913 Vascularized Iliac Bone Lining in Downgraded Treatment of Campanacci Grade III Giant Cell Tumor of the Distal Radius. It is commonly accepted that wide en bloc resection followed by reconstruction is essential in progressive lesions (Campanacci grade III) for local control of possible recurrence. However, specific grade III lesions can be downgraded and treated with intralesional curettage to preserve better wrist function, without increasing the recurrence rates. In this report, a Grade III giant cell tumor of the distal radius was successfully treated using a vascularized osseous graft from the inner lip of the iliac bone in addition to a downgrading strategy. abstract_id: PUBMED:28142350 Giant cell tumor of distal radius treated with ulnar translocation and wrist arthrodesis. Introduction: The aim is to analyze the functional outcomes of patients with giant cell tumor (GCT) of the distal radius treated with ulnar translocation and wrist arthrodesis. Methods: The study included 25 patients with aggressive GCT of the distal radius, resected and reconstructed using ulnar translocation and wrist arthrodesis. The ulna-carpal radius fixation was performed with plate and screws. The patients were followed to bony union and the minimum follow-up was 1 year. Result: Twenty-two patients were of Campanacci grade 3 and three patients were of Campanacci grade 2. The mean follow-up was 23 months (12-36). All patients had an excellent range of pronation and supination. The mean Musculoskeletal Tumor Society score was 24 (range 22-28). Grip strength of the affected hand compared to the contralateral hand was found good in 17 cases and average in 7 cases. The mean bone union time at the ulna-to-radius junction was 6.5 (5-8) months and at the ulna-to-carpal junction 4.5 (4-6) months. The complications were surgical site infection (one case), recurrence (one case), failure of union (one case), and ulna graft fracture with implant failure (two cases). Conclusion: Reconstruction of the distal end of the radius using ulnar translocation and wrist arthrodesis provides excellent functional outcomes with preservation of rotational movement of the forearm and hand function. Reconstruction of the distal radius by ulnar translocation without complete detachment from surrounding soft tissues functions like a vascularized graft without the use of microvascular techniques.
Although conventional means of arthrodesis give acceptable hand function, the absence of wrist motion is unacceptable to the young and active, who may prefer the method described in this article. Answer: Yes, wrist arthrodesis with structural iliac crest bone graft after wide resection of distal radius giant cell tumor can result in satisfactory function and local control. The technique has been reported to achieve favorable functional results with complications and recurrences comparable to those of other reported techniques (PUBMED:26728519). Union of the grafts typically occurs within a mean of 4 months for the distal junction and 9 months for the proximal junction, and the functional outcomes, as measured by the Disabilities of the Arm, Shoulder and Hand (DASH) score and the Musculoskeletal Tumor Society (MSTS) score, are generally good (PUBMED:26728519). Other studies have also reported satisfactory outcomes with this technique. For instance, a study on the functional outcome of distal end radius giant cell tumor managed with wide local excision, ulna translocation, and wrist arthrodesis showed excellent functional outcome with supination and pronation movements and no local recurrence at the two-year follow-up (PUBMED:34150385). Similarly, outcomes of short segment distal radius resections and wrist fusion with iliac crest bone grafting for giant cell tumor have shown that this approach retains prono-supination while maintaining wrist girth and provides acceptable oncologic and functional outcomes in selected cases with short resection length (≤6 cm) (PUBMED:31736610). The use of iliac crest bone grafting and wrist arthrodesis has been described in detail, emphasizing the good cosmetic appearance and functional outcome that can be achieved post distal radius resection (PUBMED:34046295). Moreover, the reconstruction of distal radius bone defects with iliac crest bone grafting and wrist arthrodesis has been advocated as an acceptable modality in selected cases of distal radius tumors (PUBMED:31736610). In summary, wrist arthrodesis with structural iliac crest bone graft after wide resection of distal radius giant cell tumor is a viable option that can result in satisfactory function and local control, with the added benefit of avoiding complications associated with obtaining autologous fibula graft and being useful in areas where bone banks are not available (PUBMED:26728519).
Instruction: Can an insufficient posture of children and adolescents be verified instrumentally? Abstracts: abstract_id: PUBMED:12183792 Can an insufficient posture of children and adolescents be verified instrumentally? Unlabelled: 144 children aged 6 to 17 years were examined with the Lasar-Posture device, which raises a perpendicular through the centre of gravity. To make an indirect postural examination possible and to classify posture, the courses of the gravity perpendicular, the shoulder centre, the greater trochanter femoris and the lateral ankle and their change during the Matthiass' test were determined. Additionally, spinal alignment, spine flexibility and the sufficiency of posture were assessed. Results: The ability to achieve a sufficient posture correlated with age (p = 0.0004). The spinal alignment itself did not differ in the age groups but the hollow-round back showed a decreased ability to attain a sufficient posture (p < 0.0001). The spine flexibility measured with Ott's test decreased with age (p = 0.0001). In relation to the gravity perpendicular, the shoulder centre moved forward with increasing postural insufficiency (p = 0.0379). The course of the gravity perpendicular did not differ in the different types of spinal alignment but was always found in front of the lateral ankle. The greater trochanter of the children with a II° insufficiency at the beginning (p = 0.03/0.01) and end (p = 0.2/0.05) of the Matthiass' test was always in front of the gravity perpendicular in contrast to the other children. As expected, the shoulder centre was always behind the gravity perpendicular. It was found to be more ventral in healthy children than in those with a postural insufficiency (p = 0.01/0.004/0.005). Conclusion: Overall, a high rate of children with postural insufficiency was found. It is impossible to determine or classify them with the Lasar-Posture device. The future aim should be to develop a measuring technique that allows a standardised definition of posture and age-related developmental variants. abstract_id: PUBMED:37387075 UPRITE: Promoting Positive Posture in Children and Adolescents. Technology use associated with habitual posture is linked with a decline in mental well-being. The objective of this study was to evaluate the potential of posture improvement through game play. 73 children and adolescents were recruited, and accelerometer data collected through game play was analyzed. The data analysis reveals that the game/app affects and encourages upright/vertical posture. abstract_id: PUBMED:27313382 Age-dependency of posture parameters in children and adolescents. [Purpose] Poor posture in children and adolescents is a well-known problem. Therefore, early detection of incorrect posture is important. Photometric posture analysis is a cost-efficient and easy method, but needs reliable reference values. As children's posture changes as they grow, the assessment needs to be age-specific. This study aimed to investigate the development of both a one-dimensional posture parameter (body inclination angle) and a complex parameter (posture index) in different age groups (childhood to adolescence). [Subjects and Methods] The participants were 372 symptom-free children and adolescents (140 girls and 232 boys aged 6-17). Images of their habitual posture were obtained in the sagittal plane. High-contrast marker points and marker spheres were placed on anatomical landmarks.
Based on the marker points, the body inclination angle (INC) and posture index (PI) were calculated using the Corpus concepts software. [Results] The INC angle significantly increased with age. The PI did not change significantly among the age groups. No significant differences between the corresponding age groups were found for PI and INC for both sexes. [Conclusion] When evaluating posture using the body inclination angle, the age of the subject needs to be considered. Posture assessment with an age-independent parameter may be more suitable. abstract_id: PUBMED:28229267 Prevalence of incorrect body posture in children and adolescents with overweight and obesity. The ever-increasing epidemics of overweight and obesity in school children may be one of the reasons for the growing number of children with incorrect body posture. The purpose of the study was the assessment of the prevalence of incorrect body posture in children and adolescents with overweight and obesity in Poznań, Poland. The population subject to study consisted of 2732 boys and girls aged 3-18 with obesity, overweight, and standard body mass. The assessment of body mass was performed based on BMI, adopting Cole's cutoff values. The evaluation of body posture was performed according to the postural error chart based on criteria compiled by Professor Dega. The prevalence rates of postural errors were significantly higher among children and adolescents with overweight and obesity than among the group with standard body mass. In the overweight group, it amounted to 69.2% and in the obese group to 78.6%. Conclusion: The most common postural deviations in obese children and adolescents were valgus knees and flat feet. Overweight and obesity in children and adolescents, predisposing to a higher incidence of some types of postural errors, call for prevention programs addressing both health problems. What is Known: • The increase in the prevalence of overweight and obesity among children and adolescents has drawn attention to additional health complications which may occur in this population, such as the occurrence of incorrect body posture. What is New: • The modified chart of postural errors proved to be an effective tool in the assessment of incorrect body posture. • This chart may be used in the assessment of posture during screening tests and prevention actions at school. abstract_id: PUBMED:27814856 Assessment of the posture of adolescents in everyday clinical practice: Intra-rater and inter-rater reliability and validity of a posture index. Objectives: The assessment of the posture of children and adolescents using photometric methods has a long tradition in paediatrics, manual therapy and physiotherapy. It can be well integrated into the clinical routine and enables objective documentation. One-dimensional parameters such as angle sizes are mostly used in the diagnosis of postural defects in children and adolescents by means of photogrammetry. This study examined the posture index, a complex parameter, which evaluates the alignment of several trunk segments in the sagittal plane and is suitable for use as a screening parameter in everyday clinical practice. Methods: For this, postural photographs were taken in the sagittal plane of the habitual posture in a subgroup of 105 adolescents (12.9 ± 2.6 years) for analysing validity, and in a subgroup of 25 adolescents (12.1 ± 2.8 years) for analysing reliability and objectivity. Marker spheres (12 mm) were placed on five anatomical landmarks.
The posture was also evaluated clinically by experienced investigators (PT, MD, DSc). The distances of the marker points to the plumb line through the malleolus lateralis were calculated and the posture index was calculated from these. In order to determine the objectivity, reliability and validity of the posture index, statistical parameters were calculated. Results: The posture index demonstrated very good objectivity (intraclass correlation coefficient ICC = 0.865), good reliability (Cronbach's alpha = 0.842) and good validity compared to the posture assessment done by the medical experts (Spearman's rho = 0.712). Conclusions: The posture index reflects a doctor's assessment of the posture of children and adolescents and is suitable as a clinical parameter for the assessment of postural defects. abstract_id: PUBMED:28744036 Interrelationship between postural balance and body posture in children and adolescents. [Purpose] This study examined possible interrelationships between postural sway and posture parameters in children and adolescents with a particular focus on posture weakness. [Subjects and Methods] 308 healthy children and adolescents (124 girls, 184 boys, aged 12.3 ± 2.5 years) participated in the study. Posture parameters (posture index, head protrusion, trunk inclination) were determined based on posture photos in the sagittal plane. Postural sway was measured for 20 seconds on a force plate. Pearson's product-moment correlation coefficients between the anthropometric and posture parameters and the sway path length (SPL) were calculated, as well as the coefficient of determination R². [Results] There is a weak but significant correlation between age or body mass index of the test subjects and the SPL. There is no statistically significant correlation between posture parameters and the SPL. Children and adolescents with posture weakness do not exhibit a changed SPL. [Conclusion] Therefore, therapy of poor posture must be considered separately from therapeutic measures for the improvement of balance skills. abstract_id: PUBMED:35808468 Balance and Posture in Children and Adolescents: A Cross-Sectional Study. Balance and posture are two topics that have been extensively studied, although with some conflicting findings. Therefore, the aim of this work is to analyze the relationship between the postural angles of the spine in the sagittal plane and stable static balance. A cross-sectional study was conducted with children and adolescents from schools in northern Portugal in 2019. An online questionnaire was used to characterize the sample and analyze back pain. Spinal postural angle assessment (pelvic, lumbar, and thoracic) was performed using the Spinal Mouse®, while stabilometry assessment was performed using Namrol® Podoprint®. Statistical significance was set as α = 0.05. The results showed that girls have better balance variables. There is a weak correlation of the anthropometric variables with the stabilometry variables and the postural angles. This correlation is mostly negative, except for the thoracic spine with anthropometric variables and the lumbar spine with BMI. The results showed that postural angles of the spine are poor predictors of the stabilometric variables. Concerning back pain, increasing the postural angle of the thoracic spine increases the odds ratio of manifestation of back pain by 3%.
abstract_id: PUBMED:12029587 Ultrasound topometric measurements of thoracic kyphosis and lumbar lordosis in school children with normal and insufficient posture. Aim: The posture of school children was examined in order to establish whether possible differences in clinically normal and insufficient posture can be detected by means of ultrasound topometry. Method: 105 healthy school children (56 female, 49 male) with an average age of 8 years (± 0.9) were examined. To classify the children as having sufficient or insufficient posture, the Matthiass posture test was used. While the child stood in a relaxed position, the profile of the spine was measured with a topometric digitiser, recording each spinous process from C7 to L5. Results: 42 children (22 female, 20 male) showed an insufficient posture during the Matthiass test. The topometrically measured angles of kyphosis and lordosis were significantly smaller in these children, corresponding to a clinically greater thoracic kyphosis and lumbar lordosis. No significant differences in the lateral and anterior-posterior trunk deviation, nor in the range of trunk sway in the relaxed standing position, could be observed. Conclusion: Using ultrasound topometry, the posture of children with sufficient and insufficient posture can be differentiated by measuring the angles of kyphosis and lordosis. This quantification could be used for objective monitoring of the posture. abstract_id: PUBMED:27042547 Changes in Habitual and Active Sagittal Posture in Children and Adolescents with and without Visual Input - Implications for Diagnostic Analysis of Posture. Introduction: Poor posture in children and adolescents has a prevalence of 22-65% and is suggested to be responsible for back pain. To assess posture, photometric imaging of sagittal posture is widely used, but usually only habitual posture positions (resting position with minimal muscle activity) are analysed. Aim: The objective of this study was 1) to investigate possible changes in posture-describing parameters in the sagittal plane when the subjects changed from a habitual passive posture to an actively corrected posture, and 2) to investigate the changes in posture parameters when an actively corrected posture was to be maintained with closed eyes. Materials And Methods: In a group of 216 male children and adolescents (average 12.4 ± 2.5 years, range 7.0 - 17.6 years), six sagittal posture parameters (body tilt BT, trunk incline TI, posture index PI, horizontal distances between ear, shoulder and hip and the perpendicular to the ankle joint) were determined by means of photometric imaging in a habitual passive posture position, in an actively erect posture with eyes open, and in active stance with eyes closed. The change in these parameters during the transition between the posture positions was analysed statistically (dependent t-Test or Wilcoxon-Test) after Bonferroni correction (p < 0.004). Results: When moving from a habitual passive to an active posture, BT, TI, PI, dEar, dShoulder, and dHip decreased significantly (p < 0.004). When the eyes were closed, only the perpendicular distances (dEar, dShoulder, and dHip) increased significantly. The parameters that describe the alignment of the trunk sections in relation to each other (BT, TI, PI) remained unchanged in both actively regulated posture positions.
Conclusion: Changes in sagittal posture parameters that occur when a habitual passive posture switches into an active posture or when an active posture is to be maintained while the eyes are closed can be used for diagnostic purposes regarding poor posture and posture regulation. abstract_id: PUBMED:26644658 Reliability of photographic posture analysis of adolescents. [Purpose] Postural problems of adolescents need to be evaluated accurately because they may lead to greater problems in the musculoskeletal system as they develop. Although photographic posture analysis has been frequently used, simpler and more accessible methods are still needed. The purpose of this study was to investigate the inter- and intra-rater reliability of photographic posture analysis using MB-ruler software. [Subjects and Methods] Subjects were 30 adolescents (15 girls and 15 boys, mean age: 16.4±0.4 years, mean height 166.3±6.7 cm, mean weight 63.8±15.1 kg) and photographs of their habitual standing posture were taken in the sagittal plane. For the evaluation of postural angles, reflective markers were placed on anatomical landmarks. For angular measurements, MB-ruler (Markus Bader - MB Software Solutions, triangular screen ruler) was used. Photographic evaluations were performed by two observers with a repetition after a week. Test-retest and inter-rater reliability evaluations were calculated using intra-class correlation coefficients (ICC). [Results] Inter-rater (ICC > 0.972) and test-retest (ICC > 0.774) reliability were found to be in the range of acceptable to excellent. [Conclusion] Reference angles for postural evaluation were found to be reliable and repeatable. The present method was found to be an easy and non-invasive method and it may be utilized by researchers who are in search of an alternative method for photographic postural assessments. Answer: Yes, an insufficient posture of children and adolescents can be verified instrumentally, as evidenced by various studies that have utilized different methods and devices for posture assessment. For instance, the Lasar-Posture device was used to examine children aged 6 to 17 years, which involved determining the courses of the gravity perpendicular and other anatomical landmarks during the Matthiass' test. However, while this study found a high rate of children with postural insufficiency, it concluded that it is impossible to determine or classify them with the Lasar-Posture device alone, indicating a need for the development of a more standardized measuring technique (PUBMED:12183792). Other studies have employed different methods for assessing posture. Photometric posture analysis has been used as a cost-efficient and easy method, requiring reliable reference values that are age-specific due to changes in children's posture as they grow (PUBMED:27313382). The prevalence of incorrect body posture in children and adolescents with overweight and obesity has been assessed using a postural error chart based on criteria compiled by Professor Dega, revealing higher rates of postural errors in these groups compared to those with standard body mass (PUBMED:28229267). The posture index, a complex parameter that evaluates the alignment of several trunk segments in the sagittal plane, has been shown to have good reliability and validity for use as a screening parameter in everyday clinical practice (PUBMED:27814856).
Additionally, ultrasound topometric measurements have been able to differentiate between children with sufficient and insufficient posture by measuring the angles of kyphosis and lordosis (PUBMED:12029587). Furthermore, the reliability of photographic posture analysis using software like MB-ruler has been investigated, showing acceptable to excellent inter-rater and test-retest reliability for the evaluation of postural angles (PUBMED:26644658). In summary, while there are challenges in standardizing the assessment of posture in children and adolescents, various instrumental methods have been developed and validated for this purpose, demonstrating that it is indeed possible to verify insufficient posture instrumentally.
Instruction: Does sacral posterior rhizotomy suppress autonomic hyper-reflexia in patients with spinal cord injury? Abstracts: abstract_id: PUBMED:9467480 Does sacral posterior rhizotomy suppress autonomic hyper-reflexia in patients with spinal cord injury? Objective: To study the occurrence of autonomic hyper-reflexia (AHR) after intradural sacral posterior rhizotomy combined with intradural sacral anterior root stimulation, performed to manage the neurogenic hyper-reflexic bladder, and to determine the pathophysiological basis of the uncontrolled hypertensive crisis after sacral de-afferentation. Patients And Methods: Ten patients with spinal cord injury operated on using Brindley's method between September 1990 and February 1994 were reviewed. Systematic continuous non-invasive recordings of cardiovascular variables (using a photoplethysmograph) were made during urodynamic recordings and the pre- and post-operative vesico-urethral and cardiovascular data were compared. Results: Nine of the 10 patients were examined using a new prototype measurement system; one woman refused the last urodynamic assessment. Eight of the nine patients who presented with AHR before the operation still had the condition afterward. There was a marked elevation in systolic and diastolic blood pressure during the urodynamic examination in all eight patients, despite complete intra-operative de-afferentation of the bladder in five. The elevation of blood pressure started during the stimulation-induced bladder contractions and increased during voiding in all cases. Five patients showed a decrease in heart rate during the increase in blood pressure. However, in three patients the heart rate did not change or even sometimes slightly increased as the arterial blood pressure exceeded 160 mmHg, when the blood pressure and heart rate then increased together. Conclusions: These results confirm that even after complete sacral de-afferentation, AHR persisted in patients with spinal cord injury and always occurred during the stimulation-induced voiding phase. In cases of incomplete de-afferentation, small uninhibited bladder contractions without voiding occurred during the filling phase. The blood pressure then increased but never reached the value recorded during stimulation-induced micturition. Stimulation of afferents that enter the spinal cord by the thoracic and lumbar roots and that are not influenced by sacral rhizotomy could explain why AHR increases during urine flow. The distinct threshold of decreased heart rate by increasing blood pressure to > 160 mmHg focuses attention on the chronotropic influences of the sympathetic nerves in the heart by an exhausted baroreceptor reflex. abstract_id: PUBMED:25392969 Predictive factors of stress incontinence after posterior sacral rhizotomy. Aims: The Brindley procedure, used since the 1980s, consists of implantation of a stimulator for sacral anterior root stimulation combined with a posterior sacral rhizotomy to enable micturition. Patients suitable for the procedure are patients with detrusor overactivity and a complete spinal cord lesion with intact sacral reflexes. S2 to S4 posterior sacral rhizotomy abolishes sacral hyperreflexia and may lead to decreased urethral closure pressure and loss of reflex adaptation of continence, leading to stress incontinence.
Methods: In this retrospective study of 96 patients from Nantes or Le Mans, implanted with a Finetech-Brindley stimulator, we analyzed the incidence of stress incontinence one year after surgery and looked for predictive factors of stress incontinence one year after posterior sacral rhizotomy: age, gender, level of injury between T10 and L2, previous urethral surgery, incompetent bladder neck, Maximum Urethral Closure Pressure before surgery less than 30 cmH2O, compliance before surgery less than 30 ml/cmH2O. Patients with persistent involuntary detrusor contractions with or without incontinence after surgery were excluded. Results: One year after surgery, 10.4% of the patients experienced stress incontinence. Urethral closure pressure was significantly decreased by 18% after posterior sacral rhizotomy (P = 0.002). This study highlights the only significant predictive factor of stress incontinence after rhizotomy: incompetent bladder neck (P = 0.002). Conclusions: As screening of patients undergoing the Brindley procedure is essential to achieve optimal postoperative results, on the basis of this study, we propose a preoperative assessment to select the population of patients most likely to benefit from the Brindley procedure. abstract_id: PUBMED:8996369 Posterior sacral rhizotomy and intradural anterior sacral root stimulation for treatment of the spastic bladder in spinal cord injured patients. Purpose: The efficacy of intradural sacral posterior rhizotomy combined with intradural sacral anterior root stimulation in the treatment of the neurogenic hyperreflexic bladder was evaluated. Materials And Methods: We reviewed 10 spinal cord injured patients who underwent surgery between September 1990 and February 1994. Bladder function was compared preoperatively and postoperatively. Intraoperative data on electrostimulation of the detrusor and striated muscles were analyzed. Results: Stimulation of the anterior S3 and S4 roots was mostly used to empty the bladder (7 of 10 cases). Preoperative reflex incontinence disappeared in all patients postoperatively. Mean postoperative bladder capacity increases and mean postoperative post-void residual decreases were at least 340 ml (p < 0.01) and 140 ml (p < 0.01), respectively. Preoperative vesicorenal reflux disappeared in 2 and improved in 3 cases after sacral deafferentation. Autonomic hyperreflexia, which was present preoperatively in 6 patients, never disappeared but significantly improved after deafferentation. No major complications were noted postoperatively. Conclusions: Intradural sacral posterior rhizotomy combined with intradural sacral anterior root stimulation is a valuable method to treat the hyperreflexic bladder with incontinence resistant to conservative therapy in spinal cord injured patients. Autonomic hyperreflexia was decreased but not suppressed by posterior sacral rhizotomy. abstract_id: PUBMED:8795398 Neuromodulation of detrusor hyper-reflexia by functional magnetic stimulation of the sacral roots. Objective: To investigate the acute effects of functional magnetic stimulation (FMS) on detrusor hyper-reflexia using a multi-pulse magnetic stimulator. Patients And Methods: Seven male patients with established and intractable detrusor hyper-reflexia following spinal cord injury were studied. No patient was on medication and none had had previous surgery for detrusor hyper-reflexia.
After optimization of magnetic stimulation of S2-S4 sacral anterior roots by recording toe flexor electromyograms, unstable detrusor activity was provoked during cystometry by rapid infusion of fluid into the bladder. The provocation test produced consistent and predictable detrusor hyper-reflexia. On some provocations, supramaximal FMS at 20 pulses/s for 5 s was applied at detrusor pressures which were > 15 cmH2O. Results: Following FMS there was an obvious acute suppression of detrusor hyper-reflexia. There was a profound reduction in detrusor contraction, as assessed by the area under the curves of detrusor pressure with time. Conclusions: Functional magnetic stimulation applied over the sacrum can profoundly suppress detrusor hyper-reflexia in man. It may provide a non-invasive method of assessing patients for implantable electrical neuromodulation devices and as a therapeutic option in its own right. abstract_id: PUBMED:22639745 Radiofrequency sacral rhizotomy for the management of intolerable neurogenic bladder in spinal cord injured patients. Objective: To investigate the effect of radiofrequency (RF) sacral rhizotomy on the intolerable neurogenic bladder in spinal cord injured patients. Method: Percutaneous RF sacral rhizotomy was performed on 12 spinal cord injured patients who had neurogenic bladder manifested with urinary incontinence resistant to oral and intravesical anticholinergic instillation treatment. Various combinations of S2, S3, and S4 RF rhizotomies were performed. The urodynamic study (UDS) was performed 1 week before RF rhizotomy. The voiding cystourethrogram (VCUG) and voiding diaries were compared 1 week before and 4 weeks after therapy. Total volume of daily urinary incontinence (ml/day) and the volume of each clean intermittent catheterization (ml/time) were also monitored. Results: After RF sacral rhizotomy, bladder capacity increased in 9 patients and the amount of daily urinary incontinence decreased in 11 patients. The mean maximal bladder capacity increased from 292.5 to 383.3 ml (p < 0.05) and mean daily incontinent volume decreased from 255 to 65 ml (p < 0.05). Bladder trabeculation and vesicoureteral reflux findings did not change 4 weeks after therapy. Conclusion: This study revealed that RF sacral rhizotomy was an effective method for neurogenic bladder with incontinence uncontrolled by conventional therapy among spinal cord injured patients. abstract_id: PUBMED:8632580 Results of the treatment of neurogenic bladder dysfunction in spinal cord injury by sacral posterior root rhizotomy and anterior sacral root stimulation. Purpose: We evaluated the results of treatment of neurogenic bladder dysfunction in spinal cord injury by sacral posterior root rhizotomy and anterior sacral root stimulation using the Finetech-Brindley stimulator. Materials And Methods: In 52 patients with spinal cord lesions and urological problems due to hyperreflexia of the bladder, complete posterior sacral root rhizotomy was performed and a Finetech-Brindley sacral anterior root stimulator was implanted. All patients were evaluated and followed with a strict protocol. A minimal 6-month followup is available in 47 cases. Results: Complete continence was achieved in 43 of the 47 patients with 6 months of followup. A significant increase in bladder capacity was attained in all patients. Residual urine significantly decreased, resulting in a decreased incidence of urinary tract infections. In 2 patients upper tract dilatation resolved.
In 3 patients rhizotomy was incomplete and higher sectioning of the roots was necessary. One implant had to be removed because of infection. Conclusions: The treatment of neurogenic bladder dysfunction in spinal cord injury by anterior sacral root stimulation with the Finetech-Brindley stimulator in combination with sacral posterior root rhizotomy provides excellent results with limited morbidity. abstract_id: PUBMED:19752869 Sacral rhizotomy: a salvage procedure in a patient with autonomic dysreflexia. Study Design: Case report. Objectives: To show the feasibility of sacral deafferentation as a salvage procedure to resolve life-threatening autonomic dysreflexia. Setting: Paraplegic center in Switzerland. Method And Results: In a patient presenting with acute autonomic dysreflexia leading to cardiac arrest, sacral deafferentation could prevent further episodes of autonomic dysreflexia. Conclusion: In patients with spinal cord injury, autonomic dysreflexia can be triggered by the bladder even without detrusor overactivity. In these cases, sacral deafferentation may be the only salvage procedure to prevent further serious health problems. Thus, this procedure augments the armamentarium of urologists dealing with patients suffering from spinal cord lesions. abstract_id: PUBMED:9833314 Implantation of anterior sacral root stimulators combined with posterior sacral rhizotomy in spinal injury patients. Brindley-Finetech sacral anterior root stimulators combined with posterior sacral rhizotomy were implanted in 68 males and 28 females with spinal cord lesions. In 9 patients the electrodes were implanted extradurally in the sacrum, and in 90 patients they were implanted intradurally (3 patients had a second extradural implant after a first intradural implant). Three patients died from causes unrelated to the implant. Of the 93 surviving patients, 83 used their implants for micturition and 82 were fully continent. The mean bladder capacity increased from 206 ml preoperatively to 564 ml after the operation. Three patients had a preoperative vesicorenal reflux that disappeared after surgery. In all, 51 patients used the stimulator for defecation. Erection was possible with electrical stimulation in 46 males and was used for coitus by 17 couples. Secondary deafferentation at the level of the conus was performed four times. Three patients who had a cerebrospinal fluid leak were operated on again. Two implants had to be removed because of infection. Sacral anterior root stimulation combined with sacral deafferentation is a welcome addition to the treatment of neurogenic bladder in spinal cord injury patients. abstract_id: PUBMED:26643667 Neurogenic bladder: Highly selective rhizotomy of specific dorsal rootlets maybe a better choice. Spinal cord injury results not only in motor and sensory dysfunctions, but also in loss of normal urinary bladder functions. A number of clinical studies have focused on strategies for the improvement of bladder function. Complete dorsal root rhizotomy or selective specific S2-4 dorsal root rhizotomy suppresses autonomic hyper-reflexia, but both have the same defects: they can cause detrusor and sphincter over-relaxation and loss of reflexive erection in males. So a precise operation needs to be considered. We designed an experimental trial to test the possibility on the basis of a previous study. We found that different dorsal rootlets which conduct impulses from the detrusor or sphincter can be distinguished by electro-stimulation in SD rats.
Highly selective rhizotomy of specific dorsal rootlets could change the intravesical pressure and urethral perfusion pressure respectively. We hypothesize that for neurogenic bladder following spinal cord injury, highly selective rhizotomy of specific dorsal rootlets may improve the bladder capacity and the detrusor sphincter dyssynergia, and at the same time, the function of other pelvic organs could be maximally retained. abstract_id: PUBMED:31980830 Implantation of Sacral Nerve Stimulator Without Rhizotomy for Neurogenic Bladder in Patient With Spinal Cord Injury: 2-Dimensional Operative Video. There are approximately 12 000 new individuals with spinal cord injury (SCI) each year, and close to 200 000 individuals live with a SCI-related disability in the United States. The majority of patients with SCI have bladder dysfunction as a result of their injury, with over 75% unable to void volitionally following their injury. In patients with traumatic SCI, intermittent catheterization is commonly recommended, but a lack of adherence to clean intermittent catheterization (CIC) has been observed, with up to 50% discontinuing CIC within 5 yr of injury. The Finetech Brindley Bladder System (FBBS) is an implantable sacral nerve stimulator for improving bladder function in patients with SCI, avoiding the need for CIC. The FDA-approved implantation (Humanitarian Device Exemption H980008) of the FBBS is combined with a posterior rhizotomy to reduce reflex contraction of the bladder, improving continence. However, the posterior rhizotomy is irreversible and has unwanted effects; therefore, the current FDA-approved implantation is being studied without rhizotomy as part of a clinical trial (Investigational Device Exemption G150201) (ClinicalTrials.gov Identifier: NCT02978638). In this video, we present a case of a 66-yr-old female who presented 40-yr status post-T12 SCI, resulting in complete paraplegia and neurogenic bladder not satisfactorily controlled with CIC. We demonstrate the operative steps to complete the implantation of the device without rhizotomy in the first patient enrolled as part of the clinical trial Electrical Stimulation for Continence After SCI (NCT02978638). Appropriate IRB and patient consent were obtained. Answer: Sacral posterior rhizotomy does not fully suppress autonomic hyper-reflexia (AHR) in patients with spinal cord injury. Studies have shown that even after complete sacral de-afferentation, AHR persisted in patients with spinal cord injury and always occurred during the stimulation-induced voiding phase (PUBMED:9467480). Although intradural sacral posterior rhizotomy combined with intradural sacral anterior root stimulation is a valuable method to treat the hyperreflexic bladder with incontinence resistant to conservative therapy in spinal cord injured patients, autonomic hyperreflexia was decreased but not suppressed by posterior sacral rhizotomy (PUBMED:8996369). In a case report, sacral deafferentation was used as a salvage procedure to resolve life-threatening autonomic dysreflexia, indicating that in some cases, sacral deafferentation may be necessary to prevent serious health problems related to AHR (PUBMED:19752869). However, it is important to note that the posterior rhizotomy is irreversible and has unwanted effects; therefore, the implantation of sacral nerve stimulators is being studied without rhizotomy as part of a clinical trial (PUBMED:31980830).
Instruction: Does continuous positive airway pressure for extubation in congenital tracheoesophageal fistula increase the risk of anastomotic leak? Abstracts: abstract_id: PUBMED:27228265 Does continuous positive airway pressure for extubation in congenital tracheoesophageal fistula increase the risk of anastomotic leak? A retrospective cohort study. Aim: Immediate post-operative care of tracheoesophageal fistula (TEF) and oesophageal atresia (EA) requires mechanical ventilation. Early extubation is preferred, but subsequent respiratory distress may warrant re-intubation. Continuous positive airway pressure (CPAP) is a well-established modality to prevent extubation failures in preterm infants. However, it is not favoured in TEF/EA, because of the theoretical risk of oesophageal anastomotic leak (AL). The aim of this study was to find out if post-extubation CPAP is associated with increased risk of AL. Methods: Retrospective cohort study (2007-2014). Results: Fifty-one infants underwent primary repair in the newborn period. Median age at surgery was 24 h (interquartile range: 12, 24). In the post-extubation period, 10 received CPAP, whereas 41 did not. The median post-operative day at the commencement of CPAP was 2.5 days (interquartile range: 1, 6 days). Zero out of 10 in the CPAP group and 4/41 in the 'no CPAP' group developed AL on routine post-operative contrast studies (P = 0.57). Zero out of 10 in the CPAP group and 1/41 in the 'no CPAP group' developed recurrence of TEF necessitating re-surgery (P = 1.00). The neonate with recurrent fistula also had coarctation of aorta and needed protracted hospitalisation of 6 months, mainly because of the recurrence of TEF. Conclusion: The use of CPAP in the immediate post-extubation period after corrective surgery for TEF/EA appears to be safe and may not be associated with increased risk of AL or recurrence of the fistula. Information from other centres, surveys and large databases is needed to define the benefits and risks of use of CPAP in these infants. abstract_id: PUBMED:18084992 Supplemental jet ventilation in conscious patients following major oesophageal surgery. Intensive care unit patients are at particular risk of respiratory failure after major abdominal surgery. Non-invasive ventilation or application of continuous positive airway pressure through a face mask may stabilise respiratory function and avoid the need for endotracheal re-intubation. However; there are various contraindications to non-invasive ventilation and/or tracheal re-intubation, such as recent oesophageal anastomosis, anastomotic leakage or tracheal stenting for tracheo-oesophageal fistula. A specific management strategy consisting of continuous intratracheal jet ventilation to support spontaneous respiratory function is described in two patients with contraindications to non-invasive ventilation or mask continuous positive airway pressure after major oesophageal surgery. abstract_id: PUBMED:30814037 Postoperative noninvasive ventilation and complications in esophageal atresia-tracheoesophageal fistula. Purpose: This study examines the impact of postoperative noninvasive ventilation strategies on outcomes in esophageal atresia-tracheoesophageal fistula (EA-TEF) patients. Methods: A single center retrospective chart review was conducted on all neonates followed at the EA-TEF Clinic from 2005 to 2017. Primary outcomes were: survival, anastomotic leak, stricture, pneumothorax, and mediastinitis. 
Statistical significance was determined using Chi-square and logistic regression (p ≤ .05). Results: We reviewed 91 charts. Twenty-five infants (27.5%) were bridged with postextubation noninvasive ventilation (15 on Continuous Positive Airway Pressure (CPAP), 5 on Noninvasive Positive Pressure Ventilation (NIPPV), and 14 on High-Flow Nasal Cannula (HFNC)). Overall, 88 (96.7%) patients survived, 25 (35.7%) had a stricture, 14 (20%) had anastomotic leak, 9 (12.9%) had a pneumothorax, and 4 (5.7%) had mediastinitis. Use of NIPPV was associated with increased risk of mediastinitis (P = .005). Use of HFNC was associated with anastomotic leak (P = .009) and mediastinitis (P = .036). Conclusions: These data suggest that postoperative noninvasive ventilation techniques are associated with a significantly higher risk of anastomotic leak and mediastinitis. Further prospective research is needed to guide postoperative ventilation strategies in this population. Type Of Study: Retrospective study. Level Of Evidence: IV. abstract_id: PUBMED:36214334 Risk factors for adverse outcomes following surgical repair of esophageal atresia. A retrospective cohort study. Esophageal atresia (EA) is a life-threatening congenital malformation of the esophagus. Despite considerable recent advances in perinatal resuscitation and neonatal care, EA remains an important cause of mortality and morbidity, especially in low-income countries. The aim of this study was to assess risk factors for adverse outcomes following surgical repair of EA at a single center in Tunisia. We performed a retrospective analysis using medical records of neonates with surgical management of EA at our institution from 1 January 2007 to 31 December 2021. In total, 88 neonates were included with a mortality rate of 25%. There were 29 girls and 59 boys. The diagnosis of EA was suspected prenatally in 19 patients. The most common associated anomalies were congenital heart diseases. Prematurity, low birth weight, outborn birth, age at admission >12 hours, congenital heart disease, postoperative sepsis, and anastomotic leak were risk factors for mortality following surgical repair of EA. Anastomotic tension was the only factor associated with short-term complications and the occurrence of short-term complications was predictive of mid-term complications. This study provides physicians and families with contemporary information regarding risk factors for adverse outcomes following surgical repair of EA. Thus, any effort to reduce these risk factors would be critical to improving patient outcomes and reducing cost. Future multi-institutional studies are needed to identify, investigate, and establish best practices and clinical care guidelines for neonates with EA. abstract_id: PUBMED:34823265 Anastomotic Stricture in End-to-End Anastomosis-Risk Factors in a Series of 261 Patients with Esophageal Atresia. Aim: To assess the risk factors for anastomotic stricture (AS) in end-to-end anastomosis (EEA) in patients with esophageal atresia (EA). Methods: With ethical consent, hospital records of 341 EA patients from 1980 to 2020 were reviewed. Patients with less than 3 months survival (n = 30) with Gross type E EA (n = 24) and with primary reconstruction (n = 21) were excluded. Outcome measures were revisional surgery for anastomotic stricture (RSAS) and number of dilatations required for anastomotic patency without RSAS. 
The factors that were tested for risk of RSAS or dilatations were distal tracheoesophageal fistula (TEF) at the carina in C-type EA (congenital TEF [CTEF]), type A/B EA, antireflux surgery (ARS), anastomotic leakage, recurrent TEF, and Spitz group and congenital heart disease. Main Results: A total of 266 patients, Gross type A (n = 17), B (n = 3), C (n = 237), or D (n = 9) underwent EEA (early n = 240, delayed n = 26). Early anastomotic breakdown required secondary reconstruction in five patients. Of the remaining 261 patients, 17 (6.1%) had RSAS, whereas 244 patients with intact end to end required a median of five (interquartile range: 2-8) dilatations for anastomotic patency. Main risk factors for RSAS or (> 8) dilatations were CTEF, type A/B, ARS, and anastomotic leakage that increased the risk of RSAS or dilatations from 4.6- to 11-fold. Conclusion: The risk of severe AS is associated with long-gap EA, significant gastroesophageal reflux, and anastomotic leakage. abstract_id: PUBMED:27461430 Evaluation of the intraoperative risk factors for esophageal anastomotic complications after primary repair of esophageal atresia with tracheoesophageal fistula. Purpose: The aim of this study is to identify the risk factors for esophageal anastomotic stricture (EAS) and/or anastomotic leakage (EAL) after primary repair of esophageal atresia with tracheoesophageal fistula (EA/TEF) in infants. Methods: A retrospective chart review of 52 patients with congenital EA/TEF between January 2000 and December 2015 was conducted. Univariate and multivariate analyses were performed to identify the risk factors for anastomotic complications. Results: Twenty-four patients were excluded from the analysis because they had insufficient data, trisomy 18 syndrome, delayed anastomosis, or multi-staged operations; the remaining 28 were included. Twelve patients (42.9 %) had anastomotic complications. EAS occurred in 12 patients (42.9 %), and one of them had EAL (3.57 %). There was no correlation between anastomotic complications and birth weight, gestational weeks, sex, the presence of an associated anomaly, age at the time of repair, gap between the upper pouch and lower pouch of the esophagus, number of sutures, blood loss, and gastroesophageal reflux. Anastomosis under tension and tracheomalacia were identified as risk factors for anastomotic complications (odds ratio 15, 95 % confidence interval (CI) 1.53-390.0 and odds ratio 8, 95 % CI 1.33-71.2, respectively). Conclusion: Surgeons should carefully perform anastomosis under less tension to prevent anastomotic complications in the primary repair of EA/TEF. abstract_id: PUBMED:28953251 Respiratory Morbidity in Children with Repaired Congenital Esophageal Atresia with or without Tracheoesophageal Fistula. Congenital esophageal atresia with or without tracheoesophageal fistula (CEA ± TEF) is a relatively common malformation that occurs in 1 of 2500-4500 live births. Despite the refinement of surgical techniques, a considerable proportion of children experience short- and long-term respiratory complications, which can significantly affect their health through adulthood. This review focuses on the underlying mechanisms and clinical presentation of respiratory morbidity in children with repaired CEA ± TEF. The reasons for the short-term pulmonary impairments are multifactorial and related to the surgical complications, such as anastomotic leaks, stenosis, and recurrence of fistula. 
Long-term respiratory morbidity is grouped into four categories according to the body section or function mainly involved: upper respiratory tract, lower respiratory tract, gastrointestinal tract, and aspiration and dysphagia. The reasons for the persistence of respiratory morbidity to adulthood are not univocal. The malformation itself, the acquired damage after the surgical repair, various co-morbidities, and the recurrence of lower respiratory tract infections at an early age can contribute to pulmonary impairment. Nevertheless, other conditions, including smoking habits and, in particular, atopy can play a role in the recurrence of infections. In conclusion, our manuscript shows that most children born with CEA ± TEF survive into adulthood, but many comorbidities, mainly esophageal and respiratory issues, may persist. The pulmonary impairment involves many underlying mechanisms, which begin in the first years of life. Therefore, early detection and management of pulmonary morbidity may be important to prevent impairment in pulmonary function and serious long-term complications. To obtain a successful outcome, it is fundamental to ensure a standardized follow-up that must continue until adulthood. abstract_id: PUBMED:32253017 Is thoracoscopic esophageal atresia repair safe in the presence of cardiac anomalies? Background: Esophageal atresia (EA) is often associated with congenital heart disease (CHD). Repair of EA by the thoracoscopic approach places physiological stress on a newborn with CHD. This paper reviews the outcomes of infants with CHD who had undergone thoracoscopic EA repair, comparing their outcomes to those without CHD. Methods: This was a review of infants who underwent thoracoscopic EA repair from 2009 to 2017 at one institution. Operative time and outcomes were analyzed in relation to CHD status. Results: Twenty five infants underwent thoracoscopic EA repair during the study period. Seventeen (68%) had associated anomalies of whom 9 (36%) had cardiac anomalies. The mean operative time was 217 min. There was no difference in operative time between CHD and non-CHD cases (estimate 20 min longer operative time in the presence of a cardiac anomaly [95% CI -20 to 57]). Two cases were converted to open thoracotomy; both were non-CHD. There was no difference in the time to feeding, time in intensive care unit or time in hospital between CHD and non-CHD cases. Five patients developed an anastomotic leak (two CHD and three non-CHD) of which two were clinical; all were managed conservatively. There was no case of recurrent fistula. Conclusions: This pilot study did not find evidence that thoracoscopic EA repair compromised outcomes in children with congenital heart disease. A prospective multicenter study with long-term follow-up is recommended to confirm whether thoracoscopic repair in CHD is truly equivalent to the open operation. Type Of Study: Therapeutic. Level Of Evidence: Level III. abstract_id: PUBMED:30392126 The multi-disciplinary management of complex congenital and acquired tracheo-oesophageal fistulae. Aim Of The Study: Complex tracheo-oesophageal fistulae (TOF) are rare congenital or acquired conditions in children. We discuss here a multidisciplinary (MDT) approach adopted over the past 5 years. Methods: We retrospectively collected data on all patients with recurrent or acquired TOF managed at a single institution. All cases were investigated with neck and thorax CT scan. 
Other investigations included flexible bronchoscopy and bronchogram (B&B), microlaryngobronchoscopy (MLB) and oesophagoscopy. All cases were subsequently discussed in an MDT meeting on an emergent basis if necessary. Main Results: 14 patients were referred during this study period of which half had a congenital aetiology and the other half were acquired. The latter included button battery ingestions (5/7) and iatrogenic injuries during oesophageal atresia (OA) repair. Surgical repair was performed on cardiac bypass in 3/7 cases of recurrent congenital fistulae and all cases of acquired fistulae. Post-operatively, 9/14 (64%) patients suffered complications including anastomotic leak (1), bilateral vocal cord paresis (1), further recurrence (1), and mortality (1). Ten patients continue to receive surgical input encompassing tracheal/oesophageal stents and dilatations. Conclusions: MDT approach to complex cases is becoming increasingly common across all specialties and is important in making decisions in these difficult cases. The benefits include shared experience of rare cases and full access to multidisciplinary expertise. abstract_id: PUBMED:3820002 Livaditis myotomy in long-gap esophageal atresia. During 1973 to 1983, Livaditis myotomy was performed on 12 esophageal atresia patients in whom anastomosis otherwise would have been impossible. Ten patients had the usual malformation with a distal fistula, while two had an isolated atresia. Four patients belonged to Waterston's risk group A, 4 to group B, and 4 to group C. In 11 cases anastomosis was permitted by myotomy. In one patient anastomosis was impossible even after myotomy. This baby weighed 1,380 g and she also had trisomy 18 with severe cardiac and renal anomalies leading to death. One patient had a refistula and died of pneumonia; she also had a severe congenital cardiac malformation. There were no anastomotic leakages to free pleural cavity nor complications related to the myotomy. The ten survivors were followed up for a mean period of 5.4 years (range 1.6 to 11.3 years). The subjective results at last follow-up according to the criteria of Desjardins were excellent in eight and good in two patients. Esophagograms showed no strictures, but two patients had a myotomy pouch. Answer: The use of continuous positive airway pressure (CPAP) for extubation in congenital tracheoesophageal fistula (TEF) does not appear to increase the risk of anastomotic leak (AL). A retrospective cohort study found that among infants who underwent primary repair for TEF and esophageal atresia (EA), those who received CPAP post-extubation did not have a higher incidence of AL compared to those who did not receive CPAP (PUBMED:27228265). This suggests that CPAP can be safely used in the immediate post-extubation period after corrective surgery for TEF/EA without an increased risk of AL or recurrence of the fistula. However, another study examining the impact of postoperative noninvasive ventilation strategies on outcomes in EA-TEF patients found that the use of noninvasive positive pressure ventilation (NIPPV) was associated with an increased risk of mediastinitis (PUBMED:30814037). The use of high-flow nasal cannula (HFNC) was associated with anastomotic leak and mediastinitis, suggesting that certain noninvasive ventilation techniques may be associated with a higher risk of complications postoperatively. 
In summary, while CPAP does not seem to increase the risk of AL in the immediate postoperative period for TEF/EA patients, caution should be exercised with other forms of noninvasive ventilation, as they may be associated with an increased risk of complications such as anastomotic leak and mediastinitis. Further research is needed to guide postoperative ventilation strategies in this population (PUBMED:30814037).
Instruction: Is Canadian surgical residency training stressful? Abstracts: abstract_id: PUBMED:22854151 Is Canadian surgical residency training stressful? Background: Surgical residency has the reputation of being arduous and stressful. We sought to determine the stress levels of surgical residents, the major causes of stress and the coping mechanisms used. Methods: We developed and distributed a survey among surgical residents across Canada. Results: A total of 169 participants responded: 97 (57%) male and 72 (43%) female graduates of Canadian (83%) or foreign (17%) medical schools. In all, 87% reported most of the past year of residency as somewhat stressful to extremely stressful, with time pressure (90%) being the most important stressor, followed by number of working hours (83%), residency program (73%), working conditions (70%), caring for patients (63%) and financial situation (55%). Insufficient sleep and frequent call was the component of residency programs that was most commonly rated as highly stressful (31%). Common coping mechanisms included staying optimistic (86%), engaging in enjoyable activities (83%), consulting others (75%) and exercising (69%). Mental or emotional problems during residency were reported more often by women (p = 0.006), who were also more likely than men to seek help (p = 0.026), but men reported greater financial stress (p = 0.036). Foreign graduates reported greater stress related to working conditions (p < 0.001), residency program (p = 0.002), caring for family members (p = 0.006), discrimination (p < 0.001) and personal and family safety (p < 0.001) than Canadian graduates. Conclusion: Time pressure and working hours were the most common stressors overall, and lack of sleep and call frequency were the most stressful components of the residency program. Female sex and graduating from a non-Canadian medical school increased the likelihood of reporting stress in certain areas of residency. abstract_id: PUBMED:27692359 Surgical Residency Training in Developing Countries: West African College of Surgeons as a Case Study. Background: In 1904, William Halsted introduced the present model of surgical residency program which has been adopted worldwide. In some developing countries, where surgical residency training programs are new, some colleges have introduced innovations to the Halsted's original concept of surgical residency training. These include 1) primary examination, 2) rural surgical posting, and 3) submission of dissertation for final certification. Study Design: Our information was gathered from the publications on West African College of Surgeons' (WACS) curriculum of the medical schools, faculty papers of medical schools, and findings from committees of medical schools. Verbal information was also gathered via interviews from members of the WACS. Additionally, our personal experience as members and examiners of the college are included herein. We then noted the differences between surgical residency training programs in the developed countries and that of developing countries. Results: The innovations introduced into the residency training programs in the developing countries are mainly due to the emphasis placed on paper qualifications and degrees instead of performance. Conclusion: We conclude that the innovations introduced into surgical residency training programs in developing countries are the result of the misconception of what surgical residency training programs entail. 
abstract_id: PUBMED:27452315 Abortion training in Canadian obstetrics and gynecology residency programs. Objective: To evaluate the current state of abortion training in Canadian Obstetrics and Gynecology residency programs. Study Design: Surveys were distributed to all Canadian Obstetrics and Gynecology residents and program directors. Data were collected on inclusion of abortion training in the curriculum, structure of the training and expected competency of residents in various abortion procedures. Results: We distributed and collected surveys between November 2014 and May 2015. In total, 301 residents and 15 program directors responded, giving response rates of 55% and 94%, respectively. Based on responses by program directors, half of the programs had "opt-in" abortion training, and half of the programs had "opt-out" abortion training. Upon completion of residency, 66% of residents expected to be competent in providing first-trimester surgical abortion in an ambulatory setting, and 35% expected to be competent in second-trimester surgical abortion. Overall, 15% of residents reported that they were not aware of or did not have access to abortion training within their program, and 69% desired more abortion training during residency. Conclusion: Abortion training in Canadian Obstetrics and Gynecology residency programs is inconsistent, and residents desire more training in abortion. This suggests an ongoing unmet need for training in this area. Policies mandating standardized abortion training in obstetrics and gynecology residency programs are necessary to improve delivery of family planning services to Canadian women. Implications: Abortion training in Canadian Obstetrics and Gynecology residency programs is inconsistent, does not meet resident demand and is unlikely to fulfill the Royal College of Physicians and Surgeons of Canada objectives of training in the specialty. abstract_id: PUBMED:37580585 Expectations of surgical residents for their residency training. Background: The young generation of surgeons sets new requirements for educational and working conditions. This is frequently interpreted as a lack of motivation to perform; however, it is instead a demand for high-quality training with the aim of acquiring competence in the operative and perioperative setting as well as for responsible working time models. Objective: The aim of this article is to present expectations of surgical residents regarding residency training under current influencing factors, such as a high workload, the new training regulations or expected changes in the hospital landscape and to identify options for optimizing training and working conditions. Material And Methods: In addition to an extensive literature search, this article is based on published opinions and survey results as well as lectures and discussions at congresses held in the past year. Results And Discussion: To ensure a modern high-quality education and to maintain enthusiasm for surgery, the need for adjustments and innovation must be recognized and long-requested modifications to established structures must be implemented. Flexible working models, the structured and transparent implementation of surgical residency training and modern training units, long-term planning of training curricula and new personnel structures with an expansion of teaching and feedback culture are options for improvement. 
abstract_id: PUBMED:37081373 Spirituality and Religion in Canadian Psychiatric Residency Training: Follow-up Survey of Canadian Psychiatry Residency Programs. Objective: This study assesses the availability and nature of psychiatry resident training in religion and spirituality across Canada. Evidence shows that religious and spiritual topics are important to psychiatric patients and that psychiatrist competence in approaching these topics is correlated to whether they have had previous training in them. Prior studies have shown a lack of training in religion and spirituality in Canadian psychiatry programs and recommended incorporation into psychiatry residency curricula. Method: A survey was conducted, asking questions about the amount and type of training in religion and spirituality that was accessible to psychiatry residents in the 17 psychiatry residency programs in Canada. One response was sought from each institution by reaching out to the institutions' program directors and requesting that a knowledgeable faculty member complete the survey. Results: Out of 14 responding psychiatric residency programs, 2 reported no training opportunities in religion or spirituality, 4 reported only voluntary training opportunities that were largely resident directed, and 8 reported mandatory training. Conclusions: The number of Canadian psychiatry residency programs providing mandatory training in religion and spirituality has increased since the prior published survey in 2003 and there are fewer programs reporting no training at all. However, overall, Canadian psychiatry institutions still place less emphasis on religious/spiritual education than recommended by the international psychiatric community. Several Canadian institutions report well-received implementation of curricula on religion and spirituality that could inform other Canadian institutions. abstract_id: PUBMED:31920440 The impact of surgical training on the practice of recently graduated ophthalmologists at Riyadh's ophthalmology residency program. Purpose: To evaluate how well the training residency program prepared recent graduates to practice comprehensive ophthalmology with special focus on surgical competency. Methods: This is a cross-sectional study that included Ophthalmologists who graduated from Riyadh ophthalmology residency program between the years 2002-2012. A total of 126 graduates were invited through e-mails and electronic social media platforms to anonymously complete an electronic survey. The survey included questions that aim to assess the surgical competency of graduated ophthalmologists in doing various surgical procedures that were among the requirements of residency training. Results: Ninety participants in the mean age of 38.7 years completed the survey. The majority of respondents (93%) joined fellowship programs and around half of them sub-specialized in anterior segment. More than half (55.6%) of the respondents reported that the acquired surgical skills during residency training were adequate. By the end of the residency period, the respondents' competency in doing extra capsular cataract extraction was better than phacoemulsification while 52% of them reported incompetence in both glaucoma and strabismus surgeries whereas the majority were incompetent in oculoplastics' procedures (e.g. entropion repair). However, the majority felt competent in doing primary repair, minor and laser procedures. Lack of exposure was the major cause of such incompetency. 
Conclusion: This self-reported survey showed that the lack of adequate surgical exposure during residency training was the main reason of incompetency. This resulted in reduction of ophthalmologists' future practice of surgical procedures outside the scope of their sub-specialty. This emphasizes that physicians mainly practice what they surgically acquire during their fellowship training. abstract_id: PUBMED:30783619 Position Paper From the Association of Pathology Chairs: Surgical Pathology Residency Training. Training in surgical pathology specimen dissection and microscopic diagnosis is an integral part of pathology residency training, as surgical pathology is one of the defining activities of most pathologists. The Accreditation Council for Graduate Medical Education and the American Board of Pathology policies delineate guidelines and requirements for residency training. Both the ACGME and ABP require that residents are ready for "independent practice" upon completion of training (ACGME) and for board eligibility (ABP). This position paper, developed through a consensus process involving the Association of Pathology Chairs, including the Program Directors and Graduate Medical Education committee, expands on these guidelines and the importance of gross dissection as a part of training. abstract_id: PUBMED:25278788 Evaluation of the orthopedic residency training program in Saudi Arabia and comparison with a selected Canadian residency program. Objective: The primary aim of the present study was to assess the quality of the Saudi Orthopedic Residency Program. Methodology: As a comparator, a cross-sectional survey involving 76 Saudi residents from different training centers in Saudi Arabia, namely Riyadh, Jeddah, Medina, Abha, and Dammam, and 15 Canadian. Results: The results showed that Canadian residents read more peer-reviewed, scholarly articles compared with Saudi residents (P=0.002). The primary surgical role for residents was to hold retractors during surgery. The survey respondents strongly supported the ability to recommend removal of incompetent trainers. Saudi trainees were more apprehensive of examinations than Canadian trainees (P<0.0001). Most residents preferred studying multiple-choice questions before examinations. Saudi and Canadian participants considered their programs to be overcrowded. Unlike Canadian participants, Saudi trainees reported an inadequate level of training (P<0.0001). Conclusion: Educational resources should be readily accessible and a mentorship system monitoring residents' progress should be developed. The role of the resident must be clearly defined and resident feedback should not be ignored. Given the importance of mastering basic orthopedic operative skills for residents, meaningful remedial action should be taken with incompetent trainers. abstract_id: PUBMED:35071646 A nationwide cross-sectional study to assess the impact of COVID-19 on surgical residency programs in India. Background: The COVID-19 pandemic with its plenitude of hardships has been a challenge for residents in training. Besides the fear of contracting the disease, the complete reconfiguration of hospital services has severely affected the surgical residency programs across India. The current study highlights the lacunae that have arisen in the residency programs and design appropriate solutions to reframe the remaining part of the surgical training. 
Materials And Methods: The present study is an observational study based on a questionnaire survey done in November 2020 aimed at gauging the mood and perspective of residents across India pursuing surgical residency programs on how the pandemic has impacted various domains of their residency namely surgical training, clinical skills, research, academic curriculum, and mental health. The analysis was performed using an Internet-based program. Results: Seventy-eight percent of residents felt that they have lost crucial months of surgical training. Impact on clinical skills was reported by 73.9% of residents. Almost half of the residents (49.1%) believed that there has been a definite increase in research activities and 81.7% believed that the academic curriculum was affected during the past 6 months. Furthermore, 86% of respondents admitted to have had a profound impact on their mental health as a result of the pandemic. Conclusion: Formulating reform policies in the current residency programs is paramount to train the current and future generation of surgeons. Whereas personal safety and wellness of the residents is paramount during these times, residency training via digital medical education and various other innovative ways needs to continue. abstract_id: PUBMED:28004190 Systematic review of the implementation of simulation training in surgical residency curriculum. Purpose: We reviewed the literature regarding the specific methods and strategies for implementing simulation-based training into the modern surgical residency curriculum. Residency programs are still struggling with how best to implement it into their curricula from a practical viewpoint. Methods: A systematic review was performed using Ovid MEDLINE, EMBASE, PubMed, PsycINFO, Web of Science, and other resources for studies involving the use of simulation for technical skills training in the surgical residency curriculum. Studies were selected based on the integration of simulation into the curriculum and/or a description of the details of implementation and the resources required. Results: In total, 2533 unique citations were retrieved based on this search, and 31 articles met the inclusion criteria. Most simulators were focused on laparoscopic procedures, and training occurred most often in a skills lab. The assessment of skills consisted mostly of speed of task completion. Only 4 studies addressed issues of cost, and 6 programs mentioned human resources without any mention of skills center personnel or administrative support. Conclusions: All of the studies described the nature of the simulation training, but very few commented on how it was actually implemented and what was needed from organizational, administrative and logistical perspectives. Answer: Yes, Canadian surgical residency training is considered stressful. A survey among surgical residents across Canada found that 87% of respondents reported most of the past year of residency as somewhat stressful to extremely stressful. The major stressors identified were time pressure (90%), number of working hours (83%), residency program (73%), working conditions (70%), caring for patients (63%), and financial situation (55%). Insufficient sleep and frequent call were the most commonly rated highly stressful components of the residency program (PUBMED:22854151).
Instruction: Is hand washing really needed in an intensive care unit? Abstracts: abstract_id: PUBMED:7600829 Is hand washing really needed in an intensive care unit? Objectives: To determine whether a rigorous antiseptic hand washing of bare hands with 4% chlorhexidine and alcohol reduced fingertip microbial colonization as compared with the use of boxed, clean, nonsterile latex gloves. In addition, to investigate if aseptic donning technique and/or a prior hand washing would reduce the level of glove contamination. Design: Prospective, randomized, crossover design, with each subject serving as his/her own control. Setting: University intensive care unit. Subjects: Forty-three intensive care nurses. Interventions: The fingertips of 20 nurses were cultured before and after a strict antiseptic hand washing and before and after the routine and aseptic donning of sterile gloves. Subsequently, the fingertips of 43 nurses were cultured before and after the casual donning of nonsterile gloves over unwashed hands and before and after a strict antiseptic hand washing. Fingertip cultures were plated directly on agar, incubated for 24 hrs, and counted and recorded as the number of colony-forming units (cfu) for each hand. Different colony types were then subcultured. Measurements And Main Results: Hand washing with antiseptic reduced colonization from 84 to 2 cfu (p < .001). The proportion of cases with ≥ 200 cfu/hand was reduced from 30% to 9%. Aseptic or casual donning of sterile gloves, with or without prior antiseptic hand washing, resulted in consistently low glove counts between 0 and 1.25 cfu. Nonsterile gloves casually donned over washed or unwashed bare hands diminished the bioburden to 2.17 and 1.34 cfu, respectively. No qualitative difference was found in the microorganisms recovered from gloved or bare hands. Conclusions: Antiseptic hand washing and the use of nonsterile gloves over unwashed hands confer similar reductions in the number of microorganisms. There is no additional benefit with the use of aseptic donning technique, prior antiseptic hand washing, or the use of individually packaged sterile gloves. abstract_id: PUBMED:30189013 Addressing Hand Hygiene Compliance in a Low-Resource Neonatal Intensive Care Unit: a Quality Improvement Project. Objective: Our goal for this study was to quantify healthcare provider compliance with hand hygiene protocols and develop a conceptual framework for increasing hand hygiene compliance in a low-resource neonatal intensive care unit. Materials And Methods: We developed a 3-phase intervention that involved departmental discussion, audit, and follow-up action. A 4-month unobtrusive audit during night and day shifts was performed. The audit results were presented, and a conceptual framework of barriers to and solutions for increasing hand hygiene compliance was developed collectively. Results: A total of 1308 hand hygiene opportunities were observed. Among 1227 planned patient contacts, hand-washing events (707 [58.6%]), hand rub events (442 [36%]), and missed hand hygiene (78 [6.4%]) events were observed. The missed hand hygiene rate was 20% during resuscitation. Missed hand hygiene opportunities occurred 3.2 times (95% confidence interval, 1.9-5.3 times) more often during resuscitation procedures than during planned contact and 6.14 times (95% confidence interval, 2.36-16.01 times) more often when providers moved between patients. 
Structural and process determinants of hand hygiene noncompliance were identified through a root-cause analysis in which all members of the neonatal intensive care unit team participated. The mean hand-washing duration was 40 seconds. In 83% of cases, drying hands after washing was neglected. Hand recontamination after hand-washing was seen in 77% of the cases. Washing up to elbow level was observed in 27% of hand-wash events. After departmental review of the study results, hand rubs were placed at each bassinet to address these missed opportunities. Conclusions: Hand hygiene was suboptimal during resuscitation procedures and between patient contacts. We developed a conceptual framework for improving hand hygiene through a root-cause analysis. abstract_id: PUBMED:11918115 Limited impact of sustained simple feedback based on soap and paper towel consumption on the frequency of hand washing in an adult intensive care unit. Objective: To determine whether hand washing would increase with sustained feedback based on measurements of soap and paper towel consumption. Design: Prospective trial with a nonequivalent control group. Setting: Open multibed rooms in the Omaha Veterans Affairs Medical Center's Surgical Intensive Care Unit (SICU) and Medical Intensive Care Unit (MICU). Subjects: Unit staff. Intervention: Every weekday from May 26 through December 8, 1998, we recorded daytime soap and paper towel consumption, nurse staffing, and occupied beds in the SICU (intervention unit) and the MICU (control unit) and used these data to calculate estimated hand washing episodes (EHWEs), EHWEs per occupied bed per hour, and patient-to-nurse ratios. In addition, from May 26 through June 26 (baseline period) and from November 2 through December 8 (follow-up period), live observers stationed daily for random 4-hour intervals in the MICU and the SICU counted actual hand washing episodes (CHWEs). The intervention consisted of posting in the SICU, but not in the MICU, a graph showing the weekly EHWEs per occupied bed per hour for the preceding 5 weeks. Results: Directly counted hand washing fell in the SICU from a baseline of 2.68+/-0.72 (mean +/- standard deviation) episodes per occupied bed per hour to 1.92+/-1.35 in the follow-up period. In the MICU, episodes fell from 2.58+/-0.95 (baseline) to 1.74+/-0.69. In the MICU, the withdrawal of live observers was associated with a decrease in estimated episodes from 1.36+/-0.49 at baseline to 1.01+/-0.36, with a return to 1.16+/-0.50 when the observers returned. In the SICU, a similar decrease did not persist throughout a period of feedback. Estimated hand washing correlated negatively with the patient-to-nurse ratio (r = -0.35 for the MICU, r = -0.46 for the SICU). Conclusions: Sustained feedback on hand washing failed to produce a sustained improvement. Live observers were associated with increased hand washing, even when they did not offer feedback. Hand washing decreased when the patient-to-nurse ratio increased. abstract_id: PUBMED:35114323 Hand hygiene performance in an intensive care unit before and during the COVID-19 pandemic. The current COVID-19 pandemic has heightened the focus on infection prevention in hospitals. We evaluated hand hygiene compliance with alcohol-based hand rub via electronic observation among healthcare workers in an intensive care unit from 2017 to 2020. The COVID-19 pandemic was not associated with an increase in hand hygiene compliance. abstract_id: PUBMED:9800173 Hand dermatitis in intensive care units. 
An investigation of the prevalence of occupational hand dermatitis in two intensive care units at a large teaching hospital was conducted. Information concerning the presence of occupational hand dermatitis, frequency of hand-washing, severity of the rash, aggravating conditions, history of atopy, and demographic factors (age, race, gender, and occupation) was collected via a self-administered questionnaire. The prevalence of occupational hand dermatitis was found to be 55.6% in the total reporting population of the units and was 69.7% in the most highly exposed workers (those reporting a frequency of hand-washing exceeding 35 times per shift). No relationship was found between occupational hand dermatitis and reported age, gender, race, atopic status, history of previous hand dermatitis, and duration of employment. Hand-washing frequency greater than 35 times per shift was strongly associated with occupational hand dermatitis (odds ratio = 4.13, P < 0.005). The high prevalence of occupational hand dermatitis found in this study of intensive care unit workers causes concern regarding the risk of health care workers in such units when exposed to blood-borne diseases. abstract_id: PUBMED:27720317 Developing professional habits of hand hygiene in intensive care settings: An action-research intervention. Objectives: To explore perceptions and unconscious psychological processes underlying handwashing behaviours of intensive care nurses, to implement organisational innovations for improving hand hygiene in clinical practice. Research Methodology: An action-research intervention was performed in 2012 and 2013 in the intensive care unit of a public hospital in Italy, consisting of: structured interviews, semantic analysis, development and validation of a questionnaire, team discussion, project design and implementation. Five general workers, 16 staff nurses and 53 nurse students participated in the various stages. Results: Social handwashing emerged as a structured and efficient habit, which follows automatically the pattern "cue/behaviour/gratification" when hands are perceived as "dirty". The perception of "dirt" starts unconsciously the process of social washing also in professional settings. Professional handwashing is perceived as goal-directed. The main concern identified is the fact that washing hands requires too much time to be performed in a setting of urgency. These findings addressed participants to develop a professional "habit-directed" hand hygiene procedure, to be implemented at beginning of workshifts. Conclusions: Handwashing is a ritualistic behaviour driven by deep and unconscious patterns, and social habits affect professional practice. Creating professional habits of hand hygiene could be a key solution to improve compliance in intensive care settings. abstract_id: PUBMED:17518892 Effectiveness of hand-washing teaching programs for families of children in paediatric intensive care units. Aims: The authors developed a video-centred teaching program based on social learning principles to demonstrate hand-washing technique. A comparison was made between families who viewed the video and families who were taught the same techniques with the aid of an illustrated poster in terms of compliance and improvement in hand-washing skills. Background: Nosocomial infections are a significant cause of morbidity and mortality in paediatric intensive care unit patients. Hand hygiene is considered the most important preventive action against hospital-acquired infections. 
A number of studies have shown that increased compliance with hand-washing guidelines for health-care workers leads to decreases in nosocomial infection rates. Furthermore, recommendations have been made to ensure that parents who visit their children in intensive care units wash their hands first. Study Design: Quasi-experimental time series. Compliance and accuracy measurements were collected during one to five visits following the initial teaching intervention. Methods: A total of 123 families, who visited paediatric intensive care units, were recruited and assigned to two groups - one experimental (61 families) and the other a comparison group (62). Participants in the comparison group were taught hand-washing skills using simple illustrations. A 20-item hand-washing checklist was used to examine hand-washing compliance and accuracy. Results: No significant differences were noted in terms of demographics between the two groups. Results from a general estimated equation analysis showed that families in the experimental group had higher compliance and accuracy scores at statistically significant levels. Conclusion: The video-based teaching program was effective in increasing compliance and accuracy with a hand-washing policy among families with children in intensive care units. Relevance To Clinical Practice: The education program is a simple, low-cost, low technology intervention for substantially reducing the incidence of nosocomial infection. abstract_id: PUBMED:36594651 Action research on promoting hand hygiene practices in an intensive care unit. Aim: Evaluate the intensive care acquired infections incidence and the change over time in infection practices in one intensive care unit. Design: We used an action research approach with cyclical activities. Methods: Our study included two cycles with hand hygiene observation based on the WHO's five-moments observation tool, observing hand hygiene practices, analysing the observations, and giving feedback on observations, intensive care acquired infection rates, and alcohol-based hand rub consumption. The Revised Standards for Quality Improvement Reporting Excellence is the basis for this research report describing research aimed at improving patient safety and quality of care. Results: During the study, annual alcohol-based hand rub consumption increased by 6.7 litres per 1000 patient days and observed hand hygiene compliance improved. In the first cycle of the study, there was a decrease in critical care acquired infection rates, but the improvement was not sustainable. abstract_id: PUBMED:36243174 Eye-tracking to observe compliance with hand hygiene in the intensive care unit: a randomized feasibility study. Background: Healthcare-associated infections are associated with increased patient mortality. Hand hygiene is the most effective method to reduce these infections. Despite simplification of this easy intervention, compliance with hand disinfection remains low. Current assessment of hand hygiene is mainly based on observation by hygiene specialists. The aim of this study was to investigate additional benefits of eye-tracking during the analysis of hand hygiene compliance of healthcare professionals in the intensive care unit. 
Methods: In a simulated, randomized crossover study conducted at the interdisciplinary intensive care unit at University Hospital Zurich, Switzerland, doctors and nurses underwent eye-tracking and completed two everyday tasks (injection of 10 μg norepinephrine via a central venous line, blood removal from the central line) in two scenarios where the locations of alcoholic dispensers differed ('in-sight' and 'out-of-sight'). The primary outcomes were dwell time, revisits, first fixation duration and average fixation duration on three areas of interest (central venous line, alcohol dispenser, protective glove box) for both scenarios. Compliance with hand hygiene guidelines was analysed. Findings: Forty-nine participants (35 nurses, 14 doctors) were included in this study. Eye-tracking provided additional useful information compared with conventional observations. Dwell time, revisits, first fixation duration and average fixation duration did not differ between the two scenarios for all areas of interest. Overall compliance with recommended hand hygiene measures was low in both doctors (mean 20%) and nurses (mean 42.9%). Conclusion: Compared with conventional observations, eye-tracking offered additional helpful insights and provided an in-depth analysis of gaze patterns during the recording of hand hygiene compliance in the intensive care unit. abstract_id: PUBMED:34483612 Improving Hand Hygiene Compliance of Intensive Care Unit by Using Pender's Model. Background: Hand hygiene (HH) compliance is an effective behavior in controlling hospital-acquired infection because the hand is the main means of transmitting infections in patient-medical staff communication as well as the inanimate environment. This study aimed to explore the effect of applying Pender's Health Promotion Model on the HH compliance of intensive care unit staff. Methods: This quasi-experimental study with a single research group was conducted from January to July in 2019. The required data were collected from 90 staff of the intensive care units of Imam Reza Hospital in Mashhad, Iran through 1796 and 2343 opportunity of monitoring before and after the intervention. The data collection instruments were a standard HH observation form and a researcher-made HH questionnaire in the light of Pender's health promotion model. The data were statistically analyzed in SPSS using Paired-samples T-test and Chi-squared test. Results: The mean age of the 90 included participants was 35.92 (± 6.5) years and the mean length of their work experience was 10 (±1.5). The hand hygiene index rose from 23% before the intervention to 41.4% after the intervention (p=0.001). Moreover, statistically significant differences were found in moments after touching surroundings (p=0.001), before and after touching a patient (p=0.001), and also in perceived barriers (p=0.015), interpersonal influences (p=0.008) and situational influences (p<0.001). Conclusion: Pender's model showed to have improved the staff's HH compliance as a professional behavior. Answer: Yes, hand washing is indeed necessary in an intensive care unit (ICU). The studies provided highlight the importance of hand hygiene in reducing microbial colonization and preventing nosocomial infections in ICU settings. One study found that rigorous antiseptic hand washing with 4% chlorhexidine and alcohol significantly reduced fingertip microbial colonization compared to unwashed hands. 
It also showed that the use of nonsterile gloves over unwashed hands could similarly reduce the number of microorganisms, suggesting that hand washing and the use of gloves are both effective in maintaining hand hygiene in the ICU (PUBMED:7600829). Another study emphasized the suboptimal hand hygiene observed during resuscitation procedures and between patient contacts in a low-resource neonatal ICU. The study developed a conceptual framework for improving hand hygiene through a root-cause analysis, indicating the critical need for hand hygiene compliance (PUBMED:30189013). Research also indicates that hand hygiene practices are influenced by various factors, including the presence of live observers, patient-to-nurse ratios, and sustained feedback based on soap and paper towel consumption. However, sustained feedback alone did not produce a sustained improvement in hand washing, suggesting that more comprehensive strategies are needed to ensure consistent hand hygiene compliance (PUBMED:11918115). The COVID-19 pandemic, despite increasing the focus on infection prevention, did not lead to an increase in hand hygiene compliance in one study, highlighting the ongoing challenges in maintaining hand hygiene practices (PUBMED:35114323). Occupational hand dermatitis, which can be a consequence of frequent hand washing, was found to be prevalent among ICU workers, particularly those washing their hands more than 35 times per shift. This underscores the need for hand hygiene practices that also consider the skin health of healthcare workers (PUBMED:9800173). Interventions such as action-research projects and educational programs, including video-centered teaching and the use of Pender's Health Promotion Model, have been shown to improve hand hygiene compliance among ICU staff and families of children in pediatric ICUs (PUBMED:27720317, PUBMED:17518892, PUBMED:34483612). In summary, hand washing is a critical practice in ICUs to prevent infections and ensure patient safety. Various strategies and interventions are necessary to promote and sustain hand hygiene compliance among healthcare workers in these settings.
Instruction: Preoperative assessment of renal vascular anatomy for donor nephrectomy: Is CT superior to MRI? Abstracts: abstract_id: PUBMED:21193143 Preoperative assessment of renal vascular anatomy for donor nephrectomy: Is CT superior to MRI? Background: computed tomography angiography (CTA) and magnetic resonance angiography (MRA) are both used in the preoperative assessment of vascular anatomy before donor nephrectomy. Our objective was to determine retrospectively and to compare the sensitivity of CTA and MRA imaging in preoperative renal vascularisation in living kidney donors. Patients And Methods: between 1999 and 2007, 42 kidney donors were assessed in our center: 27 by MRA, 10 by CTA, and five by both techniques. Images were interpreted using multiplanar reconstructions. Results were compared retrospectively with peroperative findings; discordant cases were re-examined by an experienced radiologist. Numbers of vessels detected with imaging methods was compared with numbers actually found at the operating time. Results: MRA showed 35/43 arteries (Se 81.4 %) and 33/34 veins (Se 97.1 %), and CTA showed 18/18 arteries (Se 100 %) and 15/16 veins (Se 93.8 %). The presence of multiple arteries was detected in only one third of cases (3/9) on MRI scans; this difference was statistically significant. The missed arteries were not detected on second examination of the MRI scans with the knowledge of peroperative findings. Conclusion: MRA is less sensitive than CTA for preoperative vascularisation imaging in living renal donors, especially in the detection of multiple renal arteries. abstract_id: PUBMED:32441449 Assessment of renal vascular anatomy on multi-detector computed tomography in living renal donors. Background: Prospective renal donors are a select population of healthy individuals who have been thoroughly screened for significant comorbidities before they undergo multi-detector computed tomography angiography and urography (MDCT). Purpose: The aim of this study is to describe the anatomy of potential living renal donor subjects using MDCT over a 2-year period. The primary objective is to identify the renal arterial anatomy variations, with a secondary objective of identifying venous and collecting system/ureteric variations. Materials And Methods: A prospective study was performed of prospective living kidney transplant donors at a national kidney transplant centre. Study inclusion criteria were all potential kidney donors who underwent MDCT during the living-donor assessment process over a 2-year period. Results: Our cohort included 160 potential living donors who had MDCT; mean age was 45.6 years (range, 21-71). Two renal arteries were identified on the left in 40 subjects (25%) and on the right in 42 subjects (26.3%). A total of 3 or more renal arteries were identified on the left in 7 subjects (4.4%) and on the right in 7 subjects (4.4%). On the left, the distances between multiple arteries ranged from 1 mm to 43 mm, and on the right, they were 1 mm to 84 mm. Conclusions: Conventionally described anatomy was only seen on the left side in 70.6% and 69.4% on the right side of subjects. Single renal arteries are seen in 54.4% showing that conventional anatomy has a relatively low incidence. abstract_id: PUBMED:26896221 Renal Pretransplantation Work-up, Donor, Recipient, Surgical Techniques. Renal transplant is the single best treatment of end-stage renal disease. Computed tomography (CT) is an excellent method for the evaluation of potential renal donors and recipients. 
Multiphase CT is particularly useful because of detailed evaluation of the kidneys, including the vascular anatomy and the collecting system. MR imaging has no ionizing radiation, but is limited for stone detection, making it a less preferred method of evaluating donors. Preoperative knowledge of the renal vascular anatomy is essential to minimize risks for donors. Imaging evaluation of recipients is also necessary for vascular assessment and detection of incidental findings. abstract_id: PUBMED:37950223 Preoperative assessment of peripheral vascular invasion of pancreatic ductal adenocarcinoma based on high-resolution MRI. Objectives: Preoperative imaging of vascular invasion is important for surgical resection of pancreatic ductal adenocarcinoma (PDAC). However, whether MRI and CT share the same evaluation criteria remains unclear. This study aimed to compare the diagnostic accuracy of high-resolution MRI (HR-MRI), conventional MRI (non-HR-MRI) and CT for PDAC vascular invasion. Methods: Pathologically proven PDAC with preoperative HR-MRI (79 cases, 58 with CT) and non-HR-MRI (77 cases, 59 with CT) were retrospectively collected. Vascular invasion was confirmed surgically or pathologically. The degree of tumour-vascular contact, vessel narrowing and contour irregularity were reviewed respectively. Diagnostic criteria 1 (C1) was the presence of all three characteristics, and criteria 2 (C2) was the presence of any one of them. The diagnostic efficacies of different examination methods and criteria were evaluated and compared. Results: HR-MRI showed satisfactory performance in assessing vascular invasion (AUC: 0.87-0.92), especially better sensitivity (0.79-0.86 vs. 0.40-0.79) than that with non-HR-MRI and CT. HR-MRI was superior to non-HR-MRI. C2 was superior to C1 on CT evaluation (0.85 vs. 0.79, P = 0.03). C1 was superior to C2 in the venous assessment using HR-MRI (0.90 vs. 0.87, P = 0.04) and in the arterial assessment using non-HR-MRI (0.69 vs. 0.68, P = 0.04). The combination of C1-assessed HR-MRI and C2-assessed CT was significantly better than that of CT alone (0.96 vs. 0.86, P = 0.04). Conclusions: HR-MRI more accurately assessed PDAC vascular invasion than conventional MRI and may contribute to operative decision-making. C1 was more applicable to MRI scans, and C2 to CT scans. The combination of C1-assessed HR-MRI and C2-assessed CT outperformed CT alone and showed the best efficacy in preoperative examination of PDAC. abstract_id: PUBMED:25489128 Preoperative CT evaluation of potential donors in living donor liver transplantation. Living donor liver transplantation is an effective, life sustaining surgical treatment in patients with end-stage liver disease and a successful liver transplant requires a close working relationship between the radiologist and the transplant surgeon. There is extreme variability in hepatic vascular anatomy; therefore, preoperative imaging of potential liver donors is crucial not only in donor selection but also helps the surgeons in planning their surgical approach. In this article, we elaborate important aspects in evaluation of potential liver donors on multi-detector computed tomography (MDCT) and the utility of MDCT in presurgical assessment of the hepatic parenchyma, relevant hepatic vascular anatomy and segmental liver volumes. abstract_id: PUBMED:18953477 Multidetector computed tomography for preoperative evaluation of vascular anatomy in living renal donors. 
Background: Currently, multidetector computed tomographic (MDCT) angiography has become a noninvasive alternative imaging modality to catheter renal angiography for the evaluation of renal vascular anatomy in living renal donors. In this study, we investigated the diagnostic accuracy of 16-slice MDCT in the preoperative assessment of living renal donors. Methods: Fifty-nine consecutive living renal donors (32 men, 27 women) underwent MDCT angiography followed by open donor nephrectomy. All MDCT studies were performed by using a 16-slice MDCT scanner with the same protocol consisting of arterial and nephrographic phases followed by conventional abdominal radiography. The MDCT images were assessed retrospectively for the number and branching pattern of the renal arteries and for the number and presence of major or minor variants of the renal veins. The results were compared with open surgical results. Results: The sensitivity and specificity of MDCT for the detection of anatomic variants of renal arteries including the accessory arteries (n = 9), early arterial branching (n = 7) and major renal venous anomalies including the accessory renal veins (n = 3), late venous confluence (n = 4), circumaortic (n = 2) or retroaortic (n = 3) left renal veins were 100%. However, the sensitivity for identification of minor venous variants was 79%. All of three ureteral duplications were correctly identified at excretory phase conventional abdominal radiography. Conclusion: Sixteen-slice MDCT is highly accurate for the identification of anatomic variants of renal arteries and veins. Dual-phase MDCT angiography including arterial and nephrographic phases followed by conventional abdominal radiography enables complete assessment of renal donors without significant increase of radiation dose. However, the evaluation of minor venous variants may be problematic because of their small diameters and poor opacification. abstract_id: PUBMED:19266308 Diagnostic accuracy of a volume-rendered computed tomography movie and other computed tomography-based imaging methods in assessment of renal vascular anatomy for laparoscopic donor nephrectomy. To evaluate the diagnostic accuracy of computed tomography (CT)-based imaging methods for assessing renal vascular anatomy, imaging studies, including standard axial CT, three-dimensional volume-rendered CT (3DVR-CT), and a 3DVR-CT movie, were performed on 30 patients who underwent laparoscopic donor nephrectomy (10 right side, 20 left side) for predicting the location of the renal arteries and renal, adrenal, gonadal, and lumbar veins. These findings were compared with videos obtained during the operation. Two of 37 renal arteries observed intraoperatively were missed by standard axial CT and 3DVR-CT, whereas all arteries were identified by the 3DVR-CT movie. Two of 36 renal veins were missed by standard axial CT and 3DVR-CT, whereas 1 was missed by the 3DVR-CT movie. In 20 left renal hilar anatomical structures, 20 adrenal, 20 gonadal, and 22 lumbar veins were observed during the operation. Preoperatively, the standard axial CT, 3DVR-CT, and 3DVR-CT movie detected 11, 19, and 20 adrenal veins; 13, 14, and 19 gonadal veins; and 6, 11, and 15 lumbar veins, respectively. 
Overall, of 135 renal vascular structures, the standard axial CT, 3DVR-CT, and 3DVR-CT movie accurately detected 99 (73.3%), 113 (83.7%), and 126 (93.3%) vessels, respectively, which indicated that the 3DVR-CT movie demonstrated a significantly higher detection rate than other CT-based imaging methods (P < 0.05). The 3DVR-CT movie accurately provides essential information about the renal vascular anatomy before laparoscopic donor nephrectomy. abstract_id: PUBMED:29786330 Research development of vascular anatomy and preoperative design technology of anterolateral thigh flap Objective: To summarize the present status and progress of vascular anatomy and preoperative design technology of the anterolateral thigh flap. Methods: The literature on the vascular anatomy and preoperative design technology of the anterolateral thigh flap was extensively reviewed, analyzed, and summarized. Results: The vascular anatomy of the anterolateral thigh flap has been reported by numerous researchers, with emphasis on the perforators' location, origin, course, and variation in number. Meanwhile, the variation of the descending branch, oblique branch, and lateral circumflex femoral artery has also been widely reported. Preoperative design technology of the anterolateral thigh flap includes hand-held Doppler, Color Doppler, CT angiography (CTA), magnetic resonance angiography, digital subtraction angiography, and digital technology, among which the hand-held Doppler is most widely used and CTA is the most ideal, but each method has its own advantages and disadvantages. Conclusions: The vascular anatomy of the anterolateral thigh flap shows multiple variations. Though all of these preoperative design technologies can offer strong support for anterolateral thigh flap surgery, a simple, quick, precise, and noninvasive technology is the direction of further research. abstract_id: PUBMED:25489130 MDCT evaluation of potential living renal donor, prior to laparoscopic donor nephrectomy: What the transplant surgeon wants to know? As Laparoscopic Donor Nephrectomy (LDN) offers several advantages for the donor such as lesser post-operative pain, fewer cosmetic concerns and faster recovery time, there is a growing global trend towards LDN as compared to open nephrectomy. Comprehensive pre-LDN donor evaluation includes assessment of renal morphology, including the pelvi-calyceal and vascular systems. Apart from donor selection, evaluation of the regional anatomy allows precise surgical planning. Due to limited visualization during laparoscopic renal harvesting, detailed pre-transplant evaluation of regional anatomy, including the renal venous anatomy, is of utmost importance. MDCT is the modality of choice for pre-LDN evaluation of potential renal donors. Apart from appropriate scan protocol and post-processing methods, a detailed understanding of surgical techniques is essential for the radiologist for accurate image interpretation during pre-LDN MDCT evaluation of potential renal donors. This review article describes MDCT evaluation of the potential living renal donor prior to LDN, with emphasis on scan protocol, post-processing methods and image interpretation. The article lays special emphasis on surgical perspectives of pre-LDN MDCT evaluation and addresses important points which transplant surgeons want to know.
abstract_id: PUBMED:19817306 Vascular anatomy of donor and recipient in living kidney transplantation Objective: To review the vascular anatomy of the donor and the recipient for the living kidney transplantation. Methods: The recent literature about the vessels of donor and recipient in clinical applications was extensively reviewed. Results: The pertinent vascular anatomy of the donor and recipient was essential for the screening of the proper candidates, surgical planning and long-term outcome. Early branching and accessory renal artery of the donor were particularly important to deciding the side of nephrectomy, surgical technique and anastomosing pattern, and their injuries were the most frequent factor of the conversion from laparoscopic to open surgery. With increase of laparoscopic nephrectomy in donors, accurate venous anatomy was paid more and more attention to because venous bleeding could also lead to conversion to open nephrectomy. Multidetector CT (MDCT) could supplant the conventional excretory urography and renal catheter angiography and could accurately depict the donors' vessels, vascular variations. In addition, MDCT can excellently evaluate the status of donor kidney, collecting system and other pertinent anatomy details. Conclusion: Accurate master of related vascular anatomy can facilitate operation plan and success of operation and can contribute to the rapid development of living donor kidney transplantation. MDCT has become the choice of preoperative one-stop image assessment for living renal donors. Answer: Based on the provided abstracts, CT appears to be superior to MRI for preoperative assessment of renal vascular anatomy in living kidney donors. A retrospective study comparing the sensitivity of CTA and MRA imaging in preoperative renal vascularization in living kidney donors found that MRA is less sensitive than CTA, especially in the detection of multiple renal arteries (PUBMED:21193143). The study reported that MRA showed a sensitivity of 81.4% for arteries and 97.1% for veins, while CTA showed 100% sensitivity for arteries and 93.8% for veins. The presence of multiple arteries was detected in only one-third of cases on MRI scans, and the missed arteries were not detected even on second examination of the MRI scans with the knowledge of peroperative findings. Additionally, multidetector computed tomography (MDCT) is highlighted as an excellent method for the evaluation of potential renal donors and recipients, providing detailed evaluation of the kidneys, including the vascular anatomy and the collecting system (PUBMED:26896221). MDCT angiography has become a noninvasive alternative imaging modality to catheter renal angiography for the evaluation of renal vascular anatomy in living renal donors, with high diagnostic accuracy (PUBMED:18953477). The sensitivity and specificity of MDCT for the detection of anatomic variants of renal arteries and major renal venous anomalies were reported to be 100%. Furthermore, MDCT is the modality of choice for pre-Laparoscopic Donor Nephrectomy (LDN) evaluation of potential renal donors, providing comprehensive assessment of renal morphology and vascular system, which is crucial for donor selection and precise surgical planning (PUBMED:25489130). MDCT has also been described as the choice of preoperative one-stop image assessment for living renal donors, capable of accurately depicting the donors' vessels and vascular variations (PUBMED:19817306). 
In conclusion, the evidence from the abstracts suggests that CT, particularly MDCT, is superior to MRI for preoperative assessment of renal vascular anatomy in the context of donor nephrectomy.
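As an illustration only (not taken from any of the cited studies), the sensitivity figures quoted above are simple ratios of vessels detected on imaging to vessels actually found at surgery. A minimal Python sketch reproducing the arithmetic reported in PUBMED:21193143 might look like this; the helper name is ours, not part of any study:

    # Illustrative only: reproduces the sensitivity arithmetic reported in PUBMED:21193143.
    # The detected/actual vessel counts are taken from that abstract; the helper name is ours.
    def sensitivity(detected, actual):
        # Sensitivity here = vessels seen on imaging / vessels found at operation.
        return detected / actual

    counts = {
        "MRA arteries": (35, 43),  # reported Se 81.4 %
        "MRA veins": (33, 34),     # reported Se 97.1 %
        "CTA arteries": (18, 18),  # reported Se 100 %
        "CTA veins": (15, 16),     # reported Se 93.8 %
    }

    for label, (detected, actual) in counts.items():
        print(f"{label}: {sensitivity(detected, actual):.1%}")

Running this prints 81.4%, 97.1%, 100.0% and 93.8%, matching the figures given in the abstract.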
Instruction: The adjustment of children of Australian Vietnam veterans: is there evidence for the transgenerational transmission of the effects of war-related trauma? Abstracts: abstract_id: PUBMED:30934192 Transgenerational Transmission of Trauma: Psychiatric Evaluation of Offspring of Former "Comfort Women," Survivors of the Japanese Military Sexual Slavery during World War II. "Comfort women" are survivors of sexual slavery by the Imperial Japanese Army during World War II, who endured extensive trauma including massive rape and physical torture. While previous studies have been focused on the trauma of the survivors themselves, the effects of the trauma on the offspring has never been evaluated before. In this article, we reviewed the first study on the offspring of former "comfort women" and aimed to detect the evidence of transgenerational transmission of trauma. In-depth psychiatric interviews and the Structured Clinical Interview for DSM-5 Axis I Disorders were conducted with six offspring of former "comfort women." Among the six participants, five suffered from at least one psychiatric disorder including major depressive disorder, panic disorder, posttraumatic stress disorder, adjustment disorder, insomnia disorder, somatic symptom disorder, and alcohol use disorder. Participants showed similar shame and hyperarousal symptoms as their mothers regarding stimuli related to the "comfort woman" issue. Increased irritability, problems with aggression control, negative worldview, and low self-esteem were evident in the children of mothers with posttraumatic stress disorder. Finding evidence of transgenerational transmission of trauma in offspring of "comfort women" is important. Future studies should include more samples and adopt a more objective method. abstract_id: PUBMED:11437808 The adjustment of children of Australian Vietnam veterans: is there evidence for the transgenerational transmission of the effects of war-related trauma? Objective: The presence of posttraumatic stress disorder (PTSD) in trauma survivors has been linked with family dysfunction and symptoms in their children, including lower self-esteem, higher disorder rates and symptoms resembling those of the traumatized parent. This study aims to examine the phenomenon of intergenerational transfer of PTSD in an Australian context. Method: 50 children (aged 16-30) of 50 male Vietnam veterans, subgrouped according to their fathers' PTSD status, were compared with an age-matched group of 33 civilian peers. Participants completed questionnaires with measures of self-esteem, PTSD symptomatology and family functioning. Results: Contrary to expectations, no significant differences were found between the self-esteem and PTSD symptomatology scores for any offspring groups. Unhealthy family functioning is the area in which the effect of the veteran's PTSD appears to manifest itself, particularly the inability of the family both to experience appropriate emotional responses and to solve problems effectively within and outside the family unit. Conclusion: Methodological refinements and further focus on the role of wives/mothers in buffering the impact of veterans' PTSD symptomatology on their children are indicated. Further effort to support families of Veterans with PTSD is also indicated. abstract_id: PUBMED:8109652 A model of homelessness among male veterans of the Vietnam War generation. Objective: This study explored a multifactorial model of vulnerability to homelessness among male veterans of the Vietnam war generation. 
Method: Data from 1,460 male veterans who participated in the National Vietnam Veterans Readjustment Study were used to evaluate hypotheses about the causes of homelessness grouped into four sets of sequential variables: 1) premilitary risk factors, 2) war related and non-war-related traumatic experiences, 3) lack of social support at the time of discharge from military service, and 4) postmilitary psychiatric disorder and social dysfunction. Structural equation modeling was used to explore the posited model of risk factors for homelessness. Results: Postmilitary social isolation, psychiatric disorder, and substance abuse had the strongest direct effects on homelessness, although substantial indirect effects from stressors related to being in the war zone and from premilitary conduct disorder were observed. Several premilitary factors--year of birth, childhood physical or sexual abuse, other childhood traumas, and placement in foster care during childhood--also had direct effects on homelessness. Conclusions: Individual vulnerability to homelessness is most likely due to a multiplicity of psychiatric and nonpsychiatric factors, with independent influences emerging at each of four discrete time periods. In view of this complex pattern of influences, prevention efforts directed at individuals must address a very broad range of adjustment problems. abstract_id: PUBMED:30702385 Is silence about trauma harmful for children? Transgenerational communication in Palestinian families. Style of family communication is considered important in the transgenerational transmission of trauma. This study had three aims: first, to identify the contents of family communication about past national trauma; second, to examine how parents' current war trauma is associated with transgenerational communication; and third, to analyze the associations between transgenerational communication and children's mental health, measured as posttraumatic stress disorder (PTSD), depression and psychological distress. The study sample consisted of 170 Palestinian families in Gaza Strip, in which both mothers (n = 170) and fathers (n = 170) participated, each with their 11-13-year-old child. Mothers and fathers responded separately to three questions: 1) what did their own parents tell them about the War of 1948, Nakba?; 2) what did they tell their own children about the Nakba?; and 3) What did they tell their own children about the 1967 Arab-Israeli War and military occupation? Current war trauma, as reported separately by mothers, fathers and their children, refers to the Gaza War 2008/09. Children reported their symptoms of PTSD, depression, and psychological distress. Results revealed seven communication content categories and one category indicating maintaining silence about the traumas. Fathers' high exposure to current war trauma was associated with a higher level of communicating facts, reasons, and meanings regarding the1948 and 1967 wars, and mothers' high exposure to current war trauma was associated with a lower level of maintaining silence. Family communication about facts, reasons, and meanings was significantly associated with children not showing PTSD and marginally with not showing psychological distress, while maintaining silence was not associated with children's mental health. abstract_id: PUBMED:26754766 Adverse health consequences of the Vietnam War. 
The 40th anniversary of the end of the Vietnam War is a useful time to review the adverse health consequences of that war and to identify and address serious problems related to armed conflict, such as the protection of noncombatant civilians. More than 58,000 U.S. servicemembers died during the war and more than 150,000 were wounded. Many suffered from posttraumatic stress disorders and other mental disorders and from the long-term consequences of physical injuries. However, morbidity and mortality, although difficult to determine precisely, was substantially higher among the Vietnamese people, with at least two million of them dying during the course of the war. In addition, more than one million Vietnamese were forced to migrate during the war and its aftermath, including many "boat people" who died at sea during attempts to flee. Wars continue to kill and injure large numbers of noncombatant civilians and continue to damage the health-supporting infrastructure of society, expose civilians to toxic chemicals, forcibly displace many people, and divert resources away from services to benefit noncombatant civilians. Health professionals can play important roles in promoting the protection of noncombatant civilians during war and helping to prevent war and create a culture of peace. abstract_id: PUBMED:35010342 Transgenerational Transmission of Trauma across Three Generations of Alevi Kurds. Background: Thus far, most researchers on genocide and transgenerational transmissions have focused on the National Socialist Holocaust as the most abhorrent example of this severe human rights violation. Few data have been published on other ethnic or religious groups affected by genocidal actions in this context. Methodology: Using a mixed-method approach integrating qualitative interviews with standardized instruments (SCID and PDS), this study examines how individual and collective trauma have been handed down across three generations in an Alevi Kurd community whose members (have) suffered genocidal perpetrations over a longer time period (a "genocidal environment"). Qualitative, open-ended interviews with members of three generations answering questions yielded information on (a) how their lives are shaped by the genocidal experiences from the previous generation and related victim experiences, (b) how the genocidal events were communicated in family narratives, and (c) coping strategies used. The first generation is the generation which directly suffered the genocidal actions. The second generation consists of children of those parents who survived the genocidal actions. Together with their family (children, partner, relatives), this generation suffered forced displacement. Members of the third generation were born in the diaspora where they also grew up. Results: Participants reported traumatic memories, presented in examples in this publication. The most severe traumatic memories included the Dersim massacre in 1937-1938 in Turkey, with 70,000-80,000 victims killed, and the enforced resettlement in western Turkey. A content analysis revealed that the transgenerational transmission of trauma continued across three generations. SCID and PDS data indicated high rates of distress in all generations. Conclusions: Genocidal environments such as that of the Kurdish Alevis lead to transgenerational transmission mediated by complex factors. abstract_id: PUBMED:29230306 Cultural shift in mental illness: a comparison of stress responses in World War I and the Vietnam War. 
Objectives: Post-traumatic stress disorder is an established diagnostic category. In particular, over the past 20 years, there has been an interest in culture as a fundamental factor in post-traumatic stress disorder symptom manifestation. However, only a very limited portion of this literature studies the historical variability of post-traumatic stress within a particular culture. Design: Therefore, this study examines whether stress responses to violence associated with armed conflicts have been a culturally stable reaction in Western troops. Setting: We have compared historical records from World War I to those of the Vietnam War. Reference is also made to observations of combat trauma reactions in pre-World War I conflicts, World War II, the Korean War, the Falklands War, and the First Gulf War. Participants: The data set consisted of literature that was published during and after these armed conflicts. Main Outcome Measures: Accounts of World War I Shell Shock that describe symptom presentation, incidence (both acute and delayed), and prognosis were compared to the observations made of Vietnam War post-traumatic stress disorder victims. Results: Results suggest that the conditions observed in Vietnam veterans were not the same as those which were observed in World War I trauma victims. Conclusions: The paper argues that the concept of post-traumatic stress disorder cannot be stretched to cover the typical battle trauma reactions of World War I. It is suggested that relatively subtle changes in culture, over little more than a generation, have had a profound effect on how mental illness forms, manifests itself, and is effectively treated. We add new evidence to the argument that post-traumatic stress disorder in its current conceptualisation does not adequately account, not only for ethnocultural variation but also for historical variation in stress responses within the same culture. abstract_id: PUBMED:33075652 Family emotional climate in childhood and risk of PTSD in adult children of Australian Vietnam veterans. The mechanisms of intergenerational transmission of posttraumatic stress disorder (PTSD) from parent to child are not yet known. We hypothesised that the mechanisms involved in trauma transmission may be dependent upon sex specific caregiver-child dyads and these dyads may have a differential impact on post-traumatic stress disorder (PTSD). A non-clinical sample of adult offspring (N = 306) of Australian Vietnam veterans was interviewed in-person to assess the relationship between family emotional climate and caregiver attachment with the offspring's adult experience of post-traumatic stress disorder (PTSD). Attachment to the veteran father was not associated with sons' PTSD, but was for daughters. Attachment to mother was associated with PTSD and depression for both sons and daughters, with positive and warm attachment related to reduced PTSD diagnosis and its symptom clusters. A less positive family emotional environment was related to increased PTSD symptoms in daughters, while for sons a negative relationship style with their mother was related to increased frequency and severity of numbing/avoidance behaviours and hyperarousal symptoms. The findings suggest that sex-related differences in caregiver-child dyads do have a differential impact on PTSD symptom domains and may be one environmental mechanism by which trauma is transmitted across generations. abstract_id: PUBMED:11931602 Children's trauma and adjustment reactions to violent and nonviolent war experiences. 
Objective: According to most studies, more than half of children exposed to war meet criteria for posttraumatic stress disorder (PTSD). Although we know that children are adversely affected by atrocities of war, we do not know which specific war events are associated with children's stress reactions. For example, it is unclear whether differences exist in response to violent versus nonviolent war-trauma experiences. This study examined the relationship of violent and nonviolent war experiences to children's trauma reactions and adjustment in a group of children from Bosnia. Method: During the 1994 siege in Sarajevo, 791 children aged 6 to 16 years participated in a study of trauma experience and response, in which assessment questionnaires (Impact of Event Scale, PTSD Reaction Index, Children's Depression Inventory, Child Behavior Checklist, and War Experience Questionnaire) were completed by children and their teachers. Results: In this sample 41% had clinically significant PTSD symptoms. Children were adversely affected by exposure to both violent and nonviolent war-traumas. An additive effect of trauma exposure on trauma reactions was also found. However, many war experiences were not associated with children's adjustment and trauma reactions. Conclusions: Additive effects of violence and deprivations during war may overwhelm the coping skills of children and leave them vulnerable to externalizing and internalizing adjustment difficulties and symptoms of PTSD. abstract_id: PUBMED:32446554 Study on parent-to-child transmission related to Sri Lanka's civil war The dynamics of transmission between Tamil parents who fled the civil war in Sri Lanka and their children born in France were studied. The possible trauma that results from the transmission can affect the following generations in a more or less conscious way. The qualitative methodology was based on T-MADE, a method of transcultural analysis of the children's drawings, linked to the analysis of the parents' narrative. Answer: The study on the adjustment of children of Australian Vietnam veterans did not find significant differences in self-esteem and PTSD symptomatology scores between offspring groups, regardless of their fathers' PTSD status (PUBMED:11437808). This suggests that there is no clear evidence for the transgenerational transmission of the effects of war-related trauma in terms of PTSD symptoms and self-esteem in this population. However, the study did find that unhealthy family functioning was more prevalent in families where the veteran had PTSD, particularly in terms of emotional responsiveness and problem-solving within the family unit (PUBMED:11437808). This indicates that while direct symptoms of PTSD may not be transmitted across generations, the family environment and functioning can be affected by a parent's PTSD, which could indirectly influence the adjustment of children. The study also suggests that methodological refinements are needed and that the role of wives/mothers in buffering the impact of veterans' PTSD on their children should be further investigated (PUBMED:11437808). Additionally, the study indicates that there is a need for more support for families of veterans with PTSD (PUBMED:11437808). In summary, while direct transgenerational transmission of PTSD symptoms and self-esteem issues was not evident in the children of Australian Vietnam veterans, the broader family dynamics and functioning may be impacted by the veteran's PTSD, which could influence the children's adjustment in indirect ways.
Instruction: Do communication training programs improve students' communication skills? Abstracts: abstract_id: PUBMED:37541131 Communication partner training for SLT students: Changes in communication skills, knowledge and confidence. This paper describes the changes in communication skills, knowledge and confidence in Speech Language Therapy (SLT) students in conversations with People With Aphasia (PWA) after Training Con-tAct, a Dutch Communication Partner Training. Methods: On a voluntary basis, nine SLT students (2nd yr) completed Training Con-tAct, in which People With Aphasia (PWA) were involved as co-workers. A mixed method design with pre- and post-measures was used to analyze the students' communication skills, knowledge and confidence. A quantitative video analysis was used to measure changes in students' communication skills. Besides, a self-report questionnaire was used to measure the changes in students' knowledge and confidence regarding their communication with PWA. To evaluate the perspectives of the students on Training Con-tAct, additionally a focus group interview was held. Results: Regarding students' communication skills the outcomes revealed a significantly higher score on the 'supporting' competence in students who took part in Training Con-tAct. The mean scores for the 'acknowledging' and 'checking information' competences did not improve significantly. The outcomes of the questionnaire showed students gained more knowledge and confidence regarding communication with PWA. The focus group interview provided insights into: motivation for participating in Communication Partner Training, content and structure of the training, feedback in CPT, and learning experiences. Conclusion: The present study suggests that SLT students may benefit from Training Con-tAct as the training leads to better skills, more knowledge about aphasia and more confidence in communicating with PWA. Training Con-tAct could be a valuable addition to the curricula of all healthcare disciplines, and eventually support interprofessional collaboration, resulting in improved access to health care, which is important for communication vulnerable people. Further research with a larger sample size and a control group is required. abstract_id: PUBMED:38172793 Interprofessional communication skills training to improve medical students' and nursing trainees' error communication - quasi-experimental pilot study. Background: Interprofessional communication is of extraordinary importance for patient safety. To improve interprofessional communication, joint training of the different healthcare professions is required in order to achieve the goal of effective teamwork and interprofessional care. The aim of this pilot study was to develop and evaluate a joint training concept for nursing trainees and medical students in Germany to improve medication error communication. Methods: We used a mixed-methods, quasi-experimental study with a pre-post design and two study arms. This study compares medical students (3rd year) and nursing trainees (2nd year) who received an interprofessional communication skills training with simulation persons (intervention group, IG) with a control group (CG). Both cohorts completed identical pre- and post-training surveys using the German Interprofessional Attitudes Scale (G-IPAS) and a self-developed interprofessional error communication scale. Descriptive statistics, Mann-Whitney-U-test and Wilcoxon-test were performed to explore changes in interprofessional error communication. 
Results: A total of 221 participants were included: 154 medical students and 67 nursing trainees (IG: 66 medical students, 28 nursing trainees / CG: 88 medical students, 39 nursing trainees). After training, there were significant improvements observed in the "interprofessional error communication" scale (p < .001) and the "teamwork, roles, and responsibilities" subscale (p = .012). Median scores of the subscale "patient-centeredness" were similar in both groups and remained unchanged after training (median = 4.0 in IG and CG). Conclusions: Future studies are needed to find out whether the training sustainably improves interprofessional teamwork regarding error communication in acute care. abstract_id: PUBMED:29793706 Evaluating communication skills after long-term practical training among Japanese pharmacy students. Introduction: The goal of this study was to assess pharmacy students' satisfaction with long-term practical training programs at hospital and community pharmacies and how these programs benefitted communication skills. Methods: We asked 83 fifth-year pharmacy students to answer anonymous questionnaires assessing their satisfaction and perceived benefits of practical training and to complete Teramachi's Pharmacist Communication Skill Scale (TePSS-31), a measure of pharmacists' communication skills, after undergoing their practical training periods at hospital and community pharmacies in 2014. Results: Over 90% of students who underwent the practical training were satisfied with their experiences. Furthermore, they reported that the practical training institution was helpful for improving their communication skills and gave them sufficient opportunity to interact with consulting patients, engage in role play with pharmacists or peers, and observe interactions between pharmacists and patients. Overall, over 80% of students felt that they had shown improvement in communication skills, indicating that the training was effective. We further reconfirmed that the TePSS-31 has good internal consistency. The total scores on the TePSS-31 after the hospital and community pharmacy training programs did not significantly differ, indicating that the place where the training was received did not influence students' acquisition of communication skills. Conclusions: Most students were satisfied with the long-term practical training at hospital and community pharmacies, and the training helped improve their communication skills for dealing with patients and coworkers. abstract_id: PUBMED:25148880 Final-year veterinary students' perceptions of their communication competencies and a communication skills training program delivered in a primary care setting and based on Kolb's Experiential Learning Theory. Veterinary graduates require effective communication skills training to successfully transition from university into practice. Although the literature has supported the need for veterinary student communication skills training programs, there is minimal research using learning theory to design programs and explore students' perceptions of such programs. This study investigated veterinary students' perceptions of (1) their communication skills and (2) the usefulness of a communication skills training program designed with Kolb's Experiential Learning Theory (ELT) as a framework and implemented in a primary care setting. Twenty-nine final-year veterinary students from the Ontario Veterinary College attended a 3-week communication skills training rotation.
Pre- and post-training surveys explored their communication objectives, confidence in their communication skills, and the usefulness of specific communication training strategies. The results indicated that both before and after training, students were most confident in building rapport, displaying empathy, recognizing how bonded a client is with his or her pet, and listening. They were least confident in managing clients who were angry or not happy with the charges and who monopolized the appointment. Emotionally laden topics, such as breaking bad news and managing euthanasia discussions, were also identified as challenging and in need of improvement. Interactive small-group discussions and review of video-recorded authentic client appointments were most valuable for their learning and informed students' self-awareness of their non-verbal communication. These findings support the use of Kolb's ELT as a theoretical framework and of video review and reflection to guide veterinary students' learning of communication skills in a primary care setting. abstract_id: PUBMED:29606708 Nordic Pharmacy Students' Opinions of their Patient Communication Skills Training. Objective. To describe Nordic pharmacy students' opinions of their patient communication skills training (PCST), and the association between course leaders' reports of PCST qualities and students' perceptions of their training. Secondary objective was to determine what factors influence these associations. Methods. A cross-sectional questionnaire-based study was performed. The various curricula were categorized into three types (basic, intermediate and innovative training) and students were divided into three groups according to the type of training they had received. Multivariable logistic regression models were fitted with different opinions as outcomes and three types of training as exposure, using generalized estimation equations. Results. There were 370 students who responded (response rate: 77%). Students within the innovative group were significantly more likely to agree that they had received sufficient training, and to agree with the assertion that the pharmacy school had contributed to their level of skills compared to students in the basic group. Conclusion. There appears to be an association between larger and varied programs of training in patient communication skills and positive attitudes toward this training on the part of the students, with students reporting that they received sufficient training, which likely enhanced their skills. abstract_id: PUBMED:35839589 Effect of online communication skills training on effective communication and self-efficacy and self-regulated learning skills of nursing students: A randomized controlled study. Aim: The aim of this study was to determine the effect of online communication skills training conducted for first-year nursing students on effective communication and self-efficacy and self-regulated learning skills. Background: Communication skills are an important part of nursing care. Methods: This research was designed as a pre-test-post-test randomized controlled experimental study. The study population comprised first-year undergraduate nursing students of a state university in Turkey. A total of 60 students included in the study were divided into the two following groups: experimental (n = 30) and control (n = 30) groups. The research data were collected between 1 December 2020 and 1 March 2021. Pre-test and post-test forms were simultaneously provided to the groups. 
Post-tests were repeated 1 month after the pre-test was completed. A 2-day (a total of 12 h) communication skills training was conducted online for the students in the experimental group after the pre-test forms were filled. An information form, the Effective Communication Skills Scale (ECSS), the General Self-efficacy Scale (GSE) and the Self-regulated Learning Skills Scale (SRLSS) were used to collect the data. Results: The effective communication and SRLSS mean scores of the nursing students were high and the GSE scores were below average. On comparing the groups, the post-test mean scores of the communication skills and GSE were found to decrease in both the groups compared with the pre-test ones. This decrease was significant only in the "ego-enhancing language" subdimension of the ECSS (p < 0.05). The post-test mean scores of the SRLSS increased in both the groups, but this increase was not significant (p > 0.05). Conclusion: Although the SRLSS scores of the students increased in the post-test, the study results show that communication skills training did not have a significant effect on effective communication and self-efficacy and self-regulated learning skills. The results of this study are important in terms of guiding research and training programs that examine the effects of communication skills. abstract_id: PUBMED:28507364 Evaluation of a communication skills training course for medical students using peer role-play. Objective: To evaluate the effect of using peer role-playing in learning the communication skills as a step in the development of the communication skills training course delivered to pre-clinical medical students. Methods: This study was conducted at the King Abdulaziz University, Jeddah, Saudi Arabia, between September 2014 and February 2015 and comprised medical students. A mixed methods design was used to evaluate the developed communication skills training course. Tests were conducted before and after the communication skills training course to assess the students' self-reported communication. After the course, the students completed a satisfaction survey. Focus groups were conducted to assess the behavioural and organisational changes induced by the course. SPSS 16 was used for data analysis. Results: Of the 293 respondents, 246 (84%) were satisfied with the course. Overall, 169 (58%) subjects chose the lectures as the most helpful method for learning the communication skills while 124 (42%) considered practical sessions as the most helpful method. In addition, 237 (81%) respondents reported that the role-play was beneficial for their learning, while 219 (75%) perceived the video-taped role-play as an appropriate method for assessing the communication skills. Conclusions: Peer role-play was found to be a feasible and well-perceived alternative method in facilitating the acquisition of communication skills. abstract_id: PUBMED:32951927 Longitudinal study: Impact of communication skills training and a traineeship on medical students' attitudes toward communication skills. Objectives: To study longitudinally students' attitudes towards communication skills (CS) in order to examine whether CS training (CST) has an enduring impact on medical students' attitudes toward being a lifelong learner of CS. Methods: 105 students completed the Communication Skills Attitude Scale at four time points: before CST, after CST, and before and after a traineeship. Results: Our final sample size was 105 students. CST improved the attitudes of our students toward CS, and the traineeship stabilised those attitudes.
However, while the improvement in positive attitudes was sustained over time, negative attitudes increased 6 months after CST. Conclusion: CST using experiential methods in a safe environment has the potential to improve students' attitudes towards CS. A short traineeship in general medicine allows students to quickly integrate CST into clinical practice, without deteriorating their attitudes toward CS. However, 6 months of medical lessons without CST reinforces students' negative attitudes. Practice Implications: To avoid the deterioration of attitudes over time, CST should be continuous or at least spaced at intervals less than 6 months and supported by the institutional authorities. In addition, placing the CST close to an observation traineeship in general practice seems an interesting way to prevent further deterioration of attitudes. abstract_id: PUBMED:26767096 Investigating the key factors in designing a communication skills program for medical students: A qualitative study. Introduction: Medical students have a serious need to acquire communication skills with others. In many medical schools, special curriculums are developed to improve such skills. Effective training of communication skills requires expert curriculum design. The aim of this study was to explore the experiences and views of experts and stakeholders in order to design a suitable training program in communication skills for medical students. Methods: The content analysis approach was used in this qualitative study. Forty-three participants were selected from the faculty, nurses, physicians, residents, and medical students at Mashhad University of Medical Sciences using purposive sampling. The data were collected through focus group discussions and semi-structured interviews. To ensure the accuracy of the data, the criteria of credibility, transferability, dependability, and conformability were met. The data were analyzed by MAXQDA software using the Graneheim &amp; Lundman model. Results: The findings of this study consisted of two main themes, i.e., "The vast nature of the present communication skills training" and "administrative requirements of the training program regarding communication skills." The first theme included the educational needs of students, the problems associated with training people to have good communication skills, the importance of good communication skills in performing professional duties, communication skills and job requirements, the learning environment of communication skills, and the status of existing training programs for communication skills. Strategies and suitable methods for teaching communication skills and methods of evaluating the students in this regard also were obtained. Conclusion: The findings of this study were the elements required to design a proper and local model to teach communication skills to medical students through analyzing the concepts of effective communication. The results of this study can be useful for medical faculties in designing a proper program for teaching medical students how to communicate effectively with patients and colleagues. abstract_id: PUBMED:35857414 Virtual Geritalk: Does Intensive Virtual Communication Skills Training Improve Use of Serious Illness Communication Skills? Background: The COVID-19 pandemic presented a unique opportunity to adapt in-person communication skills training to a virtual format. 
Objective: Examine use of serious illness communication skills by learners after participating in an intensive virtual communication skills training. Design: Prospective cohort study. Setting/Subjects: Icahn School of Medicine at Mount Sinai, Clinical Fellows. Measurements: Family Meeting Communication Assessment Tool (FAMCAT) assessed fundamental and advanced communication skills. Results: As compared to a historical benchmark obtained after a prior in-person course, the virtual course showed equivalent or better use of communication skills across fundamental skills and advanced skills. Conclusions: Our findings suggest that a virtual communication skills training is associated with the use of serious illness communication skills in the clinical setting by learners. Answer: Yes, communication training programs have been shown to improve students' communication skills across various healthcare disciplines. Studies have demonstrated that such training leads to better communication skills, more knowledge, and increased confidence in students' ability to communicate effectively in their respective fields. For instance, Speech Language Therapy (SLT) students who completed Training Con-tAct, a Dutch Communication Partner Training involving People With Aphasia (PWA), showed significantly higher scores in the 'supporting' competence and gained more knowledge and confidence in communicating with PWA (PUBMED:37541131). Similarly, medical students and nursing trainees who received interprofessional communication skills training exhibited significant improvements in "interprofessional error communication" and "teamwork, roles, and responsibilities" (PUBMED:38172793). Japanese pharmacy students reported that long-term practical training at hospital and community pharmacies was helpful in improving their communication skills (PUBMED:29793706). Veterinary students also found a communication skills training program based on Kolb's Experiential Learning Theory to be useful, particularly interactive small-group discussions and video-recorded client appointments (PUBMED:25148880). Nordic pharmacy students who received more extensive and varied patient communication skills training reported that they had received sufficient training and contributed to their level of skills (PUBMED:29606708). However, an online communication skills training for nursing students did not show a significant effect on effective communication and self-efficacy, although self-regulated learning skills scores increased (PUBMED:35839589). Peer role-play was found to be a beneficial and well-perceived method for medical students to acquire communication skills (PUBMED:28507364). A longitudinal study revealed that communication skills training improved medical students' attitudes toward communication skills, and a traineeship stabilized those attitudes (PUBMED:32951927). Expert curriculum design is crucial for effective communication skills training, as indicated by a qualitative study exploring the experiences and views of experts and stakeholders (PUBMED:26767096). Lastly, a virtual communication skills training was associated with the use of serious illness communication skills in the clinical setting by learners (PUBMED:35857414). In summary, communication training programs are effective in enhancing communication skills among students in healthcare-related fields, although the impact may vary depending on the training method and the context in which it is delivered.
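Several of the studies summarized above rest on the same statistical pattern: pre-training scores are compared with post-training scores using paired tests (paired t-test, Wilcoxon signed-rank, Mann-Whitney U for between-group contrasts). The following is a minimal, hypothetical Python sketch of that pattern; the scores are invented for illustration and do not come from any cited study:

    # Illustrative sketch only: the scores below are invented, not taken from any cited study.
    # It shows the generic pre/post paired comparison used in several of the abstracts above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    pre = rng.normal(60, 10, size=30)        # hypothetical pre-training communication scores
    post = pre + rng.normal(5, 8, size=30)   # hypothetical post-training scores

    t_stat, t_p = stats.ttest_rel(post, pre)     # paired t-test
    w_stat, w_p = stats.wilcoxon(post, pre)      # non-parametric alternative

    print(f"paired t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
    print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.3f}")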
Instruction: Is guidewire exchange a better approach for subclavian vein re-catheterization for chronic hemodialysis patients? Abstracts: abstract_id: PUBMED:15332641 Subclavian vein flexible guidewire knotting. A potential serious complication in hemodialysis patients. Background: Complications after subclavian vein catheterization are well-documented in the literature. The purpose of this article is to present a case of a rare and potentially serious complication of flexible guidewire knotting during subclavian vein catheterization in a hemodialysis patient. Method: A 72-year-old woman on hemodialysis due to diabetes presented on the scheduled hemodialysis day with a thrombosed left upper extremity arteriovenous fistula (AV-F). Right subclavian vein catheterization for hemodialysis was decided upon and attempted, and ended with knotting of the flexible guidewire. Results: The flexible guidewire was splinted with the kit's dilator and, after great effort, the knot was undone under radioscopic control and the guidewire removed. Conclusions: Our case suggests that a rare and potentially serious complication of subclavian vein catheterization for hemodialysis can be successfully managed with an appropriate approach and skillful maneuvers. abstract_id: PUBMED:33586500 Minimal guidewire length for central venous catheterization of the right subclavian vein: A CT-based consecutive case series. Background: Central venous catheter (CVC) misplacement occurs frequently after right subclavian vein catheterization. It can be avoided by using ultrasound to confirm correct guidewire tip position in the lower superior vena cava prior to CVC insertion. However, retraction of the guidewire during the CVC insertion may dislocate the guidewire tip from its desired and confirmed position, thereby resulting in CVC misplacement. The aim of this study was to determine the minimal guidewire length required to maintain correct guidewire tip position in the lower superior vena cava throughout an ultrasound-guided CVC placement in the right subclavian vein. Methods: One hundred adult patients with a computed tomography scan of the chest were included. By using multiplanar reconstructions from thin-sliced images, the distance from the most plausible distal puncture site of the right subclavian vein to the optimal guidewire tip position in the lower superior vena cava was measured (vessel length). In addition, measurements of equipment in common commercial over-the-wire percutaneous 15-16 cm CVC kits were performed. The 95th percentile of the vessel length was used to calculate the required minimal guidewire length for each CVC kit. Results: The 95th percentile of the vessel length was 153 mm. When compared to the calculated minimal guidewire length, the guidewires were up to 108 mm too short in eight of eleven CVC kits. Conclusion: After confirmation of a correct guidewire position, retraction of the guidewire tip above the junction of the brachiocephalic veins should be avoided prior to CVC insertion in order to preclude dislocation of the catheter tip towards the right internal jugular vein or the left subclavian vein. This study shows that many commercial over-the-wire percutaneous 15-16 cm CVC kits contain guidewires that are too short for right subclavian vein catheterization, i.e., guidewire retraction is needed prior to CVC insertion.
abstract_id: PUBMED:8712623 Complex extravascular dislocation of a guidewire during catheterization of the subclavian vein A case is presented in which a guidewire, during subclavian catheterization in a 63-year-old patient, became knotted outside the vein, behind the clavicular head of the sternocleidomastoid muscle. The wire was easily removed by a surgical approach and the postoperative course was uneventful. The causes of this complication are discussed and a few main points are emphasized when the Seldinger technique is employed for subclavian vein catheterization. abstract_id: PUBMED:12685975 Applications and complications of subclavian vein catheterization for hemodialysis. Objective: To study the indications, complications and duration of 605 subclavian catheters inserted over a period of 4 years as venous access for the management of renal failure in a local setup. Design: Cross-sectional descriptive study. Place And Duration Of Study: Hemodialysis section, Department of Urology and Kidney Transplantation, Lahore General Hospital, Lahore. The study was conducted from October 1998 to July 2002. Subjects And Methods: All patients coming for dialysis during the period of October 1998 to July 2002 were included; information was noted on a specific form. Results: Among the patients who underwent subclavian vein catheterization, 75.2% of patients were suffering from chronic renal failure and 24.7% of patients were admitted for acute renal failure. Among chronic renal failure patients, 21.9% of catheters had to be replaced due to various complications, e.g., thrombosis, infection or kinking of the catheter. The subclavian catheters remained in place for a mean duration of 4 weeks. Early complications encountered were arterial puncture, inability to cannulate the innominate vein, hemothorax, puncture of the thoracic duct, hemomediastinum, arrhythmias and pulmonary hematoma in 10.7%, 16.5%, 0.5%, 0.2%, 0.6% and 0.2% of patients, respectively. Mortality attributed to the procedure occurred in 0.1% of cases. Delayed complications included early infection in 15% of catheterizations, while delayed infection occurred in 39% of cases. Conclusion: Percutaneous subclavian catheterization is a valuable, relatively easy to learn and safe method with an acceptable rate of complications for patients necessitating hemodialysis and no established permanent vascular access. abstract_id: PUBMED:1777904 Subclavian vein stenosis: complication of subclavian vein catheterization for hemodialysis. Subclavian vein catheterization is a relatively safe procedure. Few long-term complications have been reported. We recently diagnosed subclavian vein stenosis in a 14-year-old peritoneal dialysis patient. The stenosis occurred 2 years after the use of a subclavian vein catheter for temporary hemodialysis. Stenosis became clinically apparent by progressive painless swelling of the right arm and was documented by venography. abstract_id: PUBMED:10869922 Brachial plexus injury during subclavian vein catheterization for hemodialysis. Although the subclavian vein is often used for placement of double-lumen hemodialysis catheters, the risk factors for complications for patients with chronic renal failure are underestimated. We report a case of a patient with chronic renal failure in whom brachial plexus injury was caused by both a compressive hematoma and direct insertion of a needle resulting from a subclavian vein catheterization attempt for hemodialysis.
This case emphasizes the need for determining the coagulation status of the patient, especially one with chronic renal failure, before performing invasive procedures. abstract_id: PUBMED:2644609 Complications in hemodialysis performed by catheterization of the subclavian vein Hemodialysis was performed in 349 cases on 35 patients with 50 special subclavian catheters. The catheters were inserted infraclavicularly with Seldinger's technique. The cannulation period was 26.6 (1-148) days and on average 7 hemodialyses (1-63) were performed through 1 catheter. The aspects of subclavian catheterization, indication and complications are described. In 3 patients suffering from chronic uremia the end of the catheter (3-5 cm) in the subclavian vein was found broken after 1-5-6 weeks of "single-needle" dialysis. The broken end became fixed into the segmental artery of the lung and did not cause any complication during the long (6-14-33 months) observation period; thus its open or transluminal removal was not considered necessary. In the opinion of the authors, "single-needle" hemodialysis should be avoided to prevent similar complications. The use of the "two-needle" treatment or a catheter with a double lumen is advisable. abstract_id: PUBMED:3813748 Subclavian vein stenosis as a complication of subclavian catheterization for hemodialysis. Thirteen patients had placement of a subclavian vein catheter for temporary vascular access for hemodialysis. Peripheral venography was performed within two to six weeks of catheter placement. Forty-six percent (six of 13 patients) developed subclavian vein narrowing, which resolved in two patients. The duration of catheter placement had no impact on the incidence of this complication. Subclavian vein catheterization can frequently lead to subclavian vein stenosis, which often will resolve spontaneously. Consideration should be given to placement of subclavian lines on the contralateral side of a planned permanent vascular access. abstract_id: PUBMED:16214203 Is guidewire exchange a better approach for subclavian vein re-catheterization for chronic hemodialysis patients? Background: The objectives of this study were to compare outcomes and survival rates of subclavian vein re-catheterization through guidewire exchange (GWE) or de novo insertion (DN). Materials And Methods: The study was conducted in a retrospective manner. Medical records of 36 patients who received percutaneous subclavian vein re-catheterization for hemodialysis in our institution during the period from April 1, 2001 to September 30, 2004 were reviewed. All patients had at least 2 catheter insertion records in our institute. Incidences of adverse events (infection, thrombosis) were compared between the GWE and DN groups using the chi-square test. Predictors for adverse event occurrences were analyzed using logistic regression models. A Cox proportional hazards model was used to investigate the predictors for adverse event-free catheter days. Kaplan-Meier survival curves were computed and compared using the log-rank test. Results: Information was generated from 98 catheters (41 from the DN group, 57 from the GWE group). The average catheter usage was 2.8 ± 0.9 devices per patient and the mean catheter indwelling time was 125.4 ± 129.5 days in this cohort. We found that the GWE group had a significantly lower thrombosis rate (49.1% vs. 85.4% for the DN group, P < 0.000) in general. Surgical approach was a significant risk factor for catheter thrombosis (GWE vs. DN, odds ratio=0.261, P=0.05).
The actuarial survival rates for GWE were ≥30 days, 85.4%; ≥60 days, 75.5%; ≥90 days, 64.5%; ≥180 days, 44.3%. The actuarial survival rates for DN were ≥30 days, 70.7%; ≥60 days, 58.5%; ≥90 days, 34.2%; ≥180 days, 18.4%. GWE group catheters had significantly higher catheter survival rates (P=0.0009). Mahurkar catheter (hazard ratio 0.514, P=0.03), non-shock (hazard ratio 3.358, P=0.04), and older age (hazard ratio 0.958, P=0.026) were predictors of adverse event-free remaining catheter days. Conclusion: We suggest that GWE might be a favorable option over DN insertion when revised subclavian vein catheterization is inevitable. GWE can be performed repeatedly without compromising catheter outcomes. abstract_id: PUBMED:15143695 Contralateral hemothorax: a late complication of subclavian vein catheterization Central venous catheterization is extensively used in unstable patients who need hemodynamic monitoring and in patients who require prolonged treatment such as chemotherapy, antibiotic therapy, parenteral nutrition, or temporary hemodialysis. Subclavian vein catheterization is the preferred approach for hemodialysis, especially as it does not restrict the patient. Most of the complications related to this procedure are insignificant; however, occasionally they may be life-threatening and require surgical intervention. We present a case study of a 77-year-old woman suffering from chronic renal failure. A subclavian catheter was inserted, and the patient started hemodialysis. Three weeks later, during hemodialysis, she complained of right upper abdominal and right chest pain. Chest X-ray showed a moderate to large right pleural effusion, with pleurocentesis confirming the presence of hemothorax. Answer: Yes, guidewire exchange (GWE) appears to be a better approach for subclavian vein re-catheterization for chronic hemodialysis patients. A study comparing outcomes and survival rates of subclavian vein re-catheterization through GWE or de novo insertion (DN) found that the GWE group had a significantly lower thrombosis rate (49.1% vs. 85.4% for the DN group, P < 0.000) in general. Surgical approach was identified as a significant risk factor for catheter thrombosis, with GWE being less risky compared to DN (odds ratio=0.261, P=0.05). Additionally, the GWE group catheters had significantly higher catheter survival rates (P=0.0009), suggesting that GWE might be a favorable option over DN insertion when revised subclavian vein catheterization is inevitable. The study concluded that GWE can be performed repeatedly without compromising catheter outcomes (PUBMED:16214203).
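The comparison summarized above rests on standard survival-analysis machinery: Kaplan-Meier curves per insertion technique compared with a log-rank test, with a Cox model for adjusted predictors. The sketch below is purely illustrative and is not the study's code or data; the durations, censoring flags, and the choice of the lifelines package are all assumptions made for the example.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical catheter outcomes: days until an adverse event ended catheter use
# (event = 1) or the catheter was still event-free at last follow-up (event = 0).
gwe_days = np.array([35, 60, 95, 120, 190, 40, 75, 210, 150, 66])
gwe_event = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
dn_days = np.array([20, 33, 45, 61, 70, 28, 90, 55, 100, 38])
dn_event = np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 1])

# Kaplan-Meier survival estimate for each insertion technique.
km_gwe = KaplanMeierFitter().fit(gwe_days, event_observed=gwe_event, label="GWE")
km_dn = KaplanMeierFitter().fit(dn_days, event_observed=dn_event, label="DN")
print("Estimated 90-day catheter survival:", km_gwe.predict(90), km_dn.predict(90))

# Log-rank test comparing the two survival curves.
result = logrank_test(gwe_days, dn_days,
                      event_observed_A=gwe_event, event_observed_B=dn_event)
print("Log-rank p-value:", result.p_value)
```

In the published analysis, a Cox proportional hazards model with covariates such as catheter type, shock status, and age served the same purpose for adjusted comparisons of adverse event-free catheter days.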
Instruction: Can the smallest depth of ascitic fluid on sonograms predict the amount of drainable fluid? Abstracts: abstract_id: PUBMED:19618437 Can the smallest depth of ascitic fluid on sonograms predict the amount of drainable fluid? Purpose: To investigate the correlation between the 'smallest fluid depth' (SFD) measured on sonography (US) at the 'paracentesis pocket' and the amount of fluid drained in patients referred for US-guided large-volume paracentesis. Methods: US examinations performed to guide 60 paracenteses in 29 patients with large-volume ascites were reviewed and the SFD measured at the site of the paracentesis. The SFD was measured from the most superficial bowel loop to the abdominal wall. The SFD measurements were compared with the drained fluid volume (DFV) measurements. Results: The average DFV per paracentesis was 5.2 L with an average SFD measurement of 5.4 cm. For every 1-cm increase in the measured SFD, there was an average 1-L increase in the DFV. After applying this relationship to the measured depth in each case, the comparison between the estimated fluid volume (EFV) on US and the DFV demonstrated a <1-L difference in 38 of 60 paracenteses (63.3%) and a <2-L difference in 51 of 60 paracenteses (85%). Conclusion: The SFD measured at the site of paracentesis shows a correlation with the drained fluid volume and can be used for fluid volume estimation on US. abstract_id: PUBMED:21243350 Interloop fluid in intussusception: what is its significance? Background: Sonography has been used to predict pneumatic reduction outcome in children with intussusception. Objective: To assess the prognostic significance of fluid between the intussusceptum and intussuscipiens with respect to reduction outcome, lead point or necrosis. Materials And Methods: Sonograms of children with a discharge diagnosis of intussusception from four institutions were reviewed for interloop fluid and correlated with results of pneumatic reduction and surgical/pathological findings when available. Maximal dimension of interloop fluid on a transverse image and fluid complexity were evaluated. Results: Of 166 cases, 36 (21.7%) had interloop fluid. Pneumatic reduction was successful in 21 (58.3%) with fluid and 113 (87.6%) without. The average largest fluid dimension was 8.7 mm (range 5 mm-19 mm, median 8 mm) in cases with successful reduction and 12.8 mm (range 4 mm-26 mm, median 12.5 mm) in unsuccessful reduction (p < 0.05). Fluid dimension equal to or greater than 9 mm correlated with failed reduction (p < 0.0001; odds ratio 13:1). In 36 cases with interloop fluid that required surgery, there were four lead points and three cases of necrosis. In cases without fluid that underwent surgical reduction, there was one lead point and one case of necrosis. Interloop fluid correlated with lead point (p < 0.04) or necrosis (p < 0.03). Its significance increased with larger amounts of fluid (p < 0.0001). Patient age/fluid complexity did not correlate with reduction outcome (p = 0.9). Conclusion: Interloop fluid was associated with increased failure of pneumatic reduction and increased likelihood of lead point or necrosis, particularly when the maximum dimension exceeded 9 mm. abstract_id: PUBMED:25437594 Predicting the amount of intraperitoneal fluid accumulation by computed tomography and its clinical use in patients with perforated peptic ulcer. The correlation between the amount of peritoneal fluid and clinical parameters in patients with perforated peptic ulcer (PPU) has not been investigated.
The authors' objective was to derive a reliable formula for determining the amount of peritoneal fluid in patients with PPU before surgery, and to evaluate the correlation between the estimated amount of peritoneal fluid and clinical parameters. We investigated 62 consecutive patients who underwent emergency surgery for PPU, and in whom prediction of the amount of accumulated intraperitoneal fluid was possible by computed tomography (CT) using the methods described by Oriuchi et al. We examined the relationship between the predicted amount of accumulated intraperitoneal fluid and that measured during surgery, and the relationship between the amount of fluid predicted preoperatively or measured during surgery and several clinical parameters. There was a significant positive correlation between the amount of fluid predicted by CT scan and that measured during surgery. When patients with gastric ulcer and duodenal ulcer were analyzed collectively, the predicted amount of intraperitoneal fluid and the amount measured during surgery were each associated with the period from onset until CT scan, perforation size, the Mannheim peritoneal index, and the severity of postoperative complications according to the Clavien-Dindo classification. Our present results suggest that the method of Oriuchi et al is useful for predicting the amount of accumulated intraperitoneal fluid in patients with PPU, and that this would be potentially helpful for treatment decision-making and estimating the severity of postoperative complications. abstract_id: PUBMED:33175457 The expected frequency and amount of free peritoneal fluid estimated using the abdominal FAST-applied abdominal fluid scores in healthy adult and juvenile dogs. Objective: To estimate the frequency and amount of free peritoneal fluid in juvenile and adult dogs using the abdominal focused assessment with sonography for trauma (AFAST) abdominal fluid scoring system. Design: Prospective case series. Animals: Healthy, privately owned juvenile and adult dogs. Procedures: Dogs undergoing routine surgical sterilization were evaluated at induction with AFAST and assigned measurements and fluid scores. A surgeon scored the degree of peritoneal fluid found during ovariohysterectomy. Results: Ninety-two dogs were enrolled (46 juveniles and 46 adults). Ninety-three percent and 52% were AFAST positive for peritoneal fluid, respectively. The AFAST-positive view frequency for right lateral recumbency in juveniles was diaphragmatico-hepatic (DH) 100%, spleno-renal (SR) 20%, cysto-colic (CC) 40%, and hepato-renal (HR) 20% versus adults, DH 60%, SR 20%, CC 0%, and HR 0%, respectively. The AFAST-positive view frequency for left lateral recumbency was DH 93%, SR 44%, CC 24%, and HR 12% in juveniles, and DH 50%, SR 3%, CC 3%, and HR 10% in adults. Overall abdominal fluid scores (AFS) in juveniles were 0 (n = 3), 1 (n = 14), 2 (n = 22), 3 (n = 6), and 4 (n = 1); and in adults, scores were 0 (n = 22), 1 (n = 18), 2 (n = 6), and 3 and 4 (n = 0). The AFS differed between adults and juveniles (P < 0.001). Most dogs had maximum fluid dimensions ≤3 × 3 mm and width of fluid stripes ≤3 mm. The AFS was positively correlated to fluid amount observed during ovariohysterectomy with fair agreement (kappa = 0.233, P = 0.012). Conclusions And Clinical Relevance: This study establishes the frequency and amount of free peritoneal fluid in healthy juvenile and adult dogs during AFAST.
Maximum fluid pocket dimensions of ≤3 × 3 mm and fluid stripe widths of ≤3 mm in dogs with AFS 1 and 2 may be normal. The DH view was most frequently positive. abstract_id: PUBMED:6611052 Sonographic detection of subtle pelvic fluid collections. The sonographic demonstration of small quantities of free intraperitoneal fluid often indicates significant pelvic pathology. In a review of pelvic fluid collections in 146 female patients, however, it became apparent that an overly distended urinary bladder may mask small quantities of free intraperitoneal fluid. The "mass effect" of a distended bladder may cause fluid in the pouch of Douglas to migrate to other parts of the peritoneal cavity, such as the peritoneal reflection over the fundus of the uterus. Fluid in this location produces a characteristic triangular "cap" and was present in 42 patients (29% of the study group). In 10 patients (6.9%) this was the only visible fluid collection. In addition, sonograms obtained after partial voiding demonstrated small quantities of free pelvic fluid in 14 patients (9.6% of the study group) that were not detected on routine full bladder scans. The sonographic appearance of small amounts of intraperitoneal fluid seen over the uterine fundus and the value of post-void scans are stressed in the demonstration of small quantities of intraperitoneal fluid. abstract_id: PUBMED:11781902 Isolated fluid in the cul-de-sac: how well does it predict ectopic pregnancy? We examined the risk of ectopic pregnancy among patients with isolated abnormal cul-de-sac fluid at transvaginal ultrasound. We conducted a retrospective cohort study of all ED patients presenting January 1995 to August 1999 with abdominal pain or vaginal bleeding and a positive beta-hCG test. The risk of ectopic pregnancy in patients with a moderate volume of anechoic fluid was compared with those with either a large volume of anechoic fluid or any echogenic fluid. Ectopic pregnancy was diagnosed in 16/38: 42% (95% CI 26%-59%) of patients with isolated cul-de-sac fluid, 5/23: 22% (95% CI 7%-42%) of patients with moderate amount of anechoic fluid, and 11/15: 73% (95% CI 45%-92%) of patients with a large volume of fluid or any echogenic fluid. These differences were significant (P = .005). Patients with isolated abnormal cul-de-sac fluid are at moderate risk for ectopic pregnancy. The risk increases if the fluid is echogenic or the volume is large. abstract_id: PUBMED:7760465 Sonographic detection of free pelvic peritoneal fluid. Transvaginal ultrasonography was performed on 113 women prior to laparoscopic sterilization. The amount and character of the peritoneal fluid present in the pelvis was assessed at the end of the operative procedure. Sonographically, free pelvic peritoneal fluid was seen in 42.5% of the patients. Laparoscopically, the average amount of fluid present was 11.2 ml with an average of 16.5 ml present in patients with FPPF and 7.2 ml present in patients without FPPF (P < 0.0001). Sonographic measurement of fluid volume was found to significantly underestimate the amount of fluid present at laparoscopy (P < 0.0001). Endometriosis and pelvic adhesions significantly changed the sonographic findings. abstract_id: PUBMED:23980214 Clinical outcomes of pediatric patients with acute abdominal pain and incidental findings of free intraperitoneal fluid on diagnostic imaging.
Objectives: The presence of free intraperitoneal fluid on diagnostic imaging (sonography or computed tomography [CT]) may indicate an acute inflammatory process in children with abdominal pain in a nontraumatic setting. Although clinical outcomes of pediatric trauma patients with free fluid on diagnostic examinations without evidence of solid-organ injury have been studied, similar studies in the absence of trauma are rare. Our objective was to study clinical outcomes of children with acute abdominal pain of nontraumatic etiology and free intraperitoneal fluid on diagnostic imaging (abdominal/pelvic sonography, CT, or both). Methods: We conducted a retrospective review of medical records of children aged 0 to 18 years presenting to a pediatric emergency department with acute abdominal pain (nontraumatic) between April 2008 and March 2009. Patients with intraperitoneal free fluid on imaging were divided into 2 groups: group I, imaging suggestive of an intra-abdominal surgical condition such as appendicitis; and group II, no evidence of an acute surgical condition on imaging, including patients with equivocal studies. Computed tomograms and sonograms were reviewed by a board-certified radiologist, and the free fluid volume was quantitated. Results: Of 1613 patients who underwent diagnostic imaging, 407 were eligible for the study; 134 (33%) had free fluid detected on diagnostic imaging. In patients with both sonography and CT, there was a significant correlation in the free fluid volume (r = 0.79; P < .0005). A significantly greater number of male patients with free fluid had a surgical condition identified on imaging (57.4% versus 25%; P < .001). Children with free fluid and an associated condition on imaging were more likely to have surgery (94.4% versus 6.3%; P < .001). Conclusions: We found clinical outcomes (surgical versus nonsurgical) to be most correlated with a surgical diagnosis on diagnostic imaging and not with the amount of fluid present. abstract_id: PUBMED:3692090 Can the protein concentration of the ascitic fluid in ascites predict the occurrence of an infection? In cirrhotic patients, spontaneous bacterial peritonitis is frequent and severe. This study was performed to determine if low protein concentration in ascitic fluid on admission could predict the occurrence of spontaneous bacterial peritonitis during hospitalization. Ninety-two cirrhotic patients with ascites, without spontaneous bacterial peritonitis, were studied. Bacteriologic study and cultures of ascitic fluid were performed on admission and repeated every 5 days, and if any suspicion of infection occurred; 11 patients developed spontaneous bacterial peritonitis during hospitalization. Among the 92 patients in the study, protein concentration in ascitic fluid was initially less than 10 g/l in 45, and 10 of these 45 patients (22%) developed spontaneous bacterial peritonitis during hospitalization; protein concentration in ascitic fluid was initially greater than 10 g/l in 47 patients; only one of these 47 patients (2.1%) developed spontaneous bacterial peritonitis during hospitalization. This difference (22% vs 2.1%) was significant (p < 0.01). Ascitic fluid protein concentration (6.9 ± 2.3 g/l) was significantly lower (p < 0.01) in the spontaneous bacterial peritonitis group than in patients without peritonitis (13.8 ± 10.5 g/l).
These results suggest that: 1) ascitic fluid protein concentration on admission is lower in patients who will develop spontaneous bacterial peritonitis during hospitalization than in patients without infection and 2) patients with ascitic fluid protein concentration under 10 g/l on admission represent a high-risk group for spontaneous bacterial peritonitis. abstract_id: PUBMED:9049011 Preincubation in peritoneal fluid decreases the follicular fluid-induced acrosomal reactivity of human spermatozoa. The aim of this study was to determine the effects of preincubation in peritoneal fluid on the follicular fluid-induced acrosomal reactivity of human spermatozoa in vitro. Thirty women participating in our IVF-ET program were given a GnRH-analogue, highly purified FSH and hCG in order to induce superovulation. Peritoneal and follicular fluids were aspirated during pick-up laparoscopy, centrifuged, filtered and frozen until use. An aliquot of swim-up suspension from normospermic semen specimens (n = 30) was incubated with peritoneal fluid or HAM-F10 for 30-180 min, and follicular fluid (in volumetric proportion approximately 50/50 with peritoneal fluid) was subsequently added. The percentage of acrosomally-reacted spermatozoa was assessed using the FITC-conjugated Pisum sativum lectin before and after incubation in peritoneal fluid or control medium, as well as after follicular fluid addition. Peritoneal fluid was not able to stimulate acrosomal reactivity; further, preincubation in peritoneal fluid decreased, but did not abolish, the follicular fluid-induced acrosomal reactivity. A longer pre-incubation in peritoneal fluid was associated with a lower percentage of reacted spermatozoa in response to the addition of follicular fluid. In conclusion, our data suggest that peritoneal fluid acts by maintaining spermatozoa in an unreacted status in the upper female genital tract. After mixing with follicular fluid, a phenomenon that is likely to occur at ovulation, peritoneal fluid reduces, but does not abolish, the stimulating effect of follicular fluid on acrosomal reactivity. Answer: Yes, the smallest depth of ascitic fluid on sonograms can predict the amount of drainable fluid. A study investigated the correlation between the 'smallest fluid depth' (SFD) measured on sonography at the 'paracentesis pocket' and the amount of fluid drained in patients referred for ultrasound-guided large-volume paracentesis. The results showed that for every 1-cm increase in the measured SFD, there was an average 1-L increase in the drained fluid volume (DFV). The comparison between the estimated fluid volume (EFV) on ultrasound and the DFV demonstrated a less than 1-L difference in 63.3% of paracenteses and a less than 2-L difference in 85% of paracenteses. Therefore, the SFD measured at the site of paracentesis shows a correlation with the drained fluid volume and can be used for fluid volume estimation on ultrasound (PUBMED:19618437).
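The estimation rule reported above is simple enough to state as arithmetic. The function below is a minimal illustrative sketch only: it anchors a 1 L-per-1 cm linear rule at the reported sample averages (SFD 5.4 cm, drained volume 5.2 L). The study reports the relationship only at that level of detail, so this exact calibration is an assumption, not the authors' formula.

```python
def estimated_fluid_volume_litres(sfd_cm: float) -> float:
    """Rough estimate of drainable ascitic fluid from the smallest fluid depth (SFD).

    Illustrative only: assumes ~1 L of drainable fluid per 1 cm of SFD,
    anchored at the reported averages (5.4 cm SFD, 5.2 L drained).
    """
    mean_sfd_cm, mean_volume_l = 5.4, 5.2
    return mean_volume_l + 1.0 * (sfd_cm - mean_sfd_cm)

for depth_cm in (3.0, 5.4, 8.0):
    print(f"SFD {depth_cm:.1f} cm -> estimated volume {estimated_fluid_volume_litres(depth_cm):.1f} L")
```

Under the study's own comparison, an estimate of this kind fell within 1 L of the actually drained volume in roughly two thirds of paracenteses, so it is a planning aid rather than a precise measurement.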
Instruction: Adolescents' contraceptive use and pregnancy history: is there a pattern? Abstracts: abstract_id: PUBMED:29550455 Developing strategies to address contraceptive needs of adolescents: exploring patterns of use among sexually active adolescents in 46 low- and middle-income countries. Objective: We explore the patterns of adolescents' need for contraception in 46 low- and middle-income countries. Methods: Using data from the Demographic and Health Surveys, we estimate the prevalence of never-use, ever-use and current contraceptive use of sexually active adolescent girls ages 15-19. We use weighted fixed-effects meta-analytic techniques to estimate summary measures. Finally, we highlight country profiles of adolescent contraceptive use. Results: More than half (54.4%) of girls who are sexually active or in unions report never using contraception, while 13.3% report having used contraception but not currently doing so. Nearly a third report currently using a contraceptive method: 24.6% are using a modern short-term method, 2.5% are using a most effective method, and 5.2% are using a traditional method. Conclusions: We find significant heterogeneity across countries as well as within countries based on adolescents' needs for spacing, limiting and method preference. With more than half of sexually active adolescents having never used contraception, the potential for unwanted pregnancies is high. Implications: While there is no single strategy to address adolescents' contraceptive needs, country programs may want to consider the heterogeneity of adolescents' risks for unintended pregnancy and tailor programs to align with the profile of adolescents in their settings. abstract_id: PUBMED:34412604 Individual and context correlates of the oral pill and condom use among Brazilian female adolescents. Background: Studies have examined the impact of contextual factors on the use of contraceptives among adolescents and found that many measures of income and social inequality are associated with contraceptive use. However, few have focused on maternal and primary health indicators and their influence on adolescent contraceptive use. This paper assesses whether maternal mortality rates, antenatal care visits, and primary healthcare coverage are associated with pill and condom use among female adolescents in Brazil. Methods: We used data from the Study of Cardiovascular Risks in Adolescents (ERICA), a national, school-based cross-sectional study conducted in Brazil. A subsample of all female adolescents who had ever had sexual intercourse and were living in one of the 26 State capitals and the Federal District was selected (n = 7415). Multilevel mixed effects logistic regression models were estimated to examine the effect of contextual variables on pill and condom use. Results: Sixty-five percent of female adolescents reported using the pill while 21.9% reported using a condom during the last sexual intercourse. Adolescents living in municipalities with low maternal mortality and high antenatal care coverage were significantly more likely to use the pill during the last sexual intercourse compared to those from municipalities with high maternal mortality and low antenatal care coverage. Primary healthcare coverage (proportion of the population covered by primary healthcare teams) was not significantly associated with either condom or pill use during the last sexual intercourse.
Conclusion: Our findings suggest that promoting the use of the pill among female adolescents may require approaches to strengthen healthcare systems rather than those focused solely on individual attributes. abstract_id: PUBMED:34663362 Factors influencing contraceptive decision making and use among young adolescents in urban Lilongwe, Malawi: a qualitative study. Background: The prevalence of teenage pregnancies in Malawi is 29%. About 25% of those are married while 30% are unmarried adolescents (15-19 years old) who use contraceptives. Data on contraceptive use have focused on older adolescents (15-19 years old), leaving out the young adolescents (10-14 years old). This study assessed factors that influence contraceptive decision-making and use among young adolescents aged 10-14 years. Methods: This was a qualitative study that used the Theory of Reasoned Action (TRA) model to understand the processes that influence contraceptive decision-making among young adolescents (10-14 years old) in urban Lilongwe. The study was conducted in six youth health-friendly service centers and 12 youth clubs. Two focus group discussions and 26 in-depth interviews were conducted among sexually active in-school and out-of-school young adolescents and key informants. The results are organized into themes identified during the analysis. Results: Results showed that contraceptive decision-making is influenced by social factors (individual, interpersonal, society) and adolescents' perceptions regarding hormonal contraceptives. There is also a disconnect between Education and Adolescent Sexual and Reproductive Health policies. Conclusion: The findings suggest that interventions that scale up contraceptive use need male and female involvement in decision making. Addressing myths around contraceptives, and harmonization of Education and Sexual and Reproductive Health policies in the country, would motivate adolescents to use contraceptives. abstract_id: PUBMED:37585001 Susceptibility of Nigerian adolescents to pregnancy and use of modern contraceptives. Nearly half of pregnancies amongst adolescent girls between ages 15 and 19 are unplanned; one outcome of this is unsafe abortion. Nigerian adolescents aged 15-19 have a higher proportion of unmet needs for contraception than those aged 20-24, raising pertinent questions on their perceived susceptibility to pregnancy. Using the Health Belief Model, this article examined the effect of perceived susceptibility to pregnancy on modern contraceptive use among adolescents in Nigeria. Weighted data for 983 sexually active unmarried adolescents aged 15-19 years were extracted from the 2018 Nigerian Demographic and Health Survey. Binomial logistic regression was modelled to test for this relationship. Results showed that there is no significant association between perceived susceptibility and modern contraceptive use. However, adolescents who make use of the internet (AOR=1.659, CI 1.046-2.630), those who had a sexual partner (AOR=4.051, CI 1.960-8.639), and those with more than one partner in the last 12 months (AOR=6.037, CI 2.292-15.902) were more likely to use modern contraceptives. Young adolescents in Nigeria need to be sensitized about reproductive health and the importance of contraceptive use. abstract_id: PUBMED:23545373 Sexual initiation, contraceptive use, and pregnancy among young adolescents.
Objective: To present new data on sexual initiation, contraceptive use, and pregnancy among US adolescents aged 10 to 19, and to compare the youngest adolescents' behaviors with those of older adolescents. Methods: Using nationally representative data from several rounds of the National Survey of Family Growth, we performed event history (ie, survival) analyses to examine timing of sexual initiation and contraceptive use. We calculated adolescent pregnancy rates by single year of age using data from the National Center for Health Statistics, the Guttmacher Institute, and the US Census Bureau. Results: Sexual activity is and has long been rare among those 12 and younger; most is nonconsensual. By contrast, most older teens (aged 17-19) are sexually active. Approximately 30% of those aged 15 to 16 have had sex. Pregnancy rates among the youngest teens are exceedingly low, for example, ∼1 per 10 000 girls aged 12. Contraceptive uptake among girls as young as 15 is similar to that of their older counterparts, whereas girls who start having sex at 14 or younger are less likely to have used a method at first sex and take longer to begin using contraception. Conclusions: Sexual activity and pregnancy are rare among the youngest adolescents, whose behavior represents a different public health concern than the broader issue of pregnancies to older teens. Health professionals can improve outcomes for teenagers by recognizing the higher likelihood of nonconsensual sex among younger teens and by teaching and making contraceptive methods available to teen patients before they become sexually active. abstract_id: PUBMED:30046195 Influence of Contraception Use on the Reproductive Health of Adolescents and Young Adults. Oral contraceptives (OCs) are often prescribed to adolescents and young adults for the treatment of health problems and to avoid unwanted pregnancies. We hypothesized that the use of OCs, among adolescents and young adults, is associated with a greater likelihood of pregnancy, abortion, sexually transmitted diseases (STDs), pelvic inflammatory disease (PID), and sexual behaviors that will enhance those problems (i.e., earlier sexual debut and more sexual partners) than adolescents and young adults not using OCs. To test this hypothesis, data from 1,365 adolescents and young adults in the 2011-2013 National Survey of Family Growth (NSFG) were used to describe the influence of ever use of OCs on ever having sex, sexual debut, multiple sexual partners, STDs, PID, pregnancy, and abortion. A secondary purpose was to evaluate protective factors from unhealthy sexual practices like religiosity, church attendance, and intact families. We found that the "ever use" of OCs by US adolescents and young adults results in a greater likelihood of ever having sex, STDs, PID, pregnancy, and abortion compared with those adolescents and young adults who never used OCs. Furthermore, those adolescents who ever used OCs had significantly more male sexual partners than those who never used OCs, and they also had an earlier sexual debut by almost two years. Conversely, we found that frequent church attendance, identification of the importance of religion, and having an intact family among adolescents were associated with less likelihood of unsafe sexual practices. We concluded that the use of OCs by adolescents and young adults might be considered a health risk. Further research is recommended to confirm these associations. 
Summary: The purpose of this article was to show the correlation between contraceptive use in adolescents and negative sexual outcomes. We used data from the 2011-2013 NSFG and demonstrated that never-married adolescents who used oral hormonal contraception were three times more likely to have an STD, have PID, and become pregnant, and, surprisingly, ten times more likely to have an abortion compared to noncontracepting adolescents. These are outcomes that contraception is intended to prevent. These data also showed that the contraceptors had significantly more male partners than their noncontracepting counterparts. Protective factors such as church attendance and family cohesiveness were associated with a decreased likelihood of sexual activity. abstract_id: PUBMED:34056060 Determinants of Long-acting Reversible Contraception (LARC) Initial and Continued Use among Adolescents in the United States. Long-acting reversible contraception (LARC) has gained attention as a promising strategy for preventing unintended adolescent pregnancies in the United States. However, LARC use among adolescents at risk for pregnancy remains low compared to women in their 20s. The purpose of the current study was to synthesize the empirical literature published between 2010 and 2018 identifying the facilitators of and barriers to adolescents' (< age 20) LARC use in the United States. Thirty quantitative and qualitative studies were included in the current systematic review. The facilitators of and barriers to adolescent LARC use fell within five themes: LARC method characteristics, individual characteristics, social networks, healthcare systems, and historical time and geographical region. Barriers to adolescent LARC use largely echoed those identified in previous research noting the barriers to LARC use among young adult women (e.g., provider concerns with placing IUDs for nulliparous women, common adverse side effects associated with some LARC methods). However, qualitative studies identified adolescents' mothers as central figures in helping adolescents successfully obtain the LARC methods they desired. Conversely, adolescents' partners seemed to only play a minor role in adolescents' contraceptive decisions. Findings within the reviewed studies also suggested some subpopulations of adolescents may be experiencing pressure to initiate LARC use or have less ability to have their LARC device removed if they wish to discontinue use. Adolescent health practitioners and clinicians should consider the unique social-environmental influences of adolescents' contraceptive access and behaviors to best meet adolescents' contraceptive needs and desires. abstract_id: PUBMED:30228031 Pediatric Provider Education and Use of Long-Acting Reversible Contraception in Adolescents. Introduction: Pediatric primary care providers prescribe the majority of contraception to adolescents, but they often lack training in long-acting reversible contraception (LARC). Our objective was to assess whether a provider education initiative was associated with a change in LARC use for adolescents. Method: Using electronic medical records, we examined LARC use for 7,331 women ages 15 to 21 years with an established primary care provider before and after a provider education initiative on LARC. We used an interrupted time series design to examine trends in LARC use related to the intervention.
Results: Before the intervention, 3.4% to 3.8% of adolescents were using a LARC method, and LARC use was declining by 4 devices/10,000 adolescents per month (95% confidence interval = [-5, -2] per 10,000 adolescents). After the intervention, LARC use stabilized. The number of adolescents using a LARC method increased nonsignificantly at 3, 6, 9, and 12 months after the intervention. Discussion: Education of pediatric primary care providers reversed a trend toward decreased use of long-acting reversible contraception. abstract_id: PUBMED:33510494 Use of Contraceptives among Adolescents: What Does Global Evidence Show and How Can Nepal Learn? Background: Adolescent pregnancy is a global health problem. Early pregnancies among adolescents have major health consequences for adolescent mothers and their babies. Contraceptives can prevent early pregnancy and its consequences. However, there is a low use of contraceptives among adolescents. Global evidence has shown which programmatic approaches are effective to increase the use of contraceptives among adolescents. Methods: This is not a systematic review. Desk review was done using Google Scholar and PubMed. Different policies, strategies, and reports published by agencies were also reviewed. Results: There is a low use of contraceptives and high unmet need for family planning and high adolescent fertility rate. Various studies conducted in different parts of the world have shown that there are some programmatic approaches implemented which are effective to improve the contraceptives use among adolescents. We have categorized the findings into three parts; i) delivery of services ii) increasing demand for services, and iii) creating an enabling environment. Conclusions: The use of contraceptives is low among adolescents in low- and middle-income countries including Nepal. So, the current programmatic approaches should be reviewed and the evidence-based practices implemented to bring better results. Ministry of Health and Population and partner agencies in Nepal also need to review the current programmatic approaches and implement them based on the evidence-based practices to improve contraceptives use among adolescents. abstract_id: PUBMED:36000420 Relationship Between Dating Violence and Contraceptive Use Among Texas Adolescents. The recent overturning of Roe v. Wade has the potential to adversely impact reproductive health among adolescents experiencing unplanned pregnancies from dating violence. We examined the associations between contraceptive use and dating violence among Texas high schoolers in the years leading up to this new law. Youth Risk Behavior Surveillance System data from Texas 9th to 12th graders from 2011, 2013, 2017, and 2019 were analyzed. Multinomial logistic regression analyses examined the association between contraceptive use and key descriptive predictors (physical and/or sexual dating violence, survey year, age, sex, and race/ethnic group). Eleven percent of Texas adolescents surveyed reported experiencing either physical or sexual dating violence and 2% reported experiencing both types of violence. Those who experienced any dating violence were significantly more likely to report not using contraception versus those who did not experience violence (12.5% vs. 68.3%, p = 0.01). Adolescents who experienced any type of dating violence were more likely to report using hormonal contraception, condom use, or withdrawal versus those who did not experience dating violence. 
Hispanic adolescents were 63% more likely than their ethnic group counterparts to use no contraception (odds ratio [OR] 1.63; 95% confidence interval [CI] [1.11-2.40]). A significant proportion of Texas adolescents reported experiencing dating violence, and this group also reported higher noncontraception use versus those not experiencing dating violence. Given new strict Texas antiabortion laws, dating violence prevention and contraceptive use promotion to prevent unwanted reproductive outcomes such as sexually transmitted infections or unplanned pregnancies are imperative in this age group. Answer: The available abstracts suggest that there is indeed a pattern in adolescents' contraceptive use and pregnancy history, although this pattern varies across different contexts and is influenced by a multitude of factors. In low- and middle-income countries, more than half of sexually active adolescent girls have never used contraception, and the potential for unwanted pregnancies is high (PUBMED:29550455). In Brazil, female adolescents living in municipalities with low maternal mortality and high antenatal care coverage were more likely to use the pill, suggesting that stronger healthcare systems may promote contraceptive use (PUBMED:34412604). In Malawi, contraceptive decision-making among young adolescents is influenced by social factors, perceptions regarding hormonal contraceptives, and a disconnect between education and adolescent sexual and reproductive health policies (PUBMED:34663362). In Nigeria, there is no significant association between perceived susceptibility to pregnancy and modern contraceptive use, but internet use and having a sexual partner were associated with higher contraceptive use (PUBMED:37585001). In the United States, sexual activity and pregnancy are rare among the youngest adolescents, and contraceptive uptake among girls as young as 15 is similar to that of older adolescents (PUBMED:23545373). However, the use of oral contraceptives among adolescents and young adults is associated with a greater likelihood of pregnancy, abortion, sexually transmitted diseases, and sexual behaviors that enhance those problems (PUBMED:30046195). Long-acting reversible contraception (LARC) use among adolescents in the United States is low compared to women in their 20s, with barriers including provider concerns and adverse side effects (PUBMED:34056060). Education of pediatric primary care providers was found to reverse a trend toward decreased use of LARC (PUBMED:30228031). In Nepal, the use of contraceptives is low among adolescents, and evidence-based programmatic approaches are needed to improve contraceptive use (PUBMED:33510494). In Texas, adolescents who experienced dating violence were more likely to report not using contraception, highlighting the need for dating violence prevention and contraceptive use promotion (PUBMED:36000420). Overall, the pattern indicates that contraceptive use among adolescents is influenced by individual, social, and systemic factors, and that there is a need for tailored interventions to address these factors and improve contraceptive use and reduce unwanted pregnancies among adolescents.
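Several of the studies summarized above report their findings as odds ratios with 95% confidence intervals (for example, the Texas analysis of dating violence and contraceptive use). The sketch below shows the standard Wald calculation behind such figures; the 2×2 counts are invented for illustration and do not come from any of the cited studies.

```python
import math

# Hypothetical 2x2 table: rows = exposure (e.g., experienced dating violence),
# columns = outcome (e.g., used no contraception at last intercourse).
a, b = 45, 120    # exposed: outcome yes / outcome no
c, d = 180, 900   # unexposed: outcome yes / outcome no

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```

The adjusted odds ratios quoted in the abstracts come from binomial or multinomial logistic regression rather than a single 2×2 table, but the interpretation of an OR and its confidence interval is the same.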
Instruction: Reduced quadriceps strength relative to body weight: a risk factor for knee osteoarthritis in women? Abstracts: abstract_id: PUBMED:9811049 Reduced quadriceps strength relative to body weight: a risk factor for knee osteoarthritis in women? Objective: To determine whether baseline lower extremity muscle weakness is a risk factor for incident radiographic osteoarthritis (OA) of the knee. Methods: This prospective study involved 342 elderly community-dwelling subjects (178 women, 164 men) from central Indiana, for whom baseline and follow-up (mean interval 31.3 months) knee radiographs were available. Lower extremity muscle strength was measured by isokinetic dynamometry and lean tissue (i.e., muscle) mass in the lower extremities by dual x-ray absorptiometry. Results: Knee OA was associated with an increase in body weight in women (P = 0.0014), but not in men. In both sexes, lower extremity muscle mass exhibited a strong positive correlation with body weight. In women, after adjustment for body weight, knee extensor strength was 18% lower at baseline among subjects who developed incident knee OA than among the controls (P = 0.053), whereas after adjustment for lower extremity muscle mass, knee extensor strength was 15% lower than in the controls (P not significant). In men, in contrast, adjusted knee extensor strength at baseline was comparable to that in the controls. Among the 13 women who developed incident OA, there was a strong, highly significant negative correlation between body weight and extensor strength (r = -0.740, P = 0.003), that is, the more obese the subject, the greater the reduction of quadriceps strength. In contrast, among the 14 men who developed incident OA, a modest positive correlation existed between weight and quadriceps strength (r = 0.455, P = 0.058). No correlation between knee flexor (hamstring) strength and knee OA was seen in either sex. Conclusion: Reduced quadriceps strength relative to body weight may be a risk factor for knee OA in women. abstract_id: PUBMED:35573645 Knee Osteoarthritis: Kinesiophobia and Isometric Strength of Quadriceps in Women. Introduction: Osteoarthritis is a disease characterized by progressive wear and tear of the joint, with the knee being the most affected region. These patients have reduced mobility, among other symptoms. Thus, it is necessary to know the variables that influence the ability to walk. Objective: To analyze how much gait capacity, assessed by performance on the six-minute walk test, can be influenced by the maximum isometric strength of the quadriceps or by kinesiophobia in women with knee osteoarthritis. Materials And Methods: This is a cross-sectional study with a sample of 49 women diagnosed with osteoarthritis. The evaluation was carried out at a single time point. Variables studied were isometric quadriceps strength, level of fear of movement (kinesiophobia), and ability to walk. Simple linear regression analyses were performed, with gait ability as the dependent variable and maximum isometric strength and kinesiophobia as independent variables. Data were presented as mean and standard deviation and were analyzed with SPSS Statistics 22.0 software, considering p < 0.05 as significant. Results: Maximum isometric strength showed a significant association, directly interfering with gait ability, whereas kinesiophobia did not show a statistically significant association and did not directly interfere with the ability to walk.
Conclusion: Maximal quadriceps isometric strength directly interferes with gait ability in women with knee osteoarthritis, thus suggesting the inclusion of this strategy in treatment programs for this population. abstract_id: PUBMED:21497317 Obesity and knee osteoarthritis are not associated with impaired quadriceps specific strength in adults. Objective: To assess whether adults, aged 50-59 years, who are obese or moderately to severely obese have impaired quadriceps strength and muscle quality in comparison with adults who are not obese, both groups with and without knee osteoarthritis (OA). Design: Cross-sectional observational study. Setting: Rural community-acquired sample. Subjects: Seventy-seven men and 84 women, aged 50-59 years. Methods: Comparisons by using mixed models for clustered data (2 lower limbs per participant) between groups defined by body mass index (BMI) (<30 kg/m², 30-35 kg/m², and ≥35 kg/m²), with and without knee OA. Main Outcome Measurement: The slope of the relationship between quadriceps muscle cross-sectional area (CSA) and isokinetic knee extensor strength (dynamometer) in each BMI and OA group. Results: There were 113 limbs (48.7% women), 101 limbs (38.6% women), and 89 limbs (73.0% women) in the <30 kg/m², 30-35 kg/m², and ≥35 kg/m² BMI groups, respectively; knee OA was present in 10.6%, 28.7%, and 58.4% of the limbs in each of these respective groups. Quadriceps CSA did not significantly differ among BMI groups in either gender or between subjects with and without knee OA. Peak quadriceps strength also did not significantly differ by BMI group or by the presence of knee OA. Multivariable analyses also demonstrated that peak quadriceps strength did not differ by BMI group, even after adjusting for (a) gender, (b) OA status, (c) intramuscular fat, or (d) quadriceps attenuation. The slopes for the relationships between quadriceps strength and CSA did not differ by BMI group, OA status, or their interaction. Conclusions: Individuals who were obese and at risk for knee OA did not appear to have altered muscle strength or muscle quality compared with adults who were not obese and were aged 50-59 years. The absence of a difference in the relationship between peak quadriceps strength and CSA provided further evidence that there was not an impairment in quadriceps muscle quality in this cohort, which suggests that factors other than strength might mediate the association between obesity and knee OA. abstract_id: PUBMED:34000629 Muscle strength gains after strengthening exercise explained by reductions in serum inflammation in women with knee osteoarthritis. Background: Individuals with knee osteoarthritis have elevated circulating inflammatory markers and altered cartilage properties, but it is unclear whether these features adapt to exercise. We aimed to determine (1) whether inflammatory markers, cartilage transverse relaxation time and thickness mediate the effect of body mass index (BMI) on quadriceps strength at baseline; and (2) whether these changes explain variance in quadriceps strength improvements after 12 weeks of exercise in women with knee osteoarthritis. Methods: This secondary analysis (17 women with clinical knee osteoarthritis) of a randomized controlled trial compared supervised group interventions, 3 times/week for 12 weeks (36 sessions): (a) weight-bearing progressive resistive quadriceps exercise or (b) attention control.
(1) From baseline, separate linear regressions were conducted with strength (Nm/kg) as the dependent variable, BMI as the predictor, and C-reactive protein, tumor necrosis factor, interleukin-6, cartilage transverse relaxation time or thickness as potential mediators. (2) Multiple linear regression analyses were completed with 12-week strength change (post-pre) as the dependent variable, change in serum inflammatory markers and cartilage measurements as predictors, and age, BMI and adherence as covariates. Findings: (1) At baseline, there was no mediation. (2) A decrease in each of interleukin-6 (β = -0.104 (95% confidence interval: -0.172, -0.036), R² = 0.51, P < 0.007) and tumor necrosis factor (β = -0.024 (-0.038, -0.009), R² = 0.54, P < 0.005) was associated with strength gains. Interpretation: At baseline, inflammatory markers and cartilage measurements do not act as mediators of the effect of BMI on quadriceps strength. After 12 weeks of exercise, reduced interleukin-6 and tumor necrosis factor were associated with increased quadriceps strength in women with knee osteoarthritis. abstract_id: PUBMED:31889244 Implications of evaluating leg muscle mass and fat mass separately for quadriceps strength in knee osteoarthritis: the SPSS-OK study. Objective: To examine the influence of obesity on quadriceps strength by separately analyzing body mass index (BMI) as fat mass and leg muscle mass in patients with knee osteoarthritis (KOA). Methods: The Screening for People Suffering Sarcopenia in Orthopedic cohort of Kobe (SPSS-OK) study was a single-center cross-sectional study that recruited 906 patients with KOA. Fat mass and leg muscle mass were measured by bio-impedance. Isometric knee extension torque (Nm) was measured as quadriceps strength. A series of general linear models were fitted to estimate the continuous associations of BMI and fat mass with quadriceps strength, with adjustment of confounders. In the fitted models, both BMI and fat mass were treated as restricted cubic spline functions. Results: A continuous, non-linear relationship between BMI and quadriceps strength was found (P = 0.008 for non-linearity). In patients with a BMI of 16-25 kg/m², increasing quadriceps strength was observed. However, in patients with a BMI of 25-40 kg/m², quadriceps strength seemed similar. Additionally, an inverted U-shaped relationship between fat mass and quadriceps strength was demonstrated (P = 0.04 for non-linearity). In those with a fat mass of 10-20 kg, increasing quadriceps strength was seen. However, in patients with a fat mass of 20-30 kg, quadriceps strength showed a decreasing trend. Independent of fat mass, leg muscle mass was linearly associated with greater quadriceps strength. Conclusion: Our study suggests that there are independent associations between leg muscle mass, fat mass, and quadriceps strength. It is difficult to easily predict quadriceps strength using only BMI. Key Points: • An increase in body mass index (BMI) up to 25 kg/m² was associated with increasing quadriceps strength. • Quadriceps strength remained almost unchanged among patients with a BMI of > 25 kg/m². • The association between fat mass and quadriceps strength had an inverted U-shaped relationship, suggesting the importance of the separate assessment of fat mass and muscle mass in patients with knee osteoarthritis, especially those who are overweight or obese.
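The SPSS-OK analysis above models BMI and fat mass as restricted cubic splines inside general linear models, which is what lets a non-linear (rising-then-plateauing) relationship emerge. The snippet below is a schematic illustration of that modelling idea on simulated data, not the study's code: the variable names, the knot count, and the use of statsmodels/patsy are assumptions made for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data mimicking the reported shape: torque rises with BMI up to
# roughly 25 kg/m^2 and then plateaus.
rng = np.random.default_rng(0)
bmi = rng.uniform(16, 40, 500)
torque = 60 + 4 * np.minimum(bmi, 25) + rng.normal(0, 8, 500)
df = pd.DataFrame({"bmi": bmi, "torque": torque})

# Natural (restricted) cubic spline basis for BMI via patsy's cr() transform;
# a real analysis would also adjust for the study's confounders.
model = smf.ols("torque ~ cr(bmi, df=4)", data=df).fit()
print(model.summary().tables[1])
```

Plotting the fitted curve over a grid of BMI values is the usual way to read off the non-linear relationship that a single linear BMI coefficient would miss.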
abstract_id: PUBMED:26619822 Effect of whole body vibration training on quadriceps muscle strength in individuals with knee osteoarthritis: a systematic review and meta-analysis. Background: Several studies have reported the effects of whole body vibration (WBV) training on muscle strength. This systematic review investigates the current evidence regarding the effects of WBV training on quadriceps muscle strength in individuals with knee osteoarthritis (OA). Data Sources: We searched PubMed, CINAHL, Embase, Scopus, PEDro, and Science citation index for research articles published prior to March 2015 using the keywords whole body vibration, vibration training, strength and vibratory exercise in combination with the Medical Subject Heading 'Osteoarthritis knee'. Study Selection: This meta-analysis was limited to randomized controlled trials published in the English language. Data Extraction: The quality of the selected studies was assessed by two independent evaluators using the PEDro scale and criteria given by the International Society of Musculoskeletal and Neuronal Interactions (ISMNI) for reporting WBV intervention studies. The risk of bias was assessed using the Cochrane collaboration's tool for domain-based evaluation. Isokinetic quadriceps muscle strength was calculated for each intervention. Results: Eighteen studies were identified in the search. Of these, four studies met the inclusion criteria. Three of these four studies reached high methodological quality on the PEDro scale. Out of the four studies, only one study found significantly greater quadriceps muscle strength gains following WBV compared to the control group. Conclusions: In three of the four studies that compared a control group performing the same exercise as the WBV groups, no additional effect of WBV on quadriceps muscle strength in individuals with knee OA was indicated. abstract_id: PUBMED:20463561 Isometric quadriceps strength in women with mild, moderate, and severe knee osteoarthritis. Objective: Quadriceps weakness is a common clinical sign in persons with moderate-to-severe osteoarthritis and results in physical disability; however, minimal data exist to establish whether quadriceps weakness is present in early stages of the disease. Therefore, our purpose was to determine whether quadriceps weakness was present in persons with early radiographic and cartilaginous evidence of osteoarthritis. Further, we sought to determine whether quadriceps strength decreases as osteoarthritis severity increases. Design: Three hundred forty-eight women completed radiologic and magnetic resonance imaging evaluation, in addition to strength testing. Anterior-posterior radiographs were graded for tibiofemoral osteoarthritis severity using the Kellgren-Lawrence scale. Scans from magnetic resonance imaging were used to assess medial tibiofemoral and patellar cartilage based on a modification of the Noyes scale. The peak knee extension torque recorded was used to represent strength. Results: Quadriceps strength (Nm/kg) was 22% greater in women without radiographic osteoarthritis than in women with osteoarthritis (P < 0.05). Quadriceps strength was also greater in women with Noyes' medial tibial and femoral cartilage scores of 0 when compared with women with Noyes' grades 2 and 3-5 (P ≤ 0.05). Conclusions: Women with early evidence of osteoarthritis had less quadriceps strength than women without osteoarthritis as defined by imaging.
abstract_id: PUBMED:31585496 Influence of Antagonistic Hamstring Coactivation on Measurement of Quadriceps Strength in Older Adults. Background: There is limited understanding of how antagonist muscle coactivation relates to measurement of strength in both individuals with and without knee osteoarthritis (KOA). Objective: This study sought to determine whether hamstring coactivation during a maximal quadriceps activation task attenuates net quadriceps strength. Design: Cross-sectional cohort analysis was conducted using data from the 60-month visit of the Multicenter Osteoarthritis Study (MOST). Setting: Laboratory. Participants: A sample of 2328 community-dwelling MOST participants between the ages of 55 and 84 years, with or at elevated risk for KOA, completed the 60-month MOST follow-up visit. Of these, 1666 met inclusion criteria for the current study. Interventions: Not applicable. Main Outcome Measure(s): Quadriceps strength; percentage of combined hamstring coactivation (HC), medial HC, and lateral HC. Quadriceps and hamstring strength were assessed using an isokinetic dynamometer. Surface electromyography was used to assess muscle activation patterns. General linear models, adjusted for age, BMI, Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), Kellgren-Lawrence (KL) grade and study site, modeled the relationship between antagonist hamstring coactivation and quadriceps strength. Results: Men had significantly greater quadriceps strength (P < .001), history of knee injury (P < .001) and surgery (P = .002), and greater presence of varus malalignment (P < .001). Women had greater pain (P < .001) and proportion of KL grade ≥2 (P = .017). Gender-specific analyses revealed that combined HC (P = .013) and lateral HC were inversely associated with quadriceps strength in women (P = .023) but not in men (combined HC P = .320, lateral HC P = .755). A nonlinear association was detected between quadriceps strength and medial HC. Assessment of quartiles of medial HC revealed that the third quartile had reduced quadriceps strength when compared to the lowest quartile of coactivation in both men and women. Conclusions: Hamstring coactivation attenuates measured quadriceps strength in women with or at elevated risk for KOA. Level Of Evidence: II. abstract_id: PUBMED:32342241 Structural severity, phase angle, and quadriceps strength among patients with knee osteoarthritis: the SPSS-OK study. Introduction/objectives: The associations between severity of knee osteoarthritis (KOA) and phase angle (PhA) and between PhA and quadriceps strength in patients with KOA are unclear. This study examined (1) whether the structural severity of KOA affects PhA and (2) whether PhA affects quadriceps strength in patients with KOA. Method: Data of 1093 patients with KOA, obtained from the Screening for People Suffering Sarcopenia in the Orthopedic cohort of Kobe study, were analyzed. PhA was determined by bioimpedance. Quadriceps strength was measured using a handheld dynamometer. Structural severity of KOA was determined using the Kellgren-Lawrence radiographic grading scale. A series of general linear models were fitted to estimate the magnitude of differences in PhA by differences in KOA severity and quadriceps strength by differences in PhA. Results: The mean age of the patients was 72.8 years, and 78% were women. Increasing KOA severity was associated with decreasing PhA, especially in men. In women, only grade 4 KOA was associated with a decrease in PhA (P for interaction = 0.048).
PhA per leg was positively associated with quadriceps strength per leg, independent of age, sex, leg muscle mass, pain, and KOA severity (mean difference per 1° increase = 7.54 Nm, 95% confidence interval = 5.51-9.57 Nm). The association between PhA and quadriceps strength differed neither by sex nor by KOA severity (P for interaction = 0.133 and 0.185, respectively). Conclusions: PhA decreased with increasing KOA severity, and increasing PhA was associated with increasing quadriceps strength. Clinicians should, therefore, evaluate PhA to assess quadriceps strength in patients with KOA. Key Points • PhA gradually decreased with increasing severity of KOA, especially in men. • Increasing PhA was associated with increasing quadriceps strength. • Clinicians should focus on increasing muscle mass and PhA. abstract_id: PUBMED:26474770 Quadriceps muscle strength, radiographic knee osteoarthritis and knee pain: the ROAD study. Background: The objective of this study was to clarify the association of quadriceps muscle strength with knee pain using a large-scale, population-based cohort of the Research on Osteoarthritis/osteoporosis Against Disability (ROAD) study. Methods: From the 2566 subjects at the third visit of the ROAD study, the present study analyzed 2152 subjects who completed radiographic examinations and measurements of muscle strength and mass (690 men and 1462 women; mean age, 71.6 ± 12.2 years). Knee pain was assessed by an experienced orthopedist. Knee osteoarthritis (OA) was defined according to Kellgren-Lawrence (KL) grade. Quadriceps muscle strength and muscle mass at the lower limbs were measured by the Quadriceps Training Machine (QTM-05F, Alcare Co., Ltd. Tokyo, Japan) and the Body Composition Analyzer MC-190 (Tanita Corp., Tokyo, Japan), respectively. Results: Quadriceps muscle strength and weight bearing index (WBI: quadriceps muscle strength by weight) were significantly associated with knee pain after adjustment for age and body mass index, whereas grip strength and muscle mass at the lower limbs were not. The significant association of quadriceps muscle strength with knee pain was independent of radiographic knee OA. Conclusion: The present cross-sectional study showed an independent association of quadriceps muscle strength with knee pain. Answer: The evidence from the provided abstracts suggests that reduced quadriceps strength relative to body weight may indeed be a risk factor for knee osteoarthritis (OA) in women. A prospective study involving elderly community-dwelling subjects found that women who developed incident knee OA had 18% lower knee extensor strength at baseline after adjusting for body weight, and there was a strong negative correlation between body weight and extensor strength among these women (PUBMED:9811049). This indicates that the more obese the subject, the greater the reduction in quadriceps strength, suggesting that reduced quadriceps strength relative to body weight could be a risk factor for knee OA in women. Another study found that maximal quadriceps isometric strength directly interferes with gait ability in women with knee osteoarthritis, implying the importance of quadriceps strength in the management of knee OA and potentially as a risk factor (PUBMED:35573645). 
However, a cross-sectional observational study found no significant difference in peak quadriceps strength by body mass index (BMI) group or by the presence of knee OA in adults aged 50-59 years, suggesting that factors other than strength might mediate the association between obesity and knee OA (PUBMED:21497317). A secondary analysis of a randomized control trial indicated that after 12 weeks of exercise, reduced interleukin-6 and tumor necrosis factor were associated with increased quadriceps strength in women with knee osteoarthritis (PUBMED:34000629). This suggests that inflammation may play a role in quadriceps strength and potentially in the risk of knee OA. The SPSS-OK study highlighted the importance of evaluating leg muscle mass and fat mass separately for quadriceps strength in knee OA, showing that there are independent associations between leg muscle mass, fat mass, and quadriceps strength (PUBMED:31889244). This complexity indicates that simply using BMI may not be sufficient to predict quadriceps strength and its relation to knee OA risk. In summary, while there is evidence to support the notion that reduced quadriceps strength relative to body weight is a risk factor for knee OA in women, the relationship is complex and may involve other factors such as inflammation and the separate contributions of muscle and fat mass.
Instruction: Does bacille Calmette-Guérin scar size have implications for protection against tuberculosis or leprosy? Abstracts: abstract_id: PUBMED:8762845 Does bacille Calmette-Guérin scar size have implications for protection against tuberculosis or leprosy? Setting: Total population study in Karonga District, northern Malawi, in which the overall vaccine efficacy of bacille Calmette-Guérin (BCG) has been found to be -7% against tuberculosis and 54% against leprosy. Objective: To examine the relationship between BCG scar size and protection against tuberculosis and leprosy. Design: Cohort study in which 85,134 individuals were screened for tuberculosis and 82,265 for leprosy between 1979 and 1984, and followed up between 1986 and 1989. Results: Of the BCG scar positive individuals whose scars were measured, 31/32 471 were later identified with tuberculosis and 81/31 879 with leprosy. In 19,114 individuals, of whom 17 developed tuberculosis, tuberculin induration was measured at first examination. Mean scar sizes increased with increasing tuberculin induration in all except the oldest individuals. Mean scar sizes were lowest in individuals aged < 10 years, highest in individuals aged 10-29 years and intermediate in older individuals. There was some evidence (P = 0.08) for an increase in tuberculosis risk with increasing scar size, which probably reflects the known correlation between scar size and tuberculin status at the time of vaccination. There was no clear association between BCG scar size and leprosy incidence. Conclusions: We find no evidence that increased BCG scar size is a correlate of vaccine-induced protective immunity against either tuberculosis or leprosy. abstract_id: PUBMED:34525502 Bacillus of Calmette and Guérin (BCG) and the risk of leprosy in Ciudad del Este, Paraguay, 2016-2017. Objectives: Paraguay has experienced a 35% reduction in the detected incidence of leprosy during the last ten years, as the vaccination coverage against tuberculosis (Bacillus of Calmette and Guérin [BCG] vaccine) reached ≥95% among infants. The objective of this case-control study was to evaluate the protective effect of BCG on the risk of leprosy. Methods: We used a population-based case-control study of 20 leprosy confirmed cases reported among residents of Ciudad del Este, Paraguay, diagnosed in 2016-2017. Three controls were selected from a random sample of households from the city. We assessed vaccine effectiveness using 1 - odds ratio [OR], and confounding for age, gender, education, occupation, and marital status using stratified and exact logistic regression, and explored if there was effect modification calculating the synergy factor (SF) and relative excess risk due to interaction (RERI). Results: After controlling for age, gender, education, occupation and marital status, the OR of BCG scar on the risk of leprosy was 0.10 (95% confidence interval [CI], 0.02 to 0.45), for an estimate of vaccine effectiveness of 89.5% reduced risk of leprosy (95% CI, 55.2 to 98.1). There was evidence of heterogeneity by which the effectiveness of BCG seemed stronger among younger persons (Breslow-Day and Z-test of the SF had a p < 0.05), and both the RERI and SF indicated a less than multiplicative and additive interaction of BCG and younger age. Conclusions: BCG vaccination was associated with a decreased risk of leprosy in the study population, particularly in persons born after 1980.
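Several of the case-control studies in this set report vaccine effectiveness derived from an odds ratio as VE = 1 - OR, with the confidence bounds transformed the same way. A minimal worked example of that arithmetic, using the odds ratio reported in the Paraguay study above, is sketched below; it is an illustration only, and the small difference from the published 89.5% comes from rounding of the reported OR.

```python
# Vaccine effectiveness from a case-control odds ratio: VE = (1 - OR) * 100.
# Point estimate and CI taken from PUBMED:34525502 (OR 0.10, 95% CI 0.02-0.45).
def vaccine_effectiveness(odds_ratio: float) -> float:
    """Return vaccine effectiveness (%) implied by an odds ratio."""
    return (1.0 - odds_ratio) * 100.0

or_point, or_lower, or_upper = 0.10, 0.02, 0.45

ve_point = vaccine_effectiveness(or_point)  # ~90% (reported: 89.5% from the unrounded OR)
ve_lower = vaccine_effectiveness(or_upper)  # upper OR bound maps to the lower VE bound (~55%)
ve_upper = vaccine_effectiveness(or_lower)  # lower OR bound maps to the upper VE bound (~98%)
print(f"VE = {ve_point:.1f}% (95% CI {ve_lower:.1f} to {ve_upper:.1f})")
```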
abstract_id: PUBMED:15661132 Effectiveness of Bacillus Calmette Guerin (BCG) vaccination in the prevention of leprosy: a population-based case-control study in Yavatmal District, India. Objective: To estimate the effectiveness of Bacillus Calmette Guerin (BCG) vaccination in the prevention of leprosy. Study design: Population-based case-control study. Methods: The study was carried out in Yavatmal District, Maharashtra, India. It included 364 cases of leprosy (diagnosed by the World Health Organization's criteria), born since 1962, that were detected during a leprosy survey conducted by the Government of Maharashtra in 2,175,514 people. Each case was pair-matched with one neighbourhood control for age, sex and socio-economic status. Exclusion criteria for controls included past or current history of tuberculosis or leprosy. BCG vaccination status was assessed by examination for the presence of a BCG scar, immunization records if available and information from subjects/parents of children. Subjects who were uncertain about BCG vaccination were not included. Results: A significant protective association between BCG and leprosy was observed [odds ratio=0.46, 95% confidence intervals (CI) 0.34-0.61]. Overall vaccine effectiveness (VE) was 54% (95% CI 39-66). BCG effectiveness against multibacillary, paucibacillary and single skin lesion leprosy was 68% (95% CI 26-86), 57% (95% CI 29-74) and 48% (95% CI 22-65), respectively. Analysis of linear trend revealed a significant linear association between the protective effect of BCG and the type of leprosy. The BCG vaccine was more effective in those aged ≤20 years compared with those aged >20 years (VE 61%, 95% CI), among females compared with males (VE 60%, 95% CI), in lower socio-economic strata compared with upper and middle strata (VE 57%, 95% CI), and in subjects who had a BCG scar size ≤5 mm compared with those with a BCG scar size >5 mm (VE 61%, 95% CI). However, these differences were not statistically significant, as reflected by the overlapping 95% CIs. The overall prevented fraction was 35% (95% CI 22-46). Conclusion: The current study identified a beneficial role of BCG vaccination in the prevention of leprosy in the study population. abstract_id: PUBMED:32758336 Avoiding COVID-19 complications with diabetic patients could be achieved by multi-dose Bacillus Calmette-Guérin vaccine: a case study of beta cells regeneration. Diabetes mellitus (DM) is one of the major risk factors for COVID-19 complications as it is one of the chronic immune-compromising conditions especially if patients have uncontrolled diabetes, poor HbA1c and/or irregular blood glucose levels. Diabetic patients' mortality rates with COVID-19 are higher than those of cardiovascular or cancer patients. Recently, Bacillus Calmette-Guérin (BCG) vaccine has shown successful results in reversing diabetes in both rats and clinical trials based on different mechanisms from aerobic glycolysis to beta cells regeneration. BCG is a multi-face vaccine that has been used extensively in protection from tuberculosis (TB) and leprosy and has been repositioned for treatment of bladder cancer, diabetes and multiple sclerosis. Recently, COVID-19 epidemiological studies confirmed that universal BCG vaccination reduced morbidity and mortality in certain geographical areas.
Countries without universal policies of BCG vaccination (Italy, the Netherlands, USA) have been more severely affected compared to countries with universal and long-standing BCG policies that have shown low numbers of reported COVID-19 cases. Some countries have started clinical trials that included a single dose BCG vaccine as prophylaxis from COVID-19 or an attempt to minimize its side effects. This proposed research aims to use BCG vaccine as a double-edged weapon countering both COVID-19 and diabetes, not only as protection but also as therapeutic vaccination. The work includes a case study of regenerated pancreatic beta cells based on improved C-peptide and PCPRI laboratory findings after BCG vaccination for a 9-year-old patient. The patient was re-vaccinated based on a negative tuberculin test and no scar at the site of injection of the 1st BCG vaccination at birth. The authors suggest and invite the scientific community to take into consideration the concept of direct BCG re-vaccination (after 4 weeks) because of the reported gene expressions and exaggerated innate immunity consequently. As the diabetic MODY-5 patient (mutation of HNF1B, Val2Leu) was on low dose Riomet® while eliminating insulin gradually, a simple analytical method for metformin assay was recommended to ensure its concentration before use as it is not approved yet by the Egyptian QC labs. abstract_id: PUBMED:21423646 BCG-mediated protection against Mycobacterium ulcerans infection in the mouse. Background: Vaccination with Mycobacterium bovis bacille Calmette-Guérin (BCG) is widely used to reduce the risk of childhood tuberculosis and has been reported to have efficacy against two other mycobacterial diseases, leprosy and Buruli ulcer caused by M. ulcerans (Mu). Studies in experimental models have also shown some efficacy against infection caused by Mu. In mice, most studies use the C57BL/6 strain that is known to develop good cell-mediated protective immunity. We hypothesized that there may be differences in vaccination efficacy between C57BL/6 and the less resistant BALB/c strain. Methods: We evaluated BCG vaccine efficacy against challenge with ∼3×10(5) M. ulcerans in the right hind footpad using three strains: initially, the Australian type strain, designated Mu1617, then, a Malaysian strain, Mu1615, and a recent Ghanaian isolate, Mu1059. The latter two strains both produce mycolactone while the Australian strain has lost that capacity. CFU of both BCG and Mu and splenocyte cytokine production were determined at intervals after infection. Time to footpad swelling was assessed weekly. Principal Findings: BCG injection induced visible scars in 95.5% of BALB/c mice but only 43.4% of C57BL/6 mice. BCG persisted at higher levels in spleens of BALB/c than C57BL/6 mice. Vaccination delayed swelling and reduced Mu CFU in BALB/c mice, regardless of challenge strain. However, vaccination was only protective against Mu1615 and Mu1617 in C57BL/6 mice. Possible correlates of the better protection of BALB/c mice included 1) the near universal development of BCG scars in these mice compared to less frequent and smaller scars observed in C57BL/6 mice and 2) the induction of sustained cytokine, e.g., IL17, production as detected in the spleens of BALB/c mice whereas cytokine production was significantly reduced, e.g., IL17, or transient, e.g., Ifnγ, in the spleens of C57BL/6 mice. Conclusions: The efficacy of BCG against M.
ulcerans, in particular, and possibly mycobacteria in general, may vary due to differences in both host and pathogen. abstract_id: PUBMED:18229442 Scar size and effectiveness of Bacillus Calmette Guerin (BCG) vaccination in the prevention of tuberculosis and leprosy: a case-control study. Background: The study was undertaken to estimate the effectiveness of BCG vaccination in relation to scar size in the prevention of tuberculosis and leprosy. Methods: The present study was designed as a hospital-based pair-matched case-control study and was carried out at Government Medical College Hospital, Nagpur, Maharashtra, India. It included 877 cases of tuberculosis and 292 cases of leprosy (diagnosed by WHO criteria), born from 1962 onwards. Each case was pair-matched with one control for age, sex and socio-economic status. BCG vaccination status was assessed by examination for the presence of BCG scar, immunisation records if available and information from subjects/parents of children. Subjects uncertain about BCG vaccination were not included. The diameter of the BCG scar was measured both across and along the arm in millimeters using a plastic ruler. The average was then calculated. Results: A significant protective association between BCG vaccination and tuberculosis (OR=0.38, 95% CI 0.31-0.47) and leprosy (OR = 0.38, 95% CI 0.26-0.55) was observed. The overall vaccine effectiveness (VE) was 62% (95% CI 53-69) against tuberculosis and 62% (95% CI 45-) against leprosy. Vaccine effectiveness against tuberculosis and leprosy was non-significantly greater in the group who had BCG scar size ≤5 mm as compared to subjects who had BCG scar size > 5 mm. Thus there was no clear association between BCG scar size and its effectiveness. Conclusion: The current study did not identify any significant association between BCG scar size and its effectiveness against tuberculosis or leprosy. abstract_id: PUBMED:1347338 Efficacy of BCG vaccine against leprosy and tuberculosis in northern Malawi. Protection afforded by BCG (bacillus Calmette-Guérin) vaccines against tuberculosis and leprosy varies widely between different populations. In the only controlled trial which assessed protective efficacy of BCG (Danish and Pasteur strains) against both diseases, there was slightly more protection against leprosy than against tuberculosis. We have studied the protective efficacy of BCG (Glaxo, freeze dried) vaccine against these two diseases in Karonga District, northern Malawi. BCG vaccination was introduced into this population in 1974. Prior information about BCG scar status was available for 83,455 individuals followed up between 1979 and 1989. 414 new cases of leprosy and 180 new cases of tuberculosis were found in this population over that period. Protection was estimated at 50% or greater against leprosy, and there was no evidence for lower protection against multibacillary (84%; 95% confidence interval 26% to 97%) than against paucibacillary (51%; 30% to 66%) disease. There was no statistically significant protection by BCG against tuberculosis in this population. These findings add to the evidence that BCG vaccines afford greater protection against leprosy than against tuberculosis. abstract_id: PUBMED:31858747 Cytological diagnosis in a clinically unsuspected case of disseminated BCGosis: A case report. Bacille Calmette-Guerin (BCG) vaccine is administered worldwide to neonates and considered safe. Serious complications like disseminated BCGosis are extremely rare occurrences (<1 per million vaccinations).
A 6-month-old male was brought to the paediatric outpatient department with fever and swelling over the dorsum of the left hand for 5 days. On examination, he was febrile and had hepatosplenomegaly. X-ray of the hand showed lytic lesions in the first and second metacarpals. Provisional clinical diagnosis included Langerhans cell histiocytosis, congenital syphilis, and haematological malignancy. Fine Needle Aspiration Cytology (FNAC) was done from the swelling and showed diffuse sheets of histiocytes with both intracellular and extracellular rod-shaped unstained structures along with inflammatory cells. These ghost images stained positive with ZN stain. A cytological diagnosis of atypical mycobacteria vs leprosy was made. The child was re-examined and found to have an active BCG scar. Further investigations showed low serum IgM and positive AFB culture. These bacilli were confirmed by GenoType MTBDR plus test as Mycobacterium bovis. Despite antitubercular therapy, the patient died. This case highlights the variable clinical presentation of BCGosis. Its occurrence may unmask any underlying immunodeficiency. If unfamiliar with the above cytological features and in the absence of routinely performed special stains, the cytopathologist may miss these notorious organisms and treat such cases like suppurative lesions. To conclude, an early and definitive diagnosis of BCGosis can be established on FNAC which would ensure timely management and better outcome in this highly lethal entity. abstract_id: PUBMED:25569674 Effectiveness of routine BCG vaccination on buruli ulcer disease: a case-control study in the Democratic Republic of Congo, Ghana and Togo. Background: The only available vaccine that could be potentially beneficial against mycobacterial diseases contains live attenuated bovine tuberculosis bacillus (Mycobacterium bovis) also called Bacillus Calmette-Guérin (BCG). Even though the BCG vaccine is still widely used, results on its effectiveness in preventing mycobacterial diseases are partially contradictory, especially regarding Buruli Ulcer Disease (BUD). The aim of this case-control study is to evaluate the possible protective effect of BCG vaccination on BUD. Methodology: The present study was performed in three different countries and sites where BUD is endemic: in the Democratic Republic of the Congo, Ghana, and Togo from 2010 through 2013. The large study population comprised 401 cases with laboratory-confirmed BUD and 826 controls, mostly family members or neighbors. Principal Findings: After stratification by the three countries, two sexes and four age groups, no significant correlation was found between the presence of BCG scar and BUD status of individuals. Multivariate analysis has shown that the independent variables country (p = 0.31), sex (p = 0.24), age (p = 0.96), and presence of a BCG scar (p = 0.07) did not significantly influence the development of BUD category I or category II/III. Furthermore, the status of BCG vaccination was also not significantly related to duration of BUD or time to healing of lesions. Conclusions: In our study, we did not observe significant evidence of a protective effect of routine BCG vaccination on the risk of developing either BUD or severe forms of BUD. Since accurate data on BCG strains used in these three countries were not available, no final conclusion can be drawn on the effectiveness of BCG strain in protecting against BUD.
As has been suggested for tuberculosis and leprosy, well-designed prospective studies on different existing BCG vaccine strains are needed also for BUD. abstract_id: PUBMED:8691924 Randomised controlled trial of single BCG, repeated BCG, or combined BCG and killed Mycobacterium leprae vaccine for prevention of leprosy and tuberculosis in Malawi. Karonga Prevention Trial Group. Background: Repeat BCG vaccination is standard practice in many countries for prevention of tuberculosis and leprosy, but its effectiveness has not been evaluated. The addition of Mycobacterium leprae antigens to BCG might improve its effectiveness against leprosy. A double-blind, randomised, controlled trial to evaluate both these procedures was carried out in Karonga District, northern Malawi, where a single BCG vaccine administered by routine health services had previously been found to afford greater than 50% protection against leprosy, but no protection against tuberculosis. Methods: Between 1986 and 1989, individuals lacking a BCG scar were randomly assigned BCG alone (27,904) or BCG plus killed M leprae (38,251). Individuals with a BCG scar were randomly allocated placebo (23,307), a second BCG (23,456), or BCG plus killed M leprae (8102). Incident cases of leprosy and tuberculosis were ascertained over the subsequent 5-9 years. Findings: 139 cases of leprosy were identified by May, 1995; 93 of these were diagnostically certain, definitely postvaccination cases. Among scar-positive individuals, a second BCG vaccination gave further protection against leprosy (about 50%) over a first BCG vaccination. The rate ratio for all diagnostically certain, definitely postvaccination cases, all ages, was 0.51 (95% CI 0.25-1.03, p = 0.05) for BCG versus placebo. This benefit was apparent in all subgroups, although the greatest effect was among individuals vaccinated below 15 years of age (RR = 0.40 [95% CI 0.15-1.01], p = 0.05). The addition of killed M leprae did not improve the protection afforded by a primary BCG vaccination. The rate ratio for BCG plus killed M leprae versus BCG alone among scar-negative individuals was 1.06 (0.62-1.82, p = 0.82) for all ages, though 0.37 (0.11-1.24, p = 0.09) for individuals vaccinated below 15 years of age. 376 cases of postvaccination pulmonary tuberculosis and 31 of glandular tuberculosis were ascertained by May, 1995. The rate of diagnostically certain tuberculosis was higher among scar-positive individuals who had received a second BCG (1.43 [0.88-2.35], p = 0.15) than among those who had received placebo and there was no evidence that any of the trial vaccines contributed to protection against pulmonary tuberculosis. Interpretation: In a population in which a single BCG vaccination affords 50% or more protection against leprosy, but none against tuberculosis, a second vaccination can add appreciably to the protection against leprosy, without providing any protection against tuberculosis. Answer: The relationship between bacille Calmette-Guérin (BCG) scar size and protection against tuberculosis or leprosy has been investigated in various studies, but the evidence is not consistent. One cohort study in Karonga District, northern Malawi, found no evidence that increased BCG scar size correlates with vaccine-induced protective immunity against either tuberculosis or leprosy (PUBMED:8762845). 
Similarly, another case-control study conducted in Nagpur, Maharashtra, India, did not identify any significant association between BCG scar size and its effectiveness against tuberculosis or leprosy (PUBMED:18229442). However, other studies have shown a protective effect of BCG vaccination against leprosy. For instance, a population-based case-control study in Ciudad del Este, Paraguay, found that BCG vaccination was associated with an 89.5% reduced risk of leprosy, with evidence suggesting that the effectiveness of BCG seemed stronger among younger persons (PUBMED:34525502). Another population-based case-control study in Yavatmal District, India, observed a significant protective association between BCG and leprosy, with overall vaccine effectiveness of 54% (PUBMED:15661132). In the context of other mycobacterial diseases, a study on the effectiveness of BCG against Mycobacterium ulcerans infection in mice indicated that BCG vaccination delayed swelling and reduced bacterial CFU in BALB/c mice, regardless of the challenge strain (PUBMED:21423646). However, a case-control study on the effectiveness of routine BCG vaccination on Buruli ulcer disease in the Democratic Republic of the Congo, Ghana, and Togo did not observe significant evidence of a protective effect of routine BCG vaccination on the risk of developing either Buruli ulcer disease or severe forms of the disease (PUBMED:25569674). In summary, while some studies suggest a protective effect of BCG vaccination against leprosy, particularly in younger individuals, the size of the BCG scar does not appear to have a clear association with the level of protection against tuberculosis or leprosy. The effectiveness of BCG vaccination may vary due to differences in both host and pathogen, and further research is needed to fully understand the relationship between BCG scar size and vaccine efficacy (PUBMED:8762845; PUBMED:18229442).
Instruction: Insurance claims data: a possible solution for a national sports injury surveillance system? Abstracts: abstract_id: PUBMED:24928588 Insurance claims data: a possible solution for a national sports injury surveillance system? An evaluation of data information against ASIDD and consensus statements on sports injury surveillance. Background: Before preventive actions can be suggested for sports injuries at the national level, a solid surveillance system is required in order to study their epidemiology, risk factors and mechanisms. There are guidelines for sports injury data collection and classifications in the literature for that purpose. In Sweden, 90% of all athletes (57/70 sports federations) are insured with the same insurance company and data from their database could be a foundation for studies on acute sports injuries at the national level. Objective: To evaluate the usefulness of sports injury insurance claims data in sports injury surveillance at the national level. Method: A database with 27 947 injuries was exported to an Excel file. Access to the corresponding text files was also obtained. Data were reviewed on available information, missing information and dropouts. Comparison with ASIDD (Australian Sports Injury Data Dictionary) and existing consensus statements in the literature (football (soccer), rugby union, tennis, cricket and thoroughbred horse racing) was performed in a structured manner. Result: Comparison with ASIDD showed that 93% of the suggested data items were present in the database to at least some extent. Compliance with the consensus statements was generally high (13/18). Almost all claims (83%) contained text information concerning the injury. Conclusions: Relatively high-quality sports injury data can be obtained from a specific insurance company at the national level in Sweden. The database has the potential to be a solid base for research on acute sports injuries in different sports at the national level. abstract_id: PUBMED:26967548 Injury Scheme Claims in Gaelic Games: A Review of 2007-2014. Context: Gaelic games (Gaelic football and hurling) are indigenous Irish sports with increasing global participation in recent years. Limited information is available on longitudinal injury trends. Reviews of insurance claims can reveal the economic burden of injury and guide cost-effective injury-prevention programs. Objective: To review Gaelic games injury claims from 2007-2014 for male players to identify the costs and frequencies of claims. Particular attention was devoted to lower limb injuries due to findings from previous epidemiologic investigations of Gaelic games. Design: Descriptive epidemiology study. Setting: Open-access Gaelic Athletic Association Annual Reports from 2007-2014 were reviewed to obtain annual injury-claim data. Patients Or Other Participants: Gaelic Athletic Association players. Main Outcome Measure(s): Player age (youth or adult) and relationships between lower limb injury-claim rates and claim values, Gaelic football claims, hurling claims, youth claims, and adult claims. Results: Between 2007 and 2014, €64 733 597.00 was allocated to 58 038 claims. Registered teams had annual claim frequencies of 0.36 with average claim values of €1158.4 ± 192.81. Between 2007 and 2014, average adult claims were always greater than youth claims (6217.88 versus 1036.88), while Gaelic football claims were always greater than hurling claims (5395.38 versus 1859.38). Lower limb injuries represented 60% of all claims. 
The number of lower limb injury claims was significantly correlated with annual injury-claim expenses (r = 0.85, P = .01) and adult claims (r = 0.96, P = .01) but not with youth claims (r = 0.69, P = .06). Conclusions: Reducing lower limb injuries will likely reduce injury-claim expenses. Effective injury interventions have been validated in soccer, but whether such changes can be replicated in Gaelic games remains to be investigated. Injury-claim data should be integrated into current elite injury-surveillance databases to monitor the cost effectiveness of current programs. abstract_id: PUBMED:30324803 Medical Claims at National Collegiate Athletic Association Institutions: The Athletic Trainer's Role. Context: National Collegiate Athletic Association (NCAA) institutions are required to certify insurance coverage of medical expenses for injuries student-athletes sustain while participating in NCAA events. Institutions assign this role to a variety of employees, including athletic trainers (ATs), athletic administrators, business managers, secretaries, and others. In 1994, Street et al. observed that ATs were responsible for administering medical claim payments at 68.1% of institutions. Anecdotally, ATs do not always feel well suited to perform these tasks. Objective: To investigate the ways athletic associations and departments coordinate athletic medical claims and the role of ATs in this process. Design: Cross-sectional study. Setting: Online Web-based survey. Patients Or Other Participants: All 484 National Athletic Trainers' Association members self-identified as a head AT within an NCAA collegiate or university setting were solicited to respond to the online Web-based survey. Responses from 184 (38%) head ATs employed in collegiate settings were analyzed. Main Outcome Measure(s): Institutional demographic characteristics, type of insurance coverage, person assigned to handle insurance claims, hours spent managing claims, and training for the task. Results: In 62% of institutions, an AT was responsible for processing athletic medical claims. The head and assistant ATs spent means of 6.17 and 10.32 hours per week, respectively, managing claims. Most respondents (62.1%) reported no formal training in handling athletic medical insurance claims. When asked when and how it was most appropriate to learn these concepts, 35.3% cited within an accredited athletic training program curriculum, 32.9% preferred on-the-job training, and 31.1% selected via continuing education. Conclusions: At NCAA institutions, ATs were responsible for administering athletic medical claims, a task in which most had no formal training. An AT may not possess adequate skills or time to handle athletic medical claims. Even if ATs are not solely responsible for this task, they remain involved as the coordinators of care. Athletic training programs, professional organizations that offer continuing education, and hiring institutions should consider focusing on and training appropriate personnel to manage athletic medical claims. abstract_id: PUBMED:27467365 Death in Community Australian Football: A Ten Year National Insurance Claims Report. While deaths are thought to be rare in community Australian sport, there is no systematic reporting, so the frequency and leading causes of death are unknown. The aim of this study was to describe the frequency and cause of deaths associated with community-level Australian Football (AF), based on insurance-claims records.
Retrospective review of prospectively collected insurance-claims for death in relation to community-level AF activities Australia-wide from 2004 to 2013. Eligible participants were aged 15+ years, involved in an Australian football club as players, coaches, umpires or supporting roles. Details were extracted for: year of death, level of play, age, sex, anatomical location of injury, and a descriptive narrative of the event. Descriptive data are presented for frequency of cases by subgroups. From 26,749 insurance-claims relating to AF, 31 cases were in relation to a death. All fatalities were in males. The initial event occurred during on-field activities of players (football matches or training) in 16 cases. The remainder occurred to people outside of on-field football activity (n = 8), or non-players (n = 7). Road trauma (n = 8) and cardiac conditions (n = 7) were the leading identifiable causes, with unconfirmed and other causes (including collapsed or not yet determined) comprising 16 cases. Although rare, fatalities do occur in community AF to both players and people in supporting roles, averaging 3 per year in this setting alone. A systematic, comprehensive approach to data collection is urgently required to better understand the risk and causes of death in participants of AF and other sports. abstract_id: PUBMED:31416755 Australian netball injuries in 2016: An overview of insurance data. Objectives: The objective of this study is to profile the netball-specific sporting injuries from a national community-level insurance claim database. Design: An audit of insurance injury claims. Methods: An electronic dataset containing successful injury insurance claim data from the 2016 netball season was retrospectively coded. Data were de-identified and coded to meet the Orchard Sports Injury Classification System. Descriptive data reported included age, injury date, activity type, anatomical injury location, nature of injury, weather conditions, indoor/outdoor surface, quarter injury occurred, and open text for injury description. Results: The dataset contained 1239 claims that were approved for payment by the insurance company. The overall incidence rate was 2.936 successful injury claims per 1000 participants. The average age of players with claims was 34 years. The majority of successful claims came from players aged 22 to 29 years (n=328; 27%) and 30-39 years (n=279; 23%) age groups. Of the successful claims for injury, most occurred during matches (n=1116; 92%), and were for injuries to the knee (n=509; 42%) and ankle (n=356; 29%) and for sprains/ligament damage (n=687; 57%) or fractures (n=182; 15%). Conclusions: Netball injuries profiled by an injury insurance dataset of successful claims mostly occurred to the knee and ankle. Sprains and ligament damage were the most common type of injury. This study strengthens the evidence for national injury prevention policies and strategies. Findings from the current study could be used in future to expand into mechanisms of injury, and injury diagnoses.
Methods: Retrospective analyses of injury narratives were conducted using data from the National Electronic Injury Surveillance System (NEISS) of the Consumer Product Safety Commission (CPSC), comprising individuals 18 and older presenting to U.S. EDs from 2004 to 2015, with injuries associated with track and field, applying the NEISS product code 5030 and patient narratives. National injury estimates were calculated using sample weights. National injury incidence rates were determined using U.S. census estimate data (denominator), and comparisons of categorical variables by gender were made using a chi-squared test, and associated p-values. Results: An estimated 42,947 ED visits for track and field-related injuries occurred among individuals 18 and older in the U.S. from 2004 to 2015, consisting of 23,509 incidents among men, and 19,438 among women. The highest rates of injury occurred in 2010 among men, and 2011 among women, with 3.47, and 2.70 injuries per 100,000 U.S. population, respectively. No statistically significant differences (α = 0.05) were found between genders for injury severity (p = 0.32), injury diagnosis (p = 0.30), and body region (p = 0.13), but there was a significant difference overall between genders for mechanism of injury (p = 0.01). Conclusions: To develop appropriate injury preventive interventions for track and field athletes, additional studies exploring associations between injury characteristics, namely the mechanisms of injury, and gender, are necessary. abstract_id: PUBMED:10083695 Evaluating Tackling Rugby Injury: the pilot phase for monitoring injury. Objective: To assess the suitability of two previously unused data sources for monitoring rugby injury throughout New Zealand. Method: Interviews were conducted with respondents sampled from players registered with the Rugby Football Unions (RFUs) and players claiming for rugby injuries from the Accident Rehabilitation and Compensation Insurance Corporation (ACC) in Auckland and Dunedin. Results: Of the 500 RFU players sampled, 63% were interviewed and of these 39 (12%) had been injured playing rugby union. Of the 456 ACC claimants sampled, 66% were interviewed and 265 (88%) had been injured playing rugby union. Conclusion: Identifying injured players through ACC claims was more efficient, both procedurally and because a smaller sample size was required to detect changes in incidence. Implications: With no routine surveillance of sports injury being undertaken, recording sporting codes in national injury surveillance systems would assist the monitoring of sports injury. abstract_id: PUBMED:9889533 Sport-related dental injury claims to the New Zealand Accident Rehabilitation & Compensation Insurance Corporation, 1993-1996: analysis of the 10 most common sports, excluding rugby union. A large number of New Zealanders participate in sport, either formally or informally; sporting injuries are common. In New Zealand, the Accident Rehabilitation & Compensation Insurance Corporation (ACC) is the main organisation that covers sports-related dental claims. Rugby union claims are the most common. The ACC's national data from 1993 to 1996 relating to dental claims for sports injuries (excluding rugby union) were analysed. This study identified 45 other sports in which participants are also at risk for dental injuries. Total claims per sport for each year were determined, and the "top 10" sports for claims per year were identified and compared for any change over the years studied.
The top 10 sports for 1993 and 1994 were, in descending order: swimming, rugby league, basketball, cricket, hockey, soccer, netball, squash, softball-baseball, and tennis. Data for 1995 and 1996 revealed a similar trend, except that touch rugby displaced tennis as the tenth-ranked sport. The most common age group for claims was 10-19 years, with a male:female ratio of approximately 2:1. Many sports, in addition to rugby union, place their participants at risk of dental injury. Awareness of prevention of dental injuries should be more widely promoted for all sports. abstract_id: PUBMED:1913025 Sports insurance and national governing bodies. A postal survey was conducted of the attitudes and advice of Welsh governing bodies of amateur sports and their Cardiff-based clubs towards personal sports insurance. Information on 36 of the 39 sports surveyed (92%) was sufficient for analysis. Twenty-two of these 36 sports (61%) organized insurance at a national level, one at club level (3%) and 13 (36%) provided no insurance advice. Only 12 sports (33%) insisted on mandatory insurance cover. Many sportsmen and women are left to search for an appropriate insurance cover themselves or remain uninsured. Owing to the lack of advice in 36% of cases, it was decided to survey Cardiff-based insurance brokers to establish what type of policy they would recommend. Only 41% of the brokers were able to offer a suitable policy. Club policies recommended by the Central Council for Physical Recreation provide individuals with the same sports cover (but at a lower premium) as is available from a personal accident policy. However, cover is somewhat inadequate and may provide a false sense of security. Sports with mandatory sports insurance are not necessarily those with the highest risk of disabling injury. abstract_id: PUBMED:38054439 'After they see what you have, they contact you': a qualitative study of national team football players', coaches' and clinicians' experiences in engaging with a context-specific injury surveillance system. Background: Sports injury surveillance systems aid injury prevention, but their development without considering end-users' perspectives has led to low adherence and honesty in self-reporting by players, compromising their effectiveness. Context-specific injury surveillance systems have been proposed to address these challenges, but there is a limited understanding of stakeholders' perceptions and experiences in using them. Objective: Following the implementation of a context-specific injury surveillance system with 100% player adherence over 60 weeks, the study aimed to investigate Under-21 Maltese national football team players', coaches' and clinicians' experiences of how and why they engaged with the system. Method: Seventeen semi-structured interviews with Under-21 Maltese male national football team players (n = 12), their coaches (n = 3) and clinicians (n = 2) were conducted. Data were analysed using reflexive thematic analysis. Results: Participants highlighted questionnaire-related factors that motivated engagement with the context-specific injury surveillance system (theme 1) and factors influencing further engagement with the system based on stakeholders' actions (theme 2). Perceived outcomes experienced as a result of engaging with the injury surveillance system (theme 3), in turn, motivated players to continue reporting and engaging with the system.
Conclusion: To encourage players' sustained reporting, injury-related information collected from well-designed questionnaires should serve to stimulate communication and teamwork among stakeholders, to prevent injuries and enhance performance. Answer: Insurance claims data has been evaluated and considered as a potential solution for a national sports injury surveillance system in several studies. The evaluation of sports injury insurance claims data in Sweden showed that such data could be a solid base for research on acute sports injuries at the national level, with 93% of the suggested data items from the Australian Sports Injury Data Dictionary (ASIDD) present in the database to at least some extent (PUBMED:24928588). Similarly, in Gaelic games, reviewing insurance claims revealed the economic burden of injury and could guide cost-effective injury-prevention programs, suggesting that injury-claim data should be integrated into current elite injury-surveillance databases (PUBMED:26967548). In the context of the National Collegiate Athletic Association (NCAA) institutions, athletic trainers (ATs) were found to be responsible for processing athletic medical claims in 62% of institutions, despite most having no formal training in handling these claims (PUBMED:30324803). This indicates that insurance claims data is already being used in some capacity for injury surveillance and management, although there may be a need for better training and systems to support this role. The analysis of insurance-claims records for community-level Australian Football (AF) provided insights into the frequency and cause of deaths associated with the sport, suggesting that insurance claims data can offer valuable information for understanding and preventing fatalities in sports (PUBMED:27467365). Similarly, insurance claim data was used to profile netball-specific sporting injuries, which could strengthen the evidence for national injury prevention policies and strategies (PUBMED:31416755). In New Zealand, identifying injured rugby players through Accident Rehabilitation and Compensation Insurance Corporation (ACC) claims was found to be more efficient than other methods, indicating the potential utility of insurance claims data for monitoring sports injuries (PUBMED:10083695). However, it was also noted that many sports participants may be left uninsured or without adequate advice on insurance, highlighting a gap in the current insurance systems related to sports (PUBMED:1913025). Lastly, the implementation of a context-specific injury surveillance system with high player adherence in Maltese national football highlighted the importance of designing systems that engage stakeholders effectively, which could be a consideration for integrating insurance claims data into broader surveillance efforts (PUBMED:38054439).
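As a rough illustration of how the claim counts, incidence rates, and cost figures quoted in these abstracts can be derived from a claims database, the short sketch below aggregates a hypothetical de-identified claims extract. It is not based on any of the cited datasets; the file name, column names, and registration figure are all assumptions made for the example.

```python
# Hypothetical sketch: turning an insurance-claims extract into simple surveillance metrics
# (claims per 1000 registered participants, total paid amount, body-region breakdown).
import pandas as pd

claims = pd.read_csv("claims_2016.csv")   # one row per successful injury claim (hypothetical)
registered_participants = 422_000         # hypothetical registration figure for the season

incidence_per_1000 = len(claims) / registered_participants * 1000
total_paid = claims["paid_amount"].sum()
by_region = claims["body_region"].value_counts(normalize=True).mul(100).round(1)

print(f"Incidence: {incidence_per_1000:.3f} claims per 1000 participants")
print(f"Total paid: {total_paid:,.2f}")
print(by_region.head())                   # e.g. knee and ankle shares, as reported for netball
```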
Instruction: Depression among surviving caregivers: does length of hospice enrollment matter? Abstracts: abstract_id: PUBMED:15569897 Depression among surviving caregivers: does length of hospice enrollment matter? Objective: Many terminally ill patients enroll in a hospice late in their illness, and recent data indicate decreasing lengths of hospice enrollment, yet we know little about the impact of hospice enrollment length on surviving caregivers. This is the first study the authors know of that examines the association between hospice enrollment length and subsequent major depressive disorder among surviving caregivers. Method: The authors conducted a prospective cohort study with 174 primary family caregivers of consecutively enrolled hospice patients with cancer between October 1999 and September 2001. Using data from in-person interviews at the time of enrollment and 6-8 months after the patient's death, they used logistic regression to estimate the adjusted risk of major depressive disorder, assessed with the Structured Clinical Interview for the DSM-IV axis I modules, according to the number of days of hospice care before death. Results: Caregivers of patients enrolled with hospice for 3 or fewer days were significantly more likely to have major depressive disorder at the follow-up interview than caregivers of those with longer hospice enrollment (24.1% versus 9.0%, respectively), adjusted for baseline major depressive disorder and other potential confounders. Conclusions: The findings identify a target group for whom bereavement services might be most needed. The authors also suggest that earlier hospice enrollment may help reduce the risk of major depressive disorder during the first 6-8 months of bereavement, which raises concerns about recent trends toward decreasing lengths of hospice enrollment before death. abstract_id: PUBMED:26009859 Association Between Hospice Use and Depressive Symptoms in Surviving Spouses. Importance: Family caregivers of individuals with serious illness are at risk for depressive symptoms and depression. Hospice includes the provision of support services for family caregivers, yet evidence is limited regarding the effect of hospice use on depressive symptoms among surviving caregivers. Objective: To determine the association between hospice use and depressive symptoms in surviving spouses. Design, Setting, And Participants: We linked data from the Health and Retirement Study, a nationally representative longitudinal survey of community-dwelling US adults 50 years or older, to Medicare claims. Participants included a propensity score-matched sample of 1016 Health and Retirement Study decedents with at least 1 serious illness and their surviving spouses interviewed between August 2002 and May 2011. We compared the spouses of individuals enrolled in hospice with the spouses of individuals who did not use hospice, performing our analysis between January 30, 2014, and January 16, 2015. Exposures: Hospice enrollment for at least 3 days in the year before death. Main Outcomes And Measures: Spousal depressive symptom scores measured 0 to 2 years after death with the Center for Epidemiologic Studies Depression Scale, which is scored from 0 (no symptoms) to 8 (severe symptoms). Results: Of the 1016 decedents in the matched sample, 305 patients (30.0%) used hospice services for 3 or more days in the year before death. Of the 1016 spouses, 51.9% had more depressive symptoms over time (mean [SD] change, 2.56 [1.65]), with no significant difference related to hospice use.
A minority (28.2%) of spouses of hospice users had improved Center for Epidemiologic Studies Depression Scale scores compared with 21.7% of spouses of decedents who did not use hospice, although the difference was not statistically significant (P = .06). Among the 662 spouses who were the primary caregivers, 27.3% of spouses of hospice users had improved Center for Epidemiologic Studies Depression Scale scores compared with 20.7% of spouses of decedents who did not use hospice; the difference was not statistically significant (P = .10). In multivariate analysis, the odds ratio for the association of hospice enrollment with improved depressive symptoms after the spouse's death was 1.63 (95% CI, 1.00-2.65). Conclusions And Relevance: After bereavement, depression symptoms increased overall for surviving spouses regardless of hospice use. A modest reduction in depressive symptoms was more likely among spouses of hospice users than among spouses of nonhospice users. abstract_id: PUBMED:37155702 Alzheimer's Disease and Related Dementias: Caregiver Perspectives on Hospice Re-Enrollment Following a Hospice Live Discharge. Background: The number of individuals dying of Alzheimer's disease and related dementias (ADRDs) is steadily increasing and they represent the largest group of hospice enrollees. In 2020, 15.4% of hospice patients across the United States were discharged alive from hospice care, with 5.6% decertified due to being "no longer terminally ill." A live discharge from hospice care can disrupt care continuity, increase hospitalizations and emergency room visits, and reduce the quality of life for patients and families. Furthermore, this discontinuity may impede re-enrollment into hospice services and receipt of community bereavement services. Objectives: The aim of this study is to explore the perspectives of caregivers of adults with ADRDs around hospice re-enrollment following a live discharge from hospice. Design: We conducted semistructured interviews of caregivers of adults with ADRDs who experienced a live discharge from hospice (n = 24). Thematic analysis was used to analyze data. Results: Three-quarters of participants (n = 16) would consider re-enrolling their loved one in hospice. However, some believed they would have to wait for a medical crisis (n = 6) to re-enroll, while others (n = 10) questioned the appropriateness of hospice for patients with ADRDs if they cannot remain in hospice care until death. Conclusions: A live discharge for ADRD patients impacts caregivers' decisions on whether they will choose to re-enroll a patient who has been discharged alive from hospice. Further research and support of caregivers through the discharge process are necessary to ensure that patients and their caregivers remain connected to hospice agencies postdischarge. abstract_id: PUBMED:27233144 Is It the Difference a Day Makes? Bereaved Caregivers' Perceptions of Short Hospice Enrollment. Context: Hospice enrollment for less than one month has been considered too late by some caregivers and at the right time for others. Perceptions of the appropriate time for hospice enrollment in cancer are not well understood. Objectives: The objectives of the study were to identify contributing factors of hospice utilization in cancer for ≤7 days, to describe and compare caregivers' perceptions of this as "too late" or at the "right time." 
Methods: Semistructured, in-depth, in-person interviews were conducted with a sample subgroup of 45 bereaved caregivers of people who died from cancer within seven days of hospice enrollment. Interviews were transcribed and entered into Atlas.ti for coding. Data were grouped by participants' perceptions of the enrollment as "right time" or "too late." Results: Overall, the mean length of enrollment was MLOE = 3.77 (SD = 1.8) days and ranged from three hours to seven days. The "right time" group (N = 25 [56%]) had a MLOE = 4.28 (SD = 1.7) days. The "too late" group (N = 20 [44%]) had a MLOE = 3.06 (SD = 1.03) days. The difference was statistically significant (P = 0.029). Precipitating factors included: late-stage diagnosis, continuing treatment, avoidance, inadequate preparation, and systems barriers. The "right time" experience was characterized by: perceived comfort, family needs were met, preparedness for death. The "too late" experience was characterized by perceived suffering, unprepared for death, and death was abrupt. Conclusion: The findings suggest that one more day of hospice care may increase perceived comfort, symptom management, and decreased suffering and signal the need for rapid response protocols. abstract_id: PUBMED:16505131 Length of hospice enrollment and subsequent depression in family caregivers: 13-month follow-up study. Objective: Although more people are using hospice than ever before, the average length of hospice enrollment is decreasing. Little is known about the effect of hospice length of enrollment on surviving family caregivers. The authors examine the association between patient length of hospice enrollment and major depressive disorder (MDD) among the surviving primary family caregivers 13 months after the patient's death. Methods: The authors conducted a three-year longitudinal study of 175 primary family caregivers of patients with terminal cancer who consecutively enrolled in the participating hospice from October 1999 through September 2001. Interviews were conducted with the primary family caregiver when the patient first enrolled with hospice and again 13 months after the patient's death. The authors used multivariate logistic regression models to estimate caregivers' adjusted risk at 13 months postloss for MDD, assessed using the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (SCID). Results: The effect of very short hospice length of enrollment (three days or less) compared with longer lengths of enrollment on caregiver MDD 13 months after their loss was nonsignificant in unadjusted analyses. The adjusted risk of MDD was significantly elevated for caregivers of patients who had very short hospice enrollments (adjusted odds ratio: 8.76, 95%confidence interval: 1.09-70.19) only after adjusting for baseline MDD, caregiver gender, caregiver age, kinship relationship to patient, caregiver education, caregiver chronic conditions, and caregiver burden. The adjustment for caregiver burden resulted in the greatest increase in odds ratio for very short hospice length of enrollment on risk of caregiver MDD 13 months after the loss. Conclusions: This study identifies a potential target group of family caregivers, characterized by hospice length of enrollment and several caregiver features, who might be most in need of mental health interventions. abstract_id: PUBMED:32828932 "Are They Saying It How I'm Saying It?" A Qualitative Study of Language Barriers and Disparities in Hospice Enrollment. 
Context: Language barriers contribute significantly to disparities in end-of-life (EOL) care. However, the mechanisms by which these barriers impact hospice care remains underexamined. Objectives: To gain a nuanced understanding of how language barriers and interpretation contribute to disparities in hospice enrollment and hospice care for patients with limited English proficiency. Methods: Qualitative, individual interviews were conducted with a variety of stakeholders regarding barriers to quality EOL care in diverse patient populations. Interviews were audiorecorded and transcribed verbatim. Data were coded using NVivo 11 (QSR International Pty Ltd., Melbourne, Australia). Three researchers analyzed all data related to language barriers, first individually, then in group meetings, using a grounded theory approach, until they reached consensus regarding themes. Institutional review board approval was obtained. Results: Twenty-two participants included six nurses/certified nursing assistants, five physicians, three administrators, three social workers, three patient caregivers, and two chaplains, self-identifying from a variety of racial/ethnic backgrounds. Three themes emerged regarding language barriers: 1) structural barriers inhibit access to interpreters; 2) variability in accuracy of translation of EOL concepts exacerbates language barriers; and 3) interpreters' style and manner influence communication efficacy during complex conversations about prognosis, goals of care, and hospice. Our theoretical model derived from the data suggests that Theme 1 is foundational and common to other medical settings. However, Theme 2 and particularly Theme 3 appear especially critical for hospice enrollment and care. Conclusion: Language barriers present unique challenges in hospice care because of the nuance and compassion required for delicate goals of care and EOL conversations. Reducing disparities requires addressing each level of this multilayered barrier. abstract_id: PUBMED:23733549 Racial and ethnic differences in hospice enrollment among children with cancer. Background: Hospice is an important provider of end of life care. Adult minorities are less likely to enroll on hospice; little is known regarding the prevalence of pediatric hospice use or the characteristics of its users. Our primary objective was to determine whether race/ethnicity was associated with hospice enrollment in children with cancer. We hypothesized that minority (Latino) race/ethnicity is negatively associated with hospice enrollment in children with cancer. Procedure: In this single-center retrospective cohort study, inclusion criteria were patients who died of cancer or stem cell transplant between January 1, 2006 and December 31, 2010. The primary outcome variable was hospice enrollment and primary predictor was race/ethnicity. Results: Of the 202 patients initially identified, 114 met inclusion criteria, of whom 95 were enrolled on hospice. Patient race/ethnicity was significantly associated with hospice enrollment (P = 0.02), the association remained significant (P = 0.024) after controlling for payor status (P = 0.995), patient diagnosis (P = 0.007), or religion (P = 0.921). Latinos enrolled on hospice significantly more often than patients of other races. Despite initial enrollment on hospice however, 34% of Latinos and 50% of non-Latinos had withdrawn from hospice at the time of death (P = 0.10). Race/ethnicity was not significantly associated with dying on hospice. 
Conclusions: These results indicate that race/ethnicity and diagnosis are likely to play a role in hospice enrollment during childhood. A striking number of patients of all race/ethnicities left hospice prior to death. More studies describing the impact of culture on end of life decision-making and the hospice experience in childhood are warranted. abstract_id: PUBMED:28260997 Children with intellectual disability and hospice utilization. Over 42,000 children die each year in the United States, including those with intellectual disability (ID). Survival is often reduced when children with intellectual disability also suffer from significant motor dysfunction, progressive congenital conditions, and comorbidities. Yet, little is known about hospice care for children with intellectual disability. The purpose of this study was to explore the relationship between intellectual disability and hospice utilization. Additionally, we explored whether intellectual disability combined with motor dysfunction, progressive congenital conditions, and comorbidities influenced pediatric hospice utilization. Using a retrospective cohort design and data from the 2009 to 2010 California Medicaid claims files, we conducted a multivariate analysis of hospice utilization. This study shows that intellectual disability was negatively related to hospice enrollment and length of stay. We also found that when children had both intellectual disability and comorbidities, there was a positive association with enrolling in hospice care. A number of clinical implications can be drawn from the study findings that hospice and palliative care nurses can use to improve their clinical practice of caring for children with ID and their families at end of life. abstract_id: PUBMED:27697564 Continuous Home Care Reduces Hospice Disenrollment and Hospitalization After Hospice Enrollment. Context: Among the four levels of hospice care, continuous home care (CHC) is the most expensive care, and infrequently provided in practice. Objectives: To identify hospice and patient characteristics associated with the use of CHC and to examine the associations between CHC utilization and hospice disenrollment or hospitalization after hospice enrollment. Methods: Using 100% fee-for-service Medicare claims data for beneficiaries aged 66 years or older who died between July and December 2011, we identified the percentage of hospice agencies in which patients used CHC in 2011 and determined hospice and patient characteristics associated with the use of CHC. Using multivariable analyses, we examined the associations between CHC utilization and hospice disenrollment and hospitalization after hospice enrollment, adjusted for hospice and patient characteristics. Results: Only 42.7% of hospices (1533 of 3592 hospices studied) provided CHC to at least one patient during the study period. Patients enrolled with for-profit, larger, and urban-located hospices were more likely to use CHC (P < 0.001). Within these 1533 hospices, only 11.4% of patients used CHC. Patients who were white, had cancer, and had more comorbidities were more likely to use CHC. In multivariable models, compared with patients who did not use CHC, patients who used CHC were less likely to have hospice disenrollment (adjusted odds ratio 0.21; 95% CI 0.19, 0.23) and less likely to be hospitalized after hospice enrollment (adjusted odds ratio 0.37; 95% CI 0.34, 0.40). 
Conclusion: Although a minority of patients uses CHC, such services may be protective against hospice disenrollment and hospitalization after hospice enrollment. abstract_id: PUBMED:31403706 Institutional Special Needs Plans and Hospice Enrollment in Nursing Homes: A National Analysis. Background: Institutional Special Needs Plans (I-SNPs) in nursing homes could impact hospice use by residents with advanced illness. Little is known about their relationship. Objective: To determine whether I-SNP availability has been associated with changes in hospice utilization. Design: Federal data from 2011 and 2013 were extracted from the Minimum Data Set (MDS) and other sources. Multilevel models evaluated I-SNP-, resident-, and facility-related variables as predictors of hospice utilization. Setting: All US nursing homes in 2011 (N = 15 750) and 2013 (N = 15 732). Participants: Nursing home residents enrolled in Medicare or in both Medicare and Medicaid. Measurements: Nursing home and resident data were obtained from Centers for Medicare and Medicaid Services sources: the MDS 3.0, Master Summary Beneficiary File, and Special Needs Plan Comprehensive Report. Results: The mean number of residents per nursing home was 210.9 (SD = 167.1) in 2011 and 217.2 (SD = 171.5) in 2013. The prevalence of I-SNP contracts in nursing homes increased between 2011 and 2013, from 55.2% (N = 8691) to 61.1% (N = 9605), respectively (P < .001). In multivariate analyses, greater hospice enrollment in nursing homes was associated with having at least one I-SNP enrollee per month; year (2013 higher than 2011); smaller facility size; urban (vs rural) setting; location in the Northeast (vs Midwest); lower average resident mental status; higher average resident mobility; younger residents, on average; and facilities with higher proportions of residents with specific diagnoses (cancer, cirrhosis, and dementia). After adjusting for resident and nursing home characteristics, the association between monthly I-SNP presence and hospice enrollment was found only in nursing homes with 50 or greater beds and there was a positive relationship with increasing size. Conclusions: Growth of I-SNPs has been associated with changes in hospice utilization, and the relationship varies by facility size. Studies are needed to clarify the nature of this association and determine whether care may be improved through coordination of these programs. J Am Geriatr Soc 67:2537-2544, 2019. Answer: The length of hospice enrollment does appear to matter when it comes to the risk of depression among surviving caregivers. A study found that caregivers of patients enrolled in hospice for 3 or fewer days were significantly more likely to have major depressive disorder at the follow-up interview than caregivers of those with longer hospice enrollment (24.1% versus 9.0%, respectively), even after adjusting for baseline major depressive disorder and other potential confounders (PUBMED:15569897). This suggests that earlier hospice enrollment may help reduce the risk of major depressive disorder during the first 6-8 months of bereavement. Another study, however, found no significant difference in depressive symptoms related to hospice use when comparing spouses of individuals enrolled in hospice with the spouses of individuals who did not use hospice. Although a minority of spouses of hospice users had improved depression scores compared to spouses of decedents who did not use hospice, the difference was not statistically significant (PUBMED:26009859). 
A 13-month follow-up study likewise found that very short hospice enrollment (three days or less), compared with longer enrollment, was associated with a significantly elevated risk of caregiver major depressive disorder 13 months after the loss, but only after adjusting for baseline MDD, caregiver burden, and other caregiver characteristics (PUBMED:16505131). Taken together, these findings indicate that length of hospice enrollment is associated with the risk of depression among surviving caregivers, although the relationship is complex, varies across studies, and is influenced by other factors, so further research is needed to understand it fully. The studies also suggest that bereavement services and targeted interventions might be most needed for caregivers of patients with short hospice enrollments, to help reduce the risk of depression following the loss of a loved one.
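For readers less familiar with how the odds-ratio estimates cited above relate to the reported prevalence figures, the short Python sketch below converts the proportions from PUBMED:15569897 (24.1% versus 9.0% caregiver MDD) into a crude, unadjusted odds ratio. It is a minimal illustration only: the variable names are mine, and the published estimates came from multivariable logistic regression with confounder adjustment, which this calculation does not reproduce.

```python
# Illustrative only: crude (unadjusted) odds ratio from the reported
# MDD prevalences in PUBMED:15569897. No confounder adjustment is done.
p_short = 0.241   # MDD prevalence, hospice enrollment of 3 days or fewer
p_long = 0.090    # MDD prevalence, longer hospice enrollment

odds_short = p_short / (1 - p_short)
odds_long = p_long / (1 - p_long)
crude_or = odds_short / odds_long

print(f"odds (short enrollment)  = {odds_short:.3f}")
print(f"odds (longer enrollment) = {odds_long:.3f}")
print(f"crude odds ratio         = {crude_or:.2f}")  # roughly 3.2
```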
Instruction: Does depression mediate the relation between fatigue severity and disability in chronic fatigue syndrome sufferers? Abstracts: abstract_id: PUBMED:19073290 Does depression mediate the relation between fatigue severity and disability in chronic fatigue syndrome sufferers? Objective: Chronic fatigue syndrome (CFS) is often associated with significant levels of disability. Although fatigue and depression have been found to be independently related to severity of disability, it is not clear how these three factors are mutually related. The present study sought to address this issue by specifically testing a model of mediation whereby depression was hypothesized to influence relations between fatigue and disability. Methods: Participants included 90 individuals seeking treatment for CFS at a tertiary care facility. Each provided demographic information and completed standardized measures of depression and fatigue severity, as well as a measure of disability, which assessed difficulties in physical, psychosocial, and independence domains. Results: Analyses indicated that depression and fatigue were positively correlated with one another, as well as all three disability domains. Analyses of mediation indicated that depression completely mediated the relation between fatigue and psychosocial disability and partially mediated the relation between fatigue and the other two disability domains. Indirect effects tests indicated that the inclusion of depression in the statistical models was statistically meaningful. Conclusions: These results replicate previous findings that fatigue and depression are independently related to disability in those with CFS. A more complex statistical model, however, suggested that depression severity substantially influenced the strength of the relation between fatigue and disability levels across a range of domains, including complete mediation in areas involving psychosocial functioning. These results may aid in clarifying contemporary conceptualizations of CFS and provide guidance in the identification of appropriate treatment targets. abstract_id: PUBMED:12479992 Psychological correlates of functional status in chronic fatigue syndrome. Background: The present study was designed to test a cognitive model of impairment in chronic fatigue syndrome (CFS) in which disability is a function of severity of fatigue and depressive symptoms, generalized somatic symptom attributions and generalized illness worry. Methods: We compared 45 CFS and 40 multiple sclerosis (MS) outpatients on measures of functional ability, fatigue severity, depressive symptoms, somatic symptom attribution and illness worry. Results: The results confirmed previous findings of lower levels of functional status and greater fatigue among CFS patients compared to a group of patients with MS. Fatigue severity was found to be a significant predictor of physical functioning but not of psychosocial functioning in both groups. In CFS, when level of fatigue was controlled, making more somatic attributions was associated with worse physical functioning, and both illness worry and depressive symptoms were associated with worse psychosocial functioning. Conclusions: Our findings support the role of depression and illness cognitions in disability in CFS sufferers. Different cognitive factors account for physical and psychosocial disability in CFS and MS. 
The SF-36 may be sensitive to symptom attributions, suggesting caution in its interpretation when used with patients with ill-defined medical conditions. abstract_id: PUBMED:22316329 Chronic fatigue syndrome after Giardia enteritis: clinical characteristics, disability and long-term sickness absence. Background: A waterborne outbreak of Giardia lamblia gastroenteritis led to a high prevalence of long-lasting fatigue and abdominal symptoms. The aim was to describe the clinical characteristics, disability and employment loss in a case series of patients with Chronic Fatigue Syndrome (CFS) after the infection. Methods: Patients who reported persistent fatigue, lowered functional capacity and sickness leave or delayed education after a large community outbreak of giardiasis enteritis in the city of Bergen, Norway were evaluated with the established Centers for Disease Control and Prevention criteria for CFS. Fatigue was self-rated by the Fatigue Severity Scale (FSS). Physical and mental health status and functional impairment were measured by the Medical Outcome Severity Scale-short Form-36 (SF-36). The Hospital Anxiety and Depression Scale (HADS) was used to measure co-morbid anxiety and depression. Inability to work or study because of fatigue was determined by sickness absence certified by a doctor. Results: A total of 58 (60%) out of 96 patients with long-lasting post-infectious fatigue after laboratory confirmed giardiasis were diagnosed with CFS. In all, 1262 patients had laboratory confirmed giardiasis. At the time of referral (mean illness duration 2.7 years) 16% reported improvement, 28% reported no change, and 57% reported progressive course with gradual worsening. Mean FSS score was 6.6. A distinctive pattern of impairment was documented with the SF-36. The physical functioning, vitality (energy/fatigue) and social functioning were especially reduced. Long-term sickness absence from studies and work was noted in all patients. Conclusion: After giardiasis enteritis at least 5% developed clinical characteristics and functional impairment comparable to previously described post-infectious fatigue syndrome. abstract_id: PUBMED:12069872 Cognitive functioning in chronic fatigue syndrome and the role of depression, anxiety, and fatigue. Objective: This study was designed to investigate the role of depression, anxiety, and fatigue in Chronic Fatigue Syndrome (CFS) sufferers' objective and subjective cognitive performance. Methods: Twenty-three CFS sufferers and 23 healthy control participants were compared on objective and subjective assessments of cognitive performance. Depression, anxiety, and fatigue were also evaluated. Results: CFS sufferers did not demonstrate any impairment in objective cognitive functioning compared to the control group, and objective performance was not related to their higher levels of depression or their level of fatigue. Depression scores only accounted for a small amount of the variance in CFS sufferers' lower subjective assessment of their cognitive performance compared to control participants. There were no differences between the groups on anxiety scores. Conclusion: The results are discussed in terms of the heterogeneity of the CFS population and the complex interaction of symptomatological factors that characterise CFS. abstract_id: PUBMED:23619200 Depression in paediatric chronic fatigue syndrome. 
Objective: To describe the prevalence of depression in children with chronic fatigue syndrome (CFS)/myalgic encephalomyelitis (ME) and investigate the relationship between depression in CFS/ME and clinical symptoms such as fatigue, disability, pain and school attendance. Design: Cross-sectional survey data using the Hospital Anxiety and Depression Scale (HADS) collected at assessment. Setting: Specialist paediatric CFS/ME service in the South West. Patients: Children aged 12-18 years with CFS/ME. Main Outcome Measure: Depression was defined as scoring >9 on the HADS depression scale. Results: 542 subjects had complete data for the HADS and 29% (156/542) (95% CI 25% to 33%) had depression. In a univariable analysis, female sex, poorer school attendance, and higher levels of fatigue, disability, pain, and anxiety were associated with higher odds of depression. Age of child and duration of illness were not associated with depression. In a multivariable analysis, the factors most strongly associated with depression were disability, with higher scores on the physical function subscale of the 36-item Short Form (SF-36). Conclusions: Depression is commonly comorbid with CFS/ME, much more common than in the general population, and is associated with markers of disease severity. It is important to screen for, identify and treat depression in this population. abstract_id: PUBMED:9226607 The relation of sleep difficulties to fatigue, mood and disability in chronic fatigue syndrome. The relationship of sleep complaints to mood, fatigue, disability, and lifestyle was examined in 69 chronic fatigue syndrome (CFS) patients without psychiatric disorder, 58 CFS patients with psychiatric disorder, 38 psychiatric out-patients with chronic depressive disorders, and 45 healthy controls. The groups were matched for age and gender. There were few differences between the prevalence or nature of sleep complaints of CFS patients with or without current DSM-IIIR depression, anxiety or somatization disorder. CFS patients reported significantly more naps and waking by pain, a similar prevalence of difficulties in maintaining sleep, and significantly less difficulty getting off to sleep compared to depressed patients. Sleep continuity complaints preceded fatigue in only 20% of CFS patients, but there was a strong association between relapse and sleep disturbance. Certain types of sleep disorder were associated with increased disability or fatigue in CFS patients. Disrupted sleep appears to complicate the course of CFS. For the most part, sleep complaints are either attributable to the lifestyle of CFS patients or seem inherent to the underlying condition of CFS. They are generally unrelated to depression or anxiety in CFS. abstract_id: PUBMED:21463167 Self-critical perfectionism, stress generation, and stress sensitivity in patients with chronic fatigue syndrome: relationship with severity of depression. Chronic Fatigue Syndrome (CFS) is a highly disabling disorder that is part of a broader spectrum of chronic pain and fatigue disorders. Although the etiology and pathogenesis of CFS largely remain unclear, there is increasing evidence that CFS shares important pathophysiological disturbances with mood disorders in terms of disturbances in the stress response and the stress system. From a psycho-dynamic perspective, self-critical perfectionism and related personality factors are hypothesized to explain in part impairments of the stress response in both depression and CFS. 
Yet, although there is ample evidence that high levels of self-critical perfectionism are associated with stress generation and increased stress sensitivity in depression, evidence supporting this hypothesis in CFS is currently lacking. This study therefore set out to investigate the relationship between self-critical perfectionism, the active generation of stress, stress sensitivity, and levels of depression in a sample of 57 patients diagnosed with CFS using an ecological momentary assessment approach. Results showed, congruent with theoretical assumptions, that self-critical perfectionism was associated with the generation of daily hassles, which in turn predicted higher levels of depression. Moreover, multilevel analyses showed that self-critical perfectionism was related to increased stress sensitivity in CFS patients over a 14-day period, and that increased stress sensitivity in turn was related to increased levels of depression. The implications of these findings for future research and particularly for the development of psychodynamic treatment approaches of CFS and related conditions are discussed. abstract_id: PUBMED:23759150 The role of neuroticism, perfectionism and depression in chronic fatigue syndrome. A structural equation modeling approach. Objective: Previous studies have reported consistent associations between Neuroticism, maladaptive perfectionism and depression with severity of fatigue in Chronic Fatigue Syndrome (CFS). Depression has been considered a mediator factor between maladaptive perfectionism and fatigue severity, but no studies have explored the role of neuroticism in a comparable theoretical framework. This study aims to examine for the first time, the role of neuroticism, maladaptive perfectionism and depression on the severity of CFS, analyzing several explanation models. Methods: A sample of 229 CFS patients were studied comparing four structural equation models, testing the role of mediation effect of depression severity in the association of Neuroticism and/or Maladaptive perfectionism on fatigue severity. Results: The model considering depression severity as mediator factor between Neuroticism and fatigue severity is the only one of the explored models where all the structural modeling indexes have fitted satisfactorily (Chi square=27.01, p=0.079; RMSE=0.047, CFI=0.994; SRMR=0.033). Neuroticism is associated with CFS by the mediation effect of depression severity. This personality variable constitutes a more consistent factor than maladaptive perfectionism in the conceptualization of CFS severity. abstract_id: PUBMED:21584732 Self-esteem mediates the relationship between maladaptive perfectionism and depression in chronic fatigue syndrome. Patients with chronic fatigue syndrome (CFS) often experience depression which may negatively affect prognosis and treatment outcome. Research has shown that depression in CFS is associated with maladaptive or self-critical perfectionism. However, currently, little is known about factors that may explain this relationship, but studies in nonclinical samples suggest that low self-esteem may be an important mediator of this relationship. The present study therefore examined whether self-esteem mediated the cross-sectional association between maladaptive perfectionism and severity of depression in 192 patients meeting Centres for Disease Control and Prevention criteria for CFS. Patients completed self-report measures of maladaptive perfectionism, self-esteem, depression, and fatigue. 
Regression analyses and more direct tests of indirect effects (i.e., the Sobel test and bootstrapping) were used to test for mediation. Congruent with expectations, we found that self-esteem fully mediated the relationship between maladaptive perfectionism and depression in CFS. Findings from this study suggest that self-esteem may explain the link between maladaptive perfectionism and depression in CFS, which may have important implications for the treatment and prevention of depression in these patients. abstract_id: PUBMED:21414450 Metacognitions and negative emotions as predictors of symptom severity in chronic fatigue syndrome. Objective: Chronic fatigue syndrome (CFS) describes a condition that is primarily characterized by fatigue and flu-like symptoms that are not alleviated by rest. This study investigated the relationship among metacognitions, negative emotions, and symptom severity in CFS. Methods: A total of 96 patients who had received a diagnosis of CFS according to the Oxford Criteria completed a battery of self-report measures that consisted of the Depression Anxiety Stress Scales, the 30-Item Metacognitions Questionnaire, the Chalder Fatigue Questionnaire (CFQ), and the RAND 36-Item Short-Form Health Survey-Physical Functioning. Results: Correlation analyses showed that negative emotions and metacognitions were positively correlated with measures of symptom severity and that metacognitions were a better predictor of symptom severity than anxiety and depression. Hierarchical regression analyses indicated that (1) lack of cognitive confidence predicted both mental and physical factors of the CFQ and physical functioning independently of negative emotions and (2) beliefs about the need to control thoughts predicted the mental factor of the CFQ independently of negative emotions and lack of cognitive confidence. Conclusion: The data support the potential application of the metacognitive model of psychological disorder to understanding CFS. Answer: Yes, depression does mediate the relation between fatigue severity and disability in chronic fatigue syndrome (CFS) sufferers. The study represented in abstract PUBMED:19073290 found that depression completely mediated the relation between fatigue and psychosocial disability and partially mediated the relation between fatigue and the other two disability domains. The inclusion of depression in the statistical models was statistically meaningful, suggesting that depression severity substantially influenced the strength of the relation between fatigue and disability levels across a range of domains. This indicates that in the context of CFS, depression is not just a comorbid condition but plays a significant role in the relationship between the experience of fatigue and the resulting disability.
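Several of the abstracts above (e.g., PUBMED:19073290 and PUBMED:21584732) rely on regression-based mediation analysis with indirect-effect tests such as the Sobel test. As a rough illustration of what such an analysis looks like, the Python sketch below runs Baron-Kenny style regressions and a Sobel test on simulated data; the variable names (fatigue, depression, disability) and effect sizes are hypothetical and are not taken from the cited studies.

```python
# A minimal mediation sketch on simulated data (not the published analyses).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
fatigue = rng.normal(size=n)
depression = 0.6 * fatigue + rng.normal(size=n)                    # path a
disability = 0.5 * depression + 0.2 * fatigue + rng.normal(size=n)  # paths b and c'

# Path a: predictor (fatigue) -> mediator (depression)
model_a = sm.OLS(depression, sm.add_constant(fatigue)).fit()
a, se_a = model_a.params[1], model_a.bse[1]

# Paths b and c': mediator and predictor -> outcome (disability)
X = sm.add_constant(np.column_stack([depression, fatigue]))
model_b = sm.OLS(disability, X).fit()
b, se_b = model_b.params[1], model_b.bse[1]

# Sobel test for the indirect (mediated) effect a*b
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"indirect effect a*b = {a * b:.3f}, Sobel z = {sobel_z:.2f}")
```

In practice, bootstrapped confidence intervals for the indirect effect are usually preferred over the Sobel test, as the bootstrapping reference in PUBMED:21584732 reflects.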
Instruction: Does staff-patient agreement on needs for care predict a better mental health outcome? Abstracts: abstract_id: PUBMED:17537280 Does staff-patient agreement on needs for care predict a better mental health outcome? A 4-year follow-up in a community service. Background: Patients treated in primary care settings report better mental outcomes when they agree with practitioners about the nature of their core presenting problems. However, no study has examined the impact of staff-patient agreement on treatment outcomes in specialist mental health services. We investigated whether a better staff-patient agreement on needs for care predicts more favourable outcome in patients receiving community-based psychiatric care. Method: A 3-month prevalence cohort of 188 patients with the full spectrum of psychiatric conditions was assessed at baseline and at 4 years using the Camberwell Assessment of Need (CAN), both staff (CAN-S) and patient versions (CAN-P), and a set of standardized outcome measures. Baseline staff-patient agreement on needs was included among predictors of outcome. Both clinician-rated (psychopathology, social disability, global functioning) and patient-rated (subjective quality of life and satisfaction with services) outcomes were considered. Results: Controlling for the effect of sociodemographics, service utilization and changes in clinical status, better staff-patient agreement makes a significant additional contribution in predicting treatment outcomes not only on patient-rated but also on clinician-rated measures. Conclusions: Mental health care should be provided on the basis of a negotiation process involving both professionals and service users to ensure effective interventions; every effort should be made by services to implement strategies aiming to increase consensus between staff and patients. abstract_id: PUBMED:21850522 Multiple perspectives on mental health outcome: needs for care and service satisfaction assessed by staff, patients and family members. Purpose: Community-based mental health care requires the involvement of staff, patients, and their family members when both planning intervention programmes and evaluating mental health outcomes. The present study aimed to compare the perceptions of these three groups on two important subjective mental health outcome measures--needs for care and service satisfaction--to identify potential areas of discrepancy. Methods: The sample consisted of patients with a DSM diagnosis of psychosis and attending either outpatient or day centres operating in a community-based care system. Staff, patients and family members were assessed by using the CAN and the VSSS to evaluate, respectively, needs for care and service satisfaction. Kappa statistics were computed to assess agreement in the three groups. Results: Patients identified significantly fewer basic (e.g. daytime activities, food, accommodation) and functioning needs (e.g. self-care, looking after home, etc.) than staff or family members. Only fair levels of agreement were found in the three groups (average kappa was 0.48 for staff and patients, 0.54 for staff and family members, and 0.45 for patients and relatives), with patients and family members showing more areas of discrepancies in both needs and service satisfaction. 
Conclusions: These findings provide further support for the idea that mental health services should routinely involve patients and their relatives when planning and evaluating psychiatric intervention and that this policy is a premise for developing a partnership care model. abstract_id: PUBMED:11098809 The perception of needs for care in staff and patients in community-based mental health services. The South-Verona Outcome Project 3. Objective: The present study aims to assess needs for care rated by patients and staff and their agreement on needs assessment in a community-based mental health service by using the Camberwell Assessment of Need (CAN). Method: The Italian version of the CAN was used in a sample of 247 patient-staff pairs. Results: Patients and staff showed poor agreement on both the presence of a need and on whether need had been met or not. Higher disability predicted a higher number of patient-rated needs, while higher disability, higher number of service contacts and patient unemployment predicted a higher number of staff-rated needs. Lower global functioning predicted higher disagreement in patients and staff ratings of needs. Conclusion: Patients and staff show different perceptions of needs for care and therefore multiple perspectives should be taken into account for planning and providing effective needs-led mental health care. abstract_id: PUBMED:21170779 Assisted living facility administrator and direct care staff views of resident mental health concerns and staff training needs. This community needs assessment surveyed 21 administrators and 75 direct care staff at 9 larger and 12 smaller assisted living facilities (ALFs) regarding perceptions of resident mental health concerns, direct care staff capacity to work with residents with mental illness, and direct care staff training needs. Group differences in these perceptions were also examined. Both administrators and directcare staff indicated that direct care staff would benefit from mental health-related training, and direct care staff perceived themselves as being more comfortable working with residents with mental illness than administrators perceived them to be. Implications for gerontological social work are discussed. abstract_id: PUBMED:33000312 Agreement between patients and mental healthcare providers on unmet care needs in child and adolescent psychiatry. Purpose: In mental health care, patients and their care providers may conceptualize the nature of the disorder and appropriate action in profoundly different ways. This may lead to dropout and lack of compliance with the treatments being provided, in particular in young patients with more severe disorders. This study provides detailed information about patient-provider (dis)agreement regarding the care needs of children and adolescents. Methods: We used the Camberwell Assessment of Need (CANSAS) to assess the met and unmet needs of 244 patients aged between 6 and 18 years. These needs were assessed from the perspectives of both patients and their care providers. Our primary outcome measure was agreement between the patient and care provider on unmet need. By comparing a general outpatient sample (n = 123) with a youth-ACT sample (n = 121), we were able to assess the influence of severity of psychiatric and psychosocial problems on the extent of agreement on patient's unmet care needs. Results: In general, patients reported unmet care needs less often than care providers did. 
Patients and care providers had the lowest extents of agreement on unmet needs with regard to "mental health problems" (k = 0.113) and "information regarding diagnosis/treatment" (k = 0.171). Comparison of the two mental healthcare settings highlighted differences for three-quarters of the unmet care needs that were examined. Agreement was lower in the youth-ACT setting. Conclusions: Clarification of different views on patients' unmet needs may help reduce nonattendance of appointments, noncompliance, or dropout. Routine assessment of patients' and care providers' perceptions of patients' unmet care needs may also help provide information on areas of disagreement. abstract_id: PUBMED:12390218 Mental health training and development needs of community agency staff. Emphasis has long been placed in UK national policy on providing 'seamless' mental health services to meet both the health and social care needs of service users. While attention has been paid to the training required by specialist mental health and primary care staff in order to achieve this, the needs of other community agency staff have received less attention. The present article describes a study designed to identify the training needs of staff working within a broad range of agencies. Focus group discussions were used to explore participants' experiences of mental health problems amongst clients, their confidence in dealing with these, current sources of support and perceived training needs. The results indicate that participants in all agencies routinely encountered a range of problems. Colleagues were the main source of support, followed by line managers, but supervision structures and wider organisational support were lacking in some cases. Joint working with specialist mental health services was almost universally problematic and all groups identified a range of training needs. On the basis of the results, the present authors put forward suggestions as to how these needs might be met. abstract_id: PUBMED:18463938 The needs of mothers with severe mental illness: a comparison of assessments of needs by staff and patients. To identify the concordance in assessments of health and social care needs of pregnant women and mothers with severe mental illness as assessed by patients themselves and their mental healthcare professionals. Thirty-five staff-patient pairs were recruited from inpatient and community services. Staff and patients completed the Camberwell Assessment of Need--Mothers Version. There were significant differences in the total number of needs (p &lt; 0.01) and total number of unmet needs (p &lt; 0.001) reported by staff and patients themselves. There was moderate or better agreement on the presence of an unmet need in eight of 26 life domains. Agreement was low in several domains relevant to being a mother--notably pregnancy care, safety to child/others, and practical and emotional childcare domains. Unmet needs were particularly common in the areas of daytime activities, sleep, psychological distress and violence and abuse. Staff and pregnant women and mothers with severe mental illness moderately agree about health and social care needs but agree less often on which needs are unmet. This highlights the importance of the views of the mothers themselves, as well as assessments by staff. abstract_id: PUBMED:25945122 Mental health and social service needs for mental health service users in Japan: a cross-sectional survey of client- and staff-perceived needs. 
Background: The appropriate utilization of community services by people with mental health difficulties is becoming increasingly important in Japan. The aim of the present study was to describe service needs, as perceived by people with mental health difficulties living in the community and their service providers. We analyzed the difference between two necessity ratings using paired data in order to determine implications related to needs assessment for mental health services. Methods: This cross-sectional study used two self-reported questionnaires, with one questionnaire administered to mental health service users living in the community and another questionnaire to staff members providing services to those users at community service facilities. The study was conducted in psychiatric social rehabilitation facilities for people with mental health difficulties in Japan. The paired client and staff responses rated needs for each kind of mental health and social service independently. The 19 services listed in the questionnaire included counseling and healthcare, housing, renting, daily living, and employment. Overall, 246 individuals with mental health difficulties were asked to participate in this study, and after excluding invalid responses, 188 client-staff response dyads (76.4% of recruited people, 83.6% of people who gave consent) were analyzed in this study. A Wilcoxon matched-pairs signed rank test was used to compare the perceived needs, and weighted and unweighted Kappa statistics were calculated to assess rating agreement within client-staff dyads. Results: Over 75% of participants in our study, who were people with mental health difficulties living in the community, regarded each type of mental health service as "somewhat necessary," or "absolutely necessary" to live in their community. Most clients and staff rated healthcare facilities with 24/7 crisis consultation services as necessary. Agreement between client and staff ratings of perceived needs for services was low (Kappa = .02 to .26). Services regarding housing, renting a place to live, and advocacy had the same tendency in that clients perceived a higher need when compared to staff perceptions (p < .01). Conclusions: It is essential for the service providers to identify the services that each user needs, engage in dialogue, and involve clients in service planning and development. abstract_id: PUBMED:31538555 Perceived need and barriers to adolescent mental health care: agreement between adolescents and their parents. Aims: Mental disorders cause high burden in adolescents, but adolescents often underutilise potentially beneficial treatments. Perceived need for and barriers to care may influence whether adolescents utilise services and which treatments they receive. Adolescents and parents are stakeholders in adolescent mental health care, but their perceptions regarding need for and barriers to care might differ. Understanding patterns of adolescent-parent agreement might help identify gaps in adolescent mental health care. Methods: A nationally representative sample of Australian adolescents aged 13-17 and their parents (N = 2310), recruited between 2013-2014, were asked about perceived need for four types of adolescent mental health care (counselling, medication, information and skill training) and barriers to care. Perceived need was categorised as fully met, partially met, unmet, or no need. Cohen's kappa was used to assess adolescent-parent agreement. 
Multinomial logistic regressions were used to model variables associated with patterns of agreement. Results: Almost half (46.5% (s.e. = 1.21)) of either adolescents or parents reported a perceived need for any type of care. For both groups, perceived need was greatest for counselling and lowest for medication. Identified needs were fully met for a third of adolescents. Adolescent-parent agreement on perceived need was fair (kappa = 0.25 (s.e. = 0.01)), but poor regarding the extent to which needs were met (kappa = -0.10 (s.e. = 0.02)). The lack of parental knowledge about adolescents' feelings was positively associated with adolescent-parent agreement that needs were partially met or unmet and disagreement about perceived need, compared to agreement that needs were fully met (relative risk ratio (RRR) = 1.91 (95% CI = 1.19-3.04) to RRR = 4.69 (95% CI = 2.38-9.28)). Having a probable disorder was positively associated with adolescent-parent agreement that needs were partially met or unmet (RRR = 2.86 (95% CI = 1.46-5.61)), and negatively with adolescent-parent disagreement on perceived need (RRR = 0.50 (95% CI = 0.30-0.82)). Adolescents reported most frequently attitudinal barriers to care (e.g. self-reliance: 55.1% (s.e. = 2.39)); parents most frequently reported that their child refused help (38.7% (s.e. = 2.69)). Adolescent-parent agreement was poor for attitudinal (kappa = -0.03 (s.e. = 0.06)) and slight for structural barriers (kappa = 0.02 (s.e. = 0.09)). Conclusions: There are gaps in the extent to which adolescent mental health care is meeting the needs of adolescents and their parents. It seems important to align adolescents' and parents' needs at the beginning and throughout treatment and to improve communication between adolescents and their parents. Both might provide opportunities to increase the likelihood that needs will be fully met. Campaigns directed towards adolescents and parents need to address different barriers to care. For adolescents, attitudinal barriers such as stigma and mental health literacy require attention. abstract_id: PUBMED:35915459 "The team needs to feel cared for": staff perceptions of compassionate care, aids and barriers in adolescent mental health wards. Background: Compassion is vital in healthcare. Current understandings of the nature of compassionate care, its aids and barriers, are more theoretically developed than grounded in staff experience. This study explores staff perceptions of compassionate care in child and adolescent mental health wards. Methods: Three focus groups were conducted with a total of 35 staff from adolescent mental health wards (10-12 people in each group), on the nature of compassionate care, aids and barriers. Transcripts were analysed using thematic analysis. A follow-up survey with 36 workers from other UK child and adolescent mental health wards was completed and means and standard deviations of responses were analysed to confirm wider resonance of themes. Results: Elements of compassionate care fell into six themes relating to individual, team and organisational factors: emotional connection, sense of being valued, attention to the whole person, understanding, good communication, and practical help/resources. Aids and barriers mirrored each other, and showed that what staff think is key to the nature of compassionate care for patients is also what they feel they need to receive to be able to show compassionate care. 
Conclusions: This study suggests that staff need the same elements of compassion as those which they seek to provide. A greater emphasis needs to be placed on providing staff with individual, team and organisational level resources which help them to feel compassionately held within the interconnected systems in which they work, in order to be able to continue to provide high level compassionate care. Staff need to be nourished, valued and compassionately cared for in order to be able to care compassionately for the patients they look after. Answer: Yes, staff-patient agreement on needs for care does predict a better mental health outcome. A study that investigated the impact of staff-patient agreement on treatment outcomes in patients receiving community-based psychiatric care found that better staff-patient agreement on needs made a significant additional contribution in predicting treatment outcomes on both patient-rated and clinician-rated measures, even after controlling for the effect of sociodemographics, service utilization, and changes in clinical status (PUBMED:17537280). This suggests that mental health care should be provided based on a negotiation process involving both professionals and service users to ensure effective interventions, and strategies should be implemented to increase consensus between staff and patients.
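Several of the abstracts above report Cohen's kappa as the measure of staff-patient (or adolescent-parent) agreement on needs. As a quick illustration of how that statistic is computed, the following Python sketch implements unweighted kappa on made-up paired ratings; the category codes and data are hypothetical and not drawn from the cited studies.

```python
# Unweighted Cohen's kappa on hypothetical paired need ratings.
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # Chance agreement expected from the two raters' marginal frequencies
    p_expected = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# 0 = no need, 1 = met need, 2 = unmet need (hypothetical coding)
staff_ratings = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
patient_ratings = [0, 1, 1, 1, 0, 0, 1, 2, 0, 2]
print(f"kappa = {cohens_kappa(staff_ratings, patient_ratings):.2f}")
```

Values around 0.4-0.5, as reported in PUBMED:21850522, indicate only fair to moderate agreement even when raw percentage agreement looks reasonably high, which is why kappa corrects for chance agreement.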
Instruction: Is aneusomy of chromosome 9 alone a valid biomarker for urinary bladder cancer screening? Abstracts: abstract_id: PUBMED:16619518 Is aneusomy of chromosome 9 alone a valid biomarker for urinary bladder cancer screening? Background: Detection of genetically-changed tumor cells in the urine is one of the new approaches for the screening of bladder carcinomas. In a previous study, numerical aberrations of chromosome 9 were found in 85.18% of bladder tumors studied by the fluorescence in situ hybridization (FISH) technique. The purpose of the present study was to investigate whether chromosome 9 aneusomy alone is a valid, cost effective, biomarker for bladder cancer screening. Materials And Methods: Twenty-seven voided urine specimens obtained from 22 bladder cancer patients, either at initial diagnosis or at the follow-up, were analyzed by the FISH technique with the centromeric probe specific for chromosome 9. Results: In all except 2 out of the 13 specimens with a histological confirmation of cancer, FISH analysis showed aneusomy 9 (sensitivity 84.61%). Among 6 cases with a negative cystoscopy but a positive FISH analysis, 3 recurred within the following 2 months, while 2 no-recurrent patients continued to show positive FISH findings after 6 months. One patient was considered to be false-positive. Four cases with a negative cystoscopy showed disomy 9 and 2 of them recurred. Conclusion: Aneusomy 9 has a high sensitivity (84.61%) for the detection of bladder cancer. Patients with a negative cystoscopy but with aneusomy 9 should be kept under close clinical surveillance for potential disease recurrence. However, negative FISH results might not be a negative predictor for disease recurrence. Our results encourage further studies with a large number of patients and a long-term follow-up with concurrent FISH analysis. abstract_id: PUBMED:12368185 Identification of chromosome 9 alterations and p53 accumulation in isolated carcinoma in situ of the urinary bladder versus carcinoma in situ associated with carcinoma. Carcinoma in situ (CIS) of the urinary bladder is a flat, aggressive lesion and may be the most common precursor of invasive bladder cancer. Although chromosome 9 alterations are among the earliest and most prevalent genetic alterations in bladder cancer, discrepancy exists about the frequency of chromosome 9 losses in CIS. We analyzed 22 patients with CIS of the bladder (15 patients with isolated CIS, 7 patients combined with synchronous pTa or pT1 carcinomas) for gains and losses of chromosome (peri)centromere loci 1q12, 7p11-q11, 9p11-q12, and 9p21 harboring the INK4A/ARF locus (p16(INK4A)/p14(ARF)) and INK4B (p15(INK4B)) by multiple-target fluorescence in situ hybridization, and for p53 protein accumulation by immunohistochemistry. In 15 of 20 (75%) CIS lesions analyzed p53 overexpression was detected, whereas aneusomy for chromosomes 1 and 7 was identified in 20 of 22 (91%) CIS. In 13 of 22 (60%) CIS cases analyzed, 12 of which were not associated with a synchronous pTa or pT1 carcinoma, no numerical losses for chromosome 9 (p11-q12 and 9p21) were detected as compared with chromosomes 1 and 7. Furthermore 6 of 12 (50%) patients showed a metachronous invasive carcinoma within 2 years. In the remaining nine biopsies CIS lesions (40%) were recognized that showed losses of chromosome 9p11-q12 and 9p21, six of these were associated with a synchronous pTa or pT1 carcinoma. Three of these carcinomas were pTa and exhibited loss of 9q12 as well as a homozygous deletion of 9p21. 
The others were invasive carcinomas in which CIS lesions were also recognized that showed no numerical loss of chromosome 9, but did show an accumulation of p53. In conclusion our data demonstrate that predominantly isolated CIS lesions contained cells with no specific loss of chromosome 9, as opposed to CIS lesions with synchronous carcinomas that showed evidence of chromosome 9 loss. Furthermore our data strengthen the proposition that p53 mutations (p53 overexpression) precede loss of chromosomes 9 and 9p21 in CIS as precursor for invasive bladder cancer, as opposed to noninvasive carcinomas where chromosome 9 (9p11-q12) losses are early and frequently combined with homozygous deletions of 9p21. abstract_id: PUBMED:11410506 Alterations of the 9p21 and 9q33 chromosomal bands in clinical bladder cancer specimens by fluorescence in situ hybridization. Purpose: To better define cytogenetic mechanisms of CDKN2 loss at 9p21 and of DBCCR1 loss at 9q33 in bladder cancer, and to determine correlation with p53 and pRb. Experimental Design: Two-color fluorescence in situ hybridization (FISH) using a chromosome 9 centromeric probe and locus-specific probes was performed. p53 and pRb were assessed by immunohistochemistry. Results: Thirty-seven of fifty-five (67%) samples exhibited 9p21 loss, and 32 of 44 (73%) exhibited 9q33 loss. Twelve of 43 informative samples exhibited only 9p21 loss (5 cases) or only 9q33 loss (7 cases). Homozygous deletions were noted at 9p21 and 9q33 in 31 and 14% of cases, respectively, but 9q33 homozygous deletions were generally observed in only a minor clone. There was no correlation of any chromosome 9 loss with stage, but stage did correlate with chromosome 9 ploidy status; aneusomy 9 was observed in 33% of T(a) lesions and 71% of more advanced cases (P = 0.01). Aneusomy 9 was loosely correlated with p53 abnormalities (P = 0.07), but no correlation between any chromosome 9 and pRb abnormalities was discerned. Conclusions: This study strengthens the proposition that chromosome 9 losses occur early in bladder oncogenesis and before p53 alterations or development of aneusomy. The correlation of aneusomy 9 with p53 abnormalities is consistent with the presumed role of p53 in maintaining cytogenetic stability. Although the observed homozygous deletions strengthen the hypotheses that CDKN2 and DBCCR1 are important tumor suppressor genes, there is no evidence that either is a more critical or an earlier target for oncogenesis. abstract_id: PUBMED:8443801 Preliminary mapping of the deleted region of chromosome 9 in bladder cancer. Inactivation of a suppressor gene by deletion of chromosome 9 is a candidate initiating event in bladder carcinogenesis. We have used 13 polymorphic markers spanning the length of chromosome 9 in order to map the region of deletion in human bladder carcinomas. In the majority of tumors loss of heterozygosity was found at all informative sites along the chromosome, indicating deletion of the entire chromosome. Nine tumors had selective deletions of chromosome 9. Mapping of the deleted region in these tumors suggests that the target gene is located between D9S22 at 9q22 and D9S18 at 9p12-13. abstract_id: PUBMED:8358736 Role of chromosome 9 in human bladder cancer. The tumors of 20 patients with multifocal primary transitional cell carcinoma of the bladder or lymph node metastases were examined for molecular genetic defects which we have previously found to be present in > 50% of invasive tumors.
These included loss of heterozygosity (LOH) of chromosome 9, which occurs in superficial as well as invasive bladder tumors, and LOH of chromosome 17p and p53 mutations, which are commonly found only in invasive tumors. Analysis of multiple or recurrent primary tumors in 7 patients for these markers was generally consistent with recently published data that the tumors are monoclonal in origin and that p53 mutations occur as a late event in the generation of invasive bladder cancers. Comparison of the primary tumors and metastases to regional lymph nodes in 14 patients demonstrated a complete concordance between the molecular genetic defects present, showing that LOH of chromosomes 9 and 17p and p53 mutations occurred in the primary tumors before metastasis. Because of the importance of chromosome 9 in bladder cancer, we mapped the location of a putative tumor suppressor gene by restriction fragment length polymorphism analysis of 123 cases obtained in this and earlier studies. Most of the tumors showed LOH for more than one marker on chromosome 9. Results of mapping of 4 tumors with partial deletion of chromosome 9 suggests that the tumor suppressor gene is located between 9p12 and 9q34.1. abstract_id: PUBMED:9426683 Loss of heterozygosity on chromosome 9 and loss of chromosome 9 copy number are separate events in the pathogenesis of transitional cell carcinoma of the bladder. The most frequent genetic aberration found in transitional cell carcinoma (TCC) of the bladder involves chromosome 9. Loss of heterozygosity (LOH) analyses show deletions of both chromosome 9p and 9q, while in situ hybridization studies suggest a significant percentage of tumours with monosomy 9. To investigate the types of chromosome 9 losses that occur in bladder cancer, we have studied 40 tumours with different techniques such as in situ hybridization (ISH), flow cytometry and LOH analysis. LOH for one or more markers was found in 43% of the tumours. This percentage does not differ from previous reports. With ISH, complete monosomy for chromosome 9 was observed in only 1 of the 40 tumours. Four other tumours had monosomic subpopulations, representing 23-40% of the cells. In 18 cases, an underrepresentation of the chromosome 9 centromere relative to chromosome 6 or to the ploidy of the tumour was observed, including the cases with monosomy. In 5 of these 18 cases, the relative loss could not be confirmed by LOH. In addition, when LOH and a relative underrepresentation were observed in the same tumour, the extent of LOH as measured by the intensity of allele loss, was often not related to the extent of underrepresentation. We therefore conclude that complete monosomy of chromosome 9 is rare in TCCs of the bladder and that a relative loss of centromere signal may not be related to a loss compatible with inactivation of a tumour suppressor gene. LOH was found in TCCs of all stages and grades. Our results suggest that loss of tumour suppressor genes on chromosome 9 is an early event in the pathogenesis of bladder cancer. abstract_id: PUBMED:8187066 Chromosome 9 allelic losses and microsatellite alterations in human bladder tumors. Chromosome 9 allelic losses have been reported as a frequent and early event occurring in bladder cancer. It has been postulated that a candidate tumor suppressor gene may reside on this chromosome, alterations of which may lead to the development of a subset of superficial bladder tumors. 
More recently, the involvement of two different regions harboring suppressor loci, one on each of both chromosome 9 arms, has been proposed. We undertook the present study with the objectives of better defining the deleted regions of chromosome 9 in bladder tumors, as well as evaluating the frequency of microsatellite alterations affecting certain loci on this chromosome in urothelial neoplasia. Seventy-three primary bladder tumors were analyzed using a set of highly polymorphic markers, and results were correlated with pathological parameters associated with poor clinical outcome. We observed that, overall, 77% of the tumors studied showed either loss of heterozygosity for one or more chromosome 9 markers and/or microsatellite abnormalities at chromosome 9 loci. Detailed analyses showed that two regions, one on 9p at the interferon cluster, and the other on 9q associated with the q34.1-2 bands, had the highest frequencies of allelic losses. Furthermore, Ta lesions appeared to present mainly with 9q abnormalities, while T1 tumors displayed a mixture of aberrant 9p and 9q genotypes. These observations indicate that loss of heterozygosity of 9p may be associated with the development of superficial tumors with a more aggressive biological behavior or, alternatively, they may be related to early disease progression. In addition, microsatellite alterations were documented in over 40% of amplified cases. Taken together, these data suggest that two different tumor suppressor gene loci on chromosome 9 are involved as tumorigenic events in bladder cancer and that chromosome 9 microsatellite alterations are frequent events occurring in urothelial neoplasia. abstract_id: PUBMED:9149891 Cigarette smoking and chromosome 9 alterations in bladder cancer. Epidemiological studies suggest that bladder cancer may be caused by carcinogens in tobacco and certain occupational exposures. Molecular studies have shown that chromosome 9 alterations and TP53 mutations are the most frequent events in bladder cancer. To date, the relationships between epidemiological risk factors and genetic alterations have not been fully explored in bladder cancer. The purpose of this study was to explore the association between smoking and chromosome 9 aberrations in bladder cancer cases. Seventy-three patients with bladder cancer at Memorial Sloan-Kettering Cancer Center were evaluated for smoking history, occupational history, and chromosome 9 alterations. The epidemiological data were abstracted from medical charts. Patients' tumor tissues were analyzed using RFLP and microsatellite polymorphism assays for detection of chromosome 9 alterations. Elevated odds ratios (ORs) were found for chromosome 9 alterations in smokers compared to those in nonsmokers (OR = 4.2; 95% confidence interval, 1.02-17.0) after controlling for age, sex, race, occupational history, and stage of disease. The ORs were 3.6 for those smoking ≤ 20 cigarettes per day and 5.8 for those smoking > 20 cigarettes per day. No association was found between occupational history and chromosome 9 alterations. This study supplies evidence suggestive of the link between smoking and chromosome 9 alterations in the etiology of bladder cancer and indicates that potential tumor suppressor genes on chromosome 9 may be involved in smoking-related bladder carcinogenesis. abstract_id: PUBMED:8221642 Evidence for two bladder cancer suppressor loci on human chromosome 9.
Most carcinomas of the bladder show loss of heterozygosity for markers on human chromosome 9, which suggests that one or more tumor suppressor genes are located on this chromosome. Several observations suggest that such alterations are an important early step in tumorigenesis. We analyzed the pattern of allelic loss in 46 primary carcinomas of the bladder using 19 polymorphic markers from chromosome 9. While most tumors with allelic loss showed loss of heterozygosity for all informative markers that were tested, six tumors demonstrated only partial loss of chromosome 9. Two tumors with partial loss contained deletions that predominantly involved the q arm, as shown by previous studies. The other four tumors contained deletions that predominantly or exclusively involved the p arm, with a common region of loss between D9S161 (9p21) and the telomere. The results show that there is no single common region of loss on chromosome 9 and identify two distinct regions of loss that may contain bladder tumor suppressor loci. abstract_id: PUBMED:16471327 C-erb-2 gene amplification and chromosomal anomalies in bladder cancer: preliminary results. Studies recently suggested that different genetic factors are involved in the development and progression of bladder cancer. In this study, 30 consecutive patients affected by bladder neoplasm were evaluated in order to analyze the frequency of c-erb-2 gene amplification and chromosome 7, 9, 17 aneusomy using fluorescence in situ hybridization (FISH) technique. C-erb-2 gene amplification, chromosome 17 gain and aneusomy were respectively observed in 3.7% (1/27), in 47% (12/27) and in 74% (20/27) of examined tumors. Moreover, chromosome 7 and 9 aneusomy were detected in 74% (20/27) and in 72% (16/27) of specimens. A statistically significant correlation was observed between chromosome 17 aneusomy and tumor stage and grade (r: 0.642, p = 0.0001; r: 0.385, p = 0.04, respectively). In conclusion, we observed a low incidence of C-erb-2 gene amplification, while chromosome 17 aneusomy was confirmed as a marker of advanced and aggressive bladder cancer. Answer: Aneusomy of chromosome 9 has been studied as a potential biomarker for urinary bladder cancer screening. In one study, aneusomy 9 demonstrated a high sensitivity (84.61%) for the detection of bladder cancer, suggesting that it could be a valid biomarker for screening purposes. The study also recommended that patients with a negative cystoscopy but with aneusomy 9 should be kept under close clinical surveillance for potential disease recurrence, although negative FISH results might not be a negative predictor for disease recurrence (PUBMED:16619518). However, the validity of aneusomy 9 as a standalone biomarker for bladder cancer screening is not without contention. Other studies have shown that chromosome 9 alterations are common in bladder cancer, but the specific type of alteration can vary. For example, one study found that isolated carcinoma in situ (CIS) lesions often contained cells with no specific loss of chromosome 9, as opposed to CIS lesions with synchronous carcinomas that showed evidence of chromosome 9 loss (PUBMED:12368185). Another study found that chromosome 9 losses occur early in bladder oncogenesis and before p53 alterations or development of aneusomy, with aneusomy 9 loosely correlated with p53 abnormalities (PUBMED:11410506). 
Furthermore, research has indicated that loss of heterozygosity (LOH) on chromosome 9 and loss of chromosome 9 copy number are separate events in the pathogenesis of transitional cell carcinoma of the bladder (PUBMED:9426683). This suggests that aneusomy of chromosome 9 alone may not capture the full spectrum of genetic alterations associated with bladder cancer. In conclusion, while aneusomy of chromosome 9 has shown promise as a biomarker for bladder cancer detection, its validity as a standalone marker for screening is uncertain. It may be more effective when used in conjunction with other diagnostic methods or biomarkers. Further studies with larger patient cohorts and long-term follow-up are encouraged to better establish the role of chromosome 9 aneusomy in bladder cancer screening (PUBMED:16619518).
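The sensitivity figure quoted in this answer follows directly from the counts reported in PUBMED:16619518 (11 of 13 histologically confirmed cancers were FISH-positive for aneusomy 9). The short Python sketch below is purely illustrative and is not code from any cited study; it simply re-derives that number from the abstract's counts.

# Illustrative re-derivation of the sensitivity reported in PUBMED:16619518.
true_positives = 11   # FISH showed aneusomy 9 and cancer was histologically confirmed
false_negatives = 2   # FISH missed these histologically confirmed cancers

sensitivity = true_positives / (true_positives + false_negatives)
print(f"sensitivity = {sensitivity:.2%}")  # 84.62%, matching the abstract's 84.61% up to rounding

Specificity cannot be derived in the same way here, because the abstracts do not give a complete count of cancer-free specimens.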
Instruction: Can academic radiology departments become more efficient and cost less? Abstracts: abstract_id: PUBMED:9807566 Can academic radiology departments become more efficient and cost less? Purpose: To determine how successful two large academic radiology departments have been in responding to market-driven pressures to reduce costs and improve productivity by downsizing their technical and support staffs while maintaining or increasing volume. Materials And Methods: A longitudinal study was performed in which benchmarking techniques were used to assess the changes in cost and productivity of the two departments for 5 years (fiscal years 1992-1996). Cost per relative value unit and relative value units per full-time equivalent employee were tracked. Results: Substantial cost reduction and productivity enhancement were realized as linear improvements in two key metrics, namely, cost per relative value unit (decline of 19.0% [decline of $7.60 on a base year cost of $40.00] to 28.8% [$12.18 of $42.21]; P &lt; or = .001) and relative value unit per full-time equivalent employee (increase of 46.0% [increase of 759.55 units over a base year productivity of 1,651.45 units] to 55.8% [968.28 of 1,733.97 units]; P &lt; .001), during the 5 years of study. Conclusion: Academic radiology departments have proved that they can "do more with less" over a sustained period. abstract_id: PUBMED:35995692 Using an Annual Diversity, Equity and Inclusion Dashboard to Accelerate Change in Academic Radiology Departments. Despite widespread interest in creating a more equitable and inclusive culture, a lack of workforce diversity persists in Radiology, in part due to a lack of universal and longitudinal metrics across institutions. In an attempt to establish benchmarks, a subset of the Society of Chairs of Academic Radiology Departments (SCARD) Diversity, Equity and Inclusion (DEI) Committee volunteered to design a DEI dashboard as a potential tool for academic radiology programs to use to document and track their progress. This freely-available, modular dashboard includes suggested (plus optional department-defined) DEI activities/parameters and suggested assessment criteria across three domains: faculty, residents &amp; fellows, and medical students; it can be completed, in whole or in part, by departmental leaders annually. The suggested metrics and their associated rubrics were derived from the collective experiences of the five working group members, all of whom are chairs of academic radiology departments. The resulting dashboard was unanimously approved by the remaining 14 DEI committee members and endorsed by the SCARD board of directors. abstract_id: PUBMED:24303644 The cost of doing business in academic radiology departments. This study identifies the major sources of overhead fees/costs and subsidies in academic radiology departments (ARDs) in the US and determines the differences between them based on geographic location or the size of their affiliated hospital. ARDs in the Northeast had the highest level of financial support from their affiliated hospitals when compared to those in the South/Southwest; however, a greater number of Midwest ARDs receive high levels of funding for teaching from their medical schools when compared to the northeast. Significantly fewer ARDs affiliated with hospitals of less than 200 beds receive subsidies for their activities when compared to those affiliated with larger hospitals. 
Differences in levels of overhead costs/subsidies available to ARDs are associated with either geographic location or the size of the affiliated hospital. The reasons for these differences may be related to a variety of legal, contractual, or fiscal factors. Investigation of existing geographic and affiliate size fiscal differences and their causes by ARDs may be of benefit. abstract_id: PUBMED:22700755 Trends in spinal pain management injections in academic radiology departments. Background And Purpose: There is a paucity of information present in the current literature with regard to the role of SPMI performance in academic radiology centers. Our aim was to evaluate the current practice patterns for the performance of SPMIs in academic radiology departments. Materials And Methods: A survey of 186 academic radiology departments in the United States was conducted between March 2009 and May 2009. The survey included questions on departmental demographics, recent trends in departmental SPMI performance, type of physicians who refer to radiology for SPMI performance, types of SPMIs offered, the fraction of total institutional SPMI volume performed by radiologists, and the current state of resident and fellow SPMI training proficiency. Results: Forty-five of the 186 (21.4%) surveys were completed and returned. Twenty-eight of the 45 responding departments stated that they performed SPMIs; the other 17 stated that they did not. Among the 28 responding departments that perform SPMIs, 6 (21.4%), 5 (17.9%), and 8 (28.6%) stated that the number of departmental SPMIs had, respectively, increased, decreased, or remained stable during the past 5 years. SPMI referrals to radiology were made by orthopedic surgeons, neurologic surgeons, neurologists, psychiatrists, anesthesiologists, and internal medicine physicians. CESIs, SNRBs, facet injections, and synovial cyst aspirations are the most frequently performed injections. Fellows and residents become proficient in 88.5% and 51.9%, respectively, of SPMI-performing departments. Most departments perform <50% of the SPMI volume of their respective institutions. Conclusions: Most responding academic radiology departments perform SPMIs. Most fellows and just more than half of residents at SPMI-performing departments achieve SPMI proficiency. For the most part, the number of SPMIs performed in responding departments has been stable during the past 5 years. abstract_id: PUBMED:17411658 Enhancing research in academic radiology departments: recommendations of the 2003 Consensus Conference. Opportunities for funded radiologic research are greater than ever, and the amount of federal funding coming to academic radiology departments is increasing. Even so, many medical school-based radiology departments have little or no research funding. Accordingly, a consensus panel was convened to discuss ways to enhance research productivity and broaden the base of research strength in as many academic radiology departments as possible. The consensus panel included radiologists who have leadership roles in some of the most well-funded research departments, radiologists who direct other funded research programs, and radiologists with related expertise. The goals of the consensus panel were to identify the attributes associated with successful research programs and to develop an action plan for radiology research on the basis of these characteristics. abstract_id: PUBMED:2345093 Research and research training in academic radiology departments. A survey of department chairmen.
We surveyed 121 chairmen of academic radiology departments to assess how these departments select and educate their residents and fellows in research. Eighty-six chairmen responded (71%). The majority of their programs select at least some of their trainees for their potential as researchers and nearly all encourage trainees to perform research. The more the selection process focuses on research, the greater the percentage of residents and fellows that participate in research during training. Nonetheless, only about one-third of residents and half of the fellows perform and publish research. Only half the programs offer formal research seminars and few trainees opt for additional research training. These results may relate to the relatively small percentage of faculty performing prospective clinical and laboratory research. These findings are disappointing in the light of previous results suggesting that performing research, publication, and formal research education during training correlate highly with the development of successful research careers. Chairmen could increase the likelihood of trainees choosing research careers and being successful in publishing research by providing early exposure to research experiences and providing formalized research training. abstract_id: PUBMED:15470808 Enhancing research in academic radiology departments: recommendations of the 2003 Consensus Conference. Opportunities for funded radiologic research are greater than ever, and the amount of federal funding coming to academic radiology departments is increasing. Even so, many medical school-based radiology departments have little or no research funding. Accordingly, a consensus panel was convened to discuss ways to enhance research productivity and broaden the base of research strength in as many academic radiology departments as possible. The consensus panel included radiologists who have leadership roles in some of the best-funded research departments, radiologists who direct other funded research programs, and radiologists with related expertise. The goals of the consensus panel were to identify the attributes associated with successful research programs and to develop an action plan for radiology research based on these characteristics. abstract_id: PUBMED:23265973 Distribution of scholarly publications among academic radiology departments. Purpose: The aim of this study was to determine whether the distribution of publications among academic radiology departments in the United States is Gaussian (ie, the bell curve) or Paretian. Methods: The search affiliation feature of the PubMed database was used to search for publications in 3 general radiology journals with high Impact Factors, originating at radiology departments in the United States affiliated with residency training programs. The distribution of the number of publications among departments was examined using χ² test statistics to determine whether it followed a Pareto or a Gaussian distribution more closely. Results: A total of 14,219 publications contributed since 1987 by faculty members in 163 departments with residency programs were available for assessment. The data acquired were more consistent with a Pareto (χ² = 80.4) than a Gaussian (χ² = 659.5) distribution. The mean number of publications for departments was 79.9 ± 146 (range, 0-943). The median number of publications was 16.5.
The majority (>50%) of major radiology publications from academic departments with residency programs originated in <10% (n = 15 of 178) of such departments. Fifteen programs likewise produced no publications in the surveyed journals. Conclusion: The number of publications in journals with high Impact Factors published by academic radiology departments more closely fits a Pareto rather than a normal distribution. abstract_id: PUBMED:15286311 Enhancing research in academic radiology departments: recommendations of the 2003 Consensus Conference. Opportunities for funded radiologic research are greater than ever, and the amount of federal funding coming to academic radiology departments is increasing. Even so, many medical school-based radiology departments have little or no research funding. Accordingly, a consensus panel was convened to discuss ways to enhance research productivity and broaden the base of research strength in as many academic radiology departments as possible. The consensus panel included radiologists who have leadership roles in some of the best-funded research departments, radiologists who direct other funded research programs, and radiologists with related expertise. The goals of the consensus panel were to identify the attributes associated with successful research programs and to develop an action plan for radiology research based on these characteristics. abstract_id: PUBMED:28118038 Quality metrics currently used in academic radiology departments: results of the QUALMET survey. Objective: We present the results of the 2015 quality metrics (QUALMET) survey, which was designed to assess the commonalities and variability of selected quality and productivity metrics currently employed by a large sample of academic radiology departments representing all regions in the USA. Methods: The survey of key radiology metrics was distributed in March-April of 2015 via personal e-mail to 112 academic radiology departments. Results: There was a 34.8% institutional response rate. We found that most academic departments of radiology commonly utilize metrics of hand hygiene, report turnaround time (RTAT), relative value unit (RVU) productivity, patient satisfaction and participation in peer review. RTAT targets were found to vary widely. The implementation of radiology peer review and the variety of ways in which peer review results are used within academic radiology departments, the use of clinical decision support tools and requirements for radiologist participation in Maintenance of Certification also varied. Policies for hand hygiene and critical results communication were very similar across all institutions reporting, and most departments utilized some form of missed case/difficult case conference as part of their quality and safety programme, as well as some form of periodic radiologist performance reviews. Conclusion: Results of the QUALMET survey suggest many similarities in tracking and utilization of the selected quality and productivity metrics included in our survey. Use of quality indicators is not a fully standardized process among academic radiology departments. Advances in knowledge: This article examines the current quality and productivity metrics in academic radiology. Answer: Yes, academic radiology departments can become more efficient and cost less. A longitudinal study assessing changes in cost and productivity over 5 years in two large academic radiology departments found substantial cost reduction and productivity enhancement.
The study tracked two key metrics: cost per relative value unit, which declined by 19.0% to 28.8%, and relative value units per full-time equivalent employee, which increased by 46.0% to 55.8% (PUBMED:9807566). This indicates that academic radiology departments have successfully managed to "do more with less" over a sustained period, proving that increased efficiency and reduced costs are achievable in this setting.
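The percentage changes cited from PUBMED:9807566 can be reproduced from the dollar and RVU figures given in that abstract (a decline of $7.60 on a base-year cost of $40.00 and of $12.18 on $42.21 for cost per RVU; gains of 759.55 on 1,651.45 and 968.28 on 1,733.97 RVUs per full-time equivalent). The following Python sketch is illustrative only; it re-derives those percentages to within rounding and is not code from the study.

# Re-deriving the percentage changes reported in PUBMED:9807566 (illustrative only).
changes = {
    "cost per RVU (low end)":  (-7.60, 40.00),     # (change, base-year value)
    "cost per RVU (high end)": (-12.18, 42.21),
    "RVU per FTE (low end)":   (759.55, 1651.45),
    "RVU per FTE (high end)":  (968.28, 1733.97),
}

for label, (delta, base) in changes.items():
    print(f"{label}: {100 * delta / base:+.1f}%")
# Prints roughly -19.0%, -28.9%, +46.0%, +55.8%; the abstract rounds the second figure to 28.8%.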
Instruction: Is barium trapping in rectoceles significant? Abstracts: abstract_id: PUBMED:7607041 Is barium trapping in rectoceles significant? Purpose: This study was designed to determine whether rectocele size and contrast retention are significant. Methods: Evacuation proctography and simultaneous intrarectal pressure measurements from a small, noncompliant balloon catheter were performed in three matched groups of 11 constipated female patients with rectoceles, rectoceles and contrast trapping of > 10 percent, and no rectocele. Computerized image analysis was used to measure rectocele area and evacuation. Results: In the two groups with rectoceles, there was no significant difference in rectocele area or width pre-evacuation. The anorectal angle, pelvic floor descent, maximum anal canal width, evacuation time or completeness, maximum and distal intrarectal pressure, or need to digitate did not differ significantly between the groups. In seven patients with barium trapping (64 percent) the intrarectal pressure dropped abruptly as the balloon entered the rectocele, suggesting that trapping results from sequestration into the vagina, closing part of the rectocele from the normal intrarectal pressure zone. Conclusion: Because no impairment of evacuation appears to be associated with either a large rectocele or trapping, these evacuation problems should not be directly attributed to these proctographic findings. abstract_id: PUBMED:11907721 Barium trapping in rectoceles: are we trapped by the wrong definition? Background: Barium trapping within a rectocele is a criterion used by surgeons to select which patients with rectoceles should undergo operative repair. This proctographic study compared the presence and depth of barium trapping within a rectocele on postevacuation radiography with those seen on posttoilet radiography after further evacuation in the privacy of the bathroom. Methods: Eighty-two consecutive patients with evidence of barium trapping on postevacuation radiographs of a fluoroscopic dynamic cystoproctographic examination were reviewed retrospectively. The size of the rectoceles and the depth of barium trapping on the postevacuation and subsequent posttoilet radiographs were measured. Results: The posttoilet radiographs showed resolution of the barium trapping in 47 (57%) of the 82 patients. Resolution of the trapping was directly related to rectocele size. The mean differences in the depth of barium trapping between the postevacuation and posttoilet radiographs were significant for all sizes of rectocele. Conclusion: Barium trapping in rectoceles changes with the degree of rectal evacuation. More complete evacuation was shown on the posttoilet radiograph than on the postevacuation radiograph. Consequently, the posttoilet radiograph may be more appropriate for the preoperative assessment of barium trapping within rectoceles. abstract_id: PUBMED:34221640 Barium Defecating Proctography and Dynamic Magnetic Resonance Proctography: Their Role and Patient's Perception. Objectives: The objectives of the study were to compare the imaging findings and patient's perception of barium defecating proctography and dynamic magnetic resonance (MR) proctography in patients with pelvic floor disorders. Material And Methods: This is a prospective study conducted on patients with pelvic floor disorders who consented to undergo both barium proctography and dynamic MR proctography. Imaging findings of both the procedures were compared.
Inter-observer agreement (IOA) for key imaging features was assessed. Patient's perception of these procedures was assessed using a short questionnaire and a visual analog scale. Results: Forty patients (M:F = 19:21) with a mean age of 43.65 years and range of 21-75 years were included for final analysis. Mean patient experience score was significantly better for MR imaging (MRI) (p < 0.001). However, patients perceived significantly higher difficulty in rectal evacuation during MRI studies (p = 0.003). While a significantly higher number of rectoceles (p = 0.014) was diagnosed on MRI, a greater number of pelvic floor descent (p = 0.02) and intra-rectal intussusception (p = 0.011) were diagnosed on barium proctography. The IOA for barium proctography was substantial for identifying rectoceles, rectal prolapse and for determining M line, p < 0.001. There was excellent IOA for MRI interpretation of cystoceles, peritoneoceles, and uterine prolapse and substantial to excellent IOA for determining anal canal length and anorectal angle, p < 0.001. The mean study time for the barium and MRI study was 12 minutes and 15 minutes, respectively. Conclusion: Barium proctography was more sensitive than MRI for detecting pelvic floor descent and intrarectal intussusception. Although patients perceived better rectal emptying with barium proctography, the overall patient experience was better for dynamic MRI proctography. abstract_id: PUBMED:3393650 Solitary rectal ulcer syndrome: findings at barium enema study and defecography. Sixteen cases of histopathologically proved solitary rectal ulcer syndrome were encountered. Fifteen patients underwent barium enema study; in nine cases the findings, including rectal stricture, granularity of the mucosa, and thickened rectal folds, were nonspecific. In six cases the study was normal. All patients had a long history of defecation disorders, and defecography was performed in all. In seven cases, intussusception of the rectal wall was seen; in another case the intussusception was accompanied by a rectocele. One case showed rectal prolapse. In four cases, failed relaxation of the puborectalis occurred and prevented the passage of the bolus; in another case there was abnormal perineal descent. In two patients studies were normal. In patients with defecation disorders, the possibility of this syndrome should be considered. Defecography is the method of choice for establishing the diagnosis. abstract_id: PUBMED:27886434 Integrated total pelvic floor ultrasound in pelvic floor defaecatory dysfunction. Aim: Imaging for pelvic floor defaecatory dysfunction includes defaecation proctography. Integrated total pelvic floor ultrasound (transvaginal, transperineal, endoanal) may be an alternative. This study assesses ultrasound accuracy for the detection of rectocele, intussusception, enterocele and dyssynergy compared with defaecation proctography, and determines if ultrasound can predict symptoms and findings on proctography. Treatment is examined. Method: Images of 323 women who underwent integrated total pelvic floor ultrasound and defaecation proctography between 2011 and 2014 were blindly reviewed. The size and grade of rectocele, enterocele, intussusception and dyssynergy were noted on both, using proctography as the gold standard. Barium trapping in a rectocele or a functionally significant enterocele was noted on proctography. Demographics and Obstructive Defaecation Symptom scores were collated.
Results: The positive predictive value of ultrasound was 73% for rectocele, 79% for intussusception and 91% for enterocele. The negative predictive value for dyssynergy was 99%. Agreement was moderate for rectocele and intussusception, good for enterocele and fair for dyssynergy. The majority of rectoceles that required surgery (59/61) and caused barium trapping (85/89) were detected on ultrasound. A rectocele seen on both transvaginal and transperineal scanning was more likely to require surgery than if seen with only one mode (P = 0.0001). If there was intussusception on ultrasound the patient was more likely to have surgery (P = 0.03). An enterocele visualized on ultrasound was likely to be functionally significant on proctography (P = 0.02). There was, however, no association between findings on imaging and symptoms. Conclusion: Integrated total pelvic floor ultrasound provides a useful screening tool for women with defaecatory dysfunction such that defaecatory imaging can be avoided in some. abstract_id: PUBMED:8816549 Defecography in healthy subjects: comparison of three contrast media. Purpose: To determine if differences in the viscosity of defecographic contrast media influence radiographic findings. Materials And Methods: Twenty asymptomatic volunteers underwent defecography three times with a different contrast medium used for each examination. The contrast media varied in viscosity from a thin barium liquid to a commercial barium paste formulated for defecography and to an extremely thick, specially prepared barium contrast paste. Results: Significant differences (P < .05) between media were demonstrated for measurements of the anorectal angle and anorectal junction during liquid medium voiding. Differences in pelvic floor descent and evacuation time were not significant (P > .05). Rectoceles occurred in 14 subjects and were demonstrated with all media. Low-grade intussusceptions were more prevalent with the liquid medium, but their occurrence was not statistically significantly more frequent (P > .05). Conclusion: Altering the viscosity of the barium contrast medium used for defecography does not substantially affect the subsequent radiographic findings. abstract_id: PUBMED:3349875 Radiologic studies of rectal evacuation in adults with idiopathic constipation. A consecutive series of 58 patients with idiopathic constipation and 20 control subjects were studied by evacuation proctography and measurements were made of changes during rectal expulsion. A wide range was found in the control group. The anorectal angle, pelvic floor descent, and the presence or size of an anterior rectocele did not discriminate between the control and patient groups. Internal intussusception was rare. Among constipated patients, the only significant differences from normal were in the time taken to expel barium and the amount of barium remaining in the distal rectum. The majority of control subjects (15 of 20) evacuated most of the barium within 20 seconds whereas 45 of 58 constipated patients took a longer time. Using the area of barium on a lateral view of the rectum as a measure, 19 of 20 control subjects evacuated at least 60 percent of the barium from the distal 4 cm of the rectum compared with only 25 of 58 patients. A varying degree of defecatory impairment was thus established among many patients with constipation.
The patients were subdivided into those with a normal or abnormal whole gut transit rate as an indication of colonic function, and those who did or did not need to digitally evacuate the rectum as a clinical manifestation of an anorectal disorder. No obvious differences were found between these subgroups using the parameters measured. abstract_id: PUBMED:22251617 Barium proctography vs magnetic resonance proctography for pelvic floor disorders: a comparative study. Aim: Accurate and reliable imaging of pelvic floor dynamics is important for tailoring treatment in pelvic floor disorders; however, two imaging modalities are available. Barium proctography (BaP) is widely used, but involves a significant radiation dose. Magnetic resonance (MR) proctography allows visualization of all pelvic midline structures but patients are supine. This project investigates whether there are measurable differences between BaP and MR proctography. Patient preference for the tests was also investigated. Methods: Consecutive patients referred for BaP were invited to participate (National Research Ethics Service approved). Participants underwent BaP in Poole and MR proctography in Dorchester. Proctograms were reported by a consultant radiologist with pelvic floor subspecialization. Results: A total of 71 patients were recruited. Both tests were carried out on 42 patients. Complete rectal emptying was observed in 29% (12/42) on BaP and in 2% (1/42) on MR proctography. Anismus was reported in 29% (12/42) on BaP and 43% (18/42) on MR proctography. MR proctography missed 31% (11/35) of rectal intussusception detected on BaP. In 10 of these cases no rectal evacuation was achieved during MR proctography. The measure of agreement between grade of rectal intussusception was fair (κ=0.260) although MR proctography tended to underestimate the grade. Rectoceles were extremely common but clinically relevant differences in size were evident. Patients reported that they found MR proctography less embarrassing but harder to empty their bowel. Conclusions: The results demonstrate that MR proctography under-reports pelvic floor abnormalities especially where there has been poor rectal evacuation. abstract_id: PUBMED:9934752 FECOM: a new artificial stool for evaluating defecation. Objective: Traditionally, barium paste has been used for performing defecography. Because this substance is not stool-like, barium defecography may not accurately represent defecatory function. Our aim was to prospectively compare the utility of a new artificial stool, "FECOM" (a silicon-filled and barium-coated, deformable device whose shape and consistency mimicked a normal formed stool), with that of barium paste. Methods: Defecography was performed after placing FECOM or barium paste in a random order in 12 healthy subjects (two men and 10 women). We evaluated the changes in anorectal angle, rectal morphology, rectal sensation, and the subjects' preference for a "stool-like" device. Results: Anorectal angle at rest, during squeeze, cough, and straining were each greater with the FECOM when compared with the barium paste (p < 0.006). Anterior rectocele (nine), mucosal intussusception (four), and incontinence (three) were identified only with barium defecography. Nine (75%) subjects preferred FECOM to barium paste (p < 0.001) and reported that expulsion of this device mimicked more closely their stools at home (p < 0.05).
Conclusion: The anorectal angle is influenced by the form and consistency of stool material and is lower with barium paste. The detection of rectocele, mucosal intussusception, and barium leakage in normal subjects during barium defecography questions the significance of these findings. FECOM appears to be a realistic alternative to barium paste for performing defecography. abstract_id: PUBMED:30417419 Systematic review with meta-analysis: defecography should be a first-line diagnostic modality in patients with refractory constipation. Background: Defecography is considered the reference standard for the assessment of pelvic floor anatomy and function in patients with a refractory evacuation disorder. However, the overlap of radiologically significant findings seen in patients with chronic constipation (CC) and healthy volunteers is poorly defined. Aim: To systematically review rates of structural and functional abnormalities diagnosed by barium defecography and/or magnetic resonance imaging defecography (MRID) in patients with symptoms of CC and in healthy volunteers. Methods: Electronic searches of major databases were performed without date restrictions. Results: From a total of 1760 records identified, 175 full-text articles were assessed for eligibility. 63 studies were included providing data on outcomes of 7519 barium defecographies and 668 MRIDs in patients with CC, and 225 barium defecographies and 50 MRIDs in healthy volunteers. Pathological high-grade (Oxford III and IV) intussuscepta and large (>4 cm) rectoceles were diagnosed in 23.7% (95% CI: 16.8-31.4) and 15.9% (10.4-22.2) of patients, respectively. Enterocele and perineal descent were observed in 16.8% (12.7-21.4) and 44.4% (36.2-52.7) of patients, respectively. Barium defecography detected more intussuscepta than MRID (OR: 1.52 [1.12-2.14]; P = 0.009). Normative data for both barium defecography and MRID structural and functional parameters were limited, particularly for MRID (only one eligible study). Conclusions: Pathological structural abnormalities, as well as functional abnormalities, are common in patients with chronic constipation. Since structural abnormalities cannot be evaluated using nonimaging test modalities (balloon expulsion and anorectal manometry), defecography should be considered the first-line diagnostic test if resources allow. Answer: The significance of barium trapping in rectoceles has been a subject of investigation, and the findings suggest that it may not be as critical as previously thought. A study designed to determine whether rectocele size and contrast retention are significant found that there was no impairment of evacuation associated with either a large rectocele or trapping, indicating that these problems should not be directly attributed to proctographic findings (PUBMED:7607041). Another study highlighted that barium trapping in rectoceles changes with the degree of rectal evacuation, and more complete evacuation was shown on posttoilet radiographs than on postevacuation radiographs, suggesting that the posttoilet radiograph may be more appropriate for the preoperative assessment of barium trapping within rectoceles (PUBMED:11907721). Furthermore, a comparative study between barium proctography and magnetic resonance proctography for pelvic floor disorders found that MR proctography under-reports pelvic floor abnormalities, especially where there has been poor rectal evacuation (PUBMED:22251617).
This indicates that barium proctography may be more sensitive in detecting certain pelvic floor disorders, although patient experience was better with dynamic MRI proctography (PUBMED:34221640). A systematic review with meta-analysis suggested that defecography should be a first-line diagnostic modality in patients with refractory constipation, as structural abnormalities cannot be evaluated using nonimaging test modalities (PUBMED:30417419). This supports the idea that while barium trapping in rectoceles can be observed, it may not be the sole factor to consider when diagnosing and treating pelvic floor disorders. In summary, while barium trapping in rectoceles has been used as a criterion for surgical decision-making, its significance is now questioned, and a more comprehensive approach to rectal evacuation assessment is recommended.
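One of the figures behind this answer, the "fair" agreement between barium and MR grading of intussusception (κ = 0.260 in PUBMED:22251617), is a Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's marginal frequencies. The Python sketch below shows the calculation on hypothetical paired gradings invented only to illustrate the formula; it does not reproduce the study's data or its exact value.

# Cohen's kappa on hypothetical paired gradings (illustration only, not study data).
from collections import Counter

barium_grades = [2, 2, 1, 1, 1, 0, 2, 1, 0, 0, 2, 1]   # hypothetical intussusception grades
mri_grades    = [1, 2, 0, 1, 0, 0, 1, 2, 0, 0, 2, 0]   # hypothetical grades for the same patients

n = len(barium_grades)
observed = sum(b == m for b, m in zip(barium_grades, mri_grades)) / n

b_counts, m_counts = Counter(barium_grades), Counter(mri_grades)
expected = sum(b_counts[g] * m_counts[g] for g in b_counts.keys() | m_counts.keys()) / n ** 2

kappa = (observed - expected) / (1 - expected)
print(f"observed = {observed:.2f}, chance-expected = {expected:.2f}, kappa = {kappa:.2f}")
# With these made-up grades kappa comes out around 0.27, i.e. in the same "fair" band as the study's 0.260.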
Instruction: Is one eye better than two in strabismus? Abstracts: abstract_id: PUBMED:34539178 The Outcome of One-to-Four Muscle Surgery by Intraoperative Relaxed Muscle Positioning with Adjustable Suture Technique in Thyroid Eye Disease. Purpose: To identify the outcome of one-to-four muscle surgery by intraoperative relaxed muscle positioning with adjustable suture technique for the treatment of thyroid eye disease. Methods: Ninety patients diagnosed with thyroid eye disease who underwent intraoperative relaxed muscle positioning with adjustable suture technique at Ramathibodi Hospital from January 1, 2015 through December 31, 2020 were included in this retrospective study. The patient demographic data were evaluated. Pre- and post-operative ocular alignment and diplopia status were measured after a follow-up period of at least 6 months. Successful outcomes were categorized into two parts: motor outcome and sensory outcome. Successful motor outcome was defined as vertical deviation equal to 4 prism diopters or less and horizontal deviation equal to 10 prism diopters or less in primary position. Successful sensory outcome was defined as the absence of diplopia in primary position. Results: Ninety patients were included in this study, and the mean age of strabismic surgery was 56.6 ± 10.1 years old. Thirty-nine patients had a history of orbital decompression surgery. Mean follow-up time was 33.7 ± 11.8 months. The success of motor and sensory outcomes exhibited a decrease from one-to-four muscle surgery. Motor success decreased from one-muscle to four-muscle surgery (84.62%, 81.58%, 75.00%, and 64.29%) and sensory success similarly decreased (84.62%, 84.21%, 75.00%, and 78.57%). However, the comparative outcomes of motor success and sensory success were not statistically different among groups (p = 0.58 and 0.84). Lower lid retractions were found in 12 patients (13.33%). Conclusion: Intraoperative relaxed muscle positioning technique might be a successful option for the correction of thyroid eye disease-associated strabismus. This technique may be done with one-to-four muscle surgery, which yields success in both motor and sensory outcomes. abstract_id: PUBMED:1790734 Covering one eye in fixation-disparity measurement causes slight movement of fellow eye. In the subjective measurement of fixation disparity (FD), the subject fuses contours presented in the peripheral macular areas of both eyes (fusion lock). The position of the eyes relative to each other is monitored by means of two haploscopically seen vertical lines presented in the central macular area, one above and one below a binocularly seen horizontal line. The subject is instructed to shift one of the vertical lines horizontally until the two are aligned, while fixating their intersection with the horizontal line. It has recently been questioned whether the foveolae really are pointed towards the perceived intersection. In this study, we monitored the position of one eye while intermittently covering the fellow eye, while the subject maintained fixation of the intersection of the remaining vertical line and the horizontal line. We found slight differences in position of the measured eye, depending on whether the other eye was covered or not, i.e. depending on the presence or absence of fusion in the macular periphery. These differences were more pronounced in the non-dominant eye. abstract_id: PUBMED:20001950 Is one eye better than two in strabismus? Or does the misaligned amblyopic eye interfere with binocular vision? 
A preliminary functional MRI study. Purpose: The aim of this study was to determine if patients with strabismic amblyopia could have increased occipital visual cortex activation with monocular stimulation of the sound fixing eye, rather than with simultaneous stimulation of both eyes. Methods: A prospective study was performed including 12 patients with strabismus and amblyopia, who were evaluated using functional MRI with visual stimulation paradigms. The measurements were made in the occipital visual cortex, assessing the response to the binocular and monocular stimulation. Results: 12 out of 12 patients showed an increased cortical response of the healthy eye in comparison to the amblyopic one. Nine of the 12 patients showed larger cortical activation with visual stimulation of the healthy eye compared to the binocular condition analysis. Three out of the 12 cases had a greater activation area when the stimulation was binocular rather than monocular, 2 of whom had a relatively small angle of strabismus. Conclusions: Patients with amblyopia and strabismus could see better with only one eye instead of both eyes. This could be related to inhibition of the binocular function of the brain by the misaligned amblyopic eye. abstract_id: PUBMED:33727457 One-year Profile of Eye Diseases in Infants (PEDI) in secondary (rural) eye care centers in South India. Purpose: The aim of this study was to report the proportion and patterns of eye diseases observed among infants seen at two rural eye care centers in South India. Methods: A retrospective review of case records of infants seen between January 1, 2017 and December 31, 2017 at two rural secondary eye care centers attached to L V Prasad Eye Institute, Hyderabad. Data were collected regarding their demographic profile, the pattern of eye problems observed, management at the facility itself, and need for referrals. Results: During this period, a total of 3092 children were seen. Among them, 141 were infants (4.56%, 71 boys: 70 girls, median age: 8 months). Twenty-five percent of infants were less than 6 months of age. The most common eye problem was congenital nasolacrimal duct obstruction (n = 76, 53.90%), followed by conjunctivitis (n = 33, 23.40%), retinopathy of prematurity (n = 4, 2.84%) and strabismus (n = 3, 2.13%). One case each of congenital cataract and suspected retinoblastoma were identified. Majority of the cases (58.8%) belonged to the oculoplastic and orbital surgery sub-specialty. Sixteen percent of the infants (n = 23) had sight-threatening eye problems. Twenty percent (n = 28) were referred to tertiary care hospital for further management. Conclusion: Profile of eye disease in infants in secondary or rural eye care centers ranged from simple to complex, including sight-threatening diseases. While our study concluded that nearly 4/5th of these eye problems were simple and could be managed by a well-trained comprehensive ophthalmologist, 20% of these cases required a referral to a tertiary care center. abstract_id: PUBMED:11096219 Eye deviation in patients with one-and-a-half syndrome. To understand malalignments of the visual axes in one-and-a-half syndrome, we measured eye positions in 4 patients with this syndrome under two conditions: with Frenzel goggles to prevent eye fixation and without Frenzel goggles. When fixation was prevented with the Frenzel goggles, all patients showed mild outward deviation in both eyes. 
Removal of the Frenzel goggles elicited adduction of the eye ipsilateral to the side of the lesion for fixation, with greater outward deviation of the contralateral eye (acute stage), or adduction of both eyes to midposition for biocular fixation (convalescent stage). In 3 patients whose outward eye deviation with Frenzel goggles was greater on the ipsilateral side, a transition from one-and-a-half syndrome to ipsilateral internuclear ophthalmoplegia was noted, whereas a transition to ipsilateral gaze palsy was seen in the one patient whose deviation was greater on the contralateral side. These findings suggest that in one-and-a-half syndrome patients, the eyes tend to be in divergent positions when fixation is prevented; ipsilateral eye deviation may result from medial longitudinal fasciculus involvement, and contralateral eye deviation may result from paramedian pontine reticular formation involvement. Viewing a target may lead to a secondary deviation or adaptation of eye positions for fixation. abstract_id: PUBMED:30093093 A study of the visual symptoms in two-dimensional versus three-dimensional laparoscopy. Aim: There are reports of visual strains and associated symptoms when operating in a 3D laparoscopic environment. We aimed to study the extent of visual symptoms seen in 3D versus conventional 2D imaging in volunteers performing laparoscopic tasks and study the effect of eye exercises on 3D laparoscopy. Methods: Twenty-four consented laparoscopic novices were required to undergo a visual acuity test (Snellen chart) and eye deviation test (Maddox Wing). A battery of specific isolated laparoscopic tasks lasting 30 min was developed to test their ability to detect changes in 2D and 3D environments separately. Before and after the 2D and 3D laparoscopic tasks, subjects were asked to complete a standardised questionnaire designed to scale (from 0 to 10) their visual symptoms (blurred vision, difficulty in refocusing from one distance to another, irritated or burning eyes, dry eyes, eyestrain, headache and dizziness). Participants who underwent 3D laparoscopic tasks were randomized into two groups, those who received two minutes of eye exercises before performing the tasks and those who didn't. Independent t-test was used for the statistical analysis of this study. Results: Visual symptoms and eye strain were significant in 2D (p < 0.01) and difficulty in refocusing from one distance to another was significant in 3D laparoscopic imaging (p < 0.05). There was no significant effect of the simple eye exercises on relieving the visual symptoms in the 3D group. Conclusion: Visual symptoms were present in both 2D and 3D imaging laparoscopy. Eye strain was prominent in 2D imaging, while difficulty in refocusing from one distance to another was prominent in 3D. Eye exercises for 3D visual symptoms did not bring any significant improvement. abstract_id: PUBMED:31758028 Relative contributions to vergence eye movements of two binocular cues for motion-in-depth. When we track an object moving in depth, our eyes rotate in opposite directions. This type of "disjunctive" eye movement is called horizontal vergence. The sensory control signals for vergence arise from multiple visual cues, two of which, changing binocular disparity (CD) and inter-ocular velocity differences (IOVD), are specifically binocular. While it is well known that the CD cue triggers horizontal vergence eye movements, the role of the IOVD cue has only recently been explored.
To better understand the relative contribution of CD and IOVD cues in driving horizontal vergence, we recorded vergence eye movements from ten observers in response to four types of stimuli that isolated or combined the two cues to motion-in-depth, using stimulus conditions and CD/IOVD stimuli typical of behavioural motion-in-depth experiments. An analysis of the slopes of the vergence traces and the consistency of the directions of vergence and stimulus movements showed that under our conditions IOVD cues provided very little input to vergence mechanisms. The eye movements that did occur coinciding with the presentation of IOVD stimuli were likely not a response to stimulus motion, but a phoria initiated by the absence of a disparity signal. abstract_id: PUBMED:29231434 Exploration on "three eye-needling" technique of acupuncture. The "three eye-needling" technique is one of the important components of Jin's three needling therapy, mainly used for the treatment of eye disorders such as optic atrophy, macular pigment degeneration, myopia, hyperopia, strabismus, amblyopia, diplopia, glaucoma, cataract, etc. In the paper, Jin's "three eye-needling" technique is explored, including the keys of manipulation, the operation procedure and basic skills. This technique places particular emphasis on "mind regulation", focusing on tranquilizing, observing and concentrating the mind. Precise point selection is the basic requirement, and the techniques for fixing, pressing and pricking are the most important links. Needle insertion is performed with one hand, using gentle rotation manipulation. Mind regulation, point selection and the specific operation are coordinated with each other to bring the "three eye-needling" technique into full play and achieve better therapeutic effects. abstract_id: PUBMED:28222686 Ipsiversive ictal eye deviation in inferioposterior temporal lobe epilepsy-Two SEEG cases report. Background: Versive seizure characterized by conjugate eye movement during an epileptic seizure has commonly been considered one of the most valuable semiological signs for epilepsy localization, especially for frontal lobe epilepsy. However, the lateralizing and localizing significance of ictal eye deviation has been questioned by clinical observation of a series of focal epilepsy studies, including frontal, central, temporal, parietal and occipital epilepsy. Case Presentation: Two epileptic cases characterized by ipsiversive eye deviation as the initial clinical sign during the habitual epileptic seizures are presented in this paper. The localization of the epileptogenic zone in both cases has been confirmed as the inferioposterior temporal region by the findings of ictal stereoelectroencephalography (SEEG) and a good result after epilepsy surgery. Detailed analysis of the exact position of the key contacts of the SEEG electrodes identified the overlap between the location of the epileptogenic zone and the human MT/MST complex, which plays a crucial role in the control of smooth pursuit eye movement. Conclusion: Ipsiversive eye deviation could be the initial clinical sign of inferioposterior temporal lobe epilepsy and may be attributed to the involvement of the human MT/MST complex, especially human MST, which is located on the anterior/dorsal bank of the anterior occipital sulcus (AOS).
As such, the ability to achieve single binocular vision by fusing the images of a single object in the two eyes is lost. In most cases of strabismus, one of the two eyes is clearly deviated and the other eye is used for fixation, although in some cases each eye is used alternately for fixation. While much attention has been devoted to the motor capabilities of the deviating eye in strabismus, little attention has been given to the visuomotor competence of the other eye. We report here that, if one eye of a kitten is made to deviate by surgery, the visuomotor capacities of the other, "normal", eye are affected. A reduction in the ability to follow the movement of a large striped drum is observed with binocular viewing, even when stimuli are viewed monocularly with the normal eye. This means that anomalous visual input from the deviated eye during stimulation is not the cause of the reduced oculomotor capacities. Answer: Based on the provided abstracts, it appears that in certain cases of strabismus, one eye may indeed function better than two. Specifically, the study reported in PUBMED:20001950 found that patients with strabismic amblyopia showed increased cortical response in the occipital visual cortex when the sound fixing eye was stimulated monocularly, compared to when both eyes were stimulated. This suggests that these patients could see better with only one eye instead of both, potentially due to inhibition of binocular function by the misaligned amblyopic eye. However, it is important to note that this conclusion is preliminary and based on functional MRI studies, which measure brain activity rather than direct visual performance. The overall treatment and management of strabismus are likely to be more complex and individualized, taking into account the specific circumstances and needs of each patient.
Instruction: Does sexual violence contribute to elevated rates of anxiety and depression in females? Abstracts: abstract_id: PUBMED:12214797 Does sexual violence contribute to elevated rates of anxiety and depression in females? Background: It is well documented that females have higher rates of internalizing disorders (anxiety, depression) than males. It is also well known that females have higher exposure to childhood sexual abuse and sexual assault. Recently, it has been proposed that the higher levels of internalizing disorders in females may be caused by their greater exposure to sexual violence. Method: Data were gathered as part of the Christchurch Health and Development Study. In this study a cohort of 1265 children born in Christchurch, New Zealand, in 1977 have been studied from birth to age 21 years. The measures collected included: major depression and anxiety, childhood sexual abuse and adolescent sexual assault. Results: Findings confirmed the established conclusion that internalizing disorders are over twice as common in females than males (ORs 2.2-2.7). In addition, it was found that females were exposed to higher rates of sexual violence than males (ORs 5.1-8.4). Statistical control for gender related differences in exposure to sexual violence reduced the associations between gender and anxiety and depression. Nonetheless, even after such control, gender was significantly (P &lt; 0.0001) related to both anxiety (OR = 1.8; 95% CI, 1.3-2.4) and depression (OR = 1.9; 95% CI, 1.4-2.3). Conclusions: Greater female exposure to sexual violence may be a factor that contributes to greater female susceptibility to internalizing disorders. However, even after adjustment for gender differences in exposure to sexual violence it is clear that a substantial relationship between gender and internalizing disorder persists. abstract_id: PUBMED:30541612 Mental health, violence and psychological coercion among female and male trafficking survivors in the greater Mekong sub-region: a cross-sectional study. Background: Human trafficking is a pervasive global crime with important public health implications that entail fundamental human rights violations in the form of severe exploitation, violence and coercion. Sex-specific associations between types of violence or coercion and mental illness in survivors of trafficking have not been established. Methods: We conducted a cross-sectional study with 1015 female and male survivors of trafficking (adults, adolescents and children) who received post-trafficking assistance services in Cambodia, Thailand or Vietnam and had been exploited in various labor sectors. We assessed anxiety and depression with the Hopkins Symptoms Checklist (HSCL-25) and post-traumatic stress disorder (PTSD) symptoms with the Harvard Trauma Questionnaire (HTQ), and used validated questions from the World Health Organization International Study on Women's Health and Domestic Violence to measure physical and sexual violence. Sex-specific modified Poisson regression models were estimated to obtain prevalence ratios (PRs) and their 95% confidence intervals (CI) for the association between violence (sexual, physical or both), coercion, and mental health conditions (anxiety, depression and PTSD). 
Results: Adjusted models indicated that for females, experiencing both physical and sexual violence, compared to not being exposed to violence, was a strong predictor of symptoms of anxiety (PR = 2.08; 95% CI: 1.64-2.64), PTSD (PR = 1.55; 95% CI: 1.37-1.74), and depression (PR = 1.57; 95% CI: 1.33-1.85). Among males, experiencing physical violence with additional threats made with weapons, compared to not being exposed to violence, was associated with PTSD (PR = 1.59; 95% CI: 1.05-2.42) after adjustment. Coercion during the trafficking experience was strongly associated with anxiety, depression, and PTSD in both females and males. For females in particular, exposure to both personal and family threats was associated with a 96% elevated prevalence of PTSD (PR = 1.96; 95% CI: 1.32-2.91) and more than doubling of the prevalence of anxiety (PR = 2.11; 95% CI: 1.57-2.83). Conclusions: The experiences of violence and coercion in female and male trafficking survivors differed and were associated with an elevated prevalence of anxiety, depression, and PTSD in both females and males. Mental health services must be an integral part of service provision, recovery and re-integration for trafficked females and males. abstract_id: PUBMED:35041868 A comparison of symptoms of bipolar and unipolar depression in postpartum women. Background: Distinguishing postpartum women with bipolar from unipolar depression remains challenging, particularly in obstetrical and primary care settings. The post-birth period carries the highest lifetime risk for the onset or recurrence of Bipolar Disorder (BD). Characterization of differences between unipolar and bipolar depression symptom presentation and severity is critical to differentiate the two disorders. Methods: We performed a secondary analysis of a study of 10,000 women screened by phone with the Edinburgh Postnatal Depression Scale at 4-6 weeks post-birth. Screen-positive mothers completed the Structured Clinical Interview for DSM-4 and those diagnosed with BD and unipolar Major Depressive Disorder (UD) were included. Depressive symptoms were assessed with the 29-item Structured Interview Guide for the Hamilton Rating Scale for Depression (SIGH-ADS). Results: The sample consisted of 728 women with UD and 272 women with BD. Women with BD had significantly elevated levels of depression severity due to the higher scores on 8 of the 29 SIGH-ADS symptoms. Compared to UD, women with BD had significantly higher rates of comorbid anxiety disorders and were twice as likely to report sexual and/or physical abuse. Limitations: Only women who screened positive for depression were included in this analysis. Postpartum women with unstable living situations, who were hospitalized or did not respond to contact attempts did not contribute data. Conclusions: Severity of specific symptom constellations may be a useful guide for interviewing postpartum depressed women along with the presence of anxiety disorder comorbidity and physical and/or sexual abuse. abstract_id: PUBMED:16418106 Effects of administering sexually explicit questionnaires on anger, anxiety, and depression in sexually abused and nonabused females: implications for risk assessment. Human sexuality researchers and institutional review boards often are concerned about the sensitive nature of the information that they obtain and whether this type of research increases the psychological risks to participants. To date, there are almost no empirical data that address this issue. 
We administered state and trait measures of anger, anxiety, and depression to 207 females who were administered four questionnaires that asked them to reveal highly sensitive, sexually explicit information, including questions regarding childhood sexual abuse. Then they were readministered the state and trait measures of distress. We found no significant differences, even among those who reported being sexually abused as children, suggesting that such studies do not significantly increase the risk of psychological harm to participants. abstract_id: PUBMED:25123985 Rates and risk factors associated with depressive symptoms during pregnancy and with postpartum onset. The objectives of this study were to evaluate the prevalence of depressive symptoms in the third trimester of pregnancy and at 3 months postpartum and to prospectively identify risk factors associated with elevated depressive symptoms during pregnancy and with postpartum onset. About 364 women attending antenatal clinics or at the time of their ultrasound were recruited and completed questionnaires in pregnancy and 226 returned their questionnaires at 3 months postpartum. Depressed mood was assessed by the Edinburgh Postnatal Depression Scale (EPDS; score of ≥ 10). The rate of depressed mood during pregnancy was 28.3% and 16.4% at 3 months postpartum. Among women with postpartum depressed mood, 6.6% were new postpartum cases. In the present study, belonging to a non-Caucasian ethnic group, a history of emotional problems (e.g. anxiety and depression) or of sexual abuse, comorbid anxiety, higher anxiety sensitivity and having experienced stressful events were associated with elevated depressed mood during pregnancy. Four risk factors emerged as predictors of new onset elevated depressed mood at 3 months postpartum: higher depressive symptomatology during pregnancy, a history of emotional problems, lower social support during pregnancy and a delivery that was more difficult than expected. The importance of identifying women at risk of depressed mood early in pregnancy and clinical implications are discussed. abstract_id: PUBMED:17454517 Effects of completing sexual questionnaires in males and females with histories of childhood sexual abuse: implications for institutional review boards. Few studies have sought to examine empirically the immediate effects of participation in sexual abuse research. The present study investigated the effects of childhood sexual abuse on measures of personality and psychological functioning in 250 males and females. The null hypothesis was that sexually abused and nonabused groups would show no significant differences between pre-and post-testing on measures of state anxiety, state depression, and state anger. No significant differences between pre-and post-testing were observed between nonabused, abused, and severely abused participants. In addition, there were no gender differences among the groups. Findings from this study support those of Savell, Kinder, and Young (2006) and have significant implications for Institutional Review Boards (IRB) as they suggest that participation in childhood sexual abuse or sexuality research does not place sexually abused individuals at greater than minimal risk for immediate increases in anxiety, depression, or anger. abstract_id: PUBMED:25648979 Childhood trauma and adult interpersonal relationship problems in patients with depression and anxiety disorders. 
Introduction: Although a plethora of studies have delineated the relationship between childhood trauma and onset, symptom severity, and course of depression and anxiety disorders, there has been little evidence that childhood trauma may lead to interpersonal problems among adult patients with depression and anxiety disorders. Given the lack of prior research in this area, we aimed to investigate characteristics of interpersonal problems in adult patients who had suffered various types of abuse and neglect in childhood. Methods: A total of 325 outpatients diagnosed with depression and anxiety disorders completed questionnaires on socio-demographic variables, different forms of childhood trauma, and current interpersonal problems. The Childhood Trauma Questionnaire (CTQ) was used to measure five different forms of childhood trauma (emotional abuse, emotional neglect, physical abuse, physical neglect, and sexual abuse) and the short form of the Korean-Inventory of Interpersonal Problems Circumplex Scale (KIIP-SC) was used to assess current interpersonal problems. We dichotomized patients into two groups (abused and non-abused groups) based on CTQ score and investigated the relationship of five different types of childhood trauma and interpersonal problems in adult patients with depression and anxiety disorders using multiple regression analysis. Result: Different types of childhood abuse and neglect appeared to have a significant influence on distinct symptom dimensions such as depression, state-trait anxiety, and anxiety sensitivity. In the final regression model, emotional abuse, emotional neglect, and sexual abuse during childhood were significantly associated with general interpersonal distress and several specific areas of interpersonal problems in adulthood. No association was found between childhood physical neglect and current general interpersonal distress. Conclusion: Childhood emotional trauma has more influence on interpersonal problems in adult patients with depression and anxiety disorders than childhood physical trauma. A history of childhood physical abuse is related to dominant interpersonal patterns rather than submissive interpersonal patterns in adulthood. These findings provide preliminary evidence that childhood trauma might substantially contribute to interpersonal problems in adulthood. abstract_id: PUBMED:30086534 Childhood trauma dependent anxious depression sensitizes HPA axis function. Anxious depression is a common subtype of major depressive disorder (MDD) and is associated with greater severity and poorer outcome. Alterations of the hypothalamic-pituitary-adrenal (HPA) axis, especially of the glucocorticoid receptor (GR) function, are often observed in MDD, but evidence lacks for anxious depression. Childhood adversity is known to influence both the HPA axis and risk of MDD. Therefore, we investigated GR-function in anxious depression dependent on childhood adversity. We enrolled 144 depressed in-patients (49.3% females). Anxious depression was defined using the Hamilton Depression Rating Scale (HAM-D) anxiety/somatization factor score ≥7. Blood draws were performed at 6 pm before and 3 h after 1.5 mg dexamethasone ingestion for measurement of cortisol, ACTH and blood count to assess GR-function and the immune system. In a subgroup of n = 60 FKBP5 mRNA controlled for FKBP5 genotype was measured before and after dexamethasone. Childhood adversity was evaluated using the Childhood Trauma Questionnaire (CTQ). 
We identified 78 patients (54.2%) with anxious depression who showed a greater severity and worse outcome. These patients were more often exposed to sexual abuse (30% vs. 16%; p = 0.04) and emotional neglect (76% vs. 58%; p = 0.02) than patients with non-anxious depression. Anxious depressed patients showed an enhanced GR-induced FKBP5 mRNA expression (F = 5.128; p = 0.03) and reduced cortisol levels, partly dependent on sexual abuse (F = 7.730; p = 0.006). Additionally, the GR-induced leukocyte response was enhanced in patients with sexual abuse (F = 7.176; p = 0.008). Anxious depression dependent on childhood trauma is associated with heightened sensitivity of the HPA axis and the immune system, which should be considered in treatment algorithms and targets. abstract_id: PUBMED:12100347 Gender differences in depression risk and coping factors in a clinical sample. Objective: To examine gender differences in depression risk and coping factors in a clinical sample of patients with a diagnosis of DSM-IV major depression. Method: Patients were assessed for substance use and abuse, family history of psychiatric disorder, interpersonal depressogenic factors and lifetime history of anxiety disorders. Trait anxiety, coping styles when depressed, parental bonding, marital features and personality style were also measured. Patients were reassessed at 12-month follow-up. Results: There were few gender differences in experience of depression (either in duration, type or severity prior to treatment) in a group with established episodes of major depression, but women reported more emotional arousability when depressed. Women reported higher rates of dysfunctional parenting and childhood sexual abuse, and rated their partners as less caring and as more likely to be a depressogenic stressor. Men were more likely to have a generalized anxiety disorder at assessment and to use recreational drugs prior to presentation. Men were rated as having a more rigid personality style and 'Cluster A' personality traits both at assessment and follow-up. Conclusion: There were few gender differences in severity or course of established episodes of major depression. Gender differences were related to levels of arousal, anxiety disorders, and repertoires for dealing with depression, rather than depressive symptoms per se. abstract_id: PUBMED:16893817 History of child abuse and severity of adult depression: the mediating role of cognitive schema. The link between childhood abuse, adult depression, and anxiety has been well studied, but few studies have empirically explored the mechanism of that link. Using a clinical sample of women, this study examined the relationship between retrospectively measured childhood abuse and neglect and current adult symptoms of anxiety and depression, via the mediating effects of cognitive style. This study found that women who reported a positive abuse history were significantly more depressed and exhibited more maladaptive schemas than women who did not report a history of abuse. Specifically, it would appear that cognitive styles marked by interpersonal disconnection and rejection are particularly pathogenic. These findings contribute to the growing literature by providing support for the role of cognitions in mediating the link between childhood abuse and psychopathology. Answer: Yes, sexual violence does contribute to elevated rates of anxiety and depression in females.
The Christchurch Health and Development Study, which followed a cohort of children from birth to age 21, found that females were exposed to higher rates of sexual violence than males and that this exposure was associated with higher rates of internalizing disorders such as anxiety and depression. Even after controlling for gender differences in exposure to sexual violence, a significant relationship between gender and internalizing disorders persisted, suggesting that sexual violence may be a contributing factor to the greater susceptibility of females to these disorders (PUBMED:12214797). Additionally, a cross-sectional study with survivors of trafficking in the greater Mekong sub-region indicated that for females, experiencing both physical and sexual violence was a strong predictor of symptoms of anxiety, depression, and post-traumatic stress disorder (PTSD). The study highlighted the importance of integrating mental health services into recovery and re-integration for trafficked females and males (PUBMED:30541612). In the context of postpartum women, those with bipolar depression reported higher rates of comorbid anxiety disorders and were twice as likely to report sexual and/or physical abuse compared to those with unipolar depression, suggesting a link between sexual violence and increased severity of depression and anxiety symptoms (PUBMED:35041868). Moreover, research on depressive symptoms during pregnancy and postpartum onset identified a history of sexual abuse as a risk factor associated with elevated depressive mood during pregnancy (PUBMED:25123985). Similarly, childhood trauma, including sexual abuse, has been associated with adult interpersonal relationship problems in patients with depression and anxiety disorders, indicating long-term psychological effects of such experiences (PUBMED:25648979). In summary, the evidence from these studies supports the conclusion that sexual violence is a factor that contributes to higher rates of anxiety and depression in females.
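For readers who want to see how prevalence ratios of the kind quoted above are typically obtained, the following minimal Python sketch illustrates a modified Poisson regression, that is, a Poisson GLM with robust (sandwich) standard errors applied to a binary outcome, which is the approach named in PUBMED:30541612. All data, variable names and numbers below are synthetic placeholders for illustration, not values or code from the cited study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
exposed = rng.integers(0, 2, n)           # hypothetical exposure indicator (1 = exposed)
p = np.where(exposed == 1, 0.40, 0.20)    # assumed true prevalence in each group
outcome = rng.binomial(1, p)              # binary outcome, e.g. screening positive for anxiety

X = sm.add_constant(exposed.astype(float))
model = sm.GLM(outcome, X, family=sm.families.Poisson())
result = model.fit(cov_type="HC0")        # robust variance keeps the CI valid for a binary outcome

pr = np.exp(result.params[1])             # exponentiated coefficient = prevalence ratio
ci_low, ci_high = np.exp(result.conf_int()[1])
print(f"PR = {pr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

The exponentiated exposure coefficient is the prevalence ratio; the robust variance estimator is what makes the Poisson approach "modified" and keeps the confidence interval appropriate even though the outcome is binary rather than a count.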
Instruction: Computerized tomography of the epiphyseal union of the medial clavicle: an auxiliary method of age determination during adolescence and the 3d decade of life? Abstracts: abstract_id: PUBMED:9272998 Computerized tomography of the epiphyseal union of the medial clavicle: an auxiliary method of age determination during adolescence and the 3d decade of life? Purpose: To establish a reference population for the stages of epiphyseal union of the medial clavicle determined by CT. Material And Methods: Retrospectively, the thoracic CTs of patients under 30 years of age were reevaluated. Basic conditions were the lack of a bone development disorder and a sufficient assessment of the medial clavicle in a bone window setting. The stages of epiphyseal union were categorized as follows: Stage 1 refers to nonunion without ossification of the epiphysis, Stage 2 to nonunion with a separate and ossified epiphysis, Stage 3 to partial, and Stage 4 to complete union. Results: Up to now, 279 individuals could be included in the study. Stage 1 was observed till age 16, Stage 2 occurred from ages 13 through 22, Stage 3 was found from ages 16 through 26. Stage 4 was first noted at age 22, and in 100% of the sample at age 27. Conclusions: CT is well suited to determining the stages of epiphyseal union of the medial clavicle. It may become a generally accepted method of age identification during adolescence and the 3rd decade of life. The presented data serve as a reference population at least for white Europeans. abstract_id: PUBMED:32559612 Computed tomographic analysis of medial clavicular epiphyseal fusion for age estimation in Indian population. Forensic age estimation is a crucial aspect of the identification process. While epiphyseal fusion of long bones has been studied for age estimation for a long time, over the past few years, the role of medial clavicular epiphyseal fusion in age estimation is being explored. The medial clavicular epiphyseal fusion can be used to estimate age in young adults, and can also determine whether medicolegally significant ages of 16 and 18 years have been attained by an individual. The present study aimed at generating regression models to estimate age by evaluating the medial clavicular epiphyseal fusion in the Indian population using the Schmeling et al. and Kellinghaus et al. methods, and to assess whether an individual's age is over the medicolegally significant thresholds of 16 and 18 years. The degree of ossification of the medial clavicular epiphysis was studied in CT images of 350 individuals aged 10.01-35.47 years. A significant statistical correlation (P < 0.001) was observed between the degree of fusion and the chronological age of the participants, with Spearman's correlation (ρ) = 0.918 in females, and ρ = 0.905 in males. Regression models were generated using the degree of ossification of the medial end of the clavicle of 350 individuals (147 females and 203 males) and these models were applied to a test set of 50 individuals (25 females and 25 males). A mean absolute error of 1.50 for females, 1.14 for males, and 1.32 for the total test set was observed when the variance between the chronological ages and estimated ages was calculated. abstract_id: PUBMED:3726097 Skeletal age determination in adolescence and the 3d decade of life. The determination of bone age of hand and wrist focussing primarily on bone centre development is limited to childhood and during this period has diagnostic significance as well as therapeutic consequences.
At puberty the fusion of epiphyseal growth plates is more important: it reflects the termination of growth and the biological stage of development. An extension of knowledge can be obtained by correlating age beyond 18 years into the third decade of life. This allows sex determination and reflects maturation processes through the appearance and fusion of the apophyses of the iliac crest and the ischial bone. The indications are mainly forensic and for an individual's identification. abstract_id: PUBMED:30878257 The epiphyseal scar joint line distance and age are important factors in determining the optimal screw length for medial malleoli fractures. Aim: The screw length is important to achieve a stable fixation for medial malleoli fractures. We aimed to evaluate the optimal screw length for different age groups in surgically treated medial malleoli fractures. The second aim was to identify the utility of the distance of the epiphyseal scar to the joint line or the joint line to the medullary space for assessment of screw length. Material and Methods: 368 X-rays and computed tomography (CT) images of ankle joints were retrospectively evaluated for optimal screw length, epiphyseal scar to joint line distance, and joint line to medullary space distance. The mean screw length for each decade was calculated. The correlations of screw length with age, screw length with distance of epiphyseal scar to joint line, and screw length with distance of joint line to medullary space were evaluated. Results: The optimal screw length was markedly decreased in patients in the 61-70 and >70 year-old groups (p = 0.002). As the distance of the epiphyseal scar from the joint line increased, the optimal screw length also increased (p = 0.001). The distance of the epiphyseal scar from the joint line decreased with age (p = 0.011). Conclusion: The optimal screw length decreased with age, and the epiphyseal scar to joint line distance could be a clue to the optimal screw length in medial malleoli fractures. abstract_id: PUBMED:9724422 Bone age determination based on the study of the medial extremity of the clavicle. The development of the medial clavicular epiphysis and its fusion with the clavicular shaft have been a subject of medical research since the second decade of this century. Computed tomography provides the imaging modality of choice in analyzing the maturation process of the sternal end of the clavicle. In a retrospective study, we analyzed normal development in 380 individuals under the age of 30 years. The appearance of an epiphyseal ossification center occurred between ages 11 and 22 years. Partial union was found from age 16 until age 26 years. Complete union was first noted at age 22 years and in 100% of the sample at age 27 years. Based on these data, age-related standardized age distributions and 95% reference intervals were calculated. Compared to the experience recorded in the relevant literature, there are several landmarks that show no significant change between different ethnic groups and different periods of publication; these are the onset of ossification, the time span of partial union, and the appearance of complete union. Despite the relatively long time spans of the maturation stages, bone age estimation based on the study of the development of the medial clavicular epiphysis may be a useful tool in forensic age identification in living individuals, especially if the age of the subject is about the end of the second or the beginning of the third decade of life (e.g.
in determining the applicability of adult or juvenile penal systems). Another possible use is in identifying human remains whose age is estimated at under 30 years. abstract_id: PUBMED:37606638 Patients with trochlear dysplasia have dysplastic medial femoral epiphyseal plates. Purpose: To investigate the growth of the epiphyseal plate in patients with trochlea dysplasia using a 3D computed tomography (CT)-based reconstruction of the bony structure of the distal femur. The epiphysis plate was divided into a medial part and a lateral part to compare their differences in patients with trochlear dysplasia. Methods: This retrospective study included 50 patients with trochlea dysplasia in the study group and 50 age- and sex-matched patients in the control group. Based on the CT images, MIMICS was used to reconstruct the bony structure of the distal femur. Measurements included the surface area and volume of the growth plate (both medial and lateral), the surface area and capacity of the proximal trochlea, trochlea-physis distance (TPD) (both medial and lateral), and height of the medial and lateral condyle. Results: The surface area of the medial epiphyseal plate (1339.8 ± 202.4 mm2 vs. 1596.6 ± 171.8 mm2), medial TPD (4.9 ± 2.8 mm vs. 10.6 ± 3.0 mm), height of the medial condyle (1.1 ± 2.5 mm vs. 4.9 ± 1.3 mm), and capacity of the proximal trochlear groove (821.7 ± 230.9 mm3 vs. 1520.0 ± 498.0 mm3) was significantly smaller in the study group than in the control group. A significant positive correlation was found among the area of the medial epiphyseal plate, the medial TPD, the height of the medial condyle and the capacity of the proximal trochlear groove (r = 0.502-0.638). Conclusion: The medial epiphyseal plate was dysplastic in patients with trochlea dysplasia. There is a significant positive correlation between the surface area of the medial epiphyseal plate, medial TPD, height of the medial condyle and capacity of the proximal trochlear groove, which can be used to evaluate the developmental stage of the trochlea in clinical practice and to guide targeted treatment of trochlear dysplasia. Level Of Evidence: III. abstract_id: PUBMED:23866072 Epiphyseal union of the cervical vertebral centra: its relationship to skeletal age and maturation of thoracic vertebral centra. Epiphyseal union stages for cervical vertebral centra (ring epiphyses) were documented for 55 individuals (females and males, ages 14-27 years) from the Terry Collection, using the Albert and Maples method 1, to examine both its relationship to age at death and to thoracic data collected from the same individuals using the same method. Results showed a moderate correlation between cervical ring union and age (r = 0.63, p = 0.000), and a fairly low correlation between cervical and thoracic ring union (r = 0.41, p = 0.002). Paired samples t-tests yielded a statistically significant difference between cervical and thoracic union mean values (p = 0.01). Union progressed earlier in cervical vertebrae and in females. Results indicated fairly substantial variation in both sexes. Findings may serve as a basic guideline for estimating a general age range at death for unknown skeletal remains and to corroborate findings from other skeletal age indicators. abstract_id: PUBMED:36729183 Automated localization of the medial clavicular epiphyseal cartilages using an object detection network: a step towards deep learning-based forensic age assessment. Background: Deep learning is a promising technique to improve radiological age assessment. 
However, expensive manual annotation by experts poses a bottleneck for creating large datasets to appropriately train deep neural networks. We propose an object detection approach to automatically annotate the medial clavicular epiphyseal cartilages in computed tomography (CT) scans. Methods: The sternoclavicular joints were selected as structure-of-interest (SOI) in chest CT scans and served as an easy-to-identify proxy for the actual medial clavicular epiphyseal cartilages. CT slices containing the SOI were manually annotated with bounding boxes around the SOI. All slices in the training set were used to train the object detection network RetinaNet. Afterwards, the network was applied individually to all slices of the test scans for SOI detection. Bounding box and slice position of the detection with the highest classification score were used as the location estimate for the medial clavicular epiphyseal cartilages inside the CT scan. Results: From 100 CT scans of 82 patients, 29,656 slices were used for training and 30,846 slices from 110 CT scans of 110 different patients for testing the object detection network. The location estimate from the deep learning approach for the SOI was in a correct slice in 97/110 (88%), misplaced by one slice in 5/110 (5%), and missing in 8/110 (7%) test scans. No estimate was misplaced by more than one slice. Conclusions: We demonstrated a robust automated approach for annotating the medial clavicular epiphyseal cartilages. This enables training and testing of deep neural networks for age assessment. abstract_id: PUBMED:27015321 CT evaluation of medial clavicular epiphysis as a method of bone age determination in adolescents and young adults. Purpose: We aimed to investigate the use of computed tomography (CT) staging of the medial clavicular epiphysis ossification in forensic bone age determination, and find a CT criterion to determine whether an individual is adult or not. Methods: Chest CT and pulmonary CT angiography exams of 354 patients between 10 and 30 years of age (mean, 21.4 years) were retrospectively evaluated for epiphyseal ossification phase of the bilateral medial clavicles (708 clavicles) and compared with the sex and chronologic age of the individuals. The ossification phase of the medial clavicular epiphyses was classified from stage I to stage V using a modified staging system. Results: Epiphyseal ossification center appeared from 11 to 21 years of age. Partial fusion occurred between 16 and 23 years of age. Complete fusion was first achieved at the ages of 18 and 19 years for male and female individuals, respectively. The probability of an individual being ≥18 years old was 70.8% in stage III A and 100% in stages III B, IV, and V in females and males. Conclusion: CT evaluation of the medial clavicular epiphysis is helpful in forensic age determination and stage III B can be used as a criterion to make the prediction that an individual is older than 18 years. abstract_id: PUBMED:33541216 Forensic age estimation using conventional radiography of the medial clavicular epiphysis: A systematic review. Of the many roles that forensic anthropologists and medico-legal professionals need to perform, forensic age estimation is one of the most frequent and important. Scoring medial clavicular epiphyseal (MCE) fusion is a method used to estimate age in young adults. 
The aim of this systematic review is to assess the reliability and reproducibility of MCE fusion visualised by conventional radiography and scored by Schmeling's grading system to determine whether an individual has attained the age of 18 years. Four articles were acquired after screening 4589 articles across four databases, and these were subjected to qualitative and quantitative synthesis. The risk of bias was calculated in the qualitative synthesis using the QUADAS-2 tool. Horizontal box plots were constructed to see whether MCE fusion as visualised by conventional radiography can be used to ascertain whether an individual has attained the age of maturity (18 years). It was observed that stages 4 and 5 of the Schmeling's method of age estimation from MCE fusion are observed only in individuals aged ≥18 years. This indicates that MCE fusion, when visualised using x-rays, which are associated with less ionisation radiation compared to computed tomography, can be used to verify the attainment of the age majority in individuals. Answer: Yes, computerized tomography (CT) of the epiphyseal union of the medial clavicle is an auxiliary method of age determination during adolescence and the third decade of life. Studies have established reference populations for the stages of epiphyseal union of the medial clavicle determined by CT, which can be used to estimate age in young adults and determine whether medicolegally significant ages of 16 and 18 years have been attained (PUBMED:9272998; PUBMED:32559612). The stages of epiphyseal union are categorized into nonunion without ossification (Stage 1), nonunion with a separate and ossified epiphysis (Stage 2), partial union (Stage 3), and complete union (Stage 4), with these stages occurring at different ages (PUBMED:9272998). CT evaluation of the medial clavicular epiphysis has been shown to be helpful in forensic age determination, with certain stages indicating a high probability that an individual is older than 18 years (PUBMED:27015321). Additionally, the development of the medial clavicular epiphysis and its fusion with the clavicular shaft has been a subject of research, and CT provides the imaging modality of choice for analyzing this maturation process (PUBMED:9724422). Therefore, CT of the medial clavicular epiphyseal union is a well-suited and generally accepted method for age identification during adolescence and the third decade of life, particularly in forensic and medico-legal contexts.
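As an illustration of the statistical machinery behind these age estimates, the short Python sketch below mimics, on synthetic data, the regression-plus-mean-absolute-error validation described in PUBMED:32559612 and the stage-threshold logic reported in PUBMED:27015321. The coefficients, noise level and cut-off are invented for demonstration only and must not be used for actual forensic casework.

import numpy as np

rng = np.random.default_rng(1)
stages = rng.integers(1, 6, 300)                       # ossification stage 1-5 (Schmeling-type grading)
ages = 10 + 3.5 * stages + rng.normal(0, 1.5, 300)     # synthetic chronological ages

# Fit age = a + b * stage by ordinary least squares.
b, a = np.polyfit(stages, ages, 1)

# Apply the model to a held-out test set and compute the mean absolute error,
# mirroring the validation step reported in the abstract (MAE of about 1.1-1.5 years there).
test_stages = rng.integers(1, 6, 50)
test_ages = 10 + 3.5 * test_stages + rng.normal(0, 1.5, 50)
predicted = a + b * test_stages
mae = np.mean(np.abs(predicted - test_ages))
print(f"age ~ {a:.1f} + {b:.1f} * stage, test MAE = {mae:.2f} years")

# PUBMED:27015321 reports that stages IIIB, IV and V corresponded to a 100% probability
# of being at least 18 years old, so a stage can also serve as a threshold test.
def likely_adult(stage: int) -> bool:
    return stage >= 4   # illustrative numeric cut-off only, not a forensic rule

Regression gives a point estimate of age, while the threshold question (has the person reached 16 or 18 years?) is usually answered from the minimum age observed for a given stage in the reference population.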
Instruction: Evaluation of physiological FDG uptake in the skeleton in adults: is it uniformly distributed? Abstracts: abstract_id: PUBMED:24950889 Evaluation of physiological FDG uptake in the skeleton in adults: is it uniformly distributed? Aim: The aim of this study was to study whether FDG was uniformly distributed throughout the skeleton and whether age and gender affected this biodistribution. Material And Methods: A total of 158 patients were included in this retrospective study. None of the patients had received prior treatment that had affected the bone marrow and patients with bone metastases, trauma, benign and/or malignant hematologic disorders were excluded from the study. The SUVmax from the 24 different locations in the skeleton was obtained and all the values were compared with each other. Results: FDG uptake in the skeleton was not uniform in both sexes. While the highest FDG uptake was seen in the L3 vertebra, the lowest glucose metabolism was observed in the diaphysis of the femur. Concerning the vertebral column, FDG uptakes were also non-uniform and the SUVmax gradually increased from the cervix to the lumbar spine. The mean skeletal SUVmax was decreased in accordance with age in both genders. Conclusion: FDG was not uniformly distributed throughout the skeleton in both sexes. It had a tendency to increase from the appendicular to axial skeleton and from cervical to lumbar spine in the vertebral column that may be related with the normal distribution of the red bone marrow. Additionally, the glycolytic metabolism of the whole skeleton was gradually decreased in accordance with the age in both sexes. abstract_id: PUBMED:21614339 Physiological uptake in FDG PET simulating disease. Many potential pitfalls and artefacts have been described in PET imaging that uses F-18 fluorodeoxyglucose (FDG). Normal uptake of FDG occurs in many sites of the body and may cause confusion in interpretation particularly in oncology imaging. Clinical correlation, awareness of the areas of normal uptake of FDG in the body and knowledge of variation in uptake as well as benign processes that are FDG avid are necessary to avoid potential pitfalls in image interpretation. In this context, optimum preparation of patients for their scans can be instituted in an attempt to reduce the problem. Many of the problems and pitfalls associated with areas of normal uptake of FDG can be solved by using PET CT imaging. PET CT imaging has the ability to correctly attribute FDG activity to a structurally normal organ on CT. However, the development of combined PET CT scanners also comes with its own specific problems related to the combined PET CT technique. These include misregistration artefacts due to respiration and the presence of high density substances which may lead to artefactual overestimation of activity if CT data are used for attenuation correction. abstract_id: PUBMED:33392345 Physiological FDG uptake in growth plate on pediatric PET. Objectives: 18F-Fluorodeoxyglucose (FDG) uptake in children is different from that in adults. Physiological accumulation is known to occur in growth plates, but the pattern of distribution has not been fully investigated. Our aim was to evaluate the metabolic activity of growth plates according to age and location. Methods: We retrospectively evaluated 89 PET/CT scans in 63 pediatric patients (male : female=25 : 38, range, 0-18 years). 
Patients were classified into four age groups (Group A: 0-2 years, Group B: 3-9 years, Group C: 10-14 years and Group D: 15-18 years). The maximum standardized uptake value (SUVmax) of the proximal and distal growth plates of the humerus, the forearm bones and the femur were measured. The SUVmax of each site and each age group were compared and statistically analyzed. We also examined the correlations between age and SUVmax. Results: As for the comparison of SUVmax in each location, the SUVmax was significantly higher in the distal femur than those in the other sites (p < 0.01). SUVmax in the distal humerus and the proximal forearm bones were significantly lower than those in the other sites (p < 0.01). In the distal femur, there was large variation in SUVmax, while in the distal humerus and the proximal forearm bones, there was small variation. As for the comparison of SUVmax in each age group, the SUVmax in group D tended to be lower than those in the other groups, but in the distal femur, there was no significant difference among each age group. Conclusion: Our data indicate that FDG uptake in growth plates varies depending on the site and age, with remarkable uptake especially in the distal femur. abstract_id: PUBMED:33517516 Series of myocardial FDG uptake requiring considerations of myocardial abnormalities in FDG-PET/CT. Distinct from cardiac PET performed with preparation to control physiological FDG uptake in the myocardium, standard FDG-PET/CT performed with 4-6 h of fasting will show variation in myocardial FDG uptake. For this reason, important signs of myocardial and pericardial abnormality revealed by myocardial FDG uptake tend to be overlooked. However, recognition of possible underlying disease will support further patient management to avoid complications due to the disease. This review demonstrates the mechanism of FDG uptake in the myocardium, discusses the factors affecting uptake, and provides notable image findings that may suggest underlying disease. abstract_id: PUBMED:21712915 New Intraspinal cause of physiological FDG uptake. We present a paediatric case of Papillary Ca thyroid under evaluation for elevated Thyroglobulin (Tg) level with negative (131)I whole-body scintigraphy. Differentiated thyroid cancer (DTC) arises from follicular epithelium and retains basic biological features like expression of the sodium iodide symporter (NIS), which is the cellular basis of radioiodine ((131)I) concentration during thyroid ablation. Once dedifferentiation of thyroid cells occurs, cells fail to concentrate (131)I, posing both diagnostic and therapeutic problems in DTC, and one may have to resort to other imaging techniques for disease localization. As DTC progression is slow, patients have a relatively good prognosis. However, children with thyroid malignancies need aggressive management, as the initial presentation itself may be with nodal metastases. It is well known that FDG PET CT, apart from its oncological applications, is also used in the evaluation of vascular inflammation, especially Takayasu's arteritis. It is also reported in the literature that (18)F-FDG uptake can be seen relatively frequently in the arterial tree of cancer patients. Dunphy et al. reported the association of vascular FDG uptake in inflammation as well as in normal arteries. This study typically describes FDG uptake in a patchwork of normal vessel, focal inflammation and/or calcification of vessels.
The other plausible reasons for significant vascular (18)F-FDG uptake are drugs such as potent non steroidal anti-inflammatory agents, dexamethasone, prednisone and tacrolimus. Our patient showed false positive (18)F Fluorodeoxyglucose (FDG) uptake in spinal cord at D11/12 and D12/L1 vertebral levels in FDG PET CT imaging performed as part of raised Thyroglobulin workup. This intra spinal FDG uptake is attributed to physiological uptake and inadequate FDG clearance from artery of Adamkiewicz, which can be added as a new physiological cause of FDG uptake unreported in literature as yet. abstract_id: PUBMED:36306026 Differentiation of lower limb vasculitis from physiological uptake on FDG PET/CT imaging. Purposes: To analyze the difference of 2-deoxy-2-[18F]fluoro-D-glucose (18F-FDG) uptake between vasculitis and non-vasculitic patients in PET/CT imaging and the factors related to vascular uptake in non-vasculitic patients. To investigate the feasibility of identifying vasculitis of the lower limb and physiological uptake with delayed imaging. Procedures: Among 244 patients who underwent PET/CT examination, imaging features of patients with or without vasculitis were retrospectively analyzed. The factors related to FDG uptake in the lower limb vessels of non-vasculitic patients were analyzed. Another 44 patients with suspected systemic vasculitis in PET/CT were prospectively studied to analyze the efficacy of delayed imaging on differentiating vascular uptake in lower limbs. Results: In PET/CT imaging of patients with vasculitis, involvement of trunk vessels showed segmental or diffuse FDG distribution. Lower limb vascular involvement showed reticular uptake accompanied by nodular or patchy changes. In non-vasculitic patients, vascular uptake mainly showed linear uptake in lower limb vessels and there was no significant difference in uptake degree compared with vasculitis patients. Body weight and interval time were the independent influence factors of vascular uptake in lower limbs of non-vasculitic patients. In delayed imaging, lower limb vasculitis all showed reticular uptake and physiological uptake all showed a linear pattern. ROC analysis showed the change rate of SUVmax (≥ 20%) between early and delayed imaging could delineate physiological vascular uptake with a sensitivity of 100% and specificity of 81.0%. Conclusions: When PET/CT is used for the diagnosis and classification of vasculitis, the physiological uptake of lower limb vessels may mislead the diagnosis. PET/CT imaging features or delayed imaging improved diagnostic efficacy. abstract_id: PUBMED:32749586 Physiologic and hypermetabolic breast 18-F FDG uptake on PET/CT during lactation. Objective: To investigate the patterns of breast cancer-related and lactation-related 18F-FDG uptake in breasts of lactating patients with pregnancy-associated breast cancer (PABC) and without breast cancer. Methods: 18F-FDG-PET/CT datasets of 16 lactating patients with PABC and 16 non-breast cancer lactating patients (controls) were retrospectively evaluated. Uptake was assessed in the tumor and non-affected lactating tissue of the PABC group, and in healthy lactating breasts of the control group, using maximum and mean standardized uptake values (SUVmax and SUVmean, respectively), and breast-SUVmax/liver-SUVmean ratio. Statistical tests were used to evaluate differences and correlations between the groups. 
Results: Physiological uptake in non-breast cancer lactating patients' breasts was characteristically high regardless of active malignancy status other than breast cancer (SUVmax = 5.0 ± 1.7, n = 32 breasts). Uptake correlated highly between the two breasts (r = 0.61, p = 0.01), but was not correlated with age or lactation duration (p = 0.24 and p = 0.61, respectively). Among PABC patients, the tumors demonstrated high 18F-FDG uptake (SUVmax = 7.8 ± 7.2, n = 16), which was 326-643% higher than the mostly low physiological FDG uptake observed in the non-affected lactating parenchyma of these patients (SUVmax = 2.1 ± 1.1). Overall, 18F-FDG uptake in lactating breasts of PABC patients was significantly decreased by 59% (p &lt; 0.0001) compared with that of lactating controls without breast cancer. Conclusion: 18F-FDG uptake in lactating tissue of PABC patients is markedly lower compared with the characteristically high physiological uptake among lactating patients without breast cancer. Consequently, breast tumors visualized by 18F-FDG uptake in PET/CT were comfortably depicted on top of the background 18F-FDG uptake in lactating tissue of PABC patients. Key Points: • FDG uptake in the breast is characteristically high among lactating patients regardless of the presence of an active malignancy other than breast cancer. • FDG uptake in non-affected lactating breast tissue is significantly lower among PABC patients compared with that in lactating women who do not have breast cancer. • In pregnancy-associated breast cancer patients, 18F-FDG uptake is markedly increased in the breast tumor compared with uptake in the non-affected lactating tissue, enabling its prompt visualization on PET/CT. abstract_id: PUBMED:38245051 Meta-analysis of the effectiveness of heparin in suppressing physiological myocardial FDG uptake in PET/CT. Background: The present meta-analysis aims to investigate the effectiveness of heparin administration in suppressing physiological myocardial 18F-fluorodeoxyglucose (FDG) uptake on positron emission tomography (PET)/computed tomography (CT), as its role in this regard has not been well investigated. Methods: PRISMA guidelines were used to interrogate the PubMed, Embase, Cochrane library, Web of Knowledge, and www.clinicaltrail.gov databases from the earliest records to March 2023. The final analysis included five randomized controlled trials (RCTs). Meta-analysis was conducted to compare the effectiveness of unfractionated heparin (UFH) administration versus non-UFH administration, and subgroup analysis based on fixed and variable fasting durations was conducted. Effect sizes were pooled using a random-effects model, and the pooled odds ratios (ORs) were calculated. Results: Five eligible RCTs with a total of 910 patients (550 with heparin, 360 without heparin) were included. The forest plot analysis initially indicated no significant difference in the suppression of myocardial FDG uptake between the UFH and non-UFH groups (OR 2.279, 95% CI 0.593 to 8.755, p = 0.23), with a high degree of statistical heterogeneity (I2 = 91.16%). Further subgroup analysis showed that the fixed fasting duration group with UFH administration had statistically significant suppression of myocardial FDG uptake (OR 4.452, 95% CI 1.221 to 16.233, p = 0.024), while the varying fasting duration group did not show a significant effect. 
Conclusions: According to the findings of our meta-analysis, we suggest that intravenous administration of UFH can be considered as a supplementary approach to suppress myocardial FDG uptake. abstract_id: PUBMED:35344131 Atlas of non-pathological solitary or asymmetrical skeletal muscle uptake in [18F]FDG-PET. Positron emission tomography (PET) using 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) is widely used in oncology and other fields. In [18F]FDG PET images, increased muscle uptake is observed owing to exercise load or muscle tension, in addition to malignant tumors and inflammation. Moreover, we occasionally observe non-pathological solitary or unilateral skeletal muscle uptake for which the exact reason is difficult to explain. In most cases, we can interpret it as not having pathological significance. However, it is important to recognize such muscle uptake patterns to avoid misdiagnosing them as pathological ones. Therefore, the teaching point of this pictorial essay is to comprehend the patterns of solitary or asymmetrical skeletal muscle uptake seen in routine [18F]FDG-PET scans. As an educational goal, you will be able to name the muscles in which intense physiological [18F]FDG uptake can be observed, differentiate between physiological muscle uptake and a lesion, and discuss uncertain muscle uptake with physicians or specialists. abstract_id: PUBMED:37258954 Meta-analysis of the effectiveness of heparin in suppressing physiological myocardial FDG uptake in PET/CT. Background: The present meta-analysis aims to investigate the effectiveness of heparin administration in suppressing physiological myocardial 18F-fluorodeoxyglucose (FDG) uptake on positron emission tomography (PET)/computed tomography (CT), as its role in this regard has not been well investigated. Methods: PRISMA guidelines were used to interrogate the PubMed, Embase, Cochrane library, Web of Knowledge, and www.clinicaltrail.gov databases from the earliest records to March 2023. The final analysis included five randomized controlled trials (RCTs). Meta-analysis was conducted to compare the effectiveness of unfractionated heparin (UFH) administration versus non-UFH administration, and subgroup analysis based on fixed and variable fasting durations was conducted. Effect sizes were pooled using a random-effects model, and the pooled odds ratios (ORs) were calculated. Results: Five eligible RCTs with a total of 910 patients (550 with heparin, 360 without heparin) were included. The forest plot analysis initially indicated no significant difference in the suppression of myocardial FDG uptake between the UFH and non-UFH groups (OR 2.279, 95% CI 0.593 to 8.755, p = 0.23), with a high degree of statistical heterogeneity (I2 = 91.16%). Further subgroup analysis showed that the fixed fasting duration group with UFH administration had statistically significant suppression of myocardial FDG uptake (OR 4.452, 95% CI 1.221 to 16.233, p = 0.024), while the varying fasting duration group did not show a significant effect. Conclusions: According to the findings of our meta-analysis, we suggest that intravenous administration of UFH can be considered as a supplementary approach to suppress myocardial FDG uptake. Answer: The evaluation of physiological FDG uptake in the skeleton in adults indicates that it is not uniformly distributed. A study involving 158 patients found that FDG uptake varied throughout the skeleton, with the highest uptake seen in the L3 vertebra and the lowest in the diaphysis of the femur.
The uptake was also non-uniform in the vertebral column, with SUVmax gradually increasing from the cervical to the lumbar spine. Additionally, the study observed that the mean skeletal SUVmax decreased with age in both genders (PUBMED:24950889). This non-uniform distribution may be related to the normal distribution of red bone marrow and the glycolytic metabolism of the skeleton, which changes with age.
Instruction: Does endothelin work as an intrarenal mechanism to alter pressure natriuresis in spontaneously hypertensive rats? Abstracts: abstract_id: PUBMED:8021478 Does endothelin work as an intrarenal mechanism to alter pressure natriuresis in spontaneously hypertensive rats? Objective: To study the possible involvement of intrarenal endothelial dysfunction in the modulation of the pressure-natriuresis curve in spontaneously hypertensive rats (SHR). Methods: We examined the effect of endothelin on pressure natriuresis in isolated perfused kidneys from 16-week-old SHR and Wistar-Kyoto (WKY) rats. Plasma and urinary endothelin levels in intact rats and the rate of endothelin release from isolated kidneys were also determined. Results: Urinary sodium excretion by SHR kidneys was 60% less than by WKY rat kidneys at a given perfusion pressure. Endothelin-1 increased the renal vascular resistance dose-dependently and the change was comparable in SHR and WKY rats. A high perfusate concentration of endothelin-1 markedly reduced urinary sodium excretion, resulting in a significant rightwards shift of the pressure-natriuresis curve. However, endothelin-1 at concentrations below 0.1 nmol/l did not decrease urinary sodium excretion, despite its renal vasoconstrictory activity. In a different in vivo study, plasma endothelin-like immunoreactivity was similar in the two groups, as was the urinary endothelin excretion. However, the rate of endothelin release from isolated SHR kidneys was slightly greater than from WKY rat kidneys. Conclusion: Since the difference in endothelin levels is not remarkable, it seems unlikely that increased intrarenal production of endothelin plays a role in the maintenance of hypertension in SHR by modulating the pressure-natriuresis curve. abstract_id: PUBMED:1964405 Role of intrarenal renin-angiotensin system on pressure-natriuresis in spontaneously hypertensive rats. The pressure-natriuresis relationships in spontaneously hypertensive rats (SHR) and Wistar-Kyoto rats (WKY) were characterized with or without intrarenal renin-angiotensin system (RAS) blockade. The pressure-natriuresis relationship in SHR was shifted toward higher pressure in comparison to WKY. The inhibition of intrarenal RAS by MK-422 (0.3 ug/kg/min) in SHR enabled more sodium to be excreted at the same pressure (P less than 0.05), whereas no significant changes were observed in WKY. In SHR, during administration of Thi5,8, D-Phe7-bradykinin (50 micrograms/kg/min), the natriuretic responses to MK-422 were maintained. Intrarenal infusion of Sar1, Ile8-angiotensin (70 ng/kg/min) into SHR increased sodium excretion accompanied by an increase in renal plasma flow. Intrarenally administered angiotensin I (10 ng/kg/min) into WKY showed antinatriuretic effects with minimal changes in renal hemodynamics. These results indicate that alteration of intrarenal RAS in SHR might contribute to reset the pressure-natriuresis relationship. abstract_id: PUBMED:6599682 Resetting of pressure-natriuresis and frusemide sensitivity in spontaneously hypertensive rats. We compared pressure-natriuresis in isolated perfused kidneys of spontaneously hypertensive rats (SHR), and age-matched controls, and studied the effect of frusemide on sodium excretion. Okamoto SHR and age-matched Wistar-Kyoto controls (WKY) were used. Conscious BP was measured in a tail artery cannulated before the experiment.
Isolated kidneys were perfused at 37 degrees C and glomerular filtration rate, urinary sodium excretion (UNaV) and percentage sodium reabsorption (%TNa) were measured as mean perfusion pressure was increased in steps from 100 to 180 mmHg and repeated after addition of frusemide. At all perfusion pressures GFR and UNaV were lower in SHR and %TNa higher, consistent with a 50 mmHg rightward shift of the pressure-natriuresis relationship in SHR. However, at intrarenal perfusion pressure equal to MBP, sodium excretion was the same (2.9 microEq/min/g WKY; 2.7 microEq/min/g SHR). Subsequent response to frusemide was markedly reduced in SHR. We conclude that resetting of pressure-natriuresis in SHR compensates exactly for increased renal perfusion pressure. The mechanism by which these are so precisely linked is not known, nor is the reason for the blunted sensitivity to frusemide in SHR, but it is possible that Na-K-Cl cotransport in Henle's loop may be altered in this genetic model of hypertension. abstract_id: PUBMED:1730439 Normalization of pressure-natriuresis by nisoldipine in spontaneously hypertensive rats. This study examined whether the calcium antagonist nisoldipine can shift the relations between sodium excretion, papillary blood flow, renal interstitial pressure, and renal perfusion pressure toward lower pressures in spontaneously hypertensive rats. Mean arterial pressure decreased similarly by 9% and 12% in Wistar-Kyoto and spontaneously hypertensive rats after nisoldipine (0.5 microgram/kg bolus + 0.017 microgram/kg/min). Urine flow and sodium excretion increased by 35% and 24% in Wistar-Kyoto rats after nisoldipine. In contrast, urine flow and sodium excretion rose by 121% and 132% in spontaneously hypertensive rats, and fractional sodium excretion rose from 1.9 +/- 0.3 to 4.2 +/- 0.4%. Control sodium excretion, papillary blood flow, and renal interstitial pressure were significantly lower in spontaneously hypertensive rats than in Wistar-Kyoto rats when compared at similar renal perfusion pressures. Sodium excretion, papillary blood flow, and renal interstitial pressure all increased in spontaneously hypertensive rats after nisoldipine, whereas it had no effect on papillary blood flow or renal interstitial pressure in Wistar-Kyoto rats. The relations among sodium excretion, papillary blood flow, renal interstitial pressure, and renal perfusion pressure were shifted toward lower pressures in spontaneously hypertensive rats given nisoldipine and became similar to those seen in Wistar-Kyoto rats. These results indicate that nisoldipine normalizes the relations among sodium excretion, renal interstitial pressure, papillary blood flow, and renal perfusion pressure in spontaneously hypertensive rats perhaps by correcting the defect in renal medullary perfusion associated with resetting of pressure natriuresis in this model of hypertension. abstract_id: PUBMED:1986983 Effect of enalapril treatment on the pressure-natriuresis curve in spontaneously hypertensive rats. The effect of chronic angiotensin I converting enzyme inhibition on the pressure-natriuresis relation was studied in Wistar-Kyoto and spontaneously hypertensive rats. Enalapril maleate (25 mg.kg-1.day-1 in drinking water) was started at 4-5 weeks of age. At 7-9 weeks of age, the pressure-natriuresis relation was studied while the rats were under Inactin anesthesia 1 week after the right kidney and adrenal gland were removed. 
Neural and hormonal influences on the remaining kidney were fixed by surgical renal denervation, adrenalectomy, and infusion of a hormone cocktail (330 microliters.kg-1.min-1) containing high levels of aldosterone, arginine vasopressin, hydrocortisone, and norepinephrine dissolved in 0.9% NaCl containing 1% albumin. Changes in renal function resulting from alterations in renal artery pressure were compared between enalapril-treated and control rats. Mean arterial pressure (+/- SEM) under anesthesia was 118 +/- 5, 94 +/- 4, 175 +/- 3, and 124 +/- 2 mm Hg for control Wistar-Kyoto (n = 10), enalapril-treated Wistar-Kyoto (n = 10), control spontaneously hypertensive (n = 9), and enalapril-treated spontaneously hypertensive (n = 9) rats, respectively. When renal artery pressure was set at values above approximately 125 mm Hg, control spontaneously hypertensive rats excreted less sodium and water than control Wistar-Kyoto rats. Enalapril treatment resulted in a significant and similar shift to the left of the pressure-natriuresis relation in both strains of rats so that a lower renal artery pressure was required to excrete a similar amount of sodium when compared with their respective untreated controls.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:1992783 Role of renal nerves on pressure natriuresis in spontaneously hypertensive rats. An abnormal rightward shift of the pressure-natriuresis curve is a well known feature of the renal function in hypertension. The participation of intrinsic neural factors in the kidney in this phenomenon was investigated in anesthetized young and adult spontaneously hypertensive rats (SHR). At 7-8 wk of age, the renal pressure-diuresis curve and pressure-natriuresis curve were shifted to the left in denervated SHR compared with innervated animals. Fractional excretion of sodium was higher, and plasma renin activity was lower in denervated SHR. Glomerular filtration rate was not affected by renal denervation. In 13- to 15-wk-old SHR, renal denervation did not affect the pressure-diuresis and -natriuresis curves, although other parameters were changed compared with the results at 7-8 wk. In Wistar-Kyoto rats, the pressure-diuresis curve was shifted to the left by renal denervation at both ages. These results suggest that the renal nerves have an important effect on the renal pressure-diuresis and -natriuresis curves. However, renal innervation cannot be thought to cause an abnormal rightward shift of the pressure-diuresis and -natriuresis curves in SHR, especially in the established stage of hypertension. abstract_id: PUBMED:8230686 Effects of renal denervation on pressure-natriuresis in spontaneously hypertensive rats. To investigate the role of renal sympathetic nerve activity (RSNA) under developing and established hypertension, renal function was studied in chronically renal-denervated and sham-operated male spontaneously hypertensive rats (SHR) and control Wistar Kyoto rats (WKY) at 8 (early hypertensive) and 22 (established hypertensive) weeks of age. To further characterize the renal pressure-natriuresis-diuresis relationship in SHR, renal perfusion pressure (RPP) was reduced by aortic constriction to the level seen in age-matched WKY and the same studies were repeated. After denervation, urinary sodium excretion (UNaV), fractional excretion of sodium (FENa) and urine flow (UF) were increased in 8-week-old SHR (p < 0.01).
With the exceptions of UNaV and FENa in denervated 8-week-old SHR, renal cortical blood flow, glomerular filtration rate, UF, UNaV and FENa decreased with the reduction of RPP in all of the SHR groups. These results suggest that RSNA significantly influences renal sodium and fluid handling, thus contributing to the shifting of the arterial pressure-renal sodium excretion curve to the right along the pressure axis and/or to an increase in the steepness of the relationship in 8-week-old SHR. There appeared to be a marked difference in renal sodium handling between 8- and 22-week-old SHR. abstract_id: PUBMED:623262 Aldosterone in the exaggerated natriuresis of spontaneously hypertensive rats. The effect of mineralocorticoid hormones on the urinary responses of spontaneously hypertensive and normotensive rats to oral salt loading was determined. In response to a control salt load, the increase in urinary sodium excretion by the spontaneously hypertensive rats was significantly greater than that of the normotensive rats [48 +/- 6 (SE) mueq/h vs. 26 +/- 4 mueq/h]. Treatment with spironolactone did not significantly alter the natriuretic response of the spontaneously hypertensive rats (43 +/- 8 mueq/h) to another salt load, but increased the natriuretic response of the normotensive rats (55 +/- 7 mueq/h) to that of the hypertensive rats. D-Aldosterone suppressed the natriuretic response to salt loading of the hypertensive rats to a level which was not significantly different from that of the normotensive rats. Plasma aldosterone concentration was significantly lower in the spontaneously hypertensive rats than in the normotensive rats (18.0 +/- 3.3 and 52.1 +/- 5.2 ng/100 ml, respectively). Neither extracellular fluid volume nor total body water in spontaneously hypertensive and normotensive rats were significantly different. The data support the hypothesis that the exaggerated natriuresis in the spontaneously hypertensive rats is mediated by a relative lack by these rats of aldosterone-mediated distal tubular sodium reabsorption. abstract_id: PUBMED:9533421 Exaggerated natriuresis as a candidate intermediate phenotype in spontaneously hypertensive rats. Objective: To determine whether exaggerated natriuresis and exaggerated renal sympathoinhibition during volume loading constitute an intermediate phenotype in spontaneously hypertensive rats. Design: The borderline hypertensive rat, the F1 of a cross between a spontaneously hypertensive rat and a normotensive Wistar-Kyoto rat, is a NaCl-sensitive model of genetic hypertension. In addition to hypertension, borderline hypertensive rats fed 8% NaCl food develop characteristic alterations in regulation of renal sympathetic nerve activity and neural regulation of renal function similar to those in the spontaneously hypertensive rat parent. Like the Wistar-Kyoto rat parent, borderline hypertensive rats fed 1% NaCl food remain normotensive and do not exhibit these alterations in renal sympathetic neural mechanisms. These renal sympathetic neural mechanisms constitute a complex quantitative trait that could represent an intermediate phenotype. Methods: A backcross population, developed by mating borderline hypertensive rats with Wistar-Kyoto rats, was fed 8% NaCl food for 12 weeks from age 4 to 16 weeks.
Responses to intravenous isotonic saline volume loading (10% body weight/30 min) in 81 backcross rats chronically instrumented for measurement of mean arterial pressure, renal sympathetic nerve activity, and urinary sodium excretion were determined. Results: Mean arterial pressure was 105-180 mmHg and was not correlated to the magnitude either of the decrease in renal sympathetic nerve activity or of the increase in urinary sodium excretion during volume loading. Conclusions: These two aspects of the complex quantitative trait, exaggerated natriuresis and exaggerated renal sympathoinhibition during volume loading, are not part of an intermediate phenotype in spontaneously hypertensive rats. abstract_id: PUBMED:7679457 Role of NO on pressure-natriuresis in Wistar-Kyoto and spontaneously hypertensive rats. We investigated the role of the endothelium-derived relaxing factor nitric oxide (NO) on pressure-natriuresis in spontaneously hypertensive rats (SHR) and Wistar-Kyoto rats (WKY) using in vivo perfusion studies. Differences in the neural and hormonal background to the kidney were minimized by renal denervation and by holding plasma vasopressin, aldosterone, corticosterone, and norepinephrine levels constant by intravenous infusion. In WKY, elevation of renal perfusion pressure (RPP) from 115 to 157 mm Hg increased urinary sodium excretion 4.5 to 14.8 microEq/min/g kidney wt, and the slope of its linear regression was 0.21 microEq/min/g kidney wt/mm Hg. Infusion of an inhibitor of NO synthase, L-NMMA (1 mg/min/kg), lowered this slope (P < 0.05) but L-arginine (3 mg/min/kg) did not change it. By contrast, the impaired pressure-natriuresis response of SHR was ameliorated by L-arginine (slope: 0.08 to 0.16; P < 0.05), while L-NMMA did not blunt it further. GFR and renal plasma flow (RPF) were well autoregulated in both strains, but L-NMMA lowered RPF significantly (SHR: from 4.2 to 2.6 ml/min/g kidney wt; WKY: 4.5 to 2.5 ml/min/g kidney wt). Moreover, when infused simultaneously, all these individual effects of L-NMMA and L-arginine were nullified. These results suggest that NO plays an important role in the pressure-natriuresis mechanism. Answer: Based on the provided abstract (PUBMED:8021478), it seems unlikely that increased intrarenal production of endothelin plays a role in the maintenance of hypertension in spontaneously hypertensive rats (SHR) by modulating the pressure-natriuresis curve. The study found that urinary sodium excretion by SHR kidneys was significantly less than by Wistar-Kyoto (WKY) rat kidneys at a given perfusion pressure. Although endothelin-1 increased renal vascular resistance dose-dependently in both SHR and WKY rats, and high concentrations of endothelin-1 markedly reduced urinary sodium excretion resulting in a significant rightward shift of the pressure-natriuresis curve, endothelin-1 at concentrations below 0.1 nmol/l did not decrease urinary sodium excretion despite its renal vasoconstrictory activity. Additionally, plasma endothelin-like immunoreactivity and urinary endothelin excretion were similar in both SHR and WKY groups, and the rate of endothelin release from isolated SHR kidneys was only slightly greater than from WKY rat kidneys. Therefore, the difference in endothelin levels is not remarkable enough to suggest a significant role for endothelin in altering pressure natriuresis in SHR.
Instruction: Direct brain recordings reveal impaired neural function in infants with single-suture craniosynostosis: a future modality for guiding management? Abstracts: abstract_id: PUBMED:25534054 Direct brain recordings reveal impaired neural function in infants with single-suture craniosynostosis: a future modality for guiding management? Background: Patients with single-suture craniosynostosis (SSC) are at an elevated risk for long-term learning disabilities. Such adverse outcomes indicate that the early development of neural processing in SSC may be abnormal. At present, however, the precise functional derangements of the developing brain remain largely unknown. Event-related potentials (ERPs) are a form of noninvasive neuroimaging that provide direct measurements of cortical activity and have shown value in predicting long-term cognitive functioning. The current study used ERPs to examine auditory processing in infants with SSC to help clarify the developmental onset of delays in this population. Methods: Fifteen infants with untreated SSC and 23 typically developing controls were evaluated. ERPs were recorded during the presentation of speech sounds. Analyses focused on the P150 and N450 components of auditory processing. Results: Infants with SSC demonstrated attenuated P150 amplitudes relative to typically developing controls. No differences in the N450 component were identified between untreated SSC and controls. Conclusions: Infants with untreated SSC demonstrate abnormal speech sound processing. Atypicalities are detectable as early as 6 months of age and may represent precursors to long-term language delay. Electrophysiological assessments provide a precise examination of neural processing in SSC and hold potential as a future modality to examine the effects of surgical treatment on brain development. abstract_id: PUBMED:12140415 Auditory ERPs reveal brain dysfunction in infants with plagiocephaly. It is suspected that the developmental delay in school-aged children diagnosed as infants suffering from plagiocephaly is caused by the modification of the skull form. To detect possible cognitive impairment in these children, we examined auditory ERPs to tones in infant patients. The infants with plagiocephaly exhibited smaller amplitudes of the P150 and the N250 responses to tones than healthy controls. Differences between the patients and control subjects indicate that already at this early age the presence of the plagiocephalic skull signals compromise of brain functioning. The present data suggest that most of the plagiocephalic infants have an elevated risk of auditory processing disorders. In the current study we demonstrated, for the first time, that the central sound processing, as reflected by ERPs, is affected in children with plagiocephaly. abstract_id: PUBMED:32941205 Neurologic Characterization of Craniosynostosis: Can Direct Brain Recordings Predict Language Development? Purpose: Nonsyndromic craniosynostosis (NSC) is associated with language deficits. Conventional tests, such as the Bayley Scales of Infant Development (BSID), may not reflect accurate long-term cognition. Alternatively, mismatch negativity (MMN) waves recorded via electroencephalogram (EEG) measure neural responses to speech and may objectively predict language development. This study aimed to (1) correlate infant MMN to future language achievement and (2) compare MMN among subtypes of NSC. 
Methods: Pre and postoperatively (mean operative age 9.5 months), NSC participants received the BSID and EEG phoneme-discrimination paradigm (80 dB, 250 Hz). The MMN was the largest negative amplitude in the difference wave 80 to 300 ms after stimuli. To measure cognitive outcome, patients completed a neurodevelopmental battery (Wechsler-Abbreviated Scale of Intelligence and Wechsler-Fundamentals) at >6 years of age. Results: Eleven NSC patients with EEG testing in infancy were neurocognitively tested (average age 8.0 years; 27% female; 55% sagittal, 27% metopic, 9% unicoronal, 9% sagittal/metopic). The left frontal cluster MMN strongly correlated with word-reading (r = 0.713, P = 0.031), reading-comprehension (r = 0.745, P = 0.021), and language-composites (r = 0.0771, P = 0.015). Conversely, BSID scores did not yield significant predictive value (r < 0.5, P > 0.05). Follow-up event related potentials (ERP) comparison included 39 normal control, 18 sagittal, 17 metopic, 6 unilateral-coronal infants. Preoperatively, sagittal (P = 0.003) and metopic (P = 0.003) patients had attenuated left frontal MMN compared to controls. Postoperatively, the sagittal cohort was normalized to controls while metopic patients retained attenuations (P = 0.041). Conclusion: ERP assessment in NSC had significantly better predictive value for future neurocognition than the BSID. Preoperatively, sagittal and metopic patients had attenuated neural response to language; postoperatively, sagittal patients had improved responses in comparison to metopic patients. Use of ERP assessment may help tailor treatment for language deficits earlier in development. abstract_id: PUBMED:20815706 Intracranial volume and whole brain volume in infants with unicoronal craniosynostosis. Objective: Craniosynostosis has been hypothesized to result in alterations of the brain and cerebral blood flow due to reduced intracranial volume, potentially leading to cognitive deficits. In this study we test the hypothesis that intracranial volume and whole brain volume in infants with unilateral coronal synostosis differs from those in unaffected infants. Design: Our study sample consists of magnetic resonance images acquired from 7- to 72-week-old infants with right unilateral coronal synostosis prior to surgery (n = 10) and age-matched unaffected infants (n = 10). We used Analyze 9.0 software to collect three cranial volume measurements. We used nonparametric tests to determine whether the three measures differ between the two groups. Correlations were calculated between age and the three volume measures in each group to determine whether the growth trajectory of the measurements differ between children with right unicoronal synostosis and unaffected infants. Results: Our results show that the three volume measurements are not reduced in infants with right unicoronal synostosis relative to unaffected children. Correlation analyses between age and various volume measures show similar correlations in infants with right unicoronal synostosis compared with unaffected children. Conclusions: Our results show that the relationship between brain size and intracranial size in infants with right unicoronal synostosis is similar to that in unaffected children, suggesting that reduced intracranial volume is not responsible for alterations of the brain in craniosynostosis. abstract_id: PUBMED:17884704 Perioperative management of pediatric patients with craniosynostosis.
Craniosynostosis, premature closures of the skull sutures, results in dysmorphic features if left untreated. Brain growth and cognitive development may also be impacted. Craniosynostosis repair is usually performed in young infants and has its perioperative challenges. This article provides background information about the different forms of craniosynostosis, with an overview of associated anomalies, genetic influences, and their connection with cognitive function. It also discusses the anesthetic considerations for perioperative management, including blood-loss management and strategies to reduce homologous blood transfusions. abstract_id: PUBMED:37506075 Brain MRI segmentation of Zika-Exposed normocephalic infants shows smaller amygdala volumes. Background: Infants with congenital Zika syndrome (CZS) are known to exhibit characteristic brain abnormalities. However, the brain anatomy of Zika virus (ZIKV)-exposed infants, born to ZIKV-positive pregnant mothers, who have normal-appearing head characteristics at birth, has not been evaluated in detail. The aim of this prospective study is, therefore, to compare the cortical and subcortical brain structural volume measures of ZIKV-exposed normocephalic infants to age-matched healthy controls. Methods And Findings: We acquired T2-MRI of the whole brain of 18 ZIKV-exposed infants and 8 normal controls on a 3T MRI scanner. The MR images were auto-segmented into eight tissue types and anatomical regions including the white matter, cortical grey matter, deep nuclear grey matter, cerebrospinal fluid, amygdala, hippocampus, cerebellum, and brainstem. We determined the volumes of these regions and calculated the total intracranial volume (TICV) and head circumference (HC). We compared these measurements between the two groups, controlling for infant age at scan, by first comparing results for all subjects in each group and secondly performing a subgroup analysis for subjects below 8 weeks of postnatal age at scan. ZIKV-exposed infants demonstrated a significant decrease in amygdala volume compared to the control group in both the group and subgroup comparisons (p < 0.05, corrected for multiple comparisons using FDR). No significant volume differences were observed in TICV, HC, or any specific brain tissue structures or regions. Study limitations include small sample size, which was due to abrupt cessation of extramural funding as the ZIKV epidemic waned. Conclusion: ZIKV-exposed infants exhibited smaller volumes in the amygdala, a brain region primarily involved in emotional and behavioral processing. This brain MRI finding may lead to poorer behavioral outcomes and warrants long-term monitoring of pediatric cases of infants with gestational exposure to Zika virus as well as other neurotropic viruses. abstract_id: PUBMED:17635200 Visual function in infants with non-syndromic craniosynostosis. The aim of this study was to assess various aspects of visual function in children with single-suture, non-syndromic craniosynostosis. Thirty-eight infants (28 males, 10 females; age range 3.5-13mo, mean age 7mo, 11 with plagiocephaly, 12 with trigonocephaly, and 15 with scaphocephaly), were assessed with a battery of tests specifically designed to assess various aspects of visual function in infancy. Thirty-two of the 38 infants had at least one abnormality on one of the aspects of visual function assessed.
Abnormal eye movements were found in eight infants of the whole cohort and were mainly found in infants with plagiocephaly (6/11), who also had frequent visual field abnormalities (5/11). In contrast, fixation shift, an aspect of visual function related to the integrity of parietal lobes, was more frequently abnormal in patients with scaphocephaly. Our results suggest that the presence and severity of visual impairment is related to the type of craniosynostosis. Follow-up studies after surgical correction are needed to evaluate the possible beneficial effects of reconstructive surgery on visual function. abstract_id: PUBMED:17513129 Displacement of brain regions in preterm infants with non-synostotic dolichocephaly investigated by MRI. Regional investigations of newborn MRI are important to understand the appearance and consequences of early brain injury. Previously, regionalization in neonates has been achieved with a Talairach parcellation, using internal landmarks of the brain. Non-synostotic dolichocephaly defines a bi-temporal narrowing of the preterm infant's head caused by pressure on the immature skull. The impact of dolichocephaly on brain shape and regional brain shift, which may compromise the validity of the parcellation scheme, has not yet been investigated. Twenty-four preterm and 20 fullterm infants were scanned at term equivalent. Skull shapes were investigated by cephalometric measurements and population registration. Brain tissue volumes were calculated to rule out brain injury underlying skull shape differences. The position of Talairach landmarks was evaluated. Cortical structures were segmented to determine a positional shift between both groups. The preterm group displayed dolichocephalic head shapes and had similar brain volumes compared to the mesocephalic fullterm group. In preterm infants, Talairach landmarks were consistently positioned relative to each other and to the skull base, but were displaced with regard to the calvarium. The frontal and superior region was enlarged; central and temporal gyri and sulci were shifted comparing preterm and fullterm infants. We found that, in healthy preterm infants, dolichocephaly led to a shift of cortical structures, but did not influence deep brain structures. We concluded that the validity of a Talairach parcellation scheme is compromised and may lead to a miscalculation of regional brain volumes and inconsistent parcel contents when comparing infant populations with divergent head shapes. abstract_id: PUBMED:28790902 Integration of Brain and Skull in Prenatal Mouse Models of Apert and Crouzon Syndromes. The brain and skull represent a complex arrangement of integrated anatomical structures composed of various cell and tissue types that maintain structural and functional association throughout development. Morphological integration, a concept developed in vertebrate morphology and evolutionary biology, describes the coordinated variation of functionally and developmentally related traits of organisms. Syndromic craniosynostosis is characterized by distinctive changes in skull morphology and perceptible, though less well studied, changes in brain structure and morphology. Using mouse models for craniosynostosis conditions, our group has precisely defined how unique craniosynostosis causing mutations in fibroblast growth factor receptors affect brain and skull morphology and dysgenesis involving coordinated tissue-specific effects of these mutations. 
Here we examine integration of brain and skull in two mouse models for craniosynostosis: one carrying the FGFR2c C342Y mutation associated with Pfeiffer and Crouzon syndromes and a mouse model carrying the FGFR2 S252W mutation, one of two mutations responsible for two-thirds of Apert syndrome cases. Using linear distances estimated from three-dimensional coordinates of landmarks acquired from dual modality imaging of skull (high resolution micro-computed tomography and magnetic resonance microscopy) of mice at embryonic day 17.5, we confirm variation in brain and skull morphology in Fgfr2cC342Y/+ mice, Fgfr2+/S252W mice, and their unaffected littermates. Mutation-specific variation in neural and cranial tissue notwithstanding, patterns of integration of brain and skull differed only subtly between mice carrying either the FGFR2c C342Y or the FGFR2 S252W mutation and their unaffected littermates. However, statistically significant and substantial differences in morphological integration of brain and skull were revealed between the two mutant mouse models, each maintained on a different strain. Relative to the effects of disease-associated mutations, our results reveal a stronger influence of the background genome on patterns of brain-skull integration and suggest robust genetic, developmental, and evolutionary relationships between neural and skeletal tissues of the head. abstract_id: PUBMED:38102659 Neural EGFL-like 1, a craniosynostosis-related osteochondrogenic molecule, strikingly associates with neurodevelopmental pathologies. Various craniofacial syndromes cause skeletal malformations and are accompanied by neurological abnormalities at different levels, leading to tremendous biomedical, financial, social, and psychological burdens. Accumulating evidence highlights the importance of identifying and characterizing the genetic basis that synchronously modulates musculoskeletal and neurobehavioral development and function. Particularly, previous studies from different groups have suggested that neural EGFL-like-1 (Nell-1), a well-established osteochondrogenic inducer whose biopotency was initially identified in the craniofacial tissues, may also play a vital role in the central nervous system, particularly regarding neurological disorder pathologies. To provide first-hand behavior evidence if Nell-1 also has a role in central nervous system abnormalities, we compared the Nell-1-haploinsufficient (Nell-1+/6R) mice with their wild-type counterparts regarding their repetitive, social communication, anxiety-related, locomotor, sensory processing-related, motor coordination, and Pavlovian learning and memory behaviors, as well as their hippocampus transcriptional profile. Interestingly, Nell-1+/6R mice demonstrated core autism spectrum disorder-like deficits, which could be corrected by Risperidone, an FDA-approved anti-autism, anti-bipolar medicine. Besides, transcriptomic analyses identified 269 differential expressed genes, as well as significantly shifted alternative splicing of ubiquitin B pseudogene Gm1821, in the Nell-1+/6R mouse hippocampus, which confirmed that Nell-1 plays a role in neurodevelopment. Therefore, the current study verifies that Nell-1 regulates neurological development and function for the first time. Moreover, this study opens new avenues for understanding and treating craniofacial patients suffering from skeletal deformities and behavior, memory, and cognition difficulties by uncovering a novel bone-brain-crosstalk network. 
Furthermore, the transcriptomic analysis provides the first insight into deciphering the mechanism of Nell-1 in neurodevelopment. Answer: Direct brain recordings, specifically event-related potentials (ERPs), have been used to examine auditory processing in infants with single-suture craniosynostosis (SSC) and have revealed impaired neural function. In a study with untreated SSC infants, attenuated P150 amplitudes were observed compared to typically developing controls, indicating abnormal speech sound processing as early as 6 months of age. These atypicalities may represent precursors to long-term language delay, suggesting that electrophysiological assessments could serve as a precise examination of neural processing in SSC and potentially as a future modality to examine the effects of surgical treatment on brain development (PUBMED:25534054). Similarly, auditory ERPs in infants with plagiocephaly, a condition often associated with SSC, showed smaller amplitudes of the P150 and N450 responses to tones compared to healthy controls. This finding suggests that these infants have an elevated risk of auditory processing disorders and that central sound processing is affected in children with plagiocephaly (PUBMED:12140415). Moreover, mismatch negativity (MMN) waves recorded via electroencephalogram (EEG) in nonsyndromic craniosynostosis (NSC) patients have been correlated with future language achievement. Preoperatively, patients with sagittal and metopic NSC had attenuated neural responses to language, but postoperatively, the sagittal cohort showed normalized responses, unlike the metopic patients. This indicates that ERP assessment in NSC has a significantly better predictive value for future neurocognition than conventional tests like the Bayley Scales of Infant Development (BSID) (PUBMED:32941205). These findings collectively suggest that direct brain recordings, such as ERPs and MMN waves, can reveal impaired neural function in infants with SSC and may be a valuable tool for guiding management and predicting long-term cognitive outcomes.
Instruction: Lymphocyte subpopulations and cytokines in nasal polyps: is there a local immune system in the nasal polyp? Abstracts: abstract_id: PUBMED:15138416 Lymphocyte subpopulations and cytokines in nasal polyps: is there a local immune system in the nasal polyp? Purpose: The pathogenesis of chronic hyperplastic rhinosinusitis with massive nasal polyposis is still not entirely known. The present study evaluates the lymphocyte subpopulations and their production of cytokines using a technique for detection of intracytoplasmic cytokines by flow cytometry. This information may allow us to determine whether the source of these lymphocytes is from peripheral blood, the common mucosal immune system, or both. Methods: Detection of intracytoplasmic cytokines by flow cytometry was performed using a fluoresceinated monoclonal antibody directed against CD4+ and CD8+ lymphocytes and a rhodamine-labeled intracytoplasmic monoclonal antibody directed against four cytokines. In this way, the percentage of lymphocytes synthesizing TH1 and TH2 cytokines were identified in nasal polyp lymphocytes and the corresponding peripheral blood lymphocytes of 13 patients. Results: Lymphocytes producing interferon-gamma and IL-2, as well as IL-4 and IL-5, were found in the nasal polyps, suggesting that the nasal polyp possesses both TH1 and TH2 cytokine expression. There are also significant differences between the percentage of lymphocytes producing these cytokines between nasal polyps and peripheral blood, suggesting that nasal polyp lymphocytes derive from at least another source than only peripheral blood lymphocytes. Statistical analysis of four groups of patients demonstrated no statistically significant difference in the lymphocyte subpopulations in atopic versus non-atopic patients, nor aspirin-intolerant versus aspirin-tolerant patients. In general, CD8 cells always produce more interferon-gamma than IL-2 in both peripheral blood and nasal polyps. In contrast with this data, CD4 cells produce more IL-2 in the peripheral blood than in nasal polyps. Conclusions: Data support the concept that nasal polyp lymphocyte subpopulations may be derived from both the local mucosal immune system as well as from random migration of peripheral blood lymphocytes secondary to adhesion molecules and chemokines, which are known to be present in nasal polyps. abstract_id: PUBMED:8291744 Lymphocyte subsets and antigen-specific IgE antibody in nasal polyps. We tried to elucidate the role of allergic factors in the pathogenesis of nasal polyps. Nasal polyps were obtained from 22 patients with chronic sinusitis which included eight patients proved to have nasal allergy by history, skin test, and serum-specific IgE against house dust mite. Immunohistochemical studies of lymphocyte subpopulations in the mucosa of nasal polyps were performed with monoclonal antibodies, and the concentrations of antigen-specific IgE in nasal polyps were measured by the fluoroallergosorbent test. In the epithelium, few HLA-DR+ cells were constantly present. In the submucosa, pan T cell marker CD2 was detected more often than CD19 (B cell), and more CD8 (T suppressor/cytotoxic) cells than CD4 (T helper/inducer) cells were found. IgE-producing plasma cells were rarely present. The lymphocyte subpopulations and the levels of antigen-specific IgE in nasal polyps were not different between the allergic and nonallergic groups.
This suggests that allergy may not be the cause, and cellular immunity of antigen presenting cells and T lymphocytes, which consecutively induce infiltration and degranulation of mast cells by the production of cytokines, may be involved in the formation of nasal polyps with sinusitis. abstract_id: PUBMED:24627407 Immunomodulatory Effect of Mesenchymal Stem Cells on T Lymphocyte and Cytokine Expression in Nasal Polyps. Objectives: Adipose tissue-derived stem cells (ASCs) have been reported to have immunomodulatory effects in various inflammatory diseases, including asthma and allergic rhinitis, through the induction of T cell anergy. Nasal polyps (NPs) are a chronic inflammatory disease in the nose and paranasal sinus characterized histologically by the infiltration of inflammatory cells, such as eosinophils or lymphocytes. This study was performed to investigate whether ASCs have immunomodulatory effects on T lymphocyte and cytokine expression in eosinophilic NPs. Study Design: Basic science experimental study. Setting: University tertiary care facility. Subjects And Methods: NP specimens were obtained from 20 patients with chronic rhinosinusitis and eosinophilic NPs. ASCs were isolated and cultured from the abdominal fat of 15 subjects undergoing intra-abdominal surgery. Infiltrating cells (1 × 10(6)) were isolated from NP tissue and co-cultured with 1 × 10(5) ASCs. To determine whether ASCs affect infiltrating T lymphocyte and cytokine expression in eosinophilic NP, T lymphocyte subsets and cytokine expression were analyzed before and after ASC treatment. Results: ASC treatment significantly decreased the proportions of CD4(+) and CD8(+) T cells. After ASC treatment, Th2 cytokine (interleukin [IL]-4 and IL-5) levels decreased significantly. In contrast, levels of Th1 (interferon-γ and IL-2) and regulatory cytokines (transforming growth factor-β and IL-10) increased significantly after ASC treatment. Conclusions: ASCs have immunomodulatory effects in the eosinophilic inflammation of NPs, characterized by down-regulation of activated T lymphocytes and a Th2 immune response. These effects would be expected, over time, to significantly contribute to the control of eosinophilic inflammation and, possibly, growth of eosinophilic NPs. abstract_id: PUBMED:37206761 Pathogenesis of Nasal Polyposis: Current Trends. Chronic Rhinosinusitis (CRS) is characterized by edema of the sub-epithelial layers, but, only specific types of CRS are developing polyps. Nasal polyposis may develop under different pathogenetic mechanisms rendering the typical macroscopic classification of CRS, with or without nasal polyps, rather deficient. Currently, we approach nasal polyposis, in terms of diagnosis and treatment, according to its endotype, which means that we focus on the specific cells and cytokines that are participating in its pathogenesis. It appears that the molecular procedures that contribute to polyp formation, initiating with a Th-2 response of the adaptive immune system, are local phenomena occurring in the sub-epithelial layers of the mucosa. Several hypotheses are trying to approach the etiology that drives the immune response towards Th-2 type. Extrinsic factors, like fungi, Staphylococcus superantigens, biofilms, and altered microbiome can contribute to a modified and intense local reaction of the immune system. 
Some hypotheses based on intrinsic factors like the elimination of Treg lymphocytes, low local vitamin-D levels, high levels of leukotrienes, epithelial to mesenchymal transition (EMT) induced by hypoxia, and altered levels of NO, add pieces to the puzzle of the pathogenesis of nasal polyposis. Currently, the most complete theory is that of epithelial immune barrier dysfunction. Intrinsic and extrinsic conditions can damage the epithelial barrier rendering sub-epithelial layers more vulnerable to invasion by pathogens that trigger a Th-2 response of the adaptive immune system. Th2 cytokines, subsequently, induce the accumulation of eosinophils and IgE together with the remodeling of the stroma in the sub-epithelial layers leading, eventually, to the formation of nasal polyps. abstract_id: PUBMED:36060258 Identification of hub genes and immune cell infiltration characteristics in chronic rhinosinusitis with nasal polyps: Bioinformatics analysis and experimental validation. The aim of our study is to reveal the hub genes related to the pathogenesis of chronic rhinosinusitis with nasal polyps (CRSwNP) and their association with immune cell infiltration through bioinformatics analysis combined with experimental validation. In this study, through differential gene expression analysis, 1,516 upregulated and 1,307 downregulated DEG were obtained from dataset GSE136825 of the GEO database. We identified 14 co-expressed modules using weighted gene co-expression network analysis (WGCNA), among which the most significant positive and negative correlations were MEgreen and MEturquoise modules, containing 1,540 and 3,710 genes respectively. After the intersection of the two modules and DEG, two gene sets-DEG-MEgreen and DEG-MEturquoise-were obtained, containing 395 and 1,168 genes respectively. Through GO term analysis, it was found that immune response and signal transduction are the most important biological processes. We found, based on KEGG pathway enrichment analysis, that osteoclast differentiations, cytokine-cytokine receptor interactions, and neuroactive ligand-receptor interactions are the most important in the two gene sets. Through PPI network analysis, we listed the top-ten genes for the concentrated connectivity of the two gene sets. Next, a few genes were verified by qPCR experiments, and FPR2, ITGAM, C3AR1, FCER1G, CYBB in DEG-MEgreen and GNG4, NMUR2, and GNG7 in DEG-MEturquoise were confirmed to be related to the pathogenesis of CRSwNP. NP immune cell infiltration analysis revealed a significant difference in the proportion of immune cells between the NP group and control group. Finally, correlation analysis between target hub genes and immune cells indicated that FPR2 and GNG7 had a positive or negative correlation with some specific immune cells. In summary, the discoveries of these new hub genes and their association with immune cell infiltration are of great significance for uncovering the specific pathogenesis of CRSwNP and searching for disease biomarkers and potential therapeutic targets. abstract_id: PUBMED:35572064 Epithelial immune regulation of inflammatory airway diseases: Chronic rhinosinusitis with nasal polyps (CRSwNP). Background: The epithelial immune regulation is an essential and protective feature of the barrier function of the mucous membranes of the airways. Damage to the epithelial barrier can result in chronic inflammatory diseases, such as chronic rhinosinusitis (CRS) or bronchial asthma. 
Thymic stromal lymphopoietin (TSLP) is a central regulator in the epithelial barrier function and is associated with type 2 (T2) and non-T2 inflammation. Materials And Methods: The immunology of chronic rhinosinusitis with polyposis nasi (CRSwNP) was analyzed in a literature search, and the existing evidence was determined through searches in Medline, Pubmed as well as the national and international study and guideline registers and the Cochrane Library. Human studies or studies on human cells that were published between 2010 and 2020 and in which the immune mechanisms of TSLP in T2 and non-T2 inflammation were examined were considered. Results: TSLP is an epithelial cytokine (alarmin) and a central regulator of the immune reaction, especially in the case of chronic airway inflammation. Induction of TSLP is implicated in the pathogenesis of many diseases like CRS and triggers a cascade of subsequent inflammatory reactions. Conclusion: Treatment with TSLP-blocking monoclonal antibodies could therefore open up interesting therapeutic options. The long-term safety and effectiveness of TSLP blockade has yet to be investigated. abstract_id: PUBMED:12847477 Impact of intranasal budesonide on immune inflammatory responses and epithelial remodeling in chronic upper airway inflammation. Background: Histologic and immunohistologic features of nasal polyps (NP) are similar to those observed in asthma, thus suggesting a similar immunopathology. Objective: The primary objective of this study was to further understand the anti-inflammatory and immunoregulatory effects of locally delivered corticosteroids. To this end, the effect of intranasal budesonide on the expression of specific cytokines, lymphocyte subsets, and epithelial remodeling in this model of airway tissue inflammation were studied. Methods: We used immunohistochemical techniques to examine nasal mucosae (NM) from healthy individuals and nasal polyp (NP) tissues from patients with nasal polyposis obtained before and after intranasal budesonide treatment. Results: First, the density of CD8(+) cells was markedly increased in NP tissues after intranasal budesonide treatment from 16.1 +/- 8.4 (M +/- SEM) per mm(2) to 39.9 +/- 24.1. Second, the density of cells immunoreactive for IL-4, IL-5, IFN-gamma, IL-12, and TGF-beta in NP was significantly greater than in control NM tissues. The density of IL-4(+) and IL-5(+) cells in NP tissues significantly decreased after budesonide treatment from 40 +/- 12 to 17.8 +/- 8 and from 19.3 +/- 11 to 10.4 +/- 7, respectively. In contrast, the density of IFN-gamma(+) and IL-12(+) cells remained unchanged. In addition, we found that the density of TGF-beta(+) cells significantly increased after intranasal budesonide from 18 +/- 5 to 41 +/- 9. Third, damage to the entire length of the NP epithelium was quantified using a grading system. The epithelium of untreated NP was substantially damaged; remarkable epithelial restitution with no apparent changes in stromal collagen deposition was observed after intranasal budesonide treatment. Conclusions: These findings demonstrate that intranasal budesonide induced an increase in CD8 population and a selective regulatory effect on tissue cytokine expression. Furthermore, intranasal budesonide promoted epithelial remodeling. We hypothesize that these immunoregulatory and remodeling effects elicited by steroids might be, at least in part, mediated by the induction of TGF-beta. abstract_id: PUBMED:11761146 Cell-mediated immune responses (CMIR) in human rhinosporidiosis. 
Cell mediated immune responses (CMIR) to Rhinosporidium seeberi in human patients with rhinosporidiosis have been studied. With immuno-histochemistry, the cell infiltration patterns in rhinosporidial tissues from 7 patients were similar. The mixed cell infiltrate consisted of many plasma cells, fewer CD68+ macrophages, a population of CD3+ T lymphocytes, and CD56/57+ NK lymphocytes which were positive for CD3 as well. CD4+ T helper cells were scarce. CD8+ suppressor/cytotoxic-cytolytic cells were numerous. Most of the CD8+ cells were TIA1+ and therefore of the cytotoxic subtype. CD8+ T cells were not sub-typed according to their cytokine profile; IL2, IFN-gamma (Tc1); IL4, IL5 (Tc2). In lympho-proliferative response (LPR) assays in vitro, lymphocytes from rhinosporidial patients showed stimulatory responses to Con A but lymphocytes from some patients showed significantly diminished responses to rhinosporidial extracts as compared with unstimulated cells or cells stimulated by Con A, indicating suppressor immune responses in rhinosporidiosis. The overall stimulatory responses with Con A suggested that the rhinosporidial lymphocytes were not non-specifically anergic although comparisons of depressed LPR of rhinosporidial lymphocytes from individual patients, to rhinosporidial antigen with those to Con A, did not reveal a clear indication as to whether the depression was antigen specific or non-specific. The intensity of depression of the LPR in rhinosporidial patients bore no relation to the site, duration, or the number of lesions or whether the disease was localized or disseminated. Rhinosporidial extracts showed stimulatory activity on normal control lymphocytes, perhaps indicating mitogenic activity. These results indicate that CMIR develops in human rhinosporidiosis, while suppressed responses are also induced. abstract_id: PUBMED:35483691 Analysis of serum Vitamin C expression level and its correlation with immune function in adult patients with chronic sinusitis. Objective: To investigate the expression of Vitamin C in adult patients with chronic rhinosinusitis (CRS) and its correlation with immune function. Methods: A total of 315 patients who underwent nasal endoscopic surgery at the Department of Otolaryngology Head and Neck Surgery, Renmin Hospital of Wuhan University, from May 2018 to June 2020 were collected, including 207 CRS patients, who were divided into a CRS without nasal polyps (CRSsNP) group (110 cases) and a CRS with nasal polyps (CRSwNP) group (97 cases); 108 patients with nasal septum correction were selected as the control group. All patients underwent serum Vitamin A, C, D, and E tests. Among them, 107 patients (39 in the control group, 35 in the CRSsNP group, and 33 in the CRSwNP group) underwent detection of serum cytokines such as IL-2, TNF-α, IFN-γ, IL-4, IL-5, IL-6, and IL-10, and of immune protein levels such as IgA, IgM, IgG, C3, and C4. Results: The serum levels of Vitamin C and IL-10 in the CRSwNP group were significantly lower than those in the control group (P < 0.05), and the serum C4 level was significantly higher (P < 0.05); the serum Vitamin C level was positively correlated with the IL-10 level and negatively correlated with the C4 level; CRSsNP patients also had lower Vitamin C levels and higher C3 and C4 levels.
Conclusion: The Vitamin C level in adult patients with CRS is lower than that in the normal population, and the level of C4 is higher; the level of IL-10 in patients with CRSwNP is lower; Vitamin C affects the immune balance and antioxidant level of patients and plays a certain role in the pathogenesis of CRS and nasal polyps. abstract_id: PUBMED:27117506 The role of dendritic cells in immune regulation of nasal polyps. Nasal polyps (NPs) are caused by a variety of immune cells and inflammatory cells. However, as the most potent antigen-presenting cells in the immune system, the role of dendritic cells (DCs) in NPs is still unclear. In the present research, we studied the role of DCs in immune regulation of NPs. Thirty patients with NPs, who served as the experimental group, received systemic and local glucocorticoids for 4-7 d, and specimens were collected prior to hormone treatment and during surgery. Normal middle turbinate mucosa tissues from 18 patients who underwent nasal septum surgery were collected as controls. The expression levels of CD83, tumor necrosis factor-α (TNF-α), interleukin-4 (IL-4) and eosinophils (EOS) in NP tissues before and after glucocorticoid therapy and in control middle turbinate mucosa tissues were studied. After glucocorticoid therapy, the expression levels of CD83, TNF-α, IL-4 and EOS decreased significantly. In addition, the expression of IL-4 was lower than that of TNF-α, reversing the Th2 cytokine-dominant condition. CD83 and EOS showed a positive correlation. DCs participated in the development and progression of NPs and could promote the generation of Th2 cytokines. After interference by glucocorticoid therapy, DCs could inhibit the expression of Th2 cytokines and induce secretion of Th1 cytokines. DCs and EOS thus might both play roles in promoting the development and progression of NPs, but the underlying mechanism requires further study. Answer: The presence of lymphocyte subpopulations and cytokines in nasal polyps suggests that there may indeed be a local immune system within the nasal polyp. Studies have found lymphocytes producing both TH1 and TH2 cytokines, such as interferon-gamma, IL-2, IL-4, and IL-5, within nasal polyps, indicating that the nasal polyp possesses both TH1 and TH2 cytokine expression (PUBMED:15138416). This finding is supported by the significant differences observed between the percentage of lymphocytes producing these cytokines in nasal polyps compared to peripheral blood, suggesting that nasal polyp lymphocytes may derive from sources other than just peripheral blood lymphocytes (PUBMED:15138416). Additionally, immunohistochemical studies have shown that in the submucosa of nasal polyps, there are more CD8 (T suppressor/cytotoxic) cells than CD4 (T helper/inducer) cells, and IgE-producing plasma cells are rarely present (PUBMED:8291744). This indicates that cellular immunity involving antigen-presenting cells and T lymphocytes, which can induce infiltration and degranulation of mast cells through cytokine production, may be involved in the formation of nasal polyps with sinusitis (PUBMED:8291744). Furthermore, adipose tissue-derived stem cells (ASCs) have been reported to have immunomodulatory effects on T lymphocyte and cytokine expression in eosinophilic nasal polyps, characterized by down-regulation of activated T lymphocytes and a Th2 immune response (PUBMED:24627407).
This suggests that local immune responses within the nasal polyp can be modulated, further supporting the concept of a local immune system in nasal polyps. Overall, the data support the concept that nasal polyp lymphocyte subpopulations may be derived from both the local mucosal immune system and from random migration of peripheral blood lymphocytes, secondary to adhesion molecules and chemokines known to be present in nasal polyps (PUBMED:15138416). This local immune activity within the nasal polyp contributes to the chronic inflammation characteristic of nasal polyposis.
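As a brief aside on method: the lympho-proliferative response (LPR) comparisons described in the abstracts above (Con A versus antigen extract versus unstimulated cells) are conventionally summarised as a stimulation index, the ratio of tracer incorporation in stimulated wells to that in unstimulated wells. The sketch below is illustrative only; the cited studies do not report raw counts, so the function name and every number here are hypothetical rather than taken from those papers.

# Minimal sketch (Python), assuming hypothetical 3H-thymidine counts per minute (cpm).
# A stimulation index (SI) well above 1 indicates proliferation; an SI near 1 for a
# specific antigen despite a brisk mitogen (Con A) response is the pattern described
# above as a depressed, possibly suppressor-mediated, antigen response.
def stimulation_index(stimulated_cpm, unstimulated_cpm):
    mean = lambda values: sum(values) / len(values)
    return mean(stimulated_cpm) / mean(unstimulated_cpm)

background = [310, 290, 305]   # unstimulated lymphocytes (hypothetical triplicate wells)
con_a = [4200, 3900, 4100]     # mitogen (Con A) stimulated wells
antigen = [380, 350, 360]      # antigen-extract stimulated wells

print(round(stimulation_index(con_a, background), 1))    # ~13.5: intact mitogen response
print(round(stimulation_index(antigen, background), 1))  # ~1.2: depressed antigen response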
Instruction: Does competition improve health care quality? Abstracts: abstract_id: PUBMED:23670849 Competition and quality in home health care markets. Market-based solutions are often proposed to improve health care quality; yet evidence on the role of competition in quality in non-hospital settings is sparse. We examine the relationship between competition and quality in home health care. This market is different from other markets in that service delivery takes place in patients' homes, which implies low costs of market entry and exit for agencies. We use 6 years of panel data for Medicare beneficiaries during the early 2000s. We identify the competition effect from within-market variation in competition over time. We analyze three quality measures: functional improvements, the number of home health visits, and discharges without hospitalization. We find that the relationship between competition and home health quality is nonlinear and its pattern differs by quality measure. Competition has positive effects on functional improvements and the number of visits in most ranges, but in the most competitive markets, functional outcomes and the number of visits slightly drop. Competition has a negative effect on discharges without hospitalization that is strongest in the most competitive markets. This finding is different from prior research on hospital markets and suggests that market-specific environments should be considered in developing polices to promote competition. abstract_id: PUBMED:20195428 Quality health care in the European Union thanks to competition law. There are many biases concerning the application of competition law in health care. Quality concerns can however be integrated into competition law analysis. The aim of this paper is to identify the links between the application of competition law in the European Union and the right to quality health care and to point out the problems that arise when integrating quality concerns in competition law analysis. Guidelines must be issued and competition authorities must work together with institutions that have expertise in the field of health care quality measurement in order to integrate these dimensions in competition practice. abstract_id: PUBMED:18793214 Does competition improve health care quality? Objective: To identify the effect of competition on health maintenance organizations' (HMOs) quality measures. Study Design: Longitudinal analysis of a 5-year panel of the Healthcare Effectiveness Data and Information Set (HEDIS) and Consumer Assessment of Health Plans Survey(R) (CAHPS) data (calendar years 1998-2002). All plans submitting data to the National Committee for Quality Assurance (NCQA) were included regardless of their decision to allow NCQA to disclose their results publicly. Data Sources: NCQA, Interstudy, the Area Resource File, and the Bureau of Labor Statistics. Methods: Fixed-effects models were estimated that relate HMO competition to HMO quality controlling for an unmeasured, time-invariant plan, and market traits. Results are compared with estimates from models reliant on cross-sectional variation. Principal Findings: Estimates suggest that plan quality does not improve with increased levels of HMO competition (as measured by either the Herfindahl index or the number of HMOs). Similarly, increased HMO penetration is generally not associated with improved quality. Cross-sectional models tend to suggest an inverse relationship between competition and quality. 
Conclusions: The strategies that promote competition among HMOs in the current market setting may not lead to improved HMO quality. It is possible that price competition dominates, with purchasers and consumers preferring lower premiums at the expense of improved quality, as measured by HEDIS and CAHPS. It is also possible that the fragmentation associated with competition hinders quality improvement. abstract_id: PUBMED:9828030 Competition and quality among managed care plans in the USA. Boston University Health Care Management Program Group. This paper examines the popular idea that competition among managed care plans will lead not only to lower prices, but also to improved quality. We explore the likelihood that competition based on quality will occur and that better quality care will result. First, we discuss key elements of competitive theory and then we attempt to apply them to markets for health care coverage and care. We identify the conditions necessary for competition to have the desired effects and assess the extent to which those conditions do or can exist. We conclude that in the USA, many consumers have no choice among plans and, therefore, cannot select one based on quality. Moreover, the evidence suggests that as long as price varies among health plans, consumers who do have a choice will tend to emphasize price, not quality, in making their selections. We conclude with suggestions to increase the likelihood that quality can improve as a result of competition. abstract_id: PUBMED:12674406 Why competition law matters to health care quality. Competition law (encompassing both antitrust and consumer protection) is the forgotten stepchild of health care quality. This paper introduces readers to competition law and policy, describes its institutional features and analytic framework, surveys the ways in which competition law has influenced quality-based competition, and outlines some areas in need of further development. Competition law protects the competitive process--not individual competitors. It guides the structural features of the health care system and the conduct of providers as they navigate it. Competition law does not privilege quality over other competitive goals but honors consumers' preferences with respect to trade-offs among quality, price, and other attributes of goods and services. abstract_id: PUBMED:20542342 Competition and quality in health care markets: a differential-game approach. We investigate the effect of competition on quality in health care markets with regulated prices taking a differential game approach, in which quality is a stock variable. Using a Hotelling framework, we derive the open-loop solution (health care providers set the optimal investment plan at the initial period) and the feedback closed-loop solution (providers move investments in response to the dynamics of the states). Under the closed-loop solution competition is more intense in the sense that providers observe quality in each period and base their investment on this information. If the marginal provision cost is constant, the open-loop and closed-loop solutions coincide, and the results are similar to the ones obtained by static models. If the marginal provision cost is increasing, investment and quality are lower in the closed-loop solution (when competition is more intense). In this case, static models tend to exaggerate the positive effect of competition on quality. abstract_id: PUBMED:30654150 Competition and equity in health care markets. 
We provide a model where hospitals compete on quality under fixed prices to investigate how hospital competition affects (i) quality differences between hospitals, and as a result, (ii) health inequalities across hospitals and patient severities. The answer to the first question is ambiguous and depends on factors related to both demand and supply of health care. Whether competition increases or reduces health inequalities depends on the type and measure of inequality. Health inequalities due to the postcode lottery are more likely to decrease if the marginal health gains from quality decrease at a higher rate, whereas health inequalities between high- and low-severity patients decrease if patient composition effects are sufficiently small. We also investigate the effect of competition on health inequalities as measured by the Gini and the Generalised Gini coefficients, and highlight differences compared to the simpler dispersion measures. abstract_id: PUBMED:9879307 Increased competition and the quality of health care. Available information does not indicate either that quality has deteriorated as price competition has increased or that quality has improved. To reward plans for providing what consumers want, public and private policies have crucial roles in the following areas: mandating minimal requirements for plans; funding research to improve knowledge and methods related to quality-of-care assessment; publication of quality-of-care information; selective contracting and regionalizing of services; and payment for physician services. Learning what degree of trade-off between cost versus quality and other benefits is acceptable to consumers will be an iterative process that informs future policies to safeguard the quality of care. abstract_id: PUBMED:38333801 Does competition support integrated care to improve quality? Introduction: This work investigates the compatibility of integrated care and competition in China and analyses the impact of integrated care on regional care quality (DeptQ) within a competitive framework. Method: The study was built on multivariate correspondence analysis and a two-way fixed-effects model. The data were collected from Xiamen's Big Data Application Open Platform and represent nine specialised departments that regularly performed inter-institutional referrals between 2016 and 2019. Results: First, care quality for referred patients (ReferQ), the relative scale of referred patients (ReferScale), and competition have an antagonistic but not completely mutually exclusive relationship. Second, ReferQ and competition both have a significant effect on DeptQ, but only when competition is weak can ReferQ and competition act synergistically on DeptQ. When competition is fierce, competition will weaken the impact of ReferQ on DeptQ. Conclusion: Changes in the intensity of integrated care and competition ultimately affect care quality. abstract_id: PUBMED:16785293 Competition and quality as dynamic processes in the Balkans of American health care. The American health care system embodies a complex amalgamation of fractured and conflicting parts. As such, any call to enhance quality or competition necessarily presupposes some ability to introduce greater harmony and coordination. But how does one make a complicated system work well? Dynamic theories of economics stress the significance of selection mechanisms, learning, and adaptive modes of behavior in directing markets toward more efficient outcomes under conditions of uncertainty.
Unfortunately, the American health care sector suffers from intense factional divisions. Policy makers need a more self-conscious understanding of the interactive and often conflicting effects of regulation if the health care system is to be reshaped in a manner that will generate more desired social outcomes. Evolutionary theories of economics can provide the conceptual framework in which such a restructuring could take place. This article examines how health care quality and competition can be improved through a better understanding of dynamic economic processes and evaluates the Federal Trade Commission and Department of Justice 2004 report Improving Health Care: A Dose of Competition in light of these perspectives. Answer: The relationship between competition and health care quality is complex and the evidence on whether competition improves health care quality is mixed. Some studies suggest that competition can have positive effects on certain aspects of health care quality. For instance, in home health care markets, competition was found to have positive effects on functional improvements and the number of visits in most ranges, although in the most competitive markets, these outcomes slightly dropped (PUBMED:23670849). However, competition had a negative effect on discharges without hospitalization, particularly in the most competitive markets (PUBMED:23670849). In contrast, other studies have found that increased levels of competition among Health Maintenance Organizations (HMOs) do not lead to improved quality, as measured by the Healthcare Effectiveness Data and Information Set (HEDIS) and Consumer Assessment of Health Plans Survey (CAHPS) (PUBMED:18793214). This could be due to price competition dominating consumer preferences, leading to a preference for lower premiums over improved quality, or because competition-induced fragmentation hinders quality improvement (PUBMED:18793214). The European Union has recognized the potential for competition law to integrate quality concerns into health care, suggesting that with proper guidelines and collaboration between competition authorities and health care quality experts, competition can be aligned with quality health care (PUBMED:20195428). However, the evidence from the USA indicates that as long as price varies among health plans, consumers with a choice will tend to emphasize price, not quality, in their selections (PUBMED:9828030). A differential-game approach to competition in health care markets with regulated prices suggests that static models may exaggerate the positive effect of competition on quality, especially if the marginal provision cost is increasing (PUBMED:20542342). Moreover, competition's impact on health inequalities is ambiguous and depends on various demand and supply factors (PUBMED:30654150). In summary, while competition has the potential to improve certain aspects of health care quality, the overall evidence does not conclusively show that competition consistently leads to quality improvements across different health care settings and measures.
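For readers unfamiliar with the competition measure used in PUBMED:18793214, the Herfindahl (Herfindahl-Hirschman) index is simply the sum of squared market shares, so higher values indicate a more concentrated, less competitive market. The sketch below only illustrates that standard formula; the enrolment figures are invented for the example and are not data from any of the studies above.

# Minimal sketch (Python). Only the formula (sum of squared market shares) is standard;
# the enrolment numbers are hypothetical.
def herfindahl_index(enrolments):
    total = sum(enrolments)
    return sum((e / total) ** 2 for e in enrolments)

concentrated_market = [50_000, 30_000, 20_000]   # three HMOs with 50/30/20% shares
competitive_market = [15_000] * 8                # eight equally sized HMOs

print(round(herfindahl_index(concentrated_market), 3))  # 0.38  -> more concentrated, less competition
print(round(herfindahl_index(competitive_market), 3))   # 0.125 -> more competitive market
# Antitrust practice often reports the same index on a 0-10,000 scale by squaring
# shares expressed in percent (e.g. 0.38 corresponds to 3,800).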
Instruction: Can trained volunteers make a difference at mealtimes for older people in hospital? Abstracts: abstract_id: PUBMED:25117920 Can trained volunteers make a difference at mealtimes for older people in hospital? A qualitative study of the views and experience of nurses, patients, relatives and volunteers in the Southampton Mealtime Assistance Study. Background: Malnutrition is common amongst hospitalised older patients and associated with increased morbidity and mortality. Poor dietary intake results from factors including acute illness and cognitive impairment but additionally patients may have difficulty managing at mealtimes. Use of volunteers to help at mealtimes is rarely evaluated. Objectives: To obtain multiple perspectives on nutritional care of older inpatients, acceptability of trained volunteers and identify important elements of their assistance. Design: A qualitative study 1 year before and after introduction of volunteer mealtime assistants on one ward and parallel comparison with a control ward in a Medicine for Older People department at a UK university hospital. Participants And Methods: Semi-structured interviews and focus groups, in baseline and intervention years, with purposively sampled nursing staff at different levels of seniority; patients or close relatives; and volunteers. Results: At baseline staff felt under pressure with insufficient people assisting at mealtimes. Introducing trained volunteers was perceived by staff and patients to improve quality of mealtime care by preparing patients for mealtimes, assisting patients who needed help, and releasing nursing time to assist dysphagic or drowsy patients. There was synergy with other initiatives, notably protected mealtimes. Interviews highlighted the perceived contribution of chronic poor appetite and changes in eating patterns to risk of malnutrition. Conclusions: Improved quality of mealtime care attributed to volunteers' input has potential to enhance staff morale and patients'/relatives' confidence. A volunteer mealtime assistance scheme may work best when introduced in context of other changes reflecting commitment to improving nutrition. Implications For Practice: (i) A mealtime assistance scheme should incorporate training, supervision and support for volunteers; (ii) Good relationships and a sense of teamwork can develop between wards staff and volunteers; (iii) Impact may be maximised in the context of 'protected mealtimes'. abstract_id: PUBMED:29081312 Changing the food environment: the effect of trained volunteers on mealtime care for older people in hospital. This review will describe the evidence for changing the hospital environment to improve nutrition of older people, with particular emphasis on the role of additional mealtime assistance. Poor nutrition among older people in hospital is well recognised in many countries and is associated with poor outcomes of hospital care including increased mortality and longer lengths of stay. Factors recognised to contribute to poor dietary intake include acute illness, co-morbidities, cognitive impairment, low mood and medication. The hospital environment has also been scrutinised with reports from many countries of food being placed out of reach or going cold because time-pressured ward and catering staff often struggle to help an increasingly dependent group of patients at mealtimes. Routine screening in hospital for people at risk of under nutrition is recommended. 
Coloured trays and protected mealtimes are widespread although there is relatively little evidence for their impact on dietary intake. Volunteers can be trained to safely give additional mealtime assistance, including feeding, to older patients on acute medical wards. They can improve the quality of mealtime care for patients and nursing staff although the evidence for improved dietary intake is mixed. In conclusion, improving the nutrition of older patients in hospital is challenging. Initiatives such as routine screening, the use of coloured trays, protected mealtimes and additional mealtime assistance can work together synergistically. Volunteers are likely to be increasingly important in an era when healthcare systems are generally limited in both financial resources and the ability to recruit sufficient nursing staff. abstract_id: PUBMED:27471215 How trained volunteers can improve the quality of hospital care for older patients. A qualitative evaluation within the Hospital Elder Life Program (HELP). The aim of this study was to investigate, using a mixed-methods design, the added value of a trained Hospital Elder Life Program (HELP) volunteer to the quality of hospital care in the Netherlands. The trained volunteers daily stimulate older patients, at risk of delirium, to eat, to drink, and to exercise, and they provide walking assistance and cognitive stimulation. This study showed that each group appreciated the extra attention and service from the volunteers. The positive effect on feelings of loneliness during the hospital stay was an unexpected outcome. The volunteers themselves appreciated their work. In conclusion, a HELP volunteer should be provided to every older hospital patient. abstract_id: PUBMED:30082361 Can trained volunteers improve the mealtime care of older hospital patients? An implementation study in one English hospital. Objective: Multinational studies report undernutrition among 39% of older inpatients; importantly, malnutrition risk may further increase while in hospital. Contributory factors include insufficient mealtime assistance from time-pressured hospital staff. A pilot study showed trained volunteers could safely improve mealtime care. This study evaluates the wider implementation of a mealtime assistance programme. Design: Mixed methods prospective quasi-experimental study. Setting: Nine wards across Medicine for Older People (MOP), Acute Medical Unit, Orthopaedics and Adult Medicine departments in one English hospital. Participants: Patients, volunteers, ward staff. Intervention: Volunteers trained to help patients aged ≥70 years at weekday lunchtime and evening meals. Main Outcome Measures: The number of volunteers recruited, trained and their activity was recorded. Barriers and enablers to the intervention were explored through interviews and focus groups with patients, ward staff and volunteers. The total cost of the programme was evaluated. Results: 65 volunteers (52 female) helped at 846 meals (median eight/volunteer, range 2-109). The mix of ages (17-77 years) and employment status enabled lunch and evening mealtimes to be covered. Feeding patients was the most common activity volunteers performed, comprising 56% of volunteer interactions on MOP and 34%-35% in other departments. Patients and nurses universally valued the volunteers, who were skilled at encouraging reluctant eaters. Training was seen as essential by volunteers, patients and staff.
The volunteers released potential costs of clinical time equivalent to a saving of £27.04/patient/day of healthcare assistant time or £45.04 of newly qualified nurse time above their training costs during the study. Conclusions: Patients in all departments had a high level of need for mealtime assistance. Trained volunteers were highly valued by patients and staff. The programme was cost-saving releasing valuable nursing time. Trial Registration Number: NCT02229019; Pre-results. abstract_id: PUBMED:28199923 Assistance at mealtimes in hospital settings and rehabilitation units for patients (&gt;65years) from the perspective of patients, families and healthcare professionals: A mixed methods systematic review. Background: Malnutrition is one of the key issues affecting the health of older people (&gt;65years). With an aging population the problem is expected to increase further since the prevalence of malnutrition increases with age. Studies worldwide have identified that some older patients with good appetites do not receive sufficient nourishment because of inadequate feeding assistance. Mealtime assistance can enhance nutritional intake, clinical outcomes and patient experience. Objectives/aim: To determine the effectiveness of meal time assistance initiatives for improving nutritional intake and nutritional status for older adult patients (&gt;65years) in hospital settings and rehabilitation units. The review also sought to identify and explore the perceptions and experiences of older adult patients and those involved with their care. Design: Mixed methods systematic review. Data Sources: A search of electronic databases to identify published studies (CINAHL, MEDLINE, British Nursing Index, Cochrane Central Register of Controlled Trials, EMBASE, PsychINFO, Web of Science (1998-2015) was conducted. Relevant journals were hand-searched and reference lists from retrieved studies were reviewed. The search was restricted to English language papers. The key words used were words that described meal time assistance for adult patients in hospital units or rehabilitation settings. Review Methods: The review considered qualitative, quantitative and mixed methods studies that included interventions for mealtime assistance, observed mealtime assistance or discussed experiences of mealtime assistance with staff, patients, relatives, volunteers or stakeholders. Extraction of data was undertaken independently by two reviewers. A further two reviewers assessed the methodological quality against agreed criteria. Findings: Twenty one publications covering 19 studies were included. Three aggregated mixed methods syntheses were developed: 1) Mealtimes should be viewed as high priority. 2a) Nursing staff, employed mealtime assistants, volunteers or relatives/visitors can help with mealtime assistance. 2b) Social interaction at mealtimes should be encouraged. 3) Communication is essential. Conclusions: A number of initiatives were identified which can be used to support older patients (&gt;65years) at mealtimes in hospital settings and rehabilitation units. However, no firm conclusions can be drawn in respect to the most effective initiatives. Initiatives with merit include those that encourage social interaction. Any initiative that involves supporting the older patient (&gt;65years) at mealtimes is beneficial. 
A potential way forward would be for nurses to focus on the training and support of volunteers and relatives to deliver mealtime assistance, whilst being available at mealtimes to support patients with complex nutritional needs. abstract_id: PUBMED:32510700 Structure and agency attributes of residents' use of dining space during mealtimes in care homes for older people. Research stresses that mealtimes in care homes for older people are vital social events in residents' lives. Mealtimes have great importance for residents as they provide a sense of normality, reinforce individuals' identities and orientate their routines. This ethnographic study aimed to understand residents' use of dining spaces during mealtimes, specifically examining residents' table assignment processes. Data were collected in summer 2015 in three care homes located in England. The research settings looked after residents aged 65+, each having a distinct profile: a nursing home, a residential home for older people and a residential home for those with advanced dementia. Analyses revealed a two-stage table assignment process: 1. Allocation - where staff exert control by determining residents' seating. Allocation is inherently part of the care provided by the homes and reflects the structural element of living in an institution. This study identified three strategies for allocation adopted by the staff: (a) personal compatibilities; (b) according to gender and (c) 'continual allocation'. 2. Appropriation - it consists of residents routinely and willingly occupying the same space in the dining room. Appropriation helps residents to create and maintain their daily routines and it is an expression of their agency. The findings demonstrate the mechanisms of residents' table assignment and its importance for their routines, contributing towards a potentially more self-fulfilling life. These findings have implications for policy and care practices in residential and nursing homes. abstract_id: PUBMED:27477624 The use of volunteers to help older medical patients mobilise in hospital: a systematic review. Aims And Objectives: To review current evidence for the use of volunteers to mobilise older acute medical in-patients. Background: Immobility in hospital is associated with poor healthcare outcomes in older people, but maintaining mobility is frequently compromised due to time pressures experienced by clinical staff. Volunteers are established in many hospitals, usually involved in indirect patient care. Recent evidence suggests that trained mealtime volunteers had a positive impact on patients and hospital staff. It is unclear whether volunteers can help older inpatients to mobilise. Design: Systematic review. Methods: We searched Cochrane, Medline, Embase, CINAHL, AMED and Google databases using MeSH headings and keywords within six key themes: inpatients, older, mobility/exercise, delirium, falls and volunteers. Full texts of relevant articles were retrieved and reference lists reviewed. Results: Of the 2428 articles that were identified, two scientific studies and three reports on quality improvement initiatives were included in the final review. One study included volunteer assisted mobilisation as part of a delirium prevention intervention (HELP). The second study has not reported yet (MOVE ON). The contribution of volunteers in both is unclear. Three quality improvement initiatives trained volunteers to help mobilise patients. 
They were not formally evaluated but report positive effects of the volunteers on patient and staff satisfaction. Conclusions: This review has identified a lack of scientific evidence for the use of volunteers in mobilising older medical inpatients, but quality improvement initiatives suggest that volunteers can be employed in this role with reports of staff and patient satisfaction: this is an area for further development and evaluation. Relevance To Clinical Practice: This review outlines the evidence for the involvement of volunteers in maintaining patients' mobility, identifies mobilisation protocols that have been used, the need to train volunteers and for formal evaluation of volunteers in this role. Prospero registration number: CRD42014010388. abstract_id: PUBMED:24666963 The feasibility and acceptability of training volunteer mealtime assistants to help older acute hospital inpatients: the Southampton Mealtime Assistance Study. Aims And Objectives: To determine the feasibility and acceptability of using trained volunteers as mealtime assistants for older hospital inpatients. Background: Poor nutrition among hospitalised older patients is common in many countries and associated with poor outcomes. Competing time pressures on nursing staff may make it difficult to prioritise mealtime assistance especially on wards where many patients need help. Design: Mixed methods evaluation of the introduction of trained volunteer mealtime assistants on an acute female medicine for older people ward in a teaching hospital in England. Methods: A training programme was developed for volunteers who assisted female inpatients aged 70 years and over on weekday lunchtimes. The feasibility of using volunteers was determined by the proportion recruited, trained, and their activity and retention over one year. The acceptability of the training and of the volunteers' role was obtained through interviews and focus groups with 12 volunteers, nine patients and 17 nursing staff. Results: Fifty-nine potential volunteers were identified: 38 attended a training session, of whom 29 delivered mealtime assistance, including feeding, to 3911 (76%) ward patients during the year (mean duration of assistance 5·5 months). The volunteers were positive about the practical aspects of training and ongoing support provided. They were highly valued by patients and ward staff and have continued to volunteer. Conclusions: Volunteers can be recruited and trained to help acutely unwell older female inpatients at mealtimes, including feeding. This assistance is sustainable and is valued. Relevance To Clinical Practice: This paper describes a successful method for recruitment, training and retention of volunteer mealtime assistants. It includes a profile of those volunteers who provided the most assistance, details of the training programme and role of the volunteers and could be replicated by nursing staff in other healthcare units. abstract_id: PUBMED:29493833 Exploring staff perceptions and experiences of volunteers and visitors on the hospital ward at mealtimes using an ethnographic approach. Aims And Objectives: To explore multiple perspectives and experiences of volunteer and visitor involvement and interactions at hospital mealtimes. In addition, to understand how the volunteer and visitor role at mealtimes is perceived within the hospital system. Background: Mealtime assistance can improve patients' food intake and mealtime experience. 
Barriers to providing mealtime assistance include time pressures, staff availability and inadequate communication. Volunteers and visitors can encourage and assist patients at mealtimes. There is a lack of evidence on the relationship between hospital staff, volunteers and visitors. Design: A qualitative, ethnographic approach. Methods: Sixty-seven hours of fieldwork were conducted on two subacute wards within an Australian healthcare network in 2015. Mealtime practices and interactions of hospital staff, volunteers and visitors were observed. Sixty-one staff, volunteers and visitors were interviewed in 75 ethnographic and semi-structured interviews. Data were inductively and thematically analysed. Results: Three key themes emerged as follows: "help"-volunteers and visitors were considered helpful when they assisted patients at mealtimes, supported well-being and aided staff-patient communication; "hindrance"-staff perceived visitors as negative presences when they inhibited patient progress and impacted staff work practices; and "reality of practice"-visiting hours, visitor engagement in patient therapy and communication between staff, volunteers and visitors were important practical considerations of mealtime involvement. Conclusions: The findings show how and why volunteers and visitors can be helpful and unhelpful at hospital mealtimes on subacute wards. More research on the role and contribution of volunteers and visitors on hospital wards will inform future practice in healthcare settings. Relevance To Clinical Practice: This healthcare organisation should continue to encourage volunteer and visitor involvement at hospital mealtimes. More effort is needed to educate visitors about patients' therapeutic goals and the importance of nutrition. The working relationship between hospital staff, volunteers and visitors should be strengthened to improve nutritional care. abstract_id: PUBMED:25729412 Visible Voices: Expressive arts with isolated seniors using trained volunteers. This practice-based paper describes an innovative program from Ontario, Canada that explored the potential for volunteer-facilitated expressive arts to contribute to the well-being of socially isolated rural seniors. Inspired by Arts on Prescription initiatives in the UK and coordinated by a Registered Expressive Arts Consultant/Educator, the program involved eight older volunteers and eight older participants engaged in a 10-week series of one-on-one intermodal art-making activities in the participants' homes and institutional settings in 2009-2010. An evaluation of the program design and implementation is presented and the challenges and opportunities of expressive arts with isolated seniors using trained volunteers are discussed. Answer: Yes, trained volunteers can make a difference at mealtimes for older people in hospital. Studies have shown that the introduction of trained volunteers to assist with mealtime care for older inpatients can improve the quality of care. These volunteers can help prepare patients for mealtimes, assist those who need help eating, and release nursing time to assist patients with more complex needs, such as those with dysphagia or who are drowsy (PUBMED:25117920). The presence of volunteers has been perceived positively by both staff and patients, potentially enhancing staff morale and patients' and relatives' confidence in the care provided (PUBMED:25117920). 
Additionally, volunteers can be trained to safely give additional mealtime assistance, including feeding to older patients on acute medical wards. While the evidence for improved dietary intake is mixed, volunteers can improve the quality of mealtime care for patients and nursing staff (PUBMED:29081312). The Hospital Elder Life Program (HELP) in the Netherlands also found that trained volunteers could positively impact older patients' hospital care by stimulating them to eat, drink, exercise, and provide walking assistance and cognitive stimulation (PUBMED:27471215). An implementation study in one English hospital showed that trained volunteers could help at a high number of meals, with their most common activity being feeding patients. The program was cost-saving and released valuable nursing time (PUBMED:30082361). Furthermore, a mixed methods systematic review highlighted that mealtime assistance initiatives could enhance nutritional intake, clinical outcomes, and patient experience, suggesting that training and supporting volunteers to deliver mealtime assistance could be beneficial (PUBMED:28199923). In conclusion, trained volunteers can indeed make a significant difference at mealtimes for older people in hospital, contributing to better nutritional care and overall patient well-being.
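To make the cost figures quoted from PUBMED:30082361 easier to interpret (for example, £27.04 of healthcare assistant time released per patient per day), the sketch below shows the general shape of such a released-staff-time calculation: assisted minutes per patient converted to hours, priced at an hourly staff cost, net of an apportioned training cost. Every input below is an assumption introduced for illustration; the study's actual unit costs and assistance times are not reported in the abstract, so these outputs deliberately will not match its published figures.

# Minimal sketch (Python); all numbers are assumptions, not data from the study.
def staff_time_value_released(minutes_per_meal, assisted_meals_per_day,
                              staff_cost_per_hour, training_cost_per_patient_day=0.0):
    # Approximate monetary value of clinical time freed per patient per day.
    hours_released = minutes_per_meal * assisted_meals_per_day / 60
    return hours_released * staff_cost_per_hour - training_cost_per_patient_day

# Assumed inputs: 20 minutes of help at each of 2 assisted meals, indicative hourly
# costs for a healthcare assistant and a newly qualified nurse, and a small training cost.
print(round(staff_time_value_released(20, 2, 22.00, 1.50), 2))  # e.g. 13.17 (assistant-grade time)
print(round(staff_time_value_released(20, 2, 35.00, 1.50), 2))  # e.g. 21.83 (nurse-grade time)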
Instruction: Is there any correlation between 13C-urea breath test values and response to first-line and rescue Helicobacter pylori eradication therapies? Abstracts: abstract_id: PUBMED:16309984 Is there any correlation between 13C-urea breath test values and response to first-line and rescue Helicobacter pylori eradication therapies? Aim: To study if there is a correlation between 13C-urea breath test values prior to treatment and the response to first-line and rescue Helicobacter pylori eradication therapies. Methods: Six-hundred patients with peptic ulcer or functional dyspepsia infected by H. pylori were prospectively studied. Pre-treatment H. pylori infection was established by 13C-urea breath test. Three-hundred and twelve patients were treated with first-line eradication regimen, and 288 received a rescue regimen. H. pylori eradication was defined as a negative 13C-urea breath test, 8 weeks after completion of treatment. Results: H. pylori eradication was achieved in 444 patients. No statistically significant differences were demonstrated when mean delta 13C-urea breath test values were compared between patients with eradication success and failure (49.4+/-33 versus 49.2+/-31). Differences in mean pre-treatment delta 13CO2 between patients with eradication success/failure were not demonstrated either when first-line or rescue regimens were prescribed. With the cut-off point of pre-treatment delta 13CO2 set at 35 units, sensitivity and specificity for the prediction of H. pylori eradication success was 43 and 60%. The area under the receiver operating characteristic curve evaluating all the cut-off points of the pre-treatment delta 13CO2 for the diagnosis of H. pylori eradication was 0.5. Finally, delta 13CO2 values did not influence the eradication in the logistic regression model. Conclusion: No correlation was observed between 13C-urea breath test values before treatment and the response to first-line and rescue H. pylori eradication therapies. Therefore, we conclude that the quantification of delta 13CO2 prior to treatment is not useful to predict the success or failure of eradicating therapy. abstract_id: PUBMED:8924323 Is there a correlation between the values of breath tests and the response to the treatment for the eradication of Helicobacter pylori? Purpose: To study a possible correlation between the urea breath test values prior to treatment and the response to Helicobacter pylori eradication therapy in patients with duodenal ulcer. Methods: Seventy-seven patients with duodenal ulcer were retrospectively studied (mean age: 46 +/- 13 years; 71% males). Initially, an endoscopy with biopsy samples (H&amp;E stain) taken from antrum and body was performed, and a 13C-urea breath test (measuring 13C difference: delta 13CO2) was also performed. Both procedures were repeated one month after completing therapy ("classic" triple therapy [n = 25], and omeprazole plus amoxycillin [n = 52]). Results: Eradication was achieved in 40% of cases (n = 32), and it was higher in patients treated with "classic" triple therapy (60%) than in those treated with omeprazole plus amoxycillin (31%) (p = 0.017). Mean delta 13CO2 level was 25 +/- 15. There were no differences when comparing values of patients with successful eradication (24 +/- 18) or therapy failure (25.6 +/- 13). No differences were observed when considering both therapies separately or when comparing eradication rates depending upon breath test levels prior to therapy. 
Breath test values did not influence the eradication in the logistic regression model. Mean delta 13CO2 values after therapy in patients with eradication failure ran in parallel with those observed previously. Conclusion: No correlation was observed between urea breath test values before treatment and the response to H. pylori eradication therapy in patients with duodenal ulcer. Thus, we conclude that quantification of this diagnostic method is not useful to predict the success or failure of eradicating therapy. abstract_id: PUBMED:7847290 13C]urea breath test to confirm eradication of Helicobacter pylori. Objective: To determine the utility of the [13C]urea breath test in confirming the eradication of Helicobacter pylori. Methods: We reviewed our H. pylori database for patients who underwent [13C]urea breath test at baseline and 6 wk after triple therapy with tetracycline, metronidazole, and bismuth subsalicylate. Baseline infection was defined by the identification of the organism on antral biopsies or a reactive CLO test. Eradication was defined as a negative Warthin-Starry stain and a non-reactive CLO test at 24 h. All patients had a positive baseline [13C]urea breath test defined as [13C] enrichment &gt; 6% at 60 min. Results: One hundred eighteen H. pylori-infected patients (mean age 58.3 +/- 13.9 yr) met the review criteria (61 duodenal ulcers, 24 gastric ulcers, 33 non-ulcer dyspepsia). In 101/118 patients (86%), H. pylori was successfully eradicated (mean baseline breath test value 25.8 +/- 1.6). Of 101 patients, 95 had a negative 6-wk follow-up breath test (mean 2.2 +/- 0.2, p &lt; 0.001). Of the 6/101 patients in whom treatment was successful, and who remained breath test positive at 6 wk, 4/6 were breath test negative when retested at 3 months. The remaining two patients were lost to follow-up. In 17/118 (14%) patients, H. pylori failed to be eradicated (mean baseline breath test 22.4 +/- 3.6). Fifteen of 17 patients had a positive breath test at 6 wk (mean 19.9 +/- 3.7). Two of 17 with a negative breath test at 6 wk tested positive when the breath test was repeated at 3 months. The sensitivity and specificity of [13C]urea breath test at 6 wk posttreatment are 97% and 71%, respectively. The positive and negative predictive values are 94% and 88%, respectively. Conclusions: [13C]urea breath test is a sensitive indicator of H. pylori eradication 6 wk after treatment. Antral biopsies are unnecessary to confirm eradication of H. pylori after completion of treatment. abstract_id: PUBMED:11262917 Diagnosis of Helicobacter pylori with the 13C-labeled urea breath test: study methodology Helicobacter pylori is one of the most common causes of chronic bacterial infection in humans, and it is associated with many diseases of the upper gastrointestinal tract. The 13C urea breath test (13C-UBT) is a simple, non-invasive and global test for Helicobacter pylori detection. The test reflects the hydrolysis of 13C-labelled urea by Helicobacter pylori urease. The 13C-UBT is the gold standard test for Helicobacter pylori infection. Since the original description (in 1987) several modifications of 13C-UBT have been published to simplify and optimise the test. However, neither Standardised European Protocol nor Standard US Protocol were accepted. This paper gives the methodology of the 13C-UBT based on eur own study and on the review of the literature. abstract_id: PUBMED:12700505 Confirmation of Helicobacter pylori eradication following first line treatment. How and when? Helicobacter pylori (H. 
pylori) eradication treatment should always be thought of as a package which includes first and second line therapies together. So, testing of H. pylori eradication following first line treatment should always be performed and should be explained to the patient with the prescription of the triple therapy. For confirmation of H. pylori eradication both the urea breath test and the biopsy based test (when endoscopy is clinically indicated) are recommended. Stool antigen test is also an accurate test although it seems to have a lower diagnostic value after eradication treatment. Testing should be performed at least of 4 weeks after treatment. Serology with pre and 6 months post treatment samples is usually not recommended except in the case of H. pylori eradication campaign in populations at high risk for stomach cancer for instance. abstract_id: PUBMED:10457031 The 13C urea breath test in the diagnosis of Helicobacter pylori infection. The urea breath test (UBT) is one of the most important non-invasive methods for detecting Helicobacter pylori infection. The test exploits the hydrolysis of orally administered urea by the enzyme urease, which H pylori produces in large quantities. Urea is hydrolysed to ammonia and carbon dioxide, which diffuses into the blood and is excreted by the lungs. Isotopically labelled CO2 can be detected in breath using various methods. Labelling urea with 13C is becoming increasingly popular because this non-radioactive isotope is innocuous and can be safely used in children and women of childbearing age. Breath samples can also be sent by post or courier to remote analysis centres. The test is easy to perform and can be repeated as often as required in the same patient. A meal must be given to increase the contact time between the tracer and the H pylori urease inside the stomach. The test has been simplified to the point that two breath samples collected before and 30 minutes after the ingestion of urea in a liquid form suffice to provide reliable diagnostic information. The cost of producing 13C-urea is high, but it may be possible to reduce the dosage further by administering it in capsule form. An isotope ratio mass spectrometer (IRMS) is generally used to measure 13C enrichment in breath samples, but this machine is expensive. In order to reduce this cost, new and cheaper equipment based on non-dispersive, isotope selective, infrared spectroscopy (NDIRS) and laser assisted ratio analysis (LARA) have recently been developed. These are valid alternatives to IRMS although they cannot process the same large number of breath samples simultaneously. These promising advances will certainly promote the wider use of the 13C-UBT, which is especially useful for epidemiological studies in children and adults, for screening patients before endoscopy, and for assessing the efficacy of eradication regimens. abstract_id: PUBMED:32635179 The Use of 13C-Urea Breath Test for Non-Invasive Diagnosis of Helicobacter pylori Infection in Comparison to Endoscopy and Stool Antigen Test. Helicobacter pylori (H. pylori) can cause gastritis, peptic ulcer diseases and gastric carcinoma. Endoscopy as the gold standard method of diagnosis is an invasive procedure that might not be suitable in all scenarios. Therefore, this first study in Jordan aimed to assess the non-invasive 13C urea breath test (UBT) and stool antigen test for diagnosis of H. pylori infection and the successfulness of eradication therapy as alternatives for endoscopy. 
Hence, a total of 30 patients attending the endoscopy units at Alkarak teaching hospital were asked to complete a questionnaire with demographic and clinical data. They were then tested for H. pylori using 13C UBT and H. pylori stool antigen before having endoscopy. Another 30 patients who were positive for H. pylori by endoscopy were tested using both tests 6 weeks post eradication therapy. Results showed that the rate of H. pylori detection using endoscopy was 56.7% (17/30). Heartburn (82.3%, p value = 0.019), epigastric pain (88.2%, p value = 0.007) and vomiting (70.5%, p value = 0.02) were the most significant symptoms. Family history of peptic ulcer diseases was significantly associated with an increased risk for having a H. pylori positive result (p value = 0.02). Compared to endoscopy, the sensitivity of 13C UBT for the diagnosis of H. pylori was 94.1% (16/17), while it was 76.5% (13/17) for the stool antigen test. The specificity of both tests was equal (76.9%). However, the positive predictive and negative predictive values (84.2% and 90.9%) for 13C UBT were higher than those (81.3% and 71.4%) for the stool antigen test. The accuracy of 13C UBT was 86.7% compared to 76.7% for the stool antigen test. There was an 87% agreement (20 patients out of 23) between both tests when used to assess success of the eradication therapy. In conclusion, the 13C UBT was found to be more sensitive and accurate than the stool antigen test when used for diagnosis; furthermore, it has a comparable outcome to the stool antigen test in assessing the successfulness of the eradication treatment. abstract_id: PUBMED:10610215 13C urea breath testing to diagnose Helicobacter pylori infection in children. The causal relationship between Helicobacter pylori colonization of the gastric mucosa and gastritis has been proven. Endoscopy and subsequent histological examination of antral biopsies have been regarded as the gold standard for diagnosing H pylori gastritis. The 13C urea breath test is a noninvasive test with a high specificity and sensitivity for H pylori colonization. Increasingly, it is becoming an important tool for use in diagnosing H pylori infection in pediatric populations. This test is particularly well suited for epidemiological studies evaluating reinfection rates, spontaneous clearance of infection and eradication rates after therapy. However, few groups have validated the test in the pediatric age group. The testing protocol has not yet been standardized. Variables include fasting state, dose of urea labelled with 13C, delta cutoff level of 13C carbon dioxide, choice of test meal and timing of collection of expired breath samples. Further studies are urgently needed to evaluate critically the impact of H pylori infection in children. The 13C urea breath test should prove very useful in such prospective studies. abstract_id: PUBMED:24895807 Correlation of 13C urea breath test values with Helicobacter pylori load among positive patients. Background/aims: 13C urea breath test (13C UBT) is used to detect Helicobacter (H.) pylori in gastric mucosa. There are controversial results regarding associations of 13C UBT values with histopathological grades. We designed this study to correlate 13C UBT values with different histopathological grades in our local setting. Methodology: The 13CO2/12CO2 ratio for 13C UBT was analyzed using mass spectrometry and histopathological grades were scored by the updated Sydney System. Results: 13C UBT values of H.
pylori positive patients at different times (T10-T60) were higher as compared to negative patients. Significant positive correlation of 13C UBT values at T30 with different scores of H. pylori load (r = 0.277, p = 0.037) was observed. Associations of the mean 13C UBT values with neutrophil infiltration (p = 0.214), mononuclear cell infiltration (p = 0.648), atrophy (p = 0.620), atypia (p = 0.057) and metaplasia scores (p = 0.718) were found to be nonsignificant. H. pylori load significantly correlated with neutrophil infiltration and atrophy, with the exception of mononuclear cell infiltration, atypia and metaplasia. Conclusions: In the present analysis, a significant positive correlation was observed between 13C UBT values and H. pylori load that would be helpful in quantification of H. pylori in our local setting. abstract_id: PUBMED:10201464 Comparison of [13C]urea blood test to [13C]urea breath test for the diagnosis of Helicobacter pylori. Objective: It has been determined that the [13C]urea breath test (UBT) is a safe and effective way of detecting Helicobacter pylori (H. pylori) infection. Some individuals may have difficulty performing the exhalation component of the test, possibly due to age, or mental or physical compromise. Our aim was to determine if a commercially developed [13C]urea blood test could be utilized as a substitute for the UBT to detect H. pylori infection. Methods: Patients who were referred by their physicians for UBT were offered study inclusion. Patients underwent baseline and 30-min UBT. A simultaneous blood sample of 3 cc was drawn into a heparinized vacutainer at the 30-min period of the UBT. [13C]urea levels in both blood and breath samples were analyzed using isotope ratio mass spectrometry. UBT ≥ 6 delta per mil over baseline and urea blood tests > (-17 delta per mil) were considered positive. Results: One hundred sixty-one patients (68 men/93 women) with average age of 47.0 +/- 14.2 yr were tested. Agreement between breath and blood test results occurred in 153/161 (95%) cases. Using the UBT as the diagnostic standard, the urea blood test resulted in 44 true positive, 109 true negative, four false positive, and four false negative results, giving a sensitivity of 92%, specificity of 96%, positive predictive value of 92%, and negative predictive value of 96%. Conclusions: The urea blood test was found to be comparable to the urea breath test in the detection of H. pylori infection. The urea blood test will be accurate in the diagnosis of active H. pylori infection. Answer: Based on the information provided in the abstracts, there is no observed correlation between 13C urea breath test values prior to treatment and the response to first-line and rescue Helicobacter pylori eradication therapies. Two studies specifically investigated this correlation and found no statistically significant differences in mean delta 13C-urea breath test values between patients with successful eradication and those with treatment failure. One study focused on patients with peptic ulcer or functional dyspepsia (PUBMED:16309984), while the other study involved patients with duodenal ulcer (PUBMED:8924323). Both concluded that the quantification of delta 13CO2 prior to treatment is not useful for predicting the success or failure of eradication therapy. Additionally, the sensitivity and specificity for predicting H.
pylori eradication success using a cut-off point of pre-treatment delta 13CO2 were low, and delta 13CO2 values did not influence eradication in the logistic regression model (PUBMED:16309984). Therefore, based on these studies, the 13C urea breath test values before treatment do not correlate with the response to H. pylori eradication therapies.
Instruction: Can serum HCG values be used in the differential diagnosis of pregnancy complicated by hypertension? Abstracts: abstract_id: PUBMED:15117595 Can serum HCG values be used in the differential diagnosis of pregnancy complicated by hypertension? Objective: The aim of our study is to determine whether the serum human chorionic gonadotropin (hCG) level is helpful in the differential diagnosis and in the clinical management and follow-up of preeclampsia, superimposed preeclampsia, and chronic hypertension during the third trimester. Material And Methods: Eighty hypertensive pregnant patients, who had been hospitalized, and 25 normotensive pregnant patients, who attended the outpatient perinatology clinic in Zeynep Kamil Women and Pediatric Diseases Education and Research Hospital between June 2001 and September 2001 were enrolled in the study. These patients were evaluated in five groups: mild preeclamptic, severe preeclamptic, superimposed preeclamptic, chronic hypertensive, and normotensive groups. The geometric means of hCG levels of these groups were compared with each other and cutoff levels for differential diagnosis were determined. Results: The geometric mean of hCG levels was established as 17,361.31 mIU/mL in the mild preeclamptic group, 49,817.59 mIU/mL in the severe preeclamptic group, 41,101.09 mIU/mL in the superimposed preeclamptic group, 12,558.57 mIU/mL in the chronic hypertensive group, and 9647.98 mIU/mL in the normotensive group. When the geometric mean of the severe preeclamptic group was compared with the results of the normotensive patients, mild preeclamptic patients, chronic hypertensive patients, and superimposed preeclamptic patients, the mean hCG value of severe preeclamptic group was statistically significantly higher than all of the other groups (p &lt; 0.001) except for the latter. The geometric mean of hCG levels of severe preeclamptic patients was compared with the geometric mean of hCG levels of superimposed preeclamptic patients (p &gt; 0.05). The geometric mean of hCG levels in the chronic hypertensive group was lower than that of the superimposed preeclamptic group and the difference was statistically significant (p &lt; 0.001). The geometric mean of hCG levels of the chronic hypertensive group was not significantly different from the results of the mild preeclamptic group and the normotensive group. There was, however, a statistically significant difference between the geometric means of hCG levels of mild preeclamptic patients and normotensive group (p &lt; 0.001). The cutoff value of hCG was determined as 25,000 mIU/mL in differentiation of chronic hypertension from the severe preeclampsia, as 20,000 mIU/mL in differentiation of chronic hypertension from the superimposed preeclampsia, and as 30,000 mIU/mL in differentiation of severe preeclampsia from mild preeclampsia. Conclusion: The maternal serum hCG level is a useful laboratory tool when managing and treating hypertensive disorders that complicate pregnancy. The serum hCG level is especially significant in severe preeclampsia and superimposed preeclampsia. Therefore, a high serum hCG level can be a helpful marker in the diagnosis and clinical management by preventing possible complications resulting from severe and superimposed preeclampsia. abstract_id: PUBMED:8994249 Maternal serum testing for alpha-fetoprotein and human chorionic gonadotropin in high-risk pregnancies. 
To evaluate the variations and potential clinical use of serial maternal alpha-fetoprotein (AFP) and human chorionic gonadotropin (hCG) in pregnancies at risk of pregnancy-induced hypertension (PIH) and/or intrauterine growth retardation (IUGR), we investigated the relationship between placental sonographic findings, uterine artery Doppler measurements, and maternal serum AFP, hCG, and uric acid levels between 20 and 34 weeks of pregnancy. Maternal serum samples were collected from 41 singleton pregnancies with bilateral uterine notches and/or an increased uterine artery pulsatility index at 20-24 weeks. Maternal serum AFP, intact hCG and free alpha and beta subunits, and uric acid circulating levels were measured in all cases at 20-24 weeks and 25-28 weeks. Placental sonographic investigations comprised measurements of thickness and morphology. Twenty pregnancies had a normal outcome and 21 had an adverse outcome, including eight complicated by severe PIH with fetal IUGR, eight by isolated IUGR, three by mild PIH with normal fetal growth, and two by placental abruption. At the time of the first scan, the placental thickness and maternal serum levels of AFP, hCG, and uric acid were significantly increased in pregnancies with adverse outcomes, compared with those with a normal outcome. In subsequent maternal serum examinations, the incidence of elevated hormonal levels fell for AFP, intact hCG, and beta-hCG, whereas it increased for the uric acid level. No difference was found at any stage for the alpha-hCG level. Seven out of 11 pregnancies complicated by PIH presented with elevated MSAFP and MShCG and a large heterogeneous placenta at the first visit, whereas no pregnancy with a normal outcome presented with similar features. This study has shown a significant association between abnormal development of the utero-placental circulation, elevated MSAFP and MShCG at mid-gestation, and subsequent adverse pregnancy outcome. Serial measurements of MSAFP and MShCG do not provide extra information for the follow-up of these pregnancies. abstract_id: PUBMED:22338557 Severe preeclampsia and fetal virilization in a spontaneous singleton pregnancy complicated by hyperreactio luteinalis. Background: Hyperreactio luteinalis is a rare condition that stems from theca cell hyperplasia in the ovaries due to a high level of human chorionic gonadotropin during gestation. It occurs commonly in pregnant patients with trophoblastic disease, occasionally in multiple pregnancies, and rarely in normal singleton pregnancy. Case Report: A 24-year-old pregnant woman, G3 P0, who was admitted to the Perinatology Clinic with increasing findings of virilization during pregnancy is presented. The patient had bilaterally enlarged multicystic ovaries on sonographic examination and elevated serum androgen levels. She was managed conservatively until the 38th week of gestation with a presumptive diagnosis of hyperreactio luteinalis. Elevated blood pressure and prominent proteinuria were detected during the follow-up of the patient and labor was induced. She underwent an emergency caesarean delivery because of fetal distress. During caesarean section, ovarian biopsies were taken and a histopathological diagnosis of hyperreactio luteinalis was determined. The female fetus also presented virilization. Conclusion: Although infrequent, hyperreactio luteinalis with both maternal and fetal virilization can occur in women with spontaneous singleton pregnancies.
The clinical manifestations in such women may be complicated by severe preeclampsia. abstract_id: PUBMED:28556194 Assessment of serum β-hCG and lipid profile in early second trimester as predictors of hypertensive disorders of pregnancy. Objective: To assess and compare the ability of serum β-human chorionic gonadotropin (β-hCG) and serum lipid profile in early second trimester as predictors of hypertensive disorders of pregnancy. Methods: The present hospital-based prospective study was conducted between November 24, 2012, and April 30, 2014, at a tertiary hospital in Mangalore, India. Women of any parity with a pregnancy of 14-20 weeks were included. Venous blood (3 mL) was collected, and serum β-hCG and lipid profile were estimated by enzyme-linked immunosorbent assay and an enzymatic colorimetric test with lipid clearing factor, respectively. A cutoff value of β-hCG for predicting hypertensive disorders was obtained by receiver operating curve analysis. Results: Serum β-hCG was significantly higher among women who subsequently developed hypertension (71 142 IU/L [n=27]) than among those who did not (20 541 IU/L [n=137]; P&lt;0.001). The sensitivity and specificity of serum β-hCG to predict hypertension were 92.6% and 94.9% respectively. The positive and negative predictive values were 78.1% and 98.5%, respectively. Conclusion: Serum β-hCG might be used as a predictor of hypertensive disorders that complicate pregnancy. Dyslipidemia was not found to be a useful marker. abstract_id: PUBMED:33341826 Risk of gestational hypertension in pregnancies complicated with ovarian hyperstimulation syndrome. Objective: Ovarian hyperstimulation syndrome (OHSS) is the most common iatrogenic complication due to ovulation stimulation during assisted reproductive technology. Pathophysiology of this syndrome is not completely clarified, and there is no some specific treatment. Human chorionic gonadotropin is considered as the most significant factor in etiopathogenesis of OHSS. The results of some clinical studies related to influence of OHSS on pregnancy are variable. The aim of this study was to investigate hypertensive disease of pregnancies in patients admitted to hospital due to severe forms of OHSS with reference to maternal characteristics. Methodology: A case control study was conducted at the Obstetrics and Gynaecology Clinic "Narodni Front" and involved 50 patients admitted to hospital due to severe form of OHSS during a period from January 2008 to March 2015. A control group was created based on age and it involved 59 patients with pregnancy achieved with IVF/ICSI during the same period, but in which OHSS did not occur. For comparing mean values of continuous variables, Independent samples t test was applied. Results: Patients with pregnancy complicated by OHSS, had considerably higher rate of hypertension (14% vs. 3.2 %, p=0.046). Conclusions: Pregnancies achieved by IVF/ICSI, being complicated with severe OHSS could be related to gestational hypertension. abstract_id: PUBMED:20006261 Anesthetic management of a parturient with fetal sacrococcygeal teratoma and mirror syndrome complicated by elevated hCG and subsequent hyperthyroidism. Mirror syndrome is a condition in which the mother "mirrors" her hydropic fetus and/or hydropic placenta. Physical and laboratory findings of mirror syndrome include generalized edema, hypertension, and proteinuria similar to preeclampsia. 
However, unlike preeclampsia, mirror syndrome is associated with hemodilutional anemia and fluid overload, which may progress to pulmonary edema. The anesthetic management of a parturient with fetal sacrococcygeal teratoma, hydrops fetalis, and mirror syndrome complicated by markedly elevated maternal serum human chorionic gonadotropin and subsequent clinical hyperthyroidism is presented. abstract_id: PUBMED:10521763 Second-trimester maternal serum marker screening: maternal serum alpha-fetoprotein, beta-human chorionic gonadotropin, estriol, and their various combinations as predictors of pregnancy outcome. Objective: We evaluated the value of all 3 common biochemical serum markers, maternal serum alpha-fetoprotein, beta-human chorionic gonadotropin, and unconjugated estriol, and combinations thereof as predictors of pregnancy outcome. Study Design: A total of 60,040 patients underwent maternal serum screening. All patients had maternal serum alpha-fetoprotein measurements; beta-human chorionic gonadotropin was measured in 45,565 patients, and 24,504 patients had determination of all 3 markers, including unconjugated estriol. The incidences of various pregnancy outcomes were evaluated according to the serum marker levels by using clinically applied cutoff points. Results: In confirmation of previous observations, increased maternal serum alpha-fetoprotein levels (>2.5 multiples of the median) were found to be significantly associated with pregnancy-induced hypertension, miscarriage, preterm delivery, intrauterine growth restriction, intrauterine fetal death, oligohydramnios, and abruptio placentae. Increased beta-human chorionic gonadotropin levels (>2.5 multiples of the median [MoM]) were significantly associated with pregnancy-induced hypertension, miscarriage, preterm delivery, and intrauterine fetal death. Finally, decreased unconjugated estriol levels (<0.5 MoM) were found to be significantly associated with pregnancy-induced hypertension, miscarriage, intrauterine growth restriction, and intrauterine fetal death. As with increased second-trimester maternal serum alpha-fetoprotein levels, increased serum beta-human chorionic gonadotropin and low unconjugated estriol levels are significantly associated with adverse pregnancy outcomes. These are most likely attributed to placental dysfunction. Conclusion: Multiple-marker screening can be used not only for the detection of fetal anomalies and aneuploidy but also for detection of high-risk pregnancies. abstract_id: PUBMED:17763188 Human chorionic gonadotropin (hCG) during third trimester pregnancy. Objective: Separate reference values were recently established for routine blood samples during last trimester pregnancy. Previously, these were based on blood samples from healthy men or non-pregnant women. Normal changes in variation in the levels of steroid hormones in the last weeks of pregnancy before delivery are also incompletely investigated. This study of the preterm hormone levels was carried out in the search for events leading to increased contractility that might occur in the predelivery weeks and potentially influence the initiation of delivery. Material And Methods: Blood samples during pregnancy weeks 33, 36 and 39 as well as 1-3 h postpartum were collected from pregnant women (19-39 years, mean age 30) with at least one previous pregnancy without hypertension or pre-eclampsia. All women (n = 135) had had a vaginal delivery and spontaneous start of labour.
The blood samples were analysed for serum hCG, oestradiol and progesterone. Postpartum, the values were retrospectively rearranged to correspond with the actual week before the day of delivery. Results: During the last trimester of normal pregnancy, a gradual increase was found in oestradiol (median 45980 to 82410 pmol/L), progesterone (median 341 to 675 nmol/L) and a gradual decrease in hCG (median 31833 to 19494 IU/L). Furthermore, a significant (p&lt;0.03) decrease in hCG was found from the third to the second week before delivery, while oestradiol and progesterone continued to increase. Conclusions: Hormone levels during third-trimester pregnancy have not previously been systematically investigated. Recent data suggest that hCG may have a role as an endogenous tocolytic in normal pregnancy by directly promoting relaxation of uterine contractions. In the present study a significant decrease in serum hCG level was found 2-3 weeks before the spontaneous start of labour. This might contribute to increasing the contractility in the uterine muscle and gradually initiate the onset of labour. abstract_id: PUBMED:17962110 Triplet pregnancy complicated with one hydatidiform mole and preeclampsia in a 46,XY female with gonadal dysgenesis. Objective: We present a case of triplet pregnancy with a complete hydatidiform mole, a condition carrying a significant risk to both mother and fetuses and, therefore, raising an important issue on prenatal care. Case Report: A 36-year-old patient with gonadal dysgenesis and a 46,XY karyotype successfully conceived a triplet pregnancy after oocyte donation and in vitro fertilization. At mid-trimester, the pregnancy was seen harboring a hydatidiform mole along with two other fetuses by ultrasound. Fetal karyotyping of both fetuses revealed normal results. Serum human chorionic gonadotropin levels were followed up throughout the remainder of pregnancy. At 33 weeks of gestation, preeclampsia ensued with worsening of maternal renal function and high blood pressure, so cesarean section was arranged to deliver a set of two surviving twins. Prophylactic bilateral gonadectomy was done at the same time to curtail the possibility of future malignancy development. Upon pathologic examination of the placentae, hydropic chorionic villi with central cistern formation and nonpolar trophoblastic hyperplasia with atypia and necrosis were found, compatible with complete hydatidiform mole. The gonads showed streaks of fibrous tissue, which resembled ovarian stroma and hilus cells, and an unremarkable tube. Maternal serum human chorionic gonadotropin levels declined gradually to normal level at two months after delivery. Conclusion: This is the first report of triplet pregnancy complicated with one complete hydatidiform mole and preeclampsia in a 46,XY female with gonadal dysgenesis. Our case demonstrated that prolonged gestation with both surviving fetuses was possible by applying intensive monitoring of the whole pregnancy. abstract_id: PUBMED:28764583 Expression and significance of aquaporin-2 and serum hormones in placenta of patients with preeclampsia. To evaluate the relationship between serum hormone or aquaporin-2 (AQP-2) and preeclampsia, patients with severe preeclampsia (A group), mild preeclampsia (B group), chronic hypertension (C group) and normal pregnant women (D group) were recruited and analysed. The AQP-2 level in placenta tissues was detected and the correlations of AQP-2 with serum hormone levels were analysed using linear correlation regression analysis. 
The differences in alpha foetal protein (AFP) and human chorionic gonadotropin (HCG) levels during mid-pregnancy, as well as the levels of AFP, HCG, unconjugated oestriol and progesterone during late pregnancy, were significant among the A, B, C and D groups (p < .05). The AQP-2 level in placenta tissues was higher in the A group than in the other groups (p < .05). AQP-2 was correlated with HCG (p < .05). In conclusion, AQP-2 may be involved in the development of severe preeclampsia, which may be related to serum HCG. Answer: Yes, serum human chorionic gonadotropin (hCG) values can be used in the differential diagnosis of pregnancy complicated by hypertension. A study found that the serum hCG level is a useful laboratory tool when managing and treating hypertensive disorders that complicate pregnancy, and is especially significant in severe preeclampsia and superimposed preeclampsia. A high serum hCG level can be a helpful marker in diagnosis and clinical management, helping to prevent possible complications resulting from severe and superimposed preeclampsia (PUBMED:15117595). Additionally, another study showed that serum β-hCG might be used as a predictor of hypertensive disorders that complicate pregnancy (PUBMED:28556194); a worked example based on its reported figures follows below. Furthermore, elevated hCG levels have been associated with adverse pregnancy outcomes, including pregnancy-induced hypertension (PUBMED:10521763). Moreover, the expression of aquaporin-2 (AQP-2) in placenta tissues, which was found to be higher in patients with severe preeclampsia, was correlated with serum hCG levels, suggesting a potential relationship between hCG and the development of severe preeclampsia (PUBMED:28764583). In summary, serum hCG values have been shown to be potentially useful in the differential diagnosis and prediction of hypertensive disorders in pregnancy, particularly in distinguishing between different types of hypertensive conditions such as chronic hypertension, mild preeclampsia, and severe preeclampsia.
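As a quick arithmetic check on the predictive values quoted above from PUBMED:28556194, the short sketch below recomputes the positive and negative predictive values from the reported sensitivity (92.6%), specificity (94.9%) and cohort composition (27 of 164 women developed a hypertensive disorder). The function and variable names are illustrative only; the study itself did not publish code.

```python
# Recompute PPV and NPV from the figures reported in PUBMED:28556194.
# Sensitivity, specificity and group sizes come from the abstract; the
# function below is an illustrative sketch, not part of the study.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) of a binary test applied at a given disease prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_pos / (true_pos + false_pos), true_neg / (true_neg + false_neg)

prevalence = 27 / (27 + 137)  # 27 of 164 women later developed hypertension
ppv, npv = predictive_values(sensitivity=0.926, specificity=0.949, prevalence=prevalence)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
# ~78% and ~98.5%, consistent with the 78.1% and 98.5% reported in the abstract
# (small deviations come from rounding of the published sensitivity and specificity).
```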
Instruction: Drug users in Amsterdam: are they still at risk for HIV? Abstracts: abstract_id: PUBMED:23970642 Drug and sexual HIV risk behaviours related to knowledge of HIV serostatus among injection drug users in Houston, Texas. This study examines the association between drug and sexual HIV risk behaviours and knowledge of HIV serostatus among a sample of injection drug users, recruited into the 2009 National HIV Behavioral Surveillance project. We calculated prevalence ratios and associated 95% confidence intervals of reporting a given risk behaviour comparing injection drug users unaware of their serostatus and HIV-negative to HIV-positive injection drug users. Of 523 participants, 21% were unaware of their HIV serostatus. The three groups were not different from each other in terms of drug-use behaviours; however, injection drug users unaware of their HIV serostatus were 33% more likely to report having more than three sexual partners in the past 12 months and 45% more likely to report having unprotected sex compared to HIV-positive injection drug users. We observed markedly higher prevalence of sexual risk behaviours among injection drug users unaware of their serostatus, but drug-use risk behaviours were similar across the groups. abstract_id: PUBMED:21324140 Prevalence of HIV among injection drug users in Georgia. Background: Injection drug use remains a major risk factor for HIV transmission in Georgia. The study aims to characterize the prevalence of HIV among injection drug users in Georgia. Methods: A cross-sectional, anonymous bio-behavioural survey to assess knowledge and behaviour in injection drug users in combination with laboratory testing on HIV status was conducted in five Georgian cities (Tbilisi, Gori, Telavi, Zugdidi and Batumi) in 2009. A snowball sample of 1127 eligible injection drug user participants was investigated. Results: Odds of HIV exposure were increased for injection drug users of greater age, with greater duration of drug use and with a history of imprisonment or detainment (p &lt; 0.05). Conclusions: More research is required to analyze the determinants of HIV risk in Georgian injection drug users. The imprisoned population and young injection drug users may be appropriate target groups for programmes aimed at preventing HIV transmission. abstract_id: PUBMED:34728868 The correlates and predictive validity of HIV risk groups among drug users in a community-based sample: methodological findings from a multi-site cluster analysis. Outreach and intervention with out-of-treatment drug users in their natural communities has been a major part of our national HIV-prevention strategy for over a decade. Intervention design and evaluation is complicated because this population has heterogeneous patterns of HIV risk behaviors. The objectives of this paper are to: (a) empirically identify the major HIV risk groups; (b) examine how these risk groups are related to demographics, interactions with others, risk behaviors, and community (site); and (c) evaluate the predictive validity of these risk groups in terms of future risk behaviors. Exploratory cluster analysis of a sample of 4445 out-of-treatment drug users from the national data set identified eight main risk subgroups that could explain over 99% of the variance in the 20 baseline indices of HIV risk. 
We labeled these risk groups: Primary Crack Users (29.2%), Cocaine and Sexual Risk (12.8%), High Poly Risk Type 2 (0.3%), Poly Drug and Sex Risk (10.9%), Primary Needle Users (24.1%), High Poly Risk Type 1 (1.4%), High Frequency Needle Users (19.8%), and High Risk Needle Users (1.6%). Risk group membership was highly related to HIV characteristics (testing, sero-status), demographics (gender, race, age, education), status (marital, housing, employment, and criminal justice), prior target populations (needle users, crack users, pattern of sexual partners), and geography (site). Risk group membership explained 63% of the joint distribution of the original 20 HIV risk behaviors 6 months later (ranging from 0.03 to 37.2% of the variance individual indices). These analyses were replicated with both another 25% sample from the national data set and an independent sample collected from a new site. These findings suggest HIV interventions could probably be more effective if they targeted specific subgroups and that evaluations would be more sensitive if they consider community and sub-populations when evaluating these interventions. abstract_id: PUBMED:21390967 HIV prevalence and sexual risk behaviour among non-injection drug users in Tijuana, Mexico. Unlabelled: Prior studies estimate HIV prevalence of 4% among injection drug users (IDUs), compared with 0.8% in the general population of Tijuana, Mexico. However, data on HIV prevalence and correlates among non-injecting drug users (NIDUs) are sparse. Individuals were recruited through street outreach for HIV testing and behavioural risk assessment interviews to estimate HIV prevalence and identify associated sexual risk behaviours among NIDUs in Tijuana. Descriptive statistics were used to characterise 'low-risk' NIDUs (drug users who were not commercial sex workers or men who have sex with men). Results showed that HIV prevalence was 3.7% among low-risk NIDUs. During the prior six months, 52% of NIDUs reported having &gt;1 casual partner; 35% reported always using condoms with a casual partner; and 13% and 15%, respectively, reported giving or receiving something in exchange for sex. Women were significantly more likely than men to have unprotected sex with an IDU (p&lt;0.01). Conclusions: The finding that HIV prevalence among NIDUs was similar to that of IDUs suggests that HIV transmission has occurred outside of traditional core groups in Tijuana. Broad interventions including HIV testing, condom promotion and sexual risk reduction should be offered to all drug users in Tijuana. abstract_id: PUBMED:27423099 Perceived risk for severe outcomes and drinking status among drug users with HIV and Hepatitis C Virus (HCV). Objective: Among drug users with HIV and Hepatitis C Virus (HCV) infections, heavy drinking can pose significant risks to health. Yet many drug users with HIV and HCV drink heavily. Clarifying the relationship of drug-using patients' understanding of their illnesses to their drinking behavior could facilitate more effective intervention with these high-risk groups. Method: Among samples of drug users infected with HIV (n=476; 70% male) and HCV (n=1145; 81% male) recruited from drug treatment clinics, we investigated whether patients' perceptions of the risk for severe outcomes related to HIV and HCV were associated with their personal drinking behavior, using generalized logit models. Interactions with co-infection status were also explored. 
Results: HIV-infected drug users who believed that HIV held the highest risk for serious outcomes were the most likely to be risky drinkers, when compared with those with less severe perceptions, χ²(6) = 14.19, p < 0.05. In contrast, HCV-infected drug users who believed that HCV held moderate risk for serious outcomes were the most likely to be risky drinkers, χ²(6) = 12.98, p < 0.05. Conclusions: In this sample of drug users, risky drinking was most common among those with HIV who believed that severe outcomes were inevitable, suggesting that conveying the message that HIV always leads to severe outcomes may be counterproductive in decreasing risky drinking in this group. However, risky drinking was most common among those with HCV who believed that severe outcomes were somewhat likely. Further research is needed to understand the mechanisms of these associations. abstract_id: PUBMED:29378631 Socio-demographic and sexual practices associated with HIV infection in Kenyan injection and non-injection drug users. Background: Substance use is increasingly becoming prevalent on the African continent, fueling the spread of HIV infection. Although socio-demographic factors influence substance consumption and risk of HIV infection, the association of these factors with HIV infection is poorly understood among substance users on the African continent. The objective of the study was to assess socio-demographic and sexual practices that are associated with HIV infection among injection drug users (IDUs), non-IDUs, and non-drug users (DUs) in an urban setting of coastal Kenya. Methods: A cross-sectional descriptive study was conducted among 451 adults comprising HIV-infected and -uninfected IDUs (n = 157 and 39), non-IDUs (n = 17 and 48), and non-DUs (n = 55 and 135), respectively, at coastal Kenya. Respondent-driven sampling, snowball and makeshift methods were used to enroll IDUs and non-IDUs. Convenience and purposive sampling were used to enroll non-DUs from the hospital's voluntary HIV testing unit. A participant-assisted questionnaire was used to collect socio-demographic data and sexual practices. Results: Binary logistic regression analysis indicated that a higher likelihood of HIV infection was associated with sex for police protection (OR, 9.526; 95% CI, 1.156-78.528; P = 0.036) and history of sexually transmitted infection (OR, 5.117; 95% CI, 1.924-13.485; P = 0.001) in IDUs; divorced, separated or widowed marital status (OR, 6.315; 95% CI, 1.334-29.898; P = 0.020) in non-IDUs; and unemployment (OR, 2.724; 95% CI, 1.049-7.070; P = 0.040) in non-drug users. However, never married (single) marital status (OR, 0.140; 95% CI, 0.030-0.649; P = 0.012) was associated with lower odds for HIV infection in non-drug users. Conclusion: Altogether, these results suggest that socio-demographic and sexual risk factors for HIV transmission differ with drug use status, suggesting targeted preventive measures for drug users.
Determinants of unprotected sex were studied using logistic regression analysis. Results: The HIV incidence among 1298 participants of the ACS with a total follow-up of 12,921 person-years (PY) declined from 6.0/100 PY (95% confidence interval [CI] 3.2-11.1) in 1986 to less than 1/100 PY from 1997 onwards. Both injection and sexual risk behaviour declined significantly over time. Out of 197 participants screened for STI in 2010-2011, median age 49 years (IQR 43-59), only 5 (2.5%) were diagnosed with an STI. In multivariable analysis, having a steady partner (aOR 4.1, 95% CI 1.6-10.5) was associated with unprotected sex. HIV-infected participants were less likely to report unprotected sex (aOR 0.07, 95% CI 0.02-0.37). Conclusions: HIV incidence and injection risk behaviour declined from 1986 onwards. STI prevalence is low; unprotected sex is associated with steady partners and is less common among HIV-infected participants. These findings indicate a low transmission risk of HIV and STI, which suggests that DU do not play a significant role in the current spread of HIV in Amsterdam. abstract_id: PUBMED:27066590 The Impact of Methadone Maintenance Treatment on HIV Risk Behaviors among High-Risk Injection Drug Users: A Systematic Review. Injection drug users (IDUs) are at high risk of acquiring HIV infection through preventable drug- and sex-related HIV risk behaviors. In recent decade, there has been a growing evidence that methadone maintenance treatment (MMT) is associated with a significant decrease in both drug- and sex-related risk behaviors among this high-risk population. The better understanding of the relationship between MMT and HIV-related risk behaviors will help to better inform future HIV prevention strategies, which may have policy implications as well. In this systematic review, we therefore aimed to explore the relevant literature to more clearly examine the possible impact of MMT on HIV risks behaviors among high-risk IDUs. The findings thus far suggest that MMT is associated with a significant decrease in injecting drug use and sharing of injecting equipment. Evidence on sex-related risk behavior is limited, but suggest that MMT is associated with a lower incidence of multiple sex partners and unprotected sex. The literature also suggests that the most significant factor in reducing HIV risks was treatment adherence. As such, more attention needs to be given in future studies to ensure the higher rates of access to MMT as well as to improve the adherence to MMT. abstract_id: PUBMED:21748277 Psychiatric, behavioural and social risk factors for HIV infection among female drug users. Female drug users report greater psychopathology and risk behaviours than male drug users, putting them at greater risk for HIV. This mixed-methods study determined psychiatric, behavioural and social risk factors for HIV among 118 female drug users (27% (32/118) were HIV seropositive) in Barcelona. DSM-IV disorders were assessed using the Spanish Psychiatric Research Interview for Substance and Mental Disorders. 30 participants were interviewed in-depth. In stepwise multiple backward logistic regression, ever injected with a used syringe, antisocial personality disorder, had an HIV seropositive sexual partner and substance-induced major depressive disorder were associated with HIV seropositivity. Qualitative findings illustrate the complex ways in which psychiatric disorders and male drug-using partners interact with these risk factors. 
Interventions should address all aspects of female drug users' lives to reduce HIV. abstract_id: PUBMED:26310336 A study on the risk and its determinants of HIV transmission by syringe sharing among HIV-positive drug users Objective: To understand the risks and associated factors of HIV transmission by sharing syringes among HIV-positive drug users. Method: The survey was conducted among HIV-positive injecting drug users (IDUs-HIV+) who received HIV counseling, testing and treatment in Changsha city Infectious Disease Hospital and Hengyang city No.3 People's Hospital from July 2012 to May 2013 to understand their socio-demographic characteristics, HIV prevalence and syringe sharing. A total of 503 IDUs-HIV+ were involved in and provided the contact list of 2 460 drug users who had the syringe sharing experience over one month with IDUs-HIV+. 420 IDUs-HIV+ among 503 were defined as infection sources due to sharing syringe with at least one drug user. Among them, 234 HIV-negative persons were in control group, and 186 HIV-positive were in cased group. A total of 1 220 drug users were followed up among 2 460 and defined as vulnerable population. The HIV transmission rate was calculated based on the HIV prevalence among vulnerable population. Based on the result of HIV transmission to vulnerable population from 420 infection sources, case-control study and the multivariate logistic regression analysis were adopted to explore the associated factors of HIV transmission among IDUs-HIV+. Results: As the sources of HIV transmission, 420 IDUs-HIV+ had an average duration of (4.5 ± 1.2) years for drug use. As a susceptible population, 1 220 drug users sharing syringes with the 420 IDUs-HIV+ had an average duration of (1.1 ± 0.5) years for drug use. There were 238 HIV-positive persons among 1 220 vulnerable drug users, with a transmission rate of 0.57. In the case-control study, the proportion of male subjects was 87.1% (162/186) in the case group, which was higher than that in the control group (77.8%, 182/234). The proportion of subjects who received support after knowing their HIV infection status was 51.1% (95/186) in the case group, which was lower than that in the control group (79.5%, 186/234). The proportion of subjects sharing syringes every time of using drugs was 47.8% (89/186) in the case group, which was higher than that in the control group (36.8%, 86/234). The proportion of subjects having AIDS awareness was 21.0% (39/186) in the case group, which was lower than that in the control group (64.5%, 151/234); the proportion of subjects having close contact with HIV-positive persons for more than 106 days was 60.2% (112/186) in the case group, which was higher than that in the control group (31.6%, 74/234). The proportion of subjects maintaining the original drug use method after being infected with HIV was 50.5% (94/186) in the case group, which was higher than that in the control group (16.7%, 39/234) (all P values &lt; 0.05). 
The multivariate logistic regression analysis was carried out to analyse high correlate factors of HIV transmission by sources of transmission, and the AIDS awareness, duration of contact between sources of transmission and vulnerable population, access to support following confirmed HIV infection were protective factors, OR (95% CI) values were 0.155 (0.104-0.262), 0.170 (0.106-0.253), and 0.306 (0.189-0.450), respectively; while the frequency of syringe sharing and continuous drug use after being infected with HIV were risk factors, and the OR (95% CI) values were 3.06 (1.77-5.29), and 3.54 (2.16-5.80), respectively. Conclusion: HIV transmission by IDUs-HIV+ might be contained by raising AIDS awareness, providing comprehensive psychological support, conducting needle exchange and methadone maintenance treatment and reducing syringe sharing. Answer: According to the study abstract from PUBMED:23527107, drug users (DU) in the Amsterdam Cohort Study (ACS) have shown a significant decline in HIV incidence and injecting and sexual risk behavior from 1986 to 2011. The HIV incidence among participants declined from 6.0 per 100 person-years in 1986 to less than 1 per 100 person-years from 1997 onwards. Additionally, both injection and sexual risk behavior declined significantly over time. A screening for sexually transmitted infections (STIs) performed in 2010-2011 among participants, with a median age of 49 years, revealed a low STI prevalence of only 2.5%. Unprotected sex was found to be associated with having a steady partner and was less common among HIV-infected participants. These findings suggest that the transmission risk of HIV and STIs is low, indicating that drug users do not play a significant role in the current spread of HIV in Amsterdam.
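The Amsterdam Cohort answer above reports HIV incidence as events per 100 person-years with a 95% confidence interval (e.g. 6.0/100 PY, 95% CI 3.2-11.1). The sketch below shows how such a rate and an exact Poisson interval are conventionally computed; the case and follow-up counts used here are hypothetical placeholders, since the abstract does not give the underlying event counts.

```python
# Illustrative calculation of an incidence rate per 100 person-years with an
# exact (Poisson) confidence interval, as used in PUBMED:23527107.
# The counts below are hypothetical; they are NOT taken from the ACS data.
from scipy.stats import chi2

def incidence_per_100_py(cases: int, person_years: float, alpha: float = 0.05):
    """Rate per 100 person-years and its exact Poisson confidence interval."""
    rate = 100 * cases / person_years
    lower = 100 * (chi2.ppf(alpha / 2, 2 * cases) / 2) / person_years if cases else 0.0
    upper = 100 * (chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2) / person_years
    return rate, (lower, upper)

# Hypothetical example: 12 seroconversions over 200 person-years of follow-up.
rate, (lo, hi) = incidence_per_100_py(cases=12, person_years=200)
print(f"{rate:.1f} per 100 PY (95% CI {lo:.1f}-{hi:.1f})")  # 6.0 per 100 PY (95% CI 3.1-10.5)
```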
Instruction: Does very advanced maternal age, with or without egg donation, really increase obstetric risk in a large tertiary center? Abstracts: abstract_id: PUBMED:20707613 Does very advanced maternal age, with or without egg donation, really increase obstetric risk in a large tertiary center? Objective: to assess complications of very advanced maternal age (VAMA) pregnancies ≥ 45 years with and without egg donation (ED). Study Design: obstetric and neonatal complications were studied in 20,659 singleton pregnancies according to three maternal age groups: 20-39, 40-44 [advanced maternal age (AMA)] and ≥ 45 years (VAMA). Twenty pregnancies within the AMA/LAMA group that were achieved with ED were compared with age-matched controls. Results: AMA mothers were more likely to have higher rates of preterm deliveries (OR 1.25), cesarean sections (OR 1.84), hypertension (OR 1.71) and diabetes (OR 2.45). Their newborns were more frequently small for gestational age (OR 1.30), and were more likely to have high rates of respiratory distress syndrome (OR 1.66), neonatal intensive care admission (OR 1.40) and perinatal/neonatal mortality (OR 1.83). VAMA pregnancies had a >50% cesarean section rate and a high rate of diabetes (OR 2.29), hypertension (OR 1.54) and postpartum hemorrhage (OR 5.38). Congenital anomalies were more common among ED pregnancies. Conclusions: the higher rate of pregnancy complications for women ≥ 40 years is not further increased after 45 years of age. abstract_id: PUBMED:32921559 Assisted conception in women of advanced maternal age. A delay in childbearing to later in life has increased the number of women of advanced maternal age (AMA) opting for assisted reproduction. Women should be made aware that there are age-related changes to fertility, including a decline in oocyte reserve and quality, in addition to an increase in the number of oocyte chromosomal aberrations. Success rates of assisted reproductive technology (ART) cycles decrease with advanced maternal age. There are different fertility options for women of AMA, including fertility preservation (oocyte or embryo freezing), in vitro fertilisation (IVF treatment) with or without preimplantation genetic screening and oocyte or embryo donation. Detailed counselling needs to be offered to these women with regard to the risks, success rates, ethical and legal implications of these fertility treatment options. Women of AMA should be screened for underlying medical conditions that could have an impact on maternal and neonatal morbidity and mortality. abstract_id: PUBMED:35537719 Pregnancy outcomes at advanced maternal age in a tertiary Hospital, Jeddah, Saudi Arabia. Objectives: To evaluate obstetrical and fetal outcomes among advanced maternal age (AMA) women. Methods: Retrospective cohort study carried out at a teaching hospital, Jeddah, Saudi Arabia, during an 18-year period (from January 2003 until December 2020). A total of 79,095 women gave birth, and a randomized block design was used to include 4,318 women with singleton pregnancies (>28 gestational weeks), of whom 2,162 were aged ≥40 years. Associations between AMA and obstetrical and fetal parameters were assessed. Results: Advanced maternal age was independently associated with non-Saudi nationality, maternal weight of 80-99 kg, diabetes mellitus, and hypertension. Advanced maternal age mothers were more liable to premature rupture of membranes (PROM), caesarean (CS) deliveries, and postpartum hemorrhage.
Newborns of AMA women were at high risk of birth weight <2500 g, birth weight 3600-4500 g, a low Apgar score at 5 minutes, and neonatal intensive care unit (NICU) admissions. Conclusion: Advanced maternal age is an independent risk factor for adverse obstetric hazards such as CS, antepartum haemorrhage, diabetes mellitus, hypertension, PROM and postpartum hemorrhage, and for fetal complications such as low birth weight, macrosomia, NICU admission, congenital anomalies, and low Apgar score. These results must be carefully considered by maternal care providers to effectively improve clinical surveillance. abstract_id: PUBMED:29458905 Obstetric outcomes of twin pregnancies at advanced maternal age: A retrospective study. Objective: To evaluate obstetric outcomes in twin pregnancies of advanced maternal age (≥35 years). Materials And Methods: A retrospective study involved 470 twin pregnancies in a single center from Sep. 1, 2012 to Mar. 31, 2015. Clinical characteristics and obstetric outcomes were recorded and compared among twin pregnancies that were classified as follows: age 20-29, 30-34, 35-39 and ≥40 years. Results: The incidence of gestational diabetes (age 20-29 years 15.8%; 30-34 years 24.3%; 35-39 years 30.4%; ≥40 years 57.1%; p = 0.004) and premature delivery (20-29 years 58.6%; 30-34 years 69.1%; 35-39 years 72.2%; ≥40 years 85.7%; p = 0.001) significantly increased with increasing age, whereas spontaneous abortion (20-29 years 27.6%; 30-34 years 11.6%; 35-39 years 11.4%; ≥40 years 0.0%; p = 0.021) decreased in twin pregnancies of advanced maternal age. In addition, the rate of postpartum hemorrhage increased almost continuously with age, and advanced maternal age was described as a risk factor for postpartum hemorrhage (age 35-39, adjusted OR 3.377; 95% confidence interval 1.729-6.598; p < 0.001; age ≥ 40, adjusted OR 10.520; 95% CI 1.147-96.492; p = 0.037). However, there was no significant association between advanced maternal age and adverse neonatal outcomes. Conclusion: In twin pregnancies, advanced maternal age carried a significantly higher risk of postpartum hemorrhage, gestational diabetes and premature delivery. Neither adverse neonatal outcomes nor stillbirth was significantly associated with maternal age. abstract_id: PUBMED:24299057 The risks and outcome of pregnancy in an advanced maternal age in oocyte donation cycles. The maternal age at the first and repeated deliveries constantly rises in developed countries due to current social trends that favor values of personal achievements upon procreation. Assisted reproduction technologies, and especially the availability of oocyte donation programs, extend the age of fecundity to the fifth and sixth decades of life. The ability to conceive and deliver at such an age raises serious medical, moral, social and legal concerns regarding the health and welfare of the mother and child, which will be presented and discussed here. abstract_id: PUBMED:31736510 Influence of Maternal Age on Selected Obstetric Parameters. Introduction In recent decades, there has been a continuous rise in the average age at which women give birth. A maternal age of 35 years and above is considered an independent risk factor in pregnancy and birth, due to higher rates of intervention. This study investigates the influence of maternal age on birth procedure, gestational age, and rate of interventions during delivery. The influence of maternal parity is also analyzed. Material and Methods Data from the Austrian Register of Births was retrospectively collected and evaluated.
The collected data was the data of all singleton live births in Austria between January 1, 2008 and December 31, 2016 (n = 686 272). Multiple births and stillbirths were excluded from the study. Maternal age and parity were analyzed in relation to predefined variables (birth procedure, gestational age, episiotomy in cases of vaginal delivery, epidural anesthesia in both vaginal and cesarean deliveries, and intrapartum micro-blood gas analysis). Statistical data was evaluated using (1) descriptive univariate analysis, (2) bivariate analysis, and (3) multinomial regression models. Results The cesarean section rate and the rate of surgically-assisted vaginal deliveries increased with advancing maternal age, especially in primiparous women, while the rate of spontaneous deliveries decreased with increasing maternal age. A parity of ≥ 2 had a protective effect on the cesarean section rate. The rate of premature births also increased with increasing maternal age, particularly among primiparous women. Discussion Although higher maternal age has a negative effect on various obstetric parameters, it was nevertheless not possible to identify a causal connection. Maternal age should not be assessed as an independent risk factor; other factors such as lifestyle or prior chronic disease and parity must be taken into consideration. abstract_id: PUBMED:27131580 The association between maternal age at first delivery and risk of obstetric trauma. Background: There are a number of poor birth outcomes with advancing maternal age. Although there is some evidence of a higher risk of trauma to obstetric anal sphincter and the levator ani muscle with advancing age, findings to date are inconclusive. Objective: The aim of this study was to assess the risk of pelvic floor injury using translabial 3- and 4-dimensional ultrasound relative to advancing maternal age in primiparous women after a singleton vaginal delivery at term and to determine any association between maternal age and obstetric trauma, including obstetric anal sphincter injuries, levator avulsion, and irreversible overdistension of the levator hiatus. Study Design: This is a subanalysis of a perinatal intervention trial conducted in a specialist urogynecology referral unit at 2 tertiary units. All primiparous women with singleton birth at term underwent 3- and 4-dimensional translabial pelvic floor ultrasound both ante- and postnatally for the assessment of the obstetric trauma including levator ani muscle avulsion, hiatal overdistension to 25 cm(2) or more, and obstetric anal sphincter injuries. A multivariate logistic regression analysis was performed to examine the association between maternal age and obstetric trauma diagnosed on 3- and 4-dimensional translabial ultrasound. Multiple confounders were included, and the most significant (forceps and vacuum delivery) were used for probability modeling. Results: Of 660 women recruited for the original study, a total of 375 women who had a vaginal delivery with complete data sets were analyzed. A total of 174 women (46.4%) showed evidence of at least 1 form of major pelvic floor trauma. Advancing maternal age at first delivery carries with it a significant incremental risk of major pelvic floor trauma with an odds ratio of 1.064 for overall risk of injury for each increasing year of age past age 18 years (P = .003). The probability of any type of trauma appears to be substantially higher for forceps delivery. 
Vacuum delivery appears to increase the risk of obstetric anal sphincter injuries but not of levator avulsion. Conclusion: There is a significant association between the risk of major pelvic floor injury and increasing maternal age at first delivery. abstract_id: PUBMED:27743695 Oocyte donation recipients of very advanced age: perinatal complications for singletons and twins. Objective: To compare maternal, obstetric, and neonatal outcomes between women who underwent oocyte donation at or after age 50 years and from 45 through 49 years. Design: Single-center, retrospective cohort study. Setting: Maternity hospital. Patient(s): Forty women aged 50 years and older ("older group") and 146 aged 45-49 years ("younger group"). Intervention(s): Comparison between the older and younger groups, globally and after stratification by type of pregnancy (singleton/twin pregnancy). Main Outcome Measure(s): Maternal, obstetric, and neonatal outcomes. Result(s): The rate of multiple-gestation pregnancies was similar in both groups (35% in the older and 37.7% in the younger group). We observed no significant difference globally between the two groups for outcomes, except for the mean duration of postpartum hospitalization, which was significantly longer among the older women (mean ± SD, 9.5 ± 7.4 days vs. 6.8 ± 4.4 days). The rates of isolated pregnancy-related hypertension and of fetal growth restriction in singleton pregnancies were statistically higher in the older than in the younger group (19.2% vs. 5.5%, and 30.7% vs. 14.3%, respectively). Complication rates with twin pregnancies were similar between groups and very high compared with singleton pregnancies. Conclusion(s): Complication rates were similar among women aged 50 years and older and those aged 45-49 years. Nonetheless, given the high rate of complication in both groups, especially among twin pregnancies, single embryo transfer needs to be encouraged for oocyte donations after age 45 years. abstract_id: PUBMED:24696922 Advanced maternal age and obstetric outcome. Women of advanced maternal age, defined as age 35 years or more at the estimated date of delivery, are considered to have a higher incidence of obstetric complications and adverse pregnancy outcomes than younger women. The objective of this study was to compare the obstetric and perinatal outcome of pregnancies in women with advanced maternal age (≥35 years) with that of younger women (<35 years). A prospective comparative study was carried out in the department of obstetrics and gynecology at Nepal Medical College and Teaching Hospital over the period of one year from October 2012 to September 2013. The obstetric and perinatal outcomes of 90 women with advanced maternal age (study group) were compared with those of 90 younger women aged 20-34 years (control group). Among antenatal complications, women of advanced maternal age had an increased incidence of hypertensive disorder of pregnancy (26.6% vs 4.4%; p = 0.00009) and breech presentation (8.8% vs 1.1%; p = 0.04). There was no significant difference between the two groups in the incidence of antepartum hemorrhage, gestational diabetes mellitus, prelabor rupture of membranes and preterm delivery. The rate of caesarean delivery was significantly higher in advanced maternal age (28.8% vs 17.7%; p = 0.05). In perinatal outcome, older women had a significantly higher incidence of perinatal death (7.7% vs 0%; p = 0.01). There were no significant differences in low birth weight rate and Apgar score less than 7 at five minutes of life between the two groups.
Thus from this study, it can be concluded that advanced age women had higher incidence of hypertensive disorder of pregnancies and malpresentation, were more likely to deliver by caesarean section and had increased incidence of perinatal death. abstract_id: PUBMED:17727845 Obstetric outcomes in donor oocyte pregnancies compared with advanced maternal age in in vitro fertilization pregnancies. Objective: To evaluate obstetric complications in women who conceived through donated oocytes compared with women who conceived through assisted reproduction using autologous oocytes. Design: Retrospective cohort analysis. Setting: Stanford Hospital and Clinics and Lucille Packard Children's hospital, both tertiary referral centers. Patient(s): A cohort of 71 oocyte recipients who underwent in vitro fertilization (IVF) were compared to all women over 38 years who conceived through IVF with autologous oocytes (n = 108) between January 1, 2001, and December 31, 2005, at Stanford University and subsequently delivered infants at Lucille Packard Children's Hospital. Intervention(s): Assisted reproductive technology with donor oocytes. Main Outcome Measure(s): Obstetric charts of the donor-oocyte recipients were compared for all women over 38 years old who had conceived through IVF with autologous oocytes at the same center (n = 108) and delivered at the same hospital during the same time period. Perinatal complications including preeclampsia, diabetes, preterm labor, preterm premature rupture of membranes and placental abnormalities, mode of delivery, presentation, Apgar scores, gestational age at delivery, and weight were compared between the groups. Result(s): Oocyte recipients and autologous oocyte controls had similar rates of complications of prematurity, hypertensive disorders of pregnancy, gestational diabetes, and placental abnormalities. Infant birth weights and gestational age at time of delivery were similar between the two groups. Conclusion(s): This study suggests that women undergoing IVF with donor oocytes are not at increased risk for complications during pregnancy or at increased immediate neonatal complications compared with women of advanced maternal age undergoing IVF with autologous oocytes. Answer: Very advanced maternal age (VAMA), defined as age 45 years and above, does increase obstetric risk, both with and without egg donation. Studies have shown that women of advanced maternal age (AMA), which typically refers to ages 40-44, and VAMA face higher rates of obstetric complications compared to younger women. These complications include higher rates of preterm deliveries, cesarean sections, hypertension, diabetes, and small for gestational age newborns. Additionally, VAMA pregnancies have a greater than 50% cesarean section rate and a high rate of diabetes, hypertension, and postpartum hemorrhage (PUBMED:20707613). In the context of assisted reproductive technology (ART), success rates decrease with advanced maternal age due to a decline in oocyte reserve and quality, as well as an increase in chromosomal aberrations in oocytes. Women of AMA opting for ART should be informed about the risks, success rates, and implications of fertility treatment options, including oocyte or embryo donation (PUBMED:32921559). 
A study conducted in Jeddah, Saudi Arabia, found that AMA is an independent risk factor for adverse obstetric outcomes such as cesarean deliveries, antepartum hemorrhage, diabetes mellitus, hypertension, premature rupture of membranes (PROM), postpartum hemorrhage, and fetal complications like low birth weight, macrosomia, NICU admission, congenital anomalies, and low Apgar score (PUBMED:35537719). For twin pregnancies, advanced maternal age is associated with a higher risk of postpartum hemorrhage, gestational diabetes, and premature delivery, although not significantly associated with adverse neonatal outcomes or stillbirth (PUBMED:29458905). Oocyte donation programs allow women to conceive and deliver at advanced ages, but this raises concerns regarding the health and welfare of the mother and child (PUBMED:24299057). In summary, very advanced maternal age, with or without egg donation, does indeed increase obstetric risk, and these risks must be carefully considered by healthcare providers to improve clinical surveillance and management of such pregnancies (PUBMED:20707613; PUBMED:32921559; PUBMED:35537719; PUBMED:29458905; PUBMED:24299057).
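Several of the findings summarised above are expressed as odds ratios (for example, OR 2.45 for diabetes in AMA mothers in PUBMED:20707613). The snippet below shows how an odds ratio and its Woolf 95% confidence interval are obtained from a 2x2 table; the cell counts are hypothetical placeholders, since the abstracts report only the resulting ratios.

```python
# How an odds ratio such as "diabetes, OR 2.45" is derived from a 2x2 table.
# The cell counts below are hypothetical and only illustrate the arithmetic.
import math

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio with a Woolf (log-normal) 95% confidence interval."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    se_log = math.sqrt(1 / exposed_cases + 1 / exposed_noncases
                       + 1 / unexposed_cases + 1 / unexposed_noncases)
    lower = math.exp(math.log(or_) - 1.96 * se_log)
    upper = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lower, upper)

# Hypothetical counts: diabetes in 49 of 1000 AMA pregnancies vs 210 of 10000 younger pregnancies.
or_, ci = odds_ratio(49, 951, 210, 9790)
print(f"OR = {or_:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")  # OR ≈ 2.40
```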
Instruction: Out-of-focus shockwaves: a new tissue-protecting therapy? Abstracts: abstract_id: PUBMED:25464642 The focus group The focus group is a research method commonly used in nursing science with the aim of exploring in depth extremely targeted themes. This qualitative approach places the interaction between participants in the foreground. The practices of the researchers who select the focus group are notably heterogeneous. abstract_id: PUBMED:15693429 Out-of-focus shockwaves: a new tissue-protecting therapy? Introduction: It seems that vasoconstriction induced by 12 Kv shock waves reduces kidney lesions caused by subsequent application of 24 Kv shock waves. The lowest shock wave voltage to induce this protective effect is not yet known and may be lower than the common energy setting of commercial lithotripters. Because of this, we propose the application of shock waves as a tissue-protecting method. Materials And Methods: Preliminary pressure measurements were performed on an experimental unmodified HM3 lithotripter (at 12 and 24 Kv), using a 20 ns rise time needle hydrophone connected to a 100 MHz digital oscilloscope. Ten pressure records were obtained at different stages of spark plug aging. A new spark plug was used for each voltage. Pressure measurements were also performed on a Tripter Compact lithotripter at 6 positions along the focal axis, starting at F2 and moving away from the reflector, using maximum voltage and capacitance (22 Kv, HI-2). The position on the focal axis of the Tripter Compact with the same pressure as measured at 12 Kv on the HM3 at F2 was chosen as the prophylactic treatment spot (PTS). In vivo pressure measurements were done on the Tripter Compact, placing the needle hydrophone inside the lower pole of the right kidney of an anesthetized healthy 25 kg female pig. Measurements were done at the same positions mentioned above, without moving the hydrophone, inside the pig. For both in vitro and in vivo measurements, the radiopaque hydrophone was aligned with the focal axis, using the fluoroscopy system of the lithotripter. Results: The mean positive pressure peaks at the second focus of the HM3 lithotripter were 64 and 153 mV at 12 and 24 Kv, respectively. Coefficients of variation were 0.28 and 0.13. No significant pressure differences were detected below 700 and 2220 discharges with the HM3 and the Tripter Compact, respectively. The differences in peak amplitudes are all significant (p < 0.01 in a one-tailed test) with the exception of F2 and F2+1 Ohm. Conclusions: Prophylactic administration of out-of-focus shock waves may reduce tissue damage during SWL. Experiments in vivo are underway in order to prove this hypothesis. abstract_id: PUBMED:19657618 Regenerative medicine in orthopaedics. Cell therapy - tissue engineering - in situ regeneration Therapeutic approaches in regenerative medicine, irrespective of specific fields, comprise cell therapy, tissue engineering and in situ regeneration. Regenerative orthopaedics often leads the way on the path to clinical application. In cell therapy, primary cells could be replaced by adult mesenchymal stem cells exhibiting almost unlimited regeneration capacity. More sophisticated biomaterial design allowing specific control of cell morphology and tissue organisation is the current focus of advancements in tissue engineering, while signalling to cells by intelligent biomaterials is a main focus of in situ regeneration.
These new approaches to the reconstruction of structures and function in damaged or dysfunctional tissue will make it more often possible to achieve a sustainable improvement in terms of real regeneration rather than an acceptable repair. abstract_id: PUBMED:20681216 Human tissue samples for research. A focus group study in adults and teenagers in Flanders. A focus group study in adults and teenagers in Flanders: Attitudes towards research on human stored tissue samples may be dependent on the cultural context. To-day, no data exist on the attitudes and values of the Flemish population towards such research. To query these attitudes, we conducted ten focus groups, composed of adults and of minors on the verge of legal competence. Amongst the focus group participants, we found a trust in the advancement of science, and a willingness to contribute tissue to research. The importance attributed to informed consent depended on the type of tissue donated and the effort needed to contribute. Participants did not see high risk associated with research on stored tissue, but thought there was a need for confidentiality protections. The coding of samples was deemed an appropriate protection. With regard to the return of research results, people expected to receive information that could be relevant to them, but the meaning of what is relevant was different between individuals. abstract_id: PUBMED:29712190 Orthogonal Photolysis of Protecting Groups. The selective activation of photolabile protecting groups was made possible by the use of monochromatic light of suitable wavelength. This new approach allowed the orthogonal deprotection of a substrate containing several photosensitive groups. abstract_id: PUBMED:22554978 Art therapy focus groups for children and adolescents with epilepsy. Children with epilepsy are at risk for numerous psychological and social challenges. We hypothesized that art therapy focus groups would enhance the self-image of children and adolescents with epilepsy. Sixteen children with epilepsy, ages 7-18 years, were recruited from pediatric neurology clinics at the University of Wisconsin to participate in four art therapy sessions. Pre-group assessments included psychological screens (Piers-Harris Children's Self-Concept Scale; Childhood Attitude Toward Illness Scale; Impact of Childhood Neurologic Disability Scale) and art therapy instruments (Formal Elements Art Therapy Scale; Seizure Drawing Task; Levick Emotional and Cognitive Art Therapy Assessment). Developmental levels of drawings were significantly below age-expected standards. Following completion of focus groups, a repeat Childhood Attitude Toward Illness Scale showed no differences between pre- and post-test scores on any measure of this scale. However, subjects and parents were uniformly positive about their group experiences, suggesting a qualitative benefit from participation in art therapy focus groups. abstract_id: PUBMED:31895493 Advances in Protecting Groups for Oligosaccharide Synthesis. Carbohydrates contain numerous hydroxyl groups and sometimes amine functionalities which lead to a variety of complex structures. In order to discriminate each hydroxyl group for the synthesis of complex oligosaccharides, protecting group manipulations are essential. Although the primary role of a protecting group is to temporarily mask a particular hydroxyl/amino group, it plays a greater role in tuning the reactivity of coupling partners as well as regioselectivity and stereoselectivity of glycosylations. 
Several protecting groups offer anchimeric assistance in glycosylation. They also alter the solubility of substrates and thereby influence the reaction outcome. Since oligosaccharides comprise branched structures, the glycosyl donors and acceptors need to be protected with orthogonal protecting groups that can be selectively removed one at a time without affecting other groups. This minireview is therefore intended to provide a discussion on new protecting groups for amino and hydroxyl groups, which have been introduced over the last ten years in the field of carbohydrate synthesis. These protecting groups are also useful for synthesizing non-carbohydrate target molecules. abstract_id: PUBMED:30112080 An amine protecting group deprotectable under nearly neutral oxidative conditions. The 1,3-dithiane-based dM-Dmoc group was studied for the protection of amino groups. Protection was achieved under mild conditions for aliphatic amines, and under highly reactive conditions for the less reactive arylamines. Moderate to excellent yields were obtained. Deprotection was performed by oxidation followed by treating with a weak base. The yields were good to excellent. The new amino protecting group offers a different dimension of orthogonality in reference to the commonly used amino protecting groups in terms of deprotection conditions. It is expected to allow a collection of transformations to be carried out on the protected substrates that are unattainable using any known protecting groups. abstract_id: PUBMED:24550131 Rapid assessment of protecting-group stability by using a robustness screen. An experimentally simple method has been developed to rapidly establish the stability of widely utilized silyl, acetal, and carbamate protecting groups to a given set of reaction conditions. Assessment of up to twelve protecting groups in a single experiment has been demonstrated. Evaluation of this protocol in two unrelated synthetic transformations suggests that this method can be used to select appropriate protecting groups in the design of synthetic routes. abstract_id: PUBMED:36395933 Revealing the relationship between molecular weight of lignin and its color, UV-protecting property. Lignin has great potential as a natural, green, and sustainable broad-spectrum sunscreen active ingredient. However, the coexistence of dark color and sunscreen properties hinders its application in cosmetics. In this study, we focus on the effects of the molecular weight of lignin on its UV-protecting property and color in order to prepare lignin-based sunscreen with high performance. A prepared sunscreen containing low molecular weight lignin (F5, <1000 g/mol) exhibits good UV-protecting property (sun protection factor (SPF) = 7.14) and light color advantages (ΔE = 46.2). Moreover, a strong synergistic effect on UV-protecting property exists between low molecular weight lignin and ethylhexyl methoxycinnamate (EHMC), resulting in a high SPF for the F5@EHMC-based sunscreen (55.56). Additionally, added TiO2 can efficiently mitigate the dark color of lignin-based sunscreens due to the prominent covering power of TiO2. Moreover, lignin-based sunscreens have good biocompatibility with HaCaT cells. This work is useful for understanding the mechanism of the UV-protecting property and dark color of lignin, and for designing an efficient and safe lignin-based sunscreen. Answer: The abstract from PUBMED:15693429 discusses the concept of out-of-focus shockwaves as a potential new tissue-protecting therapy.
The study introduces the idea that vasoconstriction induced by lower voltage shock waves (12 Kv) can reduce kidney lesions caused by subsequent application of higher voltage shock waves (24 Kv). The research aims to determine the lowest shock wave voltage that can induce this protective effect, which may be lower than the energy settings commonly used in commercial lithotripters. The study proposes using shock waves as a method to protect tissues. In the study, preliminary pressure measurements were performed using an HM3 lithotripter and a Tripter compact lithotripter to identify the prophylactic treatment spot (PTS) that corresponds to the same pressure as the 12 Kv shock waves on the HM3. In vivo pressure measurements were also conducted on a pig's kidney to validate the findings. The results indicated that prophylactic administrations of out-of-focus shock waves might reduce tissue damage during shock wave lithotripsy (SWL), and further in vivo experiments were planned to prove this hypothesis. In summary, the abstract suggests that out-of-focus shockwaves could be a novel approach to reducing tissue damage during medical procedures such as SWL, and further research is being conducted to confirm the effectiveness of this potential therapy.
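The pressure analysis described in PUBMED:15693429 rests on two simple summary calculations: the coefficient of variation of repeated hydrophone readings and a one-tailed comparison of peak amplitudes. The short Python sketch below is illustrative only; the readings and the helper name coefficient_of_variation are assumptions for the example and are not data or code from the study:
import statistics

def coefficient_of_variation(samples):
    # CV = standard deviation / mean, used to summarise the spread of
    # repeated peak-pressure readings from the needle hydrophone.
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical peak readings (mV) from ten discharges at each voltage setting
readings_12kv = [58, 71, 60, 49, 77, 66, 55, 72, 63, 69]
readings_24kv = [148, 160, 139, 155, 162, 150, 146, 158, 151, 161]

for label, data in [("12 Kv", readings_12kv), ("24 Kv", readings_24kv)]:
    print(f"{label}: mean = {statistics.mean(data):.1f} mV, "
          f"CV = {coefficient_of_variation(data):.2f}")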
Instruction: Can postal prompts from general practitioners improve the uptake of breast screening? Abstracts: abstract_id: PUBMED:9575461 Can postal prompts from general practitioners improve the uptake of breast screening? A randomised controlled trial in one east London general practice. Objective: To determine the effect on the uptake of breast screening of a personalized letter from the general practitioner recommending mammography, sent to coincide with an invitation from the NHS breast screening programme. Design: Randomised control trial with stratification of prognostic variables. Setting: A group practice in Hackney, east London. Subjects: 473 women invited for breast screening by the City and East London Breast Screening Service. Outcome Measure: Attendance for mammography. Results: All women in the randomised trial were followed up; 134 of 236 (57%) randomly allocated to receive the prompting letter attended for mammography compared with 120 of 234 (51%) controls This difference was not significant (chi 2 = 1.43, p = 0.23) Conclusion: Personal recommendation by a letter prompting attendance for mammography from the general practitioner known best to women due to be screened did not improve uptake of breast screening in this east London practice. Other strategies are needed to increase uptake of mammography in inner cities. abstract_id: PUBMED:8536178 Do general practitioners influence the uptake of breast cancer screening? Objectives: To investigate the relative importance of patient and general practice characteristics in explaining variations between practices in the uptake of breast cancer screening. Design: Ecological study examining variations in breast cancer screening rates among 131 general practices using routine data. Setting: Merton, Sutton, and Wandsworth Family Health Services Authority, which covers parts of inner and outer London. Main Outcome Measure: Percentage of eligible women aged 50-64 who attended for mammography during the first round of screening for breast cancer (1991-1994). Results: Of the 43,063 women eligible for breast cancer screening, 25,826 (60%) attended for a mammogram. Breast cancer screening rates in individual practices varied from 12.5% to 84.5%. The estimated percentage list inflation for the practices was the variable most highly correlated with screening rates (r = -0.69). There were also strong negative correlations between screening rates and variables associated with social deprivation, such as the estimated percentage of the practice population living in households without a car (r = -0.61), and with variables that measured the ethnic make-up of practice populations, such as the estimated percentage of people in non-white ethnic groups (r = -0.60). Screening rates were significantly higher in practices with a computer than in those without (59.5% v 53.9%, difference 5.6%, 95% confidence interval 1.1 to 10.2%). There was no significant difference in screening rates between practices with and without a female partner; with and without a practice nurse; and with and without a practice manager. In a forward stepwise multiple regression model that explained 58% of the variation in breast cancer screening rates, four factors were significant independent predictors (at P = 0.05) of screening rates: list inflation and people living in households without a car were both negative predictors of screening rates, and chronic illness and the number of partners in a practice were both positive predictors of screening rates. 
The practice with the highest screening rate (84.5%) contacted all women invited for screening to encourage them to attend for their mammogram and achieved a rate 38% higher than predicted from the regression model. Breast cancer screening rates were on average lower than cervical cancer screening rates (mean difference 14.5%, standard deviation 12.0%) and were less strongly associated with practice characteristics. Conclusions: The strong negative correlation between breast cancer screening rates and list inflation shows the importance of accurate age-sex registers in achieving high breast cancer screening rates. Breast cancer screening units, family health services authorities, and general practitioners need to collaborate to improve the accuracy of the age-sex registers used to generate invitations for breast cancer screening. The success of the practice with the highest screening rate suggests that practices can influence the uptake of breast cancer screening among their patients. Giving general practitioners a greater role in breast cancer screening, either by offering them financial incentives or by giving them clerical support to check prior notification lists and contact nonattenders, may also help to increase breast cancer screening rates. abstract_id: PUBMED:7841259 Participation in breast cancer screening: randomised controlled trials of doctors' letters and of telephone reminders. The study used a randomised controlled trial to find out whether supporting letters from general practitioners accompanying the invitations from a screening centre affected participation in a population-based breast cancer screening program for women aged 50 to 64. A further randomised controlled trial compared the effect of postal reminders with telephone reminders for women who did not respond to an initial invitation to participate in the program. There were 482 women in the first trial and 641 in the second. Excluding women who were ineligible or could not be contacted, participation in screening was 71 per cent in the group which received letters from their general practitioners compared with 62 per cent in the group which did not receive letters (P = 0.059). In the group that received letters, 56 per cent were screened without a reminder compared with 43 per cent of the group that did not receive letters (P = 0.01). Fewer women who received letters from their general practitioners declined the invitation to be screened (P = 0.048). In the second trial, there was no difference in participation between the group receiving telephone reminders and the group receiving postal reminders. As in breast cancer screening programs in other countries, general practitioner endorsement of invitations increased participation in breast cancer screening. Postal reminders were as effective as telephone reminders in encouraging women who did not respond to an initial invitation to participate in screening. abstract_id: PUBMED:10356008 Effect of postal prompts to patients and general practitioners on the quality of primary care after a coronary event (POST): randomised controlled trial. Objectives: To determine whether postal prompts to patients who have survived an acute coronary event and to their general practitioners improve secondary prevention of coronary heart disease. Design: Randomised controlled trial. Setting: 52 general practices in east London, 44 of which had received facilitation of local guidelines for coronary heart disease. 
Participants: 328 patients admitted to hospital for myocardial infarction or unstable angina. Interventions: Postal prompts sent 2 weeks and 3 months after discharge from hospital. The prompts contained recommendations for lowering the risk of another coronary event, including changes to lifestyle, drug treatment, and making an appointment to discuss these issues with the general practitioner or practice nurse. Main Outcome Measures: Proportion of patients in whom serum cholesterol concentrations were measured; proportion of patients prescribed beta blockers (6 months after discharge); and proportion of patients prescribed cholesterol lowering drugs (1 year after discharge). Results: Prescribing of beta blockers (odds ratio 1.7, 95% confidence interval 0.8 to 3.0, P>0.05) and cholesterol lowering drugs (1.7, 0.8 to 3.4, P>0.05) did not differ between intervention and control groups. A higher proportion of patients in the intervention group (64%) than in the control group (38%) had their serum cholesterol concentrations measured (2.9, 1.5 to 5.5, P<0.001). Secondary outcomes were significantly improved for consultations for coronary heart disease, the recording of risk factors, and advice given. There were no significant differences in patients' self reported changes to lifestyle or to the belief that it is possible to modify the risk of another coronary event. Conclusions: Postal prompts to patients who had had acute coronary events and to their general practitioners in a locality where guidelines for coronary heart disease had been disseminated did not improve prescribing of effective drugs for secondary prevention or self reported changes to lifestyle. The prompts did increase consultation rates related to coronary heart disease and the recording of risk factors in the practices. Effective secondary prevention of coronary heart disease requires more than postal prompts and the dissemination of guidelines. abstract_id: PUBMED:21770226 General practitioners and colorectal cancer screening: experience in the Trentino region The aims of this longitudinal study are to investigate general practitioners' opinions and knowledge about colorectal cancer screening in the Trentino region, to identify their role and level of participation within the screening program and to find out their training needs. 174 general practitioners answered the postal self-completed questionnaire: 82% of them asserted that their main role in colorectal screening is patient counselling, but many physicians also expressed willingness to collaborate with the Centre for Health Services of Trento in organizing patient recruitment lists and in following up patients who did not accept the screening invitation. 78% think the Health Services of Trento should allocate incentives, especially direct financial incentives, to promote physicians' participation in the screening program. Moreover, 68% need a basic training course about the screening programme. Female general practitioners are more prepared to collaborate in organizing patient recruitment lists and in handing over the kit for the fecal occult blood test than their male colleagues. Men, instead, prefer to take an active role in counselling and are more interested in economic incentives. The study found considerable general practitioner support for the introduction of the new screening programme.
Defining shared counselling procedures and meeting general practitioners' requests for training courses appear to be important; the Centre for Health Services of Trento has already delivered such courses in the form of seminars addressed to all health workers. abstract_id: PUBMED:8068375 Telephone versus postal surveys of general practitioners: methodological considerations. Background: High response rates to surveys help to maintain the representativeness of the sample. Aim: In the course of a wider investigation into counselling services within general practice it was decided to assess the feasibility of increasing the response rate by telephone follow up of non-respondents to a postal survey. Method: A postal survey was undertaken of a random sample of 1732 general practitioners followed by telephone administration of the questionnaire to non-respondents. The identical questionnaire was administered by telephone to a separate random sample of 206 general practitioners. Results: Of 1732 general practitioners first approached by mail, 1683 were still in post of whom 881 (52%) completed the postal questionnaire and a further 494 (29%) the telephone interview. Of 206 general practitioners first contacted by telephone, 197 were still in post of whom 167 (85%) completed interviews. Compared with doctors first approached by mail, those first approached by telephone were significantly more likely to report having a partner with a special interest in psychiatry (P < 0.01); and a general practitioner, practice nurse or health visitor who worked as a counsellor (P < 0.01 in each case). A comparison of doctors first approached by telephone with those who completed telephone interviews after failing to respond to the postal questionnaire showed that postal non-respondents were significantly less likely to report having a general practitioner, practice nurse, health visitor or community psychiatric nurse who worked as a counsellor (P < 0.01 in each case). Conclusion: These findings suggest that non-response to the postal survey was associated with lack of activity in the study area. Telephone administration of questionnaires to postal non-respondents increased response rates to above 80% but, as telephone administration enhanced the reporting of counsellors, a social desirability bias may have been introduced. abstract_id: PUBMED:22306007 E-mail invitations to general practitioners were as effective as postal invitations and were more efficient. Objective: To evaluate which of two invitation methods, e-mail or post, was most effective at recruiting general practitioners (GPs) to an online trial. Study Design And Setting: Randomized controlled trial. Participants were GPs in Scotland, United Kingdom. Results: Two hundred and seventy GPs were recruited. Using e-mail did not improve recruitment (risk difference=0.7% [95% confidence interval -2.7% to 4.1%]). E-mail was, however, simpler to use and cheaper, costing £3.20 per recruit compared with £15.69 for postal invitations. Reminders increased recruitment by around 4% for each reminder sent for both invitation methods. Conclusions: In the Scottish context, inviting GPs to take part in an online trial by e-mail does not adversely affect recruitment and is logistically easier and cheaper than using postal invitations. abstract_id: PUBMED:27800098 Awareness of breast cancer screening among general practitioners in Mohammedia (Morocco) Introduction: Breast cancer is a major public health problem in Morocco.
It is the most common cancer in women. Our study aims to evaluate the extent of breast cancer awareness among general practitioners (GP) in the prefecture of Mohammedia, Morocco. Methods: We conducted a cross-sectional, descriptive, exhaustive study including 97 GP working in primary health care facilities (public and private sector) of the province of Mohammedia. Results: Participation rate was 87%. The average age of GP was 49.6 ± 8.1. Eighty percent (n = 55) of the GP misstated the incidence of breast cancer, 77.6% (n = 85) recognized the existence of a national plan to prevent and control cancer (NPPCC) in Morocco and 67.1% of GP reported the existence of a cancer registry in Morocco. General practice sector was significantly related to the awareness of NPPCC among GP and to the existence of guidelines for the early detection of breast cancer (p = 0.003 and p = 0.001 respectively). A significant relationship was found between seniority and the existence of guidelines for the early detection of breast cancer and a breast cancer registry (p = 0.005 and p = 0.002 respectively). Conclusion: In light of these results GP awareness and practices should be enhanced by promoting initial and continuing training on breast cancer screening. abstract_id: PUBMED:22708828 The association between general practitioners' attitudes towards breast cancer screening and women's screening participation. Background: Breast cancer screening in Denmark is organised by the health services in the five regions. Although general practitioners (GPs) are not directly involved in the screening process, they are often the first point of contact to the health care system and thus play an important advisory role. No previous studies, in a health care setting like the Danish system, have investigated the association between GPs' attitudes towards breast cancer screening and women's participation in the screening programme. Methods: Data on women's screening participation was obtained from the regional screening authorities. Data on GPs' attitudes towards breast cancer screening was taken from a previous survey among GPs in the Central Denmark Region. This study included women aged 50-69 years who were registered with a singlehanded GP who had participated in the survey. Results: The survey involved 67 singlehanded GPs with a total of 13,288 women on their lists. Five GPs (7%) had a negative attitude towards breast cancer screening. Among registered women, 81% participated in the first screening round. Multivariate analyses revealed that women registered with a GP with a negative attitude towards breast cancer screening were 17% (95% CI: 2-34%) more likely to be non-participants compared with women registered with a GP with a positive attitude towards breast cancer screening. Conclusion: The GPs' attitudes may influence the participation rate even in a system where GPs are not directly involved in the screening process. However, further studies are needed to investigate this association. abstract_id: PUBMED:36438208 Cervical cancer screening uptake: A randomized controlled trial assessing the effect of sending invitation letters to non-adherent women combined with sending their general practitioners a list of their non-adherent patients (study protocol). Introduction: Cervical cancer (CC) is the fourth most common cancer among women. It can be cured if diagnosed at an early stage and treated promptly. The World Health Organization suggests that 70% of women should be screened with a high-performance test by the age of 35. 
This paper reports a protocol to assess the effect of two modalities of organized CC screening programmes on CC screening uptake. Methods And Analysis: Design and setting: The design involves a 3-arm randomized controlled study performed in a French geographic area on the west coast. A total of 1,395 general practitioners will be randomized, depending on their general practice surgeries. Participants: The design is based on a total of 94,393 women aged 40 to 65 years who are eligible for CC screening. Intervention: In the "optimized cancer screening" group, the intervention will combine sending invitation letters to non-adherent women with sending general practitioners (GPs) a list of their non-adherent patients. In the "standard cancer screening" group, the intervention will be limited to sending invitation letters to non-adherent women. In the "usual care" group, no letter will be sent either to women or to their GPs. Primary endpoint: CC screening test uptake will be assessed after a 6-month follow-up period. Statistical analysis: The percentage of women who are up-to-date with their screening at 6 months after the intervention will be compared across arms using a generalized mixed linear model. Discussion: A large-scale randomized trial of this nature is unprecedented. The study will enable us to assess a strategy relying on GPs, identified as the coordinators in this screening strategy. The study results should help policy makers to implement organized CC screening programs in the future. Ethics And Dissemination: The study was approved was approved by the Ethics Committee of the National College of Teaching General practitioners (IRB00010804). It was recorded in ClinicalTrials.gov on the number NCT04689178 (28 December 2020). The study findings will be used for publication in peer-reviewed scientific journals and presentations in scientific meetings. Answer: The effectiveness of postal prompts from general practitioners (GPs) in improving the uptake of breast screening appears to be mixed based on the provided abstracts. One randomized controlled trial conducted in an east London general practice found that a personalized letter from the GP recommending mammography, sent to coincide with an invitation from the NHS breast screening programme, did not significantly improve uptake of breast screening (57% with prompting letter vs. 51% controls) (PUBMED:9575461). Another study suggested that general practice characteristics, such as accurate age-sex registers and the presence of a computer, were associated with higher breast cancer screening rates. The study also noted that one practice with a high screening rate actively contacted women to encourage attendance, suggesting that practices can influence uptake (PUBMED:8536178). A separate randomized controlled trial found that supporting letters from GPs accompanying screening invitations did increase participation in breast cancer screening (71% with GP letters vs. 62% without), and fewer women declined the invitation when they received GP letters (PUBMED:7841259). However, another study indicated that postal prompts to patients and GPs did not improve prescribing of effective drugs for secondary prevention or self-reported changes to lifestyle after a coronary event, although they did increase consultation rates related to coronary heart disease and the recording of risk factors in practices (PUBMED:10356008). 
In summary, while some studies suggest that GP involvement through postal prompts can have a positive impact on breast screening uptake, the evidence is not uniformly supportive, and other factors such as practice characteristics and direct contact with patients may also play significant roles.
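To make the headline comparison in PUBMED:9575461 concrete, the attendance figures it reports (134 of 236 prompted women versus 120 of 234 controls) can be checked with a standard Pearson chi-square test on the 2x2 table. The Python sketch below assumes the published test was run without a continuity correction; under that assumption it reproduces the reported chi-square of about 1.43 and p of about 0.23:
from scipy.stats import chi2_contingency

# Attendance for mammography: prompted-letter group vs controls (PUBMED:9575461)
table = [[134, 236 - 134],   # attended, did not attend (letter group)
         [120, 234 - 120]]   # attended, did not attend (controls)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, p = {p:.2f}")  # approximately 1.43 and 0.23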
Instruction: Is poor sleep quality at high altitude separate from acute mountain sickness? Abstracts: abstract_id: PUBMED:28063399 Sleep quality among elderly high-altitude dwellers in Ladakh. It has been already known that people who temporarily stay at high altitude may develop insomnia as a symptom of acute mountain sickness. However, much less is known about people living at high altitude. The aim of this study was to determine the effect of high altitude environment on sleep quality for the elderly who have been living at high altitude for their whole lives. A cross-sectional study was conducted in Domkhar valley at altitudes of 2800-4200m, Ladakh. Sleep quality was assessed using Insomnia Severity Index (ISI). Measurement items include body mass index, blood pressure, blood sugar, hemoglobin, timed Up and Go test, oxygen saturation during wakefulness, respiratory function test, Oxford Knee Score (OKS), and Geriatric Depression Scale (GDS), and so on. The participants were Ladakhi older adults aged 60 years or over (n=112) in Domkhar valley. The participation rate was 65.1% (male: female=47:65, mean age: 71.3 years and 67.9 years, respectively). The prevalence of the high score of ISI (8 or more) was 15.2% (17 out of 112). Altitude of residence was significantly correlated with ISI. Stepwise multiple regression analysis showed that OKS and altitude of residence were significantly related with ISI. abstract_id: PUBMED:29172727 Objective Versus Self-Reported Sleep Quality at High Altitude. Anderson, Paul J., Christina M. Wood-Wentz, Kent R. Bailey, and Bruce D. Johnson. Objective versus self-reported sleep quality at high altitude. High Alt Med Biol. 24:144-148, 2023. Background: Previous studies have found little relationship between polysomnography and a diagnosis of acute mountain sickness (AMS) using the Lake Louise Symptom Questionnaire (LLSQ). The correlation between sleep question responses on the LLSQ and polysomnography results has not been explored. We compared LLSQ sleep responses and polysomnography data from our previous study of workers rapidly transported to the South Pole. Methods: Sixty-three subjects completed a 3-hour flight from sea level to the South Pole (3200 m, 9800 ft). Participants completed limited overnight polysomnography on their first night and completed LLSQ upon awakening. We compared polysomnography results at the South Pole with sleep question responses on the LLSQ to assess their degree of correspondence. Results: Twenty-two (30%) individuals reported no sleep problems whereas 20 (32%) reported some problems and 20 (33%) individuals reported poor sleep and 1 reported no sleep (n = 1). Median sleep efficiency was (94%) among response groups and mean overnight oxygen saturation was 81%. Median apnea hypopnea index (AHI; events/hour) was 10.2 in those who reported no problems sleeping, 5.1 in those reporting some problems sleeping, and 13.7 in those who reported poor sleep. These differences were not statistically significant. Conclusion: Self-reported sleep quality varied but there were no associated significant differences in sleep efficiency, overnight oxygen saturation, nor AHI. Studies that explore the role of objective sleep quality in the development of AMS should remove the sleep question on the LLSQ from AMS scoring algorithms. abstract_id: PUBMED:15265339 Sleep at high altitude. New arrivals to altitude commonly experience poor-quality sleep. 
These complaints are associated with increased fragmentation of sleep by frequent brief arousals, which are in turn linked to periodic breathing. Changes in sleep architecture include a shift toward lighter sleep stages, with marked decrements in slow-wave sleep and with variable decreases in rapid eye movement (REM) sleep. Respiratory periodicity at altitude reflects alternating respiratory stimulation by hypoxia and subsequent inhibition by hyperventilation-induced hypocapnia. Increased hypoxic ventilatory responsiveness and loss of regularization of breathing during sleep contribute to the occurrence of periodicity. Interventions that improve sleep quality at high altitude include acetazolamide and benzodiazepines. abstract_id: PUBMED:37068619 Remote ischemic preconditioning improves spatial memory and sleep of young males during acute high-altitude exposure. Objective: The high-altitude hypoxia environment will cause poor acclimatization in a portion of the population. Remote ischemic preconditioning(RIPC)has been demonstrated to prevent cardiovascular and cerebrovascular diseases under ischemic or hypoxic conditions. However, its role in improving acclimatization and preventing acute mountain sickness (AMS) at high altitude has been undetermined. This study aims to estimate the effect of RIPC on acclimatization of individuals exposed to high altitude. Methods: The project was designed as a randomized controlled trial with 82 healthy young males, who received RIPC training once a day for 7 consecutive days. Then they were transported by aircraft to a high altitude (3680 m) and examined for 6 days. Lake Louise Score(LLS) of AMS, physiological index, self-reported sleep pattern, and Pittsburgh Sleep Quality Index(PSQI)score were applied to assess the acclimatization to the high altitude. Five neurobehavioral tests were conducted to assess cognitive function. Results: The result showed that the RIPC group had a significantly lower AMSscore than the control group (2.43 ± 1.58 vs 3.29 ± 2.03, respectively; adjusted mean difference-0.84, 95% confidence interval-1.61 to -0.06, P = 0.036). and there was no significant difference in AMS incidence between the two groups (25.0% vs 28.57%, P = 0.555). The RIPC group performed better than the control group in spatial memory span score (11[9-12] vs 10[7.5-11], P=0.025) and the passing digit (7[6-7.5] vs 6[5-7], P= 0.001). Spatial memory was significantly higher in the high-altitude RIPC group than in the low-altitude RIPC group (P<0.01). And the RIPC group obtained significantly lower self-reported sleep quality score (P = 0.024) and PSQI score (P = 0.031). Conclusions: The RIPC treatment improved spatial memory and sleep quality in subjects exposed to acute hypoxic exposure and this may lead to improved performance at high altitude. abstract_id: PUBMED:25722875 Correlation between blood pressure changes and AMS, sleeping quality and exercise upon high-altitude exposure in young Chinese men. Background: Excessive elevation of arterial blood pressure (BP) at high altitude can be detrimental to our health due to acute mountain sickness (AMS) or some AMS symptoms. This prospective and observational study aimed to elucidate blood pressure changes induced by exposure to high-altitude hypoxia and the relationships of these changes with AMS prevalence, AMS severity, sleep quality and exercise condition in healthy young men. 
Methods: A prospective observational study was performed in 931 male young adults exposed to high altitude at 3,700 m (Lhasa) from low altitude (LA, 500 m). Blood pressure measurements and AMS symptom questionnaires were performed at LA and on day 1, 3, 5, and 7 of exposure to high altitude. Lake Louise criteria were used to diagnose AMS. Likewise, the Athens Insomnia Scale (AIS) and the Epworth Sleepiness Scale (ESS) were filled out at LA and on day 1, 3, and 7 of exposure to high altitude. Results: After acute exposure to 3,700 m, diastolic blood pressure (DBP) and mean arterial blood pressure (MABP) rose gradually and continually (P < 0.05). Analysis showed a relationship with AMS for only MABP (P < 0.05) but not for SBP and DBP (P > 0.05). Poor sleeping quality was generally associated with higher SBP or DBP at high altitude, although inconsistent results were obtained at different times (P < 0.05). SBP and Pulse BP increased noticeably after high-altitude exercise (P < 0.05). Conclusions: Our data demonstrate notable blood pressure changes under exposure to different high-altitude conditions: 1) BP increased over time. 2) Higher BP generally accompanied poor sleeping quality and higher incidence of AMS. 3) SBP and Pulse BP were higher after high-altitude exercise. Therefore, we should put more effort into monitoring BP after exposure to high altitude in order to guard against excessive increases in BP. abstract_id: PUBMED:31352013 The effect of sleep quality in Sherpani Col High Camp Everest. Recently, an increasing number of travelers deciding to experience hiking to the highest summit worldwide has been noted. However, high altitude environments have adverse effects on the normal bodily function of individuals accustomed to living at low altitudes. The purpose of this study was to record sleep quality and physiological responses of 8 climbers during a 7-day stay at Sherpani Col High Camp Everest at an altitude of 5700 m. Eight experienced climbers (Age: 48 ± 9.2 yrs, Height: 176.3 ± 7.1 cm, Body mass: 76.9 ± 11.7 kg, weekly exercise >80% HRmax > 270 min-1) participated in the study. The climbers recorded their sleep quality daily and one hour after waking up via a questionnaire (Groningen Sleep Quality Scale, GSQS), levels of perceived exertion (Borg CR10 Scale), heart rate (HR, bpm-1) and oxygen saturation in blood (SpO2, %) using the pulse oximeter Nonin Onyx Vantage 9590 (USA). Climbers also filled out questionnaires regarding how sleepy they felt (Epworth Sleepiness Score, ESS) 12 h post waking-up. Repeated measures ANOVA was used in order to examine possible variations between variables. Results showed statistically significant differences in the HR and SpO2 parameters (HR: 86.5 ± 5.2 bpm-1, p < 0.05; SpO2: 85.3 ± 2.4%, p < 0.05). The subjective evaluation of GSQS, ESS and perceived exertion using a Borg CR10 Scale may be affected by the extreme hypoxic environment and the daily hike-climb which results in low blood oxygen saturation. abstract_id: PUBMED:17869574 High-altitude sleep disturbance: results of the Groningen Sleep Quality Questionnaire survey. Objective: To assess the Groningen Sleep Quality Scale (GSQS) for evaluation of high-altitude sleep (HAS) disturbance and to employ the GSQS questionnaire to describe HAS.
Methods: After the first night's stay at the altitude of 3500 m, quality of sleep for 100 participants (age: 29.13 ± 11.01 years; 36 females/64 males) was assessed using the self-administered 15-item GSQS translated into Farsi. Results: Mean GSQS score was 5.36 ± 4.32; 38 (38%) participants had a score equal to or less than 2, and 46 (46%) participants had a score equal to or more than 6. A Cronbach's alpha of 0.90 was calculated for internal consistency. Waking up several times during the night was the most prevalent complaint during the first night of sleep, and absolute inability to sleep was the most uncommon problem. Conclusions: HAS disturbance, which involved many newcomers to high altitude, had various harmful effects. For HAS research, GSQS was confirmed to be valid and reliable. abstract_id: PUBMED:32449502 Interplay between rotational work shift and high altitude-related chronic intermittent hypobaric hypoxia on cardiovascular health and sleep quality in Chilean miners. Mining activities expose workers to diverse working conditions, rotational shifts and high altitude-related hypobaric hypoxia. Separately, each condition has been reported to have a negative impact on miners' health risk; however, the combination of both stressors has been poorly explored. The present study aimed to analyse the effects of exposure to rotational work shift (RWS) alone or in combination with high altitude-related chronic intermittent hypobaric hypoxia (CIHH) on cardiometabolic, physical activity and sleep quality related markers in copper miners from Los Pelambres mine in Chile. One hundred and eleven male miners working in RWS with or without CIHH were included. Anthropometric measures, sleep quality assessment, physical activity level (PAL) and handgrip strength were evaluated. Exposure to CIHH exacerbated the detrimental effects of RWS as miners exposed to the combination of RWS and CIHH were more obese and had a wider neck circumference, reduced PAL at work and worsened sleep quality. Practitioner summary: The purpose was to assess cardiometabolic health and sleep quality markers associated with the combined effects of rotational shift work and high altitude-related intermittent hypobaric hypoxia in miners. Findings showed a wider neck circumference, lower physical activity level and higher prevalence of poor sleep quality in exposed miners. Abbreviations: ANOVA: analysis of variance; BM: body mass; BMI: body mass index; CI: confidence intervals; CIHH: chronic intermittent hypobaric hypoxia; CV: cardiovascular; CVR: cardiovascular risk; HA: high altitude; HACE: high-altitude cerebral edema; HGS: handgrip strength; IPAQ-SF: International Physical Activity Questionnaire - Short Form; LSD: Fisher's least significant difference; MANCOVA: multivariate general linear model; MET: metabolic equivalent; PAL: physical activity level; PSQI: Pittsburgh sleep quality index; RWS: rotational work shift; WHR: waist-to-hip ratio. abstract_id: PUBMED:33991443 Improvements in sleep-disordered breathing during acclimatization to 3800 m and the impact on cognitive function. Sojourners to high altitude often experience poor sleep quality due to sleep-disordered breathing. Additionally, multiple aspects of cognitive function are impaired at high altitude. However, the impact of acclimatization on sleep-disordered breathing and whether poor sleep is a major contributor to cognitive impairments at high altitude remains uncertain.
We conducted nocturnal actigraphy and polygraphy, as well as daytime cognitive function tests, in 15 participants (33% women) at sea level and over 3 days of partial acclimatization to high altitude (3800 m). Our goal was to determine if sleep-disordered breathing improved over time and if sleep-disordered breathing was associated with cognitive function. The apnea-hypopnea index and oxygen desaturation index increased on night 1 (adj. p = 0.026 and adj. p = 0.026, respectively), but both improved over the subsequent 2 nights. These measures were matched by poorer self-reported sleep quality on the Stanford Sleepiness Scale and PROMIS questionnaires following 1 night at high altitude (adj. p = 0.027 and adj. p = 0.022, respectively). The reaction time on the psychomotor vigilance task was slower at high altitude and did not improve (SL: 199 ± 27, ALT1: 224 ± 33, ALT2: 216 ± 41, ALT3: 212 ± 27 ms). The reaction times on the balloon analog risk task decreased at high altitude (SL: 474 ± 235, ALT1: 375 ± 159, ALT2: 291 ± 102, ALT3: 267 ± 90 ms), perhaps indicating increased risk-taking behavior. Finally, multiple cognitive function measures were associated with sleep-disordered breathing and measures of subjective sleep quality, rather than low daytime arterial oxygen saturation. These data indicate that sleep-disordered breathing at moderately high altitude improves with partial acclimatization and that some aspects of cognitive performance in unacclimatized sojourners may be impacted by poor sleep rather than hypoxemia alone. abstract_id: PUBMED:2935358 Sleep at high altitude. The sensation of disrupted sleep following ascent to high altitude is associated with frequent awakenings, which may reflect sleep disruption due to respiratory dysrhythmia consisting typically of monotonously repetitive periodic breathing. This seems to arise from the combined effects of hypocapnia, which leads to suppression of respiratory effort in NREM sleep, and hypoxia, which stimulates termination of apnea and hyperpnea with consequent hypocapnia, leading to perpetuation of periodicity. Sleep disruption and periodic breathing decrease with time at altitude but may also be considerably reduced by pretreatment with acetazolamide, which may act by correction of alkalosis or through some other mechanism. In long-term residents of high altitude less distinctive, undulating respiratory dysrhythmias are described with unstable and decreased arterial oxygenation. Answer: Yes, poor sleep quality at high altitude can be separate from acute mountain sickness (AMS). Studies have shown that new arrivals to high altitude commonly experience poor-quality sleep, which is associated with increased fragmentation of sleep by frequent brief arousals and a shift toward lighter sleep stages, with decrements in slow-wave sleep and variable decreases in rapid eye movement (REM) sleep (PUBMED:15265339). These sleep disturbances are linked to periodic breathing caused by the alternating respiratory stimulation by hypoxia and subsequent inhibition by hyperventilation-induced hypocapnia (PUBMED:15265339). While poor sleep quality is a common symptom of AMS, it can also occur independently of AMS. For instance, a study comparing self-reported sleep quality and polysomnography data found that self-reported sleep quality varied among individuals at high altitude, but there were no associated significant differences in sleep efficiency, overnight oxygen saturation, nor apnea-hypopnea index (AHI) (PUBMED:29172727). 
This suggests that poor sleep quality at high altitude does not necessarily correlate with objective measures of sleep disturbances that are often associated with AMS. Additionally, interventions such as remote ischemic preconditioning (RIPC) have been shown to improve spatial memory and sleep quality in subjects exposed to acute hypoxic exposure, which may lead to improved performance at high altitude, indicating that there are methods to improve sleep quality at high altitude that are not directly related to AMS treatment (PUBMED:37068619). Furthermore, high-altitude dwellers, such as the elderly in Ladakh, have been studied for the effect of high altitude on their sleep quality, and factors such as the altitude of residence and knee function were significantly related to sleep quality, as measured by the Insomnia Severity Index (ISI) (PUBMED:28063399). This indicates that chronic exposure to high altitude can affect sleep quality independently of AMS. In summary, while poor sleep quality and AMS can co-occur at high altitude, they can also exist independently, and poor sleep quality can be a distinct issue that may require separate assessment and management from AMS.
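Two of the objective sleep measures cited above, sleep efficiency and the apnea-hypopnea index (AHI), are simple ratios. The Python sketch below shows the standard definitions; the function names and the single-night numbers are hypothetical and only illustrate how the metrics reported in PUBMED:29172727 are derived:
def apnea_hypopnea_index(apneas, hypopneas, total_sleep_hours):
    # AHI = (apneas + hypopneas) per hour of sleep
    return (apneas + hypopneas) / total_sleep_hours

def sleep_efficiency(total_sleep_minutes, time_in_bed_minutes):
    # Sleep efficiency = time asleep / time in bed, expressed as a percentage
    return 100.0 * total_sleep_minutes / time_in_bed_minutes

# Hypothetical single-night polysomnography summary
print(f"AHI = {apnea_hypopnea_index(22, 41, 6.5):.1f} events/hour")
print(f"Sleep efficiency = {sleep_efficiency(390, 460):.0f}%")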
Instruction: Does the presence of a pharmacological substance alter the placebo effect? Abstracts: abstract_id: PUBMED:19697301 Does the presence of a pharmacological substance alter the placebo effect?--results of two experimental studies using the placebo-caffeine paradigm. Objectives: We employed the placebo-caffeine paradigm to test whether the presence or absence of a substance (caffeine) influences the placebo effect. Methods: In experiment 1 consisting of four conditions with n = 15 participants each (control, placebo, two double-blind groups, each with placebo only), we maximized the placebo effect through expectation. Effects were assessed with physiological (blood pressure, heart rate), psychomotor (response times), and well-being indicators (self-report). In experiment 2, caffeine was administered in one of the double-blind groups, and another condition was added where caffeine was given openly. Results: Effect sizes were medium to large for some outcome parameters in experiment 1 and 2, showing partial replicability of the classical placebo effect. Although not formally significant, differences between the double blind placebo conditions of the two experiments (with and without caffeine present) were medium to small. There was a significant difference (p = 0.03) between experiment 1 and experiment 2 in the physiological variables, and a near significant interaction effect between groups and experiments in the physiological variables (p = 0.06). Conclusion: The question warrants further scrutiny. The presence of a pharmacological substance might change the magnitude of the placebo response. abstract_id: PUBMED:16292233 Placebo and placebo effect The word placebo appeared for the first time in an English medical dictionary in 1785. In French, it appeared much latter in 1958. This word defines an experimental tool used for rigourous evaluation of a specific effect of pharmacological treatment and the non specific effect of any therapy. The placebo effect is the strictly psychological or psychophysiological effect of a placebo. The two principal components of placebo effect as a pain killer, which has been extensively studied in this field, are positive expectancies of both the patient and the physician. Although the mechanisms of action of placebo effect are not well understood, results of several recent works are particularly interesting. abstract_id: PUBMED:28300816 The practical use of placebo effect in psychotherapeutic treatment of patients with substance use disorders: therapeutic and ethic consequences The article discusses therapeutic potential of placebo and nocebo effects in treatment of substance use disorders. The authors review the background of the issue, describe neurobiological and psychological mechanisms of placebo effects and demonstrate their impact on psychotherapy of patients with substance use disorders. Attention is drawn to the clinical and ethical issues of practical use of placebo effects including that in terms of placebo-therapy, indirect suggestion psychotherapy, motivational interventions and cognitive-behavioral psychotherapy, psychotherapy with the use of disulfiram, psychopharmacotherapy with opioid antagonists. The authors conclude that the ethical use of placebo-effects in treatment of substance use disorders may improve its overall efficiency. abstract_id: PUBMED:17323230 Placebo, placebo effect and clinical trials Recent studies have begun to unveil some of the biochemical bases of the placebo effect. 
Thus, while placebo analgesia is related to the release of endogenous opioids, placebo-induced dopamine release leads to motor improvement in Parkinson's disease. A theory proposes that the placebo effect is mediated by the activation of the reward circuitry. These biochemical findings indicate that the placebo effect is real, and suggest that many ethical arguments and controversies regarding the use of placebos should perhaps be reconsidered. While it may be advisable to minimize the placebo effect in clinical trials in order to estimate the pure effect of the active treatment, acting in the patient's best interest may require maximizing the placebo effect in the usual clinical setting. abstract_id: PUBMED:10379195 The placebo effect: classes of explanation The placebo effect is a frequent phenomenon in medicine, but very little is known about its mechanisms. An overview is given of the different classes of explanation of the placebo effect in analgesia and in particular the role of endogenous opioids, classical conditioning and expectations. Then the question is raised which are the properties of placebo for which a theory has to provide answers in order to be coherent. These properties are, between others, the efficacy of placebo in a variety of conditions, in individuals with different personality characteristics, etc. Finally, the difficulty of observing individual placebo is emphasized and problems concerning the diagnostic and therapeutic use of placebo are mentioned. abstract_id: PUBMED:34296741 Placebo effect in pharmacological management of fibromyalgia: a meta-analysis. Introduction: The management of fibromyalgia involves a combination of pharmacological and non-pharmacological treatments. Source Of Data: Recently published literature in PubMed, Google Scholar and Embase databases. Areas Of Agreement: Several pharmacological and non-pharmacological strategies have been proposed for the management of fibromyalgia. However, the management of fibromyalgia remains controversial. The administration of placebo has proved to be more effective than no treatment in many clinical settings and evidence supports the 'therapeutic' effects of placebo on a wide range of symptoms. Areas Of Controversy: The placebo effect is believed to impact the clinical outcomes, but its actual magnitude is controversial. Growing Points: A meta-analysis comparing pharmacological management versus placebo administration for fibromyalgia was conducted. Areas Timely For Developing Research: Drug treatment resulted to be more effective than placebo administration for the management of fibromyalgia. Nevertheless, placebo showed a beneficial effect in patients with fibromyalgia. Treatment-related adverse events occurred more frequently in the drug treatment. Level Of Evidence: I, Bayesian network meta-analysis of double-blind randomized clinical trials. abstract_id: PUBMED:10707743 Roaming through methodology. XVII. The placebo effect The placebo effect is well known, but there are many misconceptions. One of these misconceptions is that one-third of patients respond to placebos. This misunderstanding is probably due to methodologically poor research conducted in the 1950s. Another error is that the effect in the placebo arm of a clinical trial is often confused with the placebo effect. The belief in the placebo effect is enormous, but the quantity and quality of data to substantiate this belief are very limited. 
Investigating the placebo effect is methodologically difficult, not easy to get financed and considered unrewarding. abstract_id: PUBMED:28521961 Placebo effect and therapeutic context: A challenge in clinical research. When we apply a physical or pharmacological treatment, there are many things that may explain the clinical improvement experienced by a patient. The drugs or physical agents applied are important, but we must also add other elements in the context of the patient-therapist relationship. Scientific evidence has proven that the placebo effect exists. This is a true biopsychosocial phenomenon produced by the context in which an intervention is carried out. Biases aside, placebo and nocebo responses are changes in patients' symptoms, due to their participation at the therapeutic meeting, with its rituals, symbols and interactions. This multitude of signals inherent in any intervention, is perceived and interpreted by patients and can create positive or negative expectations. abstract_id: PUBMED:25938400 Expectation, the placebo effect and the response to treatment. What we believe we will experience from a treatment--our expectation--has a substantial impact on what we actually experience. Expectation has been established as a key process behind the placebo effect. Studies in both laboratory and clinical settings consistently show that when people ingest a pharmacologically inert substance (placebo) but believe that it is an active substance, they experience both the subjective sensations and physiologic effects expected from that active substance. Expectation has an important place in the response to "real" treatment as well. This paper provides an overview of the data which point to the role of expectation in both the placebo effect and the response to treatment. These data suggest that clinicians might enhance the benefit of all treatments by promoting patients' positive expectations. abstract_id: PUBMED:22432096 Scope and limitations of placebo effect The placebo effect is difficult to measure, since any favorable response to placebo may be related to other factors, such as spontaneous remission. There are complementary theories to explain it, such as conditioning and expectancy. In addition, the placebo effect induces neurobiological changes in the brain. However, the most important element is the interaction between doctor and patient. Answer: The presence of a pharmacological substance may indeed alter the placebo effect. In an experimental study using the placebo-caffeine paradigm, it was found that although not formally significant, there were medium to small differences between the double-blind placebo conditions with and without caffeine present. There was also a significant difference in physiological variables between the two experiments and a near significant interaction effect between groups and experiments in the physiological variables, suggesting that the presence of a pharmacological substance might change the magnitude of the placebo response (PUBMED:19697301). While the mechanisms of action of the placebo effect are not well understood, it is recognized that the placebo effect is a strictly psychological or psychophysiological effect and involves components such as positive expectancies of both the patient and the physician (PUBMED:16292233). The placebo effect can also induce neurobiological changes in the brain, and the interaction between doctor and patient is considered a crucial element (PUBMED:22432096). 
Recent studies have begun to unveil some of the biochemical bases of the placebo effect, such as placebo analgesia being related to the release of endogenous opioids and placebo-induced dopamine release leading to motor improvement in Parkinson's disease (PUBMED:17323230). These findings indicate that the placebo effect is real and suggest that the presence of a pharmacological substance could potentially interact with these biochemical pathways, thereby altering the placebo effect. In conclusion, while the exact impact of a pharmacological substance on the placebo effect requires further scrutiny, existing research suggests that it may indeed influence the magnitude or nature of the placebo response.
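The placebo-caffeine abstracts describe their results in terms of "medium to large" effect sizes. As a purely illustrative sketch (the group values below are invented, not taken from PUBMED:19697301), Cohen's d with a pooled standard deviation is the usual way such a standardized difference is computed, with roughly 0.5 conventionally read as medium and 0.8 as large:
import statistics

def cohens_d(group_a, group_b):
    # Standardized mean difference with a pooled standard deviation
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Invented change scores (e.g. reaction-time change in ms) for illustration only
placebo_group = [-18, -25, -10, -30, -22, -15, -27, -20, -12, -24]
control_group = [-5, -8, 2, -10, -6, 0, -9, -4, -7, -3]
print(f"Cohen's d = {cohens_d(placebo_group, control_group):.2f}")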
Instruction: Can computer aided teaching packages improve clinical care in patients with acute abdominal pain? Abstracts: abstract_id: PUBMED:1855017 Can computer aided teaching packages improve clinical care in patients with acute abdominal pain? Objective: To compare three methods of support for inexperienced staff in their diagnosis and management of patients with acute abdominal pain--namely, with (a) structured data collection forms, (b) real time computer aided decision support, and (c) computer based teaching packages. Design: Prospective assessment of effects of methods of support on groups of doctors in one urban hospital and one rural hospital. Setting: Accident and emergency department at Whipps Cross Hospital, London, and surgical wards of Airedale General Hospital, West Yorkshire. Patients: Consecutive prospective series of all patients presenting to each hospital in specified time periods with acute abdominal pain; total patients in the various periods were 12,506. Main Outcome Measures: Diagnostic accuracy of participating doctors, admission rates of patients with non-specific abdominal pain, perforation rates in patients with appendicitis, negative laparotomy rates. Results: Use of any one modality resulted in improved diagnostic accuracy and decision making performance. Use of structured forms plus computer feedback resulted in better performance than use of forms alone. Use of structured forms plus a computer teaching package gave results at least as good as those with direct feedback by computer. Conclusions: The results confirm earlier studies in suggesting that the use of computer aided decision support improves diagnostic and decision making performance when dealing with patients suffering from acute abdominal pain. That use of the computer for teaching gave results at least as good as with its use for direct feedback may be highly relevant for those who are apprehensive about the real time use of diagnostic computers in a clinical setting. abstract_id: PUBMED:6371951 Computer-aided diagnosis of acute abdominal pain. The British experience. This presentation reviews the U.K. experience of computer-aided diagnosis of acute abdominal pain--which now relates to over 30,000 cases seen in more than 10 hospitals during a 13 years period. Following a discussion of the philosophy, construction and mode of usage of the systems employed, results of this experience are presented. Computer-aided diagnosis in this area has been shown to be feasible and (if correctly utilised) leads to improvements in patient care, diagnosis, and decision making by the doctors involved. In this context, the computer is simply one element of an integrated package reaffirming the importance of traditional clinical medicine. abstract_id: PUBMED:1736794 How does computer-aided diagnosis improve the management of acute abdominal pain? The introduction of standardised data-collection forms and computer-aided diagnosis has been found to be associated with improved diagnosis and management of patients with acute abdominal pain. The mechanism by which such benefits accrue has been the subject of some controversy. Detailed analysis of 5193 patients from one hospital shows that the major benefit from such diagnostic aids was the accurate early diagnosis of non-specific abdominal pain by senior house officers in the accident and emergency department; this in turn led to fewer admissions and fewer operations with negative findings. 
Clinical data about patients with acute abdominal pain should be recorded on structured information sheets by junior doctors and early positive diagnosis should be encouraged before decisions affecting the patient's management are made. Improved computer support may confer further benefits. abstract_id: PUBMED:2242936 Computer-aided decision support in acute abdominal pain, with special reference to the EC concerted action. This presentation describes the use of computer aided decision support in acute abdominal pain. The need for such support is explored and the feasibility of providing support is described with reference to studies involving nearly 100,000 patients. It is argued that the provision of computer aided decision support can lead to substantial and practical benefit in clinical care--but this is mostly due to the constant stimulus towards "doing it right". This in turn depends upon the provision of a consensus view of "good medicine" in the area concerned; and on an international level strongly argues the case for multi-national cooperative studies to define good medicine and make it available. One such study (the European Community Concerted Action on Acute Abdominal Pain) is described. abstract_id: PUBMED:2654023 Computer-aided decision support in clinical medicine. This paper reviews the problems and prospects involved in providing computer-aided decision support in clinical medicine. First, the evaluation of medical innovation is discussed. It is suggested that there are three criteria by which an innovation may be judged, namely (1) a need for the innovation, (2) the ability of the innovation to fulfil that need and (3) the ability to do so without transgressing practical, ethical or legal boundaries. These problems are addressed in turn. The paper suggests, taking one area of clinical medicine as an example (acute abdominal pain), that there is a clear need for decision support--since the area is not handled well by doctors in current practice. Evidence is adduced to suggest that the computer can provide decision support and do so without transgressing professional, ethical or legal boundaries. The obstacles to progress, which stand in the way of widespread implementation, are briefly discussed. These are lack of medical terminology, poor man-machine interface and above all a lack of co-ordination. Finally, it is suggested that the most valuable facet of current systems is the discipline and precision in data collection they impose upon practicing doctors. abstract_id: PUBMED:21045220 Does computer-aided clinical decision support improve the management of acute abdominal pain? A systematic review. Acute abdominal pain is a common reason for emergency presentation to hospital. Despite recent medical advances in diagnostics, overall clinical decision-making in the assessment of patients with undifferentiated acute abdominal pain remains poor, with initial clinical diagnostic accuracy being 45-50%. Computer-aided decision support (CADS) systems were widely tested in this arena during the 1970s and 1980s with results that were generally favourable. Inception into routine clinical practice was hampered largely by the size and speed of the hardware. Computer systems and literacy are now vastly superior and the potential benefit of CADS deserves investigation.
An extensive literature search was undertaken to find articles that directly and prospectively compared the clinical diagnostic accuracy of medical staff in the diagnosis of acute abdominal pain before and after the institution of a CADS programme. Included articles underwent meta-analysis with a random-effects model. Ten studies underwent meta-analysis that demonstrated an overall mean percentage improvement in clinical diagnostic accuracy of 17.25% with the use of CADS systems. There is a role for CADS in the initial evaluation of acute abdominal pain, which very often takes place in the emergency department setting. abstract_id: PUBMED:3094664 Computer aided diagnosis of acute abdominal pain: a multicentre study. A multicentre study of computer aided diagnosis for patients with acute abdominal pain was performed in eight centres with over 250 participating doctors and 16,737 patients. Performance in diagnosis and decision making was compared over two periods: a test period (when a small computer system was provided to aid diagnosis) and a baseline period (before the system was installed). The two periods were well matched for type of case and rate of accrual. The system proved reliable and was used in 75.1% of possible cases. User reaction was broadly favourable. During the test period improvements were noted in diagnosis, decision making, and patient outcome. Initial diagnostic accuracy rose from 45.6% to 65.3%. The negative laparotomy rate fell by almost half, as did the perforation rate among patients with appendicitis (from 23.7% to 11.5%). The bad management error rate fell from 0.9% to 0.2%, and the observed mortality fell by 22.0%. The savings made were estimated as amounting to 278 laparotomies and 8,516 bed nights during the trial period--equivalent throughout the National Health Service to annual savings in resources worth over 20m pounds and direct cost savings of over 5m pounds. Computer aided diagnosis is a useful system for improving diagnosis and encouraging better clinical practice. abstract_id: PUBMED:4552594 Computer-aided diagnosis of acute abdominal pain. This paper reports a controlled prospective unselected real-time comparison of human and computer-aided diagnosis in a series of 304 patients suffering from abdominal pain of acute onset. The computing system's overall diagnostic accuracy (91.8%) was significantly higher than that of the most senior member of the clinical team to see each case (79.6%). It is suggested as a result of these studies that the provision of such a system to aid the clinician is both feasible in a real-time clinical setting, and likely to be of practical value, albeit in a small percentage of cases. abstract_id: PUBMED:329937 Computer-aided diagnosis of lower abdominal pain in women. This paper describes the use of a system of computer aided diagnosis in an unselected, prospective survey of 393 women suffering from lower abdominal pain of less than 1 week's duration. An accurate diagnosis was made by clinicians at first patient contact in 68.5 per cent of the group of patients. The computer's diagnostic prediction (based on the same data) matched the final diagnosis in 81.6 per cent of the patients. During this survey a marked improvement in diagnostic accuracy was observed amongst the junior clinicians. It is suggested that this is because of the discipline of data collection imposed and the intermittent feedback received, and also that this educational aspect of computer usage may be of wider benefit.
abstract_id: PUBMED:6753199 A feasibility study of computer aided diagnosis in appendicitis. In this study, the hypothesis that computer aided diagnosis could enable a more accurate differentiation between patients with acute appendicitis and those with abdominal pain but normal appendixes was examined. A data base was established by analyzing the records of 476 patients having an emergency measure appendectomy during a five year period. There were 360 or 76 per cent with acute appendicitis, 98 or 20 per cent with normal appendixes and 18 or 4 per cent with other diseases requiring operation. The records were analyzed with regard to history, physical examination and laboratory findings. The data base was then divided randomly into two parts. Part 1 was subjected to univariate discriminant analysis, using the chi-square test. The only quantities which were significantly different between appendicitis and a normal appendix were sex, duration of symptoms, anorexia and vomiting. Multivariate discriminant analysis was used to derive an abdominal pain index which discriminated between appendicitis and a normal appendix with a sensitivity of 0.82 and a specificity of 0.39. Using the abdominal pain index to evaluate the patients in part 2 of the data base, 23 or 40 per cent of the 58 patients with a normal appendix would have avoided operation. However, 31 or 18 per cent of the 169 patients with appendicitis would have not been operated upon; three of those 31 had perforated appendixes. Computer aided diagnosis was no more effective than unaided clinical diagnosis in appendicitis. Answer: Yes, computer-aided teaching packages can improve clinical care in patients with acute abdominal pain. Studies have shown that the use of computer-aided decision support (CADS) systems in the diagnosis and management of acute abdominal pain can lead to improvements in patient care, diagnosis, and decision-making by doctors (PUBMED:6371951). Specifically, the use of structured data collection forms along with computer feedback or computer-based teaching packages has resulted in better performance compared to the use of forms alone (PUBMED:1855017). The introduction of standardized data-collection forms and computer-aided diagnosis has been associated with more accurate early diagnosis of non-specific abdominal pain, leading to fewer unnecessary admissions and operations with negative findings (PUBMED:1736794). Moreover, computer-aided diagnosis has been shown to be feasible and, when correctly utilized, leads to improvements in patient outcomes, such as increased diagnostic accuracy, reduced negative laparotomy rates, and decreased perforation rates in patients with appendicitis (PUBMED:3094664). A systematic review also indicated that the use of CADS systems resulted in an overall mean percentage improvement in clinical diagnostic accuracy of 17.25% (PUBMED:21045220). The benefits of CADS are not limited to direct feedback during clinical care but extend to the educational aspect, where the discipline and precision in data collection imposed by these systems can lead to marked improvements in the diagnostic accuracy of junior clinicians (PUBMED:329937). Therefore, computer-aided teaching packages, as part of a broader decision support system, have the potential to significantly improve the clinical care of patients with acute abdominal pain.
Instruction: Is a perceived supportive physical environment important for self-reported leisure time physical activity among socioeconomically disadvantaged women with poor psychosocial characteristics? Abstracts: abstract_id: PUBMED:23537188 Is a perceived supportive physical environment important for self-reported leisure time physical activity among socioeconomically disadvantaged women with poor psychosocial characteristics? An observational study. Background: Over the past decade, studies and public health interventions that target the physical environment as an avenue for promoting physical activity have increased in number. While it appears that a supportive physical environment has a role to play in promoting physical activity, social-ecological models emphasise the importance of considering other multiple levels of influence on behaviour, including individual (e.g. self-efficacy, intentions, enjoyment) and social (e.g. social support, access to childcare) factors (psychosocial factors). However, not everyone has these physical activity-promoting psychosocial characteristics; it remains unclear what contribution the environment makes to physical activity among these groups. This study aimed to examine the association between the perceived physical environment and self-reported leisure-time physical activity (LTPA) among women living in socioeconomically disadvantaged areas demonstrating different psychosocial characteristics. Methods: In 2007-8, 3765 women (18-45 years) randomly selected from low socioeconomic areas in Victoria, Australia, self-reported LTPA, and individual, social and physical environmental factors hypothesised within a social-ecological framework to influence LTPA. Psychosocial and environment scores were created. Associations between environment scores and categories of LTPA (overall and stratified by thirds of perceived environment scores) were examined using generalised ordered logistic regression. Results: Women with medium and high perceived environment scores had 20-38% and 44-70% greater odds respectively of achieving higher levels of LTPA than women with low environment scores. When stratified by thirds of psychosocial factor scores, these associations were largely attenuated and mostly became non-significant. However, women with the lowest psychosocial scores but medium or high environment scores had 76% and 58% higher odds respectively of achieving ≥120 minutes/week (vs. <120 minutes/week) LTPA. Conclusions: Acknowledging the cross-sectional study design, the findings suggest that a physical environment perceived to be supportive of physical activity might help women with less favourable psychosocial characteristics achieve moderate amounts of LTPA (i.e. ≥120 minutes/week). This study provides further support for research and public health interventions to target perceptions of the physical environment as a key component of strategies to promote physical activity. abstract_id: PUBMED:34670605 Perceived social and built environment associations of leisure-time physical activity among adults in Sri Lanka. Objective: Although perceived neighbourhood environment is considered a predictor of leisure-time physical activity (LTPA), evidence for this is limited in South Asia. Thus, the aim was to determine the association between neighbourhood social and built environment features in carrying out LTPA among adults in Colombo District, Sri Lanka.
A cross-sectional study among 1320 adults was carried out using validated questionnaires for physical activity (PA) and built environment data collection. Multiple logistic regression analysis was conducted to assess the associations between environment characteristics and LTPA after adjusting for gender, age, employment status, income level and sector of residence. Results: A total of 21.7% of adults participated in some LTPA. The commonest type of LTPA was walking; carried out by 14.5%. Moderate and vigorous activity at leisure was carried out by 10.3% and 3.9% respectively. Perceived social acceptance for PA was positively associated with LTPA. Out of the built environment characteristics perceived infrastructure for walking, and recreational facilities for PA were negatively associated with LTPA. Self-efficacy emerged as an important positive correlate of LTPA. The participants were positively influenced by the self-efficacy and perceived social environment which should be addressed when promoting LTPA. abstract_id: PUBMED:25735603 Safety in numbers: does perceived safety mediate associations between the neighborhood social environment and physical activity among women living in disadvantaged neighborhoods? Objective: The aim of this study is to examine associations between the neighborhood social environment and leisure-time physical activity (LTPA)(1) and walking among women, and whether these associations are mediated by perceived personal safety. Methods: Women (n = 3784) living in disadvantaged urban and rural neighborhoods within Victoria, Australia completed a self-administered survey on five social environment variables (neighborhood crime, neighborhood violence, seeing others walking and exercising in the neighborhood, social trust/cohesion), perceived personal safety, and their physical activity in 2007/8. Linear regression analyses examined associations between social environment variables and LTPA and walking. Potential mediating pathways were assessed using the product-of-coefficients test. Moderated mediation by urban/rural residence was examined. Results: Each social environment variable was positively associated with engaging in at least 150 min/week of LTPA (OR = 1.16 to 1.56). Only two social environment variables, seeing others walking (OR = 1.45) and exercising (OR = 1.31), were associated with ≥ 150 min/week of walking. Perceived personal safety mediated all associations. Stronger mediation was found in urban areas for crime, violence and social trust/cohesion. Conclusion: The neighborhood social environment is an important influence on physical activity among women living in disadvantaged areas. Feelings of personal safety should not be included in composite or aggregate scores relating to the social environment. abstract_id: PUBMED:25733530 Psychosocial work environment and leisure-time physical activity: the Stormont study. Background: Research findings on the relationship between the psychosocial work environment and leisure-time physical activity (LTPA) are equivocal. This might partly be due to studies having focused on a restricted set of psychosocial dimensions, thereby failing to capture all relevant domains. Aims: To examine cross-sectional associations between seven psychosocial work environment domains and LTPA in a large sample of UK civil servants and to profile LTPA and consider this in relation to UK government recommendations on physical activity. 
Methods: In 2012, Northern Ireland Civil Service employees completed a questionnaire including measures of psychosocial working conditions (Management Standards Indicator Tool) and LTPA. We applied bivariate correlations and linear regression analyses to examine relations between psychosocial working conditions and LTPA. Results: Of 26,000 civil servants contacted, 5235 (20%) completed the questionnaire. 24% of men and 17% of women reported having undertaken 30min or more of physical activity on five or more days in the past week. In men, job control (-0.08) and peer support (-0.05) were weakly but significantly negatively correlated with LTPA, indicating that higher levels of exposure to these psychosocial hazards were associated with lower levels of LTPA. Job role (-0.05) was weakly but significantly negatively correlated with LTPA in women. These psychosocial work characteristics accounted for 1% or less of the variance in LTPA. Conclusions: Longitudinal research to examine cause-effect relations between psychosocial work characteristics and LTPA might identify opportunities for psychosocial job redesign to increase employees' physical activity during leisure time. abstract_id: PUBMED:28260241 Association of Self-Perceived Physical Competence and Leisure-Time Physical Activity in Childhood-A Follow-Up Study. Background: The basis of self-perceived physical competence is built in childhood and school personnel have an important role in this developmental process. We investigated the association between initial self-perceived physical competence and reported leisure-time physical activity (LTPA) longitudinally in 10-, 12-, and 15-year-old children. Methods: This longitudinal follow-up study comprises pupils from an elementary school cohort (N = 1346) in the city of Turku, Finland (175,000 inhabitants). The self-perceived physical competence (fitness and appearance) and LTPA data were collected with questionnaires. The full longitudinal data were available from 571 pupils based on repeated studies at the ages of 10, 12, and 15 years in 2004, 2006, and 2010. We analyzed the association of self-perceived physical competence and LTPA using regression models. Results: Self-perceived physical competence was positively associated with LTPA at all ages (10 years p < .05, 12 years p < .0001, 15 years p < .0001). Increase in the self-perceived physical fitness scores was likely to associate with higher LTPA at each age point (10 years [odds ratio, OR] = 1.18, 95% confidence interval, CI: 1.09-1.27; 12 years [OR] = 1.27, 95% CI: 1.18-1.37; and 15 years [OR] = 1.28, 95% CI: 1.19-1.38). Conclusions: Self-perceived physical competence is associated with LTPA in children and adolescents, and the association is strengthened with age. abstract_id: PUBMED:23933224 Is park visitation associated with leisure-time and transportation physical activity? Objective: The aim of this study was to examine whether frequency of park visitation was associated with time spent in various domains of physical activity among adults living in a disadvantaged neighbourhood of Victoria, Australia. Methods: In 2009, participants (n=319) self-reported park visitation and physical activity including: walking and cycling for transport, leisure-time walking, leisure-time moderate- to vigorous-intensity physical activity, and total physical activity. Results: The mean number of park visits per week was 3.3 (SD=3.8).
Park visitation was associated with greater odds of engaging in high (as compared to low) amounts of transportation physical activity, leisure-time walking, leisure-time moderate- to vigorous-intensity physical activity (MVPA) and total physical activity. Each additional park visit per week was associated with 23% greater odds of being in the high category for transportation physical activity, 26% greater odds of engaging in high amounts of leisure-time walking, 11% greater odds of engaging in MVPA, and 40% greater odds of high total physical activity. Conclusions: Acknowledging the cross-sectional study design, the findings suggest that park visitation may be an important predictor and/or destination for transportation and leisure-time walking and physical activity. Findings highlight the potentially important role of parks for physical activity. abstract_id: PUBMED:19080030 Relationship of perceived environmental characteristics to leisure-time physical activity and meeting recommendations for physical activity in Texas. Introduction: We investigated the relationship of perceived environmental characteristics to self-reported physical activity in Texas adults using 2004 Behavioral Risk Factor Surveillance System data. Methods: The 2 research questions were, "Are perceived neighborhood characteristics and reported use of facilities associated with self-reported leisure-time physical activity for male and female Texas residents aged 18 to 64 years?" and "Are perceived neighborhood characteristics and reported use of facilities related to meeting recommendations for moderate to vigorous physical activity for Texas men and women aged 18 to 64 years?" Descriptive statistics and multiple logistic regression were used for the analyses. Results: Multiple logistic regression analyses controlling for sociodemographic factors showed that for women, perceptions of neighbors being physically active, pleasantness of the neighborhood, lighting, safety, and feelings of neighbor trustworthiness were associated with leisure-time physical activity. Several of these variables were also related to meeting recommendations for physical activity. Reports of use of several types of neighborhood facilities were related to men's and women's leisure-time physical activity and with meeting recommendations for physical activity for women. Conclusion: Perceptions of neighborhood characteristics and reported use of facilities were related to physical activity and to meeting recommendations for physical activity, with stronger associations for women than for men. Interventions to increase levels of physical activity among Texans should be informed by multilevel assessments including environmental characteristics and by attention to important subpopulations. abstract_id: PUBMED:26808440 Motivation and Barriers for Leisure-Time Physical Activity in Socioeconomically Disadvantaged Women. Introduction: The aim of this study was to examine cross-sectional and longitudinal associations between motivation and barriers for physical activity, and physical activity behavior in women living in socioeconomic disadvantage. This study also examined whether weight control intentions moderate those associations. Methods: Data from 1664 women aged 18-46 years was collected at baseline and three-year follow-up as part of the Resilience for Eating and Activity Despite Inequality study. 
In mail-based surveys, women reported sociodemographic and neighborhood environmental characteristics, intrinsic motivation, goals and perceived family barriers to be active, weight control intentions and leisure-time physical activity (assessed through the IPAQ-L). Linear regression models assessed the association of intrinsic motivation, goals and barriers with physical activity at baseline and follow-up, adjusting for environmental characteristics and also physical activity at baseline (for longitudinal analyses), and the moderating effects of weight control intentions were examined. Results: Intrinsic motivation and, to a lesser extent, appearance and relaxation goals for being physically active were consistently associated with leisure-time physical activity at baseline and follow-up. Perceived family barriers, health, fitness, weight and stress relief goals were associated with leisure-time physical activity only at baseline. Moderated regression analyses revealed that weight control intentions significantly moderated the association between weight goals and leisure-time physical activity at baseline (β = 0.538, 99% CI = 0.057, 0.990) and between intrinsic motivation and leisure-time physical activity at follow-up (β = 0.666, 99% CI = 0.188, 1.145). For women actively trying to control their weight, intrinsic motivation was significantly associated with leisure-time physical activity at follow-up (β = 0.184, 99% CI = 0.097, 0.313). Conclusions: Results suggest that, especially in women trying to control their weight, intrinsic motivation plays an important role in sustaining physical activity participation over time. Also, weight goals for being physically active seem to play a role regarding short-term physical activity participation in this particular population. Addressing these motivational features may be important when promoting physical activity participation in women living in socioeconomically disadvantaged neighborhoods. abstract_id: PUBMED:25773471 Leisure-time physical activity in relation to occupational physical activity among women. Objective: The objective of this study is to examine the association between occupational physical activity and leisure-time physical activity among US women in the Sister Study. Methods: We conducted a cross-sectional study of 26,334 women who had been employed in their current job for at least 1 year at baseline (2004-2009). Occupational physical activity was self-reported and leisure-time physical activity was estimated in metabolic equivalent hours per week. Log multinomial regression was used to evaluate associations between occupational (sitting, standing, manually active) and leisure-time (insufficient, moderate, high) activity. Models were adjusted for age, race/ethnicity, education, income, geographic region, and body mass index. Results: Only 54% of women met or exceeded minimum recommended levels of leisure-time physical activity (moderate 32% and high 22%). Women who reported sitting (prevalence ratio (PR)=0.82, 95% confidence interval (CI): 0.74-0.92) or standing (PR=0.84, 95% CI: 0.75-0.94) most of the time at work were less likely to meet the requirements for high leisure-time physical activity than manually active workers. Associations were strongest among women living in the Northeast and the South. Conclusion: In this nationwide study, low occupational activity was associated with lower leisure-time physical activity. 
Women who are not active in the workplace may benefit from strategies to promote leisure-time physical activity. abstract_id: PUBMED:24929197 Association between perceived urban built environment attributes and leisure-time physical activity among adults in Hangzhou, China. Background: Neighborhood built environment may influence residents' physical activity, which in turn, affects their health. This study aimed to determine the associations between perceived built environment and leisure-time physical activity in Hangzhou, China. Methods: 1440 participants aged 25-59 were randomly selected from 30 neighborhoods in three types of administrative planning units in Hangzhou. International Physical Activity Questionnaire long form and NEWS-A were used to obtain individual-level data. The China Urban Built Environment Scan Tool was used to objectively assess the neighborhood-level built environment. Multi-level regression was used to explore the relationship between perceived built environment variables and leisure-time physical activities. Data was collected in Hangzhou from June to December in 2012, and was analyzed in May 2013. Results: Significant difference between neighborhood random variations in physical activity was identified (P=0.0134); neighborhood-level differences accounted for 3.0% of the variability in leisure-time physical activity. Male residents who perceived higher scores on access to physical activity destinations reported more involvement in leisure-time physical activity. Higher scores on perception of esthetic quality, and lower on residential density were associated with more time in leisure-time walking in women. Conclusions: The present study demonstrated that perceived urban built environment attributes significantly correlate with leisure-time physical activity in Hangzhou, China. Answer: Yes, a perceived supportive physical environment is important for self-reported leisure-time physical activity (LTPA) among socioeconomically disadvantaged women with poor psychosocial characteristics. A study conducted on women living in socioeconomically disadvantaged areas in Victoria, Australia, found that those with medium and high perceived environment scores had significantly greater odds of achieving higher levels of LTPA compared to women with low environment scores. This association was particularly notable among women with the lowest psychosocial scores, suggesting that a supportive physical environment might help women with less favorable psychosocial characteristics achieve moderate amounts of LTPA, such as at least 120 minutes per week (PUBMED:23537188). The findings imply that perceptions of the physical environment can be a key component of strategies to promote physical activity, especially for those who may not have physical activity-promoting psychosocial characteristics. This supports the notion that interventions aiming to increase physical activity in socioeconomically disadvantaged populations should consider enhancing the physical environment to encourage LTPA.
Instruction: Should diffuse bronchiectasis still be considered a CFTR-related disorder? Abstracts: abstract_id: PUBMED:25797027 Should diffuse bronchiectasis still be considered a CFTR-related disorder? Background: Although several comprehensive studies have evaluated the role of the CFTR gene in idiopathic diffuse bronchiectasis (DB), it remains controversial. Methods: We analyzed the whole coding region of the CFTR gene, its flanking regions and the promoter in 47 DB patients and 47 controls. Available information about demographic, spirometric, radiological and microbiological data for the DB patients was collected. Unclassified CFTR variants were in vitro functionally assessed. Results: CFTR variants were identified in 24 DB patients and in 27 controls. DB variants were reclassified based on the results of in silico predictive analyses, in vitro functional assays and data from epidemiological and literature databases. Except for the sweat test value, no clear genotype-phenotype correlation was observed. Conclusions: DB should not be considered a classical autosomal recessive CFTR-RD. Moreover, although further investigations are necessary, we proposed a new class of "Non-Neutral Variants" whose impact on lung disease requires more studies. abstract_id: PUBMED:32172933 Genetic diagnosis in practice: From cystic fibrosis to CFTR-related disorders. Cystic fibrosis (CF) is a channelopathy caused by mutations in the gene encoding the CF transmembrane conductance regulator (CFTR) protein. Diagnosis of CF has long relied on a combination of clinical (including gastrointestinal and/or respiratory) symptoms and elevated sweat chloride concentration. After cloning of the CFTR gene in 1989, genetic analysis progressively became an important aspect of diagnosis. Although combination of sweat test and genetic analysis have simplified the diagnosis of CF in most cases, difficult situations remain, especially in cases that do not fulfill all diagnostic criteria. Such situations are most frequently encountered in patients presenting with a single-organ disease (e.g., congenital absence of the vas deferens, pancreatitis, bronchiectasis) leading to a diagnosis of CFTR-related disorder, or when the presence/ absence of CF is not resolved after newborn screening. This article reviews the diagnostic criteria of CF, with special emphasis on genetic testing. © 2020 French Society of Pediatrics. Published by Elsevier Masson SAS. All rights reserved. abstract_id: PUBMED:36388243 Corneal Refractive Surgery Considerations in Patients with Cystic Fibrosis and Cystic Fibrosis Transmembrane Conductance Regulator-Related Disorders. This article discusses common ocular manifestations of cystic fibrosis (CF) and cystic fibrosis transmembrane conductance regulator-related disorders (CFTR-RD). A structured approach for assessing and treating patients with CF/CFTR-RD seeking corneal refractive surgery is proposed, as well as a novel surgical risk scoring system. We also report two patients with various manifestations of CFTR dysfunction who presented for refractive surgery and the outcomes of the procedures. Surgeons seeking to perform refractive surgery on patients with CF/CFTR-RD should be aware of mild to severe clinical manifestations of CFTR dysfunction. 
Specific systemic and ocular manifestations of CF include chronic obstructive pulmonary disease (COPD), bronchiectasis, recurrent pulmonary infections, CF-related diabetes and liver disease, pancreatic insufficiency, conjunctival xerosis, night blindness, meibomian gland dysfunction (MGD), and blepharitis. Corneal manifestations include dry eye disease (DED), punctate keratitis (PK), filamentary keratitis (FK), xerophthalmia, and decreased endothelial cell density and central corneal thickness. Utilization of the appropriate review of systems (ROS) and screening tests will assist in determining if the patient is a suitable candidate for refractive surgery, as CF/CFTR-RD can impact the health of the cornea. Collaboration with other medical professionals who care for these patients is encouraged to ensure that their CF/CFTR-RD symptoms are best controlled via systemic and other treatment options. This will assist in reducing the severity of their ocular manifestations before and after surgery. abstract_id: PUBMED:30279124 Phenotypic spectrum of patients with cystic fibrosis and cystic fibrosis-related disease carrying p.Arg117His. Background: The "mild" gene variant, p.Arg117His in cystic fibrosis (CF) results in highly variable phenotypes ranging from male infertility to severe lung disease. Due to current interest to include this group in CFTR-targeted therapies, this study aims to describe the disease spectrum. Methods: Retrospective study of Toronto CF and CFTR-related p.Arg117His patients. Longitudinally captured clinical data were compared between patients with 5T/7T-variants and those with a CF or CFTR-related diagnosis. Comparison was made between p.Arg117His adults and infants identified through CF newborn screening (NBS). Results: Twenty of fifty patients carried the 5T variant, all with a diagnosis of CF (p.Arg117His-5TCF), and 30/50 carried 7T, 7 diagnosed with CF (p.Arg117His-7TCF) and 23 with a CFTR-related disorder (p.Arg117His-7TCFTR). For those with chest HRCT results available, 75% p.Arg117His-5TCF, 33% p.Arg117His-7TCF and 27% p.Arg117His-7TCFTR patients had bronchiectasis. Further, 79% p.Arg117His-5T, 29% p.Arg117His-7TCF and 13% p.Arg117His-7TCFTR had abnormal lung function. Of those, 80% grew CF-related pathogens on respiratory culture. Interestingly, the mean maximum sweat chloride and the percentage of patients growing CF-related bacterial pathogens were identical in p.Arg117His-7 TCFTR adults and p.Arg117His infants. Conclusions: Generally, p.Arg117His-5T patients had more severe CF disease. However, a subset of p.Arg117His-7 T patients demonstrated equally severe disease, thus warranting clinical monitoring of all p.Arg117His patients including p.Arg117His infants identified via NBS. abstract_id: PUBMED:28129813 Diagnosis of Cystic Fibrosis in Nonscreened Populations. Objective: Although the majority of cases of cystic fibrosis (CF) are now diagnosed through newborn screening, there is still a need to standardize the diagnostic criteria for those diagnosed outside of the neonatal period. This is because newborn screening started relatively recently, it is not performed everywhere, and even for individuals who were screened, there is the possibility of a false negative. To limit irreversible organ pathology, a timely diagnosis of CF and institution of CF therapies can greatly benefit these patients. Study Design: Experts on CF diagnosis were convened at the 2015 CF Foundation Diagnosis Consensus Conference. 
The participants reviewed and discussed published works and instructive cases of CF diagnosis in individuals presenting with signs, symptoms, or a family history of CF. Through a modified Delphi methodology, several consensus statements were agreed upon. These consensus statements were updates of prior CF diagnosis conferences and recommendations. Results: CF diagnosis in individuals outside of newborn screening relies on the clinical evidence and on evidence of CF transmembrane conductance regulator (CFTR) dysfunction. Clinical evidence can include typical organ pathologies seen in CF such as bronchiectasis or pancreatic insufficiency but often represent a broad range of severity including mild cases. CFTR dysfunction can be demonstrated using sweat chloride testing, CFTR molecular genetic analysis, or CFTR physiologic tests. On the basis of the large number of patients with bona fide CF currently followed in registries with sweat chloride levels between 30 and 40 mmol/L, the threshold considered "intermediate" was lowered from 40 mmol/L in the prior diagnostic guidelines to 30 mmol/L. The CF diagnosis was also discussed in the context of CFTR-related disorders in which CFTR dysfunction may be present, but the individual does not meet criteria for CF. Conclusions: CF diagnosis remains a rare but important condition that can be diagnosed when characteristic clinical features are seen in an individual with demonstrated CFTR dysfunction. abstract_id: PUBMED:27709245 Cystic fibrosis: a clinical view. Cystic fibrosis (CF), a monogenic disease caused by mutations in the CFTR gene on chromosome 7, is complex and greatly variable in clinical expression. Airways, pancreas, male genital system, intestine, liver, bone, and kidney are involved. The lack of CFTR or its impaired function causes fat malabsorption and chronic pulmonary infections leading to bronchiectasis and progressive lung damage. Previously considered lethal in infancy and childhood, CF has now attained median survivals of 50 years of age, mainly thanks to the early diagnosis through neonatal screening, recognition of mild forms, and an aggressive therapeutic attitude. Classical treatment includes pancreatic enzyme replacement, respiratory physiotherapy, mucolytics, and aggressive antibiotic therapy. A significant proportion of patients with severe symptoms still requires lung or, less frequently, liver transplantation. The great number of mutations and their diverse effects on the CFTR protein account only partially for CF clinical variability, and modifier genes have a role in modulating the clinical expression of the disease. Despite the increasing understanding of CFTR functioning, several aspects of CF need still to be clarified, e.g., the worse outcome in females, the risk of malignancies, the pathophysiology, and best treatment of comorbidities, such as CF-related diabetes or CF-related bone disorder. Research is focusing on new drugs restoring CFTR function, some already available and with good clinical impact, others showing promising preliminary results that need to be confirmed in phase III clinical trials. abstract_id: PUBMED:15463882 Diagnosis of cystic fibrosis in adults with diffuse bronchiectasis. We assessed the contribution of the sweat test, genotyping and nasal potential difference (NPD) in the diagnosis of cystic fibrosis (CF) in adults with diffuse bronchiectasis (DB). Among 601 adults referred for DB from 1992 to 2001, 46 were diagnosed with CF.
The sweat test was positive in 37 patients and normal or intermediate in nine patients. Two CF mutations were identified in 18 patients (39%) by screening for 31 mutations and in 36 patients (78%) after complete genetic analysis. NPD was suggestive of CF in 71% of the patients. The combination of the sweat test and genetic analysis led to the diagnosis of CF in 45 patients. In the nine patients with normal or intermediate sweat test, the diagnosis was confirmed by screening for 31 mutations in five, by complete genetic screening in three, and by NPD in the remaining patient. Searching for CF should start with sweat test. If the sweat test is normal or intermediate, screening for 31 mutations may help to diagnose CF. A complete genetic analysis is indicated when only one mutation is detected and/or when other clinical features, such as obstructive azoospermia or pancreatic insufficiency, are suggestive of CF. NPD measurement is indicated in controversial cases. abstract_id: PUBMED:34248082 Cystic fibrosis with focal biliary cirrhosis and portal hypertension in Japan: a case report. A 9-year-old Japanese girl was found to have persistently elevated hepatic enzymes, chronic bronchitis, chronic sinusitis, and poor weight gain beginning at 5 months of age. Chest computed tomography (CT) revealed diffuse bronchial wall thickening and peripheral bronchiectasis. Abdominal CT showed pancreatic atrophy, liver cirrhosis, a dilated splenic vein, and splenomegaly. Her sweat chloride concentration was 117 mmol/l (normal, <60 mmol/l). CFTR gene analysis revealed the presence of the Y517H variant on one allele and the 1540del10 variant on the other allele. These findings established a definitive diagnosis of cystic fibrosis (CF). While CF is the most common autosomal recessive genetic disorder among Europeans, it is quite rare in Southeast Asia including Japan. It is important that CF be considered in the work-up of children with chronic hepatic and respiratory disorders even if it is uncommon among children of a similar background. abstract_id: PUBMED:32003094 Clinical characteristics and genetic analysis of cystic fibrosis transmembrane conductance receptor-related disease. Background: Cystic fibrosis (CF) transmembrane conductance receptor (CFTR)-related disease is diagnosed in patients affected by CFTR dysfunction who do not fully meet the CF diagnostic criteria. Only 2% of all CF patients have CFTR-related disease. We define the demographic characteristics of such patients, describe the performance of mutational analyses, and describe the clinical findings. Methods: Twenty-four patients were followed-up for CFTR-related disease. Patients with CF symptoms but who did not completely fulfil the CF diagnostic criteria were enrolled. Age, body mass index at the times of diagnosis and admission, symptoms, pulmonary function and fecal elastase test results, gene analyses, and clinical findings during follow-up were evaluated. Results: Ten patients (42%) were female and 14 (58%) male. Their mean age was 15.3 years (minimum-maximum 6-20 years). The mean age at diagnosis was 8.5 years (minimum-maximum 3-14 years) and the most common presenting complaint was a cough (n = 19). During follow up, chronic sinusitis developed in 15 patients, bronchiectasis in 13, nasal polyposis in six, failure to thrive in three, recurrent pancreatitis in two, asthma in one, and congenital bilateral absence of the vas deferens in one. Fecal elastase levels were low in only one of the three patients who failed to thrive.
In terms of CFTR gene mutations, two were found in 10 patients, one in eight patients, and none in six. Conclusions: Cystic fibrosis transmembrane conductance receptor-related disease presents with various clinical findings. Serious symptoms may develop later in life. Late diagnosis significantly compromises the quality and duration of life in such patients. abstract_id: PUBMED:26526220 Non-allergic asthma as a CFTR-related disorder. Background: CFTR dysfunction can be involved in CBAVD, pancreatitis or bronchiectasis. Methods: Subjects with cystic fibrosis-like disease, equivocal sweat chloride concentrations and no or one disease-causing CFTR mutation were investigated by intestinal current and/or nasal potential difference measurements. Results: A subgroup of female patients who had been diagnosed to suffer from non-allergic asthma showed intermediary chloride concentrations in sweat test, normal chloride secretory responses in the intestine and an abnormal nasal potential difference with Sermet scores in the cystic fibrosis range. Conclusion: Non-allergic asthma is a clinical entity that may be associated with CFTR dysfunction of the respiratory epithelium. Answer: The relationship between diffuse bronchiectasis (DB) and CFTR-related disorders remains a subject of debate. A study analyzing the CFTR gene in 47 DB patients and 47 controls found CFTR variants in both groups, but no clear genotype-phenotype correlation was observed, except for the sweat test value. The study concluded that DB should not be considered a classical autosomal recessive CFTR-related disorder (CFTR-RD), suggesting that the impact of "Non-Neutral Variants" on lung disease requires further investigation (PUBMED:25797027). However, CFTR-related disorders are a broader category that includes conditions resulting from CFTR dysfunction that do not meet the full criteria for cystic fibrosis (CF). These can manifest as single-organ diseases such as congenital absence of the vas deferens, pancreatitis, or bronchiectasis (PUBMED:32172933). The phenotypic spectrum of patients with CF and CFTR-related disease carrying the p.Arg117His variant, for example, ranges from mild to severe lung disease, indicating that some patients with CFTR mutations can present with significant lung pathology, including bronchiectasis (PUBMED:30279124). The diagnosis of CF in nonscreened populations, which includes individuals presenting with signs and symptoms such as bronchiectasis, relies on clinical evidence and evidence of CFTR dysfunction (PUBMED:28129813). Additionally, CFTR dysfunction has been implicated in non-allergic asthma, suggesting that CFTR-related disorders can have a broad range of respiratory manifestations (PUBMED:26526220). In summary, while diffuse bronchiectasis may not be a classical CFTR-RD, CFTR gene variants can be associated with lung disease, and CFTR dysfunction can contribute to respiratory conditions that include bronchiectasis. Therefore, it seems that diffuse bronchiectasis can still be considered within the spectrum of CFTR-related disorders, but its classification may not fit the traditional autosomal recessive pattern typically associated with CF. Further research is necessary to fully understand the role of CFTR variants in the pathogenesis of diffuse bronchiectasis and other lung diseases.
Instruction: Can family medicine residents predict their performance on the in-training examination? Abstracts: abstract_id: PUBMED:15531984 Can family medicine residents predict their performance on the in-training examination? Background And Objectives: Evaluation of residents' competence is of great importance in the training of physicians, yet the evaluation process is largely subjective. Faculty and residents frequently rely on self assessments to make decisions regarding curriculum design and electives. The In-training Examination (ITE), the only widely available objective measure of residents' medical knowledge, provides an opportunity to test the reliability of self-assessments. This study's objective was to determine if family medicine residents are able to self-assess their medical knowledge by predicting their performance on the ITE. Methods: A survey asking the residents to estimate their performance on the ITE in each of the nine content areas was administered at 13 examination sites just prior to the ITE. Correlation coefficients were calculated for corresponding predicted and actual scores for each resident in each content area. Predictions were also compared to performance according to quartile. Results: Residents showed little ability to predict their scores in any of the content areas. Residents scoring in either the lowest or highest quartile were least able to predict accurately, with correct predictions ranging from 3% to 23%. Conclusions: Residents cannot reliably predict their performance on the ITE. Of special concern are residents scoring in the lowest quartile, since these residents greatly overestimated their performance. abstract_id: PUBMED:31143716 Family medicine residents' educational environment and satisfaction of training program in Riyadh. Background: Improving health outcome indicators worldwide needs well-trained family physicians, and the Kingdom of Saudi Arabia is of no exception from that need. Objectives: To address the level of satisfaction and assess the educational environment among residents of family medicine (FM) in Riyadh city. Methodology: A cross-sectional study; the Postgraduate Hospital Educational Environment Measure (PHEEM) was used to assess the educational environment for all FM residents in fully structured training centers that include all levels of residents in Riyadh during 2016. Results: About 187 surveys were distributed and 140 were collected, with a response rate of 74.87%. Cronbach's alpha scored at 0.917 for overall items. Out of 160 maximum score, the overall score of the PHEEM was 86.73 (standard deviation [SD]: 19.46). The perception of teaching score was 33.11 (SD: 8.80) out of 60, the perception of role autonomy score was 28.60 (SD: 7.35) out of 56, and the perception of social support was 25.02 (SD: 5.43) out of 44. Conclusion: The educational environment is an important determinant of medical trainees' achievements and success. The results are better than what had been found in the previous studies, but more attention and effort should be done, especially for the poorly rated points in this study. We recommend a continuous evaluation and reconstruction of the Saudi Board of FM program, and such results could be a tool that might help in fostering better and stronger educational program. abstract_id: PUBMED:31911044 A Survey on the Experience of Singaporean Trainees in Obstetrics/Gynecology and Family Medicine of Sexual Problems and Views on Training in Sexual Medicine. 
Introduction: Asian patients may have more difficulty seeking help for their sexual problems because of a largely conservative culture. Residents from both obstetrics and gynecology (OBGYN) and family medicine (FM) departments are ideally placed to address sexual problems. Aim: This survey explored the experience of residents from OBGYN and FM in managing sexual problems and their views on training in sexual medicine (SM). Method: An anonymized questionnaire collecting data on trainee characteristics, exposure to male and female sexual problems, and training in SM was sent to all FM and OBGYN residents in Singapore. These residents had completed their medical registration with the Singapore Medical Council and were at various stages of specialty training in both FM and OBGYN residency programs in Singapore. Main Outcome Measure: Trainees' exposure to male and female sexual problems and their views on training in Sexual Medicine. Results: The overall response from the survey was 63.5% (122/192): 54% (70/129) and 69% (52/75) of FM and OBGYN residents responded, respectively. 63% were female, with 22% being senior residents, and 55% attended Singaporean medical schools. About one quarter (30/122) of the respondents encountered patients with sexual problems at least monthly. Most would refer these patients directly to specialists, psychologists, and sex therapists. More than 80% of residents were not confident in managing sexual problems in either sex (89% for male problems; 83% for female problems). Among the recognized categories, only 30% felt confident to manage erectile dysfunction, 26% for vaginismus, while less than 10% felt confident to manage libido, arousal, or orgasm disorders. 95% of the residents agreed that SM should be part of both training curricula, with 70% and 25% suggesting at junior and senior residency, respectively. 93% of them were interested to obtain further knowledge and skills in SM through their core training curriculum and from seminars. Conclusions: This survey reported a significant number of residents in OBGYN and FM departments are regularly exposed to patients with sexual problems but lack the skills to manage them. OBGYN residents were more familiar with managing female sexual problems while FM residents tend to have more experience in male sexual problems. Almost universally, the residents in FM and OBGYN were very keen to acquire skills in SM, and the results support the incorporation of appropriate knowledge and skills into both national residency program curricula. Huang Z, Choong DS, Ganesan AP, et al. A Survey on the Experience of Singaporean Trainees in Obstetrics/Gynecology and Family Medicine of Sexual Problems and Views on Training in Sexual Medicine. J Sex Med 2019;8:107-113. abstract_id: PUBMED:1053500 An in-training examination for residents in family practice. An in-training examination for family practice residents has been developed and used in a regional network residency program over the past three years. The most striking result has been the strong preference expressed by residents for question-specific feedback in order to facilitate learning after taking the examination. A well-designed in-training examination has the potential to meet both individual resident and program goals as an additional measure of resident performance and growth, as well as of the effectiveness of teaching in the various curricular areas.
In-training examinations for residents are in use by 12 other specialties in medicine, and have been well-accepted by program directors and residents. A nationally-sponsored in-training examination for family practice residents is needed which includes maximal teaching capability through comprehensive and specific feedback. abstract_id: PUBMED:28404721 Family medicine residents' training in, knowledge about, and perceptions of digital rectal examination. Objective: To evaluate family medicine residents' training in, knowledge about, and perceptions of digital rectal examination (DRE). Design: Descriptive study, using an online survey that was available in French and English. Setting: Quebec. Participants: A total of 217 residents enrolled in a family medicine program. Main Outcome Measures: Residents' demographic characteristics; the DRE teaching they received throughout their medical training; their reasons for omitting DRE; their recognition of DRE indications (strong vs weak) and application of DRE for 10 anorectal complaints; and their perceptions of the overall quality of the DRE training they received. Results: Of the 879 residents contacted, 217 (25%) responded to the survey. Throughout their training, one-third of respondents did not receive any supervision for or feedback on DRE technique. Seventy-one percent of respondents expressed their inability to identify the nature of abnormal examination findings at least once during their training. The most frequently reported reasons to omit DRE were patient refusal, inadequate setting, and lack of time. Conclusion: Most of the residents in this study had omitted DRE at least once in their clinical work despite recognizing its importance. There was discordance between recognition of a complaint requiring DRE and execution of this technique in a clinical setting. Family medicine education programs and continuing medical education committees should consider including DRE training. abstract_id: PUBMED:32029219 Learning Styles of Internal Medicine Residents and Association With the In-Training Examination Performance. Introduction: Assessment of how medical residents learn and the impact on standardized test performance is important for effective training. Kolb's learning style inventory categorizes learning into accommodating, assimilating, converging and diverging based on the four stages of learning: active experimentation, abstract conceptualization, concrete experience and reflective observation. The American College of Physicians (ACP) Internal Medicine In-Training Examination (IM-ITE) has been shown to positively correlate with successful performance on clinical assessments and board certification. We sought to evaluate the association between the individual learning styles of IM residents and performance on the ACP IM-ITE. Methods: The Kolb LSI questionnaire was administered to IM residents during the 2016/2017 academic year. Logistic regression was used to analyze the association between residents' preferred learning styles and performance on the ACP IM-ITE. Results: 53 residents in the IM Residency Program of Morehouse School of Medicine completed the questionnaire. The predominant learning style was assimilating (49%), followed by converging (26%). There was no significant difference between the learning styles of residents when compared across gender, age, race, and PGY levels.
Residents with a diverging learning style had the highest mean IM-ITE percentage score followed by assimilating and converging respectively (P = 0.14) CONCLUSIONS: The predominant learning styles among our IM residents are assimilating and converging, which is consistent with previous studies. Residents with a diverging style of learning appeared to perform better on the IM-ITE. We suggest that future studies should evaluate the feasibility of integrating brainstorming and group work sessions into the IM residency teaching curriculum and the impact on academic performance. abstract_id: PUBMED:36964563 Perspectives of family medicine residents in Riyadh on leadership training: a cross-sectional study. Background: Medical educators in academia have faced challenges incorporating leadership training into curricula while minimizing redundancy and assuring value and relevance for all learners. This study aims to assess the status of leadership training as perceived by family medicine residents in Riyadh to advise the development of a formal leadership training curriculum. Method: The research is cross-sectional and quantitative. Participants were asked via an electronic questionnaire about their leadership attitudes, perceived degree of training in various leadership domains, and where they could find additional training. Results: The survey was completed by 270 family medicine residents in Riyadh. Residents rated the importance of physician leadership in their communities as high (6 out of 7 on a Likert scale). In contrast, agreement with the statement 'I am a leader' obtained the lowest grade (4.4 of 7 on a Likert scale). Overall, most of the residents participating in the study (50% or more) voiced a desire for more training in all leadership domains. Over 50% of residents indicated that leadership electives or selective lectures, workshops, or seminars as well as WADAs (Weekly Academic Day Activities), leadership mentors or coaches teaching junior learners (with training), and leadership courses could be incorporated into the curriculum to foster leadership skills. Conclusion: Residents were enthusiastic about family physicians being leaders, aligning with the current educational philosophy but requiring formal training. They also indicated areas where leadership training might be improved and developed in the current curriculum. This poll's results could be used to help residents build leadership skills by incorporating them into a formal leadership curriculum. abstract_id: PUBMED:38152809 Evaluation of the Change in Family Medicine Residents' Confidence and Knowledge in Performing Basic Obstetric Ultrasound Post-training: A Prospective Study. Introduction The maternity care curriculum guidelines of the American Academy of Family Physicians (AAFP) state that family medicine residents (FMRs) should demonstrate the ability to independently perform limited obstetric ultrasound (OBUS) examinations as a core skill. This study's purpose is to examine whether basic OBUS training enhances the knowledge and confidence of FMRs in performing OBUS. Methods This is a Sparrow Institutional Review Board (IRB)-exempt prospective study that was completed at the Sparrow/Michigan State University (MSU) Family Medicine Residency Program (FMRP) in Michigan between December 2020 and December 2021, involving 40 residents. Assessment of knowledge and confidence in performing OBUS was completed prior to and following the training sessions. 
For training, an online lecture and two separate hands-on sessions with a pregnant patient were completed. Training materials by Prof. Dr. Mark Deutchman and the University of Washington (UoW) were used. A paired t-test was used for statistical analysis, and a p-value of <0.05 was used to determine statistical significance. Results Thirty-two pre- and 25 post-training questionnaires were collected from the target group. Of the respondents, 92% (n=23) indicated that training increased their confidence levels in performing OBUS. The percentage of reported confidence level of 1 or 2 in performing OBUS (on a Likert scale of 5, with 5 as the highest confidence level) decreased by 60% post-training (p<0.001). Levels 3, 4, and 5 in confidence level were increased. According to the respondents, an increased confidence level in OBUS is helpful for improving trust and rapport between the provider and the patient (92%, n=23), boosting the provider's diagnostic abilities (80%, n=20), improving patient satisfaction (76%, n=19), and decreasing healthcare costs (44%, n=11). Conclusion The basic OBUS training sessions improved the knowledge and confidence of residents in interpreting and performing OBUS; therefore, more OBUS training is needed during the residency. abstract_id: PUBMED:33995774 Learning style among family medicine residents, Qatar. Understanding the different learning styles among family medicine residents is important to adjust the educational program to meet their needs and make the educational process fruitful to improve their academic performance. This study aims to assess learning styles among family medicine residents in Qatar. This cross-sectional descriptive study was conducted at the West Bay family medicine training center, Doha, Qatar, where all family medicine residents were invited to participate using a self-administered validated questionnaire based on the David Kolb model of experiential learning that has been extensively used in medical education research. Demographic data were assessed and analyzed as the predictor variables. Data were collected from 38 residents with a response rate of 76%, revealing that the predominant pattern in postgraduate year one (PGY1) is activist in 65% and theorist in 55%, while PGY2 tends to be reflector in 45% and theorist in 35%, and in PGY3-4 this changed to 70-75% activist and 40-55% (reflector and pragmatic). The general learning style pattern among all residents tends to be in the following order: activist 60.5%, then reflector 44.7%, followed by pragmatism 34.2% and finally theorist 36.8%. Learning style assessment is important and can be used to determine which teaching modalities will be best accepted and most effective for family medicine residents, which should be considered while planning, designing, and implementing their educational program. abstract_id: PUBMED:37211440 Acquiring general practitioner roles during the outpatient postgraduate training section and profession-forming postgraduate training conditions in family physician practices - A survey among family medicine residents in Rhineland-Palatinate during their postgraduate training in general medicine. Background: Postgraduate training in general medicine should be oriented on competencies and profession-forming, as is suggested by the German Regulations on Specialist Training of federal and state governments and the Competence-based Curriculum General Medicine.
The learnability of general practitioner (GP) roles and the profession-forming orientation of the postgraduate training conditions during the outpatient postgraduate training period were investigated. Methods: A cross-sectional study in questionnaire design was conducted from October until December 2019 among 220 physicians in postgraduate training who were registered at the Association of Statutory Health Insurance-Accredited Physicians in Rhineland-Palatinate for the specialty of general medicine. The GP roles surveyed were based on the "CanMEDS General Medicine" roles. Using indicators created from the didactic model "Cognitive Apprenticeship", profession-forming alignment of postgraduate training conditions in the GP practices were studied. The data obtained were analysed descriptively. Results: The response of 70 evaluable questionnaires included a gender distribution of 51 female and 18 male family medicine residents. The family medicine residents were almost equally divided between single-handed practices, cooperating practice associations and group practices. Slightly more than half of the female physicians worked part-time, whereas the male physicians were all full-time employees. The roles of "member of an interprofessional team", "health advocate" and "medical expert" were perceived as learnable by 70 to 90% of the family medicine residents. In addition to approval, varying degrees of indecision and rejection were expressed regarding the learnability of the roles of "teacher/scholar", "member of a network" and "employer". A slight majority considered the appropriation of the role of "practice manager" to be critical. Within the survey of profession-forming postgraduate training conditions, the indicators "accessible contact person", "collegial appreciation" and "assumption of responsibility" were approved by more than 90% in some cases. The indicators "access to GP acting" (approx. 86%) and "resilience" (approx. 71%) also received high approval ratings. However, the indicator "continuous feedback" only received a slight majority of approval. Discussion: GP postgraduate training practices in Rhineland-Palatinate apparently offer suitable educational locations to family medicine residents to acquire a solid basis for professional practising and the skills for patient-oriented communicating of preventive health-promoting contents. Male physicians seem to follow more traditional professional hierarchies. In contrast, female physicians are more receptive to teamwork but tend to be sceptical about adopting a leadership position. The learnability of particular GP roles may benefit from close collaboration with the practice owner, especially in single-handed practices. In addition, the working time model chosen seems to have an effect. Conclusions: It can be assumed that profession-forming postgraduate training conditions predominate in Rhineland-Palatinate GP postgraduate training practices, where the role of "medical expert" also appeared to be learnable in most cases. In some cases the learnability of GP roles was significantly influenced by the factors "gender", "working time model" and "form of practice". Consequently, it could have a quality-enhancing impact if these factors were taken into account in the development of measures to realize competence orientation in GP postgraduate training. Answer: No, family medicine residents cannot reliably predict their performance on the in-training examination (ITE). 
A study found that residents showed little ability to predict their scores in any of the content areas of the ITE. Particularly, residents scoring in either the lowest or highest quartile were least able to predict accurately, with correct predictions ranging from 3% to 23% (PUBMED:15531984).
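For illustration only: the OBUS training abstract above (PUBMED:38152809) reports a paired t-test on pre- versus post-training confidence ratings. A minimal sketch of such a comparison, using invented 1-5 Likert scores and assuming matched pre/post ratings from the same respondents (a simplification, since the study actually collected 32 pre- and 25 post-training questionnaires), could look like this in Python:

from scipy.stats import ttest_rel

pre = [1, 2, 2, 1, 3, 2, 1, 2, 2, 1]    # hypothetical pre-training confidence ratings
post = [3, 4, 3, 2, 4, 4, 3, 3, 4, 2]   # hypothetical post-training confidence ratings

t_stat, p_value = ttest_rel(post, pre)  # paired comparison of the same raters
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would mirror the reported improvement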
Instruction: Transitions from first substance use to substance use disorders in adolescence: is early onset associated with a rapid escalation? Abstracts: abstract_id: PUBMED:18768267 Transitions from first substance use to substance use disorders in adolescence: is early onset associated with a rapid escalation? Background: Early substance use (SU) in adolescence is known to be associated with an elevated risk of developing substance use disorders (SUD); it remains unclear though whether early SU is associated with more rapid transitions to SUD. Objective: To examine the risk and speed of transition from first SU (alcohol, nicotine, cannabis) to SUD as a function of age of first use. Methods: N=3021 community subjects aged 14-24 years at baseline were followed up prospectively over 10 years. SU and SUD were assessed using the DSM-IV/M-CIDI. Results: (1) The conditional probability of substance-specific SU-SUD transition was the greatest for nicotine (36.0%) and the least for cannabis (18.3% for abuse, 6.2% for dependence) with alcohol in between (25.3% for abuse; 11.2% for dependence). (2) In addition to confirming early SU as a risk factor for SUD we find: (3) higher age of onset of any SU to be associated with faster transitions to SUD, except for cannabis dependence. (4) Transitions from first cannabis use (CU) to cannabis use disorders (CUD) occurred faster than for alcohol and nicotine. (5) Use of other substances co-occurred with risk and speed of transitions to specific SUDs. Conclusion: Type of substance and concurrent use of other drugs are of importance for the association between age of first use and the speed of transitions to substance use disorders. Given that further research will identify moderators and mediators affecting these differential associations, these findings may have important implications for designing early and targeted interventions to prevent disorder progression. abstract_id: PUBMED:35678296 Substance use-related factors and psychosocial characteristics among Turkish adults with early- and late-onset substance use disorder. The onset of substance use is a strong predictor of substance use disorders and related problems. This study examined the differences between early- and late-onset substance use in personality and substance use characteristics, traumatic experiences, and social support with a sample of 100 Turkish adults with substance use disorders. Early onset (<18 years) was associated with more traumatic experiences, increased risk of developing posttraumatic stress disorder (PTSD), and higher levels of depressive symptoms compared to late onset (≥18 years). Also, depressive symptoms, living, and working status were predictors of substance use onset. The study addresses the groups at risk of initiating substance use. abstract_id: PUBMED:28077106 Age of onset of substance use and psychosocial problems among individuals with substance use disorders. Background: Substance use is generally initiated in adolescence or early adulthood and is commonly associated with several physical, psychological, emotional and social problems. The objective of this study is to assess differences in psychosocial problems by age of onset of substance use among individuals with substance use disorders (SUDs) residing in drug rehabilitation centers. Methods: A descriptive cross-sectional research design was carried out.
A Probability Proportional to Size (PPS) sampling technique was used to select the drug rehabilitation centers, and all the respondents meeting the inclusion criteria of the selected seven rehabilitation centers were taken as a sample, comprising 221 diagnosed individuals with SUDs. Semi-structured, self-administered questionnaires were used to collect the information regarding demographic and substance use related characteristics. A standard tool, the Drug Use Screening Inventory-Revised (DUSI-R), was used to assess the psychosocial problems among individuals with SUDs. Data were analyzed using both descriptive and inferential statistics. A multivariate general linear model (MANOVA and MANCOVA) was used to evaluate differences in psychosocial problems between early- and late-onset substance users. Result: The age of onset of substance use was significantly associated with psychosocial problems. The mean psychosocial problem scores were higher in early onset substance users (17 years or younger) than late onset substance users (18 years or higher) in various domains of DUSI-R even after controlling for confounding factors. The two groups (early vs late) differed significantly in relation to age, gender, occupational status, current types of substance use, frequency of use, mode of substance use and relapse history. Conclusion: The study indicated that early onset substance users are at higher risk for psychosocial problems in various areas of life such as Behavior Pattern, Psychiatric disorder, Family system, Peer relationship, Leisure/Recreation and Work adjustment compared to late onset substance users. It highlights the need for early prevention, screening, and timely intervention among those individuals. abstract_id: PUBMED:24525084 The impact of substance use at psychosis onset on First Episode Psychosis course: results from a 1 year follow-up study in Bologna. Objectives: Substance abuse is a well-established risk factor for First-Episode Psychosis (FEP), but its influence on FEP course is less clear. Starting from our baseline observation that substance users were younger than non-users at the psychosis onset, we hypothesized that substance use at baseline could be an independent risk factor for a worse clinical course. Methods: An incidence cohort of patients with FEP collected in an 8-year period (2002-2009) at the Bologna West Community Mental Health Centers (CMHCs) was assessed at baseline and at 12-month follow-up. Drop-out, hospitalizations and service utilization were used as clinical outcomes. Results: Most of the patients were still in contact with CMHC at 12-month follow-up. Substance users had a significantly higher rate of hospitalizations during the follow-up after adjusting for age, gender and other potential confounders (OR 5.84, 95% CI 2.44-13.97, p≤0.001). Conclusions: This study adds to previous evidence showing the independent effect of substance use on FEP course. The identification of a "potentially modifiable" environmental predictor of the course of the illness such as substance use at psychosis onset allows us to envisage the possibility of ameliorating the course of the illness by managing this factor. abstract_id: PUBMED:26513726 Substance Use in Patients With First-Episode Psychosis: Is Gender Relevant? Objective: Only a few studies in patients with first-episode psychosis have included gender in the study hypothesis or considered this a primary study variable.
The aim of this study was to explore the influence of gender in the pattern of substance use in patients with first-episode psychosis. Methods: This is a sub-analysis of a randomized open clinical trial that compared 1-year treatment retention rates of patients with first-episode psychosis randomized to haloperidol, olanzapine, quetiapine, risperidone, or ziprasidone. Our sub-analysis included 85 men and 29 women. Results: Substance use was relatively high among these patients and differed significantly by gender. Men were more likely to use substances overall than women (89.4% for men vs. 55.2% for women), χ² = 16.2, df = 1, p < .001, and were also more likely to use alcohol (χ² = 13, df = 1, p < .001), cannabis (χ² = 9.9; df = 1, p < .002), and cocaine (χ² = 10.3; df = 1, p < .001), compared to women. While there were no gender differences in age at first consumption of alcohol or cocaine, men were significantly younger at first consumption of cannabis (M = 16.08 years, SD = 2.1) than women (M = 18.0 years, SD = 3.8), F(1, 59) = 5, p < .02. When analyzed separately by gender, women showed no significant differences in the influence of number of substances used on age at onset of psychosis, F(3, 29) = 1.2, p = .30. However, there was a significant difference among men, with earlier onset of psychosis noted in men consuming multiple substances; F(4, 85) = 5.8, p < .0001. Regarding prediction of age at onset of psychosis, both male gender and the use of a higher number of substances significantly predicted an earlier age at onset of psychosis. Conclusions: Our study provides some evidence of gender differences in the pattern of substance use in patients with first-episode psychosis, suggesting the possible need for gender-specific approaches in the interventions performed in these patients. This study is registered as #12610000954022 with the Australian New Zealand Clinical Trials Registry (www.anzctr.org.au). abstract_id: PUBMED:26921724 Role transitions and substance use among Hispanic emerging adults: A longitudinal study using coarsened exact matching. Introduction: Emerging adulthood (ages 18 to 25) is characterized by changes in relationships, education, work, and viewpoints on life. The prevalence of substance use also peaks during this period. Among emerging adults, Hispanics have a unique substance use profile, and have been described as a priority population for substance use prevention. Cross-sectional studies among Hispanics have shown that specific role transitions (e.g., starting or ending romantic relationships) were associated with substance use. Negative affect from uncertainty/stress that accompanies role transitions in emerging adulthood may lead to substance use as a maladaptive coping mechanism. Longitudinal studies are needed to gain a more complete understanding of these associations. Methods: Participants completed surveys for Project RED, a longitudinal study of substance use among Hispanics in Southern California. This study used Coarsened Exact Matching to overcome the methodological limitations of previous studies. Participants were matched on pretreatment variables including age, gender, substance use behavior in high school, and depressive symptoms. Past-month cigarette use, binge drinking, marijuana use, and hard drug use were the outcomes of interest. After matching, each outcome was regressed on each individual role transition in year one of emerging adulthood with this process repeated in year two of emerging adulthood.
Results: Role transitions in romance and work were positively associated with multiple categories of substance use. Conclusions: Prevention programs should teach emerging adults ways to cope with the stress from role transitions. Individual role transitions may be used to screen for subgroups of emerging adults at high risk for substance use. abstract_id: PUBMED:23941263 Cumulative and recent psychiatric symptoms as predictors of substance use onset: does timing matter? Aims: We examined two questions about the relationship between conduct disorder (CD), depression and anxiety symptoms and substance use onset: (i) what is the relative influence of recent and more chronic psychiatric symptoms on alcohol and marijuana use initiation and (ii) are there sensitive developmental periods when psychiatric symptoms have a stronger influence on substance use initiation? Design: Secondary analysis of longitudinal data from the Pittsburgh Youth Study, a cohort study of boys followed annually from 7 to 19 years of age. Setting: Recruitment occurred in public schools in Pittsburgh, Pennsylvania, USA. Participants: A total of 503 boys. Measurements: The primary outcomes were age of alcohol and marijuana use onset. Discrete-time hazard models were used to determine whether (i) recent (prior year); and (ii) cumulative (from age 7 until 2 years prior to substance use onset) psychiatric symptoms were associated with substance use onset. Findings: Recent anxiety symptoms [hazard ratio (HR) = 1.10, 95% confidence interval (CI) = 1.03-1.17], recent (HR = 1.59, 95% CI = 1.35-1.87), cumulative (HR = 1.45, 95% CI = 1.03-2.03) CD symptoms, and cumulative depression symptoms (HR = 1.04, 95% CI = 1.01-1.08) were associated with earlier alcohol use onset. Recent (HR = 1.39, 95% CI = 1.22-1.58) and cumulative CD symptoms (HR = 1.38, 95% CI = 1.02-1.85) were associated with marijuana use onset. Recent anxiety symptoms were only associated with alcohol use onset among black participants. Conclusions: Timing matters in the relationship between psychiatric symptoms and substance use onset in childhood and adolescence, and the psychiatric predictors of onset are substance-specific. There is no single sensitive developmental period for the influence of psychiatric symptoms on alcohol and marijuana use initiation. abstract_id: PUBMED:17127554 Gender specific associations between types of childhood maltreatment and the onset, escalation and severity of substance use in cocaine dependent adults. We examined associations between types of childhood maltreatment and the onset, escalation, and severity of substance use in cocaine dependent adults. In men (n = 55), emotional abuse was associated with a younger age of first alcohol use and a greater severity of substance abuse. In women (n = 32), sexual abuse, emotional abuse, and overall maltreatment was associated with a younger age of first alcohol use, and emotional abuse, emotional neglect, and overall maltreatment was associated with a greater severity of substance abuse. There was no association between childhood maltreatment and age of nicotine or cocaine use. However, age of first alcohol use predicted age of first cocaine use in both genders. All associations were stronger in women. Findings suggest that early intervention for childhood victims, especially females, may delay or prevent the early onset of alcohol use and reduce the risk for a more severe course of addiction. 
abstract_id: PUBMED:34044802 Relationships between age at first substance use and persistence of cannabis use and cannabis use disorder. Background: From a secondary prevention perspective, it is useful to know who is at greatest risk of progressing from substance initiation to riskier patterns of future use. Therefore, the aim of this study was to determine relationships between age at first use of alcohol, tobacco and cannabis and patterns of cannabis use, frequency of use and whether age of substance use onset is related to having a cannabis use disorder (CUD). Methods: We analysed data from Ireland's 2010/11 and 2014/15 National Drug Prevalence Surveys, which recruited 5134 and 7005 individuals respectively, aged 15 years and over, living in private households. We included only those people who reported lifetime cannabis use. Multinomial, linear and binary logistic regression analyses were used to determine relationships between age of substance use onset and patterns of cannabis use, frequency of use and having a CUD. Results: When compared to former users, the odds of being a current cannabis user were found to be reduced by 11% (OR = 0.89; 95% CI: 0.83, 0.95) and 4% (OR = 0.96; 95% CI: 0.92, 1.00) for each year of delayed alcohol and cannabis use onset, respectively. Among current users, significant inverse linear relationships were noted, with increasing age of first use of tobacco (β = -0.547; P < .001) and cannabis (β = -0.634; P < .001) being associated with a decreased frequency of cannabis use within the last 30 days. The odds of having a CUD were found to be reduced by 14% (OR = 0.86; 95% CI: 0.78, 0.94) and 11% (OR = 0.89; 95% CI: 0.82, 0.98) for each year of delayed tobacco and cannabis use onset respectively in analyses which examined survey participants aged 15-34 years. Conclusions: Among people who report past cannabis use, it is those with a more precocious pattern of early use of substances, including alcohol, and especially tobacco and cannabis, who are more likely to report ongoing, heavy and problematic cannabis use. Secondary prevention initiatives should prioritise people with a pattern of very early onset substance use. abstract_id: PUBMED:25529548 Dose escalation during the first year of long-term opioid therapy for chronic pain. Objective: To identify patient factors and health care utilization patterns associated with dose escalation during the first year of long-term opioid therapy for chronic pain. Design: Retrospective cohort study using electronic health record data. Setting: University health system. Subjects: Opioid naïve adults with musculoskeletal pain who received a new outpatient opioid prescription between July 1, 2011 and June 30, 2012 and stayed on opioids for 1 year. Methods: Mixed-effects regression was used to estimate patients' rate of opioid dose escalation. Demographics, clinical characteristics, and health care utilization for patients with and without dose escalation were compared. Results: Twenty-three (9%) of 246 patients in the final cohort experienced dose escalation (defined as an increase in mean daily opioid dose of ≥30 mg morphine equivalents over 1 year). Compared with patients without dose escalation, patients with escalation had higher rates of substance use diagnoses (17% vs 1%, P = 0.01) and more total outpatient encounters (51 vs 35, P = 0.002) over 1 year. Differences in outpatient encounters were largely due to more non face-to-face encounters (e.g., telephone calls, emails) among patients with dose escalation.
Differences in age, race, concurrent benzodiazepine use, and mental health diagnoses between patients with and without dose escalation were not statistically significant. Primary care clinicians prescribed 89% of opioid prescriptions. Conclusions: Dose escalation during the first year of long-term opioid therapy is associated with higher rates of substance use disorders and more frequent outpatient encounters, especially non face-to-face encounters. Answer: The transition from first substance use to substance use disorders (SUDs) in adolescence does appear to be influenced by the age of onset, with early onset being associated with various risks and outcomes. Research indicates that early substance use (SU) is a known risk factor for the development of SUDs, but the relationship between early SU and the speed of transition to SUDs is complex and varies by substance type and other factors (PUBMED:18768267). For instance, early onset of substance use (before 18 years) has been linked to more traumatic experiences, an increased risk of developing posttraumatic stress disorder (PTSD), and higher levels of depressive symptoms compared to late onset (18 years or older) (PUBMED:35678296). Additionally, early onset substance users are at higher risk for psychosocial problems in various areas of life, such as behavior patterns, psychiatric disorders, family systems, peer relationships, leisure/recreation, and work adjustment compared to late onset substance users (PUBMED:28077106). Substance use at the onset of psychosis has been shown to be an independent risk factor for a worse clinical course, with substance users experiencing a significantly higher rate of hospitalizations during follow-up (PUBMED:24525084). Gender also plays a role, with men more likely to use substances overall than women and to have an earlier onset of cannabis use, which is associated with an earlier onset of psychosis in men (PUBMED:26513726). Furthermore, role transitions during emerging adulthood have been positively associated with multiple categories of substance use, suggesting that the stress from these transitions may lead to substance use as a maladaptive coping mechanism (PUBMED:26921724). Psychiatric symptoms, both recent and cumulative, have been found to influence the onset of substance use, with the relationship being substance-specific and no single sensitive developmental period identified (PUBMED:23941263). In summary, early onset of substance use is associated with a rapid escalation to SUDs, with various factors such as the type of substance, concurrent use of other drugs, traumatic experiences, psychosocial problems, and psychiatric symptoms influencing the speed and risk of transition to SUDs. These findings underscore the importance of early prevention and targeted interventions to prevent disorder progression (PUBMED:18768267).
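As a worked illustration of the gender comparison reported in PUBMED:26513726 above (any substance use in 89.4% of 85 men versus 55.2% of 29 women, chi-square = 16.2, df = 1): the 2x2 cell counts below are back-calculated from those percentages and rounded, so they approximate rather than reproduce the study's raw data.

from scipy.stats import chi2_contingency

# users / non-users by gender, reconstructed from the reported percentages
table = [[76, 9],    # men: ~89.4% of 85
         [16, 13]]   # women: ~55.2% of 29

chi2, p, dof, expected = chi2_contingency(table, correction=False)  # Pearson chi-square, no Yates correction
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.5f}")  # close to the reported chi2 = 16.2, p < .001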
Instruction: Can the normobaric oxygen paradox (NOP) increase reticulocyte count after traumatic hip surgery? Abstracts: abstract_id: PUBMED:23333785 Can the normobaric oxygen paradox (NOP) increase reticulocyte count after traumatic hip surgery? Study Objective: To determine if the normobaric oxygen paradox (NOP) was effective in increasing reticulocyte count and reducing postoperative requirements for allogeneic red blood cell transfusion after traumatic hip surgery. Design: Prospective, randomized, double-blinded, multi-center study. Setting: Surgical wards of two academic hospitals. Patients: 85 ASA physical status 1 and 2 patients undergoing surgery for traumatic hip fracture. Interventions: Patients were randomly assigned to receive 30 minutes of air [air group (control); n = 40] or 30 minutes of 100% oxygen (O2 group; n = 14) at 15 L/min every day from the first postoperative day (POD 1) until discharge. Measurements: Venous blood samples were taken at admission and after surgery on POD 1, POD 3, and POD 7. Hemoglobin (Hb), hematocrit (Hct), reticulocytes, hemodynamic variables, and transfusion requirements were recorded, as were hospital length of stay (LOS) and mortality. Main Results: Full analysis was obtained for 80 patients. On hospital discharge, the mean increase in reticulocyte count was significantly higher in the O2 group than the air group. Percent variation also increased: 184.9% ± 41.4% vs 104.7% ± 32.6%, respectively; P < 0.001. No difference in Hb or Hct levels was noted at discharge. Allogeneic red blood cell transfusion was 7.5% in the O2 group versus 35% in the air group (P = 0.0052). Hospital LOS was significantly shorter in the O2 group than the air group (7.2 ± 0.7 days vs 7.8 ± 1.6 days, respectively; P < 0.05). Conclusions: Transient O2 administration increases reticulocyte count after traumatic hip surgery. Hospital LOS also was shorter in the O2 group than the control group. Allogeneic red blood cell transfusion was reduced in the O2 group but it was not due to the NOP mechanism. abstract_id: PUBMED:34646161 Physiological and Clinical Impact of Repeated Inhaled Oxygen Variation on Erythropoietin Levels in Patients After Surgery. The "Normobaric Oxygen Paradox" (NOP) is a physiologic mechanism that induces an increase of endogenous erythropoietin (EPO) production by creating a state of relative hypoxia in subjects previously exposed to hyperoxia, followed by a rapid return to normoxia. Oxygen exposure duration and inspired oxygen fraction required to observe a significant increase in EPO or hemoglobin are not clearly defined. Consequently, we here study the effect of one model of relative hypoxia on EPO, reticulocytes and hemoglobin stimulation in patients after surgery. Patients were prospectively randomized into two groups. The O2 group (n = 10) received 100% oxygen for 1 h per day for eight consecutive days, via a non-rebreathing mask. The control group (n = 12) received no oxygen variation. Serum EPO, hemoglobin and reticulocyte count were measured on admission and postoperatively on days seven and nine. Percentage EPO at day nine with respect to the baseline value was significantly elevated within the groups [O2 group: 323.7 (SD ± 139.0); control group: 365.6 (SD ± 162.0)] but not between them. No significant difference was found between the groups in terms of reticulocyte count and hemoglobin. Our NOP model showed no difference in EPO increase between the two groups. However, both groups expressed separately significant EPO elevation.
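The two abstracts above summarize the erythropoietic response as percentages relative to baseline (the reticulocyte "percent variation" of 184.9% vs 104.7% in PUBMED:23333785, and EPO at day nine expressed as a percentage of the baseline value in PUBMED:34646161). The exact formulas are not spelled out in the abstracts, so the two helper functions below are an assumption about the usual definitions, applied to made-up numbers:

def percent_of_baseline(value, baseline):
    # e.g. an EPO level 3.2 times its baseline is reported as ~320% of baseline
    return 100.0 * value / baseline

def percent_variation(value, baseline):
    # relative change from baseline, in percent
    return 100.0 * (value - baseline) / baseline

print(percent_of_baseline(32.4, 10.0))   # 324.0 (% of baseline), hypothetical EPO values
print(percent_variation(2.6, 0.9))       # ~188.9 (% increase), hypothetical reticulocyte counts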
abstract_id: PUBMED:2009946 Physiological response to phlebotomies for autologous transfusion at elective hip-joint surgery. In order to study the physiological response to phlebotomies for autotransfusion, an autotransfusion program was designed for 10 patients undergoing hip-joint replacement surgery for arthrosis. 4 phlebotomies of 450 ml each were performed within 12 days. Blood samples were taken immediately before phlebotomy for blood hemoglobin (Hb), serum erythropoietin (Epo), reticulocyte count (ret) and erythrocyte 2,3-diphosphoglycerate (DPG). All 4 phlebotomies could be performed in 9/10 patients, and only 1 patient had significant symptoms (fatigue). The operation was performed 2 weeks after the last phlebotomy. None of the patients had recovered the initial Hb level at operation (24.8 ± 9 g/l lower than initially), and they were all even more anemic after the operation (36.8 ± 16.9 g/l lower than initially). Serum Epo increased from 13.6 ± 7.2 IU to 30.6 ± 12.2 (SD) IU per liter, and reticulocyte counts increased to a maximum of 3.68 ± 1.69%. DPG increased in all patients except the one who had significant fatigue. It is concluded that the patients tolerated the phlebotomy program well but that a significant anemia developed. The compensatory increase in erythropoietin and reticulocyte count, adequate for this degree of anemia, was small compared to the increase seen at more severe anemia, indicating that there may be a role for pharmacological stimulation of erythropoiesis in blood predeposit programs.
abstract_id: PUBMED:8167177 Perioperative plasma erythropoietin levels in hip arthroplasty. To examine the influence of intra- and postoperative blood loss and operative trauma on erythropoietin (EPO) production we studied patients undergoing endoprosthetic surgery of the hip. Immunoreactive plasma EPO was determined in ten patients (seven male, three female, aged 39-68 years), undergoing surgery for hip arthroplasty (n = 8) or revision hip arthroplasty (n = 2). EPO levels had already been determined during preoperative autologous deposit, thus allowing direct comparison between EPO response to blood loss alone and the response to blood loss and operative trauma. Perioperative blood loss amounted to 1720 (480-8100) ml (median, range). The hemoglobin concentration decreased from 12.4 (10.6-14.0) g/dl (median, range) before the operation to 10.0 (9.3-12.3) g/dl 2 h after the operation. Thereafter, the hemoglobin concentration increased slowly due to transfusion and erythropoiesis and was not significantly different (p &lt; 0.05) from the preoperative value on the seventh postoperative day. The EPO concentration was preoperatively 26 (11-28) mU/ml and increased 2 h after the end of the operation, reaching a peak of 64 (45-104) mU/ml at 24 h. This peak was followed by a plateau at lower, but still elevated levels. The EPO concentration remained significantly elevated above the preoperative value on the seventh postoperative day. Plasma EPO concentrations showed an adequate response to postoperative anemia compared with the time course after autologous donation. In the early postoperative phase, they do not seem to be appreciably influenced by the neuroendocrine response to trauma, by mediators of inflammation, or by the postoperative catabolic state. The slightly elevated EPO concentration in the late postoperative phase indicates that factors other than anemia may contribute to EPO production at this time. abstract_id: PUBMED:8098389 Effectiveness of perioperative recombinant human erythropoietin in elective hip replacement. Canadian Orthopedic Perioperative Erythropoietin Study Group. Concern about the risk of transmission of viral infection has led to attempts to reduce transfusion requirements in patients undergoing surgery. To determine whether recombinant human erythropoietin decreases blood transfusion requirements in patients undergoing elective hip arthroplasty, a multicentre double-blind, randomised, placebo-controlled trial was conducted. 208 patients undergoing elective primary or revision hip arthroplasty were randomised to 3 groups. All received daily subcutaneous injections of either erythropoietin or placebo starting 10 days before surgery. Group 1 (78 patients) received 14 days of placebo, group 2 (77 patients) received 14 days of erythropoietin (300 units/kg to a maximum of 30,000 units), and group 3 (53 patients) received placebo for days 10 to 6 before surgery and erythropoietin for the next 9 days. A primary outcome event (any transfusion or a haemoglobin concentration &lt; 80 g/L) occurred in 46% of patients in group 1, 23% in group 2, and 32% in group 3 (p = 0.003). The mean number of transfusions was 1.14 in group 1, 0.52 in group 2, 0.70 in group 3. The mean reticulocyte count the day before surgery was 72 x 10(9)/L in group 1, 327 in group 2, and 170 in group 3. Deep venous thrombi were detected in 5 patients in group 1, 8 patients in group 2, and 8 patients in group 3. 
Patients who had a haemoglobin concentration before randomisation of < 135 g/L benefited most from erythropoietin. Thus erythropoietin given for 14 days perioperatively decreases the need for transfusion in patients undergoing elective hip arthroplasty. abstract_id: PUBMED:15877644 Perioperative stimulation of erythropoiesis with intravenous iron and erythropoietin reduces transfusion requirements in patients with hip fracture. A prospective observational study. Background And Objectives: Patients undergoing surgery for hip fracture (HF) often receive perioperative allogeneic blood transfusions (ABT) to avoid anaemia. However, concerns about the adverse effects of ABT have prompted the review of transfusion practice and the search for a safer treatment of perioperative anaemia. Materials And Methods: We prospectively investigated the effect of a blood-saving protocol of perioperative iron sucrose (3 x 200 mg/48 h, intravenously) plus erythropoietin (1 x 40,000 IU, subcutaneously) if admission haemoglobin level < 130 g/l, on transfusion requirements and postoperative morbidity and mortality in patients with HF (group 2; n = 83). A parallel series of 41 HF patients admitted to another surgical unit within the same hospital served as the control group (group 1). Perioperative blood samples were taken for haematimetric, iron metabolism and inflammatory parameter determination. Results: This blood-saving protocol reduced the number of transfused patients (P < 0.001) and the number of transfused units (P < 0.0001), increased the reticulocyte count, and improved iron metabolism. In addition, the blood-saving protocol also reduced the rate of postoperative infections (P = 0.016), but not the 30-day mortality rate or the mean length of hospital stay. Conclusions: The blood-saving protocol implemented seems to reduce ABT requirements in patients with HF, and is associated with a lower postoperative morbidity. The possible mechanisms involved in these effects are discussed. abstract_id: PUBMED:10903019 Preoperative treatment with recombinant human erythropoietin or predeposit of autologous blood in women undergoing primary hip replacement. Background: Controversy exists about the advantages of predeposit of autologous blood (PDAB), and whether more comfortable blood conservation regimens may yield comparable results. To test the hypothesis that preoperative treatment with recombinant human erythropoietin (rHuEPO) with or without acute concomitant normovolaemic haemodilution (ANHD) is as effective as PDAB in reducing allogeneic blood transfusions, we conducted a prospective randomised study in women undergoing primary hip replacement. Methods: Sixty consecutive female patients scheduled for primary hip replacement and suitable for PDAB were randomly assigned to one of 3 groups. Group I (EPO) and II (ANHD) received 600 U/kg rHuEPO s.c. and 100 mg iron saccharate i.v. on day 14 and, if needed, on day 7 before surgery. Additionally, in group II acute normovolaemic haemodilution (ANHD) was implemented after induction of anaesthesia. In group III (PDAB) conventional PDAB up to 3 U, without volume replacement but with concomitant oral iron therapy, was performed starting 4 weeks before surgery. Results: The blood conservation methods resulted in a comparable net gain of red cells in all 3 groups until the day of surgery.
Because of the withdrawal of autologous blood, haemoglobin values before surgery were lower in the PDAB group than in the EPO and ANHD groups, and during surgery were lower in the PDAB and ANHD groups than in the rHuEPO-only group. Applying moderate ANHD in conjunction with preoperative rHuEPO treatment did not yield an incremental decrease in allogeneic transfusions. There was no difference between the groups in the number of patients who received allogeneic transfusions or in the total number of allogeneic units transfused. Conclusions: Withdrawal of autologous blood is associated with lower pre- and intraoperative haemoglobin levels when compared to preoperative augmentation of red cell mass using rHuEPO. As a measure to reduce allogeneic transfusion requirements, preoperative treatment with rHuEPO may be as effective as standard predeposit of autologous blood in women undergoing primary hip replacement, but requires less preoperative time. abstract_id: PUBMED:8723583 Effectiveness of perioperative epoetin alfa in patients scheduled for elective hip surgery. Several strategies have been investigated as a means of reducing allogeneic blood requirements in patients undergoing surgery, including the perioperative administration of epoetin alfa. In a multicenter, double-blind, placebo-controlled study in 208 patients undergoing elective hip replacement surgery, subcutaneous administration of epoetin alfa (300 IU/kg daily) for 14 or 9 days perioperatively (commencing 10 and 5 days preoperatively, respectively) significantly reduced the incidence of primary outcome events (any allogeneic blood transfusion or a postoperative hemoglobin [Hb] level < 8.0 g/dL) compared with placebo (P = .003). Furthermore, the transfusion requirements of epoetin alfa-treated patients were significantly lower than those of patients treated with placebo (P = .007). Preoperative and postoperative Hb levels and reticulocyte counts were higher in epoetin alfa-treated patients compared with placebo. Epoetin alfa was well tolerated, and the incidence of deep vein thrombosis (DVT) was not different from that observed in placebo recipients. Thus, perioperative administration of epoetin alfa reduces the allogeneic blood requirements of patients undergoing elective hip replacement surgery and is of particular benefit in the subgroup of patients whose baseline Hb levels are less than 13.5 g/dL. abstract_id: PUBMED:11103054 Erythropoietin with iron supplementation to prevent allogeneic blood transfusion in total hip joint arthroplasty. A randomized, controlled trial. Background: The optimum regimen of epoetin alfa for prevention of allogeneic blood transfusion is unknown. Objective: To determine whether a modified regimen of epoetin alfa reduces allogeneic blood transfusion in patients undergoing hip arthroplasty. Design: Randomized, double-blind, multicenter trial comparing two modified dose regimens of epoetin alfa with placebo. Setting: 13 teaching hospitals and 4 community hospitals in Canada. Patients: 201 patients undergoing primary hip arthroplasty who had a hemoglobin concentration of 98 to 137 g/L and did not predonate blood. Intervention: Patients were assigned in a 3:5:5 ratio to receive four weekly doses of epoetin alfa, 40 000 U (high-dose; n = 44) or 20 000 U (low-dose; n = 79), or placebo (n = 78), starting 4 weeks before surgery. All patients received oral iron supplementation, 450 mg/d, for 42 or more days before surgery. Measurements: The primary end point was allogeneic transfusion.
Secondary end points were thromboembolic events and change in reticulocyte count and hemoglobin concentration. Results: Both modified epoetin alfa regimens significantly reduced the need for allogeneic transfusion: Five (11.4%) patients in the high-dose group (P = 0.001) and 18 (22.8%) patients in the low-dose group (P = 0.003) had transfusion, compared with 35 (44.9%) patients in the placebo group. The hematologic response was substantial in patients who received epoetin alfa. In the high-dose group, low-dose group, and placebo group, the preoperative increase in reticulocyte count was 58.8, 37.0 and 1.8 x 10^9 cells/L (P < 0.001), respectively, and the increase in hemoglobin concentration was 19.5, 17.2, and 1.2 g/L (P < 0.001). The incidence of thromboembolic events did not differ among groups. Conclusions: Both modified epoetin alfa regimens were effective compared with placebo in reducing allogeneic transfusion in patients undergoing hip arthroplasty. Patients who received high-dose epoetin alfa had the lowest transfusion rate. Answer: Yes, the normobaric oxygen paradox (NOP) can increase reticulocyte count after traumatic hip surgery. A study found that transient administration of oxygen increased the reticulocyte count in patients after traumatic hip surgery. Specifically, on hospital discharge, the mean increase in reticulocyte count was significantly higher in the group that received 100% oxygen than in the control group that received air. The percent variation in reticulocyte count also increased significantly in the oxygen group compared to the air group (PUBMED:23333785). However, it is important to note that while the NOP mechanism was associated with an increase in reticulocyte count, it did not lead to a difference in hemoglobin or hematocrit levels at discharge, and the reduction in allogeneic red blood cell transfusion in the oxygen group was not attributed to the NOP mechanism (PUBMED:23333785).
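One way to read the transfusion rates in the modified epoetin alfa trial above (PUBMED:11103054: 11.4% high-dose, 22.8% low-dose, 44.9% placebo) is as simple risk ratios against placebo. The short calculation below is only that arithmetic, not an analysis reported by the authors, and no confidence intervals are derived here:

rates = {"high-dose": 0.114, "low-dose": 0.228, "placebo": 0.449}  # transfusion proportions from the abstract

for group in ("high-dose", "low-dose"):
    risk_ratio = rates[group] / rates["placebo"]
    print(f"{group}: risk ratio vs placebo ~ {risk_ratio:.2f}")
# ~0.25 for high-dose and ~0.51 for low-dose, i.e. roughly 75% and 49% relative reductions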
Instruction: Does lengthening after acute correction negatively affect bone healing during distraction osteogenesis? Abstracts: abstract_id: PUBMED:26312468 Does lengthening after acute correction negatively affect bone healing during distraction osteogenesis? Objective: Lengthening after acute correction has a negative effect on bone healing during distraction osteogenesis. In this study, we investigated whether correcting an acute deformity prior to lengthening resulted in a negative effect on bone healing. Methods: Patients with shortened femora were assigned to 3 matched groups. Retrograde femoral nailing after distal metaphyseal-diaphyseal osteotomy was used in all cases. Group 1 (9 femora) included cases of lengthening >4 cm using intramedullary distraction devices after acute correction. Group 2 (16 femora) included pure lengthening cases of ≥4 cm using intramedullary distraction devices. Group 3 (13 femora) included cases of lengthening ≥4 cm with lengthening and the retrograde nailing method (LORN) following acute correction. Results: Healing indices and full weight-bearing times of patients were evaluated. Mean lengthening values were 6.6 (range: 4-14 cm), 5.7 (range: 4-8 cm), and 5.2 cm (range: 4-6.5 cm) in Groups 1-3, respectively, and mean radiographic consolidation index and full weight-bearing times were 31.0±8.2, 30.2±5.5, and 39.0±5.0 day/cm in Groups 1-3, respectively. The consolidation index was significantly better in Groups 1 and 2 compared to that in Group 3, but no difference was detected between Groups 1 and 2. Conclusion: Acute correction had no negative effect on bone healing after distraction osteogenesis using new-generation intramedullary distraction devices. We suggest that the negative impact on healing and the prolonged consolidation index in patients undergoing LORN may be due to impaired periosteal blood supply due to fixator pins. abstract_id: PUBMED:34536594 The influence of advanced age in bone healing after intramedullary limb lengthening. Background: Distraction osteogenesis with an intramedullary motorized nail is a well-established method to treat leg length discrepancy (LLD). The complex process of bone consolidation is affected by age, location, comorbidities, smoking and gender. The purpose of this case series was to investigate influencing factors in bone regeneration after intramedullary callus distraction. Hypothesis: Advanced age influences the outcome of intramedullary limb lengthening. Patients And Methods: This retrospective analysis included 19 patients after intramedullary telescopic nailing (PRECICE) on the lower limb with a mean age of 43 years. Bone healing was assessed by distraction and healing parameters such as distraction-consolidation time (DCT), distraction index (DI), healing index (HI), lengthening index (LI), and consolidation index (CI). Results: Confounding factors such as smoking, previous operations on the treated bone, but also the occurrence of complications, and the number of revision surgeries are independent of the patients' age. Younger patients showed a shorter distraction distance, a lower DCT, a lower DI, a higher HI, and a higher CI than older patients. The complication rate requiring nail exchange was higher among the younger patients. Bony healing was observed in all age groups treated with a telescopic nail regardless of age. Conclusion: Advanced age did not influence bone healing or complication rate in intramedullary lengthening. However, the conclusion is limited by the small patient number.
Level Of Evidence: IV; case-control study. abstract_id: PUBMED:30030039 Intramedullary Metatarsal Fixation for Treatment of Delayed Regenerate Bone in Lengthening of Brachymetatarsia. Delayed regenerate healing after distraction osteogenesis can be a challenging problem for patients and surgeons alike. In the present study, we retrospectively reviewed the data from a cohort of patients with delayed regenerate healing during gradual lengthening treatment of brachymetatarsia. Additionally, we present a novel technique developed by 1 of us (B.M.L.) for the management of delayed regenerate healing. We hypothesized that application of intramedullary metatarsal fixation would safely and effectively promote healing of poor-quality, atrophic regenerate during bone lengthening in brachymetatarsia correction. We formulated a study to retrospectively review the data from a cohort of patients with delayed regenerate healing after gradual lengthening for brachymetatarsia. All patients underwent temporary placement of intramedullary fixation after identification of delayed regenerate healing. Patient-related variables and objective measurements were assessed. We identified 10 patients with 13 metatarsals treated with intramedullary fixation for delayed regenerate healing. All 10 patients were female, with 6 (46.2%) right metatarsals and 7 (53.8%) left metatarsals treated. No complications developed with the use of this technique. All subjects progressed to successful consolidation of the regenerate bone at a mean of 44.5 ± 30.2 days after placement of intramedullary metatarsal fixation. No regenerate fracture or reoperations were noted. In conclusion, intramedullary metatarsal fixation is a safe and effective method for managing delayed regenerate healing encountered during distraction osteogenesis correction of brachymetatarsia. abstract_id: PUBMED:9443783 Histomorphometry of distraction osteogenesis in a caprine tibial lengthening model. Standardized histomorphometry of bone formation and remodeling during distraction osteogenesis (DO) has not been well characterized. Increasing the rhythm or number of incremental lengthenings performed per day is reported to enhance bone formation during limb lengthening. In 17 skeletally immature goats, unilateral tibial lengthenings to 20 or 30% of original length were performed at a rate of 0.75 mm/day and rhythms of 1, 4, or 720 times per day using standard Ilizarov external fixation and an autodistractor system. Two additional animals underwent frame application and osteotomy without lengthening and served as osteotomy healing controls. Histomorphometric indices were measured at predetermined regions from undecalcified tibial specimens. Within the distraction region, bone formation and remodeling activity were location dependent. Intramembranous bone formed linearly oriented columns of interconnecting trabecular plates of woven and lamellar type bone. Total new bone volume and bone formation indices were significantly increased within the distraction and osteotomy callus regions (Tb.BV/TV, 226% [p < 0.05]; BFR/BS, 235-650% [p < 0.01]) respectively, compared with control metaphyseal bone. Bone formation indices were greatest adjacent to the mineralization zones at the center of the distraction gap: mineral apposition rate (96% [p < 0.01]); mineralized bone surfaces (277% [p < 0.001]); osteoblast surfaces (359% [p < 0.001]); and bone formation rate (650% [p < 0.01]).
There was no significant difference (p < 0.14; R = 0.4) in the bone formation rate of the distracted callus compared with the osteotomy control callus. Within the original cortices of the lengthened tibiae, bone remodeling indices were significantly increased compared with osteotomy controls; activation frequency (200% [p < 0.05]); osteoclast surfaces (295% [p < 0.01]); erosion period (75%); porosity (240% [p < 0.001]). Neither the rhythm of distraction nor the percent lengthening appeared to significantly influence any morphometric parameter evaluated. Distraction osteogenesis shares many features of normal fracture gap healing. The enhanced bone formation and remodeling appeared to result more from increased recruitment and activation of bone forming and resorbing cells rather than from an increased level of individual cellular activity. abstract_id: PUBMED:2293942 Bone regenerate formation in cortical bone during distraction lengthening. An experimental study. The aim of this study was to delineate the pattern of bone regeneration from cortical bone segments during distraction lengthening. The lengthening procedure was applied for various periods through the Ilizarov system on the forearms of mature dogs. Bone was sectioned either by corticotomy, preserving the nutrient artery integrity, or by osteotomy. When an osteotomy was performed, the marrow cavity was in some cases plugged with either resorbable bone wax or nonresorbable material. Under distraction, both periosteal and medullary callus on either side of the gap gave rise to new bone trabeculae. The trabeculae on either side were oriented along the direction of distraction and progressively approached one another. This striated callus emerging from both sides was the most characteristic pattern of bone regeneration subsequent to distraction lengthening. Fusion was achieved approximately four weeks after the end of the lengthening period. Most of the new bone was formed by membranous ossification; some cartilaginous nodules developed. Corticalization of the bone trabeculae that had begun at three months was not fully achieved at five months after the lengthening period. There were no differences found in the pattern of bone healing and the amount of newly formed bone after corticotomy or osteotomy with or without resorbable bone wax plugging. abstract_id: PUBMED:3392584 Bone healing during lower limb lengthening by distraction epiphysiolysis. A study of limb lengthening by distraction epiphysiolysis in the rabbit tibia is presented. A special external distraction device was developed that allowed 10 mm lengthening of the leg. Bone formation in the elongated zone was studied by computed tomography and [99mTc] methylene diphosphonate (MDP) scintigraphy. Computed tomography showed bone formation proceeding for several weeks after the end of the distraction period, followed by a decrease in the amount of bone during a remodeling phase leading to the formation of a solid cortical structure. The uptake of [99mTc]MDP increased parallel to, but preceding the actual accretion of bone, followed by a decrease during the bone remodeling phase. Uptake of the tracer will partly reflect bone metabolism, but other factors, like trauma, determine much of the uptake. abstract_id: PUBMED:33363643 Lengthening Nails for Distraction Osteogenesis: A Review of Current Practice and Presentation of Extended Indications. Purpose: Circular frames have been the gold standard of treatment for complex deformity corrections and bone loss.
However, despite the success of frames, patient satisfaction has been low, and complications are frequent. Most recently, lengthening nails have been used to correct leg length discrepancies. In this article, we review the current trends in deformity correction with emphasis on bone lengthening and present our case examples on the use of lengthening nails for management of complex malunions, non-unions, and a novel use in bone transport. Materials And Methods: A nonsystematic literature review on the topic was performed. Four case examples from our institute, Brighton and Sussex University Hospitals, East Sussex, England, UK, were included. Results: New techniques based on intramedullary bone lengthening and deformity correction are replacing the conventional external frames. Introduction of lengthening and then nailing and lengthening over a nail techniques paved the way for popularization of the more recent lengthening nails. Lengthening nails have gone through evolution from the first mechanical nails to motorized nails and more recently the magnetic lengthening nails. Two case examples demonstrate successful use of lengthening nails for management of malunion, and two case examples describe novel use in management of non-unions, including the first report in the literature of plate-assisted bone segment transport for the longest defect successfully treated using this novel technique. Conclusion: With the significant advancement of intramedullary lengthening devices with lower complications rates and higher patient satisfaction, the era of the circular frame may be over. How To Cite This Article: Barakat AH, Sayani J, O'Dowd-Booth C, et al. Lengthening Nails for Distraction Osteogenesis: A Review of Current Practice and Presentation of Extended Indications. Strategies Trauma Limb Reconstr 2020;15(1):54-61. abstract_id: PUBMED:35851475 An Update on the Intramedullary Implant in Limb Lengthening: A Quinquennial Review Part 2: Extending Surgical Indications and Further Innovation. The use of the intramedullary lengthening nail has gained in popularity over the last decade. The reduction in complications associated with the use of external fixators and excellent patient outcomes has resulted in the largest change in management of limb length discrepancy since the concept of distraction osteogenesis was accepted by the Western world in the 1980s. Success following "simple" limb lengthening has led to surgeons extending the indications for the lengthening nail, including different bone segments, lengthening associated with potential joint instability and lengthening combined with acute deformity correction. There has been a drive for further implant modification to reduce complications, and enable full weight bearing during the lengthening process. This would offer the opportunity to consider simultaneous limb lengthening. The aim of this review is to evaluate the literature published over the last five years and highlight important learning points and technical tips for these expanding indications. abstract_id: PUBMED:12563926 Bone union of distracted region after limb lengthening Objective: To investigate the factors which affect the bone union of distracted region after limb lengthening, so as improve the curative effect and diminish the incidence of complication. Methods: To look up the latest literatures dealing with the bone union in limb lengthening, then review the procedure of osteogenesis and the affecting factors. 
Results: The osteogenesis of the distracted region after limb lengthening is a sophisticated procedure. It can be affected by the velocity of lengthening, the period of lengthening, the site and method of osteotomy, and the age and etiology of the patient. Conclusion: The bone union of the distracted region after limb lengthening can be facilitated by the following factors: 1. the velocity of lengthening slower than 1.0 mm/day; 2. moderate delay in distraction; 3. axial shortening of distracted region; 4. micromovement stimulation. abstract_id: PUBMED:38284955 Correction of Congenital Ring-Little Finger Metacarpal Synostosis Through Simultaneous Interpositional Allograft Bone After Split Osteotomy of the Synostosis Site and Distraction Lengthening of the Fifth Metacarpal. Purpose: We attempted a technique for patients with congenital ring-little finger metacarpal synostosis involving simultaneous interpositional allograft bone after split osteotomy of the synostosis site and distraction lengthening of the fifth metacarpal along with correction of the metacarpal joint abduction contracture. The purpose of this study was to describe the surgical technique and its outcomes. Methods: We reviewed the medical records of children with congenital ring-little finger metacarpal synostosis treated surgically at our institute. Eight hands of six children with an average age of 5.0 (range, 1.7-9.3) years were treated by simultaneous interpositional allograft bone after split osteotomy, distraction lengthening, and tenotomy of abductor digiti minimi. We measured the metacarpal head-to-capitate area ratios from serial radiographs and analyzed them according to age. We also measured the change in the intermetacarpal angle (IMA) and metacarpal length ratio during an average of 8.1 (range, 1.4-16.8) years of follow-up. These changes were compared with changes in seven hands of five children with an average age of 8.1 (range, 1.5-15.6) years treated by the same method, but without a distraction lengthening of the fifth metacarpal and followed up for an average of 12.1 (range, 4.1-19.8) years, as a control group. Results: Abnormal metacarpal head-to-capitate area ratio before surgery was normalized in all patients within the first 2 years after surgery. The IMA change averaged 39.8°, and the metacarpal length ratio changed by 17%. The control group showed an average IMA change of 36.6° and metacarpal length ratio change of 6%. Conclusions: Simultaneous interpositional allograft bone after split osteotomy of the synostosis site and distraction lengthening of the fifth metacarpal with correction of metacarpal joint abduction contracture can restore the radiographic parameters in congenital ring-little finger metacarpal synostosis. The normalized ossification of the fifth metacarpal head indicates that the surgical procedure is probably safe. Type Of Study/level Of Evidence: Therapeutic V. Answer: The study described in abstract PUBMED:26312468 investigated whether correcting an acute deformity prior to lengthening has a negative effect on bone healing during distraction osteogenesis. The study included patients with shortened femora who were assigned to three matched groups, all undergoing retrograde femoral nailing after distal metaphyseal-diaphyseal osteotomy.
Group 1 consisted of cases of lengthening >4 cm using intramedullary distraction devices after acute correction, Group 2 included pure lengthening cases of ≥4 cm using intramedullary distraction devices, and Group 3 involved cases of lengthening ≥4 cm with lengthening and the retrograde nailing method (LORN) following acute correction. The results showed that the consolidation index was significantly better in Groups 1 and 2 compared to Group 3, but no difference was detected between Groups 1 and 2. The conclusion drawn from this study was that acute correction had no negative effect on bone healing after distraction osteogenesis using new-generation intramedullary distraction devices. The negative impact on healing and the prolonged consolidation index in patients undergoing LORN may be due to impaired periosteal blood supply due to fixator pins. Therefore, based on this study, lengthening after acute correction does not negatively affect bone healing during distraction osteogenesis (PUBMED:26312468).
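As a point of reference for the healing metrics quoted in this answer, the consolidation (healing) index is simply time normalised by length gained: the number of days until radiographic consolidation divided by the centimetres of lengthening, reported in day/cm. A rough back-calculation from the group means above (an approximation only, since per-patient data are not reported) illustrates the arithmetic:

\[ \text{consolidation index} = \frac{\text{consolidation time (days)}}{\text{lengthening (cm)}}, \qquad 31.0\ \text{day/cm} \times 6.6\ \text{cm} \approx 205\ \text{days to consolidation in Group 1.} \]

Lower values therefore indicate faster regenerate healing per centimetre gained, which is why the roughly 31 day/cm of Groups 1 and 2 compares favourably with the roughly 39 day/cm of the LORN group.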
Instruction: Endovascular intervention for acute thromboembolic stroke in young patients: an ideal population for aggressive intervention? Abstracts: abstract_id: PUBMED:18821835 Endovascular intervention for acute thromboembolic stroke in young patients: an ideal population for aggressive intervention? Object: Endovascular treatment of acute thromboembolic stroke is a rapidly developing field that appears to hold great promise. Young patients may be particularly suited to benefit from endovascular acute stroke therapy. The authors sought to identify outcomes in young patients with thromboembolic stroke who underwent endovascular intervention. Methods: The authors retrospectively reviewed a prospectively collected endovascular intervention registry of patients with ischemic strokes treated at a single large-volume institution between December 2000 and June 2007 to identify patients 18-35 years of age who were treated for thromboembolic stroke. Data are presented as the mean +/- standard deviation unless otherwise noted. Results: Seven young patients underwent 8 consecutive endovascular interventions for thromboembolic stroke (mean age 26 +/- 6 years; 5 women). The National Institutes of Health Stroke Scale score at presentation was 13 +/- 4.3 (median 13). All patients presented within 6 hours of symptom onset. Revascularization was attempted with mechanical thrombectomy/disruption, intraarterial thrombolysis, and/or angioplasty, with or without stent placement. The modified Rankin Scale (mRS) score at discharge was 2.2 +/- 1.5 (median 1.5), with 5 patients (62.5%) achieving independence at discharge (mRS Score 0-2). There were no deaths. Hospital length of stay was 6.5 +/- 3.7 days (4.4 +/- 1.5 days for patients with an mRS score of 0-2; 10 +/- 3.6 days for patients with an mRS score of 4). All patients became independent and had reached an mRS score of ≤2 at last follow-up evaluation (29 +/- 25 months). Conclusions: The data demonstrate the relative safety of endovascular intervention in young patients with thromboembolic cerebral ischemia and may suggest a potential benefit in outcome. Further investigation is indicated with larger numbers of patients and an appropriate control population. abstract_id: PUBMED:28059705 Emergent Endovascular Management of Long-Segment and Flow-Limiting Carotid Artery Dissections in Acute Ischemic Stroke Intervention with Multiple Tandem Stents. Background And Purpose: Although most cervical dissections are managed medically, emergent endovascular treatment may become necessary in the presence of intracranial large-vessel occlusions, flow-limiting and long-segment dissections with impending occlusion, and/or hypoperfusion-related ischemia at risk of infarction. We investigated the role of emergent endovascular stenting of long-segment carotid dissections in the acute ischemic stroke setting. Materials And Methods: We retrospectively studied long-segment carotid dissections requiring stent reconstruction with multiple tandem stents (≥3 stents) and presenting with acute (<12 hours) ischemic stroke symptoms (NIHSS score, ≥4). We analyzed patient demographics, vascular risk factors, clinical presentations, imaging/angiographic findings, technical procedures/complications, and clinical outcomes. Results: Fifteen patients (mean age, 51.5 years) with acute ischemic stroke (mean NIHSS score, 15) underwent endovascular stent reconstruction for vessel and/or ischemic tissue salvage.
All carotid dissections presented with >70% flow limiting stenosis and involved the distal cervical ICA with a minimum length of 3.5 cm. Carotid stent reconstruction was successful in all patients with no residual stenosis or flow limitation. Nine patients (60%) harbored intracranial occlusions, and 6 patients (40%) required intra-arterial thrombolysis/thrombectomy, achieving 100% TICI 2b-3 reperfusion. Two procedural complications were limited to thromboembolic infarcts from in-stent thrombus and asymptomatic hemorrhagic infarct transformation (7% morbidity, 0% mortality). Angiographic and ultrasound follow-up confirmed normal carotid caliber and stent patency, with 2 cases of <20% in-stent stenosis. Early clinical improvement resulted in a mean discharge NIHSS score of 6, and 9/15 (60%) patients achieved a 90-day mRS of ≤2. Conclusions: Emergent stent reconstruction of long-segment and flow-limiting carotid dissections in acute ischemic stroke intervention is safe and effective, with favorable clinical outcomes, allowing successful thrombectomy, vessel salvage, restoration of cerebral perfusion, and/or prevention of recurrent thromboembolic stroke. abstract_id: PUBMED:36751289 Association of CHA2DS2-VASc score with successful recanalization in acute ischemic stroke patients undergoing endovascular thrombectomy. Introduction: The CHA2DS2-VASc (congestive heart failure, hypertension, age, diabetes mellitus, stroke, vascular disease and sex) score is a simple risk stratification algorithm to estimate stroke/thromboembolic risk in patients with non-valvular atrial fibrillation (AF). Higher pre-stroke CHA2DS2-VASc score is known to be associated with greater stroke severity and poorer outcomes. AF patients generally have higher CHA2DS2-VASc scores than non-AF patients. The Modified Thrombolysis in Cerebral Infarction (mTICI) score is the most widely used grading system to assess the result of recanalizing therapies in acute ischemic stroke (AIS). mTICI 2c and mTICI 3 are conventionally accepted as successful recanalization. Aim: We investigated whether pre-stroke CHA2DS2-VASc score is associated with mTICI recanalization score in AIS patients with and without AF undergoing percutaneous thrombectomy. Material And Methods: One hundred fifty-nine patients with the diagnosis of AIS who were admitted within 6 h from symptom onset were included in the study (mean age: 65.7 ±12.9). All subjects underwent endovascular treatment. CHA2DS2-VASc scores of the participants were calculated. Subjects were grouped according to mTICI scores achieved after endovascular treatment. mTICI 2c and mTICI 3 were accepted as successful recanalization. Results: Successful reperfusion was observed in 130 (81.8%) of all patients who underwent endovascular treatment (mTICI flow ≥ 2c) and first-pass reperfusion was observed in 107 (67.3%) patients. When the patients with successful (mTICI flow ≥ 2c) and unsuccessful (mTICI flow ≤ 2b) reperfusion were divided into groups, no significant difference was observed between the patients in terms of comorbidities such as AF, hypertension, hyperlipidemia, coronary artery disease and cerebrovascular accident history. Patients with unsuccessful reperfusion were older than patients with successful reperfusion (71.4 ±11.2 vs. 64.5 ±13.01, p = 0.006), with a higher CHA2DS2-VASc score (4.1 ±1.5 vs. 3.04 ±1.6, p = 0.002). In addition, the duration of the procedure was longer in the unsuccessful reperfusion group (92.4 ±27.2 min vs. 65.0 ±25.1 min, p < 0.001).
CHA2DS2-VASc score significantly correlated with successful recanalization (correlation coefficient: 0.243, p = 0.002). Multivariate logistic regression analysis revealed that only CHA2DS2-VASc score (OR = 1.43, 95% CI: 1.09-1.87, p = 0.006) and procedure time (OR = 1.03, 95% CI: 1.01-1.05, p < 0.001) were independent predictors of successful reperfusion. The receiver-operating characteristic (ROC) curve was used to determine the cut-off value for the CHA2DS2-VASc score that best predicts successful reperfusion. The optimal threshold was 3.5, with a sensitivity of 58.6% and specificity of 59.2% (area under the curve (AUC): 0.669, p = 0.005). Conclusions: For the first time in the literature, we investigated and demonstrated that pre-stroke CHA2DS2-VASc score was associated with success of recanalization as assessed with mTICI 2c and mTICI 3 in a cohort of AIS patients regardless of AF presence who underwent endovascular treatment. Our findings deserve to be tested with large scale long term studies. abstract_id: PUBMED:35371691 Acute Calcific Cerebral Embolism Large Vessel Occlusion: A Unique Stroke Mechanism With Hard Challenges. We present the case of an ischemic stroke associated with partially occlusive acute calcified cerebral emboli large vessel occlusion (CCE LVO). No revascularization strategy guidelines have been established for this unique acute ischemic stroke population, although many studies have reported impaired and inconsistent responses to both thrombolysis and thrombectomy. The patient in this case report, unfortunately, experienced a failed attempt at complete thrombolysis, resulting in a poor clinical outcome. Endovascular thrombectomy was not performed because of incomplete obstruction and risk of injury. Follow-up imaging revealed an acute ischemic stroke at the large middle cerebral artery and a new intraparenchymal hemorrhage with complete absence of the previously identified calcified embolus. This case and current literature demonstrate that more data are needed to determine the best revascularization approach for patients with CCE LVO stroke. With tissue plasminogen activator marginally effective in these patients, thrombectomy should be considered in highly unstable, clinically symptomatic patients even only with partial vessel occlusion. abstract_id: PUBMED:25236521 Triple therapy for atrial fibrillation and percutaneous coronary intervention: a contemporary review. Chronic oral anticoagulant therapy is recommended (class I) in patients with mechanical heart valves and in patients with atrial fibrillation with a CHA2DS2-VASc (Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, prior Stroke or transient ischemic attack or thromboembolism, Vascular disease, Age 65 to 74 years, Sex category) score ≥1. When these patients undergo percutaneous coronary intervention with stenting, treatment with aspirin and a P2Y12 receptor inhibitor also becomes indicated. Before 2014, guidelines recommended the use of triple therapy (vitamin K antagonists, aspirin, and clopidogrel) for these patients. However, major bleeding is increasingly recognized as the Achilles' heel of the triple therapy regimen. Lately, various studies have investigated this topic, including a prospective randomized trial, and the evidence for adding aspirin to the regimen of vitamin K antagonists and clopidogrel seems to be weakened.
In this group of patients, the challenge is finding the optimal equilibrium to prevent thromboembolic events, such as stent thrombosis and thromboembolic stroke, without increasing bleeding risk. abstract_id: PUBMED:24057773 Cather-based approaches to stroke prevention in atrial fibrillation. The left atrial appendage (LAA) is a prominent source of cardioembolic stroke in patients with nonvalvular atrial fibrillation (AF). While systemic anticoagulation is the common therapeutic choice, these medications carry many contraindications and possible complications. Epicardial and endovascular techniques for occlusion of LAA have been explored and early clinical data is accumulating. In the coming years, this data will help guide the management of AF patients at risk of bleeding as well as potentially become first-line therapy to reduce the risk of thromboembolic stroke. The purpose of this article is to review current endovascular and epicardial catheter based LAA occlusion devices and the clinical data supporting their use. abstract_id: PUBMED:26861024 Subclavian steal: Endovascular treatment of total occlusions of the subclavian artery using a retrograde transradial subintimal approach. Introduction: In symptomatic subclavian steal syndrome, endovascular treatment is the first line of therapy prior to extra-anatomic surgical bypass procedures. Subintimal recanalization has been well described in the literature for the coronary arteries, and more recently, in the lower extremities. By modifying this approach, we present a unique retrograde technique using a heavy tip microwire to perform controlled subintimal dissection. Methods: We present two cases of symptomatic subclavian steal related to chronic total occlusion of the left subclavian artery and right innominate artery, respectively. Standard crossing techniques were unsuccessful. Commonly at this point, the procedures would be aborted and open surgical intervention would have to be pursued. In our cases, retrograde access was easily achieved via an ipsilateral retrograde radial artery, using controlled subintimal dissection and a heavy-tipped wire. Results: We were able to easily achieve recanalization in both attempted cases of chronic total occlusion of the subclavian and innominate artery, using a retrograde radial subintimal approach. Subsequent stent-supported angioplasty resulted in complete revascularization. No major complications were encountered during the procedures; however, one patient did develop thromboembolic stroke secondary to platelet aggregation to the stent graft, 9 days post-procedure. Conclusions: Endovascular treatment is considered the first-line intervention in medically refractory patients with symptomatic subclavian steal syndrome. In the setting of chronic total occlusions, a retrograde radial subintimal approach using a heavy tip wire for controlled subintimal dissection is a novel technique that may be considered when standard approaches and wires have failed. abstract_id: PUBMED:27453515 Evaluation of 5 Prognostic Scores for Prediction of Stroke, Thromboembolic and Coronary Events, All-Cause Mortality, and Major Adverse Cardiac Events in Patients With Atrial Fibrillation and Coronary Stenting. Management of antithrombotic therapy in patients with atrial fibrillation (AF) and coronary stenting remains challenging, and there is a need for efficient tools to predict their risk of different types of cardiovascular events and death. 
Several scores exist such as the CHA2DS2-VASc score, the Global Registry of Acute Coronary Events (GRACE) score, the Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery (SYNTAX) score, the Anatomical and Clinical Syntax II Score and the Reduction of Atherothrombosis for Continued Health score. These 5 scores were investigated in patients with AF with coronary stenting with the aim of determining which was most predictive for stroke/thromboembolic (TE) events, nonlethal coronary events, all-cause mortality, and major adverse cardiac events (MACE). Among 845 patients with AF with coronary stenting seen from 2000 to 2014, 440 (52%) were admitted for acute coronary syndrome and 405 (48%) for elective percutaneous coronary intervention. The rate of cardiovascular complication was at 14.1% per year, and nonlethal coronary events were the most frequent complications with a yearly rate of 6.5%. CHA2DS2-VASc score was the best predictor of stroke/TE events with a c-statistic of 0.604 (95% CI 0.567 to 0.639) and a best cut-off point of 5. SYNTAX score was better to predict nonlethal coronary events and MACE with c-statistics of 0.634 (95% CI 0.598 to 0.669) and 0.612 (95% CI 0.575 to 0.647), respectively, with a best cut-off point of 9. GRACE score appeared to be the best to predict all-cause mortality with a c-statistic of 0.682 (95% CI 0.646 to 0.717) and a best cut-off point of 153. In conclusions, among validated scores, none is currently robust enough to simultaneously predict stroke/TE events, nonlethal coronary events, death, and MACE in patients with AF with stents. The CHA2DS2-VASc score remained the best score to assess stroke/TE risk, as was the SYNTAX score for nonlethal coronary events and MACE, and finally, the GRACE score for all-cause mortality in this study population. abstract_id: PUBMED:25519166 Recurrent posterior circulation infarction caused by anomalous occipital bony process in a young patient. Background: Structural anomaly of the cervical spine or craniocervical junction has been reported as one of the rare causes of ischemic stroke. We report a case of a young patient with recurrent posterior circulation infarction that may have been associated with an anomalous occipital bony process compressing the vertebral artery. Case Presentation: A 23-year-old man experienced recurrent posterior circulation infarction 5 times over a period of 5 years. He had no conventional vascular risk factors. Young age stroke work-up including thorough cardiac, intra- and extracranial vascular evaluation and laboratory tests for the hypercoagulable state or connective tissue disease yielded unremarkable results. An anomalous bony process from the occipital base compressing the left vertebral artery was observed on brain CT. All the recurrent strokes were explainable by the arterial thromboembolism originating from the compressed left vertebral artery. Therefore, the left vertebral artery compressed by the anomalous occipital bony process may have been the culprit behind the recurrent thromboembolic strokes in our patient. Intractable recurrent strokes even under optimal medical treatment led us to make a decision for the intervention. Instead of surgical removal of the anomalous occipital bony process, the left vertebral artery was occluded permanently by endovascular coiling after confirming that this would cause no neurological deficits or flow disturbance in the posterior circulation. 
There was no recurrence of stroke for 2 years after permanent occlusion of the left vertebral artery. Conclusion: Arterial thromboembolism originating from the left vertebral artery compressed by the anomalous occipital bony process is a rare but not to be overlooked cause of posterior circulation infarction. When intractable to medical treatment, endovascular occlusion of the vertebral artery without flow disturbance to the posterior circulation may be a useful treatment option when surgical removal is not feasible. abstract_id: PUBMED:17287682 Acute myocardial infarction interventional procedures: primary percutaneous coronary intervention versus facilitated percutaneous coronary intervention, rescue angioplasty, rescue excimer laser. ST-elevation myocardial infarction is due to the occlusion of a coronary artery, mainly due to a rupture of an atherosclerotic plaque with superimposed thrombosis. The main therapeutic goal is to restore the blood flow within the culprit artery as quickly as possible. In this review we discussed the several approaches which have been employed to reach this target. Primary percutaneous coronary intervention (PCI) is considered the best treatment option, as it is associated to lower in-hospital mortality, reduced risk of reinfarction and stroke, lower rate of intracranial bleeding and ventricular rupture from myocardial hemorrhage compared with fibrinolytic therapy. Also, it is superior to facilitated PCI, i.e. immediate planned PCI after i.v. thrombolytic therapy administration, because of lower mortality, reinfarction rate, strokes and bleedings. Rescue PCI after failed thrombolysis was associated with a reduction of early severe heart failure and improved survival at 1 year, in patients with moderate to large infarctions, compared to conservative medical therapy, in a pooled analysis of 9 randomized trials, carried out in the balloon era. Also in the stent era, a meta-analysis of 5 randomized trials found a significant 36% reduction in the risk of 30-day mortality, a trend to lower risk of heart failure, although a marginally increased risk of thromboembolic stroke, in the rescue PCI arm. However, rescue PCI is not associated with a better long-term clinical outcome. Laser thrombectomy before PCI could be a useful additional strategy which might be compared to standard stenting in future randomized studies. Answer: The retrospective review of a prospectively collected endovascular intervention registry by authors in PUBMED:18821835 suggests that young patients with thromboembolic stroke may be particularly suited to benefit from endovascular acute stroke therapy. In their study, seven young patients underwent eight consecutive endovascular interventions for thromboembolic stroke, with a mean age of 26 years. All patients presented within 6 hours of symptom onset and were treated with various revascularization techniques such as mechanical thrombectomy/disruption, intraarterial thrombolysis, and/or angioplasty, with or without stent placement. The outcomes were promising, with no deaths reported, a mean modified Rankin Scale (mRS) score at discharge of 2.2, and 62.5% of patients achieving independence at discharge. Furthermore, all patients became independent and had reached an mRS score of ≤2 at the last follow-up evaluation. These results demonstrate the relative safety and potential benefit of endovascular intervention in young patients with thromboembolic cerebral ischemia, indicating that this population may be ideal for aggressive intervention. 
However, the authors call for further investigation with larger numbers of patients and an appropriate control population to substantiate these findings.
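Because several of the abstracts above lean on the CHA2DS2-VASc score without spelling out how it is tallied, the short sketch below shows the standard point assignment (congestive heart failure, hypertension, diabetes, vascular disease and female sex score 1 point each; age ≥75 and prior stroke/TIA/thromboembolism score 2; age 65-74 scores 1). The function name, parameter names and the worked example are illustrative only and are not drawn from any of the cited studies.

def cha2ds2_vasc(age, female, chf=False, hypertension=False, diabetes=False,
                 prior_stroke_tia_te=False, vascular_disease=False):
    # Standard CHA2DS2-VASc point assignment; returns an integer in the range 0-9.
    score = 0
    score += 1 if chf else 0                              # C: congestive heart failure
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age bands
    score += 1 if diabetes else 0                         # D: diabetes mellitus
    score += 2 if prior_stroke_tia_te else 0              # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category (female)
    return score

# Example: a 72-year-old woman with hypertension and diabetes scores 1 + 1 + 1 + 1 = 4.
print(cha2ds2_vasc(72, female=True, hypertension=True, diabetes=True))

In the thrombectomy cohort described in PUBMED:36751289, it is this total that was dichotomised at 3.5 when relating the pre-stroke score to recanalization success.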
Instruction: Is insulin resistance an intrinsic defect in Asian polycystic ovary syndrome? Abstracts: abstract_id: PUBMED:23549804 Is insulin resistance an intrinsic defect in Asian polycystic ovary syndrome? Purpose: Approximately 50% to 70% of women with polycystic ovary syndrome (PCOS) have some degree of insulin resistance, and obesity is known to worsen insulin resistance. Many metabolic consequences of PCOS are similar to those of obesity; therefore, defining the cause of insulin resistance in women can be difficult. Our objective was to clarify the factors contributing to insulin resistance in PCOS. Materials And Methods: We consecutively recruited 144 women with PCOS [age: 26±5 yr, body mass index (BMI): 24.4±4.0 kg/m2] and 145 controls (age: 25±5 yr, BMI: 23.0±3.6 kg/m2), and divided them into overweight/obese (ow/ob, BMI≥23 kg/m2) and lean (BMI<23 kg/m2) groups. Anthropometric measures and a 75-g oral glucose tolerance test were performed, and insulin sensitivity index (ISI) was calculated as an index of insulin sensitivity. Factors predictive of ISI were determined using regression analysis. Results: ISI was significantly lower in both lean and ow/ob women with PCOS compared to BMI-matched controls (p<0.05). Increasing BMI by 1 kg/m2 decreased ISI by 0.169 in PCOS patients (p<0.05) and by 0.238 in controls (p<0.05); there was no significant difference between these groups. In lean PCOS patients and lean controls, BMI had no effect on ISI. Multiple regression analysis revealed that PCOS status (β=-0.423, p<0.001) and BMI (β=-0.375, p<0.001) were significantly associated with ISI. Conclusion: Insulin resistance is an intrinsic defect of PCOS, and a high BMI could exacerbate insulin resistance in all women, irrespective of whether they have PCOS. abstract_id: PUBMED:15613682 Insulin resistance in the skeletal muscle of women with PCOS involves intrinsic and acquired defects in insulin signaling. Insulin resistance in polycystic ovary syndrome (PCOS) is due to a postbinding defect in signaling that persists in cultured skin fibroblasts and is associated with constitutive serine phosphorylation of the insulin receptor (IR). Cultured skeletal muscle from obese women with PCOS and age- and body mass index-matched control women (n = 10/group) was studied to determine whether signaling defects observed in this tissue in vivo were intrinsic or acquired. Basal and insulin-stimulated glucose transport and GLUT1 abundance were significantly increased in cultured myotubes from women with PCOS. Neither IR beta-subunit abundance and tyrosine autophosphorylation nor insulin receptor substrate (IRS)-1-associated phosphatidylinositol (PI) 3-kinase activity differed in the two groups. However, IRS-1 protein abundance was significantly increased in PCOS, resulting in significantly decreased PI 3-kinase activity when normalized for IRS-1. Phosphorylation of IRS-1 on Ser312, a key regulatory site, was significantly increased in PCOS, which may have contributed to this signaling defect. Insulin signaling via IRS-2 was also decreased in myotubes from women with PCOS. In summary, decreased insulin-stimulated glucose uptake in PCOS skeletal muscle in vivo is an acquired defect. Nevertheless, there are intrinsic abnormalities in glucose transport and insulin signaling in myotubes from affected women, including increased phosphorylation of IRS-1 Ser312, that may confer increased susceptibility to insulin resistance-inducing factors in the in vivo environment.
These abnormalities differ from those reported in other insulin resistant states, consistent with the hypothesis that PCOS is a genetically unique disorder conferring an increased risk for type 2 diabetes. abstract_id: PUBMED:6749340 Acanthosis nigricans, hirsutism, insulin resistance and insulin receptor defect. A 24-year-old negress with the triad of acanthosis nigricans, hirsutism associated with polycystic ovaries and insulin resistance is reported. Metabolic studies were done 3 years after a bilateral ovarian wedge resection. Partial remission of the hirsutism and return of menstrual cycles occurred after surgery. Extreme resistance to endogenous and exogenous insulin was observed. Three studies of insulin receptors on circulating red blood cells (RBC) showed abnormal inhibition-competition curves, characterized by increased percentage insulin binding at higher unlabelled insulin levels. Scatchard plots suggested an apparent increase in the number of low affinity receptors. Despite the changes in receptor-insulin interaction, the defect does not seem to explain the insulin resistance since binding of insulin to a target tissue (RBC) appeared to be quantitatively normal at physiological insulin levels, suggesting a simultaneous post receptor defect. abstract_id: PUBMED:19533482 Serum visfatin in Asian women with polycystic ovary syndrome. Objective: To determine serum visfatin levels in Asian polycystic ovary syndrome (PCOS) women and its correlations with various parameters. Study Design: Case-control study. Setting: University hospital. Subjects: Eighty women were enrolled in this study. Of these, 40 women were PCOS and 40 age-matched subjects with regular menstrual cycles were controls. Intervention: Seventy-five gram oral glucose tolerance tests were performed in all women. Fasting venous blood samples for serum visfatin, insulin and androgen levels were obtained both from the PCOS and the control women. Main Outcome Measures: Serum concentrations of visfatin, fasting insulin (FI), fasting glucose, 2-h post-load glucose (2hPG), homeostasis model assessment insulin resistance, homeostasis model assessment beta cell function, total testosterone, free testosterone, androstenedione and dehydroepiandrosterone sulfate were measured in both groups. Results: Women with PCOS had significantly higher serum visfatin levels than the healthy controls [100.39 +/- 41.90 vs. 45.09 +/- 28.24 mg/ml, p < 0.01]. PCOS women also had significantly higher concentrations of all androgens (p < 0.01). Insulin resistance seemed to be greater in the PCOS than the control groups, but did not reach a statistically significant level. In the PCOS group, serum visfatin levels were positively correlated with 2hPG, and systolic blood pressure and diastolic blood pressure. Serum visfatin levels were negatively associated with FI (r = -0.80, p = 0.03) and positively associated with systolic and diastolic blood pressure (r = 0.77, p = 0.04, r = 0.79, p = 0.03, respectively) in the sub-group of PCOS women with abnormal glucose tolerance (AGT). Conclusions: Asian PCOS women had significantly higher serum visfatin levels than age-matched healthy controls. Their levels were significantly correlated with 2hPG and blood pressure in PCOS women, and with FI and blood pressure in PCOS women with AGT. abstract_id: PUBMED:15008997 Insulin resistance and endothelial dysfunction in the brothers of Indian subcontinent Asian women with polycystic ovaries.
Background: Ultrasonographic appearances of polycystic ovaries (PCO) are found in 50% of South London Indian subcontinent Asians, a population at high risk of coronary disease and type 2 diabetes (DM). PCO is a familial condition but the genetics remain to be clarified. At present, the only characteristic documented in male family members is premature male pattern balding before the age of 30 years. Our aim was to quantify insulin resistance and endothelial cell function in the brothers of Indian subcontinent Asian women with PCO and/or a family history of type 2 DM. Methods: Indian subcontinent Asian women (n = 40, age 16-40 years) with a brother available for study were recruited from the local population. They were stratified into four groups according to the ultrasound appearances of PCO and/or a family history of type 2 DM. Control subjects had no PCO and no family history of DM. Insulin sensitivity (KITT) was measured using a short insulin tolerance test and endothelial function using brachial artery ultrasound to measure flow-mediated dilatation (FMD). Findings: Groups were well matched for age, body mass index (BMI) and waist-hip circumference ratios. Asian women with PCO demonstrated insulin resistance independent of BMI or family history of diabetes. Women with PCO and a family history of DM have reduced FMD, though PCO alone was not a marker. The brothers of women with PCO also have insulin resistance, comparable to that associated with a family history of type 2 DM. This was associated with elevations of blood pressure, abnormalities in serum lipid concentrations and impaired endothelial cell function. Endothelial cell function was particularly impaired in those subjects with both a sister with PCO and a family history of DM. Interpretation: In an ethnic minority population at higher risk of coronary heart disease, brothers of women with PCO have evidence of insulin resistance and endothelial cell dysfunction in early adult life. Further study is required to establish whether these findings are associated with an increased incidence of cardiovascular events in this population. abstract_id: PUBMED:8090402 Insulin resistance, hypersecretion of LH, and a dual-defect hypothesis for the pathogenesis of polycystic ovary syndrome. Objective: To review the literature dealing with the roles of insulin resistance and elevated LH levels in the development of the polycystic ovary syndrome and to outline a new hypothesis of the pathogenesis of this disorder. Data Sources: We reviewed articles on the topics of insulin resistance, elevated LH levels, and polycystic ovary syndrome that were contained in the CD-PLUS MEDLINE system data base for years 1976-1994. Methods Of Study Selection: Ninety-one original reports published in English-language, peer-reviewed biomedical journals were selected. Data Extraction And Synthesis: The selected studies were reviewed critically and their conclusions were evaluated. The available literature indicates that insulin resistance and increased LH secretion are frequent features of polycystic ovary syndrome and may be important in its pathogenesis. It appears that both the amplitude and the frequency of LH pulses are increased in this disorder. Although the causes of these abnormalities of LH secretion are unknown, they could be either primary (due to increased sensitivity of LH secretion to GnRH) or secondary (due to the effects of sex steroids on LH secretion). The cause of insulin resistance in polycystic ovary syndrome also is unknown. 
It most likely results from a post-binding defect in the insulin action pathway. There is both in vitro and in vivo evidence that elevated LH and hyperinsulinemia act synergistically to enhance ovarian growth, androgen secretion, and ovarian cyst formation. Conclusions: Based on the available literature, we propose a "dual-defect" hypothesis of polycystic ovary syndrome. We suggest that in a significant subset of patients, this disorder may be caused by a conjunction of two independent genetic defects: one that produces elevated LH secretion and another that produces insulin resistance. Thus, polycystic ovary syndrome develops as a result of the synergistic action of increased LH levels and hyperinsulinemia on the ovary. This working hypothesis may serve as a useful guide for further studies of the pathogenesis of polycystic ovary syndrome. abstract_id: PUBMED:25339479 Intrinsic factors rather than vitamin D deficiency are related to insulin resistance in lean women with polycystic ovary syndrome. Objective: To investigate the correlation between insulin resistance (IR) and serum 25-OH-Vit D concentrations and hormonal parameters in lean women with polycystic ovary syndrome (PCOS). Patients And Methods: 50 lean women with PCOS and 40 body mass index (BMI) matched controls were compared in terms of fasting insulin and glucose, homeostatic model assessment insulin resistance (HOMA-IR), 25-OH-Vit D, high sensitivity C-reactive protein (hs-CRP), luteinizing hormone (LH), follicle-stimulating hormone (FSH), total testosterone, dehydroepiandrosterone sulfate (DHEA-S), total cholesterol, high density lipoprotein (HDL), low density lipoprotein (LDL), triglycerides and Ferriman-Gallway (FG) scores. Correlation analyses were performed between HOMA-IR and metabolic and endocrine parameters. Results: 30% of patients with PCOS demonstrated IR. Levels of 25-OH-Vit D, hsCRP, cholesterol, HDL, LDL, triglyceride and fasting glucose did not differ between the study and control groups. Fasting insulin, HOMA-IR, LH, total testosterone, and DHEA-S levels were higher in PCOS group. HOMA-IR was found to correlate with hs-CRP and total testosterone but not with 25-OH-Vit D levels in lean patients with PCOS. Conclusions: An association between 25-OH-Vit D levels and IR is not evident in lean women with PCOS. hs-CRP levels do not indicate to an increased risk of cardiovascular disease in this population of patients. Because a strong association between hyperinsulinemia and hyperandrogenism exists in lean women with PCOS, it is advisable for this population of patients to be screened for metabolic disturbances, especially in whom chronic anovulation and hyperandrogenism are observed together. abstract_id: PUBMED:17454169 Prevalence of the metabolic syndrome in Asian women with polycystic ovary syndrome: using the International Diabetes Federation criteria. Background: Since insulin resistance and compensatory hyperinsulinemia are the major causes of the metabolic syndrome (MS) and are also the main pathophysiology of polycystic ovary syndrome (PCOS), PCOS women are at risk of MS. The aim of the present cross-sectional study was to determine the prevalence of MS in Asian women with PCOS using the International Diabetes Federation (IDF) criteria and to define the risk factors. Methods: One hundred and seventy women with PCOS were enrolled in the study from September 3, 2002 to June 14, 2005. A 75-g oral glucose tolerance test with plasma glucose and serum insulin levels was performed. 
Also, blood samples were examined for fasting triglycerides, high-density lipoprotein cholesterol and adiponectin levels. Results: The mean (+/-standard deviation) age, body mass index (BMI) and waist-to-hip ratio were 28.8+/-5.9 years, 27.1 +/- 7.0 kg/m(2) and 0.85+/-0.06, respectively. The prevalence of MS was 35.3%. Age, BMI, waist circumference and all metabolic parameters were higher in PCOS women with MS than in those without MS. MS prevalence increased with age, BMI and insulin resistance as determined by homeostasis model assessment (HOMA-IR), but not with adiponectin after BMI adjustment. Conclusions: According to the IDF criteria, one-third of the PCOS women had MS. This study also showed that age, BMI and HOMA-IR are important risk factors for MS. abstract_id: PUBMED:23367497 Syndrome of extreme insulin resistance (Rabson-Mendenhall phenotype) with atrial septal defect: clinical presentation and treatment outcomes. Syndrome of extreme insulin resistance (SEIR) is a rare spectrum disorder with a primary defect in insulin receptor signalling, noted primarily in children, and is often difficult to diagnose due to the clinical heterogeneity. SEIR was diagnosed in an adolescent girl with facial dysmorphism, exuberant scalp and body hair, severe acanthosis, lipoatrophy, dental abnormalities, and short stature (Rabson-Mendenhall phenotype). She had elevated fasting (422.95 pmol/L) and post-glucose insulin levels (>2083 pmol/L). Total body fat was decreased (11%; dual-energy X-ray absorptiometry). Basal growth hormone (GH) was increased (7.9 μg/L) with normal insulin-like growth factor 1 (37.6 nmol/L) suggestive of GH resistance. She had fatty liver and polycystic ovaries. Echocardiography revealed ostium secundum type atrial septal defect (ASD). Blood glucose normalized with pioglitazone (30 mg/day). Delayed development, severe insulin resistance, mild hyperglycemia, absence of ketosis, and remarkable response of hyperinsulinemia and hyperglycemia to pioglitazone which persisted even after 1 year of diagnosis are some of the notable features of this patient. This is perhaps the first report of occurrence of congenital heart disease (ASD) in a patient of SEIR (Rabson-Mendenhall phenotype). This report highlights the clinical features of SEIR and the role of insulin sensitizers like pioglitazone in the management of such patients. abstract_id: PUBMED:21950959 Current perspectives of insulin resistance and polycystic ovary syndrome. Aims: To review the relationship between insulin resistance and polycystic ovary syndrome. Methods: A literature review. Results: Insulin resistance likely plays a central pathogenic role in polycystic ovary syndrome and may explain the pleiotropic presentation and involvement of multiple organ systems. Insulin resistance in the skeletal muscle of women with polycystic ovary syndrome involves both intrinsic and acquired defects in insulin signalling. The cellular insulin resistance in polycystic ovary syndrome has been further shown to involve a novel post-binding defect in insulin signal transduction. Treatment of insulin resistance through lifestyle therapy or with a diabetes drug has become mainstream therapy in women with polycystic ovary syndrome. However, effects with current pharmacologic treatment with metformin tend to be modest, with limited benefit as an agent to treat infertility.
Insulin resistance contributes to increased risk for pregnancy complications, diabetes and cardiovascular disease risk profile in polycystic ovary syndrome, which is further exacerbated by obesity. While numerous studies demonstrate increased prevalence of cardiovascular disease risk factors in women with polycystic ovary syndrome, there are limited data showing that women with polycystic ovary syndrome are at increased risk for cardiovascular disease events. Conclusions: Insulin resistance is linked to polycystic ovary syndrome. Further study of lifestyle and pharmacologic interventions that reduce insulin resistance, such as metformin, are needed to demonstrate that they are effective in reducing the risk of diabetes, endometrial abnormalities and cardiovascular disease events in women with polycystic ovary syndrome. Answer: Yes, insulin resistance appears to be an intrinsic defect in Asian women with polycystic ovary syndrome (PCOS). Studies have indicated that insulin resistance is significantly lower in both lean and overweight/obese women with PCOS compared to BMI-matched controls, suggesting that insulin resistance is an intrinsic defect of PCOS. Additionally, a high BMI could exacerbate insulin resistance in all women, irrespective of whether they have PCOS (PUBMED:23549804). Furthermore, insulin resistance in the skeletal muscle of women with PCOS involves intrinsic and acquired defects in insulin signaling, with intrinsic abnormalities in glucose transport and insulin signaling in myotubes from affected women (PUBMED:15613682). These findings are consistent with the hypothesis that PCOS is a genetically unique disorder conferring an increased risk for type 2 diabetes (PUBMED:15613682). Moreover, the presence of insulin resistance in lean women with PCOS, who do not have vitamin D deficiency, indicates that intrinsic factors rather than vitamin D deficiency are related to insulin resistance in this population (PUBMED:25339479). The literature also suggests that insulin resistance likely plays a central pathogenic role in PCOS and may explain the pleiotropic presentation and involvement of multiple organ systems (PUBMED:21950959). In summary, the evidence points to insulin resistance being an intrinsic defect in Asian women with PCOS, which is further exacerbated by factors such as obesity but is not solely dependent on extrinsic factors like vitamin D deficiency.
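For orientation, the HOMA-IR value used as the insulin-resistance measure in several of the abstracts above is derived from a single fasting sample. One common convention (laboratories differ slightly in units and constants) is

\[ \text{HOMA-IR} = \frac{\text{fasting insulin } (\mu\text{U/mL}) \times \text{fasting glucose (mmol/L)}}{22.5}, \]

so, for example, a fasting insulin of 12 µU/mL with a fasting glucose of 5.0 mmol/L gives 12 × 5.0 / 22.5 ≈ 2.7. The example values are illustrative and are not taken from any of the cited studies.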
Instruction: Diagnostic value of colonoscopy indication as predictor of colorectal cancer: is it possible to design a fast track diagnosis? Abstracts: abstract_id: PUBMED:18783685 Diagnostic value of colonoscopy indication as predictor of colorectal cancer: is it possible to design a fast track diagnosis? Background: Diagnostic delay in patients with colorectal cancer (CRC) is a quality indicator and its reduction could improve prognosis of the disease. Objective: To analyze the diagnostic value of different colonoscopy indications in CRC and to select the signs or symptoms that, if prioritized in a rapid diagnostic circuit, would be most efficient. Material And Methods: A retrospective analysis of 2219 outpatients who underwent colonoscopy from 2000 to 2007 was performed. For each indication we calculated the sensitivity (S), positive predictive value (PPV), positive likelihood ratio (LR+), and number of colonoscopies needed to diagnose a case of CRC (NND). Results: A total of 179 patients were diagnosed with CRC. The indications with greatest PPV were liver metastases (35.3%), suspicious radiological image (20.8%), and non-distal rectal bleeding (22%). Iron deficiency anemia (11%), constitutional syndrome (10%), any rectal bleeding (9.4%) and rectal syndrome (9%) had intermediate PPV. Constipation (6.3%), alternating constipation-diarrhea (3.3%), changes in bowel habits (3%), distal rectal bleeding (2.1%), diarrhea (1.8%) and abdominal pain (1.1%) had low PPV. The NND was 4 in liver metastases, 7 in non-distal bleeding and 8 in suspicious radiological image. Distal bleeding (13), diarrhea (14), abdominal pain (14), changes in bowel habits (15) and alternating constipation-diarrhoea (21) had negative NND. The subgroup of patients aged ≥50 years showed lower NND in non-distal rectal bleeding (5), suspicious radiological image (5) and any rectal bleeding (16). Conclusions: Patients with non-distal rectal bleeding should be prioritized over other indications in a strategy of rapid diagnosis of CRC. Age equal to or more than 50 years should also be considered because this factor seems to reduce NND. Distal bleeding, abdominal pain and changes in bowel habits had low PPV and were associated with other diagnoses than CRC. Consequently, prioritization of these factors would be inefficient. abstract_id: PUBMED:34468784 Elective colorectal fast-track resections-Treatment adherence due to coordination by specialized nursing personnel. Fast-track treatment pathways reduce the frequency of postoperative complications in elective colorectal resections by approximately 40% and due to the rapid recovery reduce the postoperative duration of hospitalization by approximately 50%. Specialized nursing personnel (enhanced recovery after surgery, ERAS, nurses) have already been appointed internationally to accompany and monitor the execution of multimodal perioperative treatment. In November 2018 a fast-track assistant was appointed in the Clinic for General and Visceral Surgery of the Municipal Clinic in Solingen for coordination of the fast-track treatment pathway. The results confirmed that a high adherence to perioperative fast-track treatment concepts can also be achieved in the German healthcare system by the assignment of specialized nursing personnel, with the known advantages for patients, nursing personnel, physicians and hospital sponsors. abstract_id: PUBMED:31345180 Prediction of advanced colonic neoplasm in symptomatic patients: a scoring system to prioritize colonoscopy (COLONOFIT study).
Background: Fast-track colonoscopy to detect patients with colorectal cancer based on high-risk symptoms is associated with low sensitivity and specificity. The aim was to derive a predictive score of advanced colonic neoplasia in symptomatic patients in fast-track programs. Methods: All patients referred for fast-track colonoscopy were evaluated. Faecal immunological haemoglobin test (3 samples; positive > 4 μg Hb/g), and a survey to register clinical variables of interest were performed. Colorectal cancer and advanced adenoma were considered as advanced colonic neoplasia. A sample size of 600 and 500 individuals were calculated for each phase 1 and phase 2 of the study, respectively (Phase 1, derivation and Phase 2, validation cohort). A Bayesian logistic regression analysis was used to derive a predictive score. Results: 1495 patients were included. Age (OR, 21), maximum faecal-Hb value (OR, 2.3), and number of positive samples (OR, 28) presented the highest ORs predictive of advanced colonic neoplasia. The additional significant predictive variables adjusted for age and faecal-Hb variables in Phase 1 were previous colonoscopy (last 5 years) and smoking (no, ex/active). With these variables a predictive score of advanced colonic neoplasia was derived. Applied to Phase 2, patients with a Score > 20 had an advanced colonic neoplasia probability of 66% (colorectal cancer, 32%), while those with a Score ≤ 10, a probability of 10% (colorectal cancer, 1%). Prioritizing patients with Score > 10, 49.4% of patients would be referred for fast-track colonoscopy, diagnosing 98.3% of colorectal cancers and 77% of advanced adenomas. Conclusions: A scoring system was derived and validated to prioritize fast-track colonoscopies according to risk, which was efficient, simple, and robust. abstract_id: PUBMED:31531383 Colonoscopy Indication Algorithm Performance Across Diverse Health Care Systems in the PROSPR Consortium. Background: Despite the importance of characterizing colonoscopy indication for quality monitoring and cancer screening program evaluation, there is no standard approach to documenting colonoscopy indication in medical records. Methods: We applied two algorithms in three health care systems to assign colonoscopy indication to persons 50-89 years old who received a colonoscopy during 2010-2013. Both algorithms used standard procedure, diagnostic, and laboratory codes. One algorithm, the KPNC algorithm, used a hierarchical approach to classify exam indication into: diagnostic, surveillance, or screening; whereas the other, the SEARCH algorithm, used a logistic regression-based algorithm to provide the probability that colonoscopy was performed for screening. Gold standard assessment of indication was from medical records abstraction. Results: There were 1,796 colonoscopy exams included in analyses; age and racial/ethnic distributions of participants differed across health care systems. The KPNC algorithm's sensitivities and specificities for screening indication ranged from 0.78-0.82 and 0.78-0.91, respectively; sensitivities and specificities for diagnostic indication ranged from 0.78-0.89 and 0.74-0.82, respectively. The KPNC algorithm had poor sensitivities (ranging from 0.11-0.67) and high specificities for surveillance exams. The Area Under the Curve (AUC) of the SEARCH algorithm for screening indication ranged from 0.76-0.84 across health care systems. For screening indication, the KPNC algorithm obtained higher specificities than the SEARCH algorithm at the same sensitivity.
Conclusion: Despite standardized implementation of these indication algorithms across three health care systems, the capture of colonoscopy indication data was imperfect. Thus, we recommend that standard, systematic documentation of colonoscopy indication should be added to medical records to ensure efficient and accurate data capture. abstract_id: PUBMED:35565258 Urinary Volatile Organic Compound Testing in Fast-Track Patients with Suspected Colorectal Cancer. Colorectal symptoms are common but only infrequently represent serious pathology, including colorectal cancer (CRC). A large number of invasive tests are presently performed for reassurance. We investigated the feasibility of urinary volatile organic compound (VOC) testing as a potential triage tool in patients fast-tracked for assessment for possible CRC. A prospective, multi-center, observational feasibility study was performed across three sites. Patients referred to NHS fast-track pathways for potential CRC provided a urine sample that underwent Gas Chromatography-Mass Spectrometry (GC-MS), Field Asymmetric Ion Mobility Spectrometry (FAIMS), and Selected Ion Flow Tube Mass Spectrometry (SIFT-MS) analysis. Patients underwent colonoscopy and/or CT colonography and were grouped as either CRC, adenomatous polyp(s), or controls to explore the diagnostic accuracy of VOC output data supported by an artificial neural network (ANN) model. 558 patients participated with 23 (4%) CRC diagnosed. 59% of colonoscopies and 86% of CT colonographies showed no abnormalities. Urinary VOC testing was feasible, acceptable to patients, and applicable within the clinical fast track pathway. GC-MS showed the highest clinical utility for CRC and polyp detection vs. controls (sensitivity = 0.878, specificity = 0.882, AUROC = 0.896) but it is labour intensive. Urinary VOC testing and analysis are feasible within NHS fast-track CRC pathways. Clinically meaningful differences between patients with cancer, polyps, or no pathology were identified suggesting VOC analysis may have future utility as a triage tool. abstract_id: PUBMED:37947802 Structured implementation of fast-track pathways to enhance recovery after elective colorectal resection: First results from five German hospitals. Background: Multimodal optimized perioperative management (mPOM, fast-track, enhanced recovery after surgery, ERAS) leads to a significantly accelerated recovery of patients with elective colorectal resections. Nevertheless, fast-track surgery has not yet become established in everyday clinical practice in Germany. We present the results of a structured fast-track implementation in five German hospitals. Methods: Prospective data collection in the context of a 13-month structured fast-track implementation. All patients ≥ 18 years undergoing elective colorectal resection and who gave informed consent were included. After 3 months of preparation (pre-FAST), fast-track treatment was initiated and continued for 10 months (FAST). Outcome criteria were adherence to internationally recommended fast-track elements, postoperative complications, functional recovery, and postoperative hospital stay. Results: Data from 192 pre-FAST and 529 FAST patients were analyzed. Age, sex, patient risk, location, and type of disease were not different between both groups. The FAST patients were more likely to have undergone minimally invasive surgery (82% vs. 69%). Fast-track adherence increased from 52% (35-65%) under traditional treatment to 83% (65-96%) under fast-track treatment (p < 0.01).
The duration until the end of infusion treatment, removal of the bladder catheter, first bowel movement, oral solid food, regaining autonomy, suitability for discharge and postoperative length of stay were significantly lower in the FAST group. Complications, reoperations, and readmission rates did not differ. Conclusion: Fast-track adherence rates > 75% can also be achieved in German hospitals through structured fast-track implementation and the recovery of patients can be significantly accelerated. abstract_id: PUBMED:21689337 Diagnostic yield of colonoscopy for constipation as the sole indication. Aim: There is controversy over whether constipation as the only symptom should be an indication for routine diagnostic colonoscopy. The study was carried out to assess the prevalence of abnormal pathology on colonoscopy and to assess the risk factors for colonic neoplasia in patients with constipation but without 'high risk symptoms'. Method: A cross-sectional, single-centre study was conducted on individuals who underwent colonoscopy for constipation as the sole indication between 2005 and 2008. Standardized endoscopic and pathology reports were reviewed. Univariable and multivariable analyses were performed. Results: A total of 786 patients (595 women, 75.7%; mean age, 57.4±13.5 years) underwent diagnostic colonoscopy for constipation. Forty-three (5.5%) had polyps, of whom 19 (2.4%) had hyperplastic polyps and 19 (2.4%) adenomas. No cancers were found. In patients with adenoma, the detection rate was 2.9% for patients below age 40 years and 1.7% for patients below age 50 years. Older age was associated with a polyp in both univariate and multivariate analysis. Gender, ethnicity and smoking were not associated with polyp or adenoma. Conclusion: Colonoscopy for patients with constipation as the sole indication had a lower yield of neoplastic lesions than that for patients undergoing routine screening colonoscopy. Colonoscopy in constipation may only be warranted in patients who are over 50 years of age. abstract_id: PUBMED:26602596 Colorectal Cancer Initial Diagnosis: Screening Colonoscopy, Diagnostic Colonoscopy, or Emergent Surgery, and Tumor Stage and Size at Initial Presentation. Introduction/background: Rates of colorectal cancer screening are improving but remain suboptimal. Limited information is available regarding how patients are diagnosed with colorectal cancer (for example, asymptomatic screened patients or diagnostic workup because of the presence of symptoms). The purpose of this investigation was to determine how patients were diagnosed with colorectal cancer (screening colonoscopy, diagnostic colonoscopy, or emergent surgery) and tumor stage and size at diagnosis. Patients And Methods: Adults evaluated between 2011 and 2014 with a diagnosis of colorectal cancer were identified. Clinical notes, endoscopy reports, surgical reports, radiology reports, and pathology reports were reviewed. Sex, race, ethnicity, age at the time of initial diagnosis, method of diagnosis, presenting symptom(s), and primary tumor size and stage at diagnosis were recorded. Colorectal cancer screening history was also recorded. Results: The study population was 54% male (265 of 492) with a mean age of 58.9 years (range, 25-93 years). Initial tissue diagnosis was established at the time of screening colonoscopy in 10.7%, diagnostic colonoscopy in 79.2%, and during emergent surgery in 7.1%.
Cancers diagnosed at the time of screening colonoscopy were more likely to be stage 1 than cancers diagnosed at the time of diagnostic colonoscopy or emergent surgery (38.5%, 7.2%, and 0%, respectively). Median tumor size was 3.0 cm for the screening colonoscopy group, 4.6 cm for the diagnostic colonoscopy group, and 5.0 cm for the emergent surgery group. At least 31% of patients diagnosed at the time of screening colonoscopy, 19% of patients diagnosed at the time of diagnostic colonoscopy, and 26% of patients diagnosed at the time of emergent surgery had never undergone a screening colonoscopy. Conclusion: Nearly 90% of colorectal cancer patients were diagnosed after development of symptoms and had more advanced disease than asymptomatic screening patients. Colorectal cancer outcomes will be improved by improving rates of colorectal cancer screening. abstract_id: PUBMED:28687580 Accuracy of Referring Provider and Endoscopist Impressions of Colonoscopy Indication. Background: Referring provider and endoscopist impressions of colonoscopy indication are used for clinical care, reimbursement, and quality reporting decisions; however, the accuracy of these impressions is unknown. This study assessed the sensitivity, specificity, positive and negative predictive value, and overall accuracy of methods to classify colonoscopy indication, including referring provider impression, endoscopist impression, and administrative algorithm compared with gold standard chart review. Methods: We randomly sampled 400 patients undergoing a colonoscopy at a Veterans Affairs health system between January 2010 and December 2010. Referring provider and endoscopist impressions of colonoscopy indication were compared with gold-standard chart review. Indications were classified into 4 mutually exclusive categories: diagnostic, surveillance, high-risk screening, or average-risk screening. Results: Of 400 colonoscopies, 26% were performed for average-risk screening, 7% for high-risk screening, 26% for surveillance, and 41% for diagnostic indications. Accuracy of referring provider and endoscopist impressions of colonoscopy indication were 87% and 84%, respectively, which were significantly higher than that of the administrative algorithm (45%; P < .001 for both). There was substantial agreement between endoscopist and referring provider impressions (κ=0.76). All 3 methods showed high sensitivity (>90%) for determining screening (vs nonscreening) indication, but specificity of the administrative algorithm was lower (40.3%) compared with referring provider (93.7%) and endoscopist (84.0%) impressions. Accuracy of endoscopist, but not referring provider, impression was lower in patients with a family history of colon cancer than in those without (65% vs 84%; P=.001). Conclusions: Referring provider and endoscopist impressions of colonoscopy indication are both accurate and may be useful data to incorporate into algorithms classifying colonoscopy indication. abstract_id: PUBMED:22437619 Determination of colonoscopy indication from administrative claims data. Background: Colonoscopy outcomes, such as polyp detection or complication rates, may differ by procedure indication. Objectives: To develop methods to classify colonoscopy indications from administrative data, facilitating study of colonoscopy quality and outcomes.
Research Design: We linked 14,844 colonoscopy reports from the Clinical Outcomes Research Initiative, a national repository of endoscopic reports, to the corresponding Medicare Carrier and Outpatient File claims. Colonoscopy indication was determined from the procedure reports. We developed algorithms using classification and regression trees and linear discriminant analysis (LDA) to classify colonoscopy indication. Predictor variables included ICD-9CM and CPT/HCPCS codes present on the colonoscopy claim or in the 12 months prior, patient demographics, and site of colonoscopy service. Algorithms were developed on a training set of 7515 procedures, then validated using a test set of 7329 procedures. Results: Sensitivity was lowest for identifying average-risk screening colonoscopies, varying between 55% and 86% for the different algorithms, but specificity for this indication was consistently over 95%. Sensitivity for diagnostic colonoscopy varied between 77% and 89%, with specificity between 55% and 87%. Algorithms with classification and regression trees with 7 variables or LDA with 10 variables had similar overall accuracy, and generally lower accuracy than the algorithm using LDA with 30 variables. Conclusions: Algorithms using Medicare claims data have moderate sensitivity and specificity for colonoscopy indication, and will be useful for studying colonoscopy quality in this population. Further validation may be needed before use in alternative populations. Answer: The diagnostic value of colonoscopy indication as a predictor of colorectal cancer (CRC) has been explored in several studies, and there is evidence to suggest that it is possible to design a fast track diagnosis system for CRC. A retrospective analysis of 2219 outpatients who underwent colonoscopy found that certain indications had a higher positive predictive value (PPV) for CRC, such as liver metastases (35.3%), suspicious radiological image (20.8%), and non-distal rectal bleeding (22%) (PUBMED:18783685). The study concluded that patients with non-distal rectal bleeding should be prioritized in a rapid diagnostic circuit, especially those aged 50 years or older, as this could reduce the number of colonoscopies needed to diagnose a case of CRC. Another study developed a scoring system, the COLONOFIT study, to prioritize fast-track colonoscopies based on risk, which was found to be efficient, simple, and robust. The scoring system included variables such as age, maximum fecal-Hb value, number of positive fecal immunochemical test samples, previous colonoscopy, and smoking status. Patients with a score greater than 10 could be referred for fast-track colonoscopy, diagnosing 98.3% of CRCs and 77% of advanced adenomas (PUBMED:31345180). The implementation of fast-track treatment pathways, coordinated by specialized nursing personnel, has been shown to reduce postoperative complications and hospitalization duration in elective colorectal resections (PUBMED:34468784). Structured implementation of fast-track pathways in German hospitals demonstrated that adherence rates greater than 75% could be achieved, significantly accelerating patient recovery without increasing complications, reoperations, or readmission rates (PUBMED:37947802). Additionally, urinary volatile organic compound (VOC) testing has been investigated as a non-invasive triage tool in fast-track patients with suspected CRC. 
The study found that urinary VOC testing was feasible and could differentiate between patients with CRC, polyps, or no pathology, suggesting potential utility as a triage tool (PUBMED:35565258). In summary, colonoscopy indication can indeed serve as a predictor of CRC and, with the use of scoring systems, specialized fast-track treatment pathways, and potentially non-invasive tests such as urinary VOC testing, it is possible to design an efficient fast-track diagnosis system for CRC.
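To make the triage logic summarised above concrete, the following short Python sketch applies a points-based risk score with a referral threshold, in the spirit of the COLONOFIT-style scoring described in these abstracts. The point weights, cut-offs, and patient fields are illustrative assumptions rather than the published model; only the general idea of summing weighted risk factors and referring patients whose score exceeds a threshold (e.g. score > 10) is taken from the text.

# Illustrative sketch only: the weights and thresholds below are hypothetical,
# not the published COLONOFIT coefficients.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    max_faecal_hb: float        # highest faecal haemoglobin value, ug Hb/g
    positive_fit_samples: int   # number of positive FIT samples (0-3)
    colonoscopy_last_5y: bool
    smoker_or_ex_smoker: bool

def risk_score(p: Patient) -> int:
    """Sum hypothetical integer points for each risk factor."""
    score = 0
    score += 4 if p.age >= 70 else 2 if p.age >= 50 else 0
    score += 3 if p.max_faecal_hb > 4 else 0     # FIT positivity cut-off quoted in the abstract
    score += 4 * p.positive_fit_samples          # more positive samples -> more points
    score += -2 if p.colonoscopy_last_5y else 0  # a recent clear colonoscopy lowers the score
    score += 1 if p.smoker_or_ex_smoker else 0
    return score

def triage(p: Patient, fast_track_threshold: int = 10) -> str:
    """Refer for fast-track colonoscopy when the score exceeds the threshold."""
    return "fast-track colonoscopy" if risk_score(p) > fast_track_threshold else "routine pathway"

if __name__ == "__main__":
    example = Patient(age=74, max_faecal_hb=25.0, positive_fit_samples=2,
                      colonoscopy_last_5y=False, smoker_or_ex_smoker=True)
    print(risk_score(example), triage(example))  # 16 fast-track colonoscopy

In a real programme the weights would come from the fitted regression model and the threshold from the trade-off between colonoscopy capacity and the proportion of cancers one is willing to miss.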
Instruction: Do bonding and bridging social capital have differential effects on self-rated health? Abstracts: abstract_id: PUBMED:24331905 Do bonding and bridging social capital affect self-rated health, depressive mood and cognitive decline in older Japanese? A prospective cohort study. Little is known regarding the longitudinal effects of bonding and bridging social capital on health. This study examined the longitudinal associations of bonding and bridging social capital with self-rated health, depressive mood, and cognitive decline in community-dwelling older Japanese. Data analyzed in this study were from the 2010 (baseline) and 2012 (follow-up) Hatoyama Cohort Study. Bonding social capital was assessed by individual perception of homogeneity of the neighborhood (the level of homogeneity among neighbors) and of networks (the amount of homogeneous personal networks) in relation to age, gender, and socioeconomic status. Bridging social capital was assessed by individual perception of heterogeneity of networks (the amount of heterogeneous personal networks) in relation to age, gender, and socioeconomic status. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated to evaluate the effects of baseline social capital on poor health outcome at follow-up by logistic regression analysis. In total, 681 people completed baseline and follow-up surveys. The mean age of participants was 71.8 ± 5.1 years, and 57.9% were male. After adjusting for sociodemographics, lifestyle factors, comorbidity, functional capacity, baseline score of each outcome, and other bonding/bridging social capital, stronger perceived neighborhood homogeneity was inversely associated with poor self-rated health (OR = 0.55, 95% CI = 0.30-1.00) and depressive mood assessed by the Geriatric Depression Scale (OR = 0.58, 95% CI = 0.34-0.99). When participants who reported a depressive mood at baseline were excluded, stronger perceived heterogeneous network was inversely associated with depressive mood (OR = 0.40, 95% CI = 0.19-0.87). Neither bonding nor bridging social capital was significantly associated with cognitive decline assessed by the Mini-Mental State Examination. In conclusion, bonding and bridging social capital affect health in different ways, but they both have beneficial effects on the health of older Japanese. Our findings suggest that intervention focusing on bonding and bridging social capital may improve various health outcomes in old age. abstract_id: PUBMED:26536915 Can Geographic Bridging Social Capital Improve the Health of People Who Live in Deprived Urban Neighborhoods? The growing number of people living in deprived urban neighborhoods, which often have unhealthy environments, is of growing concern to inequality researchers. Social capital could be a resource to help such communities get ahead. In this study, we examined the differential effects of bonding and bridging social capital on self-rated health using two operational definitions, which we call personal and geographic social capital. Bonding and bridging social capital were operationally distinguished as respondents' perceived similarity to other members of a group with respect to personal characteristics (personal social capital) or as structural similarity with respect to geographical location (geographic social capital). 
The results showed that although both bonding and bridging social capital as defined by person-based criteria were associated with increased odds of self-rated health compared to those who reported zero participation, when defined by place-based criteria, only bridging social capital was associated with increased odds of self-rated health; no clear association was found between health and belonging to groups within the neighborhood, so-called geographic bonding social capital. The present study suggests that geographic bridging social capital can function as linking social capital that enables an upward approach depending on the political and economic contexts of urbanization. abstract_id: PUBMED:31806012 The relationship between social capital and self-rated health: a multilevel analysis based on a poverty alleviation program in the Philippines. Background: Poor health is both a cause and consequence of poverty, and there is a growing body of evidence suggesting that social capital is an important factor for improving health in resource-poor settings. International Care Ministries (ICM) is a non-governmental organization in the Philippines that provides a poverty alleviation program called Transform. A core aim of the program is to foster social connectedness and to create a network of support within each community, primarily through consistent community-led small group discussions. The purpose of this research was to investigate the relationship between social capital and self-rated health and how ICM's Transform program may have facilitated changes in those relationships. Methods: Three types of social capital were explored: bonding-structural, bridging-structural and cognitive. Using cross-sectional data collected before and after Transform, multilevel modelling was used to examine their effects on self-rated health between the two time points. Results: The analyses showed that while social capital had minimal effects on self-rated health before Transform, a series of associations were identified after the program. Evidence of interdependence between the different types of social capital was also observed: bonding social capital only had a beneficial effect on self-rated health in the presence of bridging social capital, but we found that there was a 17 percentage point increase in self-rated health when individuals possessed all possible bridging and bonding relationships. At the same time, our estimates showed that maximising all forms of social capital is not necessarily constructive, as the positive effect of cognitive social capital on self-rated health was weaker at higher levels of bridging social capital. Conclusions: The results from this study has shown that building social capital can influence the way people perceive their own health, which can be facilitated by intervention programs which seek to create bonding and bridging relationships. Transform's intentional design to learn in community could be relevant to program planners as they develop and evaluate community-based programs, making adaptations as necessary to achieve organisation-specific goals while acknowledging the potential for varied effects when applied in different contexts or circumstances. abstract_id: PUBMED:24341568 Group involvement and self-rated health among the Japanese elderly: an examination of bonding and bridging social capital. 
Background: To date, only a small amount of research on bonding/bridging social capital has separately examined their effects on health though they have been thought to have differential effects on health outcomes. By using a large population-based sample of elderly Japanese people, we sought to investigate the association between bonding and bridging social capital and self-rated health for men and women separately. Methods: In August 2010, questionnaires were sent to all residents aged ≥ 65 years in three municipalities in Okayama prefecture (n = 21232), and 13929 questionnaires were returned (response rate: 65.6%). Social capital was measured from survey responses to questions on participation in six different types of groups: a) the elderly club or sports/hobby/culture circle; b) alumni association; c) political campaign club; d) citizen's group or environmental preservation activity; e) community association; and f) religious organization. Participant perception of group homogeneity (gender, age, and previous occupation) was used to divide social capital into bonding or bridging. Odds ratios (ORs) and 95% confidence intervals (CIs) for poor self-rated health were calculated. Results: A total of 11146 subjects (4441 men and 6705 women) were available for the analysis. Among men, bonding and bridging social capital were inversely associated with poor self-rated health (high bonding social capital; OR: 0.55, 95% CI: 0.31-0.99; high bridging social capital; OR: 0.62, 95% CI: 0.48-0.81) after adjusting for age, educational attainment, smoking status, frequency of alcohol consumption, overweight, living arrangements, and type-D personality. The beneficial effect among women was more likely limited to bonding social capital (high bonding social capital; OR: 0.34, 95% CI: 0.12-1.00), and the association between bridging social capital and self-rated health was less clear (high bridging social capital; OR: 0.69, 95% CI: 0.44-1.07). Conclusions: Bonding/bridging social capital could have differential associations with self-rated health among the Japanese elderly depending on the individual's sex. Considering the lack of consensus on how to measure bonding and bridging social capital, however, we need to carefully assess the generalizability of our findings. Further research is warranted to identify health-relevant dimensions of social capital in different cultural or economic settings. abstract_id: PUBMED:26569107 Bonding, Bridging, and Linking Social Capital and Self-Rated Health among Chinese Adults: Use of the Anchoring Vignettes Technique. Three main opposing camps exist over how social capital relates to population health, namely the social support perspective, the inequality thesis, and the political economy approach. The distinction among bonding, bridging, and linking social capital probably helps close the debates between these three camps, which is rarely investigated in existing literatures. Moreover, although self-rated health is a frequently used health indicator in studies on the relationship between social capital and health, the interpersonal incomparability of this measure has been largely neglected. This study has two main objectives. Firstly, we aim to investigate the relationship between bonding, bridging, and linking social capital and self-rated health among Chinese adults. Secondly, we aim to improve the interpersonal comparability in self-rated health measurement. We use data from a nationally representative survey in China. 
Self-rated health was adjusted using the anchoring vignettes technique to improve comparability. Two-level ordinal logistic regression was performed to model the association between social capital and self-rated health at both individual and community levels. The interaction between residence and social capital was included to examine urban/rural disparities in the relationship. We found that most social capital indicators had a significant relationship with adjusted self-rated health of Chinese adults, but the relationships were mixed. Individual-level bonding, linking social capital, and community-level bridging social capital were positively related with health. Significant urban/rural disparities appeared in the association between community-level bonding, linking social capital, and adjusted self-rated health. For example, people living in communities with higher bonding social capital tended to report poorer adjusted self-rated health in urban areas, but the opposite tendency held for rural areas. Furthermore, the comparison between multivariate analyses results before and after the anchoring vignettes adjustment showed that the relationship between community-level social capital and self-rated health might be distorted if comparability problems are not addressed. In conclusion, the framework of bonding, bridging, and linking social capital helps us better understand the mechanism between social capital and self-rated health. Cultural and socioeconomic factors should be considered when designing health intervention policies using social capital. Moreover, we recommend that more studies improve the comparability of self-rated health by using the anchoring vignettes technique. abstract_id: PUBMED:24531015 A multilevel analysis of social capital and self-rated health: evidence from China. We investigate relationship between social capital and self-rated health (SRH) in urban and rural China. Using a nationally representative data collected in 2005, we performed multilevel analyses. The social capital indicators include bonding trust, bridging trust, social participation and Chinese Communist Party membership. Results showed that only trust was beneficial for SRH in China. Bonding trust mainly promoted SRH at individual level and bridging trust mainly at county level. Moreover, the individual-level bridging trust was only positively associated with SRH of urban residents, which mirrored the urban-rural dual structure in China. We also found a cross-level interaction effect of bonding trust in urban area. In a county with high level of bonding trust, high-bonding-trust individuals obtained more health benefit than others; in a county with low level of bonding trust, the situation was the opposite. abstract_id: PUBMED:31202996 Temporal heterogeneity of the association between social capital and health: an age-period-cohort analysis in China. Objectives: The temporal heterogeneity of the association between social capital and health has not been fully discussed yet, so this study aimed to examine whether and how the association between social capital and health varied with age, period, and cohort. Study Design: Data were taken from the Chinese General Social Survey of 2005 and 2015, with 15,488 samples being collected. Methods: An ordinary least square model with interaction terms was used to examine the age, period, and cohort variations in the association between bonding/bridging social capital and self-rated health/depression from the perspective of urban-rural comparison. 
Results: In urban China, the association between bonding social capital and self-rated health varied with age, the association between bonding social capital and depression varied with age and cohort, the association between bridging social capital and self-rated health varied with period, and the association between bridging social capital and depression varied with period and cohort. By contrast, in rural China, only the association between bonding social capital and self-rated health varied with period and the association between bridging social capital and depression varied with cohort. Conclusions: This study extends the traditional perspective of social capital and health study, and the results indicate that we should not only examine the association between social capital and health from the perspective of urban-rural comparison but also consider the impacts of life course and social development on this association. In this sense, specific interventions should be taken to improve social capital and health. abstract_id: PUBMED:29342115 Investigating the Associations between Ethnic Networks, Community Social Capital, and Physical Health among Marriage Migrants in Korea. This study examines factors associated with the physical health of Korea's growing immigrant population. Specifically, it focuses on the associations between ethnic networks, community social capital, and self-rated health (SRH) among female marriage migrants. For empirical testing, secondary analysis of a large nationally representative sample (NSMF 2009) is conducted. Given the clustered data structure (individuals nested in communities), a series of two-level random intercepts and slopes models are fitted to probe the relationships between SRH and interpersonal (bonding and bridging) networks among foreign-born wives in Korea. In addition to direct effects, cross-level interaction effects are investigated using hierarchical linear modeling. While adjusting for confounders, bridging (inter-ethnic) networks are significantly linked with better health. Bonding (co-ethnic) networks, to the contrary, are negatively associated with immigrant health. Net of individual-level covariates, living in a community with more aggregate bridging social capital is positively linked with health. Community-level bonding social capital, however, is not a significant predictor. Lastly, two cross-level interaction terms are found. First, the positive relationship between bridging network and health is stronger in residential contexts with more aggregate bridging social capital. Second, it is weaker in communities with more aggregate bonding social capital. abstract_id: PUBMED:33407300 Personal social capital and self-rated health among middle-aged and older adults: a cross-sectional study exploring the roles of leisure-time physical activity and socioeconomic status. Background: Personal social capital, which refers to the scope and quality of an individual's social networks within a community, has received increasing attention as a potential sociological factor associated with better individual health; yet, the mechanism relating social capital to health is still not fully understood. This study examined the associations between social capital and self-rated health while exploring the roles of leisure-time physical activity (LTPA) and socioeconomic status (SES) among middle-aged and older adults.
Methods: Cross-sectional data were collected from 662 middle-aged and older adults (Mean age: 58.11 ± 10.59 years old) using the Qualtrics survey panel. Personal Social Capital Scale was used to measure bonding and bridging social capital and the International Physical Activity Questionnaire was used to assess LTPA levels. SES was assessed by education and household income levels. Self-rated health was assessed using a single item, by which the participants were categorized into the two groups, having 'good' vs. 'not good' self-rated health. A series of univariate and multivariate logistic regression models were established to examine the independent and adjusted associations of social capital with self-rated health and to test mediating and moderating roles of LTPA and SES, respectively. Results: Bonding and bridging social capital were positively associated with self-rated health (Odds ratios = 1.11 and 1.09; P's < .05, respectively), independent of LTPA that was also significantly associated with greater self-rated health (P-for-linear trends = .007). After adjusting SES, the associations of social capital were significantly attenuated and there was a significant interaction effect by household income (P-for-interaction = .012). Follow-up analyses stratified by household income showed that beneficial associations of social capital with self-rated health were more apparent among the people with low and high levels of household income; yet, LTPA was the stronger predictor of self-rated health among those in the middle class of household income. Conclusions: Findings suggest that both social capital and LTPA are associated with better self-rated health; yet, these associations vary by SES. The health policymakers should address both social capital and LTPA for enhancing perceived health among aging populations but may need to consider varying SES backgrounds. abstract_id: PUBMED:19742061 Social capital and self-rated health in 21 European countries. Study Objective: The aim of this paper is to explore the association between social capital and self-rated health in different European countries. Methods: For the cross sectional, comparative analyses data from 21 European countries were used. 40,856 people aged 15 years and older were personally interviewed in 2003 (European Social Survey). Perceptions of social trust, and membership, participation and voluntary work in civic organisations were used as social capital indicators. Analyses are conducted on an aggregate level (country) and on an individual level. Main Results: Results indicate comparatively low levels of social capital in East and South European countries. Countries with low levels of social capital have a high percentage of residents reporting poor health. Social capital is significantly associated with self-rated health in most of the European countries on the individual level after accounting for gender and age. However, additional adjustment for socio-economic status results in a decrease of the associations between activities in voluntary organisations and health. Further adjustment for social contacts and emotional support results in only minor changes of the associations. Conclusions: Analyses reveal strong correlations between social capital and self-rated health in a number of European countries on the aggregate level. Associations on the individual level are weaker in East and South European countries.
Moreover, association of self-rated health with perceptions of social trust seems to be more consistent than with activities in voluntary organisations. Answer: Yes, bonding and bridging social capital have been found to have differential effects on self-rated health. Bonding social capital, which refers to the relationships within a homogenous group, such as family and close friends, has been associated with better self-rated health in some studies. For instance, in a study of older Japanese individuals, stronger perceived neighborhood homogeneity (a form of bonding social capital) was inversely associated with poor self-rated health and depressive mood (PUBMED:24331905). Similarly, among the Japanese elderly, bonding social capital was inversely associated with poor self-rated health for both men and women, although the effect was more pronounced for men (PUBMED:24341568). On the other hand, bridging social capital, which connects individuals across diverse social groups, has also shown positive associations with self-rated health. For example, in a study examining the health of people living in deprived urban neighborhoods, bridging social capital defined by place-based criteria was associated with increased odds of self-rated health (PUBMED:26536915). Additionally, in the context of a poverty alleviation program in the Philippines, the presence of bridging social capital was found to have a beneficial effect on self-rated health after the program, especially when combined with bonding social capital (PUBMED:31806012). However, the effects of bonding and bridging social capital on health can vary by context and population. For instance, in urban China, individual-level bonding and linking social capital, as well as community-level bridging social capital, were positively related to health, but the relationships were mixed and showed urban/rural disparities (PUBMED:26569107). Moreover, the association between social capital and health can vary with age, period, and cohort, as shown in a study from China (PUBMED:31202996). In summary, both bonding and bridging social capital have been found to have beneficial effects on self-rated health, but their impact can differ depending on various factors such as age, gender, socioeconomic status, urban/rural residence, and cultural context. It is important to consider these nuances when interpreting the relationship between social capital and health outcomes.
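Because nearly every study cited in this answer expresses its findings as odds ratios (ORs) with 95% confidence intervals, a brief worked example of that calculation may help with interpretation. The sketch below computes an OR and its Wald 95% confidence interval from a 2x2 table of exposure (say, high versus low bonding social capital) against outcome (poor versus good self-rated health); the counts are invented for illustration and are not taken from any of the cited studies.

import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """
    2x2 table of counts:
                   poor SRH   good SRH
    exposed            a          b
    unexposed          c          d
    Returns (OR, lower, upper) using the Wald interval on log(OR).
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: 30/200 'exposed' vs 55/200 'unexposed' participants report poor self-rated health.
or_, lo, hi = odds_ratio_ci(30, 170, 55, 145)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # roughly OR = 0.47, 95% CI 0.28-0.76

An OR below 1 with a confidence interval excluding 1, as here, is the pattern behind statements such as "inversely associated with poor self-rated health" in the abstracts; the multivariable and multilevel models in those studies adjust the same quantity for covariates and clustering rather than using a raw 2x2 table.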
Instruction: Is Clostridium difficile infection an increasingly common severe disease in adult intensive care units? Abstracts: abstract_id: PUBMED:25791766 Is Clostridium difficile infection an increasingly common severe disease in adult intensive care units? A 10-year experience. Purpose: Despite the high concentration of patients with known risk factors for Clostridium difficile infection (CDI) in intensive care units (ICUs), data on ICU patients are scarce. The aim of this study was to describe the incidence, clinical characteristics, and evolution of CDI in critically ill patients. Materials And Methods: From 2003 to 2012, adult patients admitted to an ICU (A-ICU) and positive for CDI were included and classified as follows: pre-ICU, if the positive sample was obtained within ±3 days of ICU admission; in-ICU, if obtained after 3 days of ICU admission and up to 3 days after ICU discharge. Results: We recorded 4095 CDI episodes, of which 328 were A-ICU (8%). Episodes of A-ICU decreased from 19.4 to 8.7 per 10000 ICU days of stay (P < .0001). Most A-ICU CDIs (66.3%) were mild to moderate. Pre-ICU episodes accounted for 16.2% and were more often severe-complicated than in-ICU episodes (11% vs 0%; P = .020). Overall mortality was 28.6%, and CDI-attributable mortality was only 3%. Conclusion: The incidence of A-ICU CDI has decreased steadily over the last 10 years. A significant proportion of A-ICU CDI episodes are pre-ICU and are more severe than in-ICU CDI episodes. Most episodes of A-ICU CDI were nonsevere, with low associated mortality. abstract_id: PUBMED:27509051 Clostridium difficile Infections in Medical Intensive Care Units of a Medical Center in Southern Taiwan: Variable Seasonality and Disease Severity. Critical patients are susceptible to Clostridium difficile infections (CDIs), which cause significant morbidity and mortality in the hospital. In Taiwan, the epidemiology of CDI in intensive care units (ICUs) is not well understood. This study aimed to describe the incidence and the characteristics of CDI in the ICUs of a medical center in southern Taiwan. Adult patients with diarrhea but without colostomy/colectomy or laxative use were enrolled. Stool samples were collected with or without 5 ml alcohol and were plated on cycloserine-cefoxitin-fructose agar. C. difficile identification was confirmed by polymerase chain reaction. There were 1,551 patients admitted to ICUs, 1,488 screened, and 145 with diarrhea. A total of 75 patients were excluded due either to laxative use, a lack of stool samples, or refusal. Overall, 70 patients were included, and 14 (20%) were diagnosed with CDI, with an incidence of 8.8 cases per 10,000 patient-days. The incidence of CDI was found to be highest in March 2013 and lowest in the last quarter of 2013. The cases were categorized as the following: 5 severe-complicated, 5 severe, and 4 mild or moderate diseases. Among the 14 cases of CDI, the median patient age was 74 (range: 47-94) years, and the median time from admission to diarrhea onset was 16.5 (4-53) days. Eight cases received antimicrobial treatment (primarily metronidazole), and the time to diarrheal resolution was 11.5 days. Though 6 cases were left untreated, no patients died of CDI. The in-hospital mortality of CDI cases was 50%, similar to that of patients without CDI (46.4%; P = 1.0). We concluded that the overall incidence of CDI in our medical ICUs was low and there were variable seasonal incidences and disease severities of CDI.
abstract_id: PUBMED:32212102 Clostridioides difficile infections in the intensive care unit: a monocentric cohort study. Introduction: Patient-level data from Clostridioides difficile infections (CDI) treated in an intensive care setting is limited, despite the growing medical and financial burden of CDI. Methods: We retrospectively analyzed data from 100 medical intensive care unit patients at the University Hospital Cologne with respect to demography, diagnostics, severity scores, treatment, and outcome. To analyze factors influencing response to treatment and death, a backward-stepwise multiple logistic regression model was applied. Results: Patients had significant comorbidities including 26% being immunocompromised. The mean Charlson Comorbidity Index was 6.3 (10-year survival rate of 2.25%). At the time of diagnosis, the APACHE II was 17.4±6.3 (predicted mortality rate of 25%), and the ATLAS score was 5.2±1.9 (predicted cure rate of 75%). Overall, 47% of CDI cases were severe, 35% were complicated, and 23% were both. At least one concomitant antibiotic was given to 74% of patients. The cure rate after 10 and 90 days was 56% and 51%, respectively. Each unit increment in APACHE II score was associated with poorer treatment response (OR 0.931; 95% CI 0.872-0.995; p = 0.034). Age above 65 years was associated with death (OR 2.533; 95% CI 1.031-6.221; p = 0.043), and overall mortality at 90 days was 56%. Conclusions: CDI affects a high-risk population, in whom predictive scoring tools are not accurate, and outcomes are poor despite intensive treatment. Further research in this field is warranted to improve prediction scoring and patient outcomes. abstract_id: PUBMED:29058580 Sleeping with the enemy: Clostridium difficile infection in the intensive care unit. Over the last years, there was an increase in the number and severity of Clostridium difficile infections (CDI) in all medical settings, including the intensive care unit (ICU). The current prevalence of CDI among ICU patients is estimated at 0.4-4% and has severe impact on morbidity and mortality. An estimated 10-20% of patients are colonized with C. difficile without showing signs of infection and spores can be found throughout ICUs. It is not yet possible to predict whether and when colonization will become infection. Figuratively speaking, our patients are sleeping with the enemy and we do not know when this enemy awakens. Most patients developing CDI in the ICU show a mild to moderate disease course. Nevertheless, difficult-to-treat severe and complicated cases also occur. Treatment failure is particularly frequent in ICU patients due to comorbidities and the necessity of continued antibiotic treatment. This review will give an overview of current diagnostic, therapeutic, and prophylactic challenges and options with a special focus on the ICU patient. First, we focus on diagnosis and prognosis of disease severity. This includes inconsistencies in the definition of disease severity as well as diagnostic problems. Proceeding from there, we discuss that while at first glance the choice of first-line treatment for CDI in the ICU is a simple matter guided by international guidelines, there are a number of specific problems and inconsistencies. We cover treatment in severe CDI, the problem of early recognition of treatment failure, and possible concepts of intensifying treatment. In conclusion, we mention methods for CDI prevention in the ICU.
abstract_id: PUBMED:23182524 Preventing clostridium difficile infection in the intensive care unit. Clostridium difficile is a formidable problem in the twenty-first century. Because of injudicious use of antibiotics, the emergence of the hypervirulent epidemic strain of this organism has been difficult to contain. The NAP1/BI/027 strain causes more-severe disease than other widely prevalent strains and affects patients who were not traditionally thought to be at risk for Clostridium difficile infection. Critically ill patients remain at high risk for this pathogen, and preventive measures, such as meticulous contact precautions, hand hygiene, environmental disinfection, and, most importantly, antibiotic stewardship, are the cornerstones of mitigation in the intensive care unit. abstract_id: PUBMED:30979531 Characteristics, risk factors and outcomes of Clostridium difficile infections in Greek Intensive Care Units. Background: Clostridium difficile is one of the major causes of diarrhoea among critically ill patients and its prevalence increases exponentially in relation to the use of antibiotics and medical devices. We sought to investigate the incidence of C. difficile infection in Greek units, and identify potential risk factors related to C. difficile infection. Methods: A prospective multicenter cohort analysis of critically ill patients (3 ICUs from 1/1/2014 to 31/12/2014). Results: Among 970(100%) patients, 95(9.79%) with diarrhoea, were included. Their demographic, comorbidity and clinical characteristics were recorded on admission to the unit. The known predisposing factors for the infection were recorded and the diagnostic tests to confirm C. difficile were conducted, based on the current guidelines. The incidence of C. difficile infection was 1.3% (n = 13). All-cause mortality in patients with diarrhoea, C. difficile infection and attributable mortality in patients with C. difficile infection was 28%, 38.5% and 30.8% respectively. Sequential Organ Failure Assessment (SOFA) scores on admission were significantly lower and prior C. difficile infection was more common in patients with current C. difficile infection. Regarding other potential risk factors, no difference was found between groups. No factor was independently associated with C. difficile infection. Conclusions: C. difficile infection is low in Greek intensive care units, but remains a serious problem among the critically-ill. Mortality was similar to reports from other countries. No factor was independently associated with C. difficile infection. abstract_id: PUBMED:23337482 Length of stay and mortality due to Clostridium difficile infection acquired in the intensive care unit. Purpose: The purpose of this study was to determine the attributable intensive care unit (ICU) and hospital length of stay and mortality of ICU-acquired Clostridium difficile infection (CDI). Materials And Methods: In this retrospective cohort study of 3 tertiary and 3 community ICUs, we screened all patients admitted between April 2006 and December 2011 for ICU-acquired CDI. Using both complete and matched cohort designs and Cox proportional hazards analysis, we determined the association between CDI and ICU and hospital length of stay and mortality. Adjustment or matching variables were site, age, sex, severity of illness, and year of admission; any infection as an ICU admitting or acquired diagnosis before the diagnosis of CDI and diagnosis of CDI were time-dependent exposures. 
Results: Of 15314 patients admitted to the ICUs during the study period, 236 developed CDI in the ICU. In the complete cohort analysis, the hazard ratios (95% confidence interval) for CDI related to ICU and hospital discharge were 0.82 (0.72, 0.94) and 0.83 (0.73, 0.95), respectively (0.5 additional ICU days and 3.4 hospital days), and related to death in ICU and hospital, they were 1.00 (0.73, 1.38) and 1.19 (0.93, 1.52), respectively. In the matched analysis, the hazard ratios for CDI related to ICU and hospital discharge were 0.91 (0.81, 1.03) and 0.98 (0.85, 1.13), respectively, and related to death in ICU and hospital, they were 1.18 (0.85, 1.63) and 1.08 (0.82, 1.43), respectively. Conclusions: C difficile infection acquired in ICU is associated with an increase in length of ICU and hospital stay but not with any difference in ICU or hospital mortality. abstract_id: PUBMED:24307797 Predictors of Clostridium difficile infection severity in patients hospitalised in medical intensive care. Aim: To describe and analyse factors associated with Clostridium difficile infection (CDI) severity in hospitalised medical intensive care unit patients. Methods: We performed a retrospective cohort study of 40 patients with CDI in a medical intensive care unit (MICU) at a French university hospital. We included patients hospitalised between January 1, 2007 and December 31, 2011. Data on demographic characteristics, past medical history, and CDI description was collected. Exposure to risk factors associated with CDI within 8 wk before CDI was recorded, including previous hospitalisation, nursing home residency, antibiotics, antisecretory drugs, and surgical procedures. Results: All included cases had their first episode of CDI. The mean incidence rate was 12.94 cases/1000 admitted patients, and 14.93, 8.52, 13.24, 19.70, and 8.31 respectively per 1000 admitted patients annually from 2007 to 2011. Median age was 62.9 [interquartile range (IQR) 55.4-72.40] years, and 13 (32.5%) were women. Median length of MICU stay was 14.0 d (IQR 5.0-22.8). In addition to diarrhoea, the clinical symptoms of CDI were fever (> 38 °C) in 23 patients, abdominal pain in 15 patients, and ileus in 1 patient. The duration of diarrhoea was 13.0 (8.0-19.5) d. Prior to CDI, 38 patients (95.0%) were exposed to antibiotics, and 12 (30%) received at least 4 antibiotics. Fluoroquinolones, third-generation cephalosporins, coamoxiclav and tazocillin were prescribed most frequently (65%, 55%, 40% and 37.5%, respectively). The majority of cases were hospital-acquired (n = 36, 90%), with 5 cases (13.9%) being MICU-acquired. Fifteen patients had severe CDI. The crude mortality rate within 30 d after diagnosis was 40% (n = 16), with 9 deaths (9 over 16; 56.3%) related to CDI. Of our 40 patients, 15 (37.5%) had severe CDI. Multivariate logistic regression showed that male gender [odds ratio (OR): 8.45; 95%CI: 1.06-67.16, P = 0.044], rising serum C-reactive protein levels (OR = 1.11; 95%CI: 1.02-1.21, P = 0.021), and previous exposure to fluoroquinolones (OR = 9.29; 95%CI: 1.16-74.284, P = 0.036) were independently associated with severe CDI. Conclusion: We report predictors of severe CDI not dependent on time of assessment. Such factors could help in the development of a quantitative score in ICU patients.
abstract_id: PUBMED:21129912 Herpes simplex virus: a marker of severity in bacterial ventilator-associated pneumonia. Purpose: Ventilator-associated pneumonia (VAP) is the most frequent nosocomial infection in intensive care units and has a high morbidity and mortality rate. It is mainly a bacterial disease, although the potential role of viruses as pathogens or copathogens in VAP is under discussion. Our study aims were to determine the incidence of herpes simplex virus (HSV) in the lower respiratory tract (LRT) secretions in patients with bacterial VAP and to assess its potential clinical relevance. Material And Methods: This is a prospective observational study carried out over a 14-month period. All LRT samples of adult patients with VAP were sent for bacterial culture and virus isolation. We compared patients with bacterial VAP with isolation of HSV and those without. Results: One-hundred seventy-seven patients had confirmed bacterial VAP. Herpes simplex virus was present in 13.4% of them. Patients with HSV had more severe underlying conditions and worse outcome. They consumed more antibiotics for the VAP episode, had more Clostridium difficile infection, spent a longer time on mechanical ventilation, had a longer intensive care unit and hospital stay, and had greater mortality than those without. Conclusions: Herpes simplex virus excretion in LRT secretions is not infrequent in VAP, and it is associated with greater severity and worse prognosis. abstract_id: PUBMED:24859995 Surgical Management of Severe Colitis in the Intensive Care Unit. Severe colitis, an umbrella encompassing several entities, is one of the most common acute gastrointestinal disorders resulting in critical illness. Clostridium difficile infection is responsible for the majority of nosocomial diarrhea with fulminant C difficile colitis (CDC) carrying a high mortality. Optimal outcomes can be achieved by early identification and treatment of fulminant CDC, with appropriate surgical intervention when indicated. Ischemic colitis, on the other hand, is uncommon with a range of etiological factors including abdominal aortic surgery, inotropic drugs, rheumatoid diseases, or often no obvious triggering factor. Most cases resolve with nonsurgical management; however, prompt recognition of full-thickness necrosis and gangrene is crucial for good patient outcomes. Fulminant colitis is a severe disease secondary to progressive ulcerative colitis with systemic deterioration. Surgical intervention is indicated for hemorrhage, perforation, or peritonitis and failure of medical therapy to control the disease. Although, failure of medical management is the most common indication, it can be difficult to define objectively and requires a collaborative multidisciplinary approach. This article proposes some simple management algorithms for these clinical entities, with a focus on critically ill patients. Answer: The available abstracts suggest that Clostridium difficile infection (CDI) remains a significant concern in adult intensive care units (ICUs), but the trends in incidence and severity appear to be variable and context-dependent. One study reported that the incidence of CDI in an adult ICU decreased steadily over a 10-year period, with most episodes being nonsevere and associated with low mortality (PUBMED:25791766). Another study from a medical center in southern Taiwan found a low overall incidence of CDI in their medical ICUs, with variable seasonal incidences and disease severities (PUBMED:27509051). 
Similarly, a study from Greek ICUs reported a low incidence of CDI, with mortality rates similar to other countries (PUBMED:30979531). However, other studies highlight the severity and challenges associated with CDI in the ICU setting. A monocentric cohort study found that CDI affects a high-risk population in the ICU, with poor outcomes despite intensive treatment, and that predictive scoring tools are not accurate in this setting (PUBMED:32212102). Another study emphasized the increase in the number and severity of CDI in all medical settings, including the ICU, with a prevalence estimated at 0.4-4%, significantly impacting morbidity and mortality (PUBMED:29058580). Furthermore, a study focusing on the surgical management of severe colitis, including fulminant C. difficile colitis, in the ICU setting, indicated that fulminant CDC carries a high mortality and that early identification and treatment are crucial for optimal outcomes (PUBMED:24859995). In summary, while some studies report a decrease in the incidence of CDI in ICUs and a predominance of nonsevere cases, other research highlights the severe impact of CDI in the ICU population, with significant morbidity and mortality. The variability in findings may be due to differences in local epidemiology, patient populations, and healthcare practices. Overall, CDI remains a serious concern in the ICU setting, and its severity can be considerable, especially in high-risk patient groups.
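The incidence figures quoted in this answer (for example, cases per 10,000 patient-days or per 10,000 ICU days of stay) are incidence densities: case counts divided by the total observed time at risk. The small Python sketch below shows that calculation; the case count and rate echo the Taiwanese figures quoted above, but the 15,900 patient-day denominator is back-calculated from those two numbers rather than reported, so it should be read as approximate.

def incidence_density(cases: int, patient_days: float, per: float = 10_000) -> float:
    """Cases per `per` patient-days of observation."""
    return cases / patient_days * per

# Approximate reconstruction: 14 CDI cases over roughly 15,900 patient-days
print(round(incidence_density(14, 15_900), 1))  # -> 8.8 cases per 10,000 patient-days

Comparing such rates across units or periods assumes similar case ascertainment (testing practices, diarrhoea definitions), a caveat consistent with the variability in local epidemiology and practice noted above.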
Instruction: Antenatal screening for hepatitis C: Universal or risk factor based? Abstracts: abstract_id: PUBMED:26121909 Antenatal screening for hepatitis C: Universal or risk factor based? Background: There is no clear consensus on whether antenatal screening for hepatitis C (HCV) should be universal, or based on an assessment of risk factors. Aim: To report the HCV status and risk factors for HCV amongst women delivering at a tertiary metropolitan hospital in order to better understand the implications of changing from universal to risk factor based HCV screening. Materials And Methods: An audit of practice was performed at Mater Mothers' Hospitals (Brisbane) using routinely collected data from 2007 to 2013 (n = 57,659). The demographic and clinical characteristics of HCV-positive women (n = 281) were compared with those with a negative result (n = 57,378), and compared for the presence or absence of risk factors for HCV. Results: From a cohort of 57,659 women, 281 (0.5%) women were HCV positive. HCV-positive women were more likely to have received blood products (10.0 vs 3.1%; P < 0.001), have a history of illicit drug use (72.2 vs 9.8%; P < 0.001), and have at least one risk factor for HCV infection (92 vs 17%; P < 0.001). Of the HCV-positive women, only seven of the 281 (2.5%) had no identifiable risk factor, whilst most (83%) HCV-negative women did not have any documented risk factor for HCV infection. Conclusion: Most women testing positive for HCV antibodies have identifiable risk factors; however, a small number will not be detected if a risk factor based screening approach is adopted. The benefits of universal screening must be weighed against the potential cost savings of a risk factor based screening program. abstract_id: PUBMED:38178637 Identifying missed opportunities for hepatitis C virus antenatal testing and diagnosis in England. New case-finding opportunities are needed to achieve hepatitis C virus (HCV) elimination in England by the year 2030. HCV antenatal testing is not offered universally in England but is recommended for women with risk factors for HCV (e.g. injecting drug use, being born in a high-prevalence country). The aim of this analysis was to investigate the missed opportunities for HCV antenatal testing among women who had given birth and were subsequently diagnosed with HCV at some time after childbirth. By linking data on live births (2010-2020) to laboratory reports of HCV diagnoses (1995-2021), we identified all women who were diagnosed with HCV after the date of their first childbirth. This group was considered to potentially have experienced a missed opportunity for HCV antenatal testing; HCV-RNA testing and treatment outcomes were also obtained for these women. Of the 32,295 women who gave birth between 2010 and 2020 with a linked diagnosis of HCV (median age: 34 years, 72.1% UK-born), over half (n = 17,123) were diagnosed after childbirth. In multivariable analyses, the odds of being diagnosed with HCV after childbirth were higher in those of Asian Bangladeshi, Black African or Chinese ethnicity and among those born in Africa. Over four-fifths (3510/4260) of those eligible for treatment were linked to treatment, 30.7% (747/2435) of whom had a liver scarring level of at least moderate and 9.4% (228/2435) had cirrhosis. Given the potential opportunity to identify cases of HCV with targeted case-finding through antenatal services, universal opt-out testing should be considered in these settings.
abstract_id: PUBMED:37439817 Expanded or Risk Factor-Based Annual Screening for Hepatitis C Virus (HCV) Among Persons With HIV: Which Is the Best Approach? Introduction. Universal one-time screening for hepatitis C virus (HCV) is recommended for all adults. For persons with HIV (PWH), guidelines recommend HCV screening at entry into care and annually in men who have unprotected sex with other men (MSM) and persons who inject drugs (PWID). Public health experts recommend expanded annual screening in all PWH given concerns for undiagnosed new HCV diagnoses when risk factors are not assessed. Electronic medical record (EMR) with clinical decision support using a Best Practice Advisory (BPA) tool can aid HCV risk factor assessment. We conducted a prospective study among three HIV clinics to compare the two screening approaches. Methods. Two clinics implemented the EMR-triggered risk factor-based screening; one clinic used the expanded screening approach. We evaluated BPA uptake and compared HCV testing and positivity rates from August 12, 2019 to March 12, 2020. Results. In the risk factor-based screening clinics, of 1,343 PWH, 239 tests were performed with 139 attributed to the BPA (testing rate 10%). At the expanded screening site, among 434 patients, 237 HCV tests were performed (testing rate 55%). The risk factor-based screening sites were less likely to test for HCV (odds ratio [OR] = 0.0884, p &lt; .01) and identify positive cases (OR = 0.55, p = .025). Conclusions. An EMR-based clinical-decision support tool was successfully implemented for HCV risk factor-based screening resulting in a lower HCV annual screening rate compared with an expanded approach. Although in this group of HIV clinics with limited longitudinal follow-up, no previously undiagnosed HCV cases were detected, additional work is needed to guide the design of the best approach. abstract_id: PUBMED:23914572 Universal antenatal screening for hepatitis C. The aims of this study were to pilot universal antenatal HCV screening and to determine the true seroprevalence of HCV infection in an unselected antenatal population. A risk assessment questionnaire for HCV infection was applied to all women booking for antenatal care over a 1-year period. In addition the prevalence of anti-HCV antibody positive serology in this population was determined. Over the course of the year, 9121 women booked for antenatal care at the Rotunda and 8976 women agreed to take part in the study, representing an uptake of 98.4%. 78 (0.9%) women were diagnosed as anti-HCV positive, the majority of whom were Irish (60.3%) or from Eastern Europe (24.4%). 73% of anti-HCV positive women reported one or more known risk factor with tattooing and a history of drug abuse the most commonly reported. 27% (n = 21) of anti-HCV positive women had no identifiable risk factors. Due to selective screening, seroprevalence of HCV is impossible to accurately calculate. However the universal screening applied here and the high uptake of testing has allowed the prevalence of anti-HCV among our antenatal population to be calculated at 0.9%. A significant proportion (27%) of anti-HCV positive women in this study reported no epidemiological risk factors at the time abstract_id: PUBMED:38017404 A rapid review of antenatal hepatitis C virus testing in the United Kingdom. Background: The United Kingdom (UK) has committed to the World Health Organization's viral hepatitis elimination targets. 
New case finding strategies, such as antenatal testing, may be needed to achieve these targets. We conducted a rapid review to understand hepatitis C-specific antibody (anti-HCV) and HCV RNA test positivity in antenatal settings in the United Kingdom to inform guidance. Methods: Articles and conference abstracts published between January 2000 and June 2022 reporting anti-HCV testing in antenatal settings were identified through PubMed and Web of Science searches. Results were synthesised using a narrative approach. Results: The search identified 2,011 publications; 10 studies were included in the final synthesis. Seven studies used anonymous testing methods and three studies used universal opt-out testing. Anti-HCV test positivity ranged from 0.1 to 0.99%, with a median value of 0.38%. Five studies reported HCV RNA positivity, which ranged from 0.1 to 0.57% of the testing population, with a median value of 0.22%. One study reported cost effectiveness of HCV and found it to be cost effective at £9,139 per quality-adjusted life year. Conclusion: The relative contribution of universal opt-out antenatal testing for HCV should be reconsidered, as antenatal testing could play an important role in new case-finding and aid achieving elimination targets. abstract_id: PUBMED:25506493 Seroprevalence of Human Immunodeficiency Virus, Hepatitis B, Hepatitis C, Syphilis, and Co-infections among Antenatal Women in a Tertiary Institution in South East, Nigeria. Background: Sexually transmitted infections and human immunodeficiency virus (HIV)/AIDS are a major public health concern owing to both their prevalence and propensity to affect offspring through vertical transmission. Aim: The aim was to determine the seroprevalence of HIV, hepatitis B virus (HBV), hepatitis C virus (HCV), syphilis, and co-infections among antenatal women in Enugu, South-East Nigeria. Materials And Methods: A retrospective study of antenatal women at the University of Nigeria Teaching Hospital, Enugu, South-East Nigeria from 1st May 2006 to 30th April 2008. A pretested data extraction form was used to obtain data on sociodemographic variables and screening test results from the antenatal records. The analysis was done with SPSS version 17 (Chicago, IL, USA). Results: A total of 1239 antenatal records was used for the study. The seroprevalence of HIV, HBV, HCV, and syphilis among the antenatal women were 12.4% (154/1239), 3.4% (42/1239), 2.6% (32/1239), and 0.08% (1/1239), respectively. The HIV/HBV and HIV/HCV co-infection prevalence rates were 0.24% (3/1239) and 0.16% (2/1239), respectively. There was no HBV and HCV co-infection among both HIV positive and negative antenatal women. There was no statistically significant difference in HBV and HCV infection between the HIV positive and negative antenatal women. The only woman that was seropositive for syphilis was also positive to HIV. Conclusion: The seroprevalence of HIV, HBV, HCV, and syphilis is still a challenge in Enugu. Community health education is necessary to reduce the prevalence of this infection among the most productive and economically viable age bracket. abstract_id: PUBMED:25364599 Seroprevalence of Human Immunodeficiency Virus, Hepatitis B, Hepatitis C, Syphilis and Co-infections among Antenatal Women in a Tertiary Institution in South-East Nigeria. Background: Sexually transmitted infections and human immunodeficiency virus (HIV)/AIDS are a major public health concern owing to both their prevalence and propensity to affect offspring through vertical transmission.
Aim: The aim was to determine the seroprevalence of HIV, hepatitis B virus (HBV), hepatitis C virus (HCV), syphilis, and co-infections among antenatal women in Enugu, South-East Nigeria. Materials And Methods: A retrospective study of antenatal women at the University of Nigeria Teaching Hospital, Enugu, South-East Nigeria from May 1, 2006 to April 30, 2008. A pretested data extraction form was used to obtain data on sociodemographic variables and screening test results from the antenatal records. The analysis was carried out with SPSS version 17 (Chicago, IL, USA). Results: A total of 1239 antenatal records was used for the study. The seroprevalence of HIV, HBV, HCV, and syphilis among the antenatal women were 12.4% (154/1239), 3.4% (42/1239), 2.6% (32/1239), and 0.08% (1/1239), respectively. The HIV/HBV and HIV/HCV co-infection prevalence rates were 0.24% (3/1239) and 0.14% (2/1239), respectively. There was no HBV and HCV co-infection among both HIV positive and negative antenatal women. There was no statistically significant difference in HBV and HCV infection between the HIV positive and negative antenatal women. The only woman that was seropositive for syphilis was also positive to HIV. Conclusion: The seroprevalence of HIV, HBV, HCV, and syphilis is still a challenge in Enugu. Community health education is necessary to reduce the prevalence of this infection among the most productive and economically viable age bracket. abstract_id: PUBMED:25623176 Reliability of risk-based screening for hepatitis C virus infection among pregnant women in Egypt. Objectives: The Centers for Disease Control and Prevention (CDC) only recommends risk-based HCV screening for pregnant women in the United States. This study sought to determine the reliability of risk-based versus universal HCV screening for pregnant women in Egypt, a country with the world's highest HCV prevalence that also relies on risk-based screening, and to identify additional characteristics that could increase the reliability of risk-based screening. Methods: Pregnant women attending the Cairo University antenatal clinic were tested for anti-HCV antibodies and RNA, and demographic characteristics and risk factors for infection were assessed. Results: All 1250 pregnant women approached agreed to participate (100%) with a mean age of 27.4 ± 5.5 years (range: 16-45). HCV antibodies and RNA were positive in 52 (4.2%) and 30 (2.4%) women, respectively. After adjustment, only age (OR: 1.08, 95% CI: 1.002-1.16, p < 0.01), history of prior pregnancies (OR: 1.20, 95% CI: 1.01-1.43, p < 0.04), and working in the healthcare sector (OR: 8.68, 95% CI: 1.72-43.62, p < 0.01) remained significantly associated with chronic HCV infection. Conclusions: Universal antenatal HCV screening was widely accepted (100%) and traditional risk-based screening alone would have missed 3 (10%) chronically infected women, thereby supporting universal screening of pregnant women whenever possible. Otherwise, risk-based screening should be modified to include history of prior pregnancy and healthcare employment. abstract_id: PUBMED:11999256 Are recommendations about routine antenatal care in Australia consistent and evidence-based? Objective: To describe the variability and evidence base of recommendations in Australian protocols and national policies about six aspects of routine antenatal care.
Design: Comparison of recommendations from local protocols, national guidelines and research about number of visits, screening for gestational diabetes (GDM), syphilis, hepatitis C (HCV), and HIV, and advice on smoking cessation. Setting: Australian public hospitals with more than 200 births/year, some smaller hospitals in each State and Territory, and all Divisions of General Practice were contacted in 1999 and 2000. We reviewed 107 protocols, which included 80% of those requested from hospitals and 92% of those requested from Divisions. Main Outcome Measures: Frequency and consistency of recommendations. Results: Recommendations about syphilis testing were notable in demonstrating consistency between local protocols, national policies and research evidence. Most protocols recommended screening for GDM, despite lack of good evidence of its effectiveness in improving outcomes. Specific approaches to screening for GDM varied widely. Coverage and specific recommendations about testing for HIV and HCV were also highly variable. Smoking-cessation information and advice was rarely included, despite good evidence of the effectiveness of interventions in improving outcomes. No national policies about the number of routine visits and smoking cessation could be identified. There were inconsistent national policies for both HIV and GDM screening. Conclusions: Antenatal care recommended in protocols used in Australia varies, and is not always consistent with national policies or research evidence. Producing and disseminating systematic reviews of research evidence and national guidelines might reduce this variability and improve the quality of Australian antenatal care. abstract_id: PUBMED:29142576 Frequency and the risk factors of hepatitis C virus in pregnant women; A hospital based descriptive study in Gadap Town Karachi. Objective: To determine the frequency and the risk factors of hepatitis C virus in pregnant women at Al-Tibri Medical College &amp; Hospital in Gadap Town Karachi. Methods: This was a descriptive cross sectional study conducted at Obstetrics &amp; Gynecology OPD of Al-Tibri Medical College &amp; Hospital, Isra University Karachi Campus from 10th June to 10th September 2016. A total of 400 pregnant women of 16-45 years of age, who came in outpatient department for antenatal checkup were selected for the study. The diagnosed cases of Hepatitis C were excluded from the study. Detailed history including age, parity, risk factor like history of transfusion, previous surgeries, vaginal deliveries was taken and relevant examination was performed. Patients on routine antenatal investigation if found to have Anti HCV positive on Immunochromatography Test (ICT) method were further confirmed by Elisa. A well designed proforma was used for data collection. Results: During the period of 3 months 400 women in antenatal clinic were tested for hepatitis C, out of which 27 (6.6%) were positive for HCV antibodies. The age of the women included ranges from 16-45 years. Thirteen (7.9%) pregnant women having HCV +ve antibodies fell in 26-30 years of age group. From 27 HCV +ve patients, 19 (70.3%) were multigravida &amp; 8 (29.6%) were primigravida. Majority of the patients (77%) had history of injections. Conclusion: There is high prevalence of Hepatitis C infection among pregnant female in our setup. The possible risk factors are injection, blood transfusion and surgery. 
Answer: The question of whether antenatal screening for hepatitis C (HCV) should be universal or risk factor-based is a subject of debate. The provided abstracts offer insights into different aspects of this issue. An audit at Mater Mothers' Hospitals in Brisbane found that most women testing positive for HCV antibodies had identifiable risk factors, but a small number (2.5%) had no identifiable risk factor, suggesting that some cases may be missed with risk factor-based screening (PUBMED:26121909). Similarly, a study in England identified missed opportunities for HCV antenatal testing among women who were diagnosed with HCV after childbirth, indicating that universal opt-out testing should be considered (PUBMED:38178637). A study comparing expanded versus risk factor-based annual screening for HCV among persons with HIV found that the risk factor-based screening resulted in a lower HCV annual screening rate compared to an expanded approach (PUBMED:37439817). A pilot study for universal antenatal HCV screening at the Rotunda found a seroprevalence of 0.9%, with 27% of anti-HCV positive women reporting no known risk factors, suggesting that risk-based screening might miss a significant number of cases (PUBMED:23914572). A rapid review in the UK found that antenatal testing could play an important role in new case-finding and aid in achieving elimination targets, with anti-HCV test positivity ranging from 0.1 to 0.99% (PUBMED:38017404). In contrast, a study in Egypt, where risk-based screening is practiced, found that universal antenatal HCV screening was widely accepted and that risk-based screening alone would have missed 10% of chronically infected women (PUBMED:25623176). The variability and evidence base of recommendations in Australian protocols about routine antenatal care, including HCV screening, were found to be inconsistent and not always evidence-based (PUBMED:11999256). Lastly, a study in Karachi reported a high prevalence of HCV infection among pregnant women, with the majority having a history of injections as a risk factor (PUBMED:29142576). In summary, the evidence suggests that while most HCV-positive women have identifiable risk factors, a risk factor-based approach may miss a proportion of cases. Universal screening has the potential to identify more cases, which could be particularly important for achieving HCV elimination targets. However, the decision to implement universal versus risk factor-based screening may also depend on factors such as prevalence, cost-effectiveness, and healthcare infrastructure.
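The trade-off described in this answer can be made concrete with a rough projection. The sketch below is illustrative only: the prevalence figures and the share of positives without documented risk factors are taken from the cited abstracts (PUBMED:26121909 and PUBMED:23914572), the cohort size mirrors the Brisbane audit, and the function name is ours rather than anything used in the studies.

```python
# Illustrative projection: HCV-positive women a purely risk-factor-based
# antenatal screening strategy would be expected to miss.

def expected_missed(deliveries, hcv_prevalence, share_without_risk_factor):
    """Expected number of anti-HCV-positive women with no documented risk
    factor, i.e. women who would not be offered a test under risk-based
    screening."""
    positives = deliveries * hcv_prevalence
    return positives * share_without_risk_factor

cohort = 57_659  # size of the Brisbane audit cohort (PUBMED:26121909)

# Brisbane audit: ~0.5% prevalence, 2.5% of positives had no identifiable risk factor
print(round(expected_missed(cohort, 0.005, 0.025)))  # ~7, in line with the 7/281 reported

# Rotunda pilot: 0.9% prevalence, 27% of positives reported no risk factor
print(round(expected_missed(cohort, 0.009, 0.27)))   # ~140 in a cohort of the same size
```

Under the Brisbane figures the absolute number missed is small, whereas the Rotunda figures imply a much larger gap; that difference is essentially the disagreement the answer summarises.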
Instruction: Is there value in using physician billing claims along with other administrative health care data to document the burden of adolescent injury? Abstracts: abstract_id: PUBMED:15720709 Is there value in using physician billing claims along with other administrative health care data to document the burden of adolescent injury? An exploratory investigation with comparison to self-reports in Ontario, Canada. Background: Administrative health care databases may be particularly useful for injury surveillance, given that they are population-based, readily available, and relatively complete. Surveillance based on administrative data, though, is often restricted to injuries that result in hospitalization. Adding physician billing data to administrative data-based surveillance efforts may improve comprehensiveness, but the feasibility of such an approach has rarely been examined. It is also not clear how injury surveillance information obtained using administrative health care databases compares with that obtained using self-report surveys. This study explored the value of using physician billing data along with hospitalization data for the surveillance of adolescent injuries in Ontario, Canada. We aimed i) to document the burden of adolescent injury using administrative health care data, focusing on the relative contribution of physician billing information; and ii) to explore data quality issues by directly comparing adolescent injuries identified in administrative and self-report data. Methods: The sample included adolescents aged 12 to 19 years who participated in the 1996-1997 cross-sectional Ontario Health Survey, and whose survey responses were linked to administrative health care datasets (N = 2067). Descriptive analysis was used to document the burden of injuries as a proportion of all physician care by gender and location of care, and to examine the distribution of both administratively-defined and self-reported activity-limiting injuries according to demographic characteristics. Administratively-defined and self-reported injuries were also directly compared at the individual level. Results: Approximately 10% of physician care for the sample was identified as injury-related. While 18.8% of adolescents had self-reported injury in the previous year, 25.0% had documented administratively-defined injury. The distribution of injuries according to demographic characteristics was similar across data sources, but congruence was low at the individual level. Possible reasons for discrepancies between the data sources included recall errors in the survey data and errors in the physician billing data algorithm. Conclusion: If further validated, physician billing data could be used along with hospital inpatient data to make an important and unique contribution to adolescent injury surveillance. The limitations inherent in different datasets highlight the need to continue rely on multiple information sources for complete injury surveillance information. abstract_id: PUBMED:30022560 Validation studies of claims data in the Asia-Pacific region: A comprehensive review. Purpose: To describe published validation studies of administrative health care claims data in the Asia-Pacific region. 
Methods: A comprehensive literature search was conducted in PubMed for English language articles published through 31-Oct-2017 in humans from 10 Asian-Pacific countries or regions (Japan, Australia, New Zealand, China, Hong Kong, India, Singapore, South Korea, Taiwan, and Thailand) that validated claims-based diagnoses with a gold standard data source. Search terms included the: validation, validity, accuracy, sensitivity, agreement, specificity, positive predictive value, kappa, kappa coefficient, and Cohen's kappa. Results: Forty-three studies across six countries were identified: Australia (21); Japan (6); South Korea (6); Taiwan (7); Singapore (2); and New Zealand (1). Gold standard diagnoses were obtained from: medical records (18); registry data (11); self-reported questionnaires (5); and other data sources (9). Validity measures used included sensitivity, specificity, positive and negative predictive values (12); sensitivity, specificity, and positive predictive value (4); sensitivity and specificity (4); sensitivity and positive predictive value (4); and combinations of other measures (19). Validated outcomes included medical conditions (28); disease-specific comorbidities (8); death, smoking, and other (ie, injury, hospital outcome measures) (5); medication/transfusion (2). Approximately 72% of the studies were published within the last 5 years. Conclusions: Validation studies of claims data published in the English language in the Asia-Pacific region are very limited. Given the increased reliance on administrative health care databases for pharmacoepidemiology and the need for ensuring the credibility of results from such data, additional support for the conduct of validation research of claims data in the Asia-Pacific region is needed. abstract_id: PUBMED:23412882 Identifying suicidal behavior among adolescents using administrative claims data. Purpose: To assess the safety of psychotropic medication use in children and adolescents, it is critical to be able to identify suicidal behaviors from medical claims data and distinguish them from other injuries. The purpose of this study was to develop an algorithm using administrative claims data to identify medically treated suicidal behavior in a cohort of children and adolescents. Methods: The cohort included 80,183 youth (6-18 years) enrolled in Tennessee's Medicaid program from 1995-2006 who were prescribed antidepressants. Potential episodes of suicidal behavior were identified using external cause-of-injury codes (E-codes) and ICD-9-CM codes corresponding to the potential mechanisms of or injuries resulting from suicidal behavior. For each identified episode, medical records were reviewed to determine if the injury was self-inflicted and if intent to die was explicitly stated or could be inferred. Results: Medical records were reviewed for 2676 episodes of potential self-harm identified through claims data. Among 1162 episodes that were classified as suicidal behavior, 1117 (96%) had a claim for suicide and self-inflicted injury, poisoning by drugs, or both. The positive predictive value of code groups to predict suicidal behavior ranged from 0-88% and improved when there was a concomitant hospitalization but with the limitation of excluding some episodes of confirmed suicidal behavior. Conclusions: Nearly all episodes of confirmed suicidal behavior in this cohort of youth included an ICD-9-CM code for suicide or poisoning by drugs. 
An algorithm combining these ICD-9-CM codes and hospital stay greatly improved the positive predictive value for identifying medically treated suicidal behavior. abstract_id: PUBMED:28633003 Assessment of Selected Overdose Poisoning Indicators in Health Care Administrative Data in 4 States, 2012. Objectives: In 2012, a consensus document was developed on drug overdose poisoning definitions. We took the opportunity to apply these new definitions to health care administrative data in 4 states. Our objective was to calculate and compare drug (particularly opioid) poisoning rates in these 4 states for 4 selected Injury Surveillance Workgroup 7 (ISW7) drug poisoning indicators, using 2 ISW7 surveillance definitions, Option A and Option B. We also identified factors related to the health care administrative data used by each state that might contribute to poisoning rate variations. Methods: We used state-level hospital and emergency department (ED) discharge data to calculate age-adjusted rates for 4 drug poisoning indicators (acute drug poisonings, acute opioid poisonings, acute opioid analgesic poisonings, and acute or chronic opioid poisonings) using just the principal diagnosis or first-listed external cause-of-injury fields (Option A) or using all diagnosis or external cause-of-injury fields (Option B). We also calculated the high-to-low poisoning rate ratios to measure rate variations. Results: The average poisoning rates per 100 000 population for the 4 ISW7 poisoning indicators ranged from 11.2 to 216.4 (ED) and from 14.2 to 212.8 (hospital). For each indicator, ED rates were usually higher than were hospital rates. High-to-low rate ratios between states were lowest for the acute drug poisoning indicator (range, 1.5-1.6). Factors potentially contributing to rate variations included administrative data structure, accessibility, and submission regulations. Conclusions: The ISW7 Option B surveillance definition is needed to fully capture the state burden of opioid poisonings. Efforts to control for factors related to administrative data, standardize data sources on a national level, and improve data source accessibility for state health departments would improve the accuracy of drug poisoning surveillance. abstract_id: PUBMED:29256268 Suicidal Behavior and Non-Suicidal Self-Injury in Emergency Departments Underestimated by Administrative Claims Data. Background: External causes of injury codes (E-codes) are used in administrative and claims databases for billing and often employed to estimate the number of self-injury visits to emergency departments (EDs). Aims: This study assessed the accuracy of E-codes using standardized, independently administered research assessments at the time of ED visits. Method: We recruited 254 patients at three psychiatric emergency departments in the United States between 2007 and 2011, who completed research assessments after presenting for suicide-related concerns and were classified as suicide attempters (50.4%, n = 128), nonsuicidal self-injurers (11.8%, n = 30), psychiatric controls (29.9%, n = 76), or interrupted suicide attempters (7.8%, n = 20). These classifications were compared with their E-code classifications. Results: Of the participants, 21.7% (55/254) received an E-code. In all, 36.7% of research-classified suicide attempters and 26.7% of research-classified nonsuicidal self-injurers received self-inflicted injury E-codes. 
Those who did not receive an E-code but should have based on the research assessments had more severe psychopathology, more Axis I diagnoses, more suicide attempts, and greater suicidal ideation. Limitations: The sample came from three large academic medical centers and these findings may not be generalizable to all EDs. Conclusion: The frequency of ED visits for self-inflicted injury is much greater than current figures indicate and should be increased threefold. abstract_id: PUBMED:34558615 Identification of Fall-Related Injuries in Nursing Home Residents Using Administrative Claims Data. Background: Fall-related injuries (FRIs) are a leading cause of morbidity, mortality, and costs among nursing home (NH) residents. Carefully defining FRIs in administrative data is essential for improving injury-reduction efforts. We developed a series of novel claims-based algorithms for identifying FRIs in long-stay NH residents. Methods: This is a retrospective cohort of residents of NH residing there for at least 100 days who were continuously enrolled in Medicare Parts A and B in 2016. FRIs were identified using 4 claims-based case-qualifying (CQ) definitions (Inpatient [CQ1], Outpatient and Provider with Procedure [CQ2], Outpatient and Provider with Fall [CQ3], or Inpatient or Outpatient and Provider with Fall [CQ4]). Correlation was calculated using phi correlation coefficients. Results: Of 153 220 residents (mean [SD] age 81.2 [12.1], 68.0% female), we identified 10 104 with at least one FRI according to one or more CQ definition. Among 2 950 residents with hip fractures, 1 852 (62.8%) were identified by all algorithms. Algorithm CQ4 (n = 326-2 775) identified more FRIs across all injuries while CQ1 identified less (n = 21-2 320). CQ2 identified more intracranial bleeds (1 028 vs 448) than CQ1. For nonfracture categories, few FRIs were identified using CQ1 (n = 20-488). Of the 2 320 residents with hip fractures identified by CQ1, 2 145 (92.5%) had external cause of injury codes. All algorithms were strongly correlated, with phi coefficients ranging from 0.82 to 0.99. Conclusions: Claims-based algorithms applied to outpatient and provider claims identify more nonfracture FRIs. When identifying risk factors, stakeholders should select the algorithm(s) suitable for the FRI and study purpose. abstract_id: PUBMED:15933413 Concordance between childhood injury diagnoses from two sources: an injury surveillance system and a physician billing claims database. Objectives: (1) To determine the concordance between injury diagnoses (head injury (HI), probable HI, or orthopedic injury) for children visiting an emergency department for an injury using two Data Sources: an injury surveillance system (Canadian Hospitals Injury Research and Prevention Program, CHIRPP) and a physician billing claims database (Regie de l'assurance maladie de Quebec, RAMQ), and (2) to determine the sensitivity and specificity of diagnostic and procedure codes in billing claims for identifying HI and orthopedic injury among children. Design: In this cross sectional cohort, data for 3049 children who sought care for an injury (2000-01) were obtained from both sources and linked using the child's personal health insurance number. Methods: The physician recorded diagnostic codes from CHIRPP were used to categorize the children into three groups (HI, probable HI, and orthopedic), while an algorithm, using ICD-9-CM diagnostic and procedures codes from the RAMQ, was used to classify children into the same three groups. 
Results: Concordance between the data sources was "substantial" (weighted Kappa 0.66; 95% CI 0.63 to 0.69). The sensitivity of diagnostic and procedure codes in the RAMQ database for identifying HI and for orthopedic injury were 0.61 (95% CI 0.57 to 0.64) and 0.97 (95% CI 0.96 to 0.98), respectively. The specificity for identifying HI and for orthopedic injury were 0.97 (95% CI 0.96 to 0.98) and 0.58 (95% CI 0.56 to 0.63), respectively. Conclusion: Combining diagnostic and procedures codes in a physician billing claims database (the RAMQ database) may be a valid method of estimating injury occurrence among children. abstract_id: PUBMED:28918696 Nature of Injury and Risk of Multiple Claims Among Workers in Manitoba Health Care. In industrial societies, work-related musculoskeletal disorders are common among workers, frequently resulting in recurrent injuries, work disability, and multiple compensation claims. The risk of idiopathic musculoskeletal injuries is thought to be more than twice the risk of any other health problem among workers in the health care sector. This risk is highly prevalent particularly among workers whose job involves frequent physical tasks, such as patient lifting and transfer. Workers with recurrent occupational injuries are likely to submit multiple work disability claims and progress to long-term disability. The objective of this study was to explore the influence of injury type and worker characteristics on multiple compensation claims, using workers' compensation claims data. This retrospective study analyzed 11 years of secondary claims data for health care workers. Workers' occupational groups were classified based on the nature of physical tasks associated with their jobs, and the nature of work injuries was categorized into non-musculoskeletal, and traumatic and idiopathic musculoskeletal injuries. The result shows that risk of multiple injury claims increased with age, and the odds were highest for older workers aged 55 to 64 (odds ratio [OR] = 3.5). A large proportion of those who made an injury claim made multiple claims that resulted in more lost time than single injury claims. The study conclusion is that the nature of injury and work tasks are probably more significant risk factors for multiple claims than worker characteristics. abstract_id: PUBMED:30219029 The Ottawa SAH search algorithms: protocol for a multi- centre validation study of primary subarachnoid hemorrhage prediction models using health administrative data (the SAHepi prediction study protocol). Background: Conducting prospective epidemiological studies of hospitalized patients with rare diseases like primary subarachnoid hemorrhage (pSAH) are difficult due to time and budgetary constraints. Routinely collected administrative data could remove these barriers. We derived and validated 3 algorithms to identify hospitalized patients with a high probability of pSAH using administrative data. We aim to externally validate their performance in four hospitals across Canada. Methods: Eligible patients include those ≥18 years of age admitted to these centres from January 1, 2012 to December 31, 2013. 
We will include patients whose discharge abstracts contain predictive variables identified in the models (ICD-10-CA diagnostic codes I60** (subarachnoid hemorrhage), I61** (intracranial hemorrhage), I62** (other nontrauma intracranial hemorrhage), I67** (other cerebrovascular disease), S06** (intracranial injury), G97 (other postprocedural nervous system disorder) and CCI procedural codes 1JW51 (occlusion of intracranial vessels), 1JE51 (carotid artery occlusion), 3JW10 (intracranial vessel imaging), 3FY20 (CT scan (soft tissue of neck)), and 3OT20 (CT scan (abdominal cavity))). The algorithms will be applied to each patient and the diagnosis confirmed via chart review. We will assess each model's sensitivity, specificity, negative and positive predictive value across the sites. Discussion: Validating the Ottawa SAH Prediction Algorithms will provide a way to accurately identify large SAH cohorts, thereby furthering research and altering care. abstract_id: PUBMED:1453150 The impact of a patient survey or a physician reminder on the provision of adolescent preventive health care. Background: The objective of this study was to determine if the use of a patient survey or a chart stamp could increase the implementation of adolescent preventive health care in a family practice center. Methods: Subjects were all patients 13 to 18 years old (date of birth 1972 to 1977), who visited the Aultman Family Practice Centers from October 1, 1989, through September 30, 1990 (N = 801 patient visits). Three different 1-month interventions (patient questionnaire, physician stamp, and both patient questionnaire and physician stamp) as well as a 1-month control period were implemented. The effect of the intervention on adolescent preventive health care was measured by review of documentation in the patient's chart. Results: Those charts that indicated that either the questionnaire or stamp had been used showed significantly more documented discussion of issues relating to mood, injury, sexuality, exposure to toxins, and lifestyle (all P < .01). These discussions most commonly took place during a visit for a physical examination. The percentage of visits with documented discussions did not vary significantly according to type of reminder, nor with any physician or patient characteristic. Conclusions: The use of a reminder, especially in the context of an office visit for a physical examination, significantly increased the implementation of adolescent preventive health care in this family practice center. Answer: Yes, there is value in using physician billing claims along with other administrative health care data to document the burden of adolescent injury. A study exploring the use of physician billing data along with hospitalization data for the surveillance of adolescent injuries in Ontario, Canada, found that approximately 10% of physician care for the sample was identified as injury-related. While 18.8% of adolescents self-reported injury in the previous year, 25.0% had documented administratively-defined injury. The distribution of injuries according to demographic characteristics was similar across data sources, but congruence was low at the individual level. The study concluded that if further validated, physician billing data could be used alongside hospital inpatient data to make an important and unique contribution to adolescent injury surveillance.
However, the limitations inherent in different datasets highlight the need to continue relying on multiple information sources for complete injury surveillance information (PUBMED:15720709). This conclusion is supported by the broader context of validation studies of claims data in the Asia-Pacific region, which emphasize the need for ensuring the credibility of results from such data and call for additional support for the conduct of validation research of claims data (PUBMED:30022560). Moreover, the use of administrative claims data has been shown to be valuable in identifying specific health events such as suicidal behavior among adolescents (PUBMED:23412882) and fall-related injuries in nursing home residents (PUBMED:34558615), although the accuracy of such data can vary and may underestimate certain conditions like suicidal behavior and non-suicidal self-injury (PUBMED:29256268). Overall, the evidence suggests that physician billing claims, when used in conjunction with other administrative health care data, can enhance the documentation and surveillance of adolescent injuries, but it is important to be aware of the potential discrepancies and limitations of these data sources.
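Most of the validation work summarised in this answer reports the same handful of agreement statistics: sensitivity, specificity, positive and negative predictive values, and kappa. As a point of reference, a minimal sketch of how these are derived from a 2x2 table is shown below; the counts are hypothetical (not taken from any of the cited studies), and the kappa is the simple unweighted version rather than the weighted kappa used in the CHIRPP/RAMQ comparison.

```python
# Hypothetical 2x2 table: claims-based injury flag vs. a reference standard
# (e.g. chart review or a surveillance registry). Counts are illustrative only.
tp, fp, fn, tn = 61, 3, 39, 97

sensitivity = tp / (tp + fn)   # flagged positive among those truly injured
specificity = tn / (tn + fp)   # flagged negative among those not injured
ppv = tp / (tp + fp)           # truly injured among those flagged positive
npv = tn / (tn + fn)           # not injured among those flagged negative

# Unweighted Cohen's kappa: agreement between the two sources beyond chance
n = tp + fp + fn + tn
p_observed = (tp + tn) / n
p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} kappa={kappa:.2f}")
```

Which of these figures matters most depends on the intended use: burden surveillance leans on sensitivity and NPV, whereas case-finding definitions such as the suicidal-behaviour or fall-injury algorithms above are usually limited by PPV.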
Instruction: Is resistin a link between highly active antiretroviral therapy and fat redistribution in HIV-infected children? Abstracts: abstract_id: PUBMED:18787374 Is resistin a link between highly active antiretroviral therapy and fat redistribution in HIV-infected children? Objectives: To assess the features of fat redistribution, detected by clinical and ultrasound (US) methods, and the presence of metabolic disorders in HIV-infected children undergoing antiretroviral therapy. To evaluate if serum levels of resistin, a hormone produced only by visceral adipose tissue, are a marker of fat redistribution in these patients. Design And Methods: Forty-five consecutive symptomatic HIV-infected children were considered for inclusion in the study. Patients were enrolled if treated for at least 6 months with antiretroviral therapy with or without protease inhibitor (PI) and if compliant to the study protocol. Patients were evaluated for: anthropometric measures, fat redistribution by clinical and US methods, serum lipids, parameters of insulin resistance by homeostasis model assessment for insulin resistance, serum resistin levels by an enzyme-linked immunosorbent assay. Results: Eighteen children fulfilled the inclusion criteria and were enrolled in the study. Twelve (66%) children had clinical and/or US evidence of fat redistribution; 9 (75%) of them were on PI therapy; only 3 of 6 children without fat redistribution were on PI therapy (p&lt;0.05). Serum lipids and insulin resistance parameters did not differ between children with or without fat redistribution. There was a highly significant linear correlation between visceral fat detected by US and circulating resistin levels (r=0.87; p&lt;0.0001). Conclusions: Fat redistribution occurred in most HIV-infected children undergoing PI therapy. Because serum resistin levels reflect the amount of visceral fat, they could be considered a sensitive marker of fat redistribution in HIV-infected children. abstract_id: PUBMED:15956078 Circulating resistin levels are not associated with fat redistribution, insulin resistance, or metabolic profile in patients with the highly active antiretroviral therapy-induced metabolic syndrome. Context: The mechanisms underlying the development of the highly active antiretroviral therapy (HAART)-induced metabolic syndrome remain to be fully elucidated. Objective: The objective of this study was to investigate whether the adipocyte-secreted hormone, resistin, is associated with anthropometric and metabolic abnormalities of the HAART-induced metabolic syndrome. Design, Setting, And Patients: We conducted a cross-sectional study of 227 HIV-positive patients (37 women and 190 men) recruited from the infectious diseases clinics. On the basis of history, physical examination, dual-energy x-ray absorptiometry, and single-slice computed tomography, patients were classified into four groups: non-fat redistribution (n = 85), fat accumulation (n = 42), fat wasting (n = 35), and mixed fat redistribution (n = 56). Main Outcome Measures: The main outcome measures were serum resistin levels and anthropometric and metabolic variables. Results: Mean serum resistin levels were not significantly different among subjects with fat accumulation, fat wasting, or mixed fat redistribution or between these groups and the non-fat redistribution group. 
We found a weak, but significant, positive correlation between resistin and percent total body fat (r = 0.20; P < 0.01), total extremity fat (r = 0.18; P < 0.01), and abdominal sc fat (r = 0.19; P < 0.01), but not abdominal visceral fat (r = -0.10; P = 0.16) or waist to hip ratio (r = -0.05; P = 0.43). When adjustments were made for gender (women, 3.92 +/- 2.71 ng/ml; men, 2.96 +/- 2.61 ng/ml; P = 0.05), correlations between resistin and the above parameters were no longer significant. Importantly, resistin levels were not correlated with fasting glucose, insulin, homeostasis model assessment of insulin resistance index, triglycerides, or cholesterol levels in the whole group. Conclusions: Resistin is related to gender, but is unlikely to play a major role in the insulin resistance and metabolic abnormalities of the HAART-induced metabolic syndrome. abstract_id: PUBMED:20497248 Adipokine profiles and lipodystrophy in HIV-infected children during the first 4 years on highly active antiretroviral therapy. Objective: The aim of the study was to evaluate the evolution of plasma adipokines and lipodystrophy in protease inhibitor-naïve vertically HIV-infected children on highly active antiretroviral therapy (HAART). Patients And Methods: We carried out a multicentre retrospective study of 27 children during 48 months on HAART. Every 3 months, CD4+ T-cells, CD8+ T-cells, viral load (VL), cholesterol, triglycerides, lipoproteins and adipokines were measured. Diagnoses of lipodystrophy were based on clinical examinations. Results: We found hypercholesterolaemia (>200 mg/dL) in 9.5, 30.4, 21.7, 14.3 and 13.3% of the subjects at months 0, 12, 24, 36 and 48, respectively, and hypertriglyceridaemia (>170 mg/dL) in 14.3, 8.3, 13, 4.5 and 0% at the same time-points. During follow-up, and especially at the end of the study, we found an increase in plasma resistin levels and significant increases in total plasminogen activator inhibitor type 1, adiponectin, and leptin levels (P<0.05). We also observed slight increases in the leptin/adiponectin ratio, homeostatic model assessment, and C-peptide values during the first months of treatment followed by a moderate decrease or stabilization after 24 months on HAART. At the end of the study, 12 of the 27 children (44.4%) had lipodystrophy, 10 (37%) had lipoatrophy, and 11 (40.7%) had lipohypertrophy; and only three of the 27 children (11.1%) were diagnosed with lipoatrophy and lipohypertrophy with scores 2. Conclusions: HIV-infected children showed an increase in serum adipokine levels, but this was not associated with the emergence of lipodystrophy during 48 months on HAART. abstract_id: PUBMED:20547180 The HIV-1/HAART associated metabolic syndrome - novel adipokines, molecular associations and therapeutic implications. The use of highly active anti-retroviral therapy has been associated with effects on various metabolic and morphological parameters that are underlied by significant hormonal and cytokine changes. Novel adipokines may have a role in the pathogenesis of these changes. In fact, leptin deficiency and hypoadiponectinemia correlate with lipoatrophy and central fat accumulation respectively whereas hypoadiponectinemia also appears to be associated with insulin resistance and lipid disorders. Preliminary evidence from proof-of-concept studies suggests that administration of recombinant leptin in replacement doses and/or administration of medications that increase adiponectin levels may improve the HIV-1/HAART associated metabolic syndrome (HAMS).
Immune effects of hypoleptinemia and hypoadiponectinemia might have a role in the evolution of the HAMS that need to be further elucidated. The role for resistin in relation to HAMS is controversial and further investigation is necessary. A potential role of other novel cytokines like visfatin, retinol-binding protein-4, apelin, acylation stimulating protein, omentin and vaspin needs to be further elucidated. abstract_id: PUBMED:24100713 Infection duration and inflammatory imbalance are associated with atherosclerotic risk in HIV-infected never-smokers independent of antiretroviral therapy. Objectives: To determine whether the reported increased atherosclerotic risk among HIV-infected individuals is related to antiretroviral therapy (ART) or HIV infection, whether this risk persists in never-smokers, and whether inflammatory profiles are associated with higher risk. Design: Matched cross-sectional study. Methods: A total of 100 HIV-infected patients (50 ART-treated &gt;4 years, 50 ART-naive but HIV-infected &gt;2 years) and 50 HIV-negative controls were recruited in age-matched never-smoking male triads (mean age 40.2 years). Carotid intima-media maximal thickness (c-IMT) was measured across 12 sites. Pro-inflammatory [highly sensitive C-reactive protein (hs-CRP), resistin, interleukin-6, interleukin-18, insulin, serum amyloid A, D-dimer) and anti-inflammatory (total and high molecular weight adiponectin, interleukin-27, interleukin-10) markers were dichotomized into high/low scores (based on median values). c-IMT was compared across HIV/treatment groups or inflammatory profiles using linear regression models adjusted for age, diabetes, hypertension, and, for HIV-infected patients, nadir CD4 cell counts. Results: Although adjusted c-IMT initially tended to be thicker in ART-exposed patients (P=0.2), in post-hoc analyses stratifying by median HIV duration we observed significantly higher adjusted c-IMT in patients with longer (&gt;7.9 years: 0.760±0.008 mm) versus shorter prevalent duration of known HIV infection (&lt;7.9 years: 0.731±0.008 mm, P=0.02), which remained significant after additionally adjusting for ART (P=0.04). Individuals with low anti-inflammatory profile (&lt;median versus &gt;median score) had thicker c-IMT (0.754±0.006mm versus 0.722±0.006 mm, P&lt;0.001), with anti-inflammatory markers declining as prevalent duration of HIV infection increased (P for linear trend &lt;0.001). Conclusion: Known HIV duration is related to thicker c-IMT, irrespective of ART, in these carefully selected age-matched never-smoking HIV-treated and ART-naive male individuals. Higher levels of anti-inflammatory markers appeared protective for atherosclerosis. abstract_id: PUBMED:27605560 Chronic binge alcohol administration impairs glucose-insulin dynamics and decreases adiponectin in asymptomatic simian immunodeficiency virus-infected macaques. Alcohol use disorders (AUDs) frequently exist among persons living with HIV/AIDS. Chronic alcohol consumption, HIV infection, and antiretroviral therapy (ART) are independently associated with impairments in glucose-insulin dynamics. Previous studies from our laboratory have shown that chronic binge alcohol (CBA) administration decreases body mass index, attenuates weight gain, and accentuates skeletal muscle wasting at end-stage disease in non-ART-treated simian immunodeficiency virus (SIV)-infected male rhesus macaques. 
The aim of this study was to investigate whether CBA and ART alone or in combination alter body composition or glucose-insulin dynamics in SIV-infected male rhesus macaques during the asymptomatic phase of SIV infection. Daily CBA or sucrose (SUC) administration was initiated 3 mo before intrarectal SIV inoculation and continued until the study end point at 11 mo post-SIV infection. ART or placebo was initiated 2.5 mo after SIV infection and continued until study end point. Four treatment groups (SUC/SIV ± ART and CBA/SIV ± ART) were studied. CBA/SIV macaques had significantly decreased circulating adiponectin and resistin levels relative to SUC/SIV macaques and reduced disposition index and acute insulin response to glucose, insulin, and C-peptide release during frequently sampled intravenous glucose tolerance test, irrespective of ART status. No statistically significant differences were observed in homeostatic model assessment-insulin resistance values, body weight, total body fat, abdominal fat, or total lean mass or bone health among the four groups. These findings demonstrate CBA-mediated impairments in glucose-insulin dynamics and adipokine profile in asymptomatic SIV-infected macaques, irrespective of ART. abstract_id: PUBMED:24532267 Lipodystrophy syndrome in HIV treatment-multiexperienced patients: implication of resistin. Background: Impaired production of adipocytokines is a major factor incriminated in the occurrence of lipodystrophy (LD). Objective: To evaluate LD prevalence and subtypes in HIV treatment-multiexperienced patients, and to determine the correlations between adipocytokines and LD subtypes. Methods: Cross-sectional study in a Romanian tertiary care hospital, between 2008 and 2010, in HIV-positive patients, undergoing cART for ≥6 months. LD diagnosis, based on clinical and anthropometric data, was classified into lipoatrophy (LA), lipohypertrophy (LH) and mixed fat redistribution (MFR). Blood samples were collected for leptin, adiponectin and resistin assessments. Results: We included 100 patients, 44 % with LD, among which LA had 63 %. LA patients had sex ratio, median age, treatment duration and median number of ARV regimens of 1, 20, 93 and 3.5 compared to non-LD patients: 1.65, 31, 44 and 1. LH and MFR patients were older and had higher total and LDL cholesterol versus non-LD patients. For both overall group and female group, LA was associated in univariate and multivariate analysis with increased resistin (p = 0.02 and 0.04) and number of ARV regimens (p &lt; 0.001). Determination coefficient (Nagelkerke R (2)) of increased resistin and the number of ARV combinations in the presence of LA was 33 % in overall group and 47 % in female patients. Conclusions: In our young HIV-positive population, LD had high prevalence with predominance of LA subtype. LA was associated with high resistin levels and greater number of ARV regimens in overall group and female subgroup. Resistin could be used as a marker of peripheral adipose tissue loss and might be used as a target for new anti-LD therapeutic strategies. abstract_id: PUBMED:32920318 Adipocytokine dysregulation, abnormal glucose metabolism, and lipodystrophy in HIV-infected adolescents receiving protease inhibitors. Background: Lipodystrophy is common in HIV-infected patients receiving protease inhibitors (PIs), stavudine, and zidovudine. Adipocytokines may be altered in lipodystrophy. 
We evaluated risk factors, adipocytokine levels, insulin resistance, and lipid profiles in HIV-infected adolescents with different lipodystrophy types. Methods: A cross-sectional study was conducted in 80 perinatally HIV-infected adolescents receiving PI-based highly active antiretroviral therapy for ≥ 6 months. Patients underwent oral glucose tolerance tests and measurements of high-molecular-weight (HMW) adiponectin, leptin, resistin, insulin, and lipids. They were classified into 3 groups based on the clinical findings: no lipodystrophy, isolated lipoatrophy, and any lipohypertrophy (isolated lipohypertrophy or combined type). Results: Of the 80 patients (median age, 16.7 years), 18 (22.5%) had isolated lipoatrophy, while 8 (10%) had any lipohypertrophy (four with isolated lipohypertrophy, and four with the combined type). In a multivariate analysis, longer exposure to stavudine (OR: 1.03; 95% CI, 1.01-1.06; p = 0.005) and indinavir (OR: 1.03; 95% CI, 1.01-1.06; p = 0.012) were associated with lipoatrophy, while longer exposure to didanosine (OR: 1.04; 95% CI, 1.01-1.08; p = 0.017) and indinavir (OR: 1.10; 95% CI, 1.00-1.21; p = 0.045) were associated with any lipohypertrophy. Leptin levels were highest in the any-lipohypertrophy group and lowest in the isolated-lipoatrophy group (p = 0.013). HMW adiponectin levels were significantly lowest in the any-lipohypertrophy group and highest in the no-lipodystrophy group (p = 0.001). There were no significant differences in the levels of resistin among the three groups (p = 0.234). The prevalence of insulin resistance (p = 0.002) and prediabetes/diabetes (p &lt; 0.001) were significantly highest in the any-lipohypertrophy group. Patients with lipoatrophy and those without lipodystrophy had comparable degrees of insulin resistance (p = 0.292). In multiple linear regression analysis, adjusted for age, sex, and waist-height ratio, HMW adiponectin levels were associated with Matsuda index (β = 0.5; p = 0.003) and quantitative insulin sensitivity check index (QUICKI) (β = 40.1; p = 0.010) and almost significantly associated with homeostatic model assessment of insulin resistance (HOMA-IR) (p = 0.054). Leptin and resistin levels were not associated with HOMA-IR, Matsuda index, or QUICKI (all p &gt; 0.05). Conclusions: Abnormal glucose metabolism and dysregulation of adipocytokines were common in the HIV-infected adolescents with lipohypertrophy and the combined type. Preventive screening for cardiovascular diseases caused by metabolic alterations should be routinely performed. abstract_id: PUBMED:24958357 Adipokines, hormones related to body composition, and insulin resistance in HIV fat redistribution syndrome. Background: Lipodystrophies are characterized by adipose tissue redistribution, insulin resistance (IR) and metabolic complications. Adipokines and hormones related to body composition may play an important role linking these alterations. Our aim was to evaluate adipocyte-derived hormones (adiponectin, leptin, resistin, TNF-α, PAI-1) and ghrelin plasma levels and their relationship with IR in HIV-infected patients according to the presence of lipodystrophy and fat redistribution. Methods: Anthropometric and metabolic parameters, HOMA-IR, body composition by DXA and CT, and adipokines were evaluated in 217 HIV-infected patients on cART and 74 controls. Fat mass ratio defined lipodystrophy (L-FMR) was defined as the ratio of the percentage of the trunk fat mass to the percentage of the lower limb fat mass by DXA. 
Patient's fat redistribution was classified into 4 different groups according the presence or absence of either clinical lipoatrophy or abdominal prominence: no lipodystrophy, isolated central fat accumulation (ICFA), isolated lipoatrophy and mixed forms (MXF). The associations between adipokines levels and anthropometric, metabolic and body composition were estimated by Spearman correlation. Results: Leptin levels were lower in patients with FMR-L and isolated lipoatrophy, and higher in those with ICFA and MXF. Positive correlations were found between leptin and body fat (total, trunk, leg, arm fat evaluated by DXA, and total, visceral (VAT), subcutaneous adipose tissue (SAT), and VAT/SAT ratio evaluated by CT) regardless of FMR-L, and with HOMA-IR only in patients with FMR-L. Adiponectin correlated negatively with VAT, and its mean levels were lower in patients with ICFA and higher in those with no lipodystrophy. Resistin was not correlated with adipose tissue but positively correlated with HOMA-IR in FMR-L patients. PAI-1 levels were higher in MXF-patients and their levels were positively correlated with VAT in those with FMR-L. Ghrelin was higher in HIV-infected patients than controls despite BMI-matching. Conclusion: The overall body fat reduction in HIV lipoatrophy was associated with low leptin plasma levels, and visceral fat accumulation was mainly associated with decreased plasma levels of adiponectin. abstract_id: PUBMED:22145933 Serum leptin level mediates the association of body composition and serum C-reactive protein in HIV-infected persons on antiretroviral therapy. Higher body mass index (BMI) is associated with increased serum C-reactive protein (CRP) levels in HIV-infected individuals on antiretroviral therapy (ART), but the relationship of adipose tissue mass to systemic inflammation is not well described in this population. We hypothesized that serum adipokine levels (i.e., hormones produced by adipocytes) are a superior predictor of CRP compared to anthropometric or radiographic measures of body composition in patients on effective, stable ART. We evaluated the relationship of serum leptin, adiponectin, and resistin, BMI, and dual energy x-ray absorptiometry (DEXA) measurements with serum highly sensitive CRP (hsCRP) in a cross-sectional cohort of 106 predominantly virologically suppressed, HIV-infected adults on ART for ≥24 weeks using multivariable linear regression and formal criteria to assess statistical mediation. Median BMI, hsCRP, and leptin values were 25.2 kg/m(2), 3.0 mg/liter, and 3.8 ng/ml, respectively. BMI and DEXA limb fat, body fat, and trunk fat measurements were significantly associated with both serum leptin and hsCRP levels (all p≤0.02). Leptin was also associated with hsCRP (p&lt;0.01). The regression coefficient for the effect of BMI or DEXA measurements on hsCRP was reduced, and the relationship was no longer statistically significant, after adjusting for leptin, indicating leptin functioned as a mediating variable within these relationships. Adiponectin and resistin levels did not demonstrate similar effects. Serum leptin was a superior predictor of hsCRP compared to BMI and DEXA body fat measurements, which may reflect alterations in body composition in treated HIV infection and the important contribution of adipose tissue to inflammation in this population. 
Answer: The relationship between resistin and fat redistribution in HIV-infected children undergoing highly active antiretroviral therapy (HAART) is not entirely clear, as studies have shown conflicting results. One study found that fat redistribution occurred in most HIV-infected children undergoing protease inhibitor (PI) therapy, which is a component of HAART. In this study, there was a highly significant linear correlation between visceral fat detected by ultrasound and circulating resistin levels, suggesting that serum resistin levels could be considered a sensitive marker of fat redistribution in HIV-infected children (PUBMED:18787374). However, another study conducted on HIV-positive patients, which included children, found that mean serum resistin levels were not significantly different among subjects with various types of fat redistribution, including fat accumulation, fat wasting, or mixed fat redistribution. This study concluded that resistin is unlikely to play a major role in the insulin resistance and metabolic abnormalities of the HAART-induced metabolic syndrome (PUBMED:15956078). Further research on adipokine profiles in HIV-infected children during the first 4 years on HAART showed an increase in serum adipokine levels, but this was not associated with the emergence of lipodystrophy (PUBMED:20497248). Another study suggested that novel adipokines, including resistin, may have a role in the pathogenesis of metabolic and morphological changes associated with HAART, but the role of resistin in relation to the HAART-associated metabolic syndrome (HAMS) is controversial and requires further investigation (PUBMED:20547180). In summary, while there is some evidence to suggest that resistin could be linked to fat redistribution in HIV-infected children on HAART, particularly in the context of PI therapy, other studies have not found a significant association between resistin levels and fat redistribution or metabolic abnormalities. Therefore, more research is needed to clarify the role of resistin in this context.
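A side note for readers who are not familiar with the insulin-resistance indices cited in the abstracts above: HOMA-IR and QUICKI follow simple published formulas based on fasting glucose and insulin, while the Matsuda index additionally requires the full OGTT time course and is therefore not shown. The minimal Python sketch below illustrates the two fasting-based indices; the input values are hypothetical and are not data from any of the cited studies.

import math

def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    # HOMA-IR = fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5
    return glucose_mmol_l * insulin_uu_ml / 22.5

def quicki(glucose_mg_dl: float, insulin_uu_ml: float) -> float:
    # QUICKI = 1 / (log10(fasting insulin, uU/mL) + log10(fasting glucose, mg/dL))
    return 1.0 / (math.log10(insulin_uu_ml) + math.log10(glucose_mg_dl))

if __name__ == "__main__":
    glucose_mg_dl = 92.0                   # hypothetical fasting plasma glucose
    glucose_mmol_l = glucose_mg_dl / 18.0  # convert mg/dL to mmol/L
    insulin_uu_ml = 14.0                   # hypothetical fasting insulin
    print(f"HOMA-IR = {homa_ir(glucose_mmol_l, insulin_uu_ml):.2f}")
    print(f"QUICKI  = {quicki(glucose_mg_dl, insulin_uu_ml):.3f}")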
Instruction: Does HistoScanning™ predict positive results in prostate biopsy? Abstracts: abstract_id: PUBMED:26215749 True targeting-derived prostate biopsy: HistoScanning™ remained inadequate despite advanced technical efforts. Purpose: To verify the reliability of HistoScanning™-based, true targeting (TT)-derived prostate biopsy. Methods: We relied on 40 patients suspicious for prostate cancer who underwent standard and TT-derived prostate biopsy. Sensitivity, specificity, positive predictive value, negative predictive value and the area under the curve (AUC) were assessed for the prediction of biopsy results per octant by HistoScanning™, using different HistoScanning™ signal volume cutoffs (>0, >0.2 and >0.5 ml). Results: Overall, 319 octants were analyzed. Of those, 64 (20.1 %) harbored prostate cancer. According to different HistoScanning™ signal volume cutoffs (>0, >0.2 and >0.5 ml), the AUCs for predicting biopsy results were: 0.51, 0.51 and 0.53, respectively. Similarly, the sensitivity, specificity, positive predictive and negative predictive values were: 20.7, 78.2, 17.4 and 81.6 %; 20.7, 82.0, 20.3 and 82.3 %; and 12.1, 94.6, 33.3 and 82.9 %, respectively. Conclusions: Prediction of biopsy results based on HistoScanning™ signals and TT-derived biopsies was unreliable. Moreover, the AUC of TT-derived biopsies was low and did not improve when additional signal volume cutoffs were applied (>0.2 and >0.5 ml). We cannot recommend a variation of well-established biopsy standards or reduction in biopsy cores based on HistoScanning™ signals. abstract_id: PUBMED:24871425 Does HistoScanning™ predict positive results in prostate biopsy? A retrospective analysis of 1,188 sextants of the prostate. Purpose: The role of HistoScanning™ (HS) in prostate biopsy is still indeterminate. Existing literature is sparse and controversial. To provide more evidence on this important clinical topic, we analyzed institutional data from the Martini-Clinic, Prostate Cancer Center, Hamburg. Methods: Patients who received prostate biopsy and who also received HS were included in the study cohort. A single examiner, blinded to pathological results, re-analyzed all HS data in accordance with sextants of the prostate. Each sextant was considered as an individual case. Corresponding results from biopsy and HS were analyzed. The area under the receiver-operating characteristic curve (AUC) for the prediction of a positive biopsy by HS was calculated. Furthermore, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed according to different HS signal volume cutoffs (>0, >0.2 and >0.5 ml). Results: Overall, 198 men were identified and 1,188 sextants were analyzed. The AUC to predict positive biopsy results by HS was 0.58. Sensitivity, specificity, PPV and NPV for HS to predict positive biopsy results per sextant, depending on different HS signal volume cutoffs (>0, >0.2 and >0.5 ml) were 84.1, 27.7, 29.5 and 82.9 %, 60.9, 50.6, 28.8 and 79.7 %, and 40.1, 73.3, 33.1 and 78.8 %, respectively. Conclusions: Positive HS signals do not accurately predict positive prostate biopsy results according to sextant analysis. We cannot recommend a variation of well-established random biopsy patterns or reduction of biopsy cores in accordance with HS signals at the moment. abstract_id: PUBMED:23594146 Prostate HistoScanning: a screening tool for prostate cancer?
Objective: To evaluate Prostate HistoScanning as a screening tool for prostate cancer in a pilot study. Methods: During a 6-month period, 94 men with normal or suspicious digital rectal examination, normal or elevated prostate-specific antigen, or an increased prostate-specific antigen velocity were examined with Prostate HistoScanning. Based on these parameters and HistoScanning analysis, 41 men were referred for prostate biopsy under computer-aided ultrasonographic guidance. The number of random biopsy cores varied depending on the prostate volume. Targeted biopsies were taken when a computer-aided ultrasonographic area was suspicious for malignancy. A logistic regression analysis was carried out to estimate the probability of resulting in a positive prostate biopsy based on the HistoScanning findings. Results: Following a logistic regression analysis, after adjusting for age, digital rectal examination, serum prostate-specific antigen level, prostate volume and tumor lesion volume, every cancer volume increase of 1 mL estimated by HistoScanning was associated with a nearly threefold increase in the probability of resulting in a positive biopsy (odds ratio 2.9; 95% confidence interval 1.2-7.0; P-value 0.02). Prostate cancer was found in 17 of 41 men (41%). In patients with cancer, computer-aided ultrasonography-guided biopsy was 4.5-fold more likely to detect cancer than random biopsy. The prostate cancer detection rate for random biopsy and directed biopsy was 13% and 58%, respectively. HistoScanning-guided biopsy significantly decreased the number of biopsies necessary (P-value <0.0001). Conclusions: Our findings suggest that Prostate HistoScanning might be helpful for the selection of patients in whom prostate biopsies are necessary. This imaging technique can be used to direct biopsies in specific regions of the prostate with a higher cancer detection rate. abstract_id: PUBMED:25860379 Controversial evidence for the use of HistoScanning™ in the detection of prostate cancer. Introduction: Given the growing body of literature since the first description of HistoScanning™ in 2008, there is an unmet need for a contemporary review. Evidence Acquisition: Studies addressing HistoScanning™ in prostate cancer (PCa) were considered to be included in the current review. To identify eligible reports, we relied on a bibliographic search of the PubMed database conducted in January 2015. Evidence Synthesis: Twelve original articles were available to be included in the current review. The existing evidence was reviewed according to the three following topics: prediction of final pathology at radical prostatectomy, prediction of disease stage and application at prostate biopsy. Conclusions: High sensitivity and specificity for HistoScanning™ to predict cancer foci ≥0.5 ml at final pathology were achieved in the pilot study. These results were questioned when HistoScanning™-derived tumor volume did not correlate with final pathology results. Additionally, HistoScanning™ was not able to provide reliable staging information on either extraprostatic extension or seminal vesicle invasion prior to radical prostatectomy. Controversial data also exist regarding the use of HistoScanning™ at prostate biopsy. Specifically, the most encouraging results were recorded in a small patient cohort. Conversely, HistoScanning™ achieved poor prediction of positive biopsies when relying on larger studies.
Finally, the combination of HistoScanning™ and conventional ultrasound achieved lower detection rates than systematic biopsy. Currently, evidence is at best weak and questions whether HistoScanning™ might improve the detection of PCa. abstract_id: PUBMED:31356028 Fusion biopsy of the prostate. Aim: To compare the prostate cancer (PCa) detection rate, accuracy and safety of prostate image-guided fusion biopsy methods (cognitive fusion, software-fusion and HistoScanning-guided biopsy) on the basis of published studies in patients from 48 to 75 years with suspected prostate cancer during primary or repeat biopsy. To identify the limitations of these methods and improve the efficiency of fusion biopsy of the prostate in a further clinical trial. Materials And Methods: A search was carried out in the PubMed, Medline, Web of Science and eLibrary databases using the following queries: (prostate cancer OR prostate adenocarcinoma) AND (MRI or magnetic resonance) AND (targeted biopsy); (prostate cancer OR prostate adenocarcinoma) AND (PHS OR Histoscanning) AND (targeted biopsy) and (prostate cancer OR prostate adenocarcinoma) AND (MRI or magnetic resonance) AND (targeted biopsy) AND (cognitive registration), targeted prostate biopsy, prostate histoscanning, histoscanning, cognitive prostate biopsy. Results: A total of 672 publications were found, of which 25 original scientific papers were included in the analysis (n=4634). According to the results, the PCa detection rate in patients with an average age of 62.5 years (48-75) and an average PSA of 6.3 ng/ml (4.1-10.8), who underwent cognitive fusion biopsy under MRI control (MR-fusion), was 32.5%, compared to 30% and 35% for histoscanning in combination with a systematic biopsy and the combination of methods (MR-fusion biopsy and histoscanning-guided biopsy), respectively. The accuracy of cognitive MR-fusion biopsy was 49.8% (20.8%-82%), the accuracy of the software MR-fusion biopsy was 52.5% (26.5%-69.7%), and the accuracy of histoscanning-guided targeted biopsy was 46.8% (26%-75.8%). The highest values were observed in the patients undergoing primary biopsy (75.8%). Discussion: Currently, imaging methods allow us to change the approach to the diagnosis of PCa by improving the efficiency of prostate biopsy, the only formal method for verifying PCa. A common method for PCa diagnosis in 2018 is a systematic prostate biopsy. However, due to its drawbacks, fusion biopsy under control of MRI or ultrasound has been introduced into clinical practice with superior results. So far, there is a lack of sufficient scientific data to select a specific technique of the fusion biopsy of the prostate. According to the analysis, it was concluded that the incidence of complications did not increase when performing targeted biopsy in addition to the systematic protocol. Conclusion: The efficiency of cognitive MR-fusion biopsy is comparable to software MR-fusion biopsy. Histoscanning-guided biopsy has lower diagnostic value than MR-guided target biopsy using software. The lack of solid conclusions in favor of a particular prostate fusion biopsy technique stresses the relevance of further research on this topic. abstract_id: PUBMED:25501797 Prostate histoscanning true targeting guided prostate biopsy: initial clinical experience. Objective: To evaluate the feasibility of prostate histoscanning true targeting (PHS-TT) guided transrectal ultrasound (TRUS) biopsy. Methods: This is a prospective, single center, pilot study performed during February 2013-September 2013.
All consecutive patients planned for prostate biopsy were included in the study, and all procedures were performed by a single surgeon aided by the specialized true targeting software. Initially, the patients underwent PHS to map the abnormal areas within the prostate that were ≥0.2 cm³. TRUS guided biopsies were performed targeting the abnormal areas with specialized software. Additionally, routine bisextant biopsies were also taken. The final histopathology of the target cores was compared with the bisextant cores. Results: A total of 43 patients underwent combined 'targeted PHS guided' and 'standard 12 core systematic' biopsies. The mean volume of the abnormal areas detected by PHS was 4.3 cm³. The overall cancer detection rate was 46.5 % (20/43), with systematic cores and target cores detecting cancer in 44 % (19/43) and 26 % (11/43), respectively. The mean % cancer/core length of the PHS-TT cores was significantly higher than that of the systematic cores (55.4 vs. 37.5 %; p < 0.05). In biopsy-naïve patients, the cancer detection rate (43.7 % vs. 14.8 %; p = 0.06) and the cancer positivity of the target cores (30.1 vs. 6.8 %; p < 0.01) were higher than in patients with prior biopsies. Conclusion: PHS-TT is feasible and can be an effective tool for real-time guidance of prostate biopsies. abstract_id: PUBMED:25794587 Value of perineal HistoScanning™ template-guided prostate biopsy. Background: Modern imaging modalities improve prostate diagnostics. Objectives: This study was performed to determine the outcome characteristics of biopsy procedures using the results of HistoScanning™ analysis (HS) for identifying prostate cancer (PCa) in patients with perineal template-guided prostate biopsy. Patients And Methods: A total of 104 consecutive men (mean age 69 years, mean PSA 9.9 ng/ml) underwent HS prior to the extended prostate biopsy procedure. Patients received a targeted transperineal (template-assisted) as well as a targeted transrectal prostate biopsy using HS projection reports supplemented by a standardized 14-core systematic transrectal prostate biopsy (Bx). The cancer detection rate was analyzed on the sector level and HS targeted results were correlated to biopsy outcome, sensitivity, specificity, predictive accuracy, negative predictive value (NPV) and positive predictive value (PPV). Results: Of 104 patients, 44 patients (42%) were found to have PCa. Histology detected atypical small acinar proliferation in 3 patients (2.9%), high-grade prostatic intraepithelial neoplasia in 16 (15.4%), and chronic active inflammation in 74 (71.1%). The detection rate for each region was significantly higher in HS-targeted biopsies compared to Bx. The detection rate per patient was not significantly different, although a smaller number of regions were biopsied with the targeted approach. The overall sensitivity, specificity, predictive accuracy, NPV, and PPV on the sector level were 37.2, 85.6, 78.6, 88.7 and 30.8%, respectively. Conclusion: The use of HS analysis results in a higher detection rate of prostate cancer compared to common transrectal ultrasonography (TRUS)-guided Bx. This technique increases the informative value of TRUS imaging and improves the diagnostic impact at least in the targeted biopsy setting. abstract_id: PUBMED:29121982 Application of ultrasound imaging biomarkers (HistoScanning™) improves staging reliability of prostate biopsies. Objective: Imaging biomarkers like HistoScanning™ augment the informative value of ultrasound.
Analogue image-guidance might improve the diagnostic accuracy of prostate biopsies and reduce misclassifications in preoperative staging and grading. Results: Comparison of 77 image-guided versus 88 systematic prostate biopsies revealed that incorrect staging and Gleason misclassification occur less frequently in image-guided than in systematic prostate biopsies. Systematic prostate biopsies (4-36 cores, median 12 cores) tended to detect predominantly unilateral tumors (39% sensitivity, 90.9% specificity, 17.5% negative and 50% positive predictive values). Bilateral tumors were diagnosed more frequently by image-guided prostate biopsies (87.9% sensitivity, 72.7% specificity, 50% negative and 96.8% positive predictive values). Regarding the detection of lesions with high Gleason scores ≥ 3 + 4, systematic prostate and image-guided biopsies yielded sensitivity and specificity rates of 66.7% vs 93.5%, 86% vs 64.5%, as well as negative and positive predictive values of 71.2% vs 87%, and 83.3% vs 79.6%, respectively. A potential reason for systematic prostate biopsies missing the correct laterality and the correct Gleason score was a mismatch between the biopsy template and the respective pathological cancer localization. This supports the need for improved detection techniques such as ultrasound imaging biomarkers and image-adapted biopsies. abstract_id: PUBMED:28753891 Evaluation of Prostate HistoScanning as a Method for Targeted Biopsy in Routine Practice. Background: Prostate HistoScanning (PHS) is a tissue characterization system used to enhance prostate cancer (PCa) detection via transrectal ultrasound imaging. Objective: To assess the impact of supplementing systematic transrectal biopsy with up to three PHS true targeting (TT) guided biopsies on the PCa detection rate and preclinical patient assessment. Design, Setting, And Participants: This was a prospective study involving a cohort of 611 consecutive patients referred for transrectal prostate biopsy following suspicion of PCa. PHS-TT guided cores were obtained from up to three PHS lesions of ≥0.5 cm³ per prostate and only one core per single PHS lesion. Histological outcomes from a systematic extended 12-core biopsy (Bx) scheme and additional PHS-TT guided cores were compared. Outcome Measurements And Statistical Analysis: Comparison of PHS results and histopathology was performed per sextant. The χ² and Mann-Whitney tests were used to assess differences. Statistical significance was set at p<0.05. Results And Limitations: PHS showed lesions of ≥0.5 cm³ in 312 out of the 611 patients recruited. In this group, Bx detected PCa in 59% (185/312) and PHS-TT in 87% (270/312; p<0.001). The detection rate was 25% (944/3744 cores) for Bx and 68% (387/573 cores) for PHS-TT (p<0.001). Preclinical assessment was significantly better when using PHS-TT: Bx found 18.6% (58/312) and 8.3% (26/312), while PHS-TT found 42.3% (132/312) and 20.8% (65/312) of Gleason 7 and 8 cases, respectively (p<0.001). PHS-TT attributed Gleason score 6 to fewer patients (23.4%, 73/312) than Bx did (32.4%, 101/312; p=0.0021). Conclusions: Patients with a suspicion of PCa may benefit from addition of a few PHS-TT cores to the standard Bx workflow. Patient Summary: Targeted biopsies of the prostate are proving to be equivalent to or better than standard systematic random sampling in many studies.
Our study results support supplementing the standard schematic transrectal ultrasound-guided biopsy with a few guided cores harvested using the ultrasound-based prostate HistoScanning true targeting approach in cases for which multiparametric magnetic resonance imaging is not available. abstract_id: PUBMED:33825986 A systematic review and meta-analysis of Histoscanning™ in prostate cancer diagnostics. Context: The value of Histoscanning™ (HS) in prostate cancer (PCa) imaging is much debated, although it has been used in clinical practice for more than 10 years now. Objective: To summarize the data on HS from various PCa diagnostic perspectives to determine its potential. Materials And Methods: We performed a systematic search using 2 databases (Medline and Scopus) on the query "Histoscan*". The primary endpoint was HS accuracy. The secondary endpoints were: correlation of lesion volume by HS and histology, ability of HS to predict extracapsular extension or seminal vesicle invasion. Results: HS improved the cancer detection rate "per core", OR = 16.37 (95% CI 13.2; 20.3), p < 0.0001, I² = 98%, and "per patient", OR = 1.83 (95% CI 1.51; 2.21), p < 0.0001, I² = 95%. The pooled accuracy was markedly low: sensitivity - 0.2 (95% CI 0.19-0.21), specificity - 0.12 (0.11-0.13), AUC 0.12. 8 of 10 studies showed no additional value for HS. The pooled accuracy with histology after RP was relatively better, yet still very low: sensitivity - 0.56 (95% CI 0.5-0.63), specificity - 0.23 (0.18-0.28), AUC 0.4. 9 of 12 studies did not show any benefit of HS. Conclusion: This meta-analysis does not find incremental value for prostate Histoscanning compared with conventional TRUS in prostate cancer screening and targeted biopsy. HS proved to be slightly more accurate in predicting extracapsular extension on RP, but the available data do not allow us to draw any conclusions on its effectiveness in practice. Histoscanning is a modification of ultrasound for prostate cancer visualization. The available data suggest its low accuracy in the screening and detection of prostate cancer. Answer: The evidence regarding the ability of HistoScanning™ (HS) to predict positive results in prostate biopsy is mixed and generally suggests that HS is not reliably predictive of positive biopsy results. Several studies have assessed the performance of HS in predicting biopsy outcomes with varying results: - A study analyzing 1,188 sextants of the prostate found that HS did not accurately predict positive prostate biopsy results, with an area under the receiver-operating characteristic curve (AUC) of only 0.58. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) varied depending on the HS signal volume cutoffs used, but overall, the authors concluded that HS signals should not be used to vary well-established random biopsy patterns or to reduce biopsy cores (PUBMED:24871425). - Another study involving 40 patients found that HS-based true targeting (TT)-derived prostate biopsy was unreliable. The AUCs for predicting biopsy results were low (0.51 to 0.53), and the sensitivity, specificity, PPV, and NPV were not sufficiently high to recommend altering established biopsy standards or reducing biopsy cores based on HS signals (PUBMED:26215749). - Conversely, a pilot study suggested that Prostate HistoScanning might be helpful for selecting patients for prostate biopsies and could direct biopsies to specific regions with a higher cancer detection rate.
However, this study was conducted on a small cohort of 94 men (PUBMED:23594146). - A systematic review and meta-analysis of HS in prostate cancer diagnostics found that HS improved the cancer detection rate per core and per patient, but the pooled accuracy was markedly low, with sensitivity, specificity, and AUC all being low. The majority of studies included in the meta-analysis did not show additional value for HS (PUBMED:33825986). - Other studies have reported that HS may have some utility in guiding biopsies and improving cancer detection rates, but these findings are often based on small patient cohorts or specific clinical settings (PUBMED:25501797, PUBMED:25794587, PUBMED:29121982, PUBMED:28753891). In summary, while some individual studies suggest that HS may have a role in guiding prostate biopsies, the overall evidence indicates that HS does not reliably predict positive prostate biopsy results and should not be used to replace established biopsy protocols. The accuracy and predictive value of HS are generally low, and larger, more robust studies are needed to determine its potential utility in clinical practice.
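Because the HistoScanning studies above report their findings mainly as per-sextant sensitivity, specificity, PPV and NPV at different signal-volume cutoffs, the following minimal Python sketch shows how those four metrics follow from a 2x2 confusion matrix, and why an AUC near 0.5 indicates discrimination barely better than chance. The counts are invented for illustration only and are not taken from any of the cited papers.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    # tp: HS-positive sextants with cancer on biopsy
    # fp: HS-positive sextants without cancer
    # fn: HS-negative sextants with cancer
    # tn: HS-negative sextants without cancer
    return {
        "sensitivity": tp / (tp + fn),  # share of cancerous sextants flagged by HS
        "specificity": tn / (tn + fp),  # share of benign sextants correctly cleared
        "ppv": tp / (tp + fp),          # probability a positive HS signal is cancer
        "npv": tn / (tn + fn),          # probability a negative HS signal is benign
    }

if __name__ == "__main__":
    # Hypothetical counts for one signal-volume cutoff
    for name, value in diagnostic_metrics(tp=50, fp=120, fn=75, tn=400).items():
        print(f"{name}: {value:.3f}")
    # The AUC quoted in the abstracts summarises discrimination across all possible
    # cutoffs; values around 0.5 mean the test hardly separates cancerous from
    # benign sextants better than random guessing.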
Instruction: Practices of weight regulation among elite athletes in combat sports: a matter of mental advantage? Abstracts: abstract_id: PUBMED:23672331 Practices of weight regulation among elite athletes in combat sports: a matter of mental advantage? Context: The combination of extensive weight loss and inadequate nutritional strategies used to lose weight rapidly for competition in weight-category sports may negatively affect athletic performance and health. Objective: To explore the reasoning of elite combat-sport athletes about rapid weight loss and regaining of weight before competitions. Design: Qualitative study. Setting: With grounded theory as a theoretical framework, we employed a cross-examinational approach including interviews, observations, and Internet sources. Sports observations were obtained at competitions and statements by combat-sport athletes were collected on the Internet. Patients Or Other Participants: Participants in the interviews were 14 Swedish national team athletes (9 men, 5 women; age range, 18 to 36 years) in 3 Olympic combat sports (wrestling, judo, and taekwondo). Data Collection And Analysis: Semistructured interviews with 14 athletes from the Swedish national teams in wrestling, judo, and taekwondo were conducted at a location of each participant's choice. The field observations were conducted at European competitions in these 3 sports. In addition, interviews and statements made by athletes in combat sports were collected on the Internet. Results: Positive aspects of weight regulation other than gaining physical advantage emerged from the data during the analysis: sport identity, mental diversion, and mental advantage. Together and individually, these categories point toward the positive aspects of weight regulation experienced by the athletes. Practicing weight regulation mediates a self-image of being "a real athlete." Weight regulation is also considered mentally important as a part of the precompetition preparation, serving as a coping strategy by creating a feeling of increased focus and commitment. Moreover, a mental advantage relative to one's opponents can be gained through the practice of weight regulation. Conclusions: Weight regulation has mentally important functions extending beyond the common notion that combat-sport athletes reduce their weight merely to gain a physical edge over their opponents. abstract_id: PUBMED:35455842 The Association between Rapid Weight Loss and Body Composition in Elite Combat Sports Athletes. Rapid Weight Loss (RWL) is a rapid reduction in weight over a short period of time seeking to attain the norm required for a competition in a particular weight category. RWL has a negative health impact on athletes including the significant muscle damage induced by RWL. This study aimed to identify the association between RWL and body composition among competitive combat athletes (n = 43) in Lithuania. Our focus was laid on the disclosure of their RWL practice by using a previously standardized RWL Questionnaire. The body composition of the athletes was measured by means of the standing-posture 8-12-electrode multi-frequency bioelectrical impedance analysis (BIA) and the electrical signals of 5, 50, 250, 550 and 1000 kHz. This non-experimental cross-sectional study resulted in preliminary findings on the prevalence and profile of RWL among combat athletes in Lithuania. 88% of the athletes surveyed in our study had lost weight in order to compete, with the average weight loss of 4.6 ± 2% of the habitual body mass. 
The athletes started to resort to weight cycling as early as 9 years old, with a mean age of 12.8 ± 2.1 years. The combination of practiced weight loss techniques such as skipping meals (adjusted Odds Ratio (AOR) 6.3; 95% CI: 1.3−31.8), restricting fluids (AOR 5.5; 95% CI: 1.0−31.8), increased exercise (AOR 3.6; 95% CI: 1.0−12.5), and training with rubber/plastic suits (AOR 3.2; 95% CI: 0.9−11.3) predicted the risk of RWL aggressiveness. RWL magnitude potentially played an important role in maintaining the loss of muscle mass in athletes during the preparatory training phase (β −0.01 kg, p < 0.001). Therefore, an adequate regulatory programme should be integrated into the training plans of high-performance combat sports athletes to keep not only the athletes but also their coaches responsible for proper weight control. abstract_id: PUBMED:35859622 Rapid weight loss among elite-level judo athletes: methods and nutrition in relation to competition performance. Background: Rapid weight loss (RWL) followed by rapid weight gain (RWG) is a regular pre-competition routine in combat sports and weightlifting. With the prevalence of these sports exceeding 20% at the 2020 Tokyo Olympics, there are limited data on RWL and RWG practices and their impact on well-being and competitive success in elite-level athletes. Methods: A total of 138 elite-level female and male judokas, 7.7% of the athletes ranked as top 150 on the International Judo Federation Senior World Ranking List (WRL), completed a survey on RWL, RWG, and the consequences of these practices. Results: Our findings showed that 96% of the respondents practice RWL. The average reduced body mass percentage was 5.8 ± 2.3%. Respondents who used any of the dehydration methods - fluid restriction, sauna suit, and/or sauna/hot bath - to reduce weight were 88%, 85%, and 76%, respectively. Furthermore, 91% of the respondents reported reduced energy as a negative consequence of RWL and 21% experienced a collapse episode during the RWL period. Respondents ranked 1-20 on the WRL experienced fewer negative consequences of RWL and RWG (p = 0.002) and had more dietitian and/or medical doctor support (p = 0.040) than lower-ranked respondents. Those who started with RWL practices before the age of 16 (38%) were ranked lower on the WRL (p = 0.004) and reported more negative consequences of RWL and RWG (p = 0.014). Conclusions: This study is the first to provide insight into the RWL practices of worldwide elite-level judokas and provides valuable information for the combat sports society, especially coaches. Proper weight management and optimally timed initiation of RWL practices in a judoka's career may contribute to success at the elite level.
Olympic combat sport athletes (56♂, 38♀) had body mass (BM), stretch stature and dual-energy X-ray absorptiometry derived body composition assessed within 7-21 days of competition. Most athletes were heavier than their weight division. Sport had an effect (p < .05) on several physique traits, including lean mass, lean mass distribution, stretch stature and BMI. BM was strongly positively correlated (r > 0.6) with fat-free mass, fat mass and body fat percentage, but was not predictive of total mass/weight division. The Olympic combat sports differ in competitive format and physiological requirements, which is partly reflected in athletes' physique traits. We provide reference ranges for lean and fat mass across a range of BM. Lighter athletes likely must utilise acute weight loss in order to make weight, whereas heavier athletes can potentially reduce fat mass. abstract_id: PUBMED:31097450 Mental health in elite athletes: International Olympic Committee consensus statement (2019). Mental health symptoms and disorders are common among elite athletes, may have sport-related manifestations within this population and impair performance. Mental health cannot be separated from physical health, as evidenced by mental health symptoms and disorders increasing the risk of physical injury and delaying subsequent recovery. There are no evidence- or consensus-based guidelines for diagnosis and management of mental health symptoms and disorders in elite athletes. Diagnosis must differentiate character traits particular to elite athletes from psychosocial maladaptations. Management strategies should address all contributors to mental health symptoms and consider biopsychosocial factors relevant to athletes to maximise benefit and minimise harm. Management must involve both treatment of affected individual athletes and optimising environments in which all elite athletes train and compete. To advance a more standardised, evidence based approach to mental health symptoms and disorders in elite athletes, an International Olympic Committee Consensus Work Group critically evaluated the current state of science and provided recommendations. abstract_id: PUBMED:35693326 Effects of the lockdown period on the mental health of elite athletes during the COVID-19 pandemic: a narrative review. Purpose: This review aimed to assess the effects of the COVID-19 pandemic lockdown on the mental health of elite athletes. The emotional background influenced their sport career and was examined by questionnaires. Methods: We included original studies that investigated psychological outcomes in elite athletes during COVID-19 lockdown. Sixteen original studies (n = 4475 participants) were analyzed. Results: The findings showed that COVID-19 had an impact on elite athletes' mental health and was linked with stress, anxiety and psychological distress. The magnitude of the impact was associated with athletes' mood state profile, personality and resilience capacity. Conclusion: The lockdown period also impacted elite athletes' mental health and training routines with augmented anxiety, but with fewer consequences than in the general population thanks to adequate emotion regulation and coping strategies. abstract_id: PUBMED:38084133 Elite athletes' mental well-being and life satisfaction: a study of elite athletes' resilience and social support from an Asian unrecognised National Olympic Committee.
Background: This study aimed to investigate elite athletes' mental well-being, and to ascertain whether the personal factor resilience and the social factor social support can play a role in promoting mental well-being and life satisfaction. In addition, this is one of the first studies to investigate well-being among elite athletes who are from a region belonging to an unrecognised National Olympic Committee and are not eligible to join the Olympic Games. Participants And Procedure: Eighty-four full-time elite athletes (37 males, 47 females) with a mean age of 22.36 years participated in this quantitative research study. Formal letters describing the purpose and organiser of the study were sent to the sport entities in Macao asking their permission for the researchers to contact the elite athletes to participate in this study. After gaining permission, the elite athletes belonging to these entities were approached individually to inform them of the purpose of the study and receive their consent. Results: Regression revealed that emotional support and the adaptability component of resilience were strong positive predictors of mental well-being. Additionally, mental well-being was found to be a strong positive predictor of life satisfaction. The results indicated that, in elite athletes, possessing high adaptability and receiving more emotional support could help maintain mental well-being. Conclusions: Implications (based on the findings) are discussed in order to provide insights for policy makers or coaches on how to promote elite athletes' mental well-being. abstract_id: PUBMED:30944086 Psychotherapy for mental health symptoms and disorders in elite athletes: a narrative review. Background: Athletes, like non-athletes, suffer from mental health symptoms and disorders that affect their lives and their performance. Psychotherapy, either as the sole treatment or combined with other non-pharmacological and pharmacological strategies, is a pivotal component of management of mental health symptoms and disorders in elite athletes. Psychotherapy takes the form of individual, couples/family or group therapy and should address athlete-specific issues while being embraced as normative by athletes and their core stakeholders. Main Findings: This narrative review summarises controlled and non-controlled research on psychotherapy for elite athletes with mental health symptoms and disorders. In summary, treatment is similar to that of non-athletes, although with attention to issues that are athlete-specific. Challenges associated with psychotherapy with elite athletes are discussed, including diagnostic issues, deterrents to help-seeking and expectations about services. We describe certain personality characteristics sometimes associated with elite athletes, including narcissism and aggression, which could make psychotherapy with this population more challenging. The literature regarding psychotherapeutic interventions in elite athletes is sparse and largely anecdotal. abstract_id: PUBMED:27488101 Osteoarthritis is associated with symptoms of common mental disorders among former elite athletes. Purpose: The primary aim was to establish the association between osteoarthritis (OA) and the occurrence and comorbidity of symptoms of common mental disorders (CMD: distress, anxiety/depression, sleep disturbance, adverse alcohol use) in a group of former elite athletes (rugby, football, ice hockey, Gaelic sports and cricket). A secondary aim was to explore this association in the subgroups of sports.
Methods: Cross-sectional analysis was performed on the baseline questionnaires from five prospective cohort studies conducted between April 2014 and January 2016 in former elite athletes of rugby, football, ice hockey, Gaelic sports and cricket. The presence of OA (diagnosed by a medical professional) was examined with a single question, and symptoms of CMD were evaluated through multiple validated questionnaires (4DSQ, GHQ-12, PROMIS, AUDIT-C). Results: There was a significant association between OA and symptoms of distress (OR 1.7, 95 % CI 1.2-2.6), sleep disturbance (OR 1.6, 95 % CI 1.1-2.3), adverse alcohol use (OR 1.8, 95 % CI 1.2-2.6) and a comorbidity of symptoms of CMD (OR 1.5, 95 % CI 1.0-2.1) in former elite athletes. Conclusion: OA might be a risk factor for developing symptoms of CMD in former elite athletes. The clinical relevance of this study is that an interdisciplinary approach to the clinical care and support of former elite athletes after their careers is advocated, as the interaction between the physical and mental health issues occurring over the long term is complex. Monitoring OA among former elite athletes should be strengthened, while strategies to prevent symptoms from worsening should be developed and implemented. The self-awareness, prevention and care of mental health problems that might occur after a professional sports career should also be addressed. Level Of Evidence: Level III. abstract_id: PUBMED:35572229 Psychological Distress in Elite Sambo and Recreational Athletes. Background: Previous studies suggest that engagement in any type of physical activity can be protective against mental health issues, whereas elite-level athletes can endure various mental health challenges. The aim of this study was to determine variations in the prevalence of psychological distress among elite sambo athletes and their recreational counterparts. Methods: A sample consisting of 245 athletes (127 males and 118 females) was chosen. Out of the total sample, 105 were elite-level athletes while 140 were recreational athletes. Participants were assessed via the Depression Anxiety Stress Scales-21 to determine their stress in various domains at a given time. Results: Data indicated that all tested differences between elite sambo athletes and recreational athletes were statistically significant; recreational athletes had a higher score on the depression scale, anxiety and stress, and a general distress score than sambo athletes. Although there are no gender differences in psychological distress in the total sample of athletes, elite sambo athletes achieve significantly lower scores in all tested variables than recreational ones. Women who engage in recreational activities stood out as a vulnerable subsample for psychological distress. Conclusion: Future epidemiological and interventional studies should explore optimal strategies to identify mental health needs based on specific sport activity, especially in terms of gender. There is a need to place special emphasis on psychological distress in the context of combat sports. Answer: The practices of weight regulation among elite athletes in combat sports do indeed appear to be a matter of mental advantage. Elite combat-sport athletes engage in rapid weight loss (RWL) and weight cycling as part of their pre-competition preparation, which is not only about gaining a physical edge but also has significant mental aspects.
Athletes view weight regulation as an integral part of their sport identity and consider it mentally important, serving as a coping strategy that creates a feeling of increased focus and commitment. It is also believed to confer a mental advantage over opponents, as it mediates a self-image of being "a real athlete" (PUBMED:23672331). However, the practice of RWL is not without its risks. It has been associated with negative health impacts, including significant muscle damage, and can potentially play a role in maintaining the loss of muscle mass during the preparatory training phase (PUBMED:35455842). Moreover, athletes who start RWL practices before the age of 16 are ranked lower and report more negative consequences of RWL and rapid weight gain (RWG), suggesting that proper weight management and optimal timing of RWL initiation may contribute to success at the elite level (PUBMED:35859622). Despite the physical risks, the mental aspects of weight regulation practices are significant. Athletes may experience reduced energy and even collapse episodes during RWL, but those with more dietitian and/or medical doctor support experience fewer negative consequences (PUBMED:35859622). The mental well-being of athletes is crucial, and resilience and social support are important factors in promoting mental well-being and life satisfaction among elite athletes (PUBMED:38084133). In summary, weight regulation practices among elite combat sports athletes are indeed a matter of mental advantage, as they are seen as a crucial part of the athletes' identity and mental preparation for competition. However, these practices must be carefully managed to mitigate the associated health risks and ensure the athletes' overall well-being (PUBMED:23672331; PUBMED:35455842; PUBMED:35859622; PUBMED:38084133).
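Several of the abstracts above express their findings as (adjusted) odds ratios with 95% confidence intervals. The adjusted values come from multivariable logistic regression, but the basic construction can be illustrated with an unadjusted odds ratio from a 2x2 table. The sketch below uses invented counts, not data from the cited studies.

import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    # a = exposed with outcome,   b = exposed without outcome
    # c = unexposed with outcome, d = unexposed without outcome
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR), Wald method
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

if __name__ == "__main__":
    # Hypothetical cross-tabulation: meal-skipping vs. aggressive rapid weight loss
    or_, lo, hi = odds_ratio_ci(a=18, b=7, c=10, d=8)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")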
Instruction: Eugenol derivatives as potential anti-oxidants: is phenolic hydroxyl necessary to obtain an effect? Abstracts: abstract_id: PUBMED:24372555 Eugenol derivatives as potential anti-oxidants: is phenolic hydroxyl necessary to obtain an effect? Objectives: Eugenol, obtained from clove oil (Eugenia caryophyllata), possesses several biological activities. It is anti-inflammatory, analgesic, anaesthetic, antipyretic, antiplatelet, anti-anaphylactic, anticonvulsant, anti-oxidant, antibacterial, antidepressant, antifungal and antiviral. The anti-oxidant activity of eugenol has already been proven. From this perspective, a series of planned structural derivatives of eugenol was screened to perform structural optimization and consequently increase the potency of these biological activities. Methods: In an attempt to increase structural variability, 16 compounds were synthesized by acylation and alkylation of the phenolic hydroxyl group. Anti-oxidant capacity was assessed based on the capture of the DPPH radical (2,2-diphenyl-1-picryl-hydrazyl), the ABTS radical 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid), measurement of TBARS (thiobarbituric acid-reactive species), and total sulfhydryl and carbonyl content (eugenol derivative final concentrations ranged from 50 to 200 μM). Key Findings: Four derivatives presented an efficient concentration to decrease 50% of the DPPH radical (EC50) < 100 μM, which indicates good potential as free-radical scavengers. Three of these compounds also showed reduction of the ABTS radical. Eugenol derivatives presenting alkyl or aryl (alkylic or arylic) groups substituting hydroxyl 1 of eugenol were effective in reducing lipid peroxidation and protein oxidative damage by carbonyl formation and in increasing total thiol content in cerebral cortex homogenates. In liver, the eugenol derivatives evaluated had no effect. Conclusions: Our results suggest that these molecules are promising anti-oxidant agents. abstract_id: PUBMED:16141589 Trapping effect of eugenol on hydroxyl radicals induced by L-DOPA in vitro. Many researchers have stated that eugenol might inhibit lipid peroxidation at the stage of initiation, propagation, or both, and many attempts have been made to elucidate the mechanism of its antioxidant activity. Nevertheless, details of its mechanism are still obscure. This study was carried out to investigate the trapping effect of eugenol on hydroxyl radical generated from L-3,4-dihydroxyphenylalanine (DOPA) in Milli-Q water and the generation mechanism of the hydroxyl radical by this system, which uses no metallic factor. This was studied by adding L-DOPA and 5,5-dimethyl-1-pyrroline N-oxide (DMPO) to phosphate buffered saline (PBS) or Milli-Q water, and the generation of hydroxyl radical was detected on an ESR spectrum. By this method, the effect of antioxidants was detected as a modification of ESR spectra. We found that eugenol trapped hydroxyl radicals directly, because it had no iron chelating action, did not trap the L-DOPA semiquinone radical and inhibited hydroxyl radicals with or without iron ion. abstract_id: PUBMED:30580605 Synthesis of eugenol derivatives and its anti-inflammatory activity against skin inflammation. Eugenol is a phytochemical present in aromatic plants that has generated considerable interest in the pharmaceutical industries, mainly in cosmetics. A series of eugenol esters (ST1-ST7) and chloro eugenol (ST8) have been synthesized. The structures of newly synthesized compounds were confirmed by 1H and 13C NMR and mass spectrometry.
In an effort to evaluate the pharmacological activity of eugenol derivatives, we explored their anti-inflammatory potential against skin inflammation using in-vitro and in-vivo bioassays. Synthesized derivatives significantly inhibited the production of pro-inflammatory cytokines against LPS-induced inflammation in macrophages. Among all derivatives, ST8 [Chloroeugenol (6-chloro, 2-methoxy-4-(prop-2-en-1-yl)-phenol)] exhibited the most potent anti-inflammatory activity without any cytotoxic effect. We further evaluated its efficacy and safety under in-vivo conditions. ST8 exhibited significant anti-inflammatory activity against TPA-induced skin inflammation without any skin irritation effect on experimental animals. These findings suggested that ST8 may be a useful therapeutic candidate for the treatment of skin inflammation. abstract_id: PUBMED:37175309 Novel Derivatives of Eugenol as a New Class of PPARγ Agonists in Treating Inflammation: Design, Synthesis, SAR Analysis and In Vitro Anti-Inflammatory Activity. The main objective of this research was to develop novel compounds from readily accessed natural products, especially eugenol, with potential biological activity. Eugenol, the principal chemical constituent of clove (Eugenia caryophyllata) from the family Myrtaceae, is renowned for its pharmacological properties, which include analgesic, antidiabetic, antioxidant, anticancer, and anti-inflammatory effects. According to reports, PPARγ regulates inflammatory reactions. The synthesized compounds were structurally analyzed using FT-IR, 1HNMR, 13CNMR, and mass spectroscopy techniques. Molecular docking was performed to analyze binding free energy and important amino acids involved in the interaction between synthesized derivatives and the target protein. The development of the structure-activity relationship is based on computational studies. Additionally, the stability of the best-docked protein-ligand complexes was assessed using molecular dynamic modeling. The in-vitro PPARγ competitive binding Lanthascreen TR-FRET assay was used to confirm the affinity of compounds to the target protein. All the synthesized derivatives were evaluated for in vitro anti-inflammatory activity using an albumin denaturation assay and HRBC membrane stabilization at varying concentrations from 6.25 to 400 µM. Against this background, with the aid of computational research, we designed and synthesized six novel derivatives of eugenol, analyzed them, and utilized the TR-FRET competitive binding assay to screen them for their ability to bind PPARγ. Anti-inflammatory activity evaluation through in vitro albumin denaturation and the HRBC method revealed that 1f exhibits maximum inhibition of heat-induced albumin denaturation at 50% and 85% protection against HRBC lysis at 200 and 400 µM, respectively. Overall, we found novel derivatives of eugenol that could potentially reduce inflammation by PPARγ agonism. abstract_id: PUBMED:37546528 Synthesis, anti-amoebic activity and molecular docking simulation of eugenol derivatives against Acanthamoeba sp. Amoebae of the genus Acanthamoeba can cause diseases such as amoebic keratitis and granulomatous amoebic encephalitis. Until now, treatment options for these diseases have not been fully effective and have several drawbacks. Therefore, research into new drugs is needed for more effective treatment of Acanthamoeba infections. Eugenol, a phenolic aromatic compound mainly derived from cloves, has a variety of pharmaceutical properties.
In this study, nine eugenol derivatives (K1-K9), consisting of five new and four known compounds, were synthesized and screened for their antiamoebic properties against Acanthamoeba sp. The structure of these compounds was characterized spectroscopically by Fourier transform infrared (FTIR), Ultraviolet-Visible (UV-Vis), 1H and 13C Nuclear Magnetic Resonance (NMR) and mass spectrometry (MS). The derived molecules were screened for antiamoebic activity by determining IC50 values based on the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and observation of amoeba morphological changes by light and fluorescence microscopy. Most of the tested compounds possessed strong to moderate cytotoxic effects against trophozoite cells with IC50 values ranging from 0.61 to 24.83 μg/mL. Observation of amoebae morphology by light microscopy showed that the compounds caused the transformed cells to be roundish and reduced in size. Furthermore, fluorescence microscopy observation using acridine orange (AO) and propidium iodide (PI) (AO/PI) staining showed that the cells had damaged membranes, displaying a green cytoplasm with orange-stained lysosomes. Acidification of the lysosomal structure indicated disruption of the internal structure of Acanthamoeba cells when treated with eugenol derivatives. The observed biological results were also confirmed by interaction simulations based on molecular docking between eugenol derivatives and Acanthamoeba profilin. These interactions could affect the actin-binding ability of the protein, disrupting the shape and mobility of Acanthamoeba. The overall results of this study demonstrate that eugenol derivatives can be considered as potential drugs against infections caused by Acanthamoeba. abstract_id: PUBMED:36431899 Essential Oils and Their Compounds as Potential Anti-Influenza Agents. Essential oils (EOs) are chemical substances, mostly produced by aromatic plants in response to stress, that have a history of medicinal use for many diseases. In the last few decades, EOs have continued to gain more attention because of their proven therapeutic applications against the flu and other infectious diseases. Influenza (flu) is an infectious zoonotic disease that affects the lungs and their associated organs. It is a public health problem with a huge health burden, causing a seasonal outbreak every year. Occasionally, it comes as a disease pandemic with unprecedentedly high hospitalization and mortality. Currently, influenza is managed by vaccination and antiviral drugs such as Amantadine, Rimantadine, Oseltamivir, Peramivir, Zanamivir, and Baloxavir. However, the adverse side effects of these drugs, the rapid and unlimited variabilities of influenza viruses, and the emerging resistance of new virus strains to the currently used vaccines and drugs have necessitated the need to obtain more effective anti-influenza agents. In this review, essential oils are discussed in terms of their chemistry, ethnomedicinal values against flu-related illnesses, biological potential as anti-influenza agents, and mechanisms of action. In addition, the structure-activity relationships of lead anti-influenza EO compounds are also examined. This is all to identify leading agents that can be optimized as drug candidates for the management of influenza.
Eucalyptol, germacrone, caryophyllene derivatives, eugenol, terpin-4-ol, bisabolene derivatives, and camphecene are among the promising EO compounds identified, based on their reported anti-influenza activities and plausible molecular actions, while nanotechnology may be a new strategy to achieve the efficient delivery of these therapeutically active EOs to the active virus site. abstract_id: PUBMED:33291666 New Eugenol Derivatives with Enhanced Insecticidal Activity. Eugenol, the generic name of 4-allyl-2-methoxyphenol, is the major component of clove essential oil, and has demonstrated relevant biological potential with well-known antimicrobial and antioxidant actions. New O-alkylated eugenol derivatives, bearing a propyl chain with terminals like hydrogen, hydroxyl, ester, chlorine, and carboxylic acid, were synthesized in the present work. These compounds were later subjected to epoxidation conditions to give the corresponding oxiranes. All derivatives were evaluated for their effect on the viability of the insect cell line Sf9 (Spodoptera frugiperda), demonstrating that structural changes elicit marked effects in terms of potency. In addition, the most promising molecules were evaluated for their impact on cell morphology, caspase-like activity, and potential toxicity towards human cells. Some molecules stood out in terms of toxicity towards insect cells, with morphological assessment of treated cells showing chromatin condensation and fragmentation, which are compatible with the occurrence of programmed cell death, later confirmed by evaluation of caspase-like activity. These findings point out the potential use of eugenol derivatives as semisynthetic insecticides from plant natural products. abstract_id: PUBMED:29615339 Benzoxazine derivatives of phytophenols show anti-plasmodial activity via sodium homeostasis disruption. Development of a new class of anti-malarial drugs is an essential requirement for the elimination of malaria. Bioactive components present in medicinal plants and their chemically modified derivatives could be a way forward towards the discovery of effective anti-malarial drugs. Herein, we describe a new class of compounds, 1,3-benzoxazine derivatives of the pharmacologically active phytophenols eugenol (compound 3) and isoeugenol (compound 4), synthesised on the principles of green chemistry, as anti-malarials. Compound 4 showed the highest anti-malarial activity with no cytotoxicity towards mammalian cells. Compound 4 induced alterations in the intracellular Na+ levels and mitochondrial depolarisation in intraerythrocytic Plasmodium falciparum, leading to cell death. Knowing that the P-type cation ATPase PfATP4 is a regulator of sodium homeostasis, binding of compound 3, compound 4 and eugenol to PfATP4 was analysed by molecular docking studies. Compounds showed binding to the catalytic pocket of PfATP4; however, compound 4 showed stronger binding due to the presence of propylene functionality, which corroborates its higher anti-malarial activity. Furthermore, the anti-malarial half maximal effective concentration of compound 4 was reduced to 490 nM from 17.54 µM with the nanomaterial graphene oxide. Altogether, this study presents the anti-plasmodial potential of benzoxazine derivatives of phytophenols and establishes disruption of parasite sodium homeostasis as their mechanism of action.
Background: Dengue fever virus transmitted by Aedes aegypti causes lethal disease in humans, and, because of the lack of any vaccine, management of this vector, especially with phytochemicals, is essential. In the present investigation, the structure-activity relationship of monoterpenes and their acetyl derivatives was studied to identify structural features that are responsible for mosquitocidal activity. Results: Derivatization of monoterpenes (eugenol, geraniol, linalool, L-menthol and terpineol) followed by structure-activity relationship studies identified all five acetyl derivatives as having enhanced mosquitocidal activity against fourth-instar larvae of Aedes aegypti. Acetylation of the hydroxyl group in general increased activity in comparison with the parent hydroxyl compounds. Based on LC50 values (ppm), the activities could be placed in the following order: eugenyl acetate (50.2) > linalyl acetate (119.7) > terpinyl acetate (287.1) > menthyl acetate (308.4) > geranyl acetate (325.5), as compared with the monoterpenoids: eugenol (82.8) > linalool (242.6) > terpineol (331.7) > L-menthol (365.8) > geraniol (415.0). In eugenyl acetate, the presence of an aromatic ring and a side chain with an allylic double bond makes it the most effective. Conclusion: Bioactive functional groups identified in the study may contribute to the understanding of the larvicidal activity of acetyl derivatives and may help in the development of ecofriendly mosquito larvicidal compounds. abstract_id: PUBMED:34346150 Synthesis and Anti-Oomycete Activity of 1-Sulfonyloxy/Acyloxydihydroeugenol Derivatives. In an endeavor to discover biorational natural-product-based fungicides, two series (26 compounds) of novel 1-sulfonyloxy/acyloxydihydroeugenol derivatives (3a-p and 5a-j) were prepared and assessed for their fungicidal activity against P. capsici Leonian in vitro. The results revealed that, among all compounds, 3a, 5c, and 5e displayed the most potent anti-oomycete activity against P. capsici, with EC50 values of 69.33, 68.81, and 67.77 mg/L, respectively. Overall, the anti-oomycete activities of the 1-acyloxydihydroeugenol derivatives (5a-j) were higher than those of the 1-sulfonyloxydihydroeugenol derivatives (3a-p). This indicates that introducing an acyl group at the hydroxy position of dihydroeugenol improves its anti-oomycete activity more than introducing a sulfonyl group. These preliminary results will pave the way for further modification of dihydroeugenol in the development of potential new fungicides. Answer: The question of whether the phenolic hydroxyl group is necessary for the antioxidant effect of eugenol derivatives has been explored in various studies. According to the findings presented in the abstracts, it appears that the phenolic hydroxyl group is not strictly necessary to obtain an antioxidant effect, but its modification can influence the potency and efficacy of the derivatives. In one study, a series of eugenol derivatives were synthesized by acylation and alkylation of the phenolic hydroxyl group, and several of these derivatives showed good potential as free-radical scavengers, suggesting that the presence of the phenolic hydroxyl group is not essential for antioxidant activity (PUBMED:24372555).
Additionally, the study found that derivatives with alkyl or aryl groups substituting the hydroxyl group were effective in reducing lipid peroxidation and protein oxidative damage, as well as increasing total thiol content in cerebral cortex homogenates, although they had no effect in the liver. Another study investigated the trapping effect of eugenol on hydroxyl radicals and found that eugenol could trap hydroxyl radicals directly, indicating that its antioxidant mechanism does not necessarily rely on the presence of the phenolic hydroxyl group (PUBMED:16141589). Furthermore, the synthesis of eugenol esters and chloro eugenol (ST8) demonstrated that these derivatives could significantly inhibit the production of pro-inflammatory cytokines and exhibit potent anti-inflammatory activity without cytotoxic effects, suggesting that modifications to the eugenol molecule, including changes to the phenolic hydroxyl group, can retain or enhance biological activity (PUBMED:30580605). In summary, while the phenolic hydroxyl group of eugenol is a significant structural feature, the studies suggest that it is not strictly necessary for antioxidant effects, and that derivatives with modifications at this position can still exhibit potent biological activities (PUBMED:24372555; PUBMED:16141589; PUBMED:30580605).
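The abstracts in this entry summarize potency as IC50, EC50, or LC50 values derived from dose-response assays such as the MTT assay. As a purely illustrative aside, the sketch below shows one common way such values are estimated: fitting a four-parameter logistic (Hill) curve to viability measurements. The concentrations, viability readings, and fitted parameters are hypothetical and are not taken from the cited studies.

```python
# Illustrative four-parameter logistic (Hill) fit for estimating an IC50
# from dose-response viability data (e.g., an MTT assay). All numbers are
# hypothetical and are NOT taken from the cited studies.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: % viability as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical concentrations (ug/mL) and % viability readings.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
viability = np.array([98.0, 95.0, 85.0, 60.0, 30.0, 12.0, 5.0])

params, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 100.0, 3.0, 1.0])
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.2f} ug/mL (Hill slope {hill:.2f})")
```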
Instruction: Can radiological assessment of abdominal computerized scans diagnose encapsulating peritoneal sclerosis in long-term peritoneal dialysis patients? Abstracts: abstract_id: PUBMED:26730546 Can radiological assessment of abdominal computerized scans diagnose encapsulating peritoneal sclerosis in long-term peritoneal dialysis patients? Aim: Encapsulating peritoneal sclerosis (EPS) is a rare but potentially devastating complication of long-term peritoneal dialysis (PD). Changes to the peritoneal membrane occur with duration of PD therapy. To determine the potential effect of prospective computerized tomography (CT) scanning, we reviewed the scans of patients who had developed EPS compared with those without EPS. Methods: We retrospectively reviewed CT scans that had been prospectively performed in a screening program for PD patients after 4 years of PD, comparing scans from 18 patients with confirmed EPS and 26 vintage-matched controls without EPS. Anonymized scans were reported independently by two blinded experienced radiologists. Results: Peritoneal thickening, calcification, bowel tethering, thickening and dilatation were significantly more commonly reported in the EPS group. Total combined radiological scores, also including septation within peritoneal fluid, were significantly higher in the EPS group and the greatest for those who died as a consequence of EPS. Simplified scoring based on the presence or absence of each feature, with a cut-off score of ≥3.0, gave a receiver operating characteristic value of 0.87 for EPS, with a sensitivity of 78% and a specificity of 85%. Inter-observer agreement varied from poor to good, being the greatest for calcification and bowel dilatation and the lowest for peritoneal thickening. Conclusion: CT scan reporting can differentiate EPS from peritoneal changes associated with duration of PD therapy. Severity of abnormalities was associated with clinical outcomes. However, inter-observer agreement varies with different radiological appearances, and future studies are required to determine weighting of radiological changes to provide prognostic information for clinicians and patients. abstract_id: PUBMED:36007503 Histopathological Changes of Long-Term Peritoneal Dialysis Using Physiological Solutions: A Case Report and Review of the Literature. Background: Long-term peritoneal dialysis (PD), especially with nonphysiological solutions, is afflicted with the severe complication of encapsulating peritoneal sclerosis (EPS). Physiologic PD solutions have been introduced to reduce pH trauma. Data on peritoneal biopsies in pediatric patients on long-term PD using physiological solutions are scant. Case Report: We report an adolescent who had been on 10-h continuous hourly cycles using mostly 2.27% Physioneal™ for 5 years. There were two episodes of peritonitis in October 2017 (Klebsiella oxytoca) and May 2018 (Klebsiella pneumoniae), which were treated promptly. This adolescent, who lost two kidney transplants from recurrent focal and segmental glomerulosclerosis, underwent a peritoneal membrane biopsy at the time of a third PD catheter placement, 16 months after the second renal transplant. Laparoscopically, the peritoneum appeared grossly normal, but fibrosis and abundant hemosiderin deposition were noted on histology. The thickness of the peritoneum was 200-900 (mean 680) µm; the normal value at 14 years of age is 297 [IQR 229, 384] μm.
The peritoneum biopsy did not show specific EPS findings, as the mesothelial cells were intact, and there was a lack of fibrin exudation, neo-membrane, fibroblast proliferation, infiltration, or calcification. Conclusions: While the biopsy was reassuring with respect to the absence of EPS, significant histopathological changes suggest that avoiding pH trauma may not ameliorate the effects of glucose exposure in long-term PD. abstract_id: PUBMED:35570613 Encapsulated Peritoneal Sclerosis in an Adolescent With Kidney Transplant After Long-Term Peritoneal Dialysis. Encapsulated peritoneal sclerosis is a rare complication of long-term peritoneal dialysis that has a high rate of morbidity and mortality. We present an 18-year-old female patient who was first diagnosed with renal failure at 8 years of age and who had 7 years of peritoneal dialysis and then hemodialysis before kidney transplant from a deceased donor. Before transplant, the patient developed encapsulated peritoneal sclerosis and was treated with tamoxifen and steroids. Three years after transplant, the patient presented with complaints of vomiting, abdominal pain, and abdominal distension and was again diagnosed with encapsulated peritoneal sclerosis. The patient required excretory paracentesis, pulse steroid treatment for 3 days, and treatment with methylprednisone and tamoxifen, which resulted in regression of signs and symptoms. Factors such as long-term peritoneal dialysis, a history of bacterial peritonitis, and use of high-concentration dialysate may cause encapsulated peritoneal sclerosis, but symptoms can recur after transplant, as shown in our patient. Thus, it is important to recognize that encapsulated peritoneal sclerosis may cause graft loss due to the various complications that it can cause. abstract_id: PUBMED:22073836 Chronic abdominal pain in a patient on maintenance peritoneal dialysis. Encapsulating peritoneal sclerosis (EPS) is an uncommon but one of the most serious complications in patients on long-term peritoneal dialysis (PD). The diffuse thickening and sclerosis of the peritoneal membrane that characterizes EPS leads to decreased ultrafiltration and ultimately to bowel obstruction. Given that the prognosis of established EPS is poor, early recognition of the preceding symptoms is essential. Computed tomography of the abdomen is a reliable and noninvasive diagnostic tool. Typical computed tomography features of EPS include peritoneal calcification, bowel wall thickening, peritoneal thickening, loculated fluid collections, and tethered bowel loops. These findings are diagnostic of EPS in the appropriate clinical setting. Here we present a case report of chronic abdominal pain in a patient on maintenance PD representing a case of EPS. abstract_id: PUBMED:11045299 The spectrum of peritoneal fibrosing syndromes in peritoneal dialysis. A variable degree of diffuse peritoneal fibrosis has been documented in all patients who have been on long-term peritoneal dialysis. Peritoneal dialysis-induced diffuse peritoneal fibrosis varies from opacification and "tanning" of the peritoneum, which may have only a moderate detrimental effect on peritoneal transport kinetics, to a progressive, sclerosing encapsulating peritonitis (SEP), which may lead to cessation of peritoneal dialysis and to death. Fewer than 1% of peritoneal dialysis patients develop overt SEP as manifested by combinations of intestinal obstruction, weight loss, and ultrafiltration failure. 
The diagnosis of SEP depends on a combination of laparotomy and radiological features in suspected cases and consequently the true incidence of SEP is most likely underestimated. Several predisposing, interrelated risk factors for both peritoneal fibrosis and sclerosing encapsulating peritonitis have been identified: prolonged duration of peritoneal dialysis, history of severe or recurrent episodes of peritonitis, and higher exposure to hypertonic glucose-based dialysis solutions. Nevertheless, the etiology of SEP is unknown and several causal factors may simultaneously or sequentially initiate and maintain a low-grade serositis that leads to uncontrolled fibroneogenesis. The high mortality rate of SEP has emphasized the need to develop preventive strategies. These strategies include early peritoneal catheter removal to avoid refractory peritonitis, the development of more biocompatible dialysis solutions, restriction of the use of hypertonic glucose-based dialysis solutions during and after episodes of peritonitis, and, perhaps, limiting the duration of peritoneal dialysis in at-risk patients. This approach was followed in a Japanese unit where a subgroup of all patients who had been on peritoneal dialysis for more than 5 years and who had poor ultrafiltration and peritoneal calcification on computed tomography (CT) scan were shown to have peritoneal sclerosis on peritoneal biopsy and were therefore electively transferred to hemodialysis. This acquired spectrum of peritoneal fibrosing syndromes leads to long-term complications in peritoneal dialysis, whereas localized fibrous adhesions secondary to prior abdominal surgery may prevent the successful initiation of peritoneal dialysis. abstract_id: PUBMED:9686629 Factors increasing severity of peritonitis in long-term peritoneal dialysis patients. Peritonitis is the most frequent complication and a leading cause of discontinuation of peritoneal dialysis (PD). Intact epithelial lining, sufficient blood flow, and adequate immunologic responses are vital to eradicate infection. In long-term PD, various pathological changes such as denudation of peritoneal mesothelial cells, duplication of submesothelial and/or capillary basement membranes, submesothelial fibrin deposit, and peritoneal fibrosis have been reported. Causes of these changes of the peritoneum are multifactorial. Commonly used dialysis solutions that are acidic, hypertonic, containing high concentrations of glucose and lactate, contaminated by glucose and/or plastic degradation products are not biocompatible and may induce chronic immune reactions in the peritoneal cavity. Long-term exposure of the peritoneum to dialysis solutions, the peritoneal catheter, and recurrent episodes of peritonitis all contribute to peritoneal injury. In addition, long-term exposure of peritoneal cells such as macrophages, mesothelial cells, and fibroblasts to dialysis solutions may also alter the normal immunologic reactions against bacteria. Peritoneal concentrations of opsonins such as Ig, complement, and protease are approximately 1% of the serum levels and far below the level sufficient to eradicate bacteria due to continuous peritoneal lavage and dilution with dialysis solutions. Furthermore, glycation of IgG induces chronic activation of macrophages and decreases normal opsonic activities against bacteria. 
Fibrin deposits, collagen accumulation, and cellular desert of the peritoneum observed in long-term peritoneal dialysis patients may serve as a safe shelter for bacteria from contact with inflammatory cells and opsonin and delay eradication of bacteria. In conclusion, peritonitis is often more severe in patients on long-term PD. In this setting, peritonitis needs special attention to prevent life-threatening infection and further damage of the peritoneum. abstract_id: PUBMED:33270013 A Successful Treatment of Encapsulating Peritoneal Sclerosis in an Adolescent Boy on Long-term Peritoneal Dialysis: A Case Report. Encapsulating peritoneal sclerosis (EPS) is a rare life-threatening complication associated with peritoneal dialysis (PD). EPS is characterized by progressive fibrosis and sclerosis of the peritoneum, with the formation of a membrane and tethering of loops of the small intestine resulting in intestinal obstruction. It is very rare in children. We present a case of a 16-year-old adolescent boy who developed EPS seven years after being placed on continuous ambulatory peritoneal dialysis (CAPD) complicated by several episodes of bacterial peritonitis. The diagnosis was based on clinical, radiological, intraoperative and histopathological findings. The patient was successfully treated with surgical enterolysis. During a 7-year follow-up, there have been no further episodes of small bowel obstruction documented. He still continues to be on regular hemodialysis and is awaiting a deceased donor kidney transplant. EPS is a long-term complication of peritoneal dialysis and is typically seen in adults. Rare cases may be seen in the pediatric population and require an appropriate surgical approach that is effective and lifesaving for these patients. abstract_id: PUBMED:26280248 Preserving the peritoneal membrane in long-term peritoneal dialysis patients. What Is Known And Objective: Peritoneal dialysis (PD) has been widely used by patients with end-stage renal disease. However, chronic exposure of the peritoneal membrane to bioincompatible PD solutions, and peritonitis and uraemia during long-term dialysis result in peritoneal membrane injury and thereby contribute to membrane changes, ultrafiltration (UF) failure, inadequate dialysis and technical failure. Therefore, preserving the peritoneal membrane is important to maintain the efficacy of PD. This article reviews the current literature on therapeutic agents for preserving the peritoneal membrane. Methods: A literature search of PubMed was conducted using the search terms peritoneal fibrosis, peritoneal sclerosis, membrane, integrity, preserve, therapy and peritoneal dialysis, but not including peritonitis. Published clinical trials, in vitro studies, experimental trials in animal models, meta-analyses and review articles were identified and reviewed for relevance. Results And Discussion: We focus on understanding how factors cause peritoneal membrane changes, the characteristics and mechanisms of peritoneal membrane changes in patients undergoing PD and the types of therapeutic agents for peritoneal membrane preservation. There have been many investigations into the preservation of the peritoneal membrane, including PD solution improvement, the inhibition of cytokine and growth factor expression using renin-angiotensin-aldosterone system (RAAS) blockade, glycosaminoglycans (GAGs), L-carnitine and taurine additives. In addition, there are potential future therapeutic agents that are still in experimental investigations. 
What Is New And Conclusion: The efficacy of many of the therapeutic agents is uncertain because there are insufficient good-quality clinical studies. Overall membrane preservation and patient survival remain unproven with the use of more biocompatible PD solutions. With RAAS blockade, results are still inconclusive, as many of the clinical studies were retrospective. With GAGs, L-carnitine and taurine additives, there is no sufficiently long follow-up clinical study with a large sample size to support their efficacy. Therefore, better-quality clinical studies within this area should be performed. abstract_id: PUBMED:22056978 MicroRNAs in peritoneal dialysis effluent are promising biomarkers for peritoneal fibrosis in peritoneal dialysis patients. Peritoneal fibrosis is a common complication of long-term peritoneal dialysis, and contributes to encapsulating peritoneal sclerosis and eventually peritoneal ultrafiltration failure, which restricts the wide application of peritoneal dialysis. Therefore, the prevention and treatment of peritoneal fibrosis is important to maintain peritoneal membrane integrity and prolong peritoneal dialysis treatment. Unfortunately, neither specific biomarkers nor effective therapies are currently available for peritoneal fibrosis in the clinic. Emerging evidence suggests that extracellular microRNAs in body fluids are promising biomarkers for the diagnosis of diseases. MicroRNAs have been reported to be involved in multiple fibrotic diseases, and the serum levels of specific microRNAs correlate with the degree of fibrosis. Moreover, extracellular microRNAs have been found in peritoneal fluids and ascites. Based on these findings, here we present our hypothesis that extracellular microRNAs associated with peritoneal epithelial-to-mesenchymal transition and fibrosis could potentially be detected in peritoneal dialysis effluent, and serve as novel biomarkers for early assessment and diagnosis of peritoneal fibrosis. abstract_id: PUBMED:28576435 Encapsulating peritoneal sclerosis Encapsulating peritoneal sclerosis is a rare but devastating complication of long-term peritoneal dialysis with a high mortality rate. The incidence is between 0.5 and 2.5%, decreasing with time. EPS is defined as a clinical syndrome with signs of gastrointestinal obstruction, inflammation parameters, and radiological and macroscopic changes. The duration of treatment and the cessation of peritoneal dialysis are the main risks. About 75% of cases occur in patients on hemodialysis or after kidney transplantation. Morphological alterations include disappearance of the mesothelial layer, submesothelial fibrosis, interstitial sclerosis and vasculopathy. Ultrafiltration failure, fast transport status of the peritoneal membrane and loss of sodium sieving, the most powerful predictor, are the functional abnormalities. Biomarkers in peritoneal effluent include cancer antigen 125 and interleukin-6. The pathophysiology is probably a consequence of a multiple-hit process in which the expression of growth factors and cytokines plays a central role. Medical strategies (corticosteroids, tamoxifen) in association with parenteral nutrition and/or surgery (enterolysis) are discussed. Prevention relies on the use of physiological peritoneal dialysis solutions, icodextrin instead of high-glucose-concentration solutions, and peritoneal lavage after stopping peritoneal dialysis. Answer: Yes, radiological assessment of abdominal computerized scans can diagnose encapsulating peritoneal sclerosis (EPS) in long-term peritoneal dialysis patients.
A study that retrospectively compared CT scans from patients with confirmed EPS and those without EPS found that peritoneal thickening, calcification, bowel tethering, thickening, and dilatation were significantly more commonly reported in the EPS group. The study also developed a simplified scoring system based on these radiological features, which showed a receiver operating characteristic value of 0.87 for EPS, with a sensitivity of 78% and specificity of 85%. However, the inter-observer agreement varied, being the greatest for calcification and bowel dilatation and the lowest for peritoneal thickening. This indicates that CT scan reporting can differentiate EPS from peritoneal changes associated with the duration of peritoneal dialysis (PD) therapy, and the severity of abnormalities was associated with clinical outcomes (PUBMED:26730546). Additionally, typical computed tomography (CT) features of EPS include peritoneal calcification, bowel wall thickening, peritoneal thickening, loculated fluid collections, and tethered bowel loops, which are diagnostic of EPS in the appropriate clinical setting (PUBMED:22073836). However, it is important to note that the true incidence of EPS may be underestimated as the diagnosis depends on a combination of laparotomy and radiological features in suspected cases (PUBMED:11045299).
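For readers who want to see how the reported accuracy figures fit together, the short sketch below recomputes sensitivity and specificity from a 2x2 table. The individual cell counts are inferred rather than reported: with 18 EPS cases and 26 controls, a sensitivity of 78% and a specificity of 85% correspond to roughly 14 true positives and 22 true negatives, so the exact values used here are assumptions for illustration only.

```python
# Illustrative 2x2 calculation consistent with the reported figures
# (18 EPS cases, 26 controls, sensitivity ~78%, specificity ~85%).
# The individual cell counts are assumed for illustration only.
true_positive = 14   # EPS cases scoring >= 3.0 (assumed)
false_negative = 4   # EPS cases scoring < 3.0 (assumed)
true_negative = 22   # controls scoring < 3.0 (assumed)
false_positive = 4   # controls scoring >= 3.0 (assumed)

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)
print(f"Sensitivity: {sensitivity:.0%}")  # ~78%
print(f"Specificity: {specificity:.0%}")  # ~85%
```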
Instruction: Is resection of the tendon edge necessary to enhance the healing process? Abstracts: abstract_id: PUBMED:24927882 Is resection of the tendon edge necessary to enhance the healing process? An evaluation of the homeostasis of apoptotic and inflammatory processes in the distal 1 cm of a torn supraspinatus tendon: part I. Background: We hypothesize that the expression of proapoptotic and antiapoptotic molecules and cytokines is dependent on the distance from the torn supraspinatus tendon edge and that this expression may influence its potential for healing. The aim of this work is to evaluate the expression of proapoptotic Bax molecule and caspases 3, 8, and 9; antiapoptotic Bcl-2 molecule; and proinflammatory tumor necrosis factor (TNF) α and anti-inflammatory interleukin 10 (IL-10) in 3 sections taken from a 1-cm section of the edge of a torn supraspinatus tendon: 3 mm distal and 3 mm proximal, as well as the remaining 4-mm middle section between them. Methods: Nine patients, with a mean age of 58 years, were included in the study. All fulfilled strict inclusion criteria regarding the morphology of the tear and reconstruction technique. Samples were taken from the ruptured supraspinatus tendon at the time of arthroscopic repair. Quantitative real-time polymerase chain reaction assay was used for analysis. Results: The expression of caspases 9, 8 and 3; Bax; and TNF-α significantly decreased from the distal to the proximal parts of the tendon edge (P < .05). However, a significant increase in Bcl-2 and IL-10 expression was also found in the same direction (P < .05). Conclusions: Tenocytes can reduce the expression of proapoptotic caspases 3, 8, and 9 and Bax, as well as proinflammatory TNF-α, by increasing the expression of Bcl-2 and IL-10 within 1 cm of the supraspinatus edge in a distal to proximal direction. Resection 4 to 7 mm from the edge of the torn supraspinatus tendon may enhance the healing process by reaching a reasonable compromise between molecular homeostasis of apoptotic and inflammatory processes and mechanical aspects of rotator cuff reconstruction. abstract_id: PUBMED:25440131 Is resection of the tendon edge necessary to enhance the healing process? An evaluation of the expression of collagen type I, IL-1β, IFN-γ, IL-4, and IL-13 in the distal 1 cm of a torn supraspinatus tendon: part II. Background: Type I collagen expression in a damaged supraspinatus tendon is thought to be dependent on the distance from the edge of the tear and the local expression of pro-inflammatory, anti-proliferative, and pro-proliferative cytokines. The study evaluates the expression of type I collagen, pro-inflammatory interleukin (IL) 1β, anti-proliferative interferon-γ (IFN-γ), and pro-proliferative IL-4 and IL-13 cytokines along a 1-cm section taken from the edge of a torn supraspinatus tendon. Three sections were taken: 3 mm distal to the tear, 3 mm proximal to the tear, and the 4-mm middle section between them. Methods: Nine patients (average age, 58 years) were included in the study. All fulfilled strict inclusion criteria regarding tear morphology and reconstruction technique. Samples were taken from the ruptured supraspinatus tendon at the time of arthroscopic repair. Quantitative real-time polymerase chain reaction assay was used for analysis. Results: The expression of type I collagen, IL-4, and IL-13 significantly increased and that of IL-1β and IFN-γ decreased from the distal to the proximal parts of the tendon edge (P < .05).
Conclusions: The expression of type I collagen is dependent on the distance from the edge of the torn supraspinatus tendon, the balance between anti-proliferative IFN-γ and pro-proliferative IL-4 and IL-13, and the expression of pro-inflammatory IL-1β. Hence, whereas resection of the distal 3 mm of the torn supraspinatus tendon edge eliminates its least valuable part, resection between 4 and 7 mm may enhance the healing process by reaching a reasonable compromise between the mechanical features of the tendon characterized by collagen type I expression and the technical abilities of reconstruction. abstract_id: PUBMED:37738787 The mechanisms and functions of TGF-β1 in tendon healing. Tendon injury accounts for 30% of musculoskeletal diseases and often leads to disability, pain, healthcare cost, and lost productivity. Following injury to tendon, tendon healing proceeds via three overlapping healing processes. However, due to the structural defects of the tendon itself, the tendon healing process is characterized by the formation of excessive fibrotic scar tissue, and injured tendons rarely return to native tendons, which can easily contribute to tendon reinjury. Moreover, the resulting fibrous scar is considered to be a precipitating factor for subsequent degenerative tendinopathy. Despite this, therapies are almost limited because underlying molecular mechanisms during tendon healing are still unknown. Transforming Growth Factor-β1 (TGF-β1) is known as one of most potent profibrogenic factors during tendon healing process. However, blockage TGF-β1 fails to effectively enhance tendon healing. A detailed understanding of real abilities of TGF-β1 involved in tendon healing can bring promising perspectives for therapeutic value that improve the tendon healing process. Thus, in this review, we describe recent efforts to identify and characterize the roles and mechanisms of TGF-β1 involved at each stage of the tendon healing and highlight potential roles of TGF-β1 leading to the fibrotic response to tendon injury. abstract_id: PUBMED:32190920 The role of the macrophage in tendinopathy and tendon healing. The role of the macrophage is an area of emerging interest in tendinopathy and tendon healing. The macrophage has been found to play a key role in regulating the healing process of the healing tendon. The specific function of the macrophage depends on its functional phenotype. While the M1 macrophage phenotype exhibits a phagocytic and proinflammatory function, the M2 macrophage phenotype is associated with the resolution of inflammation and tissue deposition. Several studies have been conducted on animal models looking at enhancing or suppressing macrophage function, targeting specific phenotypes. These studies include the use of exogenous biological and pharmacological substances and more recently the use of transgenic and genetically modified animals. The outcomes of these studies have been promising. In particular, enhancement of M2 macrophage activity in the healing tendon of animal models have shown decreased scar formation, accelerated healing, decreased inflammation and even enhanced biomechanical strength. Currently our understanding of the role of the macrophage in tendinopathy and tendon healing is limited. Furthermore, the roles of therapies targeting macrophages to enhance tendon healing is unclear. Clinical Significance: An increased understanding of the significance of the macrophage and its functional phenotypes in the healing tendon may be the key to enhancing tendon healing. 
This review will present the current literature on the function of macrophages in tendinopathy and tendon healing and the potential of therapies targeting macrophages to enhance tendon healing. abstract_id: PUBMED:17996614 Tendon healing. An understanding of the processes of tendon healing and tendon-to-bone healing is important for the intraoperative and postoperative management of patients with tendon ruptures or of patients requiring tendon transfers in foot and ankle surgery. Knowledge of the normal process allows clinicians to develop strategies when normal healing fails. This article reviews the important work behind the identification of the normal phases and control of tendon healing. It outlines the failed response in tendinopathy and describes tendon-to-bone healing in view of its importance in foot and ankle surgery. abstract_id: PUBMED:22809137 Tendon healing: repair and regeneration. Injury and degeneration of tendon, the soft tissue that mechanically links muscle and bone, can cause substantial pain and loss of function. This review discusses the composition and function of healthy tendon and describes the structural, biological, and mechanical changes initiated during the process of tendon healing. Biochemical pathways activated during repair, experimental injury models, and parallels between tendon healing and tendon development are emphasized, and cutting-edge strategies for the enhancement of tendon healing are discussed. abstract_id: PUBMED:30135861 Ultrasonographic Evaluation of the Early Healing Process After Achilles Tendon Repair. Background: Little is known about early healing of repaired Achilles tendons on imaging, particularly up to 6 months postoperatively, when patients generally return to participation in sports. Purpose: To examine changes in repaired Achilles tendon healing with ultrasonography for up to 12 months after surgery. Study Design: Case series; Level of evidence, 4. Methods: Ultrasonographic images of 26 ruptured Achilles tendons were analyzed at 1, 2, 3, 4, 6, and 12 months after primary repair. The cross-sectional areas (CSAs) and intratendinous morphology of the repaired tendons were evaluated using the authors' own grading system (tendon repair scores), which assessed the anechoic tendon defect area, intratendinous hyperechoic area, continuity of intratendinous fibrillar appearance, and paratendinous edema. Results: The mean ratios (%) of the CSA for the affected versus unaffected side of repaired Achilles tendons gradually increased postoperatively, reached a maximum (632%) at 6 months, and then decreased at 12 months. The mean tendon repair scores increased over time and reached a plateau at 6 months. Conclusion: Ultrasonography is useful to observe the intratendinous morphology of repaired Achilles tendons and to provide useful information for patients who wish to return to sports. Clinical parameters such as strength, functional performance, and quality of healed repaired tendons should also be assessed before allowing patients to return to sports. abstract_id: PUBMED:20187462 Effects of growth factors on tendon healing Objective: To review the research and delivery methods of growth factors in tendon injuries, and to point out the problems at present as well as to predict the trend of development in this field. Methods: Domestic and international literature concerning growth factors to enhance tendon and ligament healing in recent years was extensively reviewed and thoroughly analyzed. 
Results: Growth factors can promote tendon healing, improve mechanical properties, and reduce postoperative adhesion. The use of gene-transfer technology to deliver growth factors for tendon repair shows advantages in many respects. Conclusion: Growth factors play a vital role in tendon healing. Their rational use, through direct application or gene-transfer techniques, is of great value for the healing process. abstract_id: PUBMED:33569739 Metabolic Regulation of Tendon Inflammation and Healing Following Injury. Purpose Of Review: This review seeks to provide an overview of the role of inflammation and metabolism in tendon cell function, tendinopathy, and tendon healing. We have summarized the state of knowledge in both tendon and enthesis. Recent Findings: Recent advances in the field include a substantial improvement in our understanding of tendon cell biology, including the heterogeneity of the tenocyte environment during homeostasis, the diversity of the cellular milieu during in vivo tendon healing, and the effects of inflammation and altered metabolism on tendon cell function in vitro. In addition, the mechanisms by which altered systemic metabolism, such as diabetes, disrupts tendon homeostasis continue to be better understood. A central conclusion of this review is the critical need to better define fundamental cellular and signaling mechanisms of inflammation and metabolism during tendon homeostasis, tendinopathy, and tendon healing in order to identify therapies to enhance or maintain tendon function. abstract_id: PUBMED:29345715 Tendon - function-related structure, simple healing process and mysterious ageing. Tendons are connective tissue structures of paramount importance to the human ability of locomotion. The understanding of their physiology and pathology is gaining importance as advances in regenerative medicine are being made today. So far, very few studies have been conducted to extend the knowledge about the pathology, healing response and management of tendon lesions. In this paper we summarise current knowledge on the structure, healing process and ageing of tendons. The structure of tendon is optimised for the best performance of the tissue. Despite the simplicity of the healing response, numerous studies have shown that problems with full recovery are common and much more significant than previously thought; that is why we discuss the issue of immobilisation and mechanical stimulation during the healing process. The phenomenon of tendon ageing is poorly understood. Although it seems to be a natural and painless process, it is completely different from degeneration in tendinopathy. Recent studies of biological treatment have reported faster, more optimal healing of tendons when augmented by growth factors and stem cells. Despite advances in the biology of tendons, management of their injuries is still a challenge for physicians; therefore, further studies are required to improve treatment outcomes. Answer: The resection of the tendon edge may be necessary to enhance the healing process, as suggested by studies evaluating the molecular and cellular aspects of tendon healing. Specifically, research on the torn supraspinatus tendon indicates that the expression of proapoptotic molecules (such as caspases 3, 8, and 9, and Bax) and proinflammatory cytokines (like TNF-α) significantly decreases from the distal to the proximal parts of the tendon edge.
Conversely, the expression of antiapoptotic (Bcl-2) and anti-inflammatory (IL-10) molecules significantly increases in the same direction. This suggests that tenocytes can modulate the expression of these molecules within 1 cm of the supraspinatus edge in a distal to proximal direction. Resection 4 to 7 mm from the edge of the torn supraspinatus tendon may thus enhance the healing process by achieving a balance between molecular homeostasis of apoptotic and inflammatory processes and the mechanical aspects of rotator cuff reconstruction (PUBMED:24927882). Additionally, the expression of type I collagen, which is crucial for tendon structure, along with pro-proliferative cytokines (IL-4 and IL-13), increases from the distal to the proximal parts of the tendon edge, while the expression of pro-inflammatory IL-1β and anti-proliferative IFN-γ decreases. This suggests that resection of the distal 3 mm of the torn supraspinatus tendon edge removes its least valuable part, and resection between 4 and 7 mm may enhance the healing process by reaching a compromise between the mechanical features of the tendon characterized by collagen type I expression and the technical abilities of reconstruction (PUBMED:25440131). In general, tendon healing is a complex process involving various cells and molecular pathways, including the role of macrophages in regulating the healing process and the influence of growth factors on tendon repair (PUBMED:32190920, PUBMED:20187462). Understanding the mechanisms of tendon healing, such as the roles of TGF-β1 and the metabolic regulation of tendon inflammation and healing, is crucial for developing therapeutic strategies to improve tendon repair (PUBMED:37738787, PUBMED:33569739).
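The expression gradients cited in this answer were measured by quantitative real-time PCR. The abstracts do not state which quantification scheme was used, so the sketch below shows only the standard 2^-ΔΔCt calculation that is commonly applied to such comparisons; the Ct values, the reference gene, and the resulting fold change are hypothetical.

```python
# Generic 2^-delta-delta-Ct sketch for comparing relative gene expression
# between two tendon sections by qRT-PCR. The Ct values and the reference
# gene are hypothetical; the cited abstracts do not report raw data or the
# exact quantification scheme used.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Return fold change of a target gene (sample vs. control) via 2^-ddCt."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    ddct = delta_ct_sample - delta_ct_control
    return 2 ** (-ddct)

# Hypothetical Ct values: collagen type I in a proximal vs. a distal section,
# each normalized to an assumed reference gene (e.g., GAPDH).
fold_change = relative_expression(ct_target_sample=22.0, ct_ref_sample=18.0,
                                  ct_target_control=25.0, ct_ref_control=18.5)
print(f"Proximal vs. distal fold change: {fold_change:.1f}x")
```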
Instruction: Is Late Recurrence a Predictive Clinical Marker for Better Sunitinib Response in Metastatic Renal Cell Carcinoma Patients? Abstracts: abstract_id: PUBMED:26320661 Is Late Recurrence a Predictive Clinical Marker for Better Sunitinib Response in Metastatic Renal Cell Carcinoma Patients? Background: We investigated the clinicopathological features in patients with recurrent renal cell carcinoma (RCC) within 5 years or more than 5 years after nephrectomy and determined predictors of overall survival (OS) and progression-free survival (PFS) after disease recurrence in the administration of first-line sunitinib in the treatment of metastatic RCC (mRCC). Patients And Methods: In this study we enrolled 86 Turkish patients with mRCC who received sunitinib. Univariate analyses were performed using the log-rank test. Results: Fifty-six patients (65%) were diagnosed with disease recurrence within 5 years after radical nephrectomy (early recurrence) and 30 patients (35%) were diagnosed with recurrence more than 5 years after radical nephrectomy (late recurrence). Fuhrman grade was statistically significantly different between the 2 groups (P = .013). The late recurrence patients were significantly associated with the Memorial Sloan Kettering Cancer Center favorable risk group compared with patients with early recurrence (P = .001). There was a statistically significant correlation between recurrence time and the rate of objective remission (ORR) (the late recurrence group vs. the early recurrence group: 43.3% vs. 14.3%, respectively; P = .004). From the time of disease recurrence, the median OS was 42.0 (95% confidence interval [CI], 24.4-59.5) months in the late recurrence group, and 16 (95% CI, 11.5-20.4) months in the early recurrence group (P = .001). Median PFS was 8 (95% CI, 4.05-11.9) months in the early recurrence group, and 20 (95% CI, 14.8-25.1) months in the late recurrence group (P ≤ .001). Conclusion: The study demonstrated a potential prognostic value of late recurrence in terms of PFS, OS, and ORR. abstract_id: PUBMED:29187872 Tumoral ANXA1 Is a Predictive Marker for Sunitinib Treatment of Renal Cancer Patients. Background and aims: There is no established predictive marker for the treatment of renal cancer. Metastatic renal cell carcinoma (mRCC) patients are often treated with sunitinib, a tyrosine kinase inhibitor. Sunitinib's anti-cancer effect is at least partly mediated through interfering with angiogenesis. Our aim with the current study was to assess annexin A1 (ANXA1), which stimulates angiogenesis, as a predictive marker for sunitinib therapy in mRCC patients. Since previous studies have indicated a predictive potential for cubilin, we also investigated the predictive value of ANXA1 combined with cubilin. Methods: ANXA1 expression was analysed in tumor tissue from a cohort of patients with advanced RCC (n=139) using immunohistochemistry. Ninety-nine of the patients were treated with sunitinib in the first or second-line setting. Twenty-two of these were censored because of toxicity leading to the termination of treatment, and the remaining (n=77) were selected for the present study. Results: Twenty-five (32%) out of seventy-seven of the tumors lacked ANXA1 in the cytoplasm. On statistical analyses using the Kaplan-Meier method, ANXA1-negative tumors were significantly associated with a longer treatment benefit in terms of progression-free survival (PFS). Overall survival was also significantly better for patients with ANXA1 negative tumors.
The combination of ANXA1-positive and cubilin-negative expression could define the group not benefitting from treatment more accurately than ANXA1 alone. Conclusions: Our results indicate that cytoplasmic expression of ANXA1 is a negative predictive marker for sunitinib therapy in mRCC patients. A possible explanation for this finding is that sunitinib's anti-angiogenic effect cannot overcome the pro-angiogenic drive from many ANXA1 proteins. abstract_id: PUBMED:31289593 Tumoral Pyruvate Kinase L/R as a Predictive Marker for the Treatment of Renal Cancer Patients with Sunitinib and Sorafenib. Background and aims: Treatment with tyrosine kinase inhibitors (TKI) like sunitinib and sorafenib has improved the prognosis of patients with metastatic renal cell cancer (mRCC). No predictive marker is available to select patients who will gain from these treatments. Tumoral pyruvate kinase L/R (PKLR) is a membrane protein with highly specific expression in the renal tubule. We have previously shown that the tumoral expression of cubilin (CUBN) is associated with progression free survival (PFS) in mRCC patients treated with sunitinib and sorafenib. The aim of the present study was to investigate if PKLR can predict response in these patients, alone and/or in combination with CUBN. Methods: A tissue microarray (TMA) was constructed of tumor samples from 139 mRCC patients. One hundred and thirty-six of these patients had been treated with sunitinib or sorafenib in the first or second-line setting. Thirty patients suffered from early severe toxicity leading to the termination of treatment. The remaining patients (n=106) were selected for the current study. Results: Fifty-five (52%) of the tumors expressed membranous PKLR. Patients with PKLR tumor expression experienced a significantly longer PFS compared to patients with no expression (eight versus five months, p = 0.019). Overall survival (OS) was also significantly better for patients with PKLR expression. In addition, the combined expression of PKLR and CUBN resulted in a higher predictive value than either marker alone. Conclusions: In this real-world study we show that tumoral PKLR membrane expression is a positive predictive biomarker for sunitinib and sorafenib treatment in patients suffering from mRCC. Our results also indicate that the combined expression with cubilin can select patients with no benefit from treatment more accurately than PKLR alone. abstract_id: PUBMED:26045116 Very late recurrence of renal cell carcinoma experiencing long-term response to sunitinib: a case report. Renal cell carcinoma (RCC) is responsible for 4% of all neoplasms in adults and for 80% of all primary renal tumors. Metastatic RCC is resistant to all cytotoxic agents and generally prognosis is poor. However, the clinical behavior of RCC is unpredictable, and late recurrences of disease can occur even after several years from the initial surgical approach, so response to the currently available targeted agents is uncertain, due to the lack of reliable prognostic and predictive factors. We report the case of a patient who developed a metastatic recurrence of RCC 16 years after primary treatment, in spite of metastatic disease at diagnosis. At the time of relapse, the disease showed a surprisingly long-term response to Sunitinib, which is maintained after 74 months of treatment.
This case report highlights the unpredictable behavior of RCC and underlines the presence of a subset of patients with metastatic RCC achieving long-term response to Sunitinib, despite poor clinical features. In this subset of patients, an important clinical question arises about the appropriate duration of treatment and the need to continue it indefinitely. abstract_id: PUBMED:32321460 Tumor endothelial ELTD1 as a predictive marker for treatment of renal cancer patients with sunitinib. Background: Patients with metastatic renal cell cancer (mRCC) are commonly treated with the tyrosine kinase inhibitor sunitinib, which blocks signalling from vascular endothelial growth factor (VEGF) - and platelet-derived growth factor-receptors, inhibiting development of new blood vessels. There are currently no predictive markers available to select patients who will gain from this treatment. Epidermal growth factor, latrophilin and seven transmembrane domain-containing protein 1 (ELTD1) is up-regulated in tumor endothelial cells in many types of cancer and may be a putative predictive biomarker due to its association with ongoing angiogenesis. Methods: ELTD1, CD34 and VEGF receptor 2 (VEGFR2) expressions were analysed in tumor vessels of renal cancer tissues from 139 patients with mRCC using immunohistochemistry. Ninety-nine patients were treated with sunitinib as the first or second-line therapy. Early toxicity, leading to the termination of the treatment, eliminated 22 patients from the analyses. The remaining (n = 77) patients were included in the current study. In an additional analysis, 53 sorafenib treated patients were evaluated. Results: Patients with high ELTD1 expression in the tumor vasculature experienced a significantly better progression free survival (PFS) with sunitinib treatment as compared to patients with low ELTD1 expression (8 versus 5.5 months, respectively). The expression level of CD34 and VEGFR2 showed no correlation to sunitinib response. In sorafenib treated patients, no association with ELTD1 expression and PFS/OS was found. Conclusions: Our results identify tumor vessel ELTD1 expression as a positive predictive marker for sunitinib-treatment in patients suffering from mRCC. The negative results in the sorafenib treated group supports ELTD1 being a pure predictive and not a prognostic marker for sunitinib therapy. abstract_id: PUBMED:28260162 Tumoral cubilin is a predictive marker for treatment of renal cancer patients with sunitinib and sorafenib. Purpose: Tyrosine kinase inhibitors like sunitinib and sorafenib are commonly used to treat metastatic renal cell cancer patients. Cubilin is a membrane protein expressed in the proximal renal tubule. Cubilin and megalin function together as endocytic receptors mediating uptake of many proteins. There is no established predictive marker for metastatic renal cell cancer patients and the purpose of the present study was to assess if cubilin can predict response to treatment with tyrosine kinase inhibitors. Methods: Cubilin protein expression was analyzsed in tumor tissue from a cohort of patients with metastatic renal cell cancer (n = 139) using immunohistochemistry. One hundred and thirty six of the patients were treated with sunitinib or sorafenib in the first- or second-line setting. Thirty of these were censored because of toxicity leading to the termination of treatment and the remaining (n = 106) were selected for the current study. Results: Fifty-three (50%) of the tumors expressed cubilin in the membrane. 
The median progression-free survival was 8 months in patients with cubilin expressing tumors and 4 months in the cubilin negative group. In addition, the overall survival was better for patients with cubilin positive tumors. We also found that the fraction of cubilin negative patients was significantly higher in the non-responding group (PFS ≤3 months) compared to responding patients (PFS >3 months). Conclusions: We show for the first time that tumoral expression of cubilin is a positive predictive marker for treatment of metastatic renal cell cancer patients with sunitinib and sorafenib. abstract_id: PUBMED:24922691 Hypothyroidism as a predictive clinical marker of better treatment response to sunitinib therapy. Background: Tyrosine kinase inhibitors are standard treatment in patients with metastatic renal cell carcinoma (mRCC). Several studies have indicated that side-effects including hypothyroidism may serve as potential predictive biomarkers of treatment efficacy. Patients And Methods: All patients with clear cell mRCC treated with sunitinib in the first-line setting in our Center between November 2008 and October 2013 were included. Thyroid function was assessed after every 2 cycles. Prognostic factors were tested using Cox proportional hazards model for univariate analysis. Results: During treatment, 29.3% developed hypothyroidism, with a median of peak TSH values of 34.4 mIU/L. Patients who had both TSH >4 mIU/L and were receiving substitution therapy with levothyroxine had prolonged PFS compared to all other patients (25.3 months vs. 9.0 months; p=0.042). Conclusion: The rate of hypothyroidism as a side-effect of sunitinib in patients with mRCC is significant. Patients with symptomatic hypothyroidism experienced significantly longer PFS, but without difference in OS. abstract_id: PUBMED:26417621 Very late recurrence of renal cell carcinoma experiencing long-term response to sunitinib: a case report. Renal cell carcinoma (RCC) is responsible for 4% of all neoplasms in adults and for 80% of all primary renal tumors. Metastatic RCC is resistant to all cytotoxic agents and generally prognosis is poor. However, the clinical behavior of RCC is unpredictable, and late recurrences of disease can occur even after several years from the initial surgical approach, so response to the currently available targeted agents is uncertain, due to the lack of reliable prognostic and predictive factors. We report the case of a patient who developed a metastatic recurrence of RCC 16 years after primary treatment, in spite of metastatic disease at diagnosis. At the time of relapse, the disease showed a surprisingly long-term response to Sunitinib, which is maintained after 74 months of treatment. This case report highlights the unpredictable behavior of RCC and underlines the presence of a subset of patients with metastatic RCC achieving long-term response to Sunitinib, despite poor clinical features. In this subset of patients, an important clinical question arises about the appropriate duration of treatment and the need to continue it indefinitely. abstract_id: PUBMED:29721091 Soluble CD146 is a predictive marker of pejorative evolution and of sunitinib efficacy in clear cell renal cell carcinoma.
The objective of the study was to use CD146 mRNA to predict the evolution of patients with non-metastatic clear cell renal cell carcinoma (M0 ccRCC) towards metastatic disease, and to use soluble CD146 (sCD146) to anticipate relapses on reference treatments by sunitinib or bevacizumab in patients with metastatic ccRCC (M1). Methods: A retrospective cohort of M0 patients was used to determine the prognostic role of intra-tumor CD146 mRNA. Prospective multi-center trials were used to define plasmatic sCD146 as a predictive marker of sunitinib or bevacizumab efficacy for M1 patients. Results: High tumor levels of CD146 mRNA were linked to shorter disease-free survival (DFS) and overall survival (OS). ccRCC patients from prospective cohorts with plasmatic sCD146 variation <120% following the first cycle of sunitinib treatment had a longer progression-free survival (PFS) and OS. The plasmatic sCD146 variation did not correlate with PFS or OS for the bevacizumab-based treatment. In vitro, cells resistant to sunitinib expressed high levels of CD146 mRNA and protein in comparison to sensitive cells. Moreover, recombinant CD146 protected cells from the sunitinib-dependent decrease of cell viability. Conclusion: CD146/sCD146 produced by tumor cells is a relevant biological marker of ccRCC aggressiveness and relapse on sunitinib treatment. abstract_id: PUBMED:25948423 Is sunitinib-induced hypothyroidism a predictive clinical marker for better response in metastatic renal cell carcinoma patients? Background: The main goal of this study was to examine whether the occurrence of hypothyroidism during sunitinib therapy in patients with metastatic renal cell carcinoma (mRCC) is associated with a better outcome. Methods: The study enrolled 81 patients with pathologically proven mRCC who were treated with sunitinib between March 2008 and June 2013. Thyroid function evaluation comprised free thyroxine (FT4) and thyroid-stimulating hormone (TSH) before treatment and at day 1 of each 6-week cycle. Survival analysis was performed using the Kaplan-Meier method, and the differences among the groups were determined using the log-rank test. Results: Hypothyroidism occurred in 30 (37%) of 81 patients within a median 3 months (range 1-18) of treatment initiation. There was a statistically significant correlation between the occurrence of hypothyroidism during treatment and the rate of objective remission (ORR) (hypothyroid patients vs euthyroid patients: 46.7 vs 13.7%, respectively; P = 0.001). Median progression-free survival (PFS) was 10 (95% CI 6.13-13.8) months in the euthyroid patients, and 17 (95% CI 9.33-24.6) months in the hypothyroid patients (P = 0.001). The median overall survival (OS) was 39 (95% CI 25.4-52.5) months in the hypothyroid patients and 20 (95% CI 14.7-25.2) months in the euthyroid patients (P = 0.019). Conclusions: The occurrence of hypothyroidism during treatment in patients was significantly associated with longer PFS, OS and better ORR in the current study. Answer: Yes, late recurrence is a predictive clinical marker for better sunitinib response in metastatic renal cell carcinoma (mRCC) patients. A study demonstrated that patients with late recurrence, defined as disease recurrence more than 5 years after radical nephrectomy, had significantly better outcomes when treated with sunitinib compared to those with early recurrence (within 5 years). The late recurrence group had a higher rate of objective remission (43.3% vs. 14.3%), longer median overall survival (OS) (42.0 months vs.
16 months), and longer median progression-free survival (PFS) (20 months vs. 8 months) (PUBMED:26320661).
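The PFS and OS figures cited in this answer come from Kaplan-Meier estimates compared with the log-rank test, as described in the abstract. As a hedged illustration of how such medians are obtained, the sketch below implements a minimal Kaplan-Meier median estimator on two small made-up cohorts; the survival times and censoring flags are hypothetical and merely chosen so that the medians land near the reported 20 versus 8 months.

```python
# Minimal Kaplan-Meier sketch of the kind of analysis behind the reported
# median PFS/OS comparisons. Survival times (months) and censoring flags
# below are hypothetical; they are not the patient-level data of the study.
def kaplan_meier_median(times, events):
    """Return the median survival time from (time, event) pairs.
    events: 1 = progression/death observed, 0 = censored."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    for t, event in data:
        if event == 1:
            surv *= (at_risk - 1) / at_risk
            if surv <= 0.5:
                return t
        at_risk -= 1
    return None  # median not reached

late = ([25, 22, 20, 18, 15, 12, 9], [1, 1, 1, 0, 1, 1, 0])   # hypothetical
early = ([12, 10, 8, 7, 6, 5, 3], [1, 1, 1, 1, 0, 1, 1])      # hypothetical
print("Late-recurrence median PFS:", kaplan_meier_median(*late), "months")
print("Early-recurrence median PFS:", kaplan_meier_median(*early), "months")
```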
Instruction: Could Adherence to Quality of Care Indicators for Hospitalized Patients With Cirrhosis-Related Ascites Improve Clinical Outcomes? Abstracts: abstract_id: PUBMED:26729545 Could Adherence to Quality of Care Indicators for Hospitalized Patients With Cirrhosis-Related Ascites Improve Clinical Outcomes? Objectives: The diagnosis of cirrhotic ascites is associated with significant morbidity, mortality, and reduced health-related quality of life. Adherence by health professionals to quality indicators (QIs) of care for ascites is low. We evaluated the effect of adherence to ascites QIs on clinical outcomes for patients hospitalized with new onset cirrhotic ascites. Methods: The medical records of 302 patients admitted with new onset cirrhotic ascites were interrogated for demographic and clinical data and adherence to eight Delphi panel-derived QIs for ascites management. Associations between adherence to each QI and 30-day emergent readmission and 90-day mortality were analyzed. Results: The majority of patients were males (68.9%) over 50 years of age (mean 57±12.83 years) with alcohol-related cirrhosis (59%). Twenty-nine percent were readmitted within 30 days. Patients who received an abdominal paracentesis within 30 days of ascites diagnosis (QI 1, relative risk (RR) 0.41, P=0.004) or during index hospitalization (QI 2, RR 0.57, P=0.006) were significantly less likely to experience a 30-day emergent readmission. Baseline serum bilirubin >2.5 mg/dl was associated with increased 30-day cirrhosis-related readmission (RR 1.51, P=0.03). A total of 18.5% of patients died within 90 days of index admission; median interval to death was 139 days (37-562 days). Pneumonia was the most frequent cause of death. Independent predictors of 90-day mortality included older age (odds ratio (OR) 1.03, P=0.03), increased Model for End-stage Liver Disease (MELD)-Na score (OR 1.06, P=0.05), primary SBP prophylaxis (QI 7, OR 2.30, P=0.04), and readmission within 30 days (OR 30.26, P<0.001). Discharge prescription of diuretics (QI 8, OR 0.28, P=0.01) was associated with reduced 90-day mortality. Conclusions: Early paracentesis in patients with new onset cirrhotic ascites lowers 30-day readmission rates, and early initiation of diuretic therapy lowers 90-day mortality. abstract_id: PUBMED:25837700 Impact of physician specialty on quality care for patients hospitalized with decompensated cirrhosis. Background: Decompensated cirrhosis is a common precipitant for hospitalization, and there is limited information concerning factors that influence the delivery of quality care in cirrhotic inpatients. We sought to determine the relation between physician specialty and inpatient quality care for decompensated cirrhosis. Design: We reviewed 247 hospital admissions for decompensated cirrhosis, managed by hospitalists or intensivists, between 2009 and 2013. The primary outcome was quality care delivery, defined as adherence to all evidence-based specialty society practice guidelines pertaining to each specific complication of cirrhosis. Secondary outcomes included new complications, length-of-stay, and in-hospital death. Results: Overall, 147 admissions (59.5%) received quality care. Quality care was given more commonly by intensivists, compared with hospitalists (71.7% vs. 53.1%, P = .006), and specifically for gastrointestinal bleeding (72% vs. 45.8%, P = .03) and hepatic encephalopathy (100% vs. 63%, P = .005).
Involvement of gastroenterology consultation was also more common in admissions in which quality care was administered (68.7% vs. 54.0%, P = .023). Timely diagnostic paracentesis was associated with reduced new complications in admissions for refractory ascites (9.5% vs. 46.6%, P = .02), and reduced length-of-stay in admissions for spontaneous bacterial peritonitis (5 days vs. 13 days, P = .02). Conclusions: Adherence to quality indicators for decompensated cirrhosis is suboptimal among hospitalized patients. Although quality care adherence appears to be higher among cirrhotic patients managed by intensivists than by hospitalists, opportunities for improvement exist in both groups. Rational and cost-effective strategies should be sought to achieve this end. abstract_id: PUBMED:22465432 The quality of care provided to patients with cirrhosis and ascites in the Department of Veterans Affairs. Background & Aims: Ascites are the most common complication of cirrhosis. Evidence-based guidelines define the criteria and standards of care for patients with cirrhosis and ascites. However, little is known about the extent to which patients with ascites meet these standards. Methods: We evaluated the quality of ascites care, measured by 8 explicit Delphi panel-derived quality indicators, in 774 patients with cirrhosis and ascites, seen at 3 Veterans Affairs Medical Centers between 2000 and 2007. We also conducted a structured implicit review of patients' medical charts to determine whether patient refusal, outside care, or other justifiable exceptions to care processes account for nonadherence to the quality indicators. Results: Quality scores (maximum 100%) varied among individual indicators, ranging from 30% for secondary prophylaxis of spontaneous bacterial peritonitis, to 90% for assays for cell number and type in the paracentesis fluid. In general, care targeted at treatment was more likely to meet standards than preventive care. Only 33.2% (95% confidence interval [CI]: 29.9%-32.9%) of patients received all recommended care. Patients with no comorbidity (Deyo index 0 vs &gt;3; odds ratio = 2.21; 95% CI: 1.43-3.43), who saw a gastroenterologist (odds ratio = 1.33; 95% CI, 1.01-1.74), or were seen in a facility with academic affiliation (odds ratio = 1.73; 95% CI: 1.29-2.35) received higher-quality care. Justifiable exceptions to indicated care, documented in charts, were common for patients with paracentesis after diagnosis with ascites, patients that received antibiotics for gastrointestinal bleeding, and patients that required diuretics. However, most patients did not have an explanation documented for nonadherence to recommended care. Conclusions: Health care quality, measured by whether patients received recommended services, was suboptimal for patients with cirrhosis-related ascites. Care that included gastroenterologists was associated with high quality. However, for some of the quality indicators, too many denominator exceptions existed to allow for accurate automated measurement. abstract_id: PUBMED:20385251 An explicit quality indicator set for measurement of quality of care in patients with cirrhosis. Background & Aims: Cirrhosis is a prevalent and expensive condition. With an increasing emphasis on quality in health care and recognition of inconsistencies in the management of patients with cirrhosis, we established a set of explicit quality indicators (QIs) for their treatment. 
Methods: We organized an 11-member, multidisciplinary expert panel and followed modified Delphi methods to systematically identify a set of QIs for cirrhosis. We provided the panel with a report that summarized the results of a comprehensive literature review of data linking candidate QIs to outcomes. The panel performed independent ratings of each candidate QI by using a standard 9-point RAND appropriateness scale (RAS) (ranging from 1 = not appropriate to 9 = most appropriate). The panel members then met, reviewed the ratings, and voted again by using an iterative process of discussion. The final set of QIs was selected; QIs had a median RAS &gt;7, and panel members agreed on those selected. Results: Among 169 candidate QIs, the panel rated 41 QIs as valid measures of quality care. The selected QIs cover 6 domains of care including ascites (13 QIs), variceal bleeding (18 QIs), hepatic encephalopathy (4 QIs), hepatocellular cancer (1 QI), liver transplantation (2 QIs), and general cirrhosis care (3 QIs). Content coverage included prevention, diagnosis, treatment, timeliness, and follow-up. Conclusions: We developed an explicit set of evidence-based QIs for treatment of cirrhosis. These provide physicians and institutions with a tool to identify processes amenable to quality improvement. This tool is intended to be applicable in any setting where care for patients with cirrhosis is provided. abstract_id: PUBMED:31688022 Early Paracentesis in High-Risk Hospitalized Patients: Time for a New Quality Indicator. Introduction: Symptomatic ascites is the most common indication for hospitalization in patients with cirrhosis. Although guidelines recommend paracentesis for all inpatients with ascites, the timing of paracentesis is likely to be crucial. Performance of an early paracentesis and its relationship to outcomes are unknown, particularly among patients at high risk of spontaneous bacterial peritonitis (SBP). Methods: We included 75,462 discharges of adult patients with cirrhosis presenting with ascites who underwent paracentesis from the State Inpatient Databases of New York, Florida, and Washington from 2009 to 2013. High-risk patients were identified as having concomitant hepatic encephalopathy or acute kidney injury present on admission. The primary outcome was performance of early paracentesis (within 1 hospital day) with secondary outcomes being inpatient mortality, SBP-related mortality, and 30-day readmission. Multivariable logistic regression models included a priori covariates known to impact outcomes. Results: There were 43,492 (57.6%) patients who underwent early paracentesis. High-risk patients (27,496) had lower rates of early paracentesis (52.8% vs 60.5%, P &lt; 0.001). On multivariable analysis, high-risk patients had significantly decreased odds of undergoing early paracentesis (odds ratio [OR] 0.74, 95% confidence interval [CI] 0.71-0.78, P &lt; 0.001). Early paracentesis was associated with a reduced inpatient all-cause mortality (OR 0.68, 95% CI 0.63-0.73, P &lt; 0.001), SBP-related mortality (OR 0.84, 95% CI 0.73-0.94, P = 0.01), and 30-day readmission (OR 0.87, 95% CI 0.82-0.92, P &lt; 0.001). Discussion: Early paracentesis is associated with reduced inpatient mortality, SBP-related mortality, and 30-day readmission. Given its impact on outcomes, early paracentesis should be a new quality metric. Further education and interventions are needed to improve both adherence and outcomes. 
abstract_id: PUBMED:15942950 Determining the extent of quality health care for hospitalized patients with cirrhosis. Background/aims: Since few data are available concerning the clinical course of decompensated hepatitis C virus (HCV)-related cirrhosis, the aim of the present study was to define the natural long-term course after the first hepatic decompensation. Methods: Cohort of 200 consecutive patients with HCV-related cirrhosis, and without known hepatocellular carcinoma (HCC), hospitalized for the first hepatic decompensation. Results: Ascites was the most frequent first decompensation (48%), followed by portal hypertensive gastrointestinal bleeding (PHGB) (32.5%), severe bacterial infection (BI) (14.5%) and hepatic encephalopathy (HE) (5%). During follow-up (34+/-2 months) there were 519 readmissions, HCC developed in 33 (16.5%) patients, and death occurred in 85 patients (42.5%). The probability of survival after diagnosis of decompensated cirrhosis was 81.8 and 50.8% at 1 and 5 years, respectively. HE and/or ascites as the first hepatic decompensation, baseline Child-Pugh score, age, and presence of more than one decompensation during follow-up were independently correlated with survival. Conclusions: Once decompensated HCV-related cirrhosis was established, patients showed not only a very high frequency of readmissions, but also developed decompensations different from the initial one. These results contribute to defining the natural course and prognosis of decompensated HCV-related cirrhosis. abstract_id: PUBMED:30586188 Development of Quality Measures in Cirrhosis by the Practice Metrics Committee of the American Association for the Study of Liver Diseases. Health care delivery is increasingly evaluated according to quality measures, yet such measures are underdeveloped for cirrhosis. The Practice Metrics Committee of the American Association for the Study of Liver Diseases was charged with developing explicit process-based and outcome-based measures for adults with cirrhosis. We identified candidate measures from comprehensive reviews of the literature and input from expert clinicians and patient focus groups. We conducted an 11-member expert clinician panel and used a modified Delphi method to systematically identify a set of quality measures in cirrhosis. Among 119 candidate measures, 46 were identified as important measures to define the quality of cirrhosis care, including 26 process measures, 7 clinical outcome measures, and 13 patient-reported outcome measures. The final process measures captured care processes for ascites (n = 5), varices/bleeding (n = 7), hepatic encephalopathy (n = 4), hepatocellular cancer (HCC) screening (n = 1), liver transplantation evaluation (n = 2), and other care (n = 7). Clinical outcome measures included survival, variceal bleeding and rebleeding, early-stage HCC, liver-related hospitalization, and rehospitalization within 7 and 30 days. Patient-reported outcome measures covered physical symptoms, physical function, mental health, general function, cognition, social life, and satisfaction with care. The final list of patient-reported outcomes was validated in 79 patients with cirrhosis from nine institutions in the United States. Conclusion: We developed an explicit set of evidence-based quality measures for adult patients with cirrhosis. These measures are a tool for providers and institutions to evaluate their care quality, drive quality improvement, and deliver high-value cirrhosis care. 
The quality measures are intended to be applicable in any clinical care setting in which care for patients with cirrhosis is provided. abstract_id: PUBMED:23983598 The role of serum procalcitonin levels in predicting ascitic fluid infection in hospitalized cirrhotic and non-cirrhotic patients. Objective: To determine the role of serum procalcitonin levels in predicting ascites infection in hospitalized cirrhotic and non-cirrhotic patients. Methods: A total of 101 patients (mean age: 63.4 ± 1.3, 66.3% were males) hospitalized due to cirrhosis-related (n=88) or malignancy-related (n=13) ascites were included in this study. Spontaneous bacterial peritonitis (SBP, 19.8%), culture-negative SBP (38.6%), bacterascites (4.9%), sterile ascites (23.8%) and malign ascites (12.9%) groups were compared in terms of procalcitonin levels in predicting ascites infection. Receiver operating characteristic (ROC) curves were used to evaluate the diagnostic performance of procalcitonin levels, and the predictive performance of procalcitonin was compared with that of C-reactive protein (CRP). Results: Culture positivity was determined in 26.7% of the overall population. Serum procalcitonin levels were determined to be significantly higher in patients with positive bacterial culture in ascitic fluid compared to patients without culture positivity (median (min-max): 4.1 (0.2-36.4) vs. 0.4 (0.04-15.8), p<0.001). Using ROC analysis, serum procalcitonin levels of <0.61 ng/mL in SBP (area under curve (AUC): 0.981, CI 95%: 0.000-1.000, p<0.001), <0.225 ng/mL in culture-negative SBP (AUC: 0.743, CI 95%: 0.619-0.867, p<0.001), <0.42 ng/mL in SBP and culture-negative SBP patients (AUC: 0.824, CI 95%: 0.732-0.916, p<0.001), and <1.12 ng/mL in bacterascites (AUC: 0.837, CI 95%: 0.000-1.000, p=0.019) were determined to accurately rule out the diagnosis of bacterial peritonitis. The predictive power of serum procalcitonin levels in the SBP + culture-negative SBP group (AUCs: 0.824 vs 0.622, p=0.004, Fig 4), culture-positive SBP (AUCs: 0.981 vs 0.777, p=0.006, Fig 5) and (although less powerful) culture-negative SBP (AUCs: 0.743 vs 0.543, p=0.02, Fig 6) was found to be significantly higher than that of CRP. Conclusion: According to our findings, determination of serum procalcitonin levels seems to provide satisfactory diagnostic accuracy in differentiating bacterial infections in hospitalized patients with liver cirrhosis related ascites. abstract_id: PUBMED:31749900 Impact on 30-d readmissions for cirrhotic patients with ascites after an educational intervention: A pilot study. Background: A low proportion of patients admitted to hospital with cirrhosis receive quality care, with timely paracentesis an important target for improvement. We hypothesized that a medical educational intervention, delivered to medical residents caring for patients with cirrhosis, would improve quality of care. Aim: To determine if an educational intervention can improve quality of care in cirrhotic patients admitted to hospital with ascites. Methods: We performed a pilot prospective cohort study with time-based randomization over six months at a large teaching hospital. Residents rotating on hospital medicine teams received an educational intervention while residents rotating on hospital medicine teams on alternate months comprised the control group. The primary outcome was provision of quality care (defined as adherence to all quality-based indicators derived from evidence-based practice guidelines) in admissions for patients with cirrhosis and ascites.
Patient clinical outcomes- including length of hospital stay (LOS); 30-d readmission; in-hospital mortality and overall mortality- and resident educational outcomes were also evaluated. Results: Eighty-five admissions (60 unique patients) met inclusion criteria over the study period-46 admissions in the intervention group and 39 admissions in the control group. Thirty-seven admissions were female patients, and 44 admissions were for alcoholic liver disease. Mean model for end-stage liver disease (MELD)-Na score at admission was 25.8. Forty-seven (55.3%) admissions received quality care. There was no difference in the provision of quality care (56.41% vs 54.35%, P = 0.9) between the two groups. 30-d readmission was lower in the intervention group (35% vs 52.78%, P = 0.1) and after correction for age, gender and MELD-Na score [RR = 0.62 (0.39, 1.00), P = 0.05]. No significant differences were seen for LOS, complications, in-hospital mortality or overall mortality between the two groups. Resident medical knowledge and self-efficacy with paracentesis improved after the educational intervention. Conclusion: Medical education has the potential to improve clinical outcomes in patients admitted to hospital with cirrhosis and ascites. abstract_id: PUBMED:37568531 Telemedicine in Patients Affected by Chronic Liver Disease: A Scoping Review of Clinical Outcomes and the Devices Evaluated. For patients with chronic liver disease (CLD), telemedicine is emerging as a useful tool to prevent liver decompensation or hospitalization, allowing access to and the decentralization of care, even for patients with limited resources. However, research and attendant evidence are still lacking; thus, this review aims to systematically explore the topic of telemonitoring for CLD to describe the currently used tools and clinical outcomes. The review was conducted by using key terms on PubMed/EMBASE and searching for observational studies or clinical trials (according to PRISMA recommendations) that were published between 6 April 2013 and 6 April 2023 to keep the technological framework limited to the last 10 years. The studies were described and grouped according to the aim of telemonitoring, the underlying disease, and the tools adopted to achieve remote monitoring. A total of 32 articles met the inclusion criteria. Of these, 11 articles report the successful use of a telehealth program to support and improve access to care in the management of HCV-related cirrhosis, eight articles examine the efficacy of telemedicine for remote monitoring interventions to prevent or decrease the risk of decompensation in high-risk patients, and five articles examine improvements in the physical performance and quality of life of cirrhotic patients through telehealth rehabilitation programs. Four studies were completed during the recent COVID-19 pandemic. Telehealth has the potential to provide and expand treatment access and reduce barriers to care for the most disadvantaged patients and might be able to reduce the need for hospital readmission for CLD, though most practice to test feasibility is still in the pilot stage. Answer: Yes, adherence to quality of care indicators (QIs) for hospitalized patients with cirrhosis-related ascites could improve clinical outcomes. Studies have shown that early paracentesis in patients with new onset cirrhotic ascites lowers 30-day readmission rates, and early initiation of diuretic therapy lowers 90-day mortality (PUBMED:26729545). 
Additionally, timely diagnostic paracentesis was associated with reduced new complications in admissions for refractory ascites and reduced length-of-stay in admissions for spontaneous bacterial peritonitis (PUBMED:25837700). Furthermore, the quality of care provided to patients with cirrhosis and ascites was found to be suboptimal, and care that included gastroenterologists was associated with higher quality (PUBMED:22465432). An explicit set of evidence-based QIs for the treatment of cirrhosis has been developed, which covers various domains of care including ascites, and these QIs are intended to help identify processes amenable to quality improvement (PUBMED:20385251). Moreover, early paracentesis has been associated with reduced inpatient mortality, SBP-related mortality, and 30-day readmission, suggesting that early paracentesis should be a new quality metric (PUBMED:31688022). An educational intervention aimed at medical residents caring for patients with cirrhosis and ascites showed potential to improve clinical outcomes, such as reducing 30-day readmissions (PUBMED:31749900). Lastly, telemedicine has emerged as a useful tool for patients with chronic liver disease, potentially preventing liver decompensation or hospitalization and improving access to care, which could lead to better clinical outcomes (PUBMED:37568531). In summary, adherence to QIs for the management of cirrhosis-related ascites, including timely paracentesis and diuretic therapy, as well as educational interventions and the use of telemedicine, can lead to improved clinical outcomes for these patients.
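The odds ratios quoted above for early paracentesis come from multivariable models, but the aggregate figures reported in the early-paracentesis abstract (75,462 discharges, 43,492 early paracenteses, 27,496 high-risk patients with a 52.8% early-paracentesis rate) are enough to back-calculate an approximate unadjusted 2×2 table as a rough plausibility check. The sketch below does that arithmetic in Python; the reconstructed counts are rounded approximations rather than the study's raw data, and the published OR of 0.74 is covariate-adjusted, so only rough agreement should be expected.

```python
# Back-calculate an approximate 2x2 table from the aggregate figures in PUBMED:31688022.
# Illustrative reconstruction only, not the study's raw data.
import math

total = 75_462            # discharges with cirrhosis and ascites undergoing paracentesis
early_total = 43_492      # early paracentesis (within 1 hospital day)
high_risk = 27_496        # concomitant hepatic encephalopathy or acute kidney injury
high_risk_early = round(0.528 * high_risk)      # ~14,518 (52.8% of high-risk patients)
low_risk = total - high_risk                    # 47,966
low_risk_early = early_total - high_risk_early  # ~28,974 (~60.4%, close to the reported 60.5%)

a, b = high_risk_early, high_risk - high_risk_early   # high-risk: early / not early
c, d = low_risk_early, low_risk - low_risk_early      # lower-risk: early / not early

unadjusted_or = (a / b) / (c / d)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf (log) confidence interval
ci = (math.exp(math.log(unadjusted_or) - 1.96 * se_log_or),
      math.exp(math.log(unadjusted_or) + 1.96 * se_log_or))
print(f"unadjusted OR ~= {unadjusted_or:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
# Prints roughly 0.73 (0.71-0.76), in line with the adjusted OR of 0.74 reported in the abstract.
```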
Instruction: Are there reliable indicators predicting post-operative complications in acute appendicitis? Abstracts: abstract_id: PUBMED:26310685 Are there reliable indicators predicting post-operative complications in acute appendicitis? Purpose: To clarify the predictors of post-operative complications of pediatric acute appendicitis. Methods: The medical records of 485 patients with acute appendicitis operated on between January 2006 and November 2014 were retrospectively reviewed. Age, sex, preoperative WBC, CRP, and appendix maximum short diameter on diagnostic imaging (AMSD) were compared retrospectively between the complications group (Group C) and the non-complication group (Group NC) using Student's t test, Fisher's exact test and multivariate analysis. A p value of less than 0.01 was considered significant. We analyzed the most recent 314 laparoscopic appendectomy patients similarly. Results: Complications were found in 29 of the 485 appendectomies (6.0%). Comparing Group C to Group NC, preoperative WBC (×10³/μl) was 16.4 ± 5.6 vs 14.1 ± 4.1 (p < 0.01), CRP (mg/dl) 8.3 ± 7.1 vs 3.3 ± 4.6 (p < 0.01), and AMSD (mm) 12.1 ± 3.7 vs 9.9 ± 2.8 (p < 0.01). CRP remained significantly different in the multivariate analysis, but WBC and AMSD did not. The results for the laparoscopic appendectomy subgroup were identical. Conclusion: Preoperative WBC, CRP and AMSD all indicated an increased risk of complications. If WBC (/μl) >16,500, CRP >3.1 mg/dl and AMSD >11.4 mm, complications increased sixfold. abstract_id: PUBMED:37303449 Post-Operative Outcomes of Laparoscopic Appendectomy in Acute Complicated Appendicitis: A Single Center Study. Background: Acute appendicitis (AA) is a surgical emergency because of inflammation in the appendix leading to swelling, whereas acute complicated appendicitis is characterized by a gangrenous or perforated appendix with or without periappendicular abscess, peritonitis, and an appendicular mass. The laparoscopic approach in complicated acute appendicitis is a viable alternative method but is not practiced in all cases because of technical difficulties and unpredictable complications. Thus, the present study aimed to evaluate the primary and secondary outcome predictors of laparoscopic appendectomy in complicated appendicitis. Methods: A single-center prospective observational study was carried out after the approval of the Institutional Ethics Committee (IEC). A total of 87 complicated acute appendicitis patients were included in the study. Clinico-demographic features such as age, gender, duration of surgery, post-operative pain, and hospital stay were monitored in different age groups of <20, 20-39, and >40 years, and the primary and secondary outcomes of laparoscopic surgery in acute complicated appendicitis were measured. Result: Acute complicated appendicitis cases were observed mostly in people older than 42 years in the total study population. Laparoscopic appendectomy was conducted in all 87 acute complicated appendicitis patients, and the major surgical outcome predictors were monitored, such as mean operating time (87.9 minutes), post-operative pain score (3.9), and post-operative stay (6.7 days). Post-operative complications such as drain site infection (1.14%), enterocutaneous fistula (2%), and intra-abdominal abscess (7%) were observed. Conclusion: Based on our observations, a laparoscopic appendectomy can be considered a viable alternative with an acceptable complication rate.
Operative time varies from 84 to 94 minutes in different age groups and with the extent of the disease. abstract_id: PUBMED:31448803 Use of drains and post-operative complications in secondary peritonitis for complicated acute appendicitis at a national hospital. Introduction: Acute appendicitis is the main cause of emergency surgical care. Post-operative patients with complicated acute appendicitis present complications, many of them expected. The use of drains is one of the measures to prevent these complications; however, recent meta-analyses do not justify this therapeutic measure. This study evaluates the relationship between the use or non-use of drains and post-operative complications in patients with complicated peritonitis secondary to acute appendicitis. Methods: A retrospective observational cohort study was conducted. The outcomes were analyzed with the chi-square test and Student's t-test; Fisher's exact test was also performed. Results: The average operating time was 1.46 h (1.0-2.5) and 1.66 h (1-3) for patients without drains and with drains, respectively; the difference was significant (p = 0.001). Post-operative fever was more prevalent in the group with drains (odds ratio (OR) 3.4, 95% confidence interval (CI) 1.4-7.9). The mean time of hospitalization was 7.3 (3-20) and 8.8 days (3-35) for patients without drains and with drains, respectively (p = 0.01). The chi-square analysis was significant for Grade III evisceration and residual collection (p = 0.036; OR not evaluable). Reoperation did not differ significantly between the groups (p = 0.108; OR 6.3, 95% CI 0.6-62.4). Conclusions: There is a relationship between the non-use of drains and collections and evisceration in post-operative patients after open appendectomy for complicated acute appendicitis. abstract_id: PUBMED:28629607 Post-operative management of perforated appendicitis: Can clinical pathways improve outcomes? Background: We sought to decrease organ space infection (OSI) following appendectomy for perforated acute appendicitis (PAA) by minimizing variation in clinical management. Objective: A postoperative treatment pathway was developed and four recommendations were implemented: 1) clear documentation of post-operative diagnosis, 2) patients with unknown perforation status to be treated as perforated pending definitive diagnosis, 3) antibiotic therapy to be continued post operatively for 4-7 days after SIRS resolution, and 4) judicious use of abdominal computed tomography (CT) scanning prior to post-operative day 5. Patient demographics and potential clinical predictors of OSI were captured. The primary end point was development of OSI within 30 days of discharge. Secondary endpoints included length of stay (LOS), readmission rate, other complications and secondary procedures performed. Results: A total of 1246 appendectomies were performed and we excluded patients <18 years (n = 205), interval appendectomies (n = 51) or appendectomies for other diagnoses (n = 37). Among the remaining 953 patients, 133 (14.0%) were perforated and 21 of these (15.8%) developed OSI. Comparing pre (n = 91) to post (n = 42) protocol patients, we saw similar rates of OSI (16.5% vs 14.3%, p = 0.75) with a peak in OSI development immediately prior to protocol implementation which dropped to baseline levels 1 year later based on CUSUM analysis. Readmission rates fell by 49.7% (14.3% vs 7.1%, p = 0.39) without an increase in LOS (5.3 vs 5.7 days, p = 0.55) comparing patients pre and post protocol, although these results did not reach clinical significance.
Conclusions: The implementation of and compliance with a post-operative protocol status post appendectomy for PAA demonstrated a trend towards diminishing readmission rates and decreased utilization of CT imaging, but did not affect OSI rates. Additional approaches to diminishing OSI following management of perforated appendicitis need to be evaluated. abstract_id: PUBMED:24682310 Introduction of an acute surgical unit: comparison of performance indicators and outcomes for operative management of acute appendicitis. Background: The Acute Surgical Unit (ASU) is a recent change in management of acute general surgical patients in hospitals worldwide. In contrast to traditional management of acute surgical presentations by a rotating on-call system, ASUs are shown to deliver improved efficiency and patient outcomes. This study investigated the impact of an ASU on operative management of appendicitis, the most common acute surgical presentation, by comparing performance indicators and patient outcomes prior to and after introduction of an ASU at the Gold Coast Hospital, Queensland, Australia. Methods: A retrospective study of patients admitted from the Emergency Department (ED) and who underwent emergency appendectomy from February 2010 to January 2011 (pre-ASU) and after introduction of the ASU from February 2011 to January 2012 (post-ASU). A total of 548 patients underwent appendectomy between February 2010 and January 2012, comprising 247 pre-ASU and 301 post-ASU patients. Results: Significant improvements were demonstrated: reduced time to surgical review, fewer complications arising from operations commencing during ASU in-hours, and more appendectomies performed during the daytime attended by the consultant. There was no significant difference in total cost of admission or total admission length of stay. Conclusions: This study demonstrated that ASUs have potential to significantly improve the outcomes for operative management of acute appendicitis compared to the traditional on-call model. The impact of the ASU was limited by access to theaters and restricted ASU operation hours. Further investigation of site-specific determinants could be beneficial to optimize this new model of acute surgical care. abstract_id: PUBMED:33762117 Does speed matter? A look at NSQIP-P outcomes based on operative time. Background: Appendicitis is a common pediatric surgical condition, comprising a large burden of healthcare costs. We aimed to determine if prolonged operative times were associated with increased 30-day complication rates when adjusting for pre-operative risk factors. Methods: Patients <18 years old, diagnosed intraoperatively with acute uncomplicated appendicitis and undergoing laparoscopic appendectomy were identified from the NSQIP-P 2012-2018 databases. The primary outcome, "infectious post-operative complications", is a composite of sepsis, deep incisional surgical site infections, wound disruptions, superficial and organ space infections within 30 days of the operation. Secondary outcomes included return to the operating room and unplanned readmissions within 30 days. Logistic regression models were used to assess associations between operative time and each outcome. A Receiver Operating Characteristic (ROC) curve was generated from the predicted probabilities of the multivariate model for infectious post-operative complications to examine operative times.
Results: Between 2012 and 2018, 27,763 pediatric patients with acute uncomplicated appendicitis underwent a laparoscopic appendectomy. Over half the population was male (61%) with a median operative time of 39 min (IQR 29-52 min). Infectious post-operative complication rate was 2.8% overall and was highest (8%) among patients with operative time ≥ 90 min (Fig. 1). Unplanned readmission occurred in 2.9% of patients, with 0.7% returning to the operating room. Each 30-min increase in operating time was associated with a 24% increase in odds of an infectious post-operative complication (OR=1.24, 95% CI=1.17-1.31) in adjusted models. Operative time thresholds predicted with ROC analysis were most meaningful in younger patients with higher ASA class and pre-operative SIRS/Sepsis/Septic shock. Longer operative times were also associated with higher odds of unplanned readmission (OR=1.11, 95% CI=1.05-1.18) and return to the operating room (OR=1.13, 95% CI=1.02-1.24) in adjusted models. Conclusion: There is a risk-adjusted association between prolonged operative time and the occurrence of infectious post-operative complications. Infectious postoperative complications increase healthcare spending and are currently an area of focus in healthcare value models. Future studies should focus on addressing laparoscopic appendectomy operative times longer than 60 min, with steps such as continuation of antibiotics, shifting roles between attending and resident surgeons, and simulation training. Level Of Evidence: Level III, retrospective comparative study. abstract_id: PUBMED:30505435 Is abdominal drainage after open emergency appendectomy for complicated appendicitis beneficial or waste of money? A single centre retrospective cohort study. Background: Appendicitis is a medical condition that causes painful inflammation of the appendix. For acute appendicitis, appendectomy is immediately required as any delay may lead to serious complications such as gangrenous or perforated appendicitis with or without localized abscess formation. Patients who had appendectomy for complicated appendicitis are more prone to develop post-operative complications such as peritoneal abscess or wound infection. Sometimes, abdominal drainage is used to reduce these complications. However, the advantage of the abdominal drainage to minimize post-operative complications is not clear. Therefore, the aim of this study was to investigate whether the use of abdominal drainage after open emergency appendectomy for complicated appendicitis (perforated appendicitis with localized abscess formation only) can prevent or significantly reduce post-operative complications such as intra-peritoneal abscess formation or wound infection. Methods: In this retrospective cohort study, files and notes were reviewed retrospectively for patients who had open emergency appendectomy for complicated appendicitis (perforated appendicitis with localized abscess formation only) and who had already been admitted and discharged from the surgical wards of Kerbala medical university/Imam Hussein medical city hospital/Kerbala/Iraq. Patients were selected according to specific inclusion and exclusion criteria. Patients were divided into two groups; drainage and non-drainage groups. The drainage group had intra-abdominal drain inserted after the surgery, while the non-drainage group had no drain placed post-operatively. 
A comparison between both groups was done in terms of these parameters: (i) the development of post-operative intra-peritoneal abscess and/or wound infection; (ii) the length and cost of hospital stay; (iii) the mortality outcomes. Statistical analysis was done using the Pearson chi-square test, the independent-sample t-test and the Mann-Whitney U test. Results: Of 227 patients with open emergency appendectomy for complicated appendicitis, 114 had received an abdominal drain after the surgery. Fifty out of 114 patients (43.9%) with abdominal drainage developed post-operative intra-peritoneal abscess (abdominal or pelvic) while 53 out of 113 patients (46.9%) without drainage developed the same complication (P = 0.65). It was also revealed that for patients with drainage, 42 patients (36.8%) had post-operative wound infection, whereas this number was 38 (33.6%) for patients without drainage (P = 0.61). On the other hand, the patients with a drain had significantly longer length of hospital stay (mean length of stay: 4.99 days versus 2.12 days, P < 0.001) and significantly higher cost (median cost per patient: $120 versus $60, P < 0.001). Conclusion: Installation of abdominal drainage after open emergency appendectomy for complicated appendicitis did not bring any considerable advantage in terms of prevention or significant reduction of post-operative intra-peritoneal abscess and wound infection. Rather, it lengthened the hospital stay and doubled the cost of operation. abstract_id: PUBMED:37034490 Post-appendectomy abdominal pain attributed to incidental ovarian cyst: a case report. Acute abdominal pain in adolescents has a multitude of diagnoses to consider, ranging from life-threatening ones to others that are less obvious. In this case report, a 15-year-old girl presented with right lower quadrant abdominal pain and tenderness one month after successful surgical management of acute appendicitis. Post-appendectomy abdominal pain could easily be attributed to post-operative complications, while, in reality, a different disease state may be the cause of the pain. Physicians should have a high index of clinical suspicion, even though the temporal association of events may suggest otherwise. Hemorrhagic ovarian cyst (HOC) should be included in the differential, as it was confirmed with imaging in our case. A conservative treatment approach with progesterone was chosen, with menses resuming 2 days later, leading to regression of the cyst. The clinical significance of this case relies on the timely recognition of a disease entity, in order to distinguish it from complications arising postoperatively. abstract_id: PUBMED:37291012 Routine post-operative labs and healthcare system burden in acute appendicitis. Background: Data from the National Health Expenditure Accounts have shown a steady increase in healthcare cost paralleled by availability of laboratory tests. Resource utilization is a top priority for reducing health care costs. We hypothesized that routine post-operative laboratory utilization unnecessarily increases costs and healthcare system burden in acute appendicitis (AA) management. Methods: A retrospective cohort of patients with uncomplicated AA from 2016 to 2020 was identified. Clinical variables, demographics, lab usage, interventions, and costs were collected. Results: A total of 3711 patients with uncomplicated AA were identified. Total costs of labs ($289,505, 99.56%) and repletions ($1287.63, 0.44%) were $290,792.63.
Increased LOS was associated with lab utilization in multivariable modeling, increasing costs by $837,602 or 472.12 per patient. Conclusions: In our patient population, post-operative labs resulted in increased costs without discernible impact on clinical course. Routine post-operative laboratory testing should be re-evaluated in patients with minimal comorbidities as this likely increases cost without adding value. abstract_id: PUBMED:25314345 Simple acute appendicitis versus non-perforated gangrenous appendicitis: is there a difference in the rate of post-operative infectious complications? Background: According to the 2002 Surgical Infection Society Guidelines on Antimicrobial Therapy for Intra-abdominal Infections, antimicrobial therapy is not recommended beyond 24 hours for the treatment of postoperative acute or gangrenous appendicitis without perforation. However, clinicians commonly consider gangrenous appendicitis to pose a greater risk of post-operative infectious complications, such as surgical site infections and intra-abdominal abscesses. This study examines the relative risk of post-operative infection between patients with simple and gangrenous appendicitis. Methods: A retrospective review of patients with either non-perforated gangrenous or simple appendicitis from 2010 to 2012 was performed at a large urban teaching hospital. Results: The rate of post-operative intra-abdominal abscess formation, which was diagnosed on patient readmission to the hospital, was significantly greater in patients with non-perforated gangrenous appendicitis in comparison to those with simple non-perforated appendicitis. Also, patients with non-perforated gangrenous appendicitis received extended courses of post-operative antibiotics, despite SIS recommendations. Conclusions: The role of peri-operative antibiotics for non-perforated gangrenous appendicitis merits further study. Answer: Yes, there are reliable indicators that can predict post-operative complications in acute appendicitis. A study analyzing 485 patients who underwent appendectomies found that preoperative white blood cell count (WBC), C-reactive protein (CRP), and appendix maximum short diameter (AMSD) on diagnostic imaging were all indicators of an increased risk of complications. Specifically, if WBC was greater than 16,500/μl, CRP was greater than 3.1 mg/dl, and AMSD was greater than 11.4 mm, the risk of complications increased sixfold (PUBMED:26310685). Additionally, operative time has been associated with post-operative complications. A study using the NSQIP-P database from 2012-2018 showed that each 30-minute increase in operating time was associated with a 24% increase in odds of an infectious post-operative complication. Operative times longer than 60 minutes were particularly significant, suggesting that efforts to reduce operative times could potentially decrease the risk of complications (PUBMED:33762117). Furthermore, the use of drains in patients with complicated acute appendicitis has been evaluated. One study found that the use of drains did not prevent complications such as collections and evisceration in post-operative patients with open appendectomy for complicated acute appendicitis (PUBMED:31448803). Another study concluded that abdominal drainage after open emergency appendectomy for complicated appendicitis did not significantly reduce post-operative intra-peritoneal abscess and wound infection, but it did increase the length of hospital stay and the cost of operation (PUBMED:30505435). 
Lastly, the type of appendicitis may also be a factor. A study comparing patients with non-perforated gangrenous appendicitis to those with simple non-perforated appendicitis found a significantly greater rate of post-operative intra-abdominal abscess formation in the former group, suggesting that gangrenous appendicitis may pose a higher risk of post-operative infectious complications (PUBMED:25314345). In summary, preoperative WBC, CRP, AMSD, operative time, and the type of appendicitis (simple vs. gangrenous) are reliable indicators that can predict post-operative complications in patients with acute appendicitis.
Instruction: Epidural anesthesia in cardiac surgery: is there an increased risk? Abstracts: abstract_id: PUBMED:23816670 Epidural catheterization in cardiac surgery: the 2012 risk assessment. Aims And Objectives: The risk assessment of epidural hematoma due to catheter placement in patients undergoing cardiac surgery is essential since its benefits have to be weighed against risks, such as the risk of paraplegia. We determined the risk of the catheter-related epidural hematoma in cardiac surgery based on the cases reported in the literature up to September 2012. Materials And Methods: We included all reported cases of epidural catheter placement for cardiac surgery in web and in literature from 1966 to September 2012. Risks of other medical and non-medical activities were retrieved from recent reviews or national statistical reports. Results: Based on our analysis the risk of catheter-related epidural hematoma is 1 in 5493 with a 95% confidence interval (CI) of 1/970-1/31114. The risk of catheter-related epidural hematoma in cardiac surgery is similar to the risk in the general surgery population at 1 in 6,628 (95% CI 1/1,170-1/37,552). Conclusions: The present risk calculation does not justify not offering epidural analgesia as part of a multimodal analgesia protocol in cardiac surgery. abstract_id: PUBMED:17599887 Epidural analgesia in cardiac surgery: an updated risk assessment. Introduction: The use of epidural anesthesia carries risks that have been known for 50 years. The debate about the use of locoregional technique in cardiac anesthesia continues. The objective of this report is to estimate the risks and their variability of a catheter-related epidural hematoma in cardiac surgery patients and to compare it with other anesthetic and medical procedures. Methods: Case series reporting the use of epidural anesthesia in cardiac surgery were researched through Medline. Additional references were retrieved from the bibliography of published articles and from the internet. Risks of complications in other anesthetic and medical activity were retrieved from recent reviews. Results: Based on the present evidence, the risk of epidural hematoma in cardiac surgery is 1:12,000 (95% CI of 1:2100 to 1:68,000), which is comparable to the risk in the nonobstetrical population of 1:10,000 (95% CI 1:6700 to 1:14,900). The risk of epidural hematoma is comparable to the risk of receiving a wrong blood product or the yearly risk of having a fatal road accident in Western countries. Conclusions: The risk of a hematoma after epidural in cardiac surgery is comparable to other nonobstetrical surgical procedures. Its routine application in a controlled setting should be encouraged. abstract_id: PUBMED:23440670 Epidural analgesia in high risk cardiac surgical patients. Cardiac surgery is associated with high morbidity and mortality in patients with renal, hepatic or pulmonary dysfunction, advanced age and morbid obesity. Thoracic epidural analgesia is associated with decreased morbidity in these patients. Thoracic epidural analgesia in cardiac surgery is associated with haemodynamic stability, decreased catecholamine response, good pulmonary function, early extubation and discharge from intensive care unit. It is an important component of fast tracking in cardiac surgery as well. Its use has significantly increased over the years and has been used as an adjuvant to general anaesthesia as well as the sole anaesthetic technique in selected groups of patients. 
Proper selection of patients for thoracic epidural analgesia is mandatory. Timing of epidural catheter insertion and removal should be judiciously selected. The risk of epidural hematoma secondary to anticoagulation or residual effects of antiplatelet drug that can be reduced by taking standard precautions. In conclusion thoracic epidural analgesia in high risk cardiac surgery might decrease pulmonary, cardiovascular or renal complications, provide excellent analgesia and allow early extubation. abstract_id: PUBMED:9780766 Epidural hematoma after removal of an epidural catheter Epidural hematoma is a rare but serious neurological complication of epidural anesthesia. We report the case of a 61-year-old man with squamous cell carcinoma of the lung who suffered an epidural hematoma after undergoing right double lobectomy. Before anesthetic induction an epidural catheter was inserted to the D5-D6 space for postoperative analgesia. Surgery was without noteworthy events and the patient was extubated in the operating room; 5,000 IU of low molecular weight heparin was injected subcutaneously every 24 hours and 5 mg of methadone was provided by epidural catheter every 8 hours. After removal of the catheter three days after surgery, lumbar back pain and hypoesthesia, and weakness in both legs appeared. Epidural hematoma was suspected and treatment with 30 mg.kg-1 of methylprednisolone i.v. was started. Nuclear magnetic resonance imaging of the lumbar spine confirmed the presence of a hematoma at D6-D8. Neurologic symptoms improved in the following hours and additional surgery was not required. The patient was released without neurological symptoms 10 days after lung surgery. We discuss the prevalence, etiology and treatment of epidural hematoma related to epidural anesthesia. abstract_id: PUBMED:21800665 Coagulation profiles following donor hepatectomy and implications for the risk of epidural hematoma associated with epidural anesthesia Background: Continuous epidural analgesia has become an accepted technique used in laparotomy including liver resections. Although American Society of Regional Anesthesia and Pain Medicine recommends that epidural catheter be removed with prothrombin time-international normalized ratio (PT-INR) less than 1.5, it is possible that liver surgery causes coagulation disturbances. We examined the postoperative changes in coagulation profiles of living liver donors to elucidate whether hepatectomy increases the risk of epidural hematoma related to removal of epidural catheters or not. Methods: From January 2007 to October 2009, 42 living liver related transplantations were performed in Hokkaido University Hospital. We reviewed the donor data including PT-INR obtained during perioperative days [preoperative, immediately postoperative, postoperative day 1, 3 and 7] and epidural catheter-related-complications, retrospectively. Results: While in all donors values of PT-INR obtained during preoperative periods were within normal limits, 14 donors had a PT-INR over 1.5 during postoperative periods. There was no epidural hematoma case in this study. Conclusions: Our study suggested that hepatectomy increases the risk of epidural hematoma related to removal of epidural catheters, even in the living liver transplant donors with normal liver function. abstract_id: PUBMED:19295296 High thoracic epidural anaesthesia for cardiac surgery. Purpose Of Review: Epidurals have been used for cardiac surgery for more than 20 years. 
The worldwide-published use is now large enough to determine that there is no additional risk for epidural use in cardiac versus noncardiac surgery. Recent Findings: Several large case series have been added to the literature without cases of spinal damage. The estimated risk of epidural haematoma is 1: 12 000 (95% confidence interval of 1: 2100 to 1: 68000), which is comparable to noncardiac surgery. The fear of an increased risk of epidural haematoma associated with cardiopulmonary bypass has not eventuated. Improved analgesia, reduced pulmonary complications and reduced atrial fibrillation in off-pump coronary surgery have been reported. There are some case series and numerous case reports of awake cardiac surgery performed under epidural anaesthesia. This review will focus on safety, benefits and the logistics of performing epidural anaesthesia for cardiac surgery. Summary: Fear of an increased risk of epidural haematoma has largely prevented increased use of this technique for cardiac surgery. Clinicians can be reassured that the risk of epidural use in cardiac surgery is similar to that for noncardiac surgery, which provides a new platform for considering risk versus benefit in their practice. abstract_id: PUBMED:10210054 Epidural hematoma following epidural catheter placement in a patient with chronic renal failure. Purpose: We report a case of epidural hematoma in a surgical patient with chronic renal failure who received an epidural catheter for postoperative analgesia. Symptoms of epidural hematoma occurred about 60 hr after epidural catheter placement. Clinical Features: A 58-yr-old woman with a history of chronic renal failure was admitted for elective abdominal cancer surgery. Preoperative laboratory values revealed anemia, hematocrit 26%, and normal platelet, PT and PTT values. General anesthesia was administered for surgery, along with epidural catheter placement for postoperative analgesia. Following uneventful surgery, the patient completed an uneventful postoperative course for 48 hr. Then, the onset of severe low back pain, accompanied by motor and sensory deficits in the lower extremities, alerted the anesthesia team to the development of an epidural hematoma extending from T12 to L2 with spinal cord compression. Emergency decompressive laminectomy resulted in recovery of moderate neurologic function. Conclusions: We report the first case of epidural hematoma formation in a surgical patient with chronic renal failure (CRF) and epidural postoperative analgesia. The only risk factor for the development of epidural hematoma was a history of CRF High-risk patients should be monitored closely for early signs of cord compression such as severe back pain, motor or sensory deficits. An opioid or opioid/local anesthetic epidural solution, rather than local anesthetic infusion alone, may allow continuous monitoring of neurological function and be a prudent choice in high-risk patients. If spinal hematoma is suspected, immediate MRI or CT scan should be done and decompressive laminectomy performed without delay. abstract_id: PUBMED:36778829 Incidence of and modifiable risk factors for inadequate epidural analgesia in pediatric patients aged up to 8 years. Background And Aims: Postoperative pain in pediatric patients is one of most inadequately treated conditions. 
This study aimed to investigate the incidence of and modifiable risk factors for inadequate epidural analgesia in pediatric patients aged up to 8 years at Siriraj Hospital-Thailand's largest national tertiary referral center. Material And Methods: This retrospective study included pediatric patients aged 0-8 years who underwent surgery with epidural catheter during January 2015 to January 2020. Patients with missing data were excluded. Records from both the ward staff and the acute pain service were reviewed. All relevant data were extracted until the epidural catheters were removed. Results: One hundred and fifty pediatric patients were included. The median age was 29 months and the range varied from 12 days to 98 months on the day of surgery, and 86 (57.3%) were male. The incidence of inadequate epidural analgesia was 32%. Most patients (95.8%) had an unacceptably high pain score within 4 hours after arriving at the ward. Univariate analysis revealed direct epidural placement, the length in epidural space less than 5 cm, and postoperative leakage to be substantially higher in the inadequate pain epidural analgesia group. When those factors were included in multivariate analysis, only length in epidural space less than 5 cm was identified as an independent risk factor. Conclusion: The incidence of inadequate epidural analgesia in this pediatric study was 32%. Multivariate analysis showed length of catheter in epidural space less than 5 cm to be the only factor independently associated with inadequate epidural analgesia. abstract_id: PUBMED:34799017 Postoperative Epidural Hematoma. Symptomatic postoperative epidural hematomas are rare, with an incidence of 0.10% to 0.69%. Risk factors have varied in the literature, but multiple studies have reported advanced age, preoperative or postoperative coagulopathy, and multilevel laminectomy as risk factors for hematoma. The role of pharmacologic anticoagulation after spine surgery remains unclear, but multiple studies suggest it can be done safely with a low risk of epidural hematoma. Prophylactic suction drains have not been found to lower hematoma incidence. Most symptomatic postoperative epidural hematomas present within the first 24 to 48 hours after surgery but can present later. Diagnosis of a symptomatic hematoma requires correlation of clinical signs and symptoms with a compressive hematoma on MRI. Patients will usually first complain of a marked increase in axial pain, followed by radicular symptoms in the extremities, followed by motor weakness and sphincter dysfunction. An MRI should be obtained emergently, and if it confirms a compressive hematoma, surgical evacuation should be carried out as quickly as possible. The prognosis for neurologic improvement after evacuation depends on the time delay and the degree of neurologic impairment before evacuation. abstract_id: PUBMED:12163731 Risk factors for spinal epidural hematoma after spinal surgery. Study Design: A case-control retrospective analysis comparing patients who developed a postoperative spinal epidural hematoma with patients who did not develop this complication. Objectives: To identify risk factors for the development of an epidural hematoma following spinal surgery. Summary Of Background Data: Neurologic deterioration following spinal surgery is a rare but devastating complication. Epidural hematomas should be suspected in the patient who demonstrates a new postoperative neurologic deficit. 
The risk factors that predispose a patient to a postoperative spinal epidural hematoma have not been identified. Methods: Patients who underwent spinal surgery at a single institution over a 10-year period were retrospectively reviewed. Twelve patients who demonstrated neurologic deterioration after surgery and required surgical decompression because of an epidural hematoma were identified. All cases involved lumber laminectomies. A total of 404 consecutive patients that underwent lumbar decompression and did not develop an epidural hematoma formed the control group. Factors postulated to increase the risk of postoperative spinal epidural hematoma were compared between the two groups using logistic regression. Results: Multilevel procedures (P = 0.037) and the presence of a preoperative coagulopathy (P &lt; 0.001) were significant risk factors. Age, body mass index, perioperative durotomies, and postoperative drains were not statistically significant risk factors. Conclusions: Patients who require multilevel lumbar procedures and/or have a preoperative coagulopathy are at a significantly higher risk for developing a postoperative epidural hematoma. Answer: The use of epidural anesthesia in cardiac surgery has been a topic of debate due to the associated risks, particularly the risk of epidural hematoma, which can lead to serious complications such as paraplegia. However, several studies have assessed the risk of catheter-related epidural hematoma in cardiac surgery patients and compared it with other anesthetic and medical procedures. Based on the evidence from the literature, the risk of catheter-related epidural hematoma in cardiac surgery is estimated to be 1 in 5493, with a 95% confidence interval of 1/970 to 1/31114 (PUBMED:23816670). Another study reported a similar risk of 1:12,000 (95% CI of 1:2100 to 1:68,000), which is comparable to the risk in the nonobstetrical population of 1:10,000 (95% CI 1:6700 to 1:14,900) (PUBMED:17599887). These findings suggest that the risk of epidural hematoma in cardiac surgery is similar to that in general surgery and other nonobstetrical surgical procedures. Furthermore, thoracic epidural analgesia has been associated with decreased morbidity in high-risk cardiac surgery patients, providing benefits such as hemodynamic stability, decreased catecholamine response, good pulmonary function, early extubation, and discharge from the intensive care unit (PUBMED:23440670). The use of epidural anesthesia in cardiac surgery does not appear to carry an additional risk compared to noncardiac surgery, and the fear of an increased risk of epidural hematoma associated with cardiopulmonary bypass has not been substantiated (PUBMED:19295296). However, certain patient populations, such as those with chronic renal failure, may have an increased risk of developing an epidural hematoma, and thus should be monitored closely for early signs of cord compression (PUBMED:10210054). Additionally, hepatectomy has been suggested to increase the risk of epidural hematoma related to the removal of epidural catheters, even in patients with normal liver function (PUBMED:21800665). In conclusion, the available evidence indicates that the risk of epidural anesthesia in cardiac surgery is not increased compared to other surgical procedures. However, patient selection and careful monitoring are crucial to minimize the risk of complications such as epidural hematoma.
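Risk figures such as "1 in 5,493 (95% CI 1/970 to 1/31,114)" in the answer above are derived from pooled case series, where an exact binomial (Clopper-Pearson) interval is the usual way to put a confidence interval around a rare event rate. The sketch below shows that calculation; the event count and denominator are hypothetical placeholders, not the pooled numbers from the cited reviews.

```python
# Exact (Clopper-Pearson) 95% CI for a rare event rate, the kind of calculation behind
# pooled risk estimates for catheter-related epidural hematoma. Counts are hypothetical.
from scipy.stats import beta

def clopper_pearson(events: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact binomial confidence interval for a proportion (events out of n)."""
    lower = 0.0 if events == 0 else beta.ppf(alpha / 2, events, n - events + 1)
    upper = 1.0 if events == n else beta.ppf(1 - alpha / 2, events + 1, n - events)
    return lower, upper

events, n = 2, 11_000   # hypothetical: 2 hematomas observed in 11,000 epidural catheters
low, high = clopper_pearson(events, n)
print(f"point estimate: 1 in {n / events:,.0f}")
print(f"95% CI: about 1 in {1 / high:,.0f} to 1 in {1 / low:,.0f}")
```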
Instruction: Fecal elastase-1 determination: 'gold standard' of indirect pancreatic function tests? Abstracts: abstract_id: PUBMED:11276375 Pancreatic function tests: when to choose, what to use. Although techniques for high-resolution imaging of the pancreas are constantly being improved, the evaluation of pancreatic function remains crucial for the workup of pancreatic diseases. More than 20 direct and indirect tests are available for the assessment of pancreatic function. Measurement of fecal elastase-1 is recommended as the most suitable test for the initial assessment of pancreatic function. Among other techniques, the pancreolauryl test, and alternatively the BT-PABA (N-benzoyl-L-tyrosyl-p-aminobenzoic acid) or the (13)C-mixed-triglyceride test, yield the best sensitivity and specificity. Nevertheless, all indirect tests are of limited value in patients with mild to moderate impairment of pancreatic function. In these patients, the secretin-caerulein test remains the gold standard. abstract_id: PUBMED:11589385 Fecal elastase-1 determination: 'gold standard' of indirect pancreatic function tests? Background: Tubeless pancreatic function tests measuring the content of elastase-1 and the activity of chymotrypsin in stool are used with different cut-off levels and with varying success in diagnosing functional impairment of the pancreas. The aim of our study was to re-evaluate the sensitivity and specificity of elastase-1 and chymotrypsin in stool in the assessment of exocrine pancreatic insufficiency. Methods: In 127 patients displaying clinical signs of malassimilation, the secretin-caerulein test ('gold standard'), fecal fat analysis, fecal chymotrypsin activity and fecal elastase-1 concentration were performed. Exocrine pancreatic insufficiency was graded, according to the results of the secretin-caerulein test, into mild, moderate and severe. Chymotrypsin and elastase-1 in stool were estimated using two commercially available test kits. Fecal elastase-1 concentrations of 200 and 100 microg/g stool and chymotrypsin activities of 6 and 3 U/g stool were used separately as cut-off levels for calculation. Results: 1) In 65 patients, a normal pancreatic function was found using the secretin-caerulein test. In 62 patients, an exocrine pancreatic insufficiency was found and classified into severe (n = 25), moderate (n = 14) and mild (n = 23). 2) The correlation between fecal elastase-1 and chymotrypsin with duodenal enzyme outputs of amylase, lipase, trypsin, chymotrypsin and elastase-1 ranged between 33% and 55% and 25% and 38%, respectively. 3) Using a cut-off of 200 microg elastase-1/g stool, sensitivities of fecal elastase-1 and fecal chymotrypsin (cut-off: 6 U/g) were 100% and 76%, respectively (P < 0.0001 and P < 0.001, respectively) in severe exocrine pancreatic insufficiency, 89% and 47%, respectively (P < 0.001 and P = 0.34, respectively) in moderate, and 65% for both in mild pancreatic insufficiency. Specificities of elastase-1 and chymotrypsin in stool were 55% and 47%, respectively. 4) Elastase-1-based diagnostics provided a positive predictive value of 50% using a cut-off of 200 microg/g stool in a representative group of consecutively recruited patients with gastroenterological disorders. Conclusion: Determination of fecal elastase-1 is highly sensitive in the diagnosis of severe and moderate exocrine pancreatic insufficiency and is of significantly higher sensitivity than fecal chymotrypsin estimation. Specificity for both stool tests is low.
Correlation between elastase-1 and chymotrypsin in stool and duodenal enzyme outputs is moderate. Neither test is suitable for screening, as they provide a pathologic result in roughly half of 'non-pancreas' patients. abstract_id: PUBMED:29432279 13C-Mixed Triglyceride Breath Test and Fecal Elastase as an Indirect Pancreatic Function Test in Cystic Fibrosis Infants. Background: The 'gold standard' test for the indirect determination of pancreatic function status in infants with cystic fibrosis (CF), the 72-hour fecal fat excretion test, is likely to become obsolete in the near future. Alternative indirect pancreatic function tests with sufficient sensitivity and specificity to determine pancreatic phenotype need further evaluation in CF infants. Objective: Evaluation of the clinical utility of both the noninvasive, nonradioactive 13C-mixed triglyceride (MTG) breath test and fecal elastase-1 (FE1) in comparison with the 72-hour fecal fat assessment in infants with CF. Methods: The 13C-MTG breath test and the monoclonal and polyclonal FE1 assessments in stool were compared with the 72-hour fecal fat assessment in 24 infants with CF. Oral pancreatic enzyme substitution (PERT; if already commenced) was stopped before the tests. Results: Sensitivity rates between 82% and 100% for CF patients with pancreatic insufficiency assessed by both the 13C-MTG breath test and the FE1 tests proved to be high and promising. The 13C-MTG breath test (31%-38%) as well as both FE1 tests assessed by the monoclonal (46%-54%) and the polyclonal (45%) ELISA kits, however, showed unacceptably low sensitivity rates for the detection of pancreatic-sufficient CF patients in the present study. Conclusions: The 13C-MTG breath test with the nondispersive infrared spectroscopy (NDIRS) technique, as well as both FE1 tests, are not alternatives to the fecal fat balance test for the evaluation of pancreatic function in CF infants during the first year of life. abstract_id: PUBMED:15508057 The diagnostic validity of non-invasive pancreatic function tests--a meta-analysis. Background: The paper discusses the non-invasive (tubeless) pancreatic function tests used to diagnose exocrine pancreatic insufficiency (EI). Studies evaluating the diagnostic validity of these tests are integrated into a meta-analysis, provided that they comply with the following criteria: The sensitivity (Ss) of a test has to be calculated by comparing it with an invasive function test which is accepted as the gold standard of pancreatic function diagnostics. Furthermore, the test must differentiate between slight (sl), moderate (md) and severe (sv) EI. For assessment of the specificity (Sp), the control group should not contain healthy persons but rather patients with other gastrointestinal diseases and a normal pancreatic function. In the statistical evaluation, each study was weighted according to the number of persons included. Results: Tests (n = sum of persons included in all analysed studies): Fecal chymotrypsin: Ss (n = 169) 54 % (sl EI), 53 % (md EI), 89 % (sv EI), Sp (n = 202) 74 %. NBT-PABA test: Ss (n = 394) 49 % (sl EI), 64 % (md EI), 72 % (sv EI), Sp (n = 218) 83 %. Pancreolauryl test: Ss (n = 320) 63 % (sl EI), 76 % (md EI), 94 % (sv EI), Sp (n = 171) 85 %. Fecal elastase-1: Ss (n = 307) 54 % (sl EI), 75 % (md EI), 95 % (sv EI), Sp (n = 347) 79 %. Additional tests discussed but not included in the meta-analysis were fecal fat, (13)C breath tests, amino acid consumption test, serum tests.
Conclusion: None of the non-invasive pancreatic function tests is sensitive enough to diagnose reliably a slight to moderate exocrine pancreatic insufficiency. abstract_id: PUBMED:11077482 Preoperative laboratory diagnosis in pancreatic surgery--what is necessary? Some aspects of preoperative laboratory investigations are discussed in pancreatic surgery. Diagnosis of acute pancreatitis is not a major diagnostic problem with pancreatic enzyme measurement and with further regard to medical history and clinical presentation. However, early assessment of the prognosis of acute pancreatitis still remains a clinical challenge. The determination of C-reactive protein (CRP) and lactate dehydrogenase (LDH) is used successfully in clinical practice. The values of different serum markers are described in prognostic evaluation of acute pancreatitis, and some direct and indirect pancreatic-function tests are compared in preoperative management of patients with chronic pancreatitis. The most sensitive direct pancreatic-function test is the secretin-pancreozymin test (SP-test), among the indirect function tests estimation of faecal elastase 1 is superior. The significance of tumor marker measurement is described in pancreatic cancer. abstract_id: PUBMED:11324137 Value of combinations of pancreatic function tests to predict mild or moderate chronic pancreatitis. Unlabelled: Pancreatic function tests share an insufficient accuracy concerning the detection of mild or moderate forms of chronic pancreatitis. It was evaluated here whether by combination of different assays the prediction or exclusion of chronic pancreatitis could be improved. Methods: 62 patients with chronic abdominal pain and suspected chronic pancreatitis underwent an endoscopic retrograde pancreaticography. The duct alterations were classified according to the Cambridge criteria. In all individuals the pancreolauryl test in serum (PLT-S) and urine (PLT-U) was performed and elastase 1-immunoreactivity (E) as well as chymotrypsin (Chy) activity in stool were measured. Sensitivities, specificities, receiver-operator-curves as well as cut-off points at optimal accuracies were calculated for each single assay and all test combinations. Cut offs were optimized by a mathematical model to achieve highest accuracies. Results: In 30 patients the pancreatic duct was normal and in 32 patients alterations of the duct system were found. These were classified as mild in 10 patients, as moderate in 8 patients and as severe in 14 patients. In those with mild and moderate disease all pancreatic function tests showed sensitivities/specificities of 60-65% and 65-70%, respectively. Only in severe chronic pancreatitis elastase was superior to the other tests. Combinations of function tests did not lead to improved accuracy. After mathematical optimization the accuracy (sensitivity 80%, specificity 80%) was best for the combination of PLT-S (cut off 4.7 micrograms/ml) and E (cut off 500 micrograms/g). Both parameters had to be below these newly defined cut offs to diagnose chronic pancreatitis. Conclusions: The accuracy of pancreatic function tests may be improved by use of altered cut offs and a combination of serum pancreolauryl test and elastase. These newly defined cut offs will have to be evaluated in a much larger study. abstract_id: PUBMED:22266489 Comparison of fecal elastase-1 and pancreatic function testing in children. Objectives: The fecal pancreatic elastase-1 (FE-1) test is considered a simple, noninvasive, indirect measure of pancreatic function. 
We aimed to evaluate the performance of the FE-1 test compared with the direct pancreatic function test (PFT) with secretin stimulation in children. Methods: Data of 70 children (6 months-17 years of age) who had both the FE-1 test and PFT were analyzed. Results: The average FE-1 concentration was 403 ± 142 μg/g. Eleven children had concentrations below 200 μg/g, 23 between 201 and 500 μg/g, and 36 were above 500 μg/g. The average pancreatic elastase activity measured on direct stimulation was 49.1 ± 38.6 μmol·min(-1)·ml(-1), and 11 children had activity below the established cutoff (10.5 μmol·min(-1)·ml(-1)). Among the 11 children with pathologic PFT, 7 had normal FE-1, 4 were in the intermediate range (201-500 μg/g), and none were in the low range (<200 μg/g). Among the 59 children with normal direct PFT, 11 (19%) had pathologic (<200 μg/g) and 19 (32%) had intermediate FE-1 tests. Twenty-nine children had both normal FE-1 concentration and normal PFT, giving a negative predictive value of 80%. The correlation between pancreatic elastase activity and FE-1 concentration was poor (r = 0.190). The sensitivity of the FE-1 test was found to be 41.7%, whereas the specificity was 49.2%. The positive predictive value of the FE-1 test was only 14%. Conclusions: The FE-1 test is a simple, noninvasive, indirect method; however, ordering physicians should be aware of its limitations. It can give false-positive results and has low sensitivity in children with mild pancreatic insufficiency without cystic fibrosis and in those with isolated pancreatic enzyme deficiencies. abstract_id: PUBMED:9548629 Comparative clinical evaluation of the 13C-mixed triglyceride breath test as an indirect pancreatic function test. Background: Breath tests using stable isotopes of carbon or hydrogen are increasingly becoming established for the evaluation of various gastrointestinal functions, including measurement of exocrine pancreatic insufficiency. We wanted to evaluate the clinical relevance of the non-invasive, non-radioactive 13C-mixed triglyceride breath test in comparison with the secretin-caerulein test as the 'gold standard' of pancreatic function testing and with faecal chymotrypsin and elastase 1 in patients with mild and severe exocrine pancreatic insufficiency. Methods: The secretin-caerulein test, faecal fat analysis, 13C-mixed triglyceride breath test, faecal elastase 1, and chymotrypsin and various morphologic investigations were done in 26 patients with mild (n = 13) or severe (n = 13) exocrine pancreatic insufficiency and 25 patients with gastrointestinal diseases of non-pancreatic origin. Twenty-seven healthy volunteers served as normal controls. After a 12-h fast 200 mg mixed triglyceride (1,3-distearyl,2(carboxyl-13C)octanoyl glycerol) were orally administered with a test meal, and breath samples were taken before and at 30-min intervals for 5 h thereafter, and the increase in 13C/12C isotopic ratio in breath was analysed by mass spectrometry. Various modifications of the test procedure were investigated. Results: Specificity for impaired pancreatic function was higher for faecal elastase (90%) and equal for faecal chymotrypsin (82%) as compared with the various variables of the 13C-mixed triglyceride breath test (69-85%). The sensitivity of the 13C-mixed triglyceride breath test for total and separately for mild and severe exocrine pancreatic insufficiency was higher (total, 69-81%) than that of faecal chymotrypsin (total, 56%) but lower than faecal elastase (total, 92%).
Conclusion: The 13C-mixed triglyceride breath test very sensitively reflects severe exocrine pancreatic insufficiency (steatorrhoea) but has limited sensitivity for the detection of mild cases. With regard to the higher sensitivity and specificity, the higher practicability, and the much lower cost, determination of faecal elastase 1 concentrations is superior to the 13C-mixed triglyceride breath test and therefore remains the most reliable indirect pancreatic function test available today. abstract_id: PUBMED:29097865 Staging chronic pancreatitis with exocrine function tests: Are we better? Chronic pancreatitis (CP) is an inflammatory disease of the pancreas evolving into progressive fibrotic disruption of the gland with exocrine and endocrine pancreatic insufficiency. Although imaging features of CP are well known, their correlation with exocrine pancreatic function tests is not obvious, particularly in the early stage of the disease. There are many clinical classifications of CP, all suggested to better distinguish and manage different forms based on etiological and clinical factors, and severity of the disease. Recently, a new classification of CP has been suggested: the M-ANNHEIM multiple risk factor classification that includes etiology, stage classification and degree of clinical severity. However, more accurate determination of clinical severity of CP requires a correct determination of exocrine function of the pancreas and fecal fat excretion. Recently, Kamath et al. demonstrated that the evaluation of exocrine pancreatic function by acid steatocrit and fecal elastase-1 (EF-1) was helpful, but EF-1 was able to detect exocrine pancreatic insufficiency in more patients, upgrading some patients to a higher stage of disease according to the M-ANNHEIM classification. So, EF-1 is a more accurate test to determine exocrine pancreatic insufficiency and to stage chronic pancreatitis in the M-ANNHEIM classification. On the contrary, EF-1 determination shows low sensitivity in detecting exocrine pancreatic insufficiency in the early stage of the disease. abstract_id: PUBMED:7572904 Fecal elastase test: evaluation of a new noninvasive pancreatic function test. Objectives: Pancreatic elastase is highly stable along the intestinal tract. A new ELISA is commercially available to measure human-specific elastase-1 concentration in stool. We evaluated the behavior of this fecal elastase test (FET) compared with other indirect pancreatic function tests in patients with chronic pancreatitis (CP). Methods: A total of 69 patients were included in the study, 20 of whom were diagnosed with CP according to the findings on ERP and CT; 13 patients had other pancreatic diseases, and the remaining 36 patients had gastrointestinal or hepatic disorders. All patients' elastase-1 concentrations and chymotrypsin activities [fecal chymotrypsin test (FCT)] were measured, and the serum pancreolauryl test (PLT) was performed. Results: Similar to PLT, fecal elastase concentration was significantly decreased in patients with moderate and severe CP (assessed by ERP) compared with patients with extrapancreatic disorders. However, in contrast to PLT, FET was not affected by gastric resection, malabsorption due to intestinal disease, or marked alteration of the gastric motility. The sensitivity of FET was 100% for moderate to severe CP but 0% for mild CP; the specificity was 83%.
Compared with other indirect pancreatic function tests, FET appears to be as sensitive as PLT and as specific as FCT, and it is clearly more specific than PLT and more sensitive than FCT. Unlike FCT, FET was not affected by oral enzyme supplementation. Conclusion: FET is a simple and accurate functional test for CP, and it is hardly influenced by extrapancreatic disorders or therapy with exogenous enzymes. Answer: The determination of fecal elastase-1 (FE1) is a widely used indirect pancreatic function test, but it is not considered the 'gold standard' for all cases. While FE1 is highly sensitive in diagnosing severe and moderate exocrine pancreatic insufficiency, its specificity is low, and it has moderate correlation with duodenal enzyme outputs (PUBMED:11589385). The secretin-caerulein test is still regarded as the gold standard, particularly for patients with mild to moderate impairment of pancreatic function (PUBMED:11276375). FE1 has been shown to be more sensitive than fecal chymotrypsin estimation, but neither test is suitable for screening as they can yield pathological results in roughly half of the patients without pancreatic diseases (PUBMED:11589385). In cystic fibrosis infants, the FE1 test and the 13C-Mixed Triglyceride Breath Test (C-MTG) have not been found to be suitable alternatives to the fecal fat balance test for evaluating pancreatic function (PUBMED:29432279). A meta-analysis revealed that no non-invasive pancreatic function test is sensitive enough to reliably diagnose slight to moderate exocrine pancreatic insufficiency (PUBMED:15508057). In children, the FE1 test has limitations, with low sensitivity and a poor correlation with direct pancreatic function tests, leading to false-positive results (PUBMED:22266489). The 13C-mixed triglyceride breath test, while sensitive for severe exocrine pancreatic insufficiency, has limited sensitivity for mild cases and is less reliable and cost-effective compared to FE1 determination (PUBMED:9548629). In the context of chronic pancreatitis, FE1 can be helpful in staging the disease according to the M-ANNHEIM classification, particularly for detecting exocrine pancreatic insufficiency and upgrading patients to a higher stage of disease. However, its sensitivity in detecting exocrine pancreatic insufficiency in the early stage of the disease is low (PUBMED:29097865). In summary, while FE1 is a useful test for diagnosing severe and moderate exocrine pancreatic insufficiency, it is not the 'gold standard' across all scenarios, particularly for mild cases or screening purposes. The secretin-caerulein test remains the gold standard for assessing pancreatic function, especially in mild to moderate cases (PUBMED:11276375).
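Several of the abstracts above report sensitivity, specificity and predictive values of stool tests judged against a reference ('gold standard') function test. As a reminder of how these quantities follow from a simple 2x2 table, the Python sketch below computes them from cell counts; the counts are hypothetical and are not drawn from any of the cited studies.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    # tp/fn: diseased patients with a positive/negative index test
    # fp/tn: non-diseased patients with a positive/negative index test
    return {
        "sensitivity": tp / (tp + fn),  # diseased patients correctly detected
        "specificity": tn / (tn + fp),  # non-diseased patients correctly excluded
        "ppv": tp / (tp + fp),          # probability of disease given a positive test
        "npv": tn / (tn + fn),          # probability of no disease given a negative test
    }

# Hypothetical counts for an indirect test evaluated against a direct function test
print(diagnostic_metrics(tp=20, fp=10, fn=5, tn=65))

Note that, unlike sensitivity and specificity, the predictive values depend on how common pancreatic insufficiency is in the tested population, which is one reason the abstracts report low positive predictive values in unselected gastroenterological patients.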
Instruction: Does a tackling task effect shoulder joint position sense in rugby players? Abstracts: abstract_id: PUBMED:24315682 The effect of tackling on shoulder joint positioning sense in semi-professional rugby players. Objective: To assess the effect of a tackling task replicating the force magnitudes and directions seen in a competitive game or training session on a player's shoulder joint position sense. Design: Repeated measures design. Setting: Field based. Participants: Nineteen senior, male, semi-professional rugby union players. Main Outcome Measures: Two criterion angles of 45° and 20° off maximal range of shoulder external rotation in the 90° angle of abduction were assessed for reproduction accuracy prior to, and following, a field-based tackling task against an opponent. A comparison between dominant and non-dominant side accuracy was also obtained. Results: Prior to the tackling task, joint positioning sense was poorer at the 45° criterion angle than for 20° off the athletes' maximal range angle. Following the tackling task, error scores were significantly increased from baseline measures at the outer-range criterion angle for both dominant and non-dominant sides. In contrast to previous research, the detrimental effect of the task was also greater. In addition, there was a significant decrease in accuracy at the 45° criterion angle for the players' non-dominant side. Conclusions: This study found a significant decrease in accuracy of joint position sense following the tackling task. It also found this decrease to be greater than previous research findings. In contrast to previous studies that found no effect at the 45° criterion angle, this study found significant changes for the players' non-dominant side occurred at this angle. A possible explanation for this is that the sensory motor system is negatively affected by fatigue and consequently shoulder dynamic stability is reduced. This fatigue element explains the trend for increased injury frequency in the third quarter of the game and would provide a rationale for the inclusion of conditioning programmes that address fatigue resistance and motor co-ordination in the region. abstract_id: PUBMED:19083706 Does a tackling task effect shoulder joint position sense in rugby players? Objective: To assess the effect of a simulated tackling task on shoulder joint position sense (JPS) in rugby players. The study also aimed to assess if differences in JPS occurred between mid range and end of range JPS, and if the tackling task had angle-specific effects on these values. Design: Repeated measures. Setting: University human performance laboratory. Participants: Twenty-two asymptomatic professional rugby union players. Main Outcome Measures: JPS was assessed using two criterion angles in the 90 degrees shoulder abduction position (45 degrees and 80 degrees external rotation) prior to and following a simulated tackling task. Results: Prior to the tackling task, JPS (absolute error scores) was worse at the 45 degrees than the 80 degrees criterion angle (p<0.05). Following the tackling task, absolute error scores were significantly increased at the 80 degrees angle (p<0.001), with no significant change at the 45 degrees angle (p>0.05), and no significant difference was present for error scores between angles (p=0.74). Conclusions: This study found JPS to be significantly reduced following a fatiguing task. But this change was only true for the end of range position, with JPS in the mid range not changing.
If the mechanoreceptors are unable to accurately report shoulder position in the outer range (stretch) position due to repetitive tackling, then there is a potential for the anterior structures to become stressed before any compensatory muscle contraction can take place. These results highlight the presence of sensorimotor system deficits following repeated tackling. These deficits are proposed to contribute to overuse injuries and micro-instability of the glenohumeral joint which may be related to the increasing rate of shoulder injuries in rugby. abstract_id: PUBMED:31062539 Can a short neuromuscular warmup before tackling improve shoulder joint position sense in rugby players? Background: In rugby the tackle is a complex task requiring joint position sense (JPS). Injuries commonly occur during the tackle and these account for significant time lost from training and play. Simulated tackling tasks have previously shown a reduction in shoulder joint position sense and it is possible that this may contribute to injury. There is growing evidence in support of injury prevention programs, but none so far are dedicated specifically to tackling. We postulate that a brief neuromuscular warmup could alter the negative effects of fatigue on shoulder JPS. Methods: In this field based, repeated measures design study, 25 semi-professional Rugby players participated. JPS was measured at criterion angles of 45° and 80° of right arm shoulder external rotation. Reproduction accuracy prior to and following a neuromuscular warmup and simulated tackling task was then assessed. Results: In pre-warmup JPS measures, the spread of angle errors were larger at the 80° positions. Adding the warmup, the spread of the angle errors at the 80° positions decreased compared to pre-intervention measures. Two one-sided tests (TOST) analysis comparing pre- and post-testing angle errors, with the addition of the warmup, indicated no difference in JPS. Conclusions: The neuromuscular warmup resulted in a decrease in JPS error variance meaning fewer individuals made extreme errors. The TOST analysis results also suggest the neuromuscular warmup used in this study could mitigate the negative effects of tackling on JPS that has been seen in prior research. This neuromuscular warmup could play a role in preventing shoulder injuries. It can easily be added to existing successful injury prevention programs. abstract_id: PUBMED:25043695 Shoulder injuries in rugby players: mechanisms, examination, and rehabilitation. Background: The sport of rugby is growing in popularity for players at the high school and collegiate levels. Objective: This article will provided the sports therapist with an introduction to the management of shoulder injuries in rugby players. Summary: Rugby matches results in frequent impacts and leveraging forces to the shoulder region during the tackling, scrums, rucks and maul components of the game. Rugby players frequently sustain contusion and impact injuries to the shoulder region, including injuries to the sternoclavicular, acromioclavicular (AC), and glenohumeral (GH) joints. Players assessed during practices and matches should be screened for signs of fracture, cervical spine and brachial plexus injuries. A three phase program will be proposed to rehabilitate players with shoulder instabilities using rugby specific stabilization, proprioception, and strengthening exercises. A plan for return to play will be addressed including position-specific activities. 
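The warmup study summarized above (PUBMED:31062539) analysed pre- versus post-intervention angle errors with a two one-sided tests (TOST) equivalence procedure. As a rough illustration of how such a test works, the Python sketch below implements a paired TOST; the simulated error data, the sample size and the equivalence margin of ±2° are hypothetical assumptions and are not values from the study.

import numpy as np
from scipy import stats

def tost_paired(pre, post, margin):
    # Two one-sided tests for equivalence of paired means: equivalence is
    # supported (at alpha = 0.05) if both one-sided p-values are < 0.05,
    # i.e. the mean difference lies within +/- margin.
    diff = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    n = diff.size
    se = diff.std(ddof=1) / np.sqrt(n)
    t_lower = (diff.mean() + margin) / se   # H0: mean difference <= -margin
    t_upper = (diff.mean() - margin) / se   # H0: mean difference >= +margin
    p_lower = 1 - stats.t.cdf(t_lower, n - 1)
    p_upper = stats.t.cdf(t_upper, n - 1)
    return max(p_lower, p_upper)

# Hypothetical repositioning errors (degrees) before and after warmup plus tackling
rng = np.random.default_rng(0)
pre = rng.normal(4.0, 1.5, 25)
post = pre + rng.normal(0.2, 1.0, 25)
print("TOST p-value with a +/-2 degree margin:", round(tost_paired(pre, post, 2.0), 3))

A small TOST p-value supports the claim that post-task errors are equivalent to baseline within the chosen margin, which is the logic behind the study's conclusion that the warmup mitigated the usual post-tackling decline.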
abstract_id: PUBMED:20129119 Evaluation of shoulder joint position sense in both asymptomatic and rehabilitated professional rugby players and matched controls. Objective: To assess if joint position sense (JPS) in the shoulder differed between un-injured rugby players, matched control subjects and previously injured rehabilitated rugby players. Design: Mixed design. Setting: University biomechanics laboratory. Participants: 15 asymptomatic professional rugby union players, 15 previously injured professional rugby union players, 15 asymptomatic matched non-rugby playing controls had their JPS assessed. Main Outcome Measures: JPS was assessed using two criterion angles in the 90 degrees shoulder abduction position (45 degrees and 80 degrees external rotation). Results: The study found a significant difference between groups in error score (p=0.02). The testing angle also had a significant effect on error score (p=0.002), with greater error scores occurring in the mid range position. Conclusion: This study showed rugby players to have better JPS than controls, indicating JPS might not be related to injury risk. Poor JPS appears to be related to injury, players having sustained an injury have decreased JPS despite surgery and/or rehabilitation and returning to sport without incident. abstract_id: PUBMED:27398838 Specific tackling situations affect the biomechanical demands experienced by rugby union players. Tackling in Rugby Union is an open skill which can involve high-speed collisions and is the match event associated with the greatest proportion of injuries. This study aimed to analyse the biomechanics of rugby tackling under three conditions: from a stationary position, with dominant and non-dominant shoulder, and moving forward, with dominant shoulder. A specially devised contact simulator, a 50-kg punch bag instrumented with pressure sensors, was translated towards the tackler (n = 15) to evaluate the effect of laterality and tackling approach on the external loads absorbed by the tackler, on head and trunk motion, and on trunk muscle activities. Peak impact force was substantially higher in the stationary dominant (2.84 ± 0.74 kN) than in the stationary non-dominant condition (2.44 ± 0.64 kN), but lower than in the moving condition (3.40 ± 0.86 kN). Muscle activation started on average 300 ms before impact, with higher activation for impact-side trapezius and non-impact-side erector spinae and gluteus maximus muscles. Players' technique for non-dominant-side tackles was less compliant with current coaching recommendations in terms of cervical motion (more neck flexion and lateral bending in the stationary non-dominant condition) and players could benefit from specific coaching focus on non-dominant-side tackles. abstract_id: PUBMED:20381004 Cervical joint position sense in rugby players versus non-rugby players. Objective: To determine whether cervical joint position sense is modified by intensive rugby practice. Design: A group-comparison study. Setting: University Medical Bioengineering Laboratory. Participants: Twenty young elite rugby players (10 forwards and 10 backs) and 10 young non-rugby elite sports players. Interventions: Participants were asked to perform the cervicocephalic relocation test (CRT) to the neutral head position (NHP) that is, to reposition their head on their trunk, as accurately as possible, after full active left and right cervical rotation. Rugby players were asked to perform the CRT to NHP before and after a training session. 
Main Outcome Measurements: Absolute and variable errors were used to assess accuracy and consistency of the repositioning for the three groups of Forwards, Backs and Non-rugby players, respectively. Results: The 2 groups of Forwards and Backs exhibited higher absolute and variable errors than the group of Non-rugby players. No difference was found between the two groups of Forwards and Backs and no difference was found between Before and After the training session. Conclusions: The cervical joint position sense of young elite rugby players is altered compared to that of non-rugby players. Furthermore, Forwards and Backs demonstrated comparable repositioning errors before and after a specific training session, suggesting that cervical proprioceptive alteration is mainly due to tackling and not the scrum. abstract_id: PUBMED:27754783 The effects of fast bowling fatigue and adhesive taping on shoulder joint position sense in amateur cricket players in Victoria, Australia. The impact that muscle fatigue and taping have on proprioception in an applied sporting context remains unclear. The purpose of this study was to investigate disturbances in position sense at the shoulder joint, and assess the effectiveness of adhesive tape in preventing injury and improving performance, after a bout of cricket fast bowling. Among amateur cricket players (N = 14), shoulder position sense, maximum voluntary contraction (MVC) force and bowling accuracy were assessed before and immediately after a fatiguing exercise bout of fast bowling. Participants were tested with the shoulder taped and untaped. Shoulder extension MVC force dropped immediately and 30 min after the exercise (P < 0.05 and P < 0.05, respectively). Position sense errors increased immediately after exercise (P < 0.05), shifting in the direction of shoulder extension for all measurements. Taping had no effect on position errors before exercise, but did significantly reduce position errors after exercise at mid-range shoulder flexion angles (45° and 60°; P < 0.05 and P < 0.05, respectively). Taping had no significant effect on bowling accuracy. These findings may be explained by a body map shift towards a gravity-neutral position. Added cutaneous input from the tape is proposed to contribute more to shoulder position sense when muscles are fatigued. abstract_id: PUBMED:34827495 Head Accelerations during a 1-on-1 Rugby Tackling Drill Performed by Experienced Rugby Union Players. Rugby Union is a popular sport played by males and females worldwide, from junior to elite levels. The highly physical skill of tackling occurs every few seconds throughout a match and various injuries associated with tackling are relatively common. Of particular interest are head injuries that result in a concussion. Recently, repeated non-injurious head impacts in sport have attracted the attention of researchers interested in brain health. Therefore, this study assessed head movement during repeated rugby tackle drills among experienced Rugby Union players. Experienced male and female participants performed 15 1-on-1 tackles in a motion analysis laboratory to measure the head movements of the ball carrier and tackler during each tackle, using three-dimensional motion capture. The average peak acceleration of the head for ball carriers was 28.9 ± 24.08 g and 36.67 ± 28.91 g for the tacklers.
This study found that the type of head impacts common while performing a tackle in Rugby Union are similar to those experienced by soccer players during heading, which has been found to alter brain function that lasts hours after the event. This has important implications for player health and suggests that mitigation strategies should be considered for Rugby Union. abstract_id: PUBMED:38425739 Reduced shoulder proprioception due to fatigue after repeated handball throws and evaluation of test-retest reliability of a clinical shoulder joint position test. Background: Proprioception is vital for motor control and can be disturbed, for example, due to fatigue or injury. Clinical feasible, reliable and valid tests of shoulder proprioception are warranted. The aim was to investigate the effects of local fatigue on shoulder proprioception and the reliability of a feasible joint position sense test using an experimental repeated measures design. Method: Forty participants repeated a shoulder joint position sense test to assess test-retest reliability. The test was then utilized on a subgroup of handball players who were subjected to five bouts of a repeated throwing task with the dominant hand. The effect of local fatigue was investigated by comparing the fatigued with the non-fatigued shoulder. Results: There was a significant interaction for the arm × bout (p = 0.028, ηp2 = 0.20) and a significant effect for the arm (p = 0.034, ηp2 = 0.35) with a significant decrease in joint position sense for the throwing arm compared to the non-throwing arm. The intraclass correlation coefficient was 0.78 (95% CI = [0.57; 0.89]). The standard error of measurement between trials was 0.70° (range: 0.57°-0.90°). Discussion: The results indicate that repeated throwing to fatigue disturbs shoulder joint position sense. Assessment with the modified test showed acceptable reliability and can be a valuable assessment tool in the clinic. Answer: Yes, a tackling task does affect shoulder joint position sense (JPS) in rugby players. Studies have shown that JPS is significantly reduced following a fatiguing task such as tackling. For instance, one study found that JPS was worse at a 45 degrees criterion angle compared to an 80 degrees angle prior to a tackling task. After the tackling task, absolute error scores were significantly increased at the 80 degrees angle, indicating a reduction in JPS at the end of the range position (PUBMED:19083706). Another study reported a significant decrease in accuracy of JPS following a tackling task, with the detrimental effect being greater than previous research findings. This decrease was also greater for the players' non-dominant side at the 45 degrees criterion angle (PUBMED:24315682). Furthermore, specific tackling situations have been shown to affect the biomechanical demands experienced by rugby union players, which could influence JPS (PUBMED:27398838). Additionally, rugby players have been found to have altered cervical joint position sense compared to non-rugby players, which could be due to the demands of tackling and not scrum activities (PUBMED:20381004). However, interventions such as a brief neuromuscular warmup before tackling have been suggested to mitigate the negative effects of fatigue on shoulder JPS. A study found that a neuromuscular warmup resulted in a decrease in JPS error variance, suggesting it could prevent shoulder injuries by maintaining better JPS during tackling (PUBMED:31062539). 
In summary, tackling tasks in rugby can negatively affect shoulder joint position sense, but specific warmup routines may help to counteract these effects.
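One of the abstracts above (PUBMED:38425739) summarizes the test-retest reliability of a joint position sense test with an intraclass correlation coefficient (ICC) and a standard error of measurement (SEM). The Python sketch below shows the standard relationship between the two, SEM = SD * sqrt(1 - ICC); the between-subject SD and ICC values used here are hypothetical and are not taken from the paper.

import math

def sem_from_icc(sd: float, icc: float) -> float:
    # Standard error of measurement from the between-subject SD and the
    # test-retest ICC: SEM = SD * sqrt(1 - ICC).
    return sd * math.sqrt(1.0 - icc)

# Hypothetical values: between-subject SD of 1.5 degrees, ICC of 0.78
print(round(sem_from_icc(1.5, 0.78), 2), "degrees")

The SEM expresses, in the units of the test, how much a repeated measurement is expected to vary in the absence of a true change, which is useful when judging whether a post-tackling change in joint position sense exceeds measurement noise.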
Instruction: Are airways structural abnormalities more frequent in children with recurrent lower respiratory tract infections? Abstracts: abstract_id: PUBMED:24709380 Are airways structural abnormalities more frequent in children with recurrent lower respiratory tract infections? Unlabelled: We report bronchoscopic changes observed in children with recurrent lower airways infections (RLAI) and findings in control children undergoing bronchoscopy for causes other than RLAI. Patients And Methods: Retrospective case-control cohort study. The clinical records of children who had fiberoptic bronchoscopy (FB) for a history of RLAI without any known underlying disorder between 2007 and 2013 and of control children who required FB for other causes were reviewed. Clinical features, bronchoscopic findings and bronchoalveolar lavage (BAL) results were assessed. Results: Cases were 62 (32 female) children aged 5 years (1-12) and controls 29 children aged 4.5 years (0.5-14). Airway malacia was observed in 32 (52%) vs. 4 (13%) (p = 0.001), profuse respiratory secretions in 34 (55%) vs. 6 (20%) (p = 0.007). Endobronchial obstruction: 4 (6.4%) and tracheobronchomegaly were observed only in cases. In cases with profuse respiratory secretions there was a higher prevalence of airways malacia: 64.7% vs. 35.7% (p = 0.04) and of positive BAL cultures: 45.5% vs. 13.3% (p = 0.04). The organisms isolated in cases were most frequently non-typable Haemophilus influenzae and Streptococcus pneumoniae. Pneumocystis jirovecii, Staphylococcus aureus, and Streptococcus mitis were isolated in controls. Conclusions: Half of the children with RLAI had tracheo- and/or bronchomalacia, their frequency being in keeping with previous reports and far higher than that observed in controls. It was associated with profuse respiratory secretions and with a higher frequency of positive BAL cultures, mostly for non-typable H. influenzae and S. pneumoniae, which were not isolated in controls. abstract_id: PUBMED:26078665 Profile of the patients who present to immunology outpatient clinics because of frequent infections. Aim: We aimed to determine the rate of primary immune deficiency (PID) among children presenting to our immunology outpatient clinic with a history of frequent infections and with warning signs of primary immune deficiency. Material And Methods: The files of 232 children aged between 1 and 18 years with warning signs of primary immune deficiency who were referred to our pediatric immunology outpatient clinic with a complaint of frequent infections were selected and evaluated retrospectively. Results: Thirty-six percent of the subjects were female (n=84) and 64% were male (n=148). PID was found in 72.4% (n=164). The most common diagnosis was selective IgA deficiency (26.3%, n=61). The most common diseases other than primary immune deficiency included reactive airway disease and/or atopy (34.4%, n=22), adenoid vegetation (12.3%, n=8), chronic disease (6.3%, n=4) and periodic fever, aphthous stomatitis and adenopathy (4.6%, n=3). The majority of the subjects (90.5%, n=210) presented with a complaint of recurrent upper respiratory tract infection. PID was found in all subjects who had bronchiectasis.
The rates of the diagnoses of variable immune deficiency and Bruton agammaglobulinemia (XLA) were found to be significantly higher in the subjects who had lower respiratory tract infection, who were hospitalized because of infection and who had a history of severe infection compared to the subjects who did not have these properties (p<0.05 and p<0.01, respectively). Growth and developmental failure was found at a significantly higher rate in the patients who had a diagnosis of severe combined immune deficiency or hyper IgM compared to the other subjects (p<0.01). No difference was found in the rates of PID between the age groups, but the diagnosis of XLA increased as the age of presentation increased, and this was considered an indicator that patients with XLA were being diagnosed late. Conclusions: It was found that the rate of diagnosis was considerably high (72.4%) when the subjects who had frequent infections were selected by the warning signs of PID. abstract_id: PUBMED:32432064 Time to Say Goodbye to Bronchiolitis, Viral Wheeze, Reactive Airways Disease, Wheeze Bronchitis and All That. The diagnosis and management of infants and children with a significant viral lower respiratory tract illness remains the subject of much debate and little progress. Over the decades various terms for such illnesses have come into and fallen out of fashion or have evolved to mean different things to different clinicians. Terms such as "bronchiolitis," "reactive airways disease," "viral wheeze," and many more are used to describe the same condition and the same term is frequently used to describe illnesses caused by completely different dominant pathologies. This lack of clarity is due, in large part, to a failure to understand the basic underlying inflammatory and associated processes and, in part, due to the lack of a simple test to identify a condition such as asthma. Moreover, there is a lack of insight into the fact that the same pathology can produce different clinical signs at different ages. The consequence is that terminology and fashions in treatment have tended to go around in circles. As was noted almost 60 years ago, amongst pre-school children with a viral LRTI and airways obstruction there are those with a "viral bronchitis" and those with asthma. In the former group, a neutrophil-dominated inflammatory response is responsible for the airways' obstruction whilst amongst asthmatics much of the obstruction is attributable to bronchoconstriction. The airways obstruction in the former group is predominantly caused by airways secretions and to some extent mucosal oedema (a "snotty lung"). These patients benefit from good supportive care including supplemental oxygen if required (though those with a pre-existing bacterial bronchitis will also benefit from antibiotics). For those with a viral exacerbation of asthma, characterized by bronchoconstriction combined with impaired β-agonist responsiveness, standard management of an exacerbation of asthma (including the use of steroids to re-establish bronchodilator responsiveness) represents optimal treatment. The difficulty is identifying which group a particular patient falls into. A proposed simplified approach to the nomenclature used to categorize virus associated LRTIs is presented based on an understanding of the underlying pathological processes and how these contribute to the physical signs.
Down syndrome is associated with a significant health burden, which is particularly apparent in young children who will frequently present with cardiac and respiratory problems. Respiratory presentations include problems related to structural abnormalities of the airways and lungs, glue ears, recurrent lower respiratory tract infections and obstructive sleep apnoea. These conditions are readily identifiable and able to be treated. An awareness of the breadth of respiratory problems and a plan to monitor patients with Down syndrome for their development has the potential to improve outcomes. abstract_id: PUBMED:31733527 Perfluoroalkyl substances, airways infections, allergy and asthma related health outcomes - implications of gender, exposure period and study design. Introduction: Exposure to perfluoroalkyl substances (PFASs) has been inconsistently associated with asthma, allergic diseases and airways infections in early childhood. The aim of the study was, therefore, to investigate the effect of childhood exposure to PFASs on asthma and allergy related outcomes and on airways infections before and during puberty using the prospective birth cohort Environment and Childhood Asthma (ECA) Study. Aspects of gender, exposure period and study design (cross-sectional and longitudinal) were also taken into consideration. Material And Methods: Included in the study was 378 participants with PFAS measurements at age 10 years and follow-up data at ages 10 years (cross sectional data) and 16 years (longitudinal data). Eight PFASs with at least 70% of measurements above the limit of quantification (LOQ) in the child's serum were included in the present study: perfluoroheptanoate (PFHpA), perfluorooctanoate (PFOA), perfluourononanoate (PFNA), perfluorodecanoate (PFDA), perfluoroundecanoate (PFUnDA), perfluorohexane sulfonate (PFHxS), perfluoroheptane sulfonate (PFHpS) and perfluorooctane sulfonate (PFOS). The PFAS levels were converted into interquartile range (IQR). In addition, perfluorooctane sulfonamide (PFOSA) detected in 60% of the samples, was recoded into "not detected /detected". Binomial, multinomial and linear regression were used, followed by Bonferroni adjustment to correct for multiple comparisons. Sensitivity analyses evaluating the effect of extreme PFAS values and gender were performed. Results: In the cross sectional data at 10 years a positive statistically significant association was seen between PFHpA and asthma in girls. In the longitudinal data, PFNA, PFDA and PFUnDA were inversely associated with atopic dermatitis (AD) in girls and with PFHxS in all participants and in boys. Further, PFNA and PFHpS were positively associated with rhinitis in girls and with PFOA in all participants. There seems to be a suggestive pattern of increased risk of allergic sensitisation in all participants and a decreased risk in boys, but due to different results in main and sensitivity analyses these findings should be interpreted with caution. No associations were found between PFASs and lung function. For airways infections and longitudinal data, PFDA was inversely associated with common cold, while positive association was found for PFHpA, PFOA, PFHpS and PFOS and lower respiratory tract infections (LRTI). Discussion And Conclusion: Our results lend further support for an immunosuppressive effect of PFASs on AD and LRTI. Gender seems to be important for some exposure-health associations. 
No clear pattern in exposure-health associations was observed with regard to exposure period or study design, with the exception of asthma where significant findings have mostly been reported in cross-sectional studies. abstract_id: PUBMED:25254161 Burden of respiratory syncytial virus infection in young children. Respiratory syncytial virus (RSV) is the most frequent and important cause of lower respiratory tract infection in infants and children. It is a seasonal virus, with peak rates of infection occurring annually in the cold season in temperate climates, and in the rainy season, as temperatures fall, in tropical climates. High risk groups for severe RSV disease include infants below six mo of age, premature infants with or without chronic lung disease, infants with hemodynamically significant congenital heart disease, infants with immunodeficiency or cystic fibrosis, and infants with neuromuscular diseases. Mortality rates associated with RSV infection are generally low in previous healthy infants (below 1%), but increase significantly in children with underlying chronic conditions and comorbidities. Following early RSV lower respiratory tract infection, some patients experience recurrent episodes of wheezing mimicking early childhood asthma with persistence of lung function abnormalities until adolescence. There is currently no RSV vaccine available, but promising candidate vaccines are in development. Palivizumab, a monoclonal RSV antibody that is the only tool for immunoprophylaxis in high-risk infants, lowers the burden of RSV infection in certain carefully selected patient groups. abstract_id: PUBMED:34114631 Health Workers' Practices in Assessment and Management of Children with Respiratory Symptoms in Primary Care Facilities in Uganda: A FRESH AIR Descriptive Study. Introduction: Globally, acute lower respiratory infections are the leading cause of mortality among children under 5 years. Following World Health Organization primary care guidelines, pneumonia is diagnosed based on cough/difficult breathing and fast breathing. We aimed to describe the practices of healthcare workers in primary care health facilities in Uganda in the management of young children with respiratory symptoms especially regarding asthma as opposed to pneumonia. Methods: Health workers were observed during clinical consultations with children 1-59 months of age presenting with cough and/or difficult breathing at recruitment. Afterward, an exit interview with the caregiver was conducted. Health center availability of clinical guidelines, equipment and supplies for management of children with respiratory symptoms was assessed systematically. Results: A total of 218 consultations with 50 health workers at six health centers were included. Median consultation time was 4 min. Health workers asked history relevant to distinguishing asthma from pneumonia in 16% of consultations. The respiratory rate was counted in 10%. Antibiotics were prescribed to 32% of all the children and to 39% of children diagnosed with pneumonia. Caregivers reported being informed of findings and possible diagnosis in 5% of cases. Medicine and equipment needed for diagnosing and treating asthma were generally unavailable. Conclusion: Clinical practices among Ugandan health workers in primary care are insufficient to distinguish between main causes of respiratory symptoms, especially asthma as opposed to pneumonia, in children under five. Irrational use of antibiotics is widespread. Clear communication with caregivers is lacking. 
This could be due to lack of relevant competencies, medicines, time and supplies. Lay Summary: Globally, the most frequent cause of death for children under five is infections in the lower airways. The World Health Organization recommends that in local health clinics this is defined as cough/difficult breathing and fast breathing. This article focuses on the practices of local health workers in Uganda and how they in practice diagnose and treat children under five with these symptoms. In addition, we try to estimate how much the caregivers of the children understand from the consultation. This is done by observing the healthcare workers (HCWs) and by interviewing the caregivers. In general, we found that the consultations were too short, that too few of the health workers looked for important signs for lower airways disease such as fast breathing and that antibiotics were prescribed in too many of the consultations. Also, the length and quality of the consultations and the supplies at the local health clinics were not sufficient to diagnose and treat asthma, which can often be mistaken for an infection. We believe that it is an important problem that too few children with asthma are being diagnosed correctly and that antibiotics are being prescribed too frequently, the latter being an important cause of antibiotic resistance. Relevant action must be taken to improve this. abstract_id: PUBMED:36750739 Tobacco smoke exposure, the lower airways microbiome and outcomes of ventilated children. Background: Tobacco smoke exposure increases the risk and severity of lower respiratory tract infections in children, yet the mechanisms remain unclear. We hypothesized that tobacco smoke exposure would modify the lower airway microbiome. Methods: Secondary analysis of a multicenter cohort of 362 children between ages 31 days and 18 years mechanically ventilated for >72 h. Tracheal aspirates from 298 patients, collected within 24 h of intubation, were evaluated via 16S ribosomal RNA sequencing. Smoke exposure was determined by creatinine-corrected urine cotinine levels ≥30 µg/g. Results: Patients had a median age of 16 (IQR 568) months. The most common admission diagnosis was lower respiratory tract infection (53%). Seventy-four (20%) patients were smoke exposed and exhibited decreased richness and Shannon diversity. Smoke-exposed children had higher relative abundances of Serratia spp., Moraxella spp., Haemophilus spp., and Staphylococcus aureus. Differences were most notable in patients with bacterial and viral respiratory infections. There were no differences in development of acute respiratory distress syndrome, days of mechanical ventilation, ventilator-free days at 28 days, length of stay, or mortality. Conclusion: Among critically ill children requiring prolonged mechanical ventilation, tobacco smoke exposure is associated with decreased richness and Shannon diversity and changes in microbial communities. Impact: Tobacco smoke exposure is associated with changes in the lower airways microbiome but is not associated with clinical outcomes among critically ill pediatric patients requiring prolonged mechanical ventilation. This study is among the first to evaluate the impact of tobacco smoke exposure on the lower airway microbiome in children. This research helps elucidate the relationship between tobacco smoke exposure and the lower airway microbiome and may provide a possible mechanism by which tobacco smoke exposure increases the risk for poor outcomes in children.
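The ventilated-children study above characterizes the lower airway microbiome with richness (the number of taxa observed) and the Shannon diversity index. The Python sketch below computes the Shannon index from taxon read counts; the genus names and counts are hypothetical and only illustrate the calculation.

import math
from collections import Counter

def shannon_diversity(counts) -> float:
    # Shannon index H' = -sum(p_i * ln(p_i)) over taxa with non-zero counts.
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical genus-level read counts from a single tracheal aspirate
sample = Counter({"Serratia": 420, "Moraxella": 130, "Haemophilus": 90, "Staphylococcus": 60})
print("richness:", len(sample), " Shannon:", round(shannon_diversity(sample.values()), 3))

Lower values of either measure indicate a community dominated by fewer taxa, which is the pattern the study associates with tobacco smoke exposure.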
abstract_id: PUBMED:29984134 Lung function in HIV-infected children and adolescents. Background: The advent of antiretroviral therapy has led to the improved survival of human immunodeficiency virus (HIV)-infected children to adulthood and to HIV becoming a chronic disease in older children and adolescents. Chronic lung disease is common among HIV-infected adolescents. Lung function measurement may help to delineate the spectrum, pathophysiology and guide therapy for HIV-related chronic lung disease. Aim: The aim of this study was to review the available data on the spectrum and determinants of lung function abnormalities and the impact of antiretroviral therapy on lung function in perinatally HIV-infected children and adolescents. Methods: Electronic databases "PUBMED", "African wide" and "CINAHL" via EBSCO Host, using the MeSH terms "Respiratory function" AND "HIV" OR "Acquired Immunodeficiency Syndrome" AND "Children" OR "Adolescents", were searched for relevant articles on lung function in HIV-infected children and adolescents. The search was limited to English language articles published between January 1984 and September 2017. Results: Eighteen articles were identified, which included studies from Africa, the United States of America (USA) and Italy, representing 2051 HIV-infected children and adolescents, 68% on antiretroviral therapy, aged from 50 days to 24 years. Lung function abnormalities showed HIV-infected participants had increased irreversible lower airway expiratory obstruction and reduced functional aerobic impairment on exercise, compared to HIV-uninfected participants. Mosaic attenuation, extent of bronchiectasis, history of previous pulmonary tuberculosis or previous lower respiratory tract infection and cough for more than 1 month were associated with low lung function. Pulmonary function tests in children established on antiretroviral therapy did not show aerobic impairment and had less severe airway obstruction. Conclusion: There is increasing evidence that HIV-infected children and adolescents have high prevalence of lung function impairment, predominantly irreversible lower airway obstruction and reduced aerobic function. abstract_id: PUBMED:38134318 Pediatric Patients with Tracheostomies and Its Multifacet Association with Lower Airway Infections: An 8-Year Retrospective Study in a Large Tertiary Center. Background: Lower respiratory tract infections frequently complicate the care of children with chronic tracheostomies. Pediatric patients have significantly more risk to have tracheostomy infections than adults. Better understanding of modifiable risk factors for pulmonary exacerbations may improve the care of technology-dependent children. Methods: A retrospective single-center cohort study conducted on children with tracheostomy and chronic home ventilator to determine the incidence of pulmonary exacerbations leading to hospitalizations, emergency room (ER) visits, and antibiotic prescriptions. Oral and nebulized antibiotic prescriptions were collected and correlated to the type of exacerbation. Results: Gram-negative enteric organisms were the most common microbes seen in the lower airways, with Pseudomonas aeruginosa cultured in 86% of the subjects. P. aeruginosa presence predicted a 4-fold increased rate of pulmonary-related hospitalization. In pediatric patients with chronic respiratory failure, 64% of readmissions were pulmonary or tracheostomy related. 
When compared to standard care, subjects on dual-agent, alternating monthly nebulized antibiotic therapy (for chronic Pseudomonas colonization) experienced 41% fewer hospitalizations [incidence rate ratios (IRR) 0.59 (0.18), P = 0.08], 46% fewer ER visits [IRR 0.56 (0.16), P = 0.04], and 41% fewer pulmonary-related ER visits [IRR 0.59 (0.19), P = 0.94]. Discussion: Children who require artificial airways are at an increased risk for bacterial bronchopulmonary infections. Most notable risk factors for hospitalization in tracheostomized children included neurologic impairment, dysphagia, aspiration, gastrostomy tube dependence, and gastroesophageal reflux disease. Pathogenic microbes such as P. aeruginosa species, certain gram-negative bacteria, candida, and yeast also predicted increased hospitalizations. Use of nebulized antibiotic prophylaxis in a subset of patients predicted lower rates of hospitalization or ER visits. More studies are needed to assess whether there is increased antimicrobial resistance with this strategy, and whether the benefits persist with long-term nebulized antibiotic utilization. Answer: Yes, airways structural abnormalities are more frequent in children with recurrent lower respiratory tract infections (RLAI). A study reported that half of the children with RLAI had tracheo- and/or bronchomalacia, which is a higher frequency compared to controls. This was associated with profuse respiratory secretions and a higher frequency of positive bronchoalveolar lavage (BAL) cultures, mostly for non-typable Haemophilus influenzae and Streptococcus pneumoniae, which were not isolated in controls (PUBMED:24709380).
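The "% fewer" figures quoted in the tracheostomy abstract above (PUBMED:38134318) follow directly from the incidence rate ratios it reports: the implied percent reduction is (1 - IRR) x 100, so an IRR of 0.59 corresponds to roughly 41% fewer events. The short Python sketch below is illustrative only; it assumes the published IRRs are rounded to two decimals, which is why IRR 0.56 sits beside a stated 46% rather than exactly 44%.

# Percent reduction implied by an incidence rate ratio (IRR).
# Illustrative check of the figures quoted in PUBMED:38134318; small
# discrepancies arise because the published IRRs are rounded.
def percent_reduction(irr: float) -> float:
    """Percent reduction in event rate implied by an IRR."""
    return (1.0 - irr) * 100.0

reported = [
    ("hospitalizations", 0.59, 41),
    ("ER visits", 0.56, 46),
    ("pulmonary-related ER visits", 0.59, 41),
]

for label, irr, stated_pct in reported:
    implied = percent_reduction(irr)
    print(f"{label}: IRR {irr} -> implied {implied:.0f}% fewer (abstract states {stated_pct}%)")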
Instruction: Should we ventilate or cool the pulmonary graft inside the non-heart-beating donor? Abstracts: abstract_id: PUBMED:14585384 Should we ventilate or cool the pulmonary graft inside the non-heart-beating donor? Background: The ideal preservation method during the warm ischemic period in the non-heart-beating donor (NHBD) remains unclear. In this study we compare the protective effect of ventilation vs cooling of the non-perfused pulmonary graft. Methods: Domestic pigs (30.8 +/- 0.35 kg) were divided into 3 groups. In Group I, lungs were flushed with cold Perfadex solution, explanted and stored in saline (4 degrees C) for 4 hours (HBD, n = 5). Pigs in the 2 study groups were killed by myocardial fibrillation and left untouched for 1 hour. Lungs in Group II were ventilated (NHBD-V, n = 5) for 3 hours. Lungs in Group III were topically cooled (NHBD-TC, n = 5) in situ for 3 hours with saline (6 degrees C) infused via intra-pleural drains. Thereafter, the left lungs from all groups were prepared for evaluation. In an isolated circuit the left lungs were ventilated and reperfused via the pulmonary artery (PA) with autologous, hemodiluted, deoxygenated blood. Hemodynamic, aerodynamic and oxygenation parameters were measured at 37.5 degrees C and a PA pressure of 20 mm Hg. The wet:dry weight ratio (W/D) was calculated after reperfusion. Results: Pulmonary vascular resistance, oxygenation index and W/D weight ratio were significantly worse in NHBD-V (3,774 +/- 629 dyn sec cm(-5), 3.43 +/- 0.5, 6.98 +/- 0.42, respectively) compared with NHBD-TC (1,334 +/- 140 dyn sec cm(-5), 2.47 +/- 0.14, 5.72 +/- 0.24, respectively; p < 0.01, p < 0.05 and p < 0.05, respectively) and HBD (1,130 +/- 91 dyn sec cm(-5), 2.25 +/- 0.09, 5.23 +/- 0.49, respectively; p < 0.01, p < 0.01 and p < 0.05, respectively) groups. No significant differences were observed, however, in any of these parameters between NHBD-TC and HBD (p = 0.46, p = 0.35 and p = 0.12, respectively). Conclusion: These results indicate that cooling of the pulmonary graft inside the cadaver is the preferred method in an NHBD protocol. It is also confirmed that 1 hour of warm ischemia does not diminish graft function upon reperfusion. abstract_id: PUBMED:24527871 Effects of exogenous surfactant on the non-heart-beating donor lung graft in experimental lung transplantation - a stereological study. The use of non-heart-beating donor (NHBD) lungs may help to overcome the shortage of lung grafts in clinical lung transplantation, but warm ischaemia and ischaemia/reperfusion injury (I/R injury) resulting in primary graft dysfunction represent a considerable threat. Thus, better strategies for optimized preservation of lung grafts are urgently needed. Surfactant dysfunction has been shown to contribute to I/R injury, and surfactant replacement therapy is effective in enhancing lung function and structural integrity in related rat models. In the present study we hypothesize that surfactant replacement therapy reduces oedema formation in a pig model of NHBD lung transplantation. Oedema formation was quantified with (SF) and without (non-SF) surfactant replacement therapy in interstitial and alveolar compartments by means of design-based stereology in NHBD lungs 7 h after cardiac arrest, reperfusion and transplantation. A sham-operated group served as control. In both NHBD groups, nearly all animals died within the first hours after transplantation due to right heart failure.
Both SF and non-SF developed an interstitial oedema of similar degree, as shown by an increase in septal wall volume and arithmetic mean thickness as well as an increase in the volume of peribronchovascular connective tissue. Regarding intra-alveolar oedema, no statistically significant difference could be found between SF and non-SF. In conclusion, surfactant replacement therapy cannot prevent poor outcome after prolonged warm ischaemia of 7 h in this model. While the beneficial effects of surfactant replacement therapy have been observed in several experimental and clinical studies related to heart-beating donor lungs and cold ischaemia, it is unlikely that surfactant replacement therapy will overcome the shortage of organs in the context of prolonged warm ischaemia, for example, 7 h. Moreover, our data demonstrate that right heart function and dysfunctions of the pulmonary vascular bed are limiting factors that need to be addressed in NHBD. abstract_id: PUBMED:26121917 Controlled non-heart beating donor lung transplantation: initial experience in Spain. Although the number of lung transplants in Spain is increasing annually, more organs are required to ease waiting lists. Controlled non-heart beating donors (NHBD) (Maastricht III) are a reality at international level, and contribute significantly to increasing donor numbers. In this study, we present our NHBD protocol and the initial experience in Spain using lung grafts from this type of donor. Three bilateral lung transplants were performed between January 2012 and December 2014. Preservation was by ex-vivo lung perfusion in 2 cases and by traditional cold ischemia in the other. None of the patients developed grade 3 primary graft dysfunction, no in-hospital mortality was recorded and 1-year survival was 100%. These initial results, and international experience, should help to develop similar protocols to encourage the use of controlled non-heart beating donors. abstract_id: PUBMED:20191461 Non-heart-beating-donor transplant: the first experience in Italy A promising way to increase the number of kidneys for transplantation is to expand the donor pool by including non-heart-beating donors (NHBDs). The centers involved in NHBD transplantation programs have reported a 16-40% increase in kidney transplants. A key issue with NHBD is the significantly higher rate of delayed graft function (DGF) and primary non-function (PNF) compared with that associated with heart-beating donor (HBD) transplants. However, although transplants from NHBD are associated with a greater incidence of early adverse events, long-term graft survival appears to be similar to that observed after transplants from HBDs. In addition, the use of extracorporeal membrane oxygenation and mechanical perfusion, the careful selection of recipients and donors, and an adequate therapeutic strategy may at least partially reduce the risk of PNF and DGF and improve transplant outcome. abstract_id: PUBMED:23739607 Non-heart-beating kidney donors: first improve ischemic times, then allocation A study recently published in The Lancet investigated the 3-year graft survival of heart-beating (HB) and non-heart-beating (NHB) kidney donors. This was done utilizing the UK registry. The study concluded that donor age affects graft survival. It also demonstrated that NHB kidneys tolerate cold ischemia less well than kidneys from HB donors.
Based on these conclusions, it is suggested that a different allocation algorithm should be designed in order to reduce ischemic time for NHB donor kidneys. These findings are relevant to the Dutch situation in which more than 50% of postmortal kidney donors are now NHB. Despite past efforts, ischemic times in the Netherlands can still be improved compared to neighbouring countries. It is proposed that this matter be dealt with prior to changes in the allocation algorithm. abstract_id: PUBMED:10881841 Pulmonary graft function after long-term preservation of non-heart-beating donor lungs. Background: Critical organ shortage in lung transplantation could be attenuated by the use of non-heart-beating donor (NHBD) lungs. In addition, prolonged ischemic tolerance of the organs would contribute to the alleviation of organ shortage. The aim of this study was to investigate pulmonary graft function of NHBD lungs after long-term hypothermic storage. Methods: Twelve native-bred pigs (bodyweight 20 to 30 kg) underwent left lung allotransplantation. In the heart-beating donor (HBD) group, lungs were harvested immediately after cardiac arrest. In the NHBD group, lungs were subjected to a warm ischemic period of 90 minutes before harvesting. After a total ischemic time of 19 hours, pulmonary grafts in both groups were reperfused and pulmonary graft function was assessed. All values were compared with a sham-operated control group. Results: Pulmonary graft function in the HBD group was excellent. In the NHBD group, pulmonary gas exchange was impaired, but still provided good graft function compared with the excellent graft function in the HBD group. Pulmonary vascular resistance was even lower in the NHBD group. In the NHBD group, calculated intrapulmonary shunt fraction (Qs/Qt) was significantly increased compared with the sham-group. Histologic alteration and wet-to-dry ratio did not differ significantly between the HBD and NHBD group. Conclusions: We conclude that NHBD lungs (90 minutes of warm ischemic time) have the potential to alleviate organ shortage in lung transplantation even after an extended total ischemic time. abstract_id: PUBMED:18765195 Impact of topical cooling solution and prediction of pulmonary graft viability from non-heart-beating donors. Background: Functional assessment of the potentially damaged graft from a non-heart-beating donor (NHBD) is mandatory for successful outcome after transplantation. We investigated the impact of the topical cooling solution on graft preservation and whether inflammatory markers in bronchoalveolar lavage (BAL) can predict pulmonary graft viability in a pig ex vivo lung perfusion model. Methods: Pigs were euthanized and left untouched for 1 (SAL-1, PER-1) or 3 (SAL-3, PER-3) hours. Topical cooling was done with saline or low-potassium dextran solution (Perfadex) for 1 or 3 hours. In the heart-beating donor control group, the lungs were flushed, explanted and stored for 4 hours. BAL samples were taken from right lungs after explantation and assessed for nitrite, interleukin-8 (IL-8) and protein levels. Left lungs were prepared for ex vivo evaluation. Hemodynamic and oxygenation parameters were measured. Results: Pulmonary vascular resistance (PVR), oxygenation index and PaO(2)/FiO(2) ratio differed significantly between the SAL-3 (42.2 +/- 6.0, 15.9 +/- 3.2 and 148 +/- 14.6 Wood units, respectively) and PER-3 (23.9 +/- 2.7, 6.4 +/- 0.8 and 221.7 +/- 15.06 Wood units, respectively) groups (p < 0.05).
BAL IL-8 levels were higher in the SAL-3 group compared with the PER-3 group. BAL nitrite and protein levels were statistically higher in the SAL-3 group (0.98 +/- 0.17 micromol/liter, 728.3 +/- 75.7 microg/ml) than in the PER-3 (0.22 +/- 0.09 micromol/liter, 393.3 +/- 51.1 microg/ml) group (p < 0.05) and correlated with an increase in PVR (r = 0.623, p = 0.001; r = 0.530, p = 0.006, respectively). Conclusions: After 3 hours of warm ischemia, topical cooling with Perfadex resulted in better graft function. Nitrite and protein levels in BAL correlated well with PVR and may therefore be used as a non-invasive marker to predict graft function for NHBDs. abstract_id: PUBMED:15653374 IL-1beta in bronchial lavage fluid is a non-invasive marker that predicts the viability of the pulmonary graft from the non-heart-beating donor. Background: Viability testing of the pulmonary graft retrieved from the non-heart-beating donor (NHBD) is mandatory for successful outcome after lung transplantation. Functional assessment by ex vivo reperfusion, however, remains a cumbersome procedure. In this study, therefore, we wanted to investigate the possible value of the proinflammatory cytokines interleukin-1beta (IL-1beta) and tumor necrosis factor-alpha (TNF-alpha) measured in bronchial lavage fluid (BLF) in predicting functional outcome of the pulmonary graft after reperfusion. Methods: Domestic pigs (29.9 +/- 0.56 kg) were sacrificed and divided in 5 groups (n = 5/group). In the non-ischemic group (NHBD-0), the heart-lung block was explanted immediately. In the other groups the animals were left untouched with increasing time intervals (1 hour = NHBD-1; 2 hours = NHBD-2; 3 hours = NHBD-3). Thereafter both lungs were cooled topically via chest drains up to a total ischemic interval of 4 hours. Finally, in the heart-beating donor group lungs were flushed and stored for 4 hours (4 degrees C) [HBD]. BLF samples were taken from the right lung in all groups after explantation for measurement of IL-1beta and TNF-alpha and the left lung was prepared for evaluation in an isolated reperfusion circuit. Haemodynamic, aerodynamic and oxygenation parameters were measured. Wet-to-dry weight ratio (W/D) was calculated after reperfusion. Results: Graft function deteriorated with increasing time intervals after death. A strong correlation was found between the increase of IL-1beta concentration measured in BLF and the increase in pulmonary vascular resistance (r = 0.80), mean airway pressure (r = 0.74) and wet-to-dry weight ratio (r = 0.78); (p < 0.0001, for all parameters). No significant differences in TNF-alpha levels in BLF were observed amongst groups (p = 0.933). Conclusions: IL-1beta in BLF prior to reperfusion correlated well with graft function and may therefore be a useful, non-invasive marker that can predict the viability of the pulmonary graft from the NHBD. abstract_id: PUBMED:15223895 Continuous infusion of nitroglycerin improves pulmonary graft function of non-heart-beating donor lungs. Background: The warm ischemic period of lungs harvested from a non-heart-beating donor (NHBD) results in an increased ischemia-reperfusion injury after transplantation. The intravenous application of nitroglycerin (NTG), a nitric oxide (NO) donor, proved to be beneficial during reperfusion of lung grafts from heart-beating donors. The objective of the present study was to investigate the effect of nitroglycerin on ischemia-reperfusion injury after transplantation of long-term preserved NHBD-lungs.
Methods: Sixteen pigs (body weight, 20-30 kg) underwent left lung transplantation. In the control group (n=5), lungs were flushed (Perfadex, 60 mL/kg) and harvested immediately after cardiac arrest. In the NHBD group (n=5) and the NHBD-NTG group (n=6), lungs were flushed 90 min (warm ischemia) after cardiac arrest. After a total ischemia time of 19 hr, lungs were reperfused and graft function was observed for 5 hr. Recipient animals in the NHBD-NTG group received 2 microg/kg/min of NTG administered intravenously during the observation period starting 5 min before reperfusion. Tissue specimens and bronchoalveolar lavage fluid (BALF) were obtained at the end of the observation period. Results: Compared with the control group, pulmonary gas exchange was significantly impaired in the NHBD group, whereas graft function in the NHBD-NTG group did not change. Leukocyte fraction and protein concentration in the BALF and histologic alteration of the NHBD-NTG group were not different from controls. Conclusions: Continuous infusion of NTG in the early reperfusion period improves pulmonary graft function of NHBD lungs after long-term preservation. The administration of an NO donor during reperfusion may favor the use of NHBD lungs to alleviate the critical organ shortage in lung transplantation. abstract_id: PUBMED:11093510 Short-term inhaled nitric oxide in canine lung transplantation from non-heart-beating donor. Background: Use of lungs harvested from non-heart-beating donors (NHBDs) would increase the pulmonary donor pool; however, this strategy would have higher risk of early postoperative graft dysfunction due to unavoidable warm ischemic time. We evaluated the effects of short-term inhaled nitric oxide (NO) during reperfusion in canine left single-lung allotransplantation from a non-heart-beating donor. Methods: The donor dogs were sacrificed without heparinization and left at room temperature for 3 hours. Then, recipient dogs received a left single-lung allotransplantation. After implantation, the right bronchus and pulmonary artery were ligated. In group 1 (n = 6), NO gas was administered continuously at a concentration of 40 parts per million throughout a 6-hour assessment period. In group 2 (n = 6), NO gas was administered for the initial 1 hour during reperfusion. In group 3 (n = 6), nitrogen gas was administered for control. Results: Groups treated with NO exhibited lower pulmonary vascular resistance, as well as improved survival and oxygenation. There was no significant difference in these parameters between group 1 and group 2. Myeloperoxidase activity was significantly lower in NO-treated groups. Conclusions: Inhaled NO during reperfusion is beneficial in lung transplantation from non-heart beating donors. The beneficial effect is obtained mainly during the first hour of reperfusion. Answer: Based on the provided abstracts, cooling the pulmonary graft inside the non-heart-beating donor (NHBD) is the preferred method over ventilation. In a study comparing the protective effect of ventilation versus cooling of the non-perfused pulmonary graft, it was found that pulmonary vascular resistance, oxygenation index, and wet-to-dry weight ratio were significantly worse in the group where lungs were ventilated (NHBD-V) compared to the group where lungs were topically cooled (NHBD-TC). 
The results indicated that the NHBD-TC group had similar outcomes to the heart-beating donor (HBD) group, suggesting that cooling is a more effective preservation method during the warm ischemic period in NHBD protocols (PUBMED:14585384). Additionally, another study showed that topical cooling with a low-potassium dextran solution (Perfadex) resulted in better graft function after 3 hours of warm ischemia compared to saline. Nitrite and protein levels in bronchoalveolar lavage (BAL) correlated well with pulmonary vascular resistance and may be used as non-invasive markers to predict graft function for NHBDs (PUBMED:18765195). Furthermore, the use of controlled NHBD lung transplantation has been reported to contribute significantly to increasing donor numbers without diminishing graft function upon reperfusion, as evidenced by the initial experience in Spain where none of the patients developed grade 3 primary graft dysfunction, and 1-year survival was 100% (PUBMED:26121917). In conclusion, the evidence from these studies supports the practice of cooling the pulmonary graft inside the NHBD as a more effective method for preserving lung function and improving transplantation outcomes compared to ventilation of the non-perfused pulmonary graft.
Instruction: Is it me or not me? Abstracts: abstract_id: PUBMED:22525145 Kidney cancer: radiofrequency ablation of small renal masses--more work required. N/A abstract_id: PUBMED:13362478 Concerning researchers' thinking in schizophrenia research. N/A abstract_id: PUBMED:33764252 Perspectives of rehabilitation professionals on assistive technology provision to young children in South Africa: a national survey. Purpose: Various factors influence the selection of assistive technology for young children within a context with limited resources, such as South Africa. Rehabilitation professionals are required to weigh up different factors as part of their professional reasoning process when making assistive technology (AT) selections. Insight into the perceived influence of different factors may assist in understanding how professionals make decisions about AT in this context. Materials And Methods: An online survey with questions designed using best-worst scaling was distributed to rehabilitation professionals throughout South Africa. Factors influencing assistive technology selection included in the best-worst survey were identified in previous phases of a larger project. A total of n = 451 rehabilitation professionals completed the survey by selecting the factors that were most and least influential on their assistive technology provision. Results: Results of the survey were obtained by calculating the number of times each factor was selected as most influential across the entire sample, and across all questions, enabling the researchers to sort the items in terms of the frequency of selection. Conclusions: Even though the rehabilitation professionals that participated in the study provide services in a context with limited resources, assessment and factors pertaining to the assistive technology itself were generally perceived to be of greater influence than environmental factors. It is recommended that these factors be reflected in frameworks and models of AT selection. IMPLICATIONS FOR REHABILITATION: The family's ability to support the implementation of AT is an important resource that is perceived to influence the selection of AT by an RP. Insight into the mind-set of professionals that are used to selecting AT within settings with limited resources may provide RPs in well-resourced contexts with guidance on how to do more, with less. RPs should aim to determine child preference and attitude towards AT during the AT selection process. RPs should be aware of their own influence on AT selection. Existing AT Selection models should be adapted to clearly reflect the influence of the recommending professional. abstract_id: PUBMED:25102918 Modified tectonic keratoplasty with minimal corneal graft for corneal perforation in severe Stevens--Johnson syndrome: a case series study. Background: Corneal perforation in severe Stevens-Johnson syndrome (SJS) presents great therapeutic difficulties, and the imperative corneal transplantation always results in graft failure and repeated recurrence of perforation. The aim of this study was to evaluate the effectiveness of a modified small tectonic keratoplasty (MSTK) with minimal corneal graft in the management of refractory corneal perforation in severe SJS. Methods: Refractory corneal perforations in ten patients (10 eyes) with severe SJS were mended with a minimal corneal patch graft, under the guidance of anterior chamber optical coherence tomography, combined with conjunctival flap covering.
The outcome measures included healing of the corneal perforation, survival of the corneal graft and conjunctival flap, relevant complications, and improvement in visual acuity. Results: Corneal perforation healed, and global integrity was achieved in all eyes. No immune rejection or graft melting was detected. Retraction of conjunctival flap occurred in one eye, which was treated with additional procedure. Visual acuity improved in six eyes (60%), unchanged in three eyes (30%) and declined in one eye (10%). Conclusions: The MSTK combined with conjunctival flap covering seems to be effective for refractory corneal perforation in severe SJS. abstract_id: PUBMED:19857504 Male and female odors induce Fos expression in chemically defined neuronal population. Olfactory information modulates innate and social behaviors in rodents and other species. Studies have shown that the medial nucleus of the amygdala (MEA) and the ventral premammillary nucleus (PMV) are recruited by conspecific odor stimulation. However, the chemical identity of these neurons is not determined. We exposed sexually inexperienced male rats to female or male odors and assessed Fos immunoreactivity (Fos-ir) in neurons expressing NADPH diaphorase activity (NADPHd, a nitric oxide synthase), neuropeptide urocortin 3, or glutamic acid decarboxylase mRNA (GAD-67, a GABA-synthesizing enzyme) in the MEA and PMV. Male and female odors elicited Fos-ir in the MEA and PMV neurons, but the number of Fos-immunoreactive neurons was higher following female odor exposure, in both nuclei. We found no difference in odor induced Fos-ir in the MEA and PMV comparing fed and fasted animals. In the MEA, NADPHd neurons colocalized Fos-ir only in response to female odors. In addition, urocortin 3 neurons comprise a distinct population and they do not express Fos-ir after conspecific odor stimulation. We found that 80% of neurons activated by male odors coexpressed GAD-67 mRNA. Following female odor, 50% of Fos neurons coexpressed GAD-67 mRNA. The PMV expresses very little GAD-67, and virtually no colocalization with Fos was observed. We found intense NADPHd activity in PMV neurons, some of which coexpressed Fos-ir after exposure to both odors. The majority of the PMV neurons expressing NADPHd colocalized cocaine- and amphetamine-regulated transcript (CART). Our findings suggest that female and male odors engage distinct neuronal populations in the MEA, thereby inducing contextualized behavioral responses according to olfactory cues. In the PMV, NADPHd/CART neurons respond to male and female odors, suggesting a role in neuroendocrine regulation in response to olfactory cues. abstract_id: PUBMED:26806954 The rise and fall of anaesthesia-related neurotoxicity and the immature developing human brain. N/A abstract_id: PUBMED:15467073 Euthanasia: above ground, below ground. The key to the euthanasia debate lies in how best to regulate what doctors do. Opponents of euthanasia frequently warn of the possible negative consequences of legalising physician assisted suicide and active euthanasia (PAS/AE) while ignoring the covert practice of PAS/AE by doctors and other health professionals. Against the background of survey studies suggesting that anything from 4% to 10% of doctors have intentionally assisted a patient to die, and interview evidence of the unregulated, idiosyncratic nature of underground PAS/AE, this paper assesses three alternatives to the current policy of prohibition. 
It argues that although legalisation may never succeed in making euthanasia perfectly safe, legalising PAS/AE may nevertheless be safer, and therefore a preferable policy alternative, to prohibition. At a minimum, debate about harm minimisation and the regulation of euthanasia needs to take account of PAS/AE wherever it is practised, both above and below ground. abstract_id: PUBMED:140408 The Salop Skate. N/A abstract_id: PUBMED:23471424 Depictions of insomniacs' behaviors and thoughts in music lyrics. Study Objectives. Studies have found that depictions of unhealthy behaviors (e.g., illicit substance use, violence) are common in popular music lyrics; however, we are unaware of any studies that have specifically analyzed the content of music lyrics for unhealthy sleep-related behaviors. We sought to determine whether behaviors known to perpetuate insomnia symptoms are commonly depicted in the lyrics of popular music. Methods. We searched three online lyrics sites for lyrics with the word "insomnia" in the title and performed content analysis of each of the lyrics. Lyrics were analyzed for the presence/absence of the following perpetuating factors: extending sleep opportunity, using counter fatigue measures, self-medicating, and engaging in rituals or anti-stimulus control behaviors. Results. We analyzed 83 music lyrics. 47% described one or more perpetuating factor. 30% described individual(s) engaging in rituals or antistimulus control strategies, 24% described self-medicating, 7% described engaging in counter fatigue measures, and 2% described extending sleep opportunity (e.g., napping during daytime). Conclusion. Maladaptive strategies known to perpetuate insomnia symptoms are common in popular music. Our results suggest that listeners of these sleep-related songs are frequently exposed to lyrics that depict maladaptive coping mechanisms. Additional studies are needed to examine the direct effects of exposing individuals to music lyrics with this content. abstract_id: PUBMED:12012506 Funding cuts compromise sexual health care. N/A Answer: Based on the provided abstracts, there is no information that directly answers the question "Is it me or not me?" This question is quite abstract and could be interpreted in various ways, such as questioning one's identity, responsibility, or involvement in a situation. None of the abstracts provided discuss topics related to self-identity, personal responsibility, or involvement in an event or action that would allow for a direct answer to this question. The abstracts cover a range of topics including radiofrequency ablation in kidney cancer (PUBMED:22525145), schizophrenia research (PUBMED:13362478), assistive technology provision (PUBMED:33764252), tectonic keratoplasty in Stevens-Johnson syndrome (PUBMED:25102918), olfactory cues and neuronal response (PUBMED:19857504), anaesthesia-related neurotoxicity (PUBMED:26806954), euthanasia (PUBMED:15467073), a device called the Salop Skate (PUBMED:140408), depictions of insomniac behavior in music lyrics (PUBMED:23471424), and funding cuts in sexual health care (PUBMED:12012506). None of these topics provide relevant information to address the existential or identity-related question posed.
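The best-worst scaling analysis described in the assistive technology abstract above (PUBMED:33764252) reduces to simple counting: each factor is scored by how often respondents pick it as most (and least) influential across all questions, and the factors are then sorted by that frequency. The Python sketch below illustrates the idea with made-up data; the factor names and the best-minus-worst score in the last column are assumptions for illustration, not details taken from the study.

# Minimal best-worst scaling tally on hypothetical responses (not study data).
# Each response records the factor judged most ("best") and least ("worst")
# influential for one survey question.
from collections import Counter

responses = [
    {"best": "assessment findings", "worst": "funding source"},
    {"best": "child preference", "worst": "physical infrastructure"},
    {"best": "assessment findings", "worst": "physical infrastructure"},
]

best_counts = Counter(r["best"] for r in responses)
worst_counts = Counter(r["worst"] for r in responses)

# Rank factors by how often they were chosen as most influential, as in the
# abstract; best-minus-worst is a common alternative scoring rule.
factors = sorted(set(best_counts) | set(worst_counts),
                 key=lambda f: best_counts[f], reverse=True)
for f in factors:
    print(f, best_counts[f], worst_counts[f], best_counts[f] - worst_counts[f])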
Instruction: Do surgeons and patients discuss what they document on consent forms? Abstracts: abstract_id: PUBMED:25891679 Do surgeons and patients discuss what they document on consent forms? Background: Previous studies of surgeon behavior report that surgeons rarely meet basic standards of informed consent, raising concerns that current practice requires urgent remediation. We wondered if the Veterans Affairs Healthcare System's recent implementation of standardized, procedure-specific consent forms might produce a better practice of informed consent than has been reported previously. Our goal was to determine how the discussions shared between surgeons and patients correspond to the VA's standardized consent forms. Methods: We enrolled a prospective cohort of patients presenting for possible cholecystectomy or inguinal herniorrhaphy and the surgical providers for those patients. Audio recordings captured the clinical encounter(s) culminating in a decision to have surgery. Each patient's informed consent was documented using a standardized, computer-generated form. We abstracted and compared the information documented with the information discussed. Results: Of 75 consecutively enrolled patients, 37 eventually decided to have surgery and signed the standardized consent form. Patients and providers discussed 37% (95% confidence interval, 0.07-0.67) and 33% (95% confidence interval, 0.21-0.43) of the information found on the cholecystectomy and herniorrhaphy consent forms, respectively. However, the patient-provider discussions frequently included relevant details nowhere documented on the standardized forms, culminating in discussions that included a median 27.5 information items for cholecystectomy and 20 items for herniorrhaphy. Fully, 80% of cholecystectomy discussions and 76% of herniorrhaphy discussions mentioned at least one risk, benefit or alternative, indication for, and description of the procedure. Conclusions: The patients and providers observed here collaborated in a detailed process of informed consent that challenges the initial reports suggesting the need to remediate surgeon's practice of informed consent. However, because the discrepancy between the information documented and discussed exposes legal and ethical liability, there is an opportunity to improve the iMed system so that it better reflects what surgeons discuss and more frequently includes all the information patients need. abstract_id: PUBMED:36864403 Evidence-based informed consent form for total knee arthroplasty. Introduction: Informed consent documentation is often the first area of interest for lawyers and insurers when a medico-legal malpractice suit is concerned. However, there is a lack of uniformity and standard procedure about obtaining informed consent for total knee arthroplasty (TKA). We developed a solution for this need for a pre-designed, evidence-based informed consent form for patients undergoing TKA. Materials And Methods: We extensively reviewed the literature on the medico-legal aspects of TKA, medico-legal aspects of informed consent, and medico-legal aspects of informed consent in TKA. We then conducted semi-structured interviews with orthopaedic surgeons and patients who had undergone TKA in the previous year. Based on all of the above, we developed an evidence-based informed consent form. The form was then reviewed by a legal expert, and the final version was used for 1 year in actual TKA patients operated at our institution. 
Results: Legally sound, evidence-based Informed Consent Form for Total Knee Arthroplasty. Conclusion: The use of legally sound, evidence-based informed consent for total knee arthroplasty would be beneficial to orthopaedic surgeons and patients alike. It would uphold the rights of the patient, promote open discussion and transparency. In the event of a lawsuit, it would be a vital document in the defence of the surgeon and withstand the scrutiny of lawyers and the judiciary. abstract_id: PUBMED:23218878 Informed consent for innovative surgery: a survey of patients and surgeons. Background: Unlike new drugs and medical devices, most surgical procedures are developed outside clinical trials and without regulatory oversight. Surgical professional organizations have discussed how new procedures should be introduced into practice without agreement on what topics informed consent discussions must include. To provide surgeons with more specific guidance, we wanted to determine what information patients and surgeons consider essential to disclose before an innovative surgical procedure. Methods: Of those approached, 85 of 113 attending surgeons and 383 of 541 adult postoperative patients completed surveys; responses to the surveys were 75% and 71%, respectively. Using a 6-point Likert scale, participants rated the importance of discussing 16 types of information preoperatively for 3 techniques (standard open, laparoscopic, robotic) offered for a hypothetic partial hepatectomy. Results: Compared with surgeons, patients placed more importance on nearly all types of information, particularly volumes and outcomes. For all 3 techniques, approximately 80% of patients indicated that they could not decide on surgery without being told whether it would be the surgeon's first time doing the procedure. When considering an innovative robotic surgery, a clear majority of both patients and surgeons agreed that it was essential to disclose the novel nature of the procedure, potentially unknown risks and benefits, and whether it would be the surgeon's first time performing the procedure. Conclusion: To promote informed decision-making and autonomy among patients considering innovative surgery, surgeons should disclose the novel nature of the procedure, potentially unknown risks and benefits, and whether the surgeon would be performing the procedure for the first time. When accurate volumes and outcomes data are available, surgeons should also discuss these with patients. abstract_id: PUBMED:32753261 "Let's Get the Consent Together": Rethinking How Surgeons Become Competent to Discuss Informed Consent. Objective: Eliciting informed consent is a clinical skill that many residents are tasked to conduct without sufficient training and before they are competent to do so. Even senior residents and often attending physicians fall short of following best practices when conducting consent conversations. Design: This is a perspective on strategies to improve how residents learn to collect informed consent based on current literature. Conclusions: We advocate that surgical educators approach teaching informed consent with a similar framework as is used for other surgical skills. Informed consent should be defined as a core clinical skill for which attendings themselves should be sufficiently competent and residents should be assessed through direct observation prior to entrustment. abstract_id: PUBMED:31640880 Does content of informed consent forms make surgeons vulnerable to lawsuits? 
Background: Written informed consent forms (ICFs) are important for ensuring that physicians disclose core information to patients to help them autonomously decide about treatment and for providing substantial evidence for the surgeon in case of a legal dispute. This paper aims to assess the legal and ethical appropriateness and sufficiency of the contents of ICFs designed for several elective surgical procedures currently in use in Turkish hospitals. Methods: One hundred and twenty-six forms were randomly selected and were analyzed for 22 criteria. The results were compared using Fisher's exact test, and 95% confidence intervals were calculated. Results: More than 80% of ICFs contained information about the risks of the proposed treatment, the diagnosis of the patient, and the patient's voluntariness/willingness, as well as a designated space for the signatures of the patient and the physician and a description of the proposed treatment. Some ICFs were designed for obtaining blanket consent for using patients' specimens. Conclusions: The ICFs for general elective surgery contain many deficiencies regarding disclosure of information, and there is significant variation among primary healthcare providers. Unrealistic expectations regarding the surgery or the post-operative recovery period due to insufficient information disclosure may lead patients, who experience post-surgical inconveniences, to file lawsuits against their surgeons. Although all ICFs, regardless of their institution, are generally insufficient for defending hospital administrations or surgeons during a lawsuit, ICFs of private hospitals might be considered better equipped for the situation than those of state or university hospitals. However, further research is needed to show if private hospitals have lower lawsuit rates or better lawsuit outcomes than state or university hospitals in Turkey. abstract_id: PUBMED:2296494 Readability of pediatric biomedical research informed consent forms. Informed consent forms are used in biomedical research as a mechanism to convey study information to potential subjects so that they may arrive at a decision concerning their willingness to participate. Although the Department of Health and Human Services Regulations for the Protection of Human Subjects require the presentation of specific study information at a level that is easily understood, according to research concerning adult biomedical consent forms, the typical form is not readily comprehensible. Unfortunately, no data exist concerning the readability of informed consent forms that are used in the context of pediatric biomedical research. In the present study, readability analyses were conducted on a large sample (N = 238) of pediatric biomedical informed consent forms obtained during a 10-year period from a large midwestern children's hospital. For the entire sample, results derived from two readability estimates (Fry grade equivalent and Flesch Reading Ease methods) indicated that the consent forms were written at the college graduate level. Although there was a linear increase in the length of the consent document during the 10-year period evaluated, expanded length was not associated with improved readability. According to analyses, a differential pattern of reading difficulty was associated with specific sections of the informed consent document.
Findings are highly consistent with those from studies of adult biomedical consent forms and document that the purpose of the informed consent form is being compromised, in part, by a readability factor. Suggestions for solving this critical problem are advanced. abstract_id: PUBMED:36175279 Evaluation of consent forms for clinical practice in Spanish Public Hospitals. Objective: To evaluate the access, development, and quality of consent forms for clinical practice within the Spanish Public Hospitals. Method: A cross-sectional study was conducted in a two-stage process (January 2018-September 2021). In stage 1, a nationwide survey was undertaken across all public general hospitals (n=223) in the Spanish Healthcare System. In stage 2, data was taken from the regional health services websites and Spanish regulations. Health Regional Departments were contacted to verify the accuracy of the findings. Data was analyzed using descriptive and inferential statistics (frequencies, percentages, Chi-square & Fisher's exact tests). Results: The response rate was 123 (55.16%) of Spanish Public Hospitals. The results revealed a range of hospital departments involved in the development of consent documents and the absence of a standardized approach to consent forms nationally. Consent audits are undertaken in 43.09% of hospitals and translation of written consents into other languages is limited to a minority of hospitals (35.77%). The validation process of consent documentation is not in evidence in 13% of Spanish Hospitals. Regional Informed Consent Committees are not in place in the majority (70.7%) of hospitals. Citizens can freely access consent documents through the regional websites of Andalusia and Valencia only. Conclusion: Variability is found in access, development and quality of written consent across the Spanish Public Hospitals. This points to the need for a national informed consent strategy to establish policy, standards and an effective quality control system. National audits at regular intervals are necessary to improve the consistency and compliance of consent practice. abstract_id: PUBMED:18378912 Standardised consent forms on the website of the British Orthopaedic Association. The British Orthopaedic Association has endorsed a website, www.orthoconsent.com, allowing surgeons free access to a bank of pre-written consent forms. These are designed to improve the level of information received by the patient and lessen the risk of successful litigation against surgeons and Health Trusts. abstract_id: PUBMED:34270504 Informed Consent for the Orthopaedic Surgeon. »: In the United States, orthopaedic surgeons have a legal obligation to obtain informed consent from patients before performing surgery; it is a process that includes a signed written document. »: There are specific legal requirements that vary somewhat by state but generally include disclosure and documentation of the diagnosis, an explanation of the recommended procedure, a conversation about the risks and benefits of the procedure, and a discussion about alternative treatments. »: Inadequate disclosure of risks and alternatives is associated with increased indemnity risk. »: Studies have shown that many consent processes and forms are suboptimal. abstract_id: PUBMED:25827701 A written consent form dating back to 1524 in Bursa Ser'iye (Sharia Court) records and a proposal of a new start date for consent forms. Background And Objectives: Qadi registers are important documents for Ottoman medical history research.
"Sharia Court Records (Ser'iyye Sicilleri)" are notebooks that include the records that qadis kept with regard to their decisions and deeds. These registers are the only authentic sources from which to acquire information on rural life, away from the center of the town, and to understand the daily practices of the Ottoman society. The objective of this study is to provide evidence for the fact that the concept of informed consent on medical interventions, and hence the written consent documents arranged between patients and physicians, dates back to older times in our history when compared to the Western world. Design And Setting: A large number of Ser'iyye (Sharia Court) record originals have been surveyed. The consent form registered as A-40. 221a in Ser'iyye (Sharia Court) Records found in Bursa has been presented here as the earliest consent document found by the authors. Transcription of the original document has been performed and analyzed. The aforementioned consent form dates back to 26/Dhu al-Qi'dah/933 (August 24, 1524). Methods: The original version of the referenced consent document is the earliest consent document presented so far to the best of the authors' knowledge; it was found in Bursa Ser'iyye Records and evaluated accordingly. Results: Based on the document, it is argued that the history of consent forms dates back about 500 years. Conclusion: Obtaining consent in scientific research from human beings was considered to have originated from the Nuremberg Code (1949). However, with this study, it has been shown that the concept of informed consent was already present in the Ottoman Period, during the 16th century, and that the original consent document dates back to 1524, pertaining to a surgical intervention. Answer: Surgeons and patients do not always discuss everything that is documented on consent forms. A study involving patients presenting for cholecystectomy or inguinal herniorrhaphy found that patients and providers discussed only 37% and 33% of the information found on the respective consent forms. However, the discussions often included relevant details not documented on the standardized forms, suggesting that the conversations were more detailed than the forms indicated (PUBMED:25891679). This discrepancy between what is discussed and what is documented exposes legal and ethical liabilities, indicating a need for improvement in the documentation system to better reflect the actual discussions (PUBMED:25891679). Moreover, another study on informed consent for innovative surgery revealed that patients and surgeons consider it essential to disclose the novel nature of the procedure, potentially unknown risks and benefits, and whether it would be the surgeon's first time performing the procedure. This suggests that there is a gap between what patients expect to know and what may be typically disclosed or documented (PUBMED:23218878). In the context of total knee arthroplasty, the development of a legally sound, evidence-based informed consent form was proposed to promote open discussion and transparency, which implies that existing consent forms may not always facilitate comprehensive discussions between surgeons and patients (PUBMED:36864403). 
Overall, while surgeons and patients do engage in discussions about surgery during the informed consent process, there is evidence to suggest that not all documented information is discussed, and there is room for improvement in ensuring that consent forms accurately reflect the entirety of these discussions (PUBMED:25891679; PUBMED:23218878; PUBMED:36864403).
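The readability estimates named in the pediatric consent form abstract above (PUBMED:2296494) can be made concrete with the standard Flesch Reading Ease formula, 206.835 - 1.015 x (words per sentence) - 84.6 x (syllables per word); lower scores indicate harder text, and college-graduate-level material typically scores below about 30. The Python sketch below applies that standard formula with a crude syllable heuristic; it is a hypothetical illustration, not the tooling used in the study.

# Standard Flesch Reading Ease score with a rough syllable heuristic.
# Illustrative only; PUBMED:2296494 does not describe its exact implementation.
import re

def count_syllables(word: str) -> int:
    # Count groups of consecutive vowels as a cheap syllable proxy.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

sample = ("You are being asked to take part in a research study. "
          "Taking part is voluntary and you may stop at any time.")
print(round(flesch_reading_ease(sample), 1))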
Instruction: Post-traumatic stress disorder symptoms following a head injury: does amnesia for the event influence the development of symptoms? Abstracts: abstract_id: PUBMED:11516346 Post-traumatic stress disorder symptoms following a head injury: does amnesia for the event influence the development of symptoms? Background: There is controversy as to whether PTSD can develop following a brain injury with a loss of consciousness. However, no studies have specifically examined the influence of the memories that the individuals may or may not have on the development of symptoms. Aims: To consider how amnesia for the traumatic event effects the development and profile of traumatic stress symptoms. Method: Fifteen hundred case records from an Accident and Emergency Unit were screened to identify 371 individuals with traumatic brain injury who were sent questionnaires by post. The 53 subsequent valid responses yielded three groups: those with no memory (n = 14), untraumatic memories (n = 13) and traumatic memories (n = 26) of the index event. The IES-R was used as a screening measure followed by a structured interview (CAPS-DX) to determine caseness and provide details of symptom profile. Results: Groups with no memories or traumatic memories of the index event reported higher levels of psychological distress than the group with untraumatic memories. Ratings of PTSD symptoms were less severe in the no memory groups compared to those with traumatic memories. Conclusions: Psychological distress was associated with having traumatic or no memories of an index event. Amnesia for the event did not protect against PTSD; however, it does appear to protect against the severity and presence of specific intrusive symptoms. abstract_id: PUBMED:11102330 Unconsciousness, amnesia and psychiatric symptoms following road traffic accident injury. Background: Although road traffic accident injury is the most common cause of traumatic brain injury, little is known of the prevalence of psychiatric complications or the significance of unconsciousness and amnesia. Aims: To describe amnesia and unconsciousness following a road traffic accident and to determine whether they are associated with later psychological symptoms. Method: Information was obtained from medical and ambulance records for 1441 consecutive attenders at an emergency department aged 17-69 who had been involved in a road traffic accident. A total of 1148 (80%) subjects completed a self-report questionnaire at baseline and were followed up at 3 months and 1 year. Results: Altogether, 1.5% suffered major head (and traumatic brain) injury and 21% suffered minor head injury. Post-traumatic stress disorder (PTSD) and anxiety and depression were more common at 3 months in those who had definitely been unconscious than in those who had not, but there were no differences at 1 year. Conclusions: PTSD and other psychiatric complications are as common in those who were briefly unconscious as in those who were not. abstract_id: PUBMED:5389208 Psychiatric manifestations of head injuries--from the viewpoint of neurosurgery N/A abstract_id: PUBMED:63653 Sequelae of concussion caused by minor head injuries. Of 145 patients with concussion from minor head injuries admitted to the Royal Victoria Hospital, Belfast, over one year, 49.0 per cent had no symptoms, 38.9 per cent had between 1 and 6 symptoms, and 2.1 per cent had more than 6 symptoms about six weeks after the accident. 
There was significant correlation between a high symptom-rate at six weeks and positive neurological signs and symptoms at twenty-four hours. Post-concussion symptoms were more frequent in women, in those injured by falls, and in those who blamed their employers or large impersonal organisations for their accidents. The results suggest that both organic and neurotic factors are involved in the pathogenesis of symptoms at six weeks. abstract_id: PUBMED:16571489 Can PTSD Occur with Amnesia for the Precipitating Event? Theoretical accounts of post-traumatic stress disorder (PTSD) suggest that memory for a precipitating event is crucial for its development. Indeed, Sbordone and Liter (1995) have recently argued that mild traumatic brain injury and PTSD are mutually exclusive disorders. A case is described of a patient who sustained a severe head injury in a road traffic accident. He had a retrograde amnesia of two days and a post-traumatic amnesia of four weeks. Six months after his accident he was found to be suffering from a number of anxiety symptoms, including nightmares and intrusive thoughts, consistent with a diagnosis of PTSD. The implications of this case for theories of PTSD are discussed. abstract_id: PUBMED:11837405 Clinical predictors of posttraumatic stress disorder after closed head injury in children. Objective: To describe injury, demographic, and neuropsychiatric characteristics of children who develop posttraumatic stress disorder (PTSD) and posttraumatic stress symptoms (PTSS) after closed head injury (CHI). Method: Ninety-five children with severe CHI and amnesia for the event were prospectively followed for 1 year. Structured interviews were administered twice to the parents: shortly after injury to cover the child's premorbid status, and 1 year after injury. The child was also interviewed twice: shortly after injury to cover current status, and 1 year after injury. Outcome measures were diagnostic status (PTSD by parent or child) and symptom severity (PTSS by parent or child). Results: Twelve children developed PTSD by 1 year after injury, 5 according to parent report, 5 according to child report, and 2 according to both parent and child report. Predictors of PTSD at 1 year post-CHI included female gender and early post-CHI anxiety symptoms. Predictors of PTSS at 1 year post-CHI were (1) premorbid psychosocial adversity, premorbid anxiety symptoms, and injury severity; and (2) early post-CHI depression symptoms and nonanxiety psychiatric diagnoses. Conclusions: PTSD developed in 13% of children with severe CHI accompanied by traumatic amnesia. Predictors of PTSD and PTSS after CHI, according to parent and child report, are consistent with predictors of PTSD and PTSS that develop after non-head injury trauma. abstract_id: PUBMED:9017524 Posttraumatic stress disorder in patients with traumatic brain injury and amnesia for the event? Frequency of DSM-III-R posttraumatic stress disorder (PTSD) was studied in 47 active-duty service members (46 male, 1 female; mean age 27 +/- 7) with moderate traumatic brain injury and neurogenic amnesia for the event. Patients had attained "oriented and cooperative" recovery level. When evaluated with a modified Present State Examination and other questions at various points from study entry to 24-month follow-up, no patients met full criteria for PTSD or met criterion B (reexperience); 6 (13%) met both C (avoidance) and D (arousal) criteria. Five of these 6 also had organic mood disorder, depressed type, and/or organic anxiety disorder.
Posttraumatic amnesia following moderate head injury may protect against recurring memories and the development of PTSD. Some patients with neurogenic amnesia may develop a form of PTSD without the reexperiencing symptoms. abstract_id: PUBMED:4469864 Atypical early posttraumatic syndromes (author's transl) In a consecutive series of 1,925 head injuries, 283 patients (14.7%) could not be classified either in the group of simple head injuries without cerebral symptoms or in the group of typical concussions characterized by immediate amnesia or observed coma. We have preferred the rather neutral term of atypical early posttraumatic syndromes. In this group, apart from neurovegetative manifestations, partial disturbances of consciousness and perception, we have also classified delayed disturbances of consciousness. Special attention has been given to migraineous phenomena and to a syndrome, characteristic for children, described by Mealey. This is an intermediate group important from a medico-legal point of view because certain transient cerebral manifestations risk being mistaken for psychological reactions. On the other hand, symptoms probably of psychic origin were discussed. abstract_id: PUBMED:8827662 Posttraumatic stress disorder, flashbacks, and pseudomemories in closed head injury. Posttraumatic stress disorder (PTSD) is rarely diagnosed in patients with significant head injury. This paper reviews two patients who were amnesic for events surrounding their motor vehicle accidents (MVAs) but developed delayed-onset PTSD. Symptoms included vivid images of the MVAs that were based on information learnt following the trauma. These cases indicate that amnesic head injured patients can suffer pseudomemories that are phenomenologically similar to flashbacks observed in PTSD. Implications for understanding the nature of flashbacks are discussed. abstract_id: PUBMED:14669199 Contemporary issues in mild traumatic brain injury. Objective: To determine (1) minimum criteria in adults for clinical diagnosis of mild traumatic brain injury (TBI) and (2) whether persistent postconcussive syndrome exists as a nosologic entity. Data Sources: PubMed search by MEDLINE of head injuries from January 1977 to July 2002. Study Selection: All reviews and studies of mild TBI with special reference to those on persistent postconcussive syndrome having a general trauma cohort as a control comparison. Data Extraction: Review of design and other methodologic issues. Studies dependent on superior strength of evidence (as defined by the American Academy of Neurology) concerning the biologic nature of persistent postconcussive syndrome. Data Synthesis: A period of altered awareness with amnesia brought on by a direct craniofacial blow is the starting point in determining whether diffuse mild TBI has occurred. An amnestic scale is more helpful than Glasgow Coma Scale score in grading mild injury and in formulating minimum inclusion criteria for mild TBI. Neuropsychologic test results coupled with self-reported symptoms should not be taken as the primary source of evidence for mild TBI. Prolonged cognitive impairment after injury is not unique to brain trauma. Conclusions: Persistent postconcussive syndrome after mild brain trauma, uncomplicated by focal injury, is biologically inseparable from other examples of the posttraumatic syndrome.
To account for the persistent cognitive and behavioral sequelae of posttraumatic states, including persistent postconcussive syndrome, we need further studies on the emerging concept of limbic neuronal attrition occurring as a maladaptive response to pain and stress. Answer: The development of post-traumatic stress disorder (PTSD) symptoms following a head injury and the influence of amnesia for the event have been explored in various studies. One study found that individuals with either no memory or traumatic memories of the index event reported higher levels of psychological distress than those with untraumatic memories. Interestingly, the group with no memory had less severe PTSD symptoms compared to those with traumatic memories, suggesting that while amnesia does not protect against PTSD, it may protect against the severity and presence of specific intrusive symptoms (PUBMED:11516346). Another study indicated that PTSD and other psychiatric complications were as common in those who were briefly unconscious as in those who were not, suggesting that unconsciousness and amnesia do not necessarily prevent the development of PTSD symptoms (PUBMED:11102330). A case study also described a patient who developed anxiety symptoms consistent with PTSD despite having amnesia for the precipitating event, challenging the notion that memory of the event is crucial for PTSD development (PUBMED:16571489). In children with closed head injury and amnesia for the event, PTSD developed in 13% of the cases, with predictors of PTSD and posttraumatic stress symptoms (PTSS) after head injury being consistent with predictors of PTSD and PTSS after non-head injury trauma (PUBMED:11837405). Conversely, a study on military service members with moderate traumatic brain injury and amnesia found that no patients met full criteria for PTSD, suggesting that posttraumatic amnesia may protect against recurring memories and the development of PTSD, although some patients may develop a form of PTSD without the reexperiencing symptoms (PUBMED:9017524). In summary, the relationship between amnesia following a head injury and the development of PTSD symptoms is complex. While amnesia may not completely protect against PTSD, it appears to influence the severity and nature of the symptoms experienced.
Instruction: Volumetric and functional assessment using cardiac magnetic resonance imaging in young children exposed to acute pulmonary regurgitation: is pulmonary regurgitation just a long-term matter? Abstracts: abstract_id: PUBMED:23356430 Volumetric and functional assessment using cardiac magnetic resonance imaging in young children exposed to acute pulmonary regurgitation: is pulmonary regurgitation just a long-term matter? Background: The early effect of pulmonary regurgitation (PR) on both ventricular volume and function has not been well established in children. We evaluated the early effect of PR on both ventricular volume and function in young children. Methods: Among patients who underwent total repair of pulmonary atresia with ventricular septal defect between January 2007 and December 2008, cardiac magnetic resonance imaging (CMRI) was performed in 12 patients at a median interval of 15.6 months (6-22 months). Valveless right ventricular outflow tract (RVOT) reconstruction was performed in five patients (valveless group) and RVOT reconstruction using valved conduit in seven patients (valve group). Age and weight at operation, and the interval between the operation and CMRI were not different between the groups. Results: We observed a higher pulmonary regurgitant fraction (p = 0.003), a higher right ventricular end-diastolic volume index (RVEDVI) (p = 0.003), a higher right ventricular end-systolic volume index (p = 0.003), a higher left ventricular end-diastolic volume index (p = 0.010), a higher left ventricular end-systolic volume index (p = 0.018), and a lower left ventricular ejection fraction (LVEF; p = 0.048) in the valveless group. Right ventricular ejection fraction (RVEF) was not different between two groups. The RVEDVI was negatively correlated with RVEF (rho = -0.601, p = 0.039) and LVEF (rho = -0.580, p = 0.048). Conclusions: Both ventricular volumes increased and left ventricular function was compromised, but right ventricular function was preserved early after the exposure to PR in children. Right ventricular volume was associated with both ventricular functions. abstract_id: PUBMED:29844737 Cardiac Magnetic Resonance to Evaluate Percutaneous Pulmonary Valve Implantation in Children and Young Adults. Experience with cardiac magnetic resonance to evaluate coronary arteries in children and young adult patients is limited. Because noninvasive imaging has advantages over coronary angiography, we compared the effectiveness of these techniques in patients who were being considered for percutaneous pulmonary valve implantation. We retrospectively reviewed the cases of 26 patients (mean age, 12.53 ± 4.85 yr; range, 5-25 yr), all of whom had previous right ventricular-to-pulmonary artery homografts. We studied T2-prepared whole-heart images for coronary anatomy, velocity-encoded cine images for ventricular morphology, and function- and time-resolved magnetic resonance angiographic findings. Cardiac catheterization studies included coronary angiography, balloon compression testing, right ventricular outflow tract, and pulmonary artery anatomy. Diagnostic-quality images were obtained in 24 patients (92%), 13 of whom were considered suitable candidates for valve implantation. Two patients (8%) had abnormal coronary artery anatomy that placed them at high risk of coronary artery compression during surgery. Twelve patients underwent successful valve implantation after cardiac magnetic resonance images and catheterization showed no increased risk of compression. 
We attempted valve implantation in one patient with unsuitable anatomy but ultimately placed a stent in the homograft. Magnetic resonance imaging of coronary arteries is an important noninvasive study that may identify patients who are at high risk of coronary artery compression during percutaneous pulmonary valve implantation, and it may reveal high-risk anatomic variants that can be missed during cardiac catheterization. abstract_id: PUBMED:32717537 Spectrum of changes on cardiac magnetic resonance in repaired tetralogy of Fallot: Imaging according to surgical considerations. Imaging of repaired tetralogy of Fallot (TOF) is one of the common indications for cardiac magnetic resonance (CMR) examinations. With advances in CMR imaging techniques like phase contrast imaging and functional imaging, it has superseded investigations like echocardiography for anatomical and functional assessment of the pathophysiological changes in repaired TOF. Common repair procedures for TOF include infundibulectomy, transannular patch repair and right ventricle to pulmonary artery (RV-PA) conduit. While each of these procedures cause dynamic changes in heart and pulmonary arteries resulting in some expected imaging findings, CMR also helps in diagnosing the complications associated with these repair procedures like pulmonary stenosis, right ventricular outflow tract aneurysm, pulmonary regurgitation, RV-PA conduit stenosis, tricuspid regurgitation, right ventricular failure, and residual ventricular septal defects. Hence, it is imperative for a radiologist to be familiar with the expected changes on CMR in repaired TOF along with some of the common complications that may be encountered on imaging in such patients. abstract_id: PUBMED:23680583 Evaluation of right ventricular function in patients with tetralogy of Fallot using the myocardial performance index and isovolumic acceleration: a comparison with cardiac magnetic resonance imaging. Background: Assessment of right ventricular function is a key point in the follow-up of operated patients with tetralogy of Fallot. Cardiac magnetic resonance assessment of right ventricular function is considered the gold standard. However, this technique is expensive, has limited availability, and requires significant expertise to acquire and interpret the images. Myocardial performance index and isovolumic acceleration have recently been studied for the assessment of right ventricular function and are shown to be simple yet powerful tools for assessing patients with right ventricular dysfunction of various origins. Methods: In this study, the integrity of myocardial performance index and isovolumic acceleration obtained by tissue Doppler imaging echocardiography to quantify right ventricular function was assessed in 31 patients operated for tetralogy of Fallot. Myocardial performance index and isovolumic acceleration measurements were compared with the parameters derived by cardiac magnetic resonance imaging. Results: In this study, a significant correlation has not been detected between cardiac magnetic resonance-originated right ventricular ejection fraction, pulmonary regurgitation fraction and myocardial performance index, isovolumic acceleration obtained by tissue Doppler imaging echocardiography from the lateral tricuspid annulus of the right ventricle. 
Conclusion: We have concluded that when evaluated separately, myocardial performance index and isovolumic acceleration obtained from tissue Doppler imaging echocardiography can be used in the long-term follow-up of patients who have been operated for tetralogy of Fallot, but that they do not show correlation with cardiac magnetic resonance-originated right ventricle ejection fraction and pulmonary regurgitation fraction. abstract_id: PUBMED:32653300 Assessment of Disease Progression in Patients With Repaired Tetralogy of Fallot Using Cardiac Magnetic Resonance Imaging: A Systematic Review. Aims: Tetralogy of Fallot (ToF) is the most common cyanotic congenital heart disease with a growing population of adult survivors. Late pulmonary outflow tract and pulmonary valve postoperative complications are frequent, leading to long-term risks such as right heart failure and sudden death secondary to arrhythmias. Cardiac magnetic resonance imaging (CMR) is the gold standard for assessment of cardiac function in patients with repaired ToF. We aimed to determine the most useful CMR predictors of disease progression and the optimal frequency of CMR. Methods And Results: We systematically reviewed PubMed from inception until 29 April 2019 for longitudinal studies assessing the relationship between CMR features and disease progression in repaired ToF. Fourteen (14) studies were identified. Multiple studies showed that impaired right and left ventricular function predict subsequent disease progression. Right ventricular end diastolic volume, while being associated with disease progression when analysed alone, was generally not associated with disease progression on multivariate analysis. Severity of tricuspid regurgitation and pulmonary regurgitation likewise did not show a consistent association with subsequent events. A number of non-CMR factors were also identified as being associated with disease progression, in particular QRS duration and older age at repair. Restrictive right ventricular physiology was not consistently an independent predictor of events. Conclusion: Impaired right and left ventricular function are the most consistent independent predictors of disease progression in repaired ToF. The optimal timing of repeat cardiac imaging remains controversial. Large scale prospective studies will provide important information to guide clinical decision making in this area. abstract_id: PUBMED:30506329 4-D flow magnetic-resonance-imaging-derived energetic biomarkers are abnormal in children with repaired tetralogy of Fallot and associated with disease severity. Background: Cardiac MRI plays a central role in monitoring children with repaired tetralogy of Fallot (TOF) for long-term complications. Current risk assessment is based on volumetric and functional parameters that measure late expression of underlying physiological changes. Emerging 4-D flow MRI techniques promise new insights. Objective: To assess whether 4-D flow MRI-derived measures of blood kinetic energy (1) differentiate children and young adults with TOF from controls and (2) are associated with disease severity. Materials And Methods: Pediatric patients post TOF repair (n=21) and controls (n=24) underwent 4-D flow MRI for assessment of time-resolved 3-D blood flow. Data analysis included 3-D segmentation of the right ventricle (RV) and pulmonary artery (PA), with calculation of peak systolic and diastolic kinetic energy (KE) maps. 
Total KE_RV and KE_PA were determined from the sum of the KE of all voxels within the respective time-resolved segmentations. Results: KE_PA was increased in children post TOF vs. controls across the cardiac cycle, with median 12.5 (interquartile range [IQR] 10.3) mJ/m² vs. 8.2 (4.3) mJ/m², P<0.01 in systole; and 2.3 (2.7) mJ/m² vs. 1.4 (0.9) mJ/m², P<0.01 in diastole. Diastolic KE_PA correlated with systolic KE_PA (R² 0.41, P<0.01) and with pulmonary regurgitation fraction (R² 0.65, P<0.01). Diastolic KE_RV showed similar relationships, denoting increasing KE with higher cardiac outputs and increased right heart volume loading. Diastolic KE_RV and KE_PA increased with RV end-diastolic volume in a non-linear relationship (R² 0.33, P<0.01 and R² 0.50, P<0.01 respectively), with an inflection point near 120 mL/m². Conclusion: Four-dimensional flow-derived KE is abnormal in pediatric patients post TOF repair compared to controls and has a direct, non-linear relationship with traditional measures of disease progression. Future longitudinal studies are needed to evaluate utility for early outcome prediction in TOF. abstract_id: PUBMED:10672616 Clinical applications of cardiac magnetic resonance imaging after repair of tetralogy of Fallot. In the past 15 years, cardiovascular magnetic resonance (MR) has evolved into an imaging technique that provides adequate, and in part unique, information on residual problems in the follow-up of patients operated for tetralogy of Fallot. Spin-echo or gradient-echo cine magnetic resonance imaging allows detailed assessment of intracardiac and large vessel anatomy, which is particularly helpful in Fallot patients with residual abnormalities of the right ventricular outflow and/or pulmonary artery. Multisection gradient-echo cine MRI can be used to obtain accurate measurements of biventricular size, ejection fraction, and wall mass. This allows serial follow-up of biventricular function. MR velocity mapping is the only imaging technique available that provides practical quantification of pulmonary regurgitation volume. MR velocity mapping can also be used to quantify right ventricular diastolic function in the presence of pulmonary regurgitation. abstract_id: PUBMED:36286372 Multimodality Imaging of the Neglected Valve: Role of Echocardiography, Cardiac Magnetic Resonance and Cardiac Computed Tomography in Pulmonary Stenosis and Regurgitation. The pulmonary valve (PV) is the least imaged among the heart valves. However, pulmonary regurgitation (PR) and pulmonary stenosis (PS) can occur in a variety of patients, ranging from fetuses and newborns (e.g., tetralogy of Fallot) to adults (e.g., endocarditis, carcinoid syndrome, complications of operated tetralogy of Fallot). Due to their complexity, PR and PS are studied using multimodality imaging to assess their mechanism, severity, and hemodynamic consequences. Multimodality imaging is crucial to plan the correct management and to follow up patients with pulmonary valvulopathy. Echocardiography remains the first-line methodology to assess patients with PR and PS, but the information obtained with this technique is often integrated with cardiac magnetic resonance (CMR) and computed tomography (CT). This state-of-the-art review aims to provide an updated overview of the usefulness, strengths, and limits of multimodality imaging in patients with PR and PS. abstract_id: PUBMED:20693132 Angiocardiography and magnetic resonance imaging to assess pulmonary regurgitation in repaired tetralogy of Fallot.
Objective: This study aimed to compare the results of angiocardiography and cardiovascular magnetic resonance imaging in the assessment of pulmonary regurgitation following repair of tetralogy of Fallot. Methods: We prospectively studied 37 patients with repaired tetralogy of Fallot. After routine examination cardiovascular magnetic resonance imaging (CMR) and cardiac catheterization and angiography were performed. Pulmonary regurgitation (PR) was classified according to the following criteria, using a left lateral angiogram of the main pulmonary artery; insufficiency jet is limited to right ventricular outflow tract (mild); jet reaches the body of right ventricle (moderate); jet fills the apex of the right ventricle (severe). Results: Pulmonary regurgitation determined by angiocardiography and CMR was severe in 51.4% and 32.4%, moderate in 27% and 40.5%, and none or mild in 21.6% and 27% of patients respectively. The ability of semi-quantitative estimation of PR determined by cardiac catheterization to distinguish between mild, moderate and severe pulmonary regurgitation was shown to have significant correlation with pulmonary regurgitant fraction obtained by CMR. Conclusions: Angiography obtained during invasive study can be used for the diagnosis and follow-up of pulmonary regurgitation confidently in patients with repaired tetralogy of Fallot and residual pulmonary regurgitation. abstract_id: PUBMED:16418252 Cardiac MR imaging assessment following tetralogy of fallot repair. Survivors of tetralogy of Fallot (TOF) repair constitute a large and growing population of patients. Although postsurgical outcome is generally favorable, as these patients move into adulthood, late morbidity is becoming more prevalent and the notion that TOF has been "definitively repaired" is increasingly being challenged. Recent evidence suggests that adverse long-term postsurgical outcome is related to chronic pulmonary regurgitation, right ventricular dilatation, and deteriorating ventricular function. Cardiac magnetic resonance (MR) imaging has been established as an accurate technique for quantifying ventricular size, ejection fraction, and valvular regurgitation. Cardiac MR imaging does not expose the patient to ionizing radiation and is therefore ideal for serial postsurgical follow-up. Familiarity with the anatomic basis of TOF, the surgical approaches to repair, and postrepair sequelae is essential for performing and interpreting cardiac MR imaging examinations. For example, awareness of the complications and sequelae that can occur will assist in determining when to intervene to preserve ventricular function and will improve long-term outcome. Technical facility is necessary to tailor the examination to the individual patient (eg, familiarity with non-breath-hold modifications that allow evaluation of young and less compliant patients). The radiologist can play an essential role in the treatment of patients with repaired TOF by providing noninvasive anatomic and physiologic cardiac MR imaging data. Further technologic advances in cardiac MR imaging are likely to bring about new applications, better normative data, and more examinations that are operator independent. Answer: The early effects of pulmonary regurgitation (PR) on ventricular volume and function in young children have been evaluated using cardiac magnetic resonance imaging (CMRI), suggesting that PR is not just a long-term matter. 
A study observed that young children exposed to PR after total repair of pulmonary atresia with ventricular septal defect showed increased ventricular volumes and compromised left ventricular function, while right ventricular function was preserved early after exposure to PR. This indicates that PR can have immediate effects on ventricular function and is not solely a long-term concern (PUBMED:23356430). Furthermore, CMRI has been used to evaluate the effectiveness of percutaneous pulmonary valve implantation in children and young adults, demonstrating its importance in noninvasively assessing the risk of coronary artery compression during such procedures (PUBMED:29844737). CMRI is also the gold standard for assessing cardiac function in patients with repaired Tetralogy of Fallot (ToF), a condition where PR is a common postoperative complication. Studies have shown that impaired right and left ventricular function, as assessed by CMRI, are consistent independent predictors of disease progression in repaired ToF (PUBMED:32653300). Additionally, 4-D flow MRI, an advanced CMRI technique, has been found to provide abnormal energetic biomarkers in children with repaired ToF, which are associated with disease severity and have a direct, non-linear relationship with traditional measures of disease progression (PUBMED:30506329). This further supports the notion that PR can have significant early effects on patients post-ToF repair. In conclusion, CMRI has provided evidence that PR can have immediate and significant effects on ventricular volume and function in young children, indicating that PR is not just a long-term issue but also requires early attention and management.
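For orientation, the quantitative measures cited repeatedly in this record follow standard CMR conventions rather than anything specific to a single study; the formulas below are generic definitions, and the exact normalisation choices (e.g. blood density, body surface area indexing) may differ between implementations.

\[
\mathrm{EF} = \frac{\mathrm{EDV}-\mathrm{ESV}}{\mathrm{EDV}} \times 100\%, \qquad
\mathrm{RF}_{\mathrm{pulmonary}} = \frac{V_{\mathrm{backward}}}{V_{\mathrm{forward}}} \times 100\%, \qquad
\mathrm{RVEDVI} = \frac{\mathrm{RVEDV}}{\mathrm{BSA}}\;[\mathrm{mL/m^{2}}]
\]

The 4-D flow kinetic energy values are likewise sums over the segmented blood pool,

\[
KE = \sum_{i \in \text{segmentation}} \tfrac{1}{2}\, \rho_{\text{blood}}\, V_{\text{voxel}}\, \lvert v_{i} \rvert^{2},
\]

indexed to body surface area to give the mJ/m² figures quoted above.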
Instruction: Is COPD a Progressive Disease? Abstracts: abstract_id: PUBMED:35459138 Care patterns of patients with chronic fibrosing interstitial lung disease (ILD) with a progressive phenotype. Background: Interstitial lung diseases (ILDs) include a variety of parenchymal lung diseases. The most common types of ILDs are idiopathic pulmonary fibrosis (IPF), autoimmune ILDs and hypersensitivity pneumonitis (HP). There is limited real world data on care patterns of patients with chronic fibrosing ILDs with a progressive phenotype other than IPF. Therefore, the aim of this study is to describe care patterns in these patients. Methods: This retrospective cohort study used claims data from 2015 to 2019 from the Optum Research Database. The study population included adults (≥ 18 years old) with at least two diagnosis codes for fibrosing ILD during the identification period (1OCT2016 to 31DEC2018). A claim-based algorithm for disease progression was used to identify patients likely to have a progressive fibrotic phenotype using progression proxies during the identification period. Index date was the first day of progression proxy identification after fibrosing ILD diagnosis. Patients were required to have continuous enrollment for 12 months before (baseline) and after (follow-up) index date. Patients with an IPF diagnosis were excluded. Descriptive statistics were used to describe the patient population and care patterns. Results: 11,204 patients were included in the study. Mean age of the patient population was 72.7 years, and 54.5% were female. Unclassified ILDs (48.0%), HP (25.2%) and autoimmune ILDs (16.0%) were the most common ILD types. Other respiratory conditions were prevalent among patients including chronic obstructive pulmonary disease (COPD) (58.9%), obstructive sleep apnea (OSA) (25.0%) and pulmonary hypertension (9.8%). During baseline, 65.3% of all patients had at least one pulmonology visit, this proportion was higher during follow-up, at 70.6%. Baseline and follow-up use for HRCT were 39.9% and 48.8%, and for pulmonary function tests were 43.7% and 48.5% respectively. Use of adrenal corticosteroids was higher during follow-up than during baseline (62.5% vs. 58.0%). Anti-inflammatory and immunosuppressive medication classes were filled by a higher percentage of patients during follow-up than during baseline. Conclusions: Comprehensive testing is essential for diagnosis of a progressive phenotype condition, but diagnostic tests were underutilized. Patients with this condition frequently were prescribed anti-inflammatory and immunosuppressive medications. abstract_id: PUBMED:29705482 The effect of progressive muscle relaxation on the management of fatigue and quality of sleep in patients with chronic obstructive pulmonary disease: A randomized controlled clinical trial. Objective: To assess the effect of progressive muscle relaxation (PMR) on fatigue and sleep quality of patients with chronic obstructive pulmonary disease (COPD) stages 3 and 4. Materials And Methods: The pretest posttest clinical trial recruited 91 patients COPD grades 3 and 4. Following random assignment of subjects, the treatment group (n = 45) performed PMR for eight weeks and the control group (n = 46) received routine cares. At baseline and after the intervention, fatigue and sleep quality was assessed. Data obtained were analyzed in SPSS. 
Results: It was determined that PMR decreased patients' fatigue level and improved some sleep quality subscales including subjective sleep quality, sleep latency, sleep duration and habitual sleep efficiency, but no improvement was found in global sleep quality and other sleep subscales. Conclusion: An eight-week home-based PMR program can be effective in reducing fatigue and improving certain subscales of sleep quality in patients with COPD stages 3,4. (IRCT2016080124080N3). abstract_id: PUBMED:34237325 Measurement of urinary Dickkopf-3 uncovered silent progressive kidney injury in patients with chronic obstructive pulmonary disease. Chronic kidney disease (CKD) represents a global public health problem with high disease related morbidity and mortality. Since CKD etiology is heterogeneous, early recognition of patients at risk for progressive kidney injury is important. Here, we evaluated the tubular epithelial derived glycoprotein dickkopf-3 (DKK3) as a urinary marker for the identification of progressive kidney injury in a non-CKD cohort of patients with chronic obstructive pulmonary disease (COPD) and in an experimental model. In COSYCONET, a prospective multicenter trial comprising 2,314 patients with stable COPD (follow-up 37.1 months), baseline urinary DKK3, proteinuria and estimated glomerular filtration rate (eGFR) were tested for their association with the risk of declining eGFR and the COPD marker, forced expiratory volume in one second. Baseline urinary DKK3 but not proteinuria or eGFR identified patients with a significantly higher risk for over a 10% (odds ratio: 1.54, 95% confidence interval: 1.13-2.08) and over a 20% (2.59: 1.28-5.25) decline of eGFR during follow-up. In particular, DKK3 was associated with a significantly higher risk for declining eGFR in patients with eGFR over 90 ml/min/1.73m2 and proteinuria under 30 mg/g. DKK3 was also associated with declining COPD marker (2.90: 1.70-4.68). The impact of DKK3 was further explored in wild-type and Dkk3-/- mice subjected to cigarette smoke-induced lung injury combined with a CKD model. In this model, genetic abrogation of DKK3 resulted in reduced pulmonary inflammation and preserved kidney function. Thus, our data highlight urinary DKK3 as a possible marker for early identification of patients with silent progressive CKD and for adverse outcomes in patients with COPD. abstract_id: PUBMED:27100872 Is COPD a Progressive Disease? A Long Term Bode Cohort Observation. Background: The Global Initiative for Obstructive Lung Diseases (GOLD) defines COPD as a disease that is usually progressive. GOLD also provides a spirometric classification of airflow limitation. However, little is known about the long-term changes of patients in different GOLD grades. Objective: Explore the proportion and characteristics of COPD patients that change their spirometric GOLD grade over long-term follow-up. Methods: Patients alive for at least 8 years since recruitment and those who died with at least 4 years of repeated spirometric measurements were selected from the BODE cohort database. We purposely included the group of non survivors to avoid a "survival selection" bias. The proportion of patients that had a change (improvement or worsening) in their spirometric GOLD grading was calculated and their characteristics compared with those that remained in the same grade. Results: A total of 318 patients were included in the survivor and 217 in the non-survivor groups. 
Nine percent of survivors and 11% of non-survivors had an improvement of at least one GOLD grade. Seventy-one percent of survivors and non-survivors remained in the same GOLD grade. Those that improved had a greater degree of airway obstruction at baseline. Conclusions: In this selected population of COPD patients, a high proportion of patients remained in the same spirometric GOLD grade or improved in a long-term follow-up. These findings suggest that once diagnosed, COPD is usually a non-progressive disease. abstract_id: PUBMED:18558106 Primary care of the patient with chronic obstructive pulmonary disease-part 4: understanding the clinical manifestations of a progressive disease. This article reviews the main factors influencing the pathophysiology, symptoms, and progression of chronic obstructive pulmonary disease (COPD), including dynamic hyperinflation, exacerbations, and comorbid illness. Key clinical trials and reviews were identified. After formal presentations to a panel of pulmonary specialists and primary care physicians, a series of concepts, studies, and practical clinical implications related to COPD progression were integrated into this article, the last in a 4-part mini-symposium. The main points of roundtable consensus were as follows: (1) COPD is characterized by declining pulmonary function as classically measured by forced expiratory volume in 1 second (FEV1), but the complex pathophysiology and the rationale for bronchodilator therapy are actually better understood in terms of progressive hyperinflation, both at rest (static) and worsening during exercise (dynamic) and exacerbations; (2) although COPD progression is often thought of as inevitable and continuous, the clinical course is actually quite variable and probably influenced by the frequency of exacerbations; (3) preventing exacerbations with pharmacologic and nonpharmacologic care can influence overall morbidity; (4) comorbidities such as lung cancer, cardiovascular disease, and skeletal muscle dysfunction also contribute to declining patient health; and (5) surgical lung volume reduction and lung transplantation should be considered for selected patients with very severe COPD. We conclude that the concept of COPD as a gradual but relentlessly progressive illness that is best monitored via FEV1 is outdated and likely compromises patient care. Many patients now being managed in primary care settings will benefit from an earlier, broad-based, and aggressive approach to management. abstract_id: PUBMED:38178680 Differential miRNA Profiling Reveals miR-4433a-5p as a Key Regulator of Chronic Obstructive Pulmonary Disease Progression via PIK3R2-mediated Phenotypic Modulation. Objective: In this study, a high-throughput sequencing technology was used to screen the differentially expressed miRNAs in patients with "fast" and "slow" progression of chronic obstructive pulmonary disease (COPD). Moreover, the possible mechanism affecting the progression of COPD was preliminarily analyzed based on the target genes of candidate miRNAs. Methods: The "fast" progressive COPD group included 6 cases, the "slow" progressive COPD and Normal groups included 5 cases each, and the COPD group included 3 cases. Peripheral blood samples were taken from the participants, followed by total RNA extraction and high-throughput miRNA sequencing. The differentially expressed miRNAs among the progressive COPD groups were identified using bioinformatics analysis. Then, the candidate miRNAs were externally verified.
In addition, the target gene of this miRNA was identified, and its effects on cell activity, cell cycle, apoptosis, and other biological phenotypes of COPD were analyzed. Results: Compared to the Normal group, a total of 35, 16, and 7 differentially expressed miRNAs were identified in the "fast" progressive COPD, "slow" progressive COPD, and COPD groups, respectively. The results were further confirmed using dual-luciferase reporter assay and transfection tests with phosphoinositide-3-kinase regulatory subunit 2 (PIK3R2) as a target gene of miR-4433a-5p; the result showed a negative regulatory correlation between the miRNA and its target gene. The phenotype detection showed that activation of the phosphatidylinositol 3-kinase (PI3K)/protein kinase B (AKT) signaling pathway might participate in the progression of COPD by promoting the proliferation of inflammatory A549 cells and inhibiting cellular apoptosis. Conclusions: MiR-4433a-5p can be used as a marker and potential therapeutic target for the progression of COPD. As a target gene of miR-4433a-5p, PIK3R2 can affect the progression of COPD by regulating phenotypes such as cellular proliferation and apoptosis. abstract_id: PUBMED:26206781 Progressive wheeze: atrial myxoma masquerading as chronic obstructive pulmonary disease. Atrial myxoma, the commonest primary cardiac neoplasm, presents with symptoms of heart failure, embolic phenomena or constitutional upset. We present an atypical case, with wheeze and symptomatic exacerbations typical of chronic obstructive pulmonary disease. With no early clinical evidence of heart failure, the patient was managed with inhaled steroids and bronchodilators, with little relief. Only when the patient was in extremis requiring intubation, due to respiratory failure, did clinical evidence of left heart failure become apparent, with echocardiography demonstrating a massive left atrial myxoma obstructing the mitral valve annulus. Following successful surgical resection, the patient's symptoms fully abated. This case highlights the importance of considering cardiac wheeze in those initially managed as obstructive airway disease not responding in a typical fashion to initial bronchodilator therapy, and particularly in those with rapidly progressive symptoms. Such patients should be referred early for cardiac imaging. The excellent prognosis and quick recovery after timely surgical resection of a myxoma are also highlighted. abstract_id: PUBMED:25539899 Persistent endothelial dysfunction turns the frequent exacerbator COPD from respiratory disorder into a progressive pulmonary and systemic vascular disease. Chronic obstructive pulmonary disease (COPD) is one of the leading causes of death in developed countries of the world, while the main causes of mortality and morbidity in COPD patients are acute exacerbations and cardiovascular diseases. With regard to the frequency of exacerbations, the phenotype "frequent exacerbators" has been defined, which, besides a more severe clinical course and a significantly higher total mortality, is also characterised by an elevated risk of cardiovascular mortality, as several indicators suggest. It is notable that during the exacerbation of COPD, next to other changes, a significant worsening of endothelial function occurs, while the relationship between endothelial dysfunction (ED) and COPD seems very complex and remains largely unknown.
Making the pathophysiological link between the frequency of exacerbations of COPD and ED could change our understanding of the character of this type of pulmonary disease. We hypothesize that frequent exacerbator COPD is a progressive and generalised vascular disease, not only an isolated respiratory disorder with ancillary systemic effects. Our opinion is that differences in COPD phenotype not only determine the clinical picture but could also be of key importance in defining the progressivity of the disease. ED, which in these patients persists between frequent exacerbations, could be the main cause of the progression of pulmonary disease, and not only of the high cardiovascular risk of these patients. Such a persistent ED in FE COPD, with its pro-inflammatory, vasoconstrictive and prothrombotic mechanisms, could contemporaneously induce new exacerbations of COPD, the progression of pulmonary changes and the development of systemic atherosclerosis as a main extrapulmonary manifestation in these patients. Such a model defines the endothelium as a common soil of progressive pulmonary and cardiovascular changes in FE COPD. It can fully explain all the elements of the clinical course and co-morbidity in FE COPD, for which we still do not have an adequate explanation. abstract_id: PUBMED:29028775 The Effect of Progressive Relaxation Exercises on Fatigue and Sleep Quality in Individuals With COPD. This randomized controlled experimental study was conducted to investigate the effect of progressive muscle relaxation exercises on dyspnea, fatigue, and sleep quality in individuals with chronic obstructive pulmonary disease. A Descriptive Questionnaire and the Chronic Obstructive Pulmonary Disease and Asthma Fatigue Scale, Chronic Obstructive Pulmonary Disease and Asthma Sleep Scale and Medical Research Council Dyspnea Scale were used for data collection. The decrease in the mean dyspnea, fatigue, and sleep scores in the intervention group was statistically significantly more pronounced than in the control group (P < .05). Progressive relaxation exercises can be implemented by nurses working in the clinic to decrease the dyspnea, fatigue, and sleep problems seen in patients with moderate and advanced chronic obstructive pulmonary disease. abstract_id: PUBMED:33836757 There is detectable variation in the lipidomic profile between stable and progressive patients with idiopathic pulmonary fibrosis (IPF). Background: Idiopathic pulmonary fibrosis (IPF) is a chronic interstitial lung disease characterized by fibrosis and progressive loss of lung function. The pathophysiological pathways involved in IPF are not well understood. Abnormal lipid metabolism has been described in various other chronic lung diseases including asthma and chronic obstructive pulmonary disease (COPD). However, its potential role in IPF pathogenesis remains unclear. Methods: In this study, we used ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-QTOF-MS) to characterize lipid changes in plasma derived from IPF patients with stable and progressive disease. We further applied a data-independent acquisition (DIA) technique called SONAR to improve the specificity of lipid identification.
Results: Statistical modelling showed variable discrimination between the stable and progressive subjects, revealing differences in the detection of triglycerides (TG) and phosphatidylcholines (PC) between progressors and stable IPF groups, which was further confirmed by mass spectrometry imaging (MSI) in IPF tissue. Conclusion: This is the first study to characterise lipid metabolism between stable and progressive IPF, with results suggesting disparities in the circulating lipidome with disease progression. Answer: Chronic obstructive pulmonary disease (COPD) is often characterized as a progressive disease, with declining pulmonary function typically measured by forced expiratory volume in 1 second (FEV1). However, the clinical course of COPD can be quite variable. Some studies suggest that a high proportion of COPD patients may remain in the same spirometric GOLD grade or even improve over long-term follow-up, indicating that once diagnosed, COPD may not always be a progressive disease (PUBMED:27100872). This challenges the traditional view of COPD as a gradual but relentlessly progressive illness that is best monitored via FEV1. The progression of COPD is influenced by factors such as dynamic hyperinflation, exacerbations, and comorbid illnesses. Preventing exacerbations with pharmacologic and nonpharmacologic care can influence overall morbidity, and comorbidities like lung cancer, cardiovascular disease, and skeletal muscle dysfunction also contribute to declining patient health (PUBMED:18558106). Moreover, the phenotype "frequent exacerbators" has been defined, which is characterized by an elevated risk of cardiovascular mortality and a more severe clinical course. Persistent endothelial dysfunction in these patients may turn COPD from a respiratory disorder into a progressive pulmonary and systemic vascular disease (PUBMED:25539899). Additionally, differential miRNA profiling has revealed miR-4433a-5p as a key regulator of COPD progression via PIK3R2-mediated phenotypic modulation, suggesting that certain molecular pathways may contribute to the progression of the disease (PUBMED:38178680). In conclusion, while COPD is often described as a progressive disease, the actual progression can vary among individuals. Some patients may experience a stable course or even improvement, while others, particularly frequent exacerbators, may face a more progressive and systemic decline in health. Understanding the complex pathophysiology and individual variability is crucial for managing COPD effectively.
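The spirometric GOLD grades referred to throughout this record are conventionally assigned from post-bronchodilator FEV1 expressed as a percentage of the predicted value, once airflow limitation has been confirmed by a fixed FEV1/FVC ratio below 0.70. A minimal sketch of that classification is shown below; it illustrates the generic GOLD spirometric thresholds and is not taken from any of the abstracts above.

def gold_grade(fev1_pct_predicted: float, fev1_fvc_ratio: float):
    # COPD requires post-bronchodilator FEV1/FVC < 0.70; otherwise no grade applies.
    if fev1_fvc_ratio >= 0.70:
        return None
    if fev1_pct_predicted >= 80:
        return 1  # GOLD 1, mild
    if fev1_pct_predicted >= 50:
        return 2  # GOLD 2, moderate
    if fev1_pct_predicted >= 30:
        return 3  # GOLD 3, severe
    return 4      # GOLD 4, very severe

Under this convention, "remaining in the same GOLD grade" in the BODE cohort analysis simply means that repeated FEV1 measurements stayed within the same threshold band over follow-up.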
Instruction: Automated blood pressure measurement devices: a potential source of morbidity in preeclampsia? Abstracts: abstract_id: PUBMED:8178857 Automated blood pressure measurement devices: a potential source of morbidity in preeclampsia? Objective: The purpose was to compare auscultatory and oscillometric techniques in the determination of maternal blood pressure in normotensive primigravid patients and primigravid patients with proteinuric preeclampsia (blood pressure > 140/90 on two occasions and proteinuria > 0.5 g/L). Study Design: A prospective comparison of systolic and diastolic blood pressure was made with an automated device using oscillometric principles and two observers using a double-headed stethoscope to determine auscultatory observations (phase I and phase IV of the vascular sounds) in normotensive primigravid patients (N = 40) and primigravid patients with proteinuric hypertension (N = 17). Results: In patients with proteinuric preeclampsia the mean differences between auscultatory (phase I and phase IV) and oscillometric observations were 5.4 mm Hg (SEM 1.4 mm, p < 0.05) and 14.8 mm Hg (SEM 2.9 mm, p < 0.01) for systolic and diastolic observations, respectively. In normotensive patients the mean differences between auscultatory (phase I and phase IV) and oscillometric observations were 2.4 mm Hg (SEM 0.9 mm, p not significant) and 7.5 mm Hg (SEM 1.9 mm, p < 0.01) for systolic and diastolic observations, respectively. Conclusion: Automated devices using oscillometric principles "underrecord" systolic and diastolic blood pressure compared with auscultatory observations (phase I and phase IV) in patients with proteinuric preeclampsia. In some cases the difference between observations exceeds 30 mm Hg. abstract_id: PUBMED:30253712 Automated blood pressure self-measurement station compared to office blood pressure measurement for first trimester screening of pre-eclampsia. Background: Preeclampsia is a serious medical disorder affecting pregnancy. Screening in early pregnancy can identify women at risk and enable effective prophylactic treatment. Accurate blood pressure (BP) measurement is an important element of the screening algorithm. Automated self-screening, while attending the first trimester ultrasound scan, using a BP self-measurement (BPSM) station, could be a low-cost alternative to office BP measurements (OBPM) on both arms performed by clinical staff, if the measurement quality can be ensured. Objectives: The aim of this study was to compare automated BPSM using a self-measurement station on one arm with OBPM performed by clinical staff on both arms. The primary outcome was the difference in mean arterial pressure (MAP) between the two methods, and secondary outcomes were safety and practicality issues. Methods: Pregnant women attending ultrasound examination at 12 weeks gestational age were recruited and randomized to start with having two OBPMs taken on both arms by staff, using two standard validated automatic upper arm BP devices, or self-measuring using an automated BPSM station, following a crossover study design. The BPSM station consists of a validated blood pressure device and an add-on sensor system capable of registering blood pressure values, rest time, back support, leg crossing, and ambient noise levels, and providing interactive guidance during the measurement process to support self-measurement.
Results: A total of 80 complete BP measurement sets were obtained, for a total of 240 BPSM measurements and 320 OBPM measurements. We found no significant difference between the OBPM and BPSM methods (p=0.86) for mean arterial pressure (MAP). However, erroneous measurements were observed frequently during the experiment, mainly during the first of the 3 BPSM measurements (6%) and secondarily during the second BPSM measurement (3%). Only one data set (1%) was excluded due to OBPM errors. Conclusion: No significant difference in MAP between the two methods was found. Means for detecting and repeating erroneous BP measurements should be implemented. Measurement errors were found in 9% of the measurement sets, which is not acceptable for clinical use. Thus, several measures have been identified in order to properly identify and recover from such measurement errors in the future. abstract_id: PUBMED:30825925 Validation of the iHealth Track and Omron HEM-9210T automated blood pressure devices for use in pregnancy. Objective: Self-monitoring of blood pressure in pregnancy is increasingly popular with both health care professionals and patients. We assessed the validity of the iHealth Track and Omron HEM-9210T automated blood pressure devices (with Bluetooth connectivity) for use in telemonitoring of blood pressure in pregnancy. Methods: In this prospective observational study, the revised 2010 International Protocol of the European Hypertension Society (EHS) was used for the validation of the two devices against auscultatory sphygmomanometry by two independent observers who took 13 same-arm measurements in 33 pregnant women, of whom 10 were diagnosed with preeclampsia. The measurements were alternated between the test device and a calibrated aneroid sphygmomanometer following the protocol. Both automated devices were assessed sequentially in the same women. Results: In the group of 33 women, the iHealth Track passed the EHS 2010 validation criteria with 86/98/99 of 99 device-observer systolic measurement comparisons and 88/96/98 of 99 device-observer diastolic measurement comparisons within the 5/10/15 mmHg boundaries, respectively. The Omron HEM-9210T passed the same criteria with 85/94/99 of 99 device-observer systolic measurement comparisons and 82/95/99 of 99 device-observer diastolic measurement comparisons. Conclusions: The iHealth Track and Omron HEM-9210T automated blood pressure monitors are validated for use in pregnancy. These two devices can now be added to the short list of validated devices in pregnancy and can be used for self-measurement of blood pressure in a telemonitoring setting of pregnant patients with (a high risk of) hypertensive disease.
Main Outcome Measures: Proportion of readings within 5, 10 and 15 mmHg (absolute differences) between the automated device and two trained, blinded observers, according to the BHS and AAMI criteria. Results: The OMRON-MIT achieved an overall BHS grade B for systolic and grade A for diastolic blood pressure measurement in both pregnancy and pre-eclampsia. The mean (SD) differences between the standard and the test device were -5 (7) mmHg for systolic and 2 (6) mmHg for diastolic blood pressure in pregnancy and -4 (6) mmHg for systolic and 2 (7) mmHg for diastolic blood pressure in pre-eclampsia. This device therefore fulfils the AAMI criteria. Conclusion: The OMRON-MIT is the only automated oscillometric device that has proven to be accurate for blood pressure measurement in pre-eclampsia according to the BHS protocol in pregnancy. Inflationary oscillometry may correct the error associated with oscillometric devices in pre-eclampsia. abstract_id: PUBMED:33276403 Ambulatory and Home Blood Pressure Measurement in Hypertensive Pregnant Women. The prevalence of hypertensive disorders in pregnancy (HDP) is 6-8%. Blood pressure measurement (BPM) remains the cornerstone of diagnosis and should be performed in a standardised manner using automated devices. Office BPM represents only a spot reading in an "artificial" environment, failing to diagnose white coat hypertension (WCH). Ambulatory and home blood pressure measurement (ABPM/HBPM) are recommended for the diagnosis and differentiation of hypertension as well as for blood pressure and therapy control in women with HDP. Patient compliance is crucial for the use of both methods. ABPM is an appropriate method for the early identification of WCH and masked hypertension as well as for differentiating WCH from chronic hypertension before 20 weeks' gestation. HBPM has been shown to reduce the number of antenatal visits and hospital admissions compared to office blood pressure measurement without compromising maternal and fetal outcomes; it also avoids unnecessary antihypertensive medications and reduces the rate of labour inductions and false diagnoses of "preeclampsia". Problems associated with ABPM are its limited availability and inconvenience to patients due to sleep disturbances. Disadvantages of HBPM are the need for patient training, potential measurement errors, and the lack of evidence-based BP thresholds. The widespread use especially of HBPM may contribute to a reduction in the workload of obstetric staff in the hospital and may save hospital expenses. abstract_id: PUBMED:8606340 Automated 24-hour ambulatory blood pressure monitoring in preeclampsia. A prospective controlled study was designed to compare automated 24-hour ambulatory blood pressure monitoring with intermittent blood pressure recordings obtained using a sphygmomanometer. Blood pressure was measured in 20 hospitalized preeclamptic women in the third trimester. Data obtained using the Spacelabs automated blood pressure monitor were recorded over a period of 24 hours, and thereafter stored and processed in a computer. During the same 24-hour period, blood pressure and heart rate were measured by experienced staff at 07.00, 10.00, 13.00, 15.00, 18.00 and 21.00 hours, with the patient in a semi-recumbent position using a conventional mercury sphygmomanometer with a cuff of appropriate size. Korotkoff phase 5 was used as the indicator of diastolic blood pressure in all recordings by the staff.
The main outcome measures were systolic and diastolic blood pressure, mean arterial blood pressure and maternal heart rate. Automated ambulatory monitoring was well tolerated and gave 91.6% successful readings. The mean differences between the blood pressure readings recorded by the monitor and those of intermittent mercury sphygmomanometry during daytime were 0.7 (95% confidence interval -2.6 to 4.0) mmHg for the mean arterial blood pressure, -1.7 (95% confidence interval -6.8 to 3.5) mmHg for the systolic blood pressure, and 1.9 (95% confidence interval -2.2 to 6.0) mmHg for the diastolic blood pressure. The mean differences between day-time and night-time monitored blood pressures were 4.3 (95% confidence interval 0.8 to 7.8) mmHg, 4.6 (95% confidence interval 2.0 to 7.2) mmHg, and 4.4 (95% confidence interval 1.5 to 7.2) mmHg, respectively. The number of patients diagnosed as being hypertensive was similar whether the automated blood pressure monitor or mercury sphygmomanometry was used. Mean maternal heart rate recorded by the monitor or by the staff did not differ. Automated ambulatory blood pressure monitoring is reliable and might improve our understanding of the dynamic changes in blood pressure in pre-eclamptic women and may be a more suitable method to assess the blood pressure control achieved by different drugs. abstract_id: PUBMED:8545438 Ambulatory measurement of blood pressure. The advent of new techniques has greatly contributed to the development of ambulatory measurement as a noninvasive method for evaluating blood pressure. The technique implies use of a validated and reliable standardized apparatus. The operator must strictly comply with operating procedures, which must also be explained to the patient. Ambulatory measurement can be meaningful only if the results are compatible with reference values, which have now been established, and if the causes of possible error can be recognized and interpreted. Ambulatory blood pressure measurement has greatly improved our knowledge of physiological and pathological variations over the circadian cycle, including day/night variability and the effects of psychosensorial stimulation. Diagnostic indications are clearly identified and include borderline hypertension suspected but not confirmed after about 3 months, the white coat effect, severe hypertension when modifications in the circadian cycle are suspected, paroxysmal hypertension, suspected pheochromocytoma, and gravid hypertension or an inversion of the circadian cycle possibly preceding an episode of eclampsia. There are also a certain number of particular indications in patients with degenerative or primary conditions affecting their autonomy. The true prognostic value of these recordings was recognized several years ago and has been confirmed by clinical trials. For example, the white coat effect has no significant implication in terms of predicting less favourable morbidity or mortality. Finally, ambulatory blood pressure measurement has been definitively shown to be a valid method for evaluating the therapeutic effect of an anti-hypertensive drug in a given patient, especially when resting levels are questioned. For therapeutic trials, ambulatory measurements serve as a reference to evaluate the effect of treatment on the circadian cycle. Peak/dip levels can thus be determined in comparison with the residual effect of the drug. A large number of studies remain to be done to identify the full potential of this method.
abstract_id: PUBMED:9166197 Automated blood pressure measurement as a predictor of proteinuric pre-eclampsia. Objectives: To investigate the relation between antenatal clinic, obstetric day unit and 24-hour ambulatory blood pressure measurements and 24-hour proteinuria levels in hypertensive pregnancies. Design: An observational study. Participants: Forty-eight women presenting with new hypertension after 20 weeks of gestation. Results: The closest relation was found between ambulatory blood pressure measurements and 24-hour proteinuria levels. No significant relation was found between the conventional diastolic blood pressure threshold of 90 mmHg and 24-hour proteinuria levels. Conclusions: Ambulatory blood pressure measurement gives better information about disease status in pre-eclampsia as assessed by proteinuria than does conventional sphygmomanometry. abstract_id: PUBMED:30003705 Blood pressure measurement in special populations and circumstances. According to the established validation protocols, a typical validation study of a blood pressure (BP) monitor includes general population adults with normal or elevated BP. It is recognized, however, that automated (oscillometric) BP monitors may have different accuracy or uses in some special populations compared with adults in the general population. Thus, an automated BP monitor with proven accuracy in a general population of adults may not be accurate in a special population, and therefore separate validation is needed. Recognized special populations deserving separate validation are those for which there is theoretical and also clinical evidence that the accuracy of BP monitors in these groups differs from that in the general population. Young children, pregnant women (including those with preeclampsia), individuals with arm circumference >42 cm, and patients with atrial fibrillation are regarded as special populations. Adolescents, individuals older than 80 years, and patients with end-stage renal disease or diabetes mellitus have also been considered as possible special groups, but there is still inadequate evidence of altered accuracy of BP monitors in these subjects. Validation studies should be performed in special populations and evaluated separately after the BP-measuring device has successfully undergone a validation study in a general population (unless the test device is intended only for a special population). This article discusses issues relating to the measurement of BP and the diagnosis of hypertension in selected special populations, as well as in low-resource settings, where a simplified yet efficient evaluation strategy is necessary. abstract_id: PUBMED:30720845 Blood pressure measurement in pregnancy and in hypertensive disorders of pregnancy: devices, techniques and challenges. Measurement of blood pressure is essential for clinical management of patients. To obtain an accurate blood pressure reading, the use of a validated device and an appropriate technique are required. This is of particular importance in pregnancy, where the physiological changes affect vessel wall compliance. Moreover, it is currently difficult to predict in early pregnancy (prior to 20 weeks of gestation) which women would develop hypertension or pre-eclampsia.
For this reason, blood pressure devices require validation in pregnancy and in hypertensive disorders of pregnancy to ensure that accurate readings are obtained and utilised for clinical decisions, otherwise the safety of the mother or the foetus/neonate or both may be compromised. The authors provide a narrative review on devices and techniques for blood pressure measurement in pregnancy and hypertensive disorders of pregnancy as well as the associated challenges. Answer: Automated blood pressure measurement devices have been a subject of investigation in the context of preeclampsia, a serious medical disorder affecting pregnancy characterized by high blood pressure and often proteinuria. The accuracy of these devices in measuring blood pressure is critical for the diagnosis and management of preeclampsia. Studies have shown that automated devices using oscillometric principles may "underrecord" systolic and diastolic blood pressure compared with auscultatory observations in patients with proteinuric preeclampsia, with differences sometimes exceeding 30 mm Hg (PUBMED:8178857). This discrepancy suggests that reliance on automated devices could potentially lead to underestimation of blood pressure and thus underdiagnosis or undertreatment of preeclampsia, which could be a source of morbidity. However, other studies have validated certain automated blood pressure devices for use in pregnancy. For instance, the iHealth Track and Omron HEM-9210T automated blood pressure monitors have been validated for use in pregnancy and can be used for self-measurement of blood pressure in a telemonitoring setting for pregnant patients with (a high risk of) hypertensive disease (PUBMED:30825925). Similarly, the OMRON-MIT inflationary oscillometric device has been shown to provide accurate blood pressure measurements in pregnancy and pre-eclampsia, fulfilling the criteria set by the British Hypertension Society and the Association for the Advancement of Medical Instrumentation (PUBMED:12387468). Ambulatory blood pressure monitoring (ABPM) and home blood pressure measurement (HBPM) are recommended for the diagnosis and differentiation of hypertension as well as for blood pressure and therapy control in women with hypertensive disorders of pregnancy (PUBMED:33276403). ABPM has been found reliable and might improve understanding of the dynamic changes in blood pressure in pre-eclamptic women (PUBMED:8606340). In conclusion, while some automated blood pressure measurement devices may underrecord blood pressure in patients with proteinuric preeclampsia, potentially leading to morbidity, others have been validated and are considered accurate for use in pregnancy, including cases of hypertensive disorders. It is important to use validated devices and appropriate techniques to ensure accurate blood pressure readings for the management of preeclampsia (PUBMED:30720845).
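The device validations summarized above come down to computing paired differences between device and reference readings, a confidence interval around the mean difference (bias) and, in Bland-Altman terms, limits of agreement. The following Python sketch illustrates that calculation; the readings and variable names are hypothetical examples, not data from any of the cited studies.

import math
import statistics

# Hypothetical paired readings (mmHg): automated monitor vs. mercury sphygmomanometer
monitor = [122, 118, 131, 140, 125, 119, 133, 128]
mercury = [120, 121, 129, 143, 124, 122, 130, 127]

diffs = [m - r for m, r in zip(monitor, mercury)]
bias = statistics.mean(diffs)              # mean device-minus-reference difference
sd = statistics.stdev(diffs)               # SD of the paired differences
se = sd / math.sqrt(len(diffs))            # standard error of the bias

# 95% CI of the bias (normal approximation; a t-quantile is preferable for small n)
ci_low, ci_high = bias - 1.96 * se, bias + 1.96 * se
# Bland-Altman 95% limits of agreement between the two methods
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

print(f"bias {bias:.1f} mmHg (95% CI {ci_low:.1f} to {ci_high:.1f} mmHg)")
print(f"limits of agreement {loa_low:.1f} to {loa_high:.1f} mmHg")

A small bias with narrow limits of agreement is the kind of result that the validation criteria cited above (British Hypertension Society, AAMI) formalize with explicit thresholds.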
Instruction: Is caspase inhibition a valid therapeutic strategy in cryopreservation of ovarian tissue? Abstracts: abstract_id: PUBMED:19697118 Is caspase inhibition a valid therapeutic strategy in cryopreservation of ovarian tissue? Purpose: The aim of this study is to determine whether inclusion of a caspase inhibitor can improve the efficacy of cryopreservation of ovarian tissue. Methods: Mice were randomly assigned to Group A (fresh control group), Group B (inclusion of caspase inhibitor) or Group C (non-inclusion of caspase inhibitor). Ovarian tissue in Group B and Group C was vitrified-thawed. TUNEL assay and Bax protein detection were performed after cryopreservation. The mice in all groups received autotransplantation. The number of days before the resumption of estrous cycles was measured daily from the 5th day after surgery, and the percentage of cells expressing PCNA in grafts was measured one month following transplantation. Results: The incidence of TUNEL positive follicles in Group B was significantly higher than that in Group C. Similarly, the percentage of follicles expressing Bax protein in Group B was significantly higher than that in Group C. The number of days before the resumption of estrous cycles in Group B was significantly less than that in Group C. In addition, the percentage of follicular and stromal cells expressing PCNA of grafts in Group B was significantly higher than that in Group C. Conclusions: The global caspase inhibitor Z-VAD-FMK decreases the incidence of apoptosis of ovarian tissue induced by cryopreservation, and inclusion of a caspase inhibitor improves the efficacy of cryopreservation of ovarian tissue. abstract_id: PUBMED:33973983 Inhibition of mTORC1 Signaling Pathway is a Valid Therapeutic Strategy in Transplantation of Cryopreserved Mouse Ovarian Tissue. Background: Blockage of mTORC1 can inhibit the transformation of primordial follicles into growing follicles in the ovaries. Objective: The aim of this study was to investigate the role of mTORC1 inhibition in the cryopreservation and transplantation of mouse ovarian tissues. Materials And Methods: ICR (Institute of Cancer Research) mice were randomly divided into control group (autograft), cryopreservation group (cryopreservation + autograft), and mTORC1 inhibition group (cryopreservation + autograft + mTOR inhibitor). After 30 days of auto-transplantation, the follicle number of grafts and kit ligand (KL) immunostaining in grafts were quantified. In addition, serum concentration of anti-Müllerian hormone (AMH) was examined by ELISA. Results: The graft in the mTORC1 inhibition group showed a significantly higher proportion of primordial follicles and a significantly lower proportion of growing follicles compared with the cryopreservation group. Furthermore, a significant decrease in expression of KL (a marker gene related to follicular development) was observed in the mTORC1 inhibition group in contrast to the cryopreservation group. The follicle number of grafts and serum AMH concentration in the mTORC1 inhibition group were significantly higher than those in the cryopreservation group. Conclusion: Inhibition of the mTORC1 signaling pathway is a valid therapeutic strategy in transplantation of cryopreserved mouse ovarian tissue via suppression of primordial follicle activation. abstract_id: PUBMED:10855250 Indications for cryopreservation of ovarian tissue Background: The first attempts at ovarian tissue cryopreservation (OTCP) were performed in the 1950s. 
Recent research efforts have demonstrated the possibility of obtaining pregnancy with this technique in three animal species and have shown good primordial human follicle survival up through the freezing process. Potential Indications: OTCP is a procedure designed to protect ovarian tissue from threats to its follicular reserves. The first threat is the time-related massive physiological destruction of the follicular reserve ending with menopause. OTCP would enable this wastage to be arrested, thereby prolonging ovarian cycling beyond limits. Conditions producing premature menopause, when known in advance, may also potentially benefit from OTCP. The iatrogenic destruction of the follicular reserve by radiation therapy or alkylating agents is another situation where OTCP would enable the patient's fertility to be preserved. Among these clinical settings, iatrogenic destruction of follicular stocks appears to us, with the current state of research, to be an acceptable indication for OTCP. abstract_id: PUBMED:19562046 Ovarian tissue cryopreservation: An update. Ovarian tissue cryopreservation and transplantation have been considered as promising means of fertility preservation for women who have survived cancer, with live births being reported from this technique. Ovarian tissue cryopreservation can be offered to patients with different types of cancer. Among the cryoprotectants, glycerol appears to give the poorest results. The techniques of cryopreserving ovarian tissue and alternative approaches have been reviewed in this article. The readers are reminded that this technique is still experimental and informed consent should be obtained from patients after counseling with medical information on the risks involved. abstract_id: PUBMED:30018897 Ovarian tissue cryopreservation and transplantation in patients with cancer. Chemotherapy and radiotherapy have improved survival rates of patients with cancer. However, they can cause ovarian failure and infertility in women of reproductive age. Infertility following cancer treatment is considered a major quality of life issue. Ovarian tissue cryopreservation and transplantation is an important option for fertility preservation in adult patients with cancer who need immediate chemotherapy or do not want to undergo ovarian stimulation. Ovarian tissue freezing is the only option for preserving the fertility of prepubertal patients with cancer. In a recent review, it was reported that frozen-thawed ovarian transplantation has led to about 90 live births and the conception rate was about 30%. Endocrine function recovery was observed in 92.9% between 3.5 and 6.5 months after transplantation. Based on our review, ovarian tissue cryopreservation and transplantation may be carefully considered before cancer treatment in order to preserve fertility and endocrine function in young cancer survivors. abstract_id: PUBMED:33569092 Ovarian tissue cryopreservation and novel bioengineering approaches for fertility preservation. Purpose Of Review: Breast cancer patients who cannot delay treatment or for whom hormone stimulation and egg retrieval are contraindicated require alternative methods of fertility preservation prior to gonadotoxic treatment. Ovarian tissue cryopreservation is an alternative approach that may offer patients the opportunity to preserve fertility and carry biologically-related children later in life. 
Various experimental approaches are being explored to obtain mature gametes from cryopreserved and thawed ovarian tissue for fertilization and implantation using biomimetic tissue culture in vitro. Here we review the most recent developments in ovarian tissue cryopreservation and exciting advances in bioengineering approaches to in vitro tissue and ovarian follicle culture. Recent Findings: Slow freezing is the most widely accepted method for ovarian tissue cryopreservation, but efforts have been made to modify vitrification for this application as well. Numerous approaches to in vitro tissue and follicle culture are in development, most prominently two-step culture systems for ovarian cortical tissue and encapsulation of ovarian follicles in biomimetic matrices for in vitro culture. Summary: Refinements to slow-freezing and vitrification protocols continue to address challenges associated with cryopreservation, such as ice crystal formation and damage to the stroma. Similarly, improvements to in vitro tissue and follicle culture show promise for utilizing patients' cryopreserved tissues to obtain mature gametes after disease treatment and remission. Development of an effective and reproducible culture system for human ovarian follicles will serve as a broad assisted reproductive technology for cancer survivors who cryopreserved tissue prior to treatment. abstract_id: PUBMED:26776823 First transplantation of cryopreserved ovarian tissue in Portugal, stored for 10 years: an unexpected indication. Ovarian tissue cryopreservation represents a valid strategy to preserve ovarian function in patients with a high risk of premature ovarian failure. We present a case of ovarian tissue cryopreservation carried out in an 18-year-old woman after a laparotomy for a left adnexal mass with left adnexectomy. Congenital absence of the right ovary was observed during surgery. To preserve fertility, rescue cryopreservation of ovarian tissue was carried out under extreme conditions (without adopting the standard published protocol, not yet available at our centre). Ten years later, transplantation of cryopreserved ovarian tissue was carried out and, shortly afterwards, restoration of ovarian function was confirmed. abstract_id: PUBMED:35774145 A Systematic Review of Ovarian Tissue Transplantation Outcomes by Ovarian Tissue Processing Size for Cryopreservation. Ovarian tissue cryopreservation (OTC) is the only pre-treatment option currently available to preserve fertility for prepubescent girls and patients who cannot undergo ovarian stimulation. Currently, there is no standardized method of processing ovarian tissue for cryopreservation, despite evidence that fragmentation of ovaries may trigger primordial follicle activation. Because fragmentation may influence ovarian transplant function, the purpose of this systematic review was (1) to identify the processing sizes and dimensions of ovarian tissue within sites around the world, and (2) to examine the reported outcomes of ovarian tissue transplantation including reported duration of hormone restoration, pregnancy, and live birth. A total of 2,252 abstracts were screened against the inclusion criteria. In this systematic review, 103 studies were included for analysis of tissue processing size and 21 studies were included for analysis of ovarian transplantation outcomes. Only studies where ovarian tissue was cryopreserved (via slow freezing or vitrification) and transplanted orthotopically were included in the review. 
The size of cryopreserved ovarian tissue was categorized based on dimensions into strips, squares, and fragments. Of the 103 studies, 58 fertility preservation sites were identified that processed ovarian tissue into strips (62%), squares (25.8%), or fragments (31%). Ovarian tissue transplantation was performed in 92 participants that had ovarian tissue cryopreserved into strips (n = 51), squares (n = 37), and fragments (n = 4). All participants had ovarian tissue cryopreserved by slow freezing. The pregnancy rate was 81.3%, 45.5%, 66.7% in the strips, squares, fragment groups, respectively. The live birth rate was 56.3%, 18.2%, 66.7% in the strips, squares, fragment groups, respectively. The mean time from ovarian tissue transplantation to ovarian hormone restoration was 3.88 months, 3.56 months, and 3 months in the strips, squares, and fragments groups, respectively. There was no significant difference between the time of ovarian function restoration and the size of ovarian tissue. Transplantation of ovarian tissue, regardless of its processing dimensions, restores ovarian hormone activity in the participants that were reported in the literature. More detailed information about the tissue processing size and outcomes post-transplant is required to identify a preferred or more successful processing method. Systematic Review Registration: [https://www.crd.york.ac.uk], identifier [CRD42020189120]. abstract_id: PUBMED:34906692 Review of ovarian tissue cryopreservation techniques for fertility preservation. Ovarian failure and ovarian malfunction are among the major fertility problems in women of reproductive age (18-35 years). It is known that various diseases, such as ovarian cancer and premature ovarian failure, as well as certain treatments, such as radiotherapy and chemotherapy of other organs, can affect the normal process of folliculogenesis and cause infertility. In recent years, various procedures have been proposed for the treatment of infertility. One of the newest methods is the use of cryopreserved ovarian fragments after cancer treatment. According to some studies, this method yields very satisfactory results. Although ovarian tissue cryopreservation (OTC) is an accepted technique of fertility preservation, the relative efficacy of cryopreservation protocols remains controversial. Considering the controversies about these methods and their results, in this study, we aimed to compare different techniques of ovarian cryopreservation and investigate their advantages and disadvantages. Reviewing the published articles may make it possible to identify appropriate strategies and improve infertility treatment in these patients. abstract_id: PUBMED:33939167 Methods of Ovarian Tissue Cryopreservation: Is Vitrification Superior to Slow Freezing?-Ovarian Tissue Freezing Methods. After cancer treatment, female survivors often develop ovarian insufficiency or failure. Oocyte and embryo freezing are well-established fertility preservation options, but cannot be applied in pre-pubescent girls, in women with hormone-sensitive malignancies, or when gonadotoxic treatment cannot be delayed. Although ovarian tissue cryopreservation (OTC) has been used to restore fertility and endocrine function, the relative efficacy of its two major protocols, slow freezing and vitrification, remains controversial. 
This literature review evaluates clinical and lab-based studies published between January 2012 and June 2020 to determine whether vitrification, the optimal technique for oocyte and embryo cryopreservation, preserves ovarian tissue more effectively than slow freezing. Due to limited clinical data involving ovarian tissue vitrification, most clinical studies focus on slow freezing. Only 9 biochemical studies that directly compare the effects of slow freezing and vitrification of human ovarian tissue were noted. Most studies report no significant difference in follicular morphology and distribution between cryopreservation methods, but these findings must be interpreted in the context of high methodological variability. Discrepant findings regarding the effects of cryopreservation method on follicle viability, gene expression, and hormone production require further evaluation. Early clinical outcomes appear favorable for vitrification, but additional studies and longer term follow-up are needed to establish its efficacy. Sharing data through national or international registries would expedite this analysis. However, even if research corroborates conclusions of no clinical or biochemical difference between cryopreservation methods, the decreased costs and increased efficiency associated with vitrification make this method more accessible and cost-effective. Answer: Yes, caspase inhibition is considered a valid therapeutic strategy in the cryopreservation of ovarian tissue. The study identified as PUBMED:19697118 found that the inclusion of a global caspase inhibitor, Z-VAD-FMK, decreased the incidence of apoptosis in ovarian tissue induced by cryopreservation. The results showed that the incidence of TUNEL positive follicles and the percentage of follicles expressing Bax protein were significantly lower in the group with the caspase inhibitor (Group B) compared to the group without it (Group C). Additionally, the number of days before the resumption of estrous cycles was significantly less, and the percentage of follicular and stromal cells expressing PCNA in grafts was significantly higher in Group B than in Group C. These findings suggest that the inclusion of a caspase inhibitor improves the efficacy of cryopreservation of ovarian tissue.
Instruction: Anastomotic femoral aneurysms: is an increase in interval between primary operation and aneurysms formation related to change in incidence? Abstracts: abstract_id: PUBMED:9785919 Anastomotic femoral aneurysms: is an increase in interval between primary operation and aneurysms formation related to change in incidence? Objective: Anastomotic pseudoaneurysms continue to be a late complication of vascular surgery, particularly following prosthetic graft procedures. The purpose of this study was to investigate if a previously reported increase in interval between the original operation and the development of pseudoaneurysm was related to a change in incidence. Design: Retrospective study. Methods: We reviewed the records of 76 patients who presented with 90 femoral pseudo-aneurysms and underwent reconstructive procedures from January 1989 to June 1994. The median age was 69 years (range: 39-83). In the same time period all femoral artery anastomosis operations were recorded. Results: The incidence of femoral pseudo-aneurysms in Copenhagen was approximately 4.3%. Conclusions: A previously reported increase in interval between primary operation and aneurysms formation was not related to a change in incidence during the same time period. abstract_id: PUBMED:8616654 Anastomotic femoral aneurysms: increase in interval between primary operation and aneurysm formation. Objective: Anastomotic pseudoaneurysms continue to be a late complication of vascular surgery, particularly following prosthetic graft procedures. The purpose of this study was to investigate if a previously reported increase in interval between the original operation and the development of pseudoaneurysm was still valid. Design: Retrospective study. Material And Methods: We reviewed the records of 76 patients who presented with 90 femoral aneurysms. The median age was 69 years (range: 39-83). The commonest previous vascular surgery was an aortofemoral bypass in 61 cases. Results: The interval between the original operation and the repair of the pseudoaneurysms was 9 years (range 1 month to 26 years). Conclusions: This study confirms the previously noted trend of an increasing time to aneurysm formation from 3 years before 1975, 5 years between 1976 and 1980, and 6 years between 1981 and 1990. abstract_id: PUBMED:9678544 Para-anastomotic aneurysms: incidence, risk factors, treatment and prognosis. Background: The purpose of this retrospective study was to analyze the incidence, risk factors, treatment, and prognosis of para-anastomotic aneurysms. Methods: During the period between January, 1980 and August, 1996, 511 patients underwent surgical operations for arterial diseases with grafts and were followed for more than 30 days (average: 3.5 years). The number of anastomoses was 1445 in all. Until October, 1996, 18 para-anastomotic aneurysms had been detected in 13 patients. By Kaplan-Meier's method, the incidence of para-anastomotic aneurysms at 5, 10, and 15 years was 0.8, 6.2, and 35.8%, respectively. Univariate analysis indicated that arteriosclerosis obliterans, hypertension, thromboendarterectomy and an anastomosis in the groin were significant risk factors, while stepwise multivariate analysis revealed only hypertension as significant. The mean interval from the primary operation to the diagnosis was 79 months. Ten aneurysms were operated on, and seven were produced by dehiscence of the anastomotic line, namely anastomotic aneurysms, and three were juxta-anastomotic aneurysms with intact anastomotic lines. 
Eight patients underwent resection or exclusion of the aneurysm and reconstruction with a new graft, and two patients underwent a replacement of the aneurysmal autovein patch with a Dacron one and aneurysmorrhaphy of the parent aneurysmal artery. Results: No recurrence has been detected. Of eight patients who were followed conservatively, two died of rupture and renal failure following acute arterial occlusion. Conclusions: Since para-anastomotic aneurysms can lead to fatal complications, an enlarging or symptomatic aneurysm should be treated promptly. abstract_id: PUBMED:33608205 The Incidence of Para-Anastomotic Aneurysm After Open Repair Surgery for Abdominal Aortic Aneurysm Through Routine Annual Computed Tomography Imaging. Objective: Open repair surgery (ORS) for an abdominal aortic aneurysm (AAA) remains an important treatment option, but the incidence of para-anastomotic aneurysms is unclear. The purpose of this study was to estimate the incidence of para-anastomotic aneurysms and reveal secondary complications through routine annual computed tomography (CT) imaging. Methods: One hundred and forty-seven patients who underwent ORS for AAA between January 2006 and December 2015 and received routine CT imaging surveillance were enrolled. Results: The follow up period was 7.1 ± 2.7 years. The total follow up time of all patients was 1041.1 years, and 958 CT images were collected (0.92 CT scans/year/patient). A proximal para-anastomotic aneurysm was detected in five patients (3.4%). Four of the five patients had aneurysmal dilation at the initial ORS (proximal diameter >25 mm), which enlarged during follow up; thus, a de novo proximal para-anastomotic aneurysm was observed in one patient (0.7%). The time between surgery and the diagnosis of all proximal para-anastomotic aneurysms was 5.7 ± 1.4 years, and the de novo proximal para-anastomotic aneurysm was detected at 11.8 years. The incidence of all para-anastomotic aneurysms at five and 10 years was 2.2% and 3.6%, and the incidence of the de novo para-anastomotic aneurysm was 0% at five and 10 years. Nine synchronous thoracic aortic aneurysms (TAAs) and seven metachronous TAAs were detected, and 16 patients (10.9%) had a TAA. Neoplasms were detected in 18 of 147 patients (12.2%), and the most dominant neoplasm was lung cancer. Conclusion: The incidence of para-anastomotic aneurysms was low; thus, abdominal and pelvic CT imaging every five years may be sufficient and consistent with the current AAA guidelines. In contrast, TAAs were diagnosed in a high percentage of patients, and based on these observations, routine CT imaging should be expanded to include the chest. abstract_id: PUBMED:9546229 Anastomotic aneurysms after surgical treatment of Takayasu's arteritis: a 40-year experience. Purpose: To evaluate the clinical characteristics of anastomotic aneurysms that develop in surgically treated patients with Takayasu's arteritis. Methods: Among 103 patients with Takayasu's arteritis treated surgically over 40 years, 91 patients with 259 anastomoses (allowing for exclusion of 12 operative deaths) participated in a follow-up study from 1 month to 37.3 years, with a mean value +/- SEM of 17.3 +/- 1.1 years and a follow-up completion rate of 93% at 30 years. 
The clinical characteristics of anastomotic aneurysms were clarified, and the influences of several factors (sites of anastomoses, occlusive or aneurysmal disease, suture material, preoperative systemic inflammation, and administration of corticosteroids) on formation of anastomotic aneurysms were analyzed by means of the life-table method and Cox regression analysis. Results: Twenty-two uninfected anastomotic aneurysms were found among 14 patients (22 of 259 anastomoses, 8.5%). The interval between the previous operation and diagnosis varied from 1.6 to 30 years with a mean value +/- SEM of 9.8 +/- 1.8 years. The cumulative incidence of anastomotic aneurysm at 20 years was 12.0%. Systemic inflammation or steroid administration had little influence on formation of anastomotic aneurysm. Instead, anastomotic aneurysm tended to occur after operations for aneurysmal lesions. Conclusions: Anastomotic aneurysm can occur anytime after operations for Takayasu's arteritis. The development of anastomotic aneurysm is not influenced by any factor specific to this disease except the presence of an aneurysmal lesion. abstract_id: PUBMED:29018652 Anastomotic Aneurysm Formation after High Flow Bypass Surgery: A Case Report with Histopathological Study. Bypass surgery is often used in the treatment of large and giant aneurysms. Major complications that often arise during the perioperative period include cranial nerve palsy, ischemic complications, and hyperperfusion. However, there have been a few reports about late onset complications such as anastomotic aneurysms. In particular, anastomotic aneurysm after high flow bypasses has never been reported. A 57-year-old woman who had been suffering from left eye pain was diagnosed with a large aneurysm of the left internal carotid artery (ICA) in the cavernous portion. She was treated with high flow bypass surgery using a radial artery graft and proximal ICA ligation. One and a half years after surgery, a de novo aneurysm (7.5 mm in maximum diameter) was detected in the anastomotic site. To prevent rupture, the aneurysm was resected and the middle cerebral artery (MCA) was reconstructed via superficial temporal artery (STA)-MCA bypass. The postoperative course was uneventful and the anastomotic aneurysm did not recur until 2 years after the second surgery. Histological evaluation of the anastomotic aneurysm demonstrated loss of smooth muscle cells and proliferation of neointima, features consistent with a true aneurysm. Interestingly, the above changes were prominent in the radial artery graft while the MCA was almost histologically intact. As such, intraoperative intimal damage and postoperative hemodynamic stress to the radial artery graft may be a cause of aneurysm formation. Anastomotic aneurysm may occur after high flow bypass, necessitating careful postoperative follow-up. abstract_id: PUBMED:6738261 Anastomotic aneurysms as a late complication of reconstructive vascular surgery of the lower extremity 29 operations were performed because of an anastomotic aneurysm in 25 patients. The incidence of false aneurysm was 0.7% (4079 reconstructive operations from 1964 to 1979). Arterial reconstructions previous to the formation of aneurysm were: aorto-femoral bifurcation graft 9, ileo-femoral bypass 9, femoro-popliteal reconstructions 11 (4 of them were Sparks' prostheses). 31% of the cases had complications (rupture, thrombosis) when operated on, and 73% were located in the groin. At the primary operation mostly Dacron had been used. 
In all instances non-absorbable synthetic suture material has been applied. If the interval between the first operation and the formation of the aneurysms is short, infection is to be suspected. The diagnosis of aneurysms distal to the inguinal ligament is easy; aneurysms of the iliac region were found after complications (rupture, thrombosis) had occurred. The most frequent reconstructive procedure was graft interposition, but aneurysmorrhaphy was successful in certain cases. Two patients died postoperatively. Follow-up showed one recurrence (in the groin). We suggest that 1) insufficiency of the suture line because of tension and 2) dilation of prosthetic Dacron material have great importance for formation of anastomotic aneurysm, whereas local endarterectomy or end-side anastomosis do not seem to be significant. abstract_id: PUBMED:17061054 Anastomotic pseudoaneurysms: our experience with 49 cases. We investigated the factors implicated in the pathogenesis of anastomotic aneurysm formation and the postoperative course of patients with such a complication. Forty-five patients with 49 anastomotic aneurysms were diagnosed and treated in two vascular surgery departments in Athens, Greece, during an 8-year period. Emergent complications occurred in 15 cases, rupture in 11, and thromboembolic episodes in another four. Preoperative diagnostic workup in the remaining elective cases (n = 34) included color duplex scan, computed tomographic scan, and angiography. All patients underwent operation, and cultures were obtained during the surgical procedures. Histological examination of the host artery wall adjacent to the aneurysm was also performed. Aortobifemoral bypass was the original operation performed in the majority of cases (71%), and the femoral anastomosis was the most frequent site involved (85.7%). Emergent procedures were associated with increased mortality (46.6%), whereas elective operation resulted in high patency rates and no mortality. In an attempt to isolate predisposing factors that contributed to aneurysm formation, we concluded that the etiology was multifactorial in approximately one-third of the cases and degenerative host artery disease was the main cause (40%). Arterial degeneration is the leading cause of anastomotic aneurysm formation, and emergency arterial reconstruction in cases of aneurysm complication is associated with a poor prognosis. abstract_id: PUBMED:24988233 Location and incidence rate of anastomotic aneurysms--own clinical material and literature review. Unlabelled: Anastomotic aneurysms occur at various levels of the arterial system. Determining their location and incidence rate required investigation of a large body of patient clinical material. Material And Methods: In the years 1989-2010, 230 anastomotic aneurysms were operated on in 180 patients at the local centre. Results: For 187 (81.3%) patients anastomotic aneurysms were localised in the groin, while for the remaining 43 (18.7%) they occurred in other localisations. In aortic arch branches they occurred four times (1.7%), in descending aorta--three times (1.3%), in abdominal aorta--14 (6.1%) and in iliac arteries--6 (2.6%). For anastomoses with the popliteal artery, they were diagnosed in 16 (7%) patients. Own clinical material was compared with literature data. Conclusions: Anastomotic aneurysms in over 80% of cases occur in the groin; the remaining percentage corresponds to other localisations. 
abstract_id: PUBMED:25805609 A rare para-anastomotic femoral artery aneurysm in a patient with history of femoro-popliteal bypass graft occlusion. Introduction: Para-anastomotic aneurysms may develop as a late complication of arterial reconstructions. They are mostly seen after abdominal aortic procedures and also are mostly encountered as pseudoaneurysms. Presentation Of Case: We report a true para-anastomotic aneurysm in the common femoral artery of a 57-year-old man with an occluded femoro-popliteal bypass graft. We discuss the surgical management and possible causation of this extremely rare condition. Discussion: Para-anastomotic aneurysms can have some life-threatening consequences, such as erosion of adjacent tissues, distal embolization from mural thrombus and rupture of the aneurysmal sac. Conclusion: Endovascular treatment using stent grafts is highly effective, but it has some handicaps such as occlusion of major arterial orifices. For this reason, surgical reconstruction is the preferable choice for the management of para-anastomotic aneurysms. Answer: The increase in the interval between the primary operation and the formation of anastomotic femoral aneurysms does not appear to be related to a change in incidence. A retrospective study conducted in Copenhagen found that the incidence of femoral pseudo-aneurysms was approximately 4.3%, and concluded that a previously reported increase in the interval between the primary operation and aneurysm formation was not related to a change in incidence during the same time period (PUBMED:9785919). This finding is consistent with another study that confirmed a trend of increasing time to aneurysm formation, with intervals increasing from 3 years before 1975, to 5 years between 1976 and 1980, and to 6 years between 1981 and 1990, without indicating a change in the overall incidence of anastomotic femoral aneurysms (PUBMED:8616654).
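Several of the abstracts above report cumulative incidence of (para-)anastomotic aneurysms at fixed time points using the Kaplan-Meier (life-table) method. As a rough illustration of how such estimates are obtained from follow-up data, the Python sketch below implements a minimal Kaplan-Meier estimator; the follow-up times and event flags are hypothetical and the simplified update assumes no tied event times.

# Hypothetical follow-up data: (years to event or censoring, event flag)
# event = 1 means an anastomotic aneurysm was detected; event = 0 means censored
follow_up = [(2.0, 0), (4.5, 1), (5.0, 0), (7.2, 1), (9.8, 0), (10.1, 1), (12.0, 0), (15.3, 1)]

def km_cumulative_incidence(data):
    """Return a list of (time, cumulative incidence) steps from a Kaplan-Meier estimate."""
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    for t, event in sorted(data):
        if event:
            surv *= 1.0 - 1.0 / n_at_risk   # survival steps down at each event time
            curve.append((t, 1.0 - surv))   # cumulative incidence = 1 - S(t)
        n_at_risk -= 1                      # events and censored subjects leave the risk set
    return curve

for t, ci in km_cumulative_incidence(follow_up):
    print(f"{t:5.1f} y: cumulative incidence {ci:.1%}")

Reading the resulting step function at 5, 10 or 15 years gives estimates analogous to the figures quoted in the cited studies, while properly accounting for patients censored before those time points.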
Instruction: E-health use in african american internet users: can new tools address old disparities? Abstracts: abstract_id: PUBMED:25536065 E-health use in african american internet users: can new tools address old disparities? Objective: Web-based health information may be of particular value among the African American population due to its potential to reduce communication inequalities and empower minority groups. This study explores predictors of e-health behaviors and activities for African American Internet users. Materials And Methods: We used the 2010 Pew Internet and American Life Health Tracking Survey to examine sociodemographic and health status predictors of e-health use behaviors among African Americans. E-health use behaviors included searching for e-health information, conducting interactive health-related activities, and tracking health information online. Results: In the African American subsample, 55% (n=395) were at least "occasional" Internet users. Our model suggests that searching for health information online was positively associated with being helped/knowing someone helped by online information (odds ratio [OR]=5.169) and negatively associated with lower income (OR=0.312). Interactive health activities were associated with having a college education (OR=3.264), being 65 years of age or older (OR=0.188), having a family member living with chronic conditions (OR=2.191), having a recent medical crisis (OR=2.863), and being helped/knowing someone helped by online information (OR=8.335). E-tracking behaviors were significantly stronger among African Americans who had health insurance (OR=3.907), were helped/knowing someone helped by online information (OR=4.931), and were social media users (OR=4.799). Conclusions: Findings suggest significant differences in e-health information-seeking behaviors among African American Internet users; these differences are mostly related to personal and family health concerns and experiences. Targeted online e-health resources and interventions can educate and empower a significant subset of the population. abstract_id: PUBMED:24650621 Disparities in health-related Internet use among African American men, 2010. Given the benefits of health-related Internet use, we examined whether sociodemographic, medical, and access-related factors predicted this outcome among African American men, a population burdened with health disparities. African American men (n = 329) completed an anonymous survey at a community health fair in 2010; logistic regression was used to identify predictors. Only education (having attended some college or more) predicted health-related Internet use (P < .001). African American men may vary in how they prefer to receive health information; those with less education may need support to engage effectively with health-related Internet use. abstract_id: PUBMED:28387794 Engaging African American women in research: an approach to eliminate health disparities in the African American community. Objective: To explore the success of community-based participatory research [CBPR] in engaging African American women to achieve health equity by elucidating community, trust, communication and impact. Recommendations helpful for researchers interested in engaging communities to achieve health equity in the USA are included. Introduction: African American women experience health disparities of multifactorial etiology and are underrepresented in research. 
CBPR is a collaborative approach that incorporates perspectives that address the intricate determinants of health and has been reported as an effective means to address health disparities. Yet, the science of CBPR seems elusive to researchers in the medical field. The opportunity exists to better understand and expand the use of the principles of engagement, replication, and sustainability in engaging African American women in health research. Methods: A variety of literature regarding engaging African American women in community-based participatory research was reviewed. Results: CBPR focused on robust engagement of marginalized groups continues to be validated as a vital approach to the elimination of disparities and improved health for all, especially ethnic and racial minority populations. However, limited evidence of focused engagement of African American women was found. Making specific outreach to African American women must be a community and patient engagement priority to achieve health equity. Conclusions: Continued research is needed which specifically focuses on building and sustaining engagement with African American women and their communities. This research can transform healthcare access, experiences and outcomes by yielding actionable information about what African American women need and want to promote wellness for themselves and their communities. abstract_id: PUBMED:32649378 Equity in Genomics: A Brief Report on Cardiovascular Health Disparities in African American Adults. Background: African Americans are more likely to die from cardiovascular disease (CVD) than all other populations in the United States. Although technological advances have supported rapid growth in applying genetics/genomics to address CVD, most research has been conducted among European Americans. The lack of African American representation in genomic samples has limited progress in equitably applying precision medicine tools, which will widen CVD disparities if not remedied. Purpose: This report summarizes the genetic/genomic advances that inform precision health and the implications for cardiovascular disparities in African American adults. We provide nurse scientists recommendations for becoming leaders in developing precision health tools that promote population health equity. Conclusions: Genomics will continue to drive advances in CVD prevention and management, and equitable progress is imperative. Nursing should leverage the public's trust and its widespread presence in clinical and community settings to prevent the worsening of CVD disparities among African Americans. abstract_id: PUBMED:21213192 Addressing health disparities: the role of an African American health ministry committee. Healthy People 2010 identified the need to address health disparities among African Americans, Asians, American Indians, Hispanics, Alaskan Americans, and Pacific Islanders. These are groups disproportionately affected by cancer, cardiovascular disease, diabetes, HIV infection, and AIDS. Despite the growing body of research on health disparities and effective interventions, there is a great need to learn more about culturally appropriate interventions. Social work professional values and ethics require that service delivery be culturally competent and effective. Social workers can collaborate with community based health promotion services, exploring new ways to ensure that health disparities can be addressed in institutions to which African Americans belong. 
This article presents findings of an African American health ministry committee's health promotion initiatives and probes the viability of a health ministry committee's role in addressing health disparities through education. The promising role of the Black church in addressing health disparities is explored. abstract_id: PUBMED:27977253 Reducing disparities and achieving equity in African American women's health. The colloquial phrase "Black Don't Crack" refers to perceptions of African American women retaining youthful features over time and seemingly defying the aging process. This conjecture appears to only be skin deep, as across almost every health indicator, African American women fare worse than women in other racial/ethnic groups. African American women experience excess morbidity in obesity, diabetes, and adverse birth outcomes, and are more likely than women of other ethnic groups to die from breast and cervical cancer, cardiovascular disease, and HIV/AIDS. This article provides an overview of social, biological, psychological, and cultural factors that contribute to African American women's health. Attention is directed to cultural factors that are both protective and risky for African American women's health. There is a need to garner a better understanding of the complex nature of health disparities experienced by African American women in order to move the field forward in making progress toward achieving health equity for this population. This article addresses this need and offers recommendations for translating science in this area into meaningful population level impact. abstract_id: PUBMED:29502446 Sexual and behavioral health disparities among African American sexual minority men and women. Introduction: Sexual and behavioral health disparities have been consistently demonstrated between African American and White adults and between sexual minority and heterosexual communities in the United States; however, few studies using nationally representative samples have examined disparities between sexual minority and heterosexual adults within African American populations. The purpose of this study was to examine the prevalence of sexual and behavioral health outcomes between sexual minority and heterosexual African American adults and to examine whether there were different patterns of disparities for African American sexual minority men and women, respectively. Methods: We analyzed data from 4502 African American adults who participated in the 2001-2015 waves of the National Health and Nutrition Examination Survey. Using multivariable analyses, we examined differences in HIV, sexually transmitted infections, mental health, and substance use among African American sexual minority and heterosexual men and women. Results: After adjusting for sociodemographic variables, African American sexual minority men had significantly higher odds of HIV, sexually transmitted infections, and poor mental health compared to their heterosexual male counterparts, whereas African American sexual minority women had significantly higher odds of Hepatitis C, poor mental health, and substance use compared to their heterosexual female counterparts. Conclusions: These findings demonstrate notable sexual orientation disparities among African American adults. 
Disparities persisted beyond the role of sociodemographic factors, suggesting that further research utilizing an intersectional approach is warranted to understand the social determinants of adverse health outcomes among African American sexual minority men and women. abstract_id: PUBMED:20828107 Disparities and social inequities: is the health of African American women still in peril? An amalgam of health concerns differentially affects the behavioral, psychological, and physical well-being of African American women. These disparities are both the result of, and contributors to, marked differences in the perception, interpretation and treatment of various psychological disorders and chronic medical conditions. Data show that African American women are diagnosed with more chronic and debilitating illnesses than found in the general population, and are often misdiagnosed with a myriad of psychiatric and medical disorders. Despite these findings, ambiguity remains about the contextual factors that affect the physical and mental well-being of African American women. The focus of this review was not to describe all psychological or medical conditions with deleterious outcomes among African American women, but rather collectively address identified mental and physical health issues prevailing among African American women. This approach addresses the urgent need to better understand the health needs of African American women in the United States, and demonstrates how advancing our knowledge of this marginalized group may lead to sustaining mental and physical health-related dialogue, while advancing policy. abstract_id: PUBMED:26059203 Asthma Management Disparities: A Photovoice Investigation with African American Youth. Disparities in asthma management are a burden on African American youth. The objective of this study is to describe and compare the discourses of asthma management disparities (AMDs) in African American adolescents in Seattle to existing youth-related asthma policies in Washington State. Adolescents participated in a three-session photovoice project and presented their phototexts to the Washington State asthma planning committee. Critical discourse analysis methodology was used to analyze adolescent phototexts and the State asthma plan. We found that the State plan did not address AMD in African American adolescents. Adolescents discussed more topics on AMD than the State plan presented, and they introduced new topics concerning residential mobility, poor nutrition, inadequate athletic opportunities, and schools with stairs. Current health policy may be constraining effective responses to asthma disparities in youth. School nursing leadership can use photovoice to advance youth voice in transforming structural inequities in urban school environments. abstract_id: PUBMED:25960944 A mixed methods study of health and social disparities among substance-using African American/Black men who have sex with men. African American/Black men who have sex with men (MSM) in the U.S. experience health and social disparities at greater rates than MSM of other races/ethnicities, including HIV infection and substance use. This mixed methods paper presents: 1) a quantitative examination of health and social disparities among a sample of substance-using African American/Black MSM (N=108), compared to Caucasian/White MSM (N=250), and 2) in-depth qualitative data from a subsample of African American/Black MSM (N=21) in order to contextualize the quantitative data. 
Findings indicate that compared to Caucasian/White MSM, African American/Black MSM experienced a wide range of health and social disparities including: substance use and dependence; buying, trading or selling sex; educational attainment; employment; homelessness; identifying as gay; HIV status; arrest history; social support; and satisfaction with one's living situation. Qualitative data suggests that structural interventions that address homophobia and the social environment would be likely to mitigate many of the health and social disparities experienced by African American/Black MSM. Answer: The use of e-health tools among African American internet users has the potential to address longstanding health disparities. Research indicates that web-based health information can be of significant value to the African American population by reducing communication inequalities and empowering minority groups (PUBMED:25536065). E-health behaviors among African Americans, such as searching for health information online, conducting interactive health-related activities, and tracking health information, are influenced by various factors including personal and family health concerns, experiences, and sociodemographic predictors (PUBMED:25536065). For instance, searching for health information online is more common among those who have been helped or know someone helped by online information, but less common among those with lower income (PUBMED:25536065). Interactive health activities are associated with higher education levels, having a family member with chronic conditions, experiencing a recent medical crisis, and being helped by online information (PUBMED:25536065). E-tracking behaviors are stronger among those with health insurance, who have been helped by online information, and who use social media (PUBMED:25536065). However, disparities persist in health-related internet use, with education being a significant predictor; African American men with some college education or more are more likely to engage in health-related internet use (PUBMED:24650621). This suggests that those with less education may require additional support to effectively engage with e-health tools. Community-based participatory research (CBPR) has been identified as an effective means to engage African American women in research to achieve health equity, but focused engagement of this group is still needed (PUBMED:28387794). Similarly, addressing health disparities through African American health ministry committees and other culturally appropriate interventions can be promising (PUBMED:21213192). In conclusion, while e-health tools offer a promising avenue to address health disparities among African American internet users, there are still challenges to overcome. These include ensuring equitable access to technology, providing education and support to facilitate effective use of e-health resources, and developing targeted interventions that consider the unique needs and preferences of the African American community (PUBMED:25536065; PUBMED:24650621; PUBMED:28387794; PUBMED:21213192).
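The odds ratios quoted in the answer above come from multivariable logistic regression models fitted in the cited surveys. As a purely illustrative sketch of how such odds ratios are derived, the Python snippet below fits a logistic regression with statsmodels and exponentiates the coefficients; the respondent data, predictor names, and outcome variable are hypothetical and are not taken from the Pew survey or any other cited study.

import numpy as np
import statsmodels.api as sm

# Hypothetical respondents: columns = college education, health insurance, social media use
X = np.array([
    [1, 1, 1], [0, 1, 0], [1, 0, 1], [0, 0, 0], [1, 0, 0],
    [0, 1, 1], [1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0],
])
y = np.array([1, 0, 1, 0, 1, 1, 0, 0, 0, 1])  # 1 = tracked health information online

X = sm.add_constant(X)                    # add an intercept column
result = sm.Logit(y, X).fit(disp=False)   # maximum-likelihood logistic regression
odds_ratios = np.exp(result.params)       # exponentiated coefficients are odds ratios
or_ci = np.exp(result.conf_int())         # 95% confidence intervals on the odds-ratio scale
print(odds_ratios)
print(or_ci)

The exponentiation step is what turns a fitted log-odds coefficient into a reported odds ratio; for example, a coefficient of about 1.64 corresponds to an odds ratio of roughly 5.17, the magnitude reported for being helped by online information in PUBMED:25536065.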
Instruction: Application of intravenous contrast in PET/CT: does it really introduce significant attenuation correction error? Abstracts: abstract_id: PUBMED:15695788 Application of intravenous contrast in PET/CT: does it really introduce significant attenuation correction error? Unlabelled: The current perception of using contrast-enhanced CT (CECT) for attenuation correction (AC) is that of caution, as it might lead to erroneously elevated (18)F-FDG uptake on the PET scan. This study evaluates in vivo whether an intravenous iodinated contrast agent produces a significant AC artifact in the level of standardized uptake value (SUV) changes in PET/CT. Methods: Fifty-four patients referred for whole-body (WB) PET/CT scans were enrolled and subdivided into 2 groups. In part I, 26 patients had a single WB PET scan that was corrected for attenuation using noncontrast and intravenous CECT obtained before and after the emission data, respectively. The final PET images were compared for any visual and SUV maximum (SUV(max)) measurement difference. This allowed analysis of the compatibility of the scaling processes between the 2 different CTs and the PET. The SUV(max) values were obtained from ascending aorta, upper lung, femoral head, iliopsoas muscle, spleen, liver, and the site of pathology (total, 193 regions). Part II addressed whether intravenous contrast also influenced the PET emission data. For that purpose, the remaining 28 patients underwent a limited plain CT scan from lung base to lower liver edge, followed by a 1-bed PET scan of the same region and then a WB intravenous contrast CT scan in tandem with a WB PET scan. SUV(max) values were obtained at the lung base, liver, spleen, T11 or T12 vertebra, and paraspinal muscle (total, 135 regions). The data obtained from pre- and post-intravenous contrast PET scans were analyzed as in part I. Results: There was no statistically significant elevation of the SUV level in the measured anatomic sites as a whole (part I: mean SUV(max) difference = 0.06, P > 0.05; Part II: mean SUV(max) difference = -0.02, P > 0.05). However, statistically significant results as a group (mean SUV(max) difference = 0.26, P < 0.05)--albeit considered to be clinically insignificant--were observed for areas of pathology in the part I study. No abnormal focal increased (18)F-FDG activity was detected as a result of the intravenous contrast in both parts of this examination. Conclusion: No statistically or clinically significant spuriously elevated SUV level that might potentially interfere with the diagnostic value of PET/CT was identified as a result of the application of intravenous iodinated contrast. abstract_id: PUBMED:30417316 A deep learning approach for 18F-FDG PET attenuation correction. Background: To develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron-emission tomography (PET) image attenuation correction without anatomical imaging. A PET attenuation correction pipeline was developed utilizing deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected 18F-fluorodeoxyglucose (18F-FDG) PET images. A deep convolutional encoder-decoder network was trained to identify tissue contrast in volumetric uncorrected PET images co-registered to CT data. A set of 100 retrospective 3D FDG PET head images was used to train the model. 
The model was evaluated in another 28 patients by comparing the generated pseudo-CT to the acquired CT using Dice coefficient and mean absolute error (MAE) and finally by comparing reconstructed PET images using the pseudo-CT and acquired CT for attenuation correction. Paired-sample t tests were used for statistical analysis to compare PET reconstruction error using deepAC with CT-based attenuation correction. Results: deepAC produced pseudo-CTs with Dice coefficients of 0.80 ± 0.02 for air, 0.94 ± 0.01 for soft tissue, and 0.75 ± 0.03 for bone and MAE of 111 ± 16 HU relative to the PET/CT dataset. deepAC provides quantitatively accurate 18F-FDG PET results with average errors of less than 1% in most brain regions. Conclusions: We have developed an automated approach (deepAC) that allows generation of a continuously valued pseudo-CT from a single 18F-FDG non-attenuation-corrected (NAC) PET image and evaluated it in PET/CT brain imaging. abstract_id: PUBMED:33226495 Improved PET/MRI attenuation correction in the pelvic region using a statistical decomposition method on T2-weighted images. Background: Attenuation correction of PET/MRI is a remaining problem for whole-body PET/MRI. The statistical decomposition algorithm (SDA) is a probabilistic atlas-based method that calculates synthetic CTs from T2-weighted MRI scans. In this study, we evaluated the application of SDA for attenuation correction of PET images in the pelvic region. Materials And Method: Twelve patients were retrospectively selected from an ongoing prostate cancer research study. The patients had same-day scans of [11C]acetate PET/MRI and CT. The CT images were non-rigidly registered to the PET/MRI geometry, and PET images were reconstructed with attenuation correction employing CT, SDA-generated CT, and the built-in Dixon sequence-based method of the scanner. The PET images reconstructed using CT-based attenuation correction were used as ground truth. Results: The mean whole-image PET uptake error was reduced from -5.4% for Dixon-PET to -0.9% for SDA-PET. The prostate standardized uptake value (SUV) quantification error was significantly reduced from -5.6% for Dixon-PET to -2.3% for SDA-PET. Conclusion: Attenuation correction with SDA improves quantification of PET/MR images in the pelvic region compared to the Dixon-based method. abstract_id: PUBMED:37695384 A review of PET attenuation correction methods for PET-MR. Despite it being thirteen years since the installation of the first PET-MR system, the scanners constitute a very small proportion of the total hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community into a continuous effort to develop a robust and accurate alternative. These can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient for each tissue. 
Emission-based attenuation correction methods aim in utilising the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image given an MR image of a new patient, by using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that could predict the required image given the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to the more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features. This up-to-date review goes through the literature of attenuation correction approaches in PET-MR after categorising them. The various approaches in each category are described and discussed. After exploring each category separately, a general overview is given of the current status and potential future approaches along with a comparison of the four outlined categories. abstract_id: PUBMED:33892276 Comparison of pre- and post-contrast-enhanced attenuation correction using a CAIPI-accelerated T1-weighted Dixon 3D-VIBE sequence in 68Ga-DOTATOC PET/MRI. Objectives: To investigate the influence of contrast agent administration on attenuation correction (AC) based on a CAIPIRINHA (CAIPI)-accelerated T1-weighted Dixon 3D-VIBE sequence in 68Ga-DOTATOC PET/MRI. Material And Methods: Fifty-one patients with neuroendocrine tumors underwent whole-body 68Ga-DOTATOC PET/MRI for tumor staging. Two PET reconstructions were performed using AC-maps that were created using a high-resolution CAIPI-accelerated Dixon-VIBE sequence with an additional bone atlas and truncation correction using the HUGE (B0 homogenization using gradient enhancement) method before and after application of Gadolinium (Gd)-based contrast agent. Standardized uptake values (SUVs) of 21 volumes of interest (VOIs) were compared between in both PET data sets per patient. A student's t-test for paired samples was performed to test for potential differences between both AC-maps and both reconstructed PET data sets. Bonferroni correction was performed to prevent α-error accumulation, p &lt; 0.0024 was considered to indicate statistical significance. Results: Significant quantitative differences between SUVmax were found in the perirenal fat (19.65 ± 48.03 %, p &lt; 0.0001), in the axillary fat (17.46 ± 63.67 %, p &lt; 0.0001) and in the dorsal subcutaneous fat on level of lumbar vertebral body L4 (10.26 ± 25.29 %, p &lt; 0.0001). Significant differences were also evident in the lungs apical (5.80 ± 10.53 %, p &lt; 0.0001), dorsal at the level of the pulmonary trunk (15.04 ± 19.09 %, p &lt; 0.0001) and dorsal in the basal lung (51.27 ± 147.61 %, p &lt; 0.0001). Conclusion: The administration of (Gd)-contrast agents in this study has shown a considerable influence on the AC-maps in PET/MRI and, consequently impacted quantification in the reconstructed PET data. Therefore, dedicated PET/MRI staging protocols have to be adjusted so that AC-map acquisition is performed prior to contrast agent administration. abstract_id: PUBMED:37692851 Pelvic PET/MR attenuation correction in the image space using deep learning. 
Introduction: The five-class Dixon-based PET/MR attenuation correction (AC) model, which adds bone information to the four-class model by registering major bones from a bone atlas, has been shown to be error-prone. In this study, we introduce a novel method of accounting for bone in pelvic PET/MR AC by directly predicting the errors in the PET image space caused by the lack of bone in four-class Dixon-based attenuation correction. Methods: A convolutional neural network was trained to predict the four-class AC error map relative to CT-based attenuation correction. Dixon MR images and the four-class attenuation correction µ-map were used as input to the models. CT and PET/MR examinations for 22 patients ([18F]FDG) were used for training and validation, and 17 patients were used for testing (6 [18F]PSMA-1007 and 11 [68Ga]Ga-PSMA-11). A quantitative analysis of PSMA uptake using voxel- and lesion-based error metrics was used to assess performance. Results: In the voxel-based analysis, the proposed model reduced the median root mean squared percentage error from 12.1% and 8.6% for the four- and five-class Dixon-based AC methods, respectively, to 6.2%. The median absolute percentage error in the maximum standardized uptake value (SUVmax) in bone lesions improved from 20.0% and 7.0% for four- and five-class Dixon-based AC methods to 3.8%. Conclusion: The proposed method reduces the voxel-based error and SUVmax errors in bone lesions when compared to the four- and five-class Dixon-based AC models. abstract_id: PUBMED:23682307 An MRI-based Attenuation Correction Method for Combined PET/MRI Applications. We are developing MRI-based attenuation correction methods for PET images. PET has high sensitivity but relatively low resolution and little anatomic details. MRI can provide excellent anatomical structures with high resolution and high soft tissue contrast. MRI can be used to delineate tumor boundaries and to provide an anatomic reference for PET, thereby improving quantitation of PET data. Combined PET/MRI can offer metabolic, functional and anatomic information and thus can provide a powerful tool to study the mechanism of a variety of diseases. Accurate attenuation correction represents an essential component for the reconstruction of artifact-free, quantitative PET images. Unfortunately, the present design of hybrid PET/MRI does not offer measured attenuation correction using a transmission scan. This problem may be solved by deriving attenuation maps from corresponding anatomic MR images. Our approach combines image registration, classification, and attenuation correction in a single scheme. MR images and the preliminary reconstruction of PET data are first registered using our automatic registration method. MRI images are then classified into different tissue types using our multiscale fuzzy C-mean classification method. The voxels of classified tissue types are assigned theoretical tissue-dependent attenuation coefficients to generate attenuation correction factors. Corrected PET emission data are then reconstructed using a three-dimensional filtered back projection method and an order subset expectation maximization method. Results from simulated images and phantom data demonstrated that our attenuation correction method can improve PET data quantitation and it can be particularly useful for combined PET/MRI applications. abstract_id: PUBMED:26859397 Importance of Attenuation Correction (AC) for Small Animal PET Imaging. 
Unlabelled: The purpose of this study was to investigate whether a correction for annihilation photon attenuation in small objects such as mice is necessary. The attenuation recovery for specific organs and subcutaneous tumors was investigated. A comparison between different attenuation correction methods was performed. Methods: Ten NMRI nude mice with subcutaneous implantation of human breast cancer cells (MCF-7) were scanned consecutively in small animal PET and CT scanners (MicroPET(TM) Focus 120 and ImTek's MicroCAT(TM) II). CT-based AC, PET-based AC and uniform AC methods were compared. Results: The activity concentration in the same organ with and without AC revealed an overall attenuation recovery of 9-21% for MAP reconstructed images, i.e., SUV without AC could underestimate the true activity at this level. For subcutaneous tumors, the attenuation was 13 ± 4% (9-17%), for kidneys 20 ± 1% (19-21%), and for bladder 18 ± 3% (15-21%). The FBP reconstructed images showed almost the same attenuation levels as the MAP reconstructed images for all organs. Conclusions: The annihilation photons are suffering attenuation even in small subjects. Both PET-based and CT-based are adequate as AC methods. The amplitude of the AC recovery could be overestimated using the uniform map. Therefore, application of a global attenuation factor on PET data might not be accurate for attenuation correction. abstract_id: PUBMED:35623332 Evaluation of applying space-variant resolution modeling to attenuation correction in PET. Attenuation correction aims to recover the underestimated tracer uptake and improve the image contrast recovery in positron emission tomography (PET). However, traditional ray-tracing-based projection of attenuation maps is inaccurate as some physical effects are not considered, such as finite crystal size, inter-crystal penetration and inter-crystal scatter. In this study, we evaluated the effects of applying resolution modeling (RM) to attenuation correction by implementing space-variant RM to complement physical effects which are usually omitted in the traditional projection model. We verified this method on a brain PET scanner developed by our group, in both Monte Carlo simulation and real-world data, in comparison with space-invariant Gaussian RM, average-depth-of-interaction, and multi-ray tracing methods. The results indicate that the space-variant RM is superior in terms of artifacts reduction and contrast recovery. abstract_id: PUBMED:28084612 An experimental phantom study of the effect of gadolinium-based MR contrast agents on PET attenuation coefficients and PET quantification in PET-MR imaging: application to cardiac studies. Background: Simultaneous cardiac perfusion studies are an increasing trend in PET-MR imaging. During dynamic PET imaging, the introduction of gadolinium-based MR contrast agents (GBCA) at high concentrations during a dual injection of GBCA and PET radiotracer may cause increased attenuation effects of the PET signal, and thus errors in quantification of PET images. We thus aimed to calculate the change in linear attenuation coefficient (LAC) of a mixture of PET radiotracer and increasing concentrations of GBCA in solution and furthermore, to investigate if this change in LAC produced a measurable effect on the image-based PET activity concentration when attenuation corrected by three different AC strategies. 
Findings: We performed simultaneous PET-MR imaging of a phantom in a static scenario using a fixed activity of 40 MBq [18 F]-NaF, water, and an increasing GBCA concentration from 0 to 66 mM (based on an assumed maximum possible concentration of GBCA in the left ventricle in a clinical study). This simulated a range of clinical concentrations of GBCA. We investigated two methods to calculate the LAC of the solution mixture at 511 keV: (1) a mathematical mixture rule and (2) CT imaging of each concentration step and subsequent conversion to LAC at 511 keV. This comparison showed that the ranges of LAC produced by both methods are equivalent with an increase in LAC of the mixed solution of approximately 2% over the range of 0-66 mM. We then employed three different attenuation correction methods to the PET data: (1) each PET scan at a specific millimolar concentration of GBCA corrected by its corresponding CT scan, (2) each PET scan corrected by a CT scan with no GBCA present (i.e., at 0 mM GBCA), and (3) a manually generated attenuation map, whereby all CT voxels in the phantom at 0 mM were replaced by LAC = 0.1 cm-1. All attenuation correction methods (1-3) were accurate to the true measured activity concentration within 5%, and there were no trends in image-based activity concentrations upon increasing the GBCA concentration of the solution. Conclusion: The presence of high GBCA concentration (representing a worst-case scenario in dynamic cardiac studies) in solution with PET radiotracer produces a minimal effect on attenuation-corrected PET quantification. Answer: The application of intravenous contrast in PET/CT has been a subject of investigation to determine whether it introduces significant attenuation correction (AC) errors. According to a study by Cohade et al. (PUBMED:15695788), the use of intravenous iodinated contrast agent did not produce statistically or clinically significant spuriously elevated standardized uptake value (SUV) levels that might interfere with the diagnostic value of PET/CT. In their study, 54 patients underwent whole-body PET/CT scans, and the SUV(max) values were obtained from various anatomical sites and compared between noncontrast and intravenous contrast-enhanced CT (CECT). The results showed no statistically significant elevation of the SUV level in the measured anatomic sites as a whole. However, there were statistically significant results observed for areas of pathology in part I of the study, but these were considered to be clinically insignificant. No abnormal focal increased (18)F-FDG activity was detected as a result of the intravenous contrast in both parts of the examination. In contrast, another study by Eiber et al. (PUBMED:33892276) found that the administration of gadolinium-based contrast agents in PET/MRI showed a considerable influence on the AC-maps and consequently impacted quantification in the reconstructed PET data. They suggested that PET/MRI staging protocols should be adjusted so that AC-map acquisition is performed prior to contrast agent administration to avoid these issues. Overall, while the study by Cohade et al. suggests that intravenous contrast does not introduce significant AC errors in PET/CT, the study by Eiber et al. indicates that contrast agents can have a considerable impact on AC in PET/MRI. It is important to note that these findings may vary depending on the type of contrast agent used, the imaging modality (PET/CT vs. PET/MRI), and the specific clinical protocols employed. 
Therefore, the use of intravenous contrast in PET imaging should be weighed by modality: the available PET/CT evidence indicates that iodinated contrast introduces no clinically significant AC error, whereas in PET/MRI the AC map should be acquired before gadolinium-based contrast is administered, and protocols may need to be adjusted accordingly to minimize potential AC errors.
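To make the quantitative comparison concrete: the analyses cited above rest on standardized uptake values (SUV, the tissue activity concentration divided by injected dose per unit body weight) measured under two attenuation-correction settings, with paired-sample t tests used to assess the difference (stated explicitly for the deepAC evaluation in PUBMED:30417316 and implicit in the SUVmax difference analysis of PUBMED:15695788). The following minimal sketch illustrates that workflow; the numeric values and array names are hypothetical placeholders, not data from the cited studies, and NumPy/SciPy are assumed to be available.

```python
import numpy as np
from scipy import stats

def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Standardized uptake value: tissue activity concentration normalized
    by injected dose per gram of body weight (assumes ~1 g/ml tissue)."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Hypothetical SUVmax readings for the same regions reconstructed with
# non-contrast CT-based AC versus contrast-enhanced CT-based AC.
suvmax_noncontrast = np.array([2.1, 3.4, 1.8, 5.2, 2.9])
suvmax_contrast = np.array([2.2, 3.5, 1.8, 5.5, 3.0])

# Paired-sample t test on the per-region differences.
t_stat, p_value = stats.ttest_rel(suvmax_contrast, suvmax_noncontrast)
mean_diff = float(np.mean(suvmax_contrast - suvmax_noncontrast))
print(f"mean SUVmax difference = {mean_diff:.2f}, p = {p_value:.3f}")
```

A mean difference near zero with a non-significant p value, as reported in PUBMED:15695788, is what supports the conclusion that iodinated contrast does not meaningfully bias the attenuation-corrected SUV.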
Instruction: Is rectovaginal endometriosis a progressive disease? Abstracts: abstract_id: PUBMED:15547522 Is rectovaginal endometriosis a progressive disease? Objective: The purpose of this study was to observe the natural history of untreated asymptomatic rectovaginal endometriosis. Study Design: This was a prospective, observational study. Eighty-eight patients with untreated asymptomatic rectovaginal endometriosis were followed for 1 to 9 years. Pain symptoms and clinical and transrectal ultrasonographic findings were evaluated before and every 6 months after diagnosis. Results: Two patients had specific symptoms that were attributable to rectovaginal endometriosis that was associated with an increase in lesion size and underwent surgery. In 4 other patients, the size of the endometriotic lesions increased, but the patients remained symptom free. The estimated cumulative proportion of patients with progression of disease and/or appearance of pain symptoms that were attributable to rectovaginal endometriosis after 6 years of follow up was 9.7%. For the remaining patients, the follow-up period was uneventful, with no detectable clinical nor echographic changes of the lesions and with no appearance of new symptoms. Conclusion: Progression of the disease and appearance of specific symptoms rarely occurred in patients with asymptomatic rectovaginal endometriosis. abstract_id: PUBMED:29162069 Martius' flap for recurrent perineal and rectovaginal fistulae in a patient with Crohn's disease, endometriosis and a mullerian anomaly. Background: Rectovaginal fistulas represent 5% of all anorectal fistulae and are a disastrous manifestation of Crohn's disease that negatively affects patients' social and sexual quality of life. Treatment remains challenging for colorectal surgeons, and the recurrence rate remains high despite the numerous available options. Case Presentation: We describe a 31-year-old female patient with a Crohn's disease-related recurrent perineo-vaginal and recto-vaginal fistulae and a concomitant mullerian anomaly. She complained of severe dyspareunia associated with penetration difficulties. The patient's medical history was also significant for a previous abdominal laparoscopic surgery for endometriosis for the removal of macroscopic nodules and a septate uterus with cervical duplication and a longitudinal vaginal septum. The patient was successfully treated using a Martius' flap. The postoperative outcome was uneventful, and no recurrence of the fistula occurred at the last follow-up, eight months from the closure of the ileostomy. Conclusion: Martius' flap was first described in 1928, and it is considered a good option in cases of rectovaginal fistulas in patients with Crohn's disease. The patient should be referred to a colorectal centre with expertise in this disease to increase the surgical success rate. abstract_id: PUBMED:34365705 Interposition of a biological mesh may not affect the rate of rectovaginal fistula after excision of large rectovaginal endometriotic nodules: a pilot study of 209 patients. Aim: The aim of this work was to assess whether placement of a biological mesh (Permacol® ) between the vaginal and rectal sutures reduces the rate of rectovaginal fistula in patients with deep rectovaginal endometriosis. Method: We report a retrospective, comparative study enrolling patients with vaginal infiltration of more than 3 cm in diameter and rectal involvement in two centres. 
They benefited from complete excision of rectovaginal endometriotic nodules with or without a biological mesh placed between the vaginal and rectal sutures. The rate of rectovaginal fistula was compared between the two groups. Results: Two hundred and nine patients were enrolled: 42 patients underwent interposition of biological mesh (cases) and 167 did not (controls). Ninety-two per cent of cases and 86.2% of controls had rectal infiltration more than 3 cm in diameter. Cases underwent rectal disc excision more frequently (64.3% vs. 49.1%) and had a smaller distance between the rectal staple line and the anal verge (4.4 ± 1.4 cm vs. 6 ± 2.9 cm). Rectovaginal fistulas occurred in 4 cases (9.5%) and 12 controls (7.2%). Logistic regression analyses revealed no difference in the rate of rectovaginal fistula following the use of mesh (adjusted OR 1.6, 95% CI 0.3-9.5). A distance of less than 7 cm between the rectal staple line and the anal verge was found to be an independent risk factor for the development of rectovaginal fistula (adjusted OR 15.1, 95% CI 1.7-132). Conclusion: Our results suggest that the placement of a biological mesh between the vagina and rectal sutures may not affect the rate of formation of postoperative rectovaginal fistula following excision of deep infiltrating rectovaginal endometriosis. abstract_id: PUBMED:9668154 Laparoscopic treatment of type IV rectovaginal fistula. Fistulas between the anorectum and vagina may arise from several causes. Treatment depends on their etiology and location, as well as the surgeon's experience. Operative laparoscopy was successful in two women with type IV (mid)rectovaginal fistula in whom previous surgical attempts failed. Our experience suggests that mid and high rectovaginal fistulas can be effectively treated by laparoscopy in the hands of experienced endoscopic surgeons. abstract_id: PUBMED:28191113 The use of intra-operative saline sonovaginography to define the rectovaginal septum in women with suspected rectovaginal endometriosis: a pilot study. Objectives: The aim of this study was to perform saline sonovaginography (SVG) in women with suspected rectovaginal endometriosis (RVE) in order to establish the thickness of the rectovaginal septum (RVS) in this population and to predict the presence or absence of RVE. Methods: Prospective observational pilot study. Women undergoing laparoscopy for possible endometriosis on the basis of history or clinical examination were offered to participate in the study. All women underwent saline SVG during general anesthesia just prior to their laparoscopy. RVS nodules were visualised as hypoechoic lesions of various shapes. The sonologist predicted whether or not a nodule was present in the retrocervical area or in the RVS. The thickness of the posterior vaginal wall ± RVS was then taken at three points in the mid-sagittal plane: at the posterior fornix (retrocervical area), at the middle third of the vagina (upper RVS) and just above the perineal body (lower RVS). The diagnosis of RVE was established using the gold standards of laparoscopy and histological confirmation. The RVS thickness was then compared between women with RVE and the absence of RVE. Results: Twenty-three women were enrolled in the study. Mean age was 38 years (33-44 years). A history of endometriosis was present in 72.7% (8/11). RVE was confirmed in 17.4% (4/23). Visualisation of a hypoechoic nodule at saline SVG demonstrated sensitivity and specificity of 75% and 95%, respectively. 
All rectovaginal nodules were located in the retrocervical region. Mean diameter (SD) of RVE nodules was 27.3 (± 9.4) mm. Mean thickness of vaginal wall ± RVS at the posterior fornix, at the middle third of the vagina and just above the perineal body was 5.1, 1.4 and 4.0 mm, respectively. These measurements were not significantly different in the presence of a rectovaginal nodule. Conclusions: Using saline SVG, we have established the mean RVS thickness in a small group of women with suspected RVE. Although the numbers are small, there was no correlation between RVS thickness and presence of RVE. The visualisation of hypoechoic lesions at saline SVG seems to be the best ultrasonographic predictor for RVE. SVG is a valuable pre-operative tool for the assessment of RVS and for the prediction of RVE, which allows for the mapping and planning of advanced endometriosis surgery. abstract_id: PUBMED:20988950 Rectovaginal septum endometrioma. N/A abstract_id: PUBMED:23870030 Value of diagnostic procedures in rectovaginal endometriosis. Objective: Rectovaginal endometriosis has the potential to infiltrate into the rectal wall. The recognition of infiltration prior to surgery is of utmost importance since only infiltrative disease should be treated by partial or complete rectal resection. This study compares different imaging procedures in rectovaginal endometriosis cases in an everyday clinical setting. Methods: Seventy nine consecutive women diagnosed with rectovaginal endometriosis were included in this prospective study. Preoperatively, all women had a rectovaginal gynaecological examination and transvaginal sonography. Furthermore, MRI or rectal endosonography imaging procedures together with a rectosigmoidoscopy and estimation of a serum Ca125 were undertaken. Sensitivity and specificity of all diagnostic tools were compared with the intraoperative findings. Results: The procedure with the highest accuracy was bimanual rectovaginal gynaecological examination (sensitivity: 0.92/specificity: 0.32). Rectal endosonography obtained a sensitivity of 0.44 and a specificity of 0.77. All other diagnostic procedures such as Ca125 (sensitivity: 0.42/specificity: 0.81), MRI (sensitivity: 0.41/specificity: 0.83), transvaginal sonography (sensitivity: 0.2/ specificity: 0.79) and rectosigmoidoscopy (sensitivity: 0.03/specificity: 0.92) were only of limited value. Conclusion: The diagnostic method with the highest sensitivity to detect bowel infiltration in an everyday clinical setting is the gynaecological examination. It is followed by rectal endosonography. However, none of the currently available preoperative diagnostic tools can predict infiltrative growth of rectovaginal endometriosis with any certainty. Hence, infiltrative growth still needs to be verified by operative assessment. abstract_id: PUBMED:30254864 The benefit of adenomyomectomy on fertility outcomes in women with rectovaginal endometriosis with coexisting adenomyosis. Study Objective: To evaluate the effect of removal of coexisting adenomyosis on fertility outcomes in women with rectovaginal endometriosis. Design: A retrospective cohort study. Setting: A general hospital. Patients: A total of 190 women who underwent laparoscopic nodule excision surgery for rectovaginal endometriosis between April 2007 and December 2012. Interventions: Surgical excision of the rectovaginal endometriosis and coexisting uterine adenomyosis. Statistical analysis for fertility outcomes. 
Measurement And Main Results: A total of 119 women desired postoperative pregnancy. Coexisting adenomyosis was found in 21% of the women. The overall clinical pregnancy rate was 41.2%. The only determining factor associated with a successful pregnancy was "age at surgery". Clinical pregnancy rates with or without adenomyosis were 36.0% and 42.6%, respectively. We found no significant difference in clinical pregnancy rates between the groups. Conclusion: There is a possibility that surgical removal of coexisting adenomyosis positively effects fertility outcomes in women with rectovaginal endometriosis. However, it is also important to note that the age at surgery was a critical factor for successful pregnancy. abstract_id: PUBMED:21691574 Mixed adenocarcinoma of the rectovaginal septum associated with endometriosis and endometrial carcinoma: a case report. Purpose: To present a case of primary mixed (clear cell and endometrioid type) adenocarcinoma of the rectovaginal septum, probably arising from endometriosis and associated with a highly differentiated, early-stage endometrioid endometrial carcinoma. The case was managed by a minimally invasive approach and postoperative adjuvant chemotherapy. Results: The patient underwent clinical/instrumental follow-up and a second-look laparoscopy after the primary surgery as well as adjuvant chemotherapy. No evidence of disease could be observed after the treatment. Conclusion: Surgery with postoperative chemotherapy can be recommended for the treatment of mixed adenocarcinoma of the rectovaginal septum. abstract_id: PUBMED:25336705 Endometriosis of the retrocervical septum is proposed to replace the anatomically incorrect term endometriosis of the rectovaginal septum. We propose that the term retrocervical septum be added to the medical lexicon to designate the anatomic location of endometriosis of the septum that separates the vagina and posterior vaginal fornix from the rectovaginal pouch of Douglas. Use of the terms retrocervical septum and endometriosis of the retrocervical septum would correct the century-long misuse of the anatomically incorrect term, endometriosis of the rectovaginal septum. Answer: Rectovaginal endometriosis may not always be a progressive disease. A prospective observational study followed 88 patients with untreated asymptomatic rectovaginal endometriosis for 1 to 9 years. The study found that progression of the disease and the appearance of specific symptoms rarely occurred in these patients. Only two patients developed specific symptoms attributable to an increase in lesion size and underwent surgery. Four other patients experienced an increase in the size of the endometriotic lesions but remained symptom-free. After 6 years of follow-up, the estimated cumulative proportion of patients with progression of disease and/or appearance of pain symptoms attributable to rectovaginal endometriosis was 9.7%. For the majority of patients, the follow-up period was uneventful, with no detectable clinical or echographic changes in the lesions and no new symptoms (PUBMED:15547522). Therefore, based on this study, it appears that rectovaginal endometriosis does not consistently progress or lead to the development of symptoms in all patients, particularly those who are asymptomatic. However, it is important to note that this study focused on a specific patient group with asymptomatic rectovaginal endometriosis, and the progression of the disease may vary among individuals with different presentations or severity of the condition.
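Several of the abstracts above (notably PUBMED:28191113 and PUBMED:23870030) summarize diagnostic performance as sensitivity and specificity against the surgical and histological gold standard. The short sketch below shows how those two figures follow from a 2x2 table. The counts are an assumed table that is merely consistent with the 75% sensitivity and 95% specificity reported for saline sonovaginography in 23 women (4 with confirmed RVE); the abstract does not give the raw cell counts.

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table of index-test results
    versus the reference (gold-standard) diagnosis."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Assumed counts consistent with the saline SVG abstract (PUBMED:28191113):
# 4 confirmed RVE cases (3 detected, 1 missed) and 19 women without RVE
# (18 correctly negative, 1 false positive).
sens, spec = diagnostic_performance(tp=3, fp=1, fn=1, tn=18)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.75, 0.95
```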
Instruction: Transcutaneous oxygen measurement in stroke: circulatory disorder of the affected leg? Abstracts: abstract_id: PUBMED:9305275 Transcutaneous oxygen measurement in stroke: circulatory disorder of the affected leg? Objective: To identify variances in the microcirculation of the affected leg of stroke patients and to correlate them with a number of variables that are clinically associated with a possible circulatory disorder ("cold leg"). Design: Survey. Setting: Large regional (tertiary care) rehabilitation center. Patients: From 93 acute, first-ever stroke patients admitted for stroke rehabilitation, 10 individuals were selected. Patients with vascular or cardiopulmonary pathology and severe cognitive or speech impairments were excluded. Main Outcome Measures: A clinical assessment of the following variables was performed: subjective complaints of the affected leg, medication, walking performance, degree of lower-leg edema, trophic pathology, voluntary muscle activity of the dorsal flexors of the affected foot, and the degree of spasticity of the calf muscles. The microcirculation of the affected leg was registered via transcutaneous oxygen measurement (TcPO2). Results: The clinical picture associated with a circulatory disorder ("cold leg") was partially and modestly present in seven patients. The TcPO2 values showed no differences between the paretic and nonparetic lower legs, nor did values change in the course of time after stroke: mean 77.9 mmHg (range 42-124) versus 86.1 (41-124) after 8 weeks (n = 10, p = .17); 76.9 (45-96) versus 73.1 (50-96) after 14 weeks (n = 9, p = .38); and 65.8 (44-88) versus 65.8 (37-78) after 20 weeks (n = 8, p = .48). The clinical symptoms could not be objectified in relation to the microcirculation. Conclusions: In selected stroke patients, no differences were established between microcirculation in both lower legs. TcPO2 measurement does not seem to be a suitable method for clinical research on this topic. abstract_id: PUBMED:3966742 Hemodynamic response to oxygen therapy in chronic obstructive pulmonary disease. At six centers, 203 patients with stabilized hypoxemic chronic obstructive pulmonary disease were evaluated hemodynamically during a continuous or 12-hour oxygen therapy program. Neither oxygen therapy program resulted in correction or near-correction of the baseline hemodynamic abnormalities. The continuous oxygen therapy group did show improvement in pulmonary vascular resistance, pulmonary arterial pressure, and stroke volume index. The improvement in pulmonary vascular resistance was associated with improved cardiac function, as evidenced by an increase in baseline and exercise stroke volume index. The nocturnal oxygen therapy group showed stable hemodynamic variables. For both groups, changes in mean pulmonary artery pressure during the first 6 months were associated with subsequent survival after adjustment for association with the baseline mean pulmonary artery pressure. Continuous oxygen therapy can improve the hemodynamic abnormalities of patients with hypoxic chronic obstructive pulmonary disease. The hemodynamic response to this treatment is predictive of survival. abstract_id: PUBMED:3629855 Potentials of rheovasography in assessing circulatory disorders in the postoperative period in children Clinical investigations have shown possibilities of using rheovasography of the leg for the complex assessment of alterations of the peripheral blood circulation in children after surgical interventions. 
A direct relationship between the decreased stroke volume of the heart and the value of the systolic rheographic index on the leg has been established and great informative significance of leg rheography has been proved for the assessment of peripheral disorders as compared with other vascular zones (forearm, finger). The use of leg rheovasography is recommended for the assessment of peripheral blood circulation and choice of a correcting therapy. abstract_id: PUBMED:7237712 Acute effects of oral pirbuterol on myocardial oxygen metabolism and systemic hemodynamics in chronic congestive heart failure. Pirbuterol hydrochloride, an orally effective beta-adrenergic agonist, improves hemodynamic abnormalities in patients with congestive heart failure, but its effects on myocardial oxygen consumption (MVO2) and coronary blood flow have not been characterized. We studied the effects of 20-30 mg of oral pirbuterol on myocardial metabolic and hemodynamic parameters in 12 patients (six with coronary artery disease) with chronic CHF refractory to standard medical therapy. Pirbuterol induced an increase in cardiac index (1.7 +/- 0.1 to 2.3 +/- 0.2 l/min/m2, p less than 0.05) and a fall in systemic vascular resistance (1884 +/- 118 to 1391 +/- 69 dyn-sec-cm-5, p less than 0.01) 2 hours after administration. Pulmonary capillary wedge pressure fell from arterial and right atrial pressures did not change. Heart rate remained constant. Arterial-coronary sinus oxygen content difference narrowed (from 12.9 +/- 0.4 to 11.1 +/- 0.3 vol%, p less than 0.05), while no significant change occurred in MVO2. Myocardial oxygen extraction ratio and myocardial lactate extraction ratio did not change, and no patient developed angina or electrocardiographic evidence of myocardial ischemia. Patients with coronary artery disease had hemodynamic and myocardial metabolic responses similar to those without coronary artery disease. Pirbuterol effects substantial acute hemodynamic improvement in patients with chronic congestive heart failure without increasing requirements for coronary blood flow or myocardial oxygen delivery and without provoking myocardial ischemia. abstract_id: PUBMED:9550507 Increased oxygen extraction fraction is associated with prior ischemic events in patients with carotid occlusion. Background And Purpose: The purpose of our study was to investigate the relationship between misery perfusion (increased oxygen extraction fraction, OEF) and baseline risk factors in patients with carotid occlusion. Methods: One-hundred seventeen patients with atherosclerotic carotid occlusion were studied prospectively by clinical evaluation, laboratory testing, and positron emission tomography (PET). PET measurements of cerebral blood flow (CBF), cerebral blood volume (CBV), and OEF were made on enrollment in the study. Increased ipsilateral OEF was identified by comparison with 18 normal control subjects. Twenty-five baseline clinical, epidemiological, and arteriographic risk factors were assessed on study entry. Student t tests, chi(2) tests, and Fisher exact tests with Bonferroni correction were used to assess statistical significance (P&lt;.05). Results: Of 117 patients, 44 had increased OEF distal to the occluded carotid and 73 had normal OEFs. Thirty-nine of the 81 patients with prior ipsilateral ischemic symptoms had high OEFs (42%), whereas only 5 of the 31 asymptomatic patients had high OEFs (16%, P&lt;.001). All of the other baseline risk factors were similar between the two groups of patients. 
Conclusions: Investigations of the relationship between hemodynamic factors and stroke risk must take into account the lower frequency of hemodynamic abnormalities in asymptomatic patients. abstract_id: PUBMED:1124667 Oxygen uptake and cardiac output during submaximal and maximal exercise in adult subjects with totally corrected tetralogy of fallot. Ten female and eight male adults with tetralogy of Fallot, the majority totally corrected at adult age, have been studied at rest and during submaximal and maximal exercise on a bicycle ergometer. Oxygen uptake was determined by the Douglas bag technique and cardiac output by the dye-dilution method. Maximal oxygen uptake was reduced about 30-40% from normal. Thus a complete normalization of the aerobic working capacity was not achieved in spite of an intracardiac repair that was considered surgically satisfactory. Cardiac output response to exercise was subnormal, mainly due to small stroke volumes and partly because of low heart rates. A fall in stroke volume of more than 10 ml was found in 8 of the patients during exercise. No correlation was found between stroke volume during maximal excercise, on the one hand, and the presence of a particular residual defect, anatomy of the right ventricular outflow tract prior to operation and the use of a right ventricular outflow patch on the other. However, too few patients were studied to allow any definite conclusions as to the possible influence of these variables. It remains to be shown whether the haemodynamic abnormalities will be less and the aerobic work capacity better if total correction is undertaken at an early age. abstract_id: PUBMED:9173811 A comparative study of the effectiveness of 5% human albumin and 10% hydroxyethyl starch (HAES-steril) for correcting hemodynamics and O2 transport in surgical interventions Forty patients subjected to cavitary operations were examined. A high risk of hemodynamic disorders necessitated invasive monitoring; with this aim in view a catheter was inserted for measuring arterial pressure and the Swan-Ganz catheter for measuring the pressure in the pulmonary artery, in which helped monitor the hemodynamics and oxygen transport in the course of hypervolemic hemodilution. Plasma substitutes (one of two) were selected at random. After catheterization of the left radial artery and insertion of the Swan-Ganz catheter in the pulmonary artery through the internal right jugular vein the patients were infused either 5% human albumin or 10% hydroxyethyl starch in a dose of 125 ml every 5 min. The parameters of hemodynamics and oxygen transport were recorded after 500 ml of the solution was infused and the wedge pressure of 18 mm Hg attained. Both agents appreciably improved the mean arterial pressure, central nervous pressure, and wedge pressure. Cardiac index, left ventricular output, and stroke volume increased in both groups, and the pulmonary vascular resistance decreased. Both agents improved oxygen utilization and appreciably decreased hemoglobin level. The positive effect of hydroxyethyl starch on cardiac index, pulmonary vascular resistance, left ventricular output, and oxygen utilization was more expressed; moreover, a lesser dose of this drug was needed than of 5% human albumin (Behringwerke, Marburg, Germany). Loss of plasma caused by surgical intervention was better compensated for with synthetic colloid solutions, such as 10% HAES-steril (Fresinium, Oberurzel, Germany), provided that plasma protein level was at least 3-4 g/dl. 
The effect of 10% HAES-steril as regards the increase of circulating blood volume is 145%. Due to the hyperoncotic direction of its action it has a positive impact on the hemodynamics and oxygen transport and is needed in lower doses than other colloid solutions. abstract_id: PUBMED:37467924 Systemic Arterial Oxygen Levels Differentiate Pre- and Post-capillary Predominant Hemodynamic Abnormalities During Exercise in Undifferentiated Dyspnea on Exertion. Background: Whether systemic oxygen levels (SaO2) during exercise can provide a window into invasively derived exercise hemodynamic profiles in patients with undifferentiated dyspnea on exertion is unknown. Methods: We performed cardiopulmonary exercise testing with invasive hemodynamic monitoring and arterial blood gas sampling in individuals referred for dyspnea on exertion. Receiver operator analysis was performed to distinguish heart failure with preserved ejection fraction from pulmonary arterial hypertension. Results: Among 253 patients (mean ± SD, age 63 ± 14 years, 55% female, arterial O2 [PaO2] 87 ± 14 mmHg, SaO2 96% ± 4%, resting pulmonary capillary wedge pressure [PCWP] 18 ± 4mmHg, and pulmonary vascular resistance [PVR] 2.7 ± 1.2 Wood units), there was no exercise PCWP threshold, measured up to 49 mmHg, above which hypoxemia was consistently observed. Exercise PaO2 was not correlated with exercise PCWP (rho = 0.04; P = 0.51) but did relate to exercise PVR (rho = -0.46; P &lt; 0.001). Exercise PaO2 and SaO2 levels distinguished left-heart-predominant dysfunction from pulmonary-vascular-predominant dysfunction with an area under the curve of 0.89 and 0.89, respectively. Conclusion: Systemic O2 levels during exercise distinguish relative pre- and post-capillary pulmonary hemodynamic abnormalities in patients with undifferentiated dyspnea. Hypoxemia during upright exercise should not be attributed to isolated elevation in left heart filling pressures and should prompt consideration of pulmonary vascular dysfunction. abstract_id: PUBMED:2400223 Exercise ability after Mustard's operation. Twenty children who were well six to 12 years after undergoing Mustard's operation for transposition of the great arteries were studied. Each child performed a graded maximal treadmill test with measurements of gas exchange and oxygen saturation, and had electrocardiography carried out. Nineteen were also catheterised, and oxygen consumption was measured so that pulmonary and systemic flow could be calculated. Compared with 20 age and size matched controls, seven of the patients had normal exercise tolerance (as judged by a maximal oxygen consumption of greater than 40 ml/kg/min), 10 showed a moderate reduction (30-39 ml/kg/min), and three were more seriously limited. None of the patients with normal exercise tolerance had obstruction of venous return but six of those with mild impairment of exercise ability had partial or complete obstruction of one or both of the vena cavas. More severe limitation was associated with pulmonary vascular disease and fixed ventricular outflow tract obstruction. Formal exercise testing of apparently well children who have undergone Mustard's operation identifies those with haemodynamic abnormalities that may require intervention. abstract_id: PUBMED:16510383 Hyperbaric oxygen in the treatment of patients with cerebral stroke, brain trauma, and neurologic disease. Hyperbaric oxygen (HBO) therapy has been used to treat patients with numerous disorders, including stroke. 
This treatment has been shown to decrease cerebral edema, normalize water content in the brain, decrease the severity of brain infarction, and maintain blood-brain barrier integrity. In addition, HBO therapy attenuates motor deficits, decreases the risks of sequelae, and prevents recurrent cerebral circulatory disorders, thereby leading to improved outcomes and survival. Hyperbaric oxygen also accelerates the regression of atherosclerotic lesions, promotes antioxidant defenses, and suppresses the proliferation of macrophages and foam cells in atherosclerotic lesions. Although no medical treatment is available for patients with cerebral palsy, in some studies, HBO therapy has improved the function of damaged cells, attenuated the effects of hypoxia on the neonatal brain, enhanced gross motor function and fine motor control, and alleviated spasticity. In the treatment of patients with migraine, HBO therapy has been shown to reduce intracranial pressure significantly and abort acute attacks of migraine, reduce migraine headache pain, and prevent cluster headache. In studies that investigated the effects of HBO therapy on the damaged brain, the treatment was found to inhibit neuronal death, arrest the progression of radiation-induced neurologic necrosis, improve blood flow in regions affected by chronic neurologic disease as well as aerobic metabolism in brain injury, and accelerate the resolution of clinical symptoms. Hyperbaric oxygen has also been reported to accelerate neurologic recovery after spinal cord injury by ameliorating mitochondrial dysfunction in the motor cortex and spinal cord, arresting the spread of hemorrhage, reversing hypoxia, and reducing edema. HBO has enhanced wound healing in patients with chronic osteomyelitis. The results of HBO therapy in the treatment of patients with stroke, atherosclerosis, cerebral palsy, intracranial pressure, headache, and brain and spinal cord injury are promising and warrant further investigation. Answer: The study identified in abstract PUBMED:9305275 aimed to determine if there were variances in the microcirculation of the affected leg in stroke patients and to see if these variances correlated with clinical variables associated with a possible circulatory disorder, often referred to as a "cold leg." The study was conducted at a large regional rehabilitation center and involved 10 selected stroke patients who did not have vascular or cardiopulmonary pathology and severe cognitive or speech impairments. The main outcome measures included clinical assessment of subjective complaints of the affected leg, medication, walking performance, degree of lower-leg edema, trophic pathology, voluntary muscle activity of the dorsal flexors of the affected foot, and the degree of spasticity of the calf muscles. The microcirculation of the affected leg was measured using transcutaneous oxygen measurement (TcPO2). The results showed that the clinical picture associated with a circulatory disorder was partially and modestly present in seven patients. However, TcPO2 values did not show differences between the paretic (affected by stroke) and nonparetic lower legs, nor did the values change significantly over time after the stroke. The clinical symptoms could not be objectively linked to the microcirculation based on TcPO2 measurements. The conclusion of the study was that in the selected stroke patients, no differences were established between the microcirculation in both lower legs. 
Therefore, TcPO2 measurement does not appear to be a suitable method for clinical research on circulatory disorders in the affected leg of stroke patients.
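Beyond the paired leg-to-leg TcPO2 comparison, several abstracts in this set reduce to comparing the proportion of patients with a haemodynamic abnormality between two groups; the carotid-occlusion study (PUBMED:9550507), for example, contrasts 39 of 81 symptomatic patients with raised oxygen extraction fraction against 5 of 31 asymptomatic patients. A minimal sketch of that kind of comparison with Fisher's exact test follows, assuming SciPy is available; the printed statistics are whatever the test returns for these counts, while the abstract itself reports P<.001 from its own analysis.

```python
from scipy.stats import fisher_exact

# 2x2 table built from the counts quoted in PUBMED:9550507:
# rows = symptomatic / asymptomatic patients,
# columns = increased OEF / normal OEF.
symptomatic = [39, 81 - 39]
asymptomatic = [5, 31 - 5]

odds_ratio, p_value = fisher_exact([symptomatic, asymptomatic])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```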
Instruction: Could mean platelet volume be a predictive marker for acute myocardial infarction? Abstracts: abstract_id: PUBMED:10833791 Soluble P-selectin - a marker of platelet activation and vessel wall injury: increase of soluble P-selectin in plasma of patients with myocardial infarction, massive atherosclerosis and primary pulmonary hypertension Aim: A comparative analysis of the content of the soluble form of cell adhesion protein P-selectin in the blood plasma of patients with acute myocardial infarction (AMI), massive atherosclerosis (MA) and primary pulmonary hypertension (PPH), investigation of the relationship between plasma content of P-selectin and known markers of platelets and endothelial cells activation, preliminary assessment of the prognostic value of P-selectin determination. Materials And Methods: This study included 16 patients with AMI, 20 patients with MA, 21 patients with PPH and 18 healthy donors. The follow-up was 1-5 years. End-points in the group of patients with AMI were recurrent acute coronary syndrome and coronary artery by-pass operation, in the group with MA--thrombotic complications (acute coronary syndrome, ischemic stroke) and in the group with PPH--death. P-selectin was measured by ELISA and platelet factor 4 (PF4), thromboxane B2 (TXB2), endothelin-I and stable prostacyclin metabolite 6-keto-prostaglandin F1 alpha (6-keto-PGF1 alpha) by means of commercial ELISA kits. Results: Mean level of P-selectin in blood plasma of patients with AMI (1 day) (361 +/- 18 ng/ml), MA (410 +/- 31 ng/ml) and PPH (627 +/- 83 ng/ml) was increased in comparison with the group of healthy donors (269 +/- 12 ng/ml) (everywhere p &lt; 0.001). In AMI, P-selectin was increased on day 1 only, on days 2, 3 and 10-14 of the disease the level of P-selectin was significantly lower than on day 1 and did not differ from the control level in the group of donors. In patients with MA a significant correlation was detected between plasma content of P-selectin and platelet activation marker PF4 (r = 0.606, P = 0.007) and in patients with PPH between the content of P-selectin and another platelet activation marker TXB2 (r = 0.622, p = 0.013). However, no correlation was found in PPH patients between the content of P-selectin and markers of endothelial activation and/or damage (endothelin-1 and 6-keto-PGF1 alpha). Difference in the concentration of P-selectin in patients with or without end-points during the follow-up period was detected in patients with AMI (353 +/- 14 ng/ml and 451 +/- 24 ng/ml, p = 0.009) and PPH (477 +/- 58 ng/ml and 927 +/- 184 ng/ml, p = 0.017) but not with MA (426 +/- 37 ng/ml and 361 +/- 24 ng/ml, p = 0.295). Conclusion: The level of P-selectin in plasma was increased in patients with acute thrombosis (AMI, 1 day) as well as in patients without clinical signs of thrombosis but with a massive injury of the vasculature (MA and PPH). The increase of P-selectin was, presumably, caused by its secretion from activated platelets since its concentration in plasma correlated with platelet concentration but not endothelial activation markers. Preliminary data indicate that blood plasma soluble P-selectin may be considered as a potential prognostic marker in AMI and PPH. abstract_id: PUBMED:6194555 The clinical significance of beta-thromboglobulin and platelet factor-4 in polycythaemic patients. 
Simultaneous assays of platelet factor 4 (PF-4) and beta-thromboglobulin (beta TG) were performed in 192 cases of myeloproliferative syndromes (polycythaemia vera and primary thrombocytosis, as defined by the Polycythaemia Vera Study Group). The results led to the following conclusions: (I) both assays must be combined in order to avoid a poor interpretation due to marker release in vitro; (II) the 'normality' of the values must take the platelet number into account, even in the 'normal' range of this parameter; (III) the sensitivity of the beta TG assay is greater than that of PF-4 when considering the correlation of the marker values with arterial accidents; (IV) the predictive value of an excessive level of beta TG and/or PF-4 is difficult to define, since only 13 of the cases studied had a vascular accident during the 12-month follow-up period, and the levels of the markers in these patients were not statistically different from the levels in those patients not experiencing such accidents. abstract_id: PUBMED:11712446 Platelet factor 4 as a marker of platelet activation in patients with acute myocardial infarction. The aim of the present study was evaluate dynamics of platelet factor 4, as a marker of platelet activation, in patients with acute myocardial infarction according to the disease duration and type of treatment. In the recent years much attention has been paid to the role of platelets in the pathogenesis of ischaemic disease and myocardial infarction. Rupture or splitting of atheroma and increased platelet activation are a direct cause of acute thrombotic process in coronary vessels. During platelet activation alpha granules release proteins, e.g. platelet factor 4 (PF 4). We investigated 29 patients with acute myocardial infarction (MI); the patients were divided into two groups: group A--15 patients treated with heparin and aspirin; group B--14 patients treated with streptokinase, heparin and aspirin. Control group (C) consisted of 21 healthy subjects. PF 4 concentration was determined on the 1st, 3rd, 5th, 8th, 11th day of MI using the immunoenzymatic method. Our results indicate that in the course of myocardial infarction there is a change in the platelet factor 4 and that thrombolytic therapy inhibits platelet activation. abstract_id: PUBMED:1289095 Anticardiolipin antibodies and coronary heart disease. Arterial or venous thrombotic events have been described as complications in patients with positive anticardiolipin antibodies (aCL), affecting various organs including the heart. In order to see whether aCL could be, among others, a predisposing factor for coronary artery occlusions and whether it could serve as a prognostic marker for coronary heart disease, 232 patients enrolled in the European Concerted Action on Thrombosis Angina Pectoris Study were studied. aCL and various other haemostatic parameters were determined at time of admittance in order to see whether a relationship existed between haemostasis at baseline and extent or prognosis of the cardiovascular disease. A follow-up at 12 and 24 months after angiography included information about relapsing coronary or other thrombotic events, treatment and outcome of the disease. aCL were not found to be a marker of either progressive cardiovascular disease or recurrent thrombotic events. 
No correlation was found, either in aCL positive or in aCL negative patients, between high levels of haemostasis activation markers, such as beta-thromboglobulin, platelet factor 4 or fibrinopeptide A and recurrent cardiovascular disease. abstract_id: PUBMED:14524031 The value of platelet function tests Platelet function tests are used to detect patients with abnormal platelet function, which may be inborn or acquired or to detect increased platelet activation which may be accompanied by an increased risk of thrombosis. Platelet function tests are also used to monitor platelet function inhibitors as aspirin, clopidogrel or platelet membrane glycoprotein IIb/IIIa-inhibitors. Incorrect blood sampling is a major source of error in measuring platelet function. Global tests besides platelet count and bleeding time are thrombelastography, the platelet function analyzer (PFA) and possibly in the future the new Impact-system. Specific tests measure platelet spreading and adhesion to defined surfaces. In a series of methods platelets are counted before and after passage of a filter. Some of these tests are partially standardized. The most frequently measured platelet function is aggregation induced by ADP, collagen or other substances as first described by Gustav Born. Some newer methods to perform aggregometry are described. Platelet activation can be detected by measuring spontaneous aggregation as in the PAT-test. Prospective trials with this test have shown that enhanced spontaneous aggregation is a risk factor for new vascular occlusions in diabetics and for myocardial infarctions in healthy individuals. The Wu and Hoak-test and the measurement of released platelet factor 4 or betaglobulin are of limited value. Flow cytometric methods are frequently used to measure platelet activation markers as CD 62 and others. Platelet induced thrombin generation is an interesting function to measure drug effects. None of the presently available platelet function tests is well standardized, so there is much room for improvement. abstract_id: PUBMED:2440682 Central haemodynamic and antiplatelet effects of iloprost--a new prostacyclin analogue--in acute myocardial infarction in man. In 14 patients with acute myocardial infarction, a 24-hour Iloprost infusion was started with a mean delay of 309 +/- 22 minutes from onset of symptoms. Patients were haemodynamically monitored with a pulmonary artery catheter and an arterial cannula. The dose of Iloprost was 1-4 ng kg-1 min-1 and titrated according to blood pressure and systemic vascular resistance. When 2.0-4.0 ng kg-1 min-1 of Iloprost were infused, 5 out of 10 patients required dose reduction due to hypotension, nausea or both. However, in all patients the infusion period was completed as planned. Acute reductions of systolic blood pressure and vascular resistance were seen, whereas stroke volume increased and heart rate remained unchanged. The infusion of Iloprost caused profound inhibition of ADP-induced platelet aggregation but no significant changes in plasma values for platelet-specific proteins or thromboxane B2 were recorded. It is concluded that it was possible to safely administer Iloprost over 24 hours in the early phase of acute myocardial infarction and profound anti-aggregatory effects were observed. These findings should be evaluated in a controlled study. 
abstract_id: PUBMED:20060182 Differential protein biomarker expression and their time-course in patients with a spectrum of stable and unstable coronary syndromes in the Integrated Biomarker and Imaging Study-1 (IBIS-1). Objectives: IBIS-1 was a pilot study undertaken to correlate coronary imaging with circulating biomarker expression in patients with stable angina, unstable angina and acute myocardial infarction. We hypothesized that patients at high risk of future events could be identified in the future by a combination of high risk plaque features by plaque echogenicity and palpography and a set of circulating blood biomarkers. Results And Methods: We assessed the expression of conventional biomarkers and novel marker protein microarray (170 analytes) over 6 months. There were no strong correlations observed between conventional biomarkers and coronary imaging in non-culprit artery. Proteomic microarray was performed in 66 patients. Seventy eight (45%) analytes showed dynamic changes over time. Using hierarchical clustering and principal component analysis two subsets of biomarkers were identified: initial up-regulation and decrease over time (D-dimer, hepatocyte growth factor, CXCL9/MIG, platelet factor 4/CXCL4, CTACK, C-6 Kine, follistatin, and FGF-7) and the opposite increase (PAI-1- anti-apoptotic protein and I-309--chemokine induced on the human endothelium by Lp(a)). Conclusions: Proteomic analysis identifies dynamic patterns in circulating biomarkers in a wide range of patients with coronary artery disease. Further large natural history studies are needed to better define multibiomarker sets for identification of patients at risk of future CV events. abstract_id: PUBMED:6221433 Hemodynamic and platelet response to the bolus intravenous administration of porcine heparin. There is considerable evidence that under some conditions intravenous heparin infusion may cause or at least enhance platelet aggregation in vivo. Reports of heparin-induced vasodilatation and decreases in arterial blood pressure have not been accompanied by simultaneous observations of the platelet response. In this study both the hemodynamic and platelet response to the bolus administration of porcine intestinal mucosa sodium heparin were monitored in 24 cardiac and 12 vascular surgery patients. Mean arterial blood pressure decreased 7.1 +/- 0.8 mmHg as a result of a 247 +/- 34 dyne X sec/cm5 decrease in systemic vascular resistance. Platelet count, platelet volume distribution, and beta-thromboglobulin levels did not change with heparin infusion. These responses did not differ when comparing the 155 unit/kg group and the 400 unit/kg group or the 400 unit/kg groups treated with different commercial preparations. The single patient who did have a decrease in platelet count and a severe rise in beta-thromboglobulin with heparin died intraoperatively of a massive myocardial infarction. Large increases in platelet factor 4 with heparin administration were not associated with platelet release but were dependent on whether or not the patient was treated with preoperative subcutaneous or intravenous heparin. There was no evidence that heparin-induced vasodilatation was mediated by platelet aggregation and release. abstract_id: PUBMED:27445266 Preoperative hemostatic testing and the risk of postoperative bleeding in coronary artery bypass surgery patients. 
Background: We sought to assess predictability of excessive bleeding using thrombelastography (TEG), Multiplate impedance aggregometry, and conventional coagulation tests including fibrinogen in patients undergoing coronary artery bypass graft (CABG) surgery. Methods: A total of 170 patients were enrolled in this prospective observational study. TEG, Multiplate aggregometry, and coagulation tests were sampled on the day before surgery. Excessive bleeding was defined as >1000 mL over 18 hours. Results: Multiplate-adenosine diphosphate (ADP) measurements were significantly lower in patients with excessive bleeding, 85.5 AU ± 32.8 versus 108.5 AU ± 30.0, p = 0.012. Bivariate analysis revealed body mass index, myocardial infarction, and Multiplate-ADP as predictors of bleeding. In multivariable linear regression analysis, Multiplate-ADP remained a significant predictor of bleeding (β: -6.2 [confidence interval: -12.0 to -0.3], p = 0.035). The lowest interval of Multiplate-ADP (<50 AUC) was associated with significantly more bleeding and need for platelet concentrate transfusion. Fibrinogen levels <2.5 g/L were also found to be associated with excess bleeding (p = 0.020). Conclusions: Multiplate impedance aggregometry identified patients at risk for excessive bleeding after CABG. Low fibrinogen levels were associated with increased bleeding. Neither routine TEG parameters nor conventional coagulation tests were correlated with bleeding. abstract_id: PUBMED:6973419 Increased plasma concentrations of platelet factor 4 in coronary artery disease: a measure of in vivo platelet activation and secretion. Previous studies have shown that there is both a significant shortening in platelet survival and a measured hyperactivity to platelet-aggregating agents in patients with documented coronary artery disease compared with control groups. We used a recently described radioimmunoassay for the platelet-secreted protein platelet factor 4 (PF4) to study 162 patients with documented coronary artery disease. There was a significant increase in plasma PF4 concentrations in patients with documented coronary artery disease compared with angiographically normal patients (8.7 vs 16 ng/ml, respectively, n = 121), but as in previous studies of platelet survival, we could not correlate elevated plasma PF4 concentration and the severity or site of the coronary artery disease. In addition, there was no correlation with left ventricular function, serum cholesterol or the type of angina. Patients with confirmed acute myocardial infarction had no significant difference in mean plasma PF4 concentrations compared with similar groups of coronary disease patients who had prolonged chest pain or chronic stable angina. Coronary artery bypass grafting in a subgroup of patients did not affect the mean plasma PF4 concentration during 1 year of follow-up after bypass surgery, but medical therapy for angina with increasing doses of propranolol and nitrates significantly reduced PF4 concentration in another subgroup of patients who were not considered to be candidates for surgical therapy. Answer: The provided abstracts do not directly address the question of whether mean platelet volume (MPV) could be a predictive marker for acute myocardial infarction (AMI).
Instead, they focus on other platelet-related markers such as P-selectin, platelet factor 4 (PF4), beta-thromboglobulin (beta TG), and anticardiolipin antibodies (aCL), as well as various platelet function tests and their potential roles in predicting thrombotic events or assessing platelet activation in the context of coronary artery disease (CAD) and myocardial infarction (MI). For instance, one study found that soluble P-selectin levels were increased in patients with AMI and correlated with platelet activation markers, suggesting its potential as a prognostic marker in AMI (PUBMED:10833791). Another study evaluated the dynamics of PF4 as a marker of platelet activation in patients with AMI and found that thrombolytic therapy inhibits platelet activation (PUBMED:11712446). Additionally, a study on the Integrated Biomarker and Imaging Study-1 (IBIS-1) identified dynamic patterns in circulating biomarkers, including PF4, in patients with a spectrum of stable and unstable coronary syndromes (PUBMED:20060182). While these studies highlight the importance of platelet activation markers in the context of AMI, they do not provide information on MPV as a specific predictive marker for AMI. To determine the potential predictive value of MPV for AMI, one would need to consult studies that specifically investigate the relationship between MPV and the incidence or prognosis of AMI.
Instruction: Do manual therapies help low back pain? Abstracts: abstract_id: PUBMED:27661020 The effectiveness of complementary manual therapies for pregnancy-related back and pelvic pain: A systematic review with meta-analysis. Background: Low back pain and pelvic girdle pain are common in pregnancy and women commonly utilize complementary manual therapies such as massage, spinal manipulation, chiropractic, and osteopathy to manage their symptoms. Objective: The aim of this systematic review was to critically appraise and synthesize the best available evidence regarding the effectiveness of manual therapies for managing pregnancy-related low back and pelvic pain. Methods: Seven databases were searched from their inception until April 2015 for randomized controlled trials. Studies investigating the effectiveness of massage and chiropractic and osteopathic therapies were included. The study population was pregnant women of any age and at any time during the antenatal period. Study selection, data extraction, and assessment of risk of bias were conducted by 2 reviewers independently, using the Cochrane tool. Separate meta-analyses were conducted to compare manual therapies to different control interventions. Results: Out of 348 nonduplicate records, 11 articles reporting on 10 studies on a total of 1198 pregnant women were included in this meta-analysis. The therapeutic interventions predominantly involved massage and osteopathic manipulative therapy. Meta-analyses found positive effects for manual therapy on pain intensity when compared to usual care and relaxation but not when compared to sham interventions. Acceptability did not differ between manual therapy and usual care or sham interventions. Conclusions: There is currently limited evidence to support the use of complementary manual therapies as an option for managing low back and pelvic pain during pregnancy. Considering the lack of effect compared to sham interventions, further high-quality research is needed to determine causal effects, the influence of the therapist on the perceived effectiveness of treatments, and adequate dose-response of complementary manual therapies on low back and pelvic pain outcomes during pregnancy. abstract_id: PUBMED:28750984 A commentary review of the cost effectiveness of manual therapies for neck and low back pain. Background & Purpose: Neck and low back pain (NLBP) are global health problems, which diminish quality of life and consume vast economic resources. Cost effectiveness in healthcare is the minimal amount spent to obtain acceptable outcomes. Studies on manual therapies often fail to identify which manual therapy intervention, alone or in combination with other interventions, is the most cost effective. The purpose of this commentary is to sample the dialogue within the literature on the cost effectiveness of evidence-based manual therapies with a particular focus on the neck and low back regions. Methods: This commentary identifies and presents the available literature on the cost effectiveness of manual therapies for NLBP. Key words searched were neck and low back pain, cost effectiveness, and manual therapy to select evidence-based articles. Eight articles were identified and presented for discussion. Results: The lack of homogeneity in the available literature makes any valid comparison among the various cost-effectiveness studies difficult. Discussion: Potential outcome bias in each study is dependent upon the lens through which it is evaluated.
If evaluated from a societal perspective, the conclusion slants toward "adequate" interventions in an effort to decrease costs rather than toward the most efficacious interventions with the best outcomes. When cost data are assessed according to a healthcare (or individual) perspective, greater value is placed on quality of life, the patient's beliefs, and the "willingness to pay." abstract_id: PUBMED:28246693 Manual therapy in lumbovertebral syndromes. Based upon the briefly presented survey among the members of the Swiss Medical Association for Manual Medicine, low back pain problems are treated by means of manual therapy on average 805 times per year per physician. On average, each case of low back pain is treated 1.4 times by a general practitioner with experience in manual medicine, while specialists, who deal with more complex cases, treat each case on average 4 to 5 times. For functional disorders of the lumbar spine, manual therapy is superior to the physiotherapy approach or to a "placebo group" (information from the general practitioner about low back pain and application of medication). An appropriate indication for manual therapy is important to avoid or reduce the possible risks of the treatment procedure. The physician who performs the manual therapy has to know the limits of the method. Knowledge of the contraindications for manual therapy will also reduce the incidence of possible complications. However, based upon the survey among the Swiss association's members, side effects and complications due to manual therapy of the lumbar spine are extremely rare. abstract_id: PUBMED:34009423 Manual medicine, manual treatment: Principles, mode of action, indications and evidence. Manual medicine is the medical discipline that comprehensively deals with the diagnosis, treatment and prevention of reversible functional disorders of the musculoskeletal system and other related organ systems. The article illustrates the neuroanatomical and neurophysiological basic elements and mechanisms of manual medical diagnostics and treatment. Based on the most recent literature and in consideration of various scientific guidelines, the evidence-based effectiveness of manual medical procedures is presented, in detail for: acute and chronic low back pain, cervicogenic headache, neck and shoulder pain, radicular arm pain, dysfunctional thoracic pain syndromes, diseases of the rotator cuff, carpal tunnel syndrome and plantar fasciitis. Clinical case examples illustrate the clinical approach. The terminology, origin and clinical presence of "osteopathy" are described in detail, and the national and international associations and societies of manual medicine, the German Society for Manual Medicine (DGMM), the European Scientific Society of Manual Medicine (ESSOMM) and the Fédération Internationale de Médecine Manuelle (FIMM), are lexically presented. Finally, contraindications for manual interventions and an outlook on requirements and possibilities of the scientific analysis of pain are presented, as they are postulated in the preamble of the guidelines on specific low back pain of the German Society for Orthopedics and Orthopedic Surgery (DGOOC). abstract_id: PUBMED:24965495 Criterion validity of manual assessment of spinal stiffness. Assessment of spinal stiffness is widely used by manual therapy practitioners as a part of clinical diagnosis and treatment selection.
Although studies have commonly found poor reliability of such procedures, conflicting evidence suggests that assessment of spinal stiffness may help predict response to specific treatments. The current study evaluated the criterion validity of manual assessments of spinal stiffness by comparing them to indentation measurements in patients with low back pain (LBP). As part of a standard examination, an experienced clinician assessed passive accessory spinal stiffness of the L3 vertebra using posterior-to-anterior (PA) force on the spinous process of L3 in 50 subjects (54% female, mean (SD) age = 33.0 (12.8) years, BMI = 27.0 (6.0) kg/m²) with LBP. A criterion measure of spinal stiffness was performed using mechanized indentation by a blinded second examiner. Results indicated that manual assessments were uncorrelated with criterion measures of stiffness (Spearman rho = 0.06, p = 0.67). Similarly, sensitivity and specificity estimates of judgments of hypomobility were low (0.20-0.45) and likelihood ratios were generally not statistically significant. Sensitivity and specificity of judgments of hypermobility were not calculated due to limited prevalence. Additional analysis found that BMI explained 32% of the variance in the criterion measure of stiffness, yet failed to improve the relationship between assessments. Additional studies should investigate whether manual assessment of stiffness relates to other clinical and biomechanical constructs, such as symptom reproduction, angular rotation, quality of motion, or end feel. abstract_id: PUBMED:12400231 Complementary and alternative therapies in occupational health. Part II--Specific therapies. Dossey (2001) says, "The nurse serves as a facilitator and helps assist the patient and his or her significant others to be in the best state for healing to take place. Nurses are in a unique position to be instruments of healing at all times." According to Fitch (1999), "A fundamental goal of nursing is to comfort." Complementary and alternative therapies offer many self-care and comforting remedies that help employees prevent disease and promote healing. Occupational health nurses have the ability to educate employees and offer guidance about CAM therapies; encourage self-care management of minor complaints; and encourage employees, when appropriate, to seek health care. As employees' use of CAM continues to increase, occupational health nurses need to monitor use of CAM therapies among employees. Nurses should inform the employer, case managers, and insurance companies involved about the potential increase in CAM use to promote changes in the health care system and integrate conventional and CAM therapies as needed. Further research related to CAM therapies continues as the health care system warrants safe, effective, and cost effective ways to promote health and prevent or manage illness. abstract_id: PUBMED:28286240 Reconceptualising manual therapy skills in contemporary practice. With conflicting evidence regarding the effectiveness of manual therapy, calls have arisen within some quarters of the physiotherapy profession challenging the continued use of manual skills for assessment and treatment. A reconceptualisation of the importance of manual examination findings is put forward, based upon a contemporary understanding of pain science, rather than considering these skills only in terms of how they should "guide" manual therapy interventions.
The place for manual examination findings within complex, multidimensional presentations is considered using vignettes describing the presentations of five people with low back pain. As part of multidimensional, individualised management, the balance of evidence relating to the effectiveness, mechanisms of action and rationale for manual skills is discussed. It is concluded that if manual examination and therapeutic skills are used in a manner consistent with a contemporary understanding of pain science, multidimensional patient profiles and a person-centred approach, their selective and judicious use still has an important role. abstract_id: PUBMED:38353102 Efficacy of manual therapy for sacroiliac joint pain syndrome: a systematic review and meta-analysis of randomized controlled trials. Introduction: This study examined the efficacy of manual therapy for pain and disability measures in adults with sacroiliac joint pain syndrome (SIJPS). Methods: We searched six databases, including gray literature, on 24 October 2023, for randomized controlled trials (RCTs) examining sacroiliac joint (SIJ) manual therapy outcomes via pain or disability in adults with SIJPS. We evaluated quality via the Physiotherapy Evidence Database scale and certainty via Grading of Recommendations, Assessment, Development, and Evaluation (GRADE). Standardized mean differences (SMDs) in post-treatment pain and disability scores were pooled using random-effects models in meta-regressions. Results: We included 16 RCTs (421 adults; mean age = 37.7 years), with 11 RCTs being meta-analyzed. Compared to non-manual physiotherapy (i.e. exercise ± passive modalities; 10 RCTs) or sham (1 RCT) interventions, SIJ manual therapy did not significantly reduce pain (SMD: -0.88; 95% CI: -1.84 to 0.08; p = 0.0686) yet had a statistically significant moderate effect in reducing disability (SMD: -0.67; 95% CI: -1.32 to -0.03; p = 0.0418). The superiority of individual manual therapies was unclear due to low sample size, wide confidence intervals for effect estimates, and inability to meta-analyze five RCTs with a unique head-to-head design. RCTs were of 'good' (56%) or 'fair' (44%) quality, and heterogeneity was high. Certainty was very low for pain and low for disability outcomes. Conclusion: SIJ manual therapy appears efficacious for improving disability in adults with SIJPS, while its efficacy for pain is uncertain. It is unclear which specific manual therapy techniques may be more efficacious. These findings should be interpreted cautiously until further high-quality RCTs are available examining manual therapy against control groups such as exercise. Registration: PROSPERO (CRD42023394326). abstract_id: PUBMED:32523640 Non-invasive Complementary Therapies in Managing Musculoskeletal Pains and in Preventing Surgery. Background: Musculoskeletal disorders are disabling diseases which affect work performance, thereby affecting the quality of life of individuals. Pharmacological and surgical management are the most recommended treatments. However, non-invasive physical therapies are said to be effective, for which the evidence is limited. Aim/purpose: To study the effect of non-invasive physical interventions in preventing surgery among patients recommended for surgery for musculoskeletal complaints, who attended sports and fitness medicine centres in India. Settings: SPARRC (Sports Performance Assessment Research Rehabilitation Counselling) Institute is a physical therapy centre with 13 branches spread all over India.
This Institute practices a combination of manual therapies to treat musculoskeletal complaints. Research Design: Descriptive cohort study involving the review of case records of the patients enrolled from June 2013 to July 2017, followed by a telephone survey of the patients who had completed treatment. Intervention: A combination of physical therapies, such as myofascial trigger release with icing, infra-red therapy, pulsed electromagnetic field therapy, stretch release, aqua therapy, taping, and acupuncture, was employed to reduce pain and restore function. Main Outcome Measures: Self-reported pain was measured using a visual analogue scale at different stages of therapy: pre-therapy, post-therapy, and post-rehabilitation. Results: In total, 909 patients were studied, of whom 152 (17%) patients completed the treatment protocol. The majority of patients presented with knee and low-back pain. The reduction in pain due to the treatment protocol in terms of mean VAS score from baseline to post-therapy and baseline to post-rehabilitation was statistically significant (p value = .00). Among those contacted post-rehabilitation, 82 patients remained without surgery, and the median surgery-free time was around two years. Conclusion: The study concluded that non-invasive physical therapies may prevent or postpone surgery for musculoskeletal complaints. abstract_id: PUBMED:30959363 Successful manual reduction for ureterosciatic hernia: A case report. Introduction: Sciatic hernias are the least common type of pelvic floor hernias. The purpose of this study was to present a novel technique for manual reduction and to conduct a systematic review of previous reports of sciatic hernias to characterize them and review the outcomes. Presentation Of Case: An 86-year-old female presented with left-sided lumbar pain. She had a past medical history of rheumatoid arthritis and was treated with prednisolone and methotrexate. Her left abdomen and left lumbar area were tender. An unenhanced abdominal computed tomography scan revealed invagination of the left ureter into the left sciatic foramen and a dilated left proximal ureter and renal pelvis. Ultrasonography showed an invaginated left ureter when viewed from the left buttock. She was diagnosed with a sciatic hernia. Ultrasound-guided manual transvaginal reduction was performed. A post-procedure unenhanced abdominal computed tomography scan confirmed reduction of the ureter. After 10 months of follow-up, there was no evidence of recurrence. Discussion: Previous reports of patients with sciatic hernia were identified. Clinical data associated with the hernia, reduction technique and clinical outcomes were collected for 72 patients. Open reduction was performed in 24 patients. A ureteral stent was placed in eight patients when the hernia contained the ureter. Four postoperative complications, including one death, were reported in adults. There were no reports of closed manual reduction. Conclusion: A sciatic hernia in women may be manually reduced without surgery. Further reviews of this rare entity are needed to determine the best management strategy. Answer: The evidence on the effectiveness of manual therapies for low back pain is mixed and somewhat limited. A systematic review with meta-analysis found that manual therapies, such as massage and osteopathic manipulative therapy, had positive effects on pain intensity when compared to usual care and relaxation, but not when compared to sham interventions (PUBMED:27661020).
This suggests that while some patients may experience pain relief from manual therapies, the effect might not be due to the specific therapeutic action of the treatment but could be related to non-specific effects or placebo. Another study indicated that manual therapy is frequently used for low back pain problems and that, for functional disorders of the lumbar spine, manual therapy is superior to physiotherapy or a placebo group (PUBMED:28246693). However, the study also emphasized the importance of appropriate indication and knowledge of contraindications to minimize the risk of complications, which are reported to be extremely rare. A commentary on the cost-effectiveness of manual therapies for neck and low back pain highlighted the difficulty in comparing studies due to a lack of homogeneity and potential outcome bias, depending on the perspective from which the studies are evaluated (PUBMED:28750984). This suggests that while manual therapies may be used, their cost-effectiveness is not well established. Research on the criterion validity of manual assessment of spinal stiffness, which is often used in manual therapy, found that manual assessments were uncorrelated with criterion measures of stiffness, indicating that such assessments may not be reliable (PUBMED:24965495). A reconceptualization of manual therapy skills in contemporary practice suggests that manual examination and therapeutic skills, if used in line with a contemporary understanding of pain science and a multidimensional patient profile, still have an important role in the management of low back pain (PUBMED:28286240). Lastly, a systematic review and meta-analysis on the efficacy of manual therapy for sacroiliac joint pain syndrome found that manual therapy appears efficacious for improving disability but its efficacy for pain is uncertain (PUBMED:38353102). In conclusion, while some studies suggest that manual therapies may provide relief for low back pain and improve disability, the evidence is not unequivocal, and the effectiveness may vary depending on the type of manual therapy, the patient's condition, and the context in which it is used. Further high-quality research is needed to determine the specific effects and to identify which manual therapy techniques may be most efficacious.
Instruction: Does evidence lead to a change in clinical practice? Abstracts: abstract_id: PUBMED:33129744 Midwifery leaders' views on the factors considered crucial to implementing evidence-based practice in clinical areas. Problem: The evidence-to-practice gap continues to persist in healthcare and midwives report limited knowledge and use of effective intervention strategies to support the implementation of new evidence-based practices in clinical settings. Background: Despite ongoing development and dissemination of high-quality research findings, the translation of the latest research evidence by midwives into new evidence-based practices remains sub-optimal. This inefficiency places consumers at risk of obsolete or potentially dangerous healthcare interventions. Aim: To explore midwifery leaders' views on what information and support midwives require to lead practice change initiatives in clinical areas. Methods: The study formed part of a broader Participatory Action Research (PAR) project designed to improve the processes by which midwives implement evidence-based practice change in clinical settings. The study employed a qualitative design and was guided by the methodological underpinnings of Action Research (AR). Findings: One core finding emerged to fulfil the aim and objectives of the study. To lead implementation of evidence-based practices, midwives need practical solutions and a map of the process, packaged into a centralised web-based resource. Discussion: The findings reported in this study provide valuable insight into the specific needs of midwives wanting to improve the uptake and longevity of new evidence-based practices in clinical areas. This includes information specific to evidence implementation, support networks and knowledge of Implementation Science. Conclusion: To lead practice change initiatives, midwives require a web-based resource that standardises the process of evidence implementation, while providing midwives with clear direction and the support needed to confidently champion evidence-based change in clinical areas. abstract_id: PUBMED:32713823 The only constant in radiography is change: A discussion and primer on change in medical imaging to achieve evidence-based practice. Medical imaging is an ever-changing field with significant advancements in techniques and technologies over the years. Although the field is constantly challenged by change, it can be difficult to introduce changes into healthcare settings. In this article we introduce the principles of change management to achieve an evidence-based practice in radiography. abstract_id: PUBMED:27649522 Translating research findings to clinical nursing practice. Aims And Objectives: To describe the importance of, and methods for, successfully conducting and translating research into clinical practice. Background: There is universal acknowledgement that the clinical care provided to individuals should be informed by the best available evidence. Knowledge and evidence derived from robust scholarly methods should drive our clinical practice, decisions and change to improve the way we deliver care. Translating research evidence to clinical practice is essential to safe, transparent, effective and efficient healthcare provision and to meeting the expectations of patients, families and society. Despite its importance, translating research into clinical practice is challenging. There are more nurses on the frontline of health care than any other healthcare profession.
As such, nurse-led research is increasingly recognised as a critical pathway to practical and effective ways of improving patient outcomes. However, there are well-established barriers to the conduct and translation of research evidence into practice. Design: This clinical practice discussion paper interprets the knowledge translation literature for clinicians interested in translating research into practice. Methods: This paper is informed by the scientific literature around knowledge translation, implementation science and clinician behaviour change, and presented from the nurse clinician perspective. We provide practical, evidence-informed suggestions to overcome the barriers and facilitate enablers of knowledge translation. Examples of nurse-led research incorporating the principles of knowledge translation in their study design that have resulted in improvements in patient outcomes are presented in conjunction with supporting evidence. Conclusions: Translation should be considered in research design, including the end users and an evaluation of the research implementation. The success of research implementation in health care is dependent on clinician/consumer behaviour change and it is critical that the implementation strategy includes this. Relevance To Practice: Translating best research evidence can make for a more transparent and sustainable healthcare service, to which nurses are central. abstract_id: PUBMED:17236569 The use of an evidence-based portfolio in the management of change in dental practice. In this paper the author gives his opinion about the problems of getting practices to change systems in order to institute clinical governance. There are many reasons why practices need to change and for this change to be monitored. This paper explains the need for change and the use of the evidence-based portfolio, which is produced by candidates for the Membership of the Faculty of General Dental Practice (UK) [MFGDP(UK)] examination. It can also be produced by individuals who are not taking the MFGDP(UK) examination, in conjunction with the Faculty of General Dental Practice (UK)'s key skills programme. It provides a mechanism for demonstrating change and for assessing the quality of care provided by a general dental practice. The author concludes that the evidence-based portfolio will enable a practitioner to apply clinical governance in a practical way. abstract_id: PUBMED:37341352 Change in orthopaedic surgeon behaviour by implementing evidence-based practice. Introduction: Orthopaedic practice is not always aligned with new evidence, which may result in an evidence-practice gap. Our aim was to present and report the use of a new model for implementation of evidence-based practice, using treatment of distal radius fractures (DRF) as an example. Methods: A new implementation model from the Centre for Evidence-Based Orthopaedics (CEBO) was applied. It comprises four phases: 1) baseline practice is held up against the best available evidence, and barriers to change are assessed. 2) A symposium involving all stakeholders discussing the best evidence is held, and agreement on a new local guideline is obtained. 3) The new guideline based on the decisions at the symposium is prepared and implemented into daily clinical practice. 4) Changes in clinical practice are recorded. We applied the model to the clinical question of whether to use open reduction and internal fixation with a locked volar plate (VLP) or closed reduction and percutaneous pinning (CRPP) in adults with DRF.
Results: Prior to application of the CEBO model, only VLP was used in the department. Based on the best evidence, the symposium found that a change in practice was justified. A local guideline stating CRPP as the first surgical choice was implemented. If acceptable reduction could not be obtained, the procedure was converted to VLP. A year after implementation of the guideline, the rate of VLP had declined from 100% to 44%. Conclusion: It is feasible to change surgeons' practice according to best evidence using the CEBO model. Funding: None. Trial Registration: Not relevant. abstract_id: PUBMED:19704295 Implementing evidence-based practice: a mantra for clinical change. Evidence-based practice (EBP) requires a commitment to adopting innovation to address clinical problems. In perinatal and neonatal care, this commitment involves utilization of current best evidence in decision making about patient care for the benefit of mothers, infants, and their families. Embracing EBP can lead to improved patient and professional outcomes, creating synergy that will be welcomed on all levels. Moving toward EBP in this arena is a challenging goal for perinatal nurses, who may encounter many barriers. This article describes the need for "buy-in" from key stakeholders at the bedside and within the infrastructure of the organization. Provided herein are stepwise methods to engage nurses in EBP as well as ideas to promote the use of research so that every patient receives the right care every time. This article provides an overview of how perinatal and neonatal clinicians can shift their focus to embrace EBP and translate research into practice at the bedside. abstract_id: PUBMED:26378428 Encouraging Reflection and Change in Clinical Practice: Evolution of a Tool. This article describes the systematic development and gradual transformation of a tool to guide participants in a continuing medical education program to reflect on their current practices and to make commitments to change. The continuous improvement of this tool was influenced by the evolving needs of the program, reviews of relevant educational literature, feedback from periodic program surveys, interviews with group facilitators, and results from educational research studies. As an integral component of the educational process used in the Practice Based Small Group Learning Program, the current tool is designed to help family physicians think about what has been learned during each educational session and examine issues related to the implementation of evidence-based changes into their clinical practice. Lessons learned will be highlighted. Both the developmental processes employed and the practice reflection tool itself have applicability to other educational environments that focus on continuing professional development. abstract_id: PUBMED:36523127 Acute post-stroke aphasia management: An implementation science study protocol using a behavioural approach to support practice change. Background: Evidence should guide decisions in aphasia practice across the continuum of stroke care; however, evidence-practice gaps persist. This is particularly pertinent in the acute setting, where 30% of people with stroke will have aphasia and speech pathologists experience many challenges implementing evidence-based practice. This has important consequences for people with aphasia and their close others, as well as speech pathologists working in acute settings.
Aims: This study protocol details how we will target practice change using a behavioural approach, with the aim of promoting the uptake of synthesized evidence in aphasia management post-stroke in the acute hospital setting. Methods & Procedures: We will conduct a mixed-methods before-and-after study following the Knowledge-to-Action (KTA) framework. Researchers, speech pathologists and people with lived experience of aphasia will collaborate to identify and prioritize practice gaps, and develop and implement changes to clinical practice based on the Theoretical Domains Framework and Behaviour Change Wheel. Discussion: This study may provide a template for acute stroke services on how to use an implementation science approach to promote the application of synthesized evidence into routine clinical practice to ensure people with aphasia receive high-quality services. Collaboration among researchers, healthcare providers, people with aphasia and their close others ensures that the identification and targeting of practice gaps are driven by theory, lived experience and the local context. What This Paper Adds: What is already known on this subject: Synthesized evidence, such as clinical guidelines and consensus statements, provides the highest level of evidence to inform clinical practice, yet discrepancies between delivered care and evidence remain. This discrepancy is of note in the acute setting, where clinicians report many challenges implementing the best available evidence, combined with a high proportion of people with stroke who will have aphasia (30%). There are many reasons why evidence is not put into practice, and efforts to change clinical practice need to consider these barriers when developing interventions. What this paper adds to existing knowledge: This study protocol details an implementation science approach to effect clinical practice change, informed by a collaboration of key stakeholders (researchers, speech pathologists, and people with aphasia and their close others). Protocol papers that focus on bridging the gap between evidence and practice are uncommon in communication disorders; moreover, explicit prioritization of practice gaps is a critical but often overlooked aspect of promoting evidence-based practice. What are the potential or actual clinical implications of this work? This protocol provides insights into how one study site identified and prioritized evidence-practice gaps using a participatory approach. We provide insights into how clinical practice change may occur by describing how we plan to identify priority evidence-practice gaps and develop an intervention to improve the use of aphasia evidence in routine practice. This protocol aims to share an implementation science approach to service improvement that may be replicated across other services. abstract_id: PUBMED:25300276 Clinical practice guidelines: based on eminence or evidence? Background: Too often, clinical practice guidelines, or similar documents, are of poor quality or are eminence based. Consequently, health care decisions might be based on biased or erroneous information. Here, issues related to standards for clinical practice guidelines that ensure the inclusion of objective, transparent, and scientifically valid information will be discussed. Key Messages: To ensure the quality of guidelines, standards for the development of evidence-based clinical practice guidelines have been developed.
While differences among them exist, overall, these documents emphasize the need for the establishment of transparency, management of conflicts of interest, use of a multidisciplinary development group, utilization of a systematic literature review, use of grading systems to rate the strength of evidence recommendations, articulation of recommendations, performance of an external review, and regular updating. Conclusions: No clinical practice guidelines are perfect. They vary significantly in quality and, therefore, in the trustworthiness of the yielded recommendations. Hence, it is important that one can distinguish evidence-based clinical practice guidelines from guidelines that are not based on evidence. Standards for the development of evidence-based guidelines have been developed. If strictly adhered to, this should lead to more valid and trustworthy guidelines. abstract_id: PUBMED:33559236 The realities of practice change: Nurses' perceptions. Aims And Objectives: To explore registered nurses' perceptions of practice change and describe factors that influence the adoption of practice changes. Background: Nurses play a critical role in optimising patient outcomes. Healthcare organisations and nurses must do their part to achieve the Quadruple Aim, which requires nurses to change their practice. Nurses are ideally positioned to improve patient outcomes by changing their practice to align it with research evidence and organisational initiatives; however, this experience of practice change by nurses is grossly under-studied. Design: A qualitative design, inductive content analysis, was used to understand nurses' perceptions of practice change. Methods: Eleven registered nurses, who worked in one hospital system, participated in one-on-one semi-structured interviews. The Consolidated Criteria for Reporting Qualitative Studies (COREQ) were followed. Findings: The 11 participants described 63 distinct experiences with practice changes. Their experiences with and perceptions of practice change are categorised as (1) There is A History; (2) It's A Lot of Work; (3) It Happens to Nurses; and (4) Doing Right for the Patient. Conclusion: Nurses experience practice change as a central part of the work they do; it occurs frequently and multiple practice changes may co-occur. Nurses identify strategies, such as thoughtful planning, engaging nurses, and communicating the rationale for and the outcomes of the practice change, to facilitate changing practice. Relevance To Clinical Practice: Involving nurses in practice changes could alleviate some inhibiting factors for adopting new practices. Local hospital and national policies should explore creative and practical ways to balance the competing needs of nurses providing direct care at the bedside and dedicated time to be engaged in practice change initiatives. With the ongoing focus on improving patient care and optimising patient safety, nurses should be viewed as highly valued members of the team when designing and implementing practice changes. Answer: Yes, evidence does lead to a change in clinical practice, but the process of translating research findings into actual clinical practice can be complex and challenging. There is a universal acknowledgment that clinical care should be informed by the best available evidence, and translating research evidence into practice is essential for safe, transparent, effective, and efficient healthcare provision (PUBMED:27649522). 
However, despite its importance, there are well-established barriers to the conduct and translation of research evidence into practice, and the success of research implementation in healthcare is dependent on clinician/consumer behavior change (PUBMED:27649522). Midwifery leaders have emphasized the need for practical solutions and a clear map of the process to lead the implementation of evidence-based practices, suggesting that a web-based resource could standardize the process of evidence implementation and provide the necessary direction and support (PUBMED:33129744). Similarly, in radiography, the principles of change management have been introduced to help achieve evidence-based practice, acknowledging the challenges of introducing changes into healthcare settings (PUBMED:32713823). In orthopedic surgery, a new model for implementing evidence-based practice has been shown to be feasible in changing surgeons' practice according to the best evidence, as demonstrated by the change in treatment of distal radius fractures (PUBMED:37341352). Moreover, in perinatal and neonatal care, embracing evidence-based practice (EBP) can lead to improved patient and professional outcomes, but it requires "buy-in" from key stakeholders and a commitment to adopting innovation (PUBMED:19704295). The development of clinical practice guidelines based on evidence rather than eminence is also crucial for ensuring that healthcare decisions are based on objective, transparent, and scientifically valid information (PUBMED:25300276). Nurses, who play a critical role in optimizing patient outcomes, perceive practice change as a central part of their work, and strategies such as thoughtful planning, engaging nurses, and communicating the rationale for practice changes have been identified to facilitate the adoption of new practices (PUBMED:33559236). In summary, while evidence is a critical driver for change in clinical practice, the actual implementation of evidence-based changes requires careful consideration of barriers, engagement of stakeholders, effective communication, and behavior change strategies to ensure that the best available evidence is incorporated into routine clinical practice.