Instruction: The moderate decrease in invasive cervical cancer incidence rates in Spain (1980-2004): limited success of opportunistic screening? Abstracts: abstract_id: PUBMED:20427362 The moderate decrease in invasive cervical cancer incidence rates in Spain (1980-2004): limited success of opportunistic screening? Background: Despite its low incidence, invasive cervical cancer (ICC) is still a public health concern in Spain, due to its being perceived as an avoidable neoplasm. Social changes in recent decades may have increased the risk of ICC among Spanish females. This study sought to update incidence trends in Spanish registries. Patients And Methods: We studied the ICC incidence time trend over the period 1980-2004 using data from 13 Spanish population-based cancer registries, analysing all cases and the two main histological types, i.e. squamous cell carcinoma (SCC) and adenocarcinoma (AC). Change-point Poisson regression models were fitted for all women and for the following three age groups: 25-44, 45-64 and > or =65 years. Age-period-cohort models were used to study cohort and period effect. Results: ICC incidence showed a moderate decrease (annual percentage change: -0.9%; 95% CI -1.3% to -0.5%). Whereas rates among younger women rose sharply by 5.5% until 1990, when they began to descend by 1.2% annually, among women aged > or =45 years they declined by 1.5% per annum. In the last decade of the study period, divergent trends were found for SCC and AC, i.e. downward versus upward, respectively. Risk increased with birth cohort after 1940-45 but the period effect showed opposite trends for the two histological types. Conclusion: Cervical cancer incidence rates have decreased moderately in Spain, a trend consistent with the preventive effect of widespread opportunistic screening. abstract_id: PUBMED:9328198 International incidence rates of invasive cervical cancer after introduction of cytological screening. Because Pap-smear screening can detect pre-invasive cervical cancer, such screening can markedly reduce the occurrence of invasive cancer. However, its impact in different populations is uncertain. This study compares the changes in cervical cancer incidence at different ages after the introduction of screening in different populations, and addresses the impact of organized and opportunistic smear taking. We identified 17 cancer registries large enough and existing long enough to analyze screening effects. For each registry, we calculated the relative reduction in age-specific incidence rates and in incidence rates age-standardized to the world population after the introduction of cytologic screening. In 11 of the 17 populations, age-standardized incidence rates declined markedly from 27 percent in Norway and to 77 percent in Finland. Age-specific declines were confined to women aged 30 to 70 years old with a nadir around ages 40 to 55. In six other populations, age-standardized incidence rates declined less than 25 percent, an amount too small to provide unambiguous evidence of a screening effect. In several populations, cytologic screening had a more pronounced effect than is generally recognized. Because age-specific declines in cervical cancer incidence rates were strikingly similar in populations with widely different screening practices, organized screening may not be markedly superior to opportunistic screening. The reduction in reported cancer incidence because of screening is smaller in younger and older women. 
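A brief technical aside on the trend statistics quoted above: the annual percentage change (APC) figures such as the -0.9% (95% CI -1.3% to -0.5%) in PUBMED:20427362 come from log-linear Poisson trend models. The sketch below is illustrative only: it fits a plain log-linear Poisson regression to invented incidence counts and converts the slope into an APC. It omits the change-point and age-period-cohort components used in the actual study, and the choice of Python with statsmodels is an assumption of this sketch, not something stated in the abstracts.

```python
# Illustrative only: how an annual percentage change (APC) such as the -0.9%
# reported above is typically derived from a log-linear Poisson model.
# The incidence counts and person-years below are invented for the sketch.
import numpy as np
import statsmodels.api as sm

years = np.arange(1980, 2005)                       # calendar years 1980-2004
person_years = np.full(years.shape, 1_000_000.0)    # hypothetical population at risk
rng = np.random.default_rng(0)
true_rate = 8e-5 * np.exp(-0.009 * (years - 1980))  # simulated decline of ~0.9% per year
cases = rng.poisson(true_rate * person_years)       # simulated case counts

# log(E[cases]) = log(person_years) + b0 + b1 * year, so APC = 100 * (exp(b1) - 1)
X = sm.add_constant(years - years.min())
model = sm.GLM(cases, X, family=sm.families.Poisson(),
               offset=np.log(person_years)).fit()
b1 = model.params[1]
lo, hi = model.conf_int()[1]
print(f"APC = {100 * (np.exp(b1) - 1):.2f}% "
      f"(95% CI {100 * (np.exp(lo) - 1):.2f}% to {100 * (np.exp(hi) - 1):.2f}%)")
```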
abstract_id: PUBMED:21051058 In situ and invasive cervical cancer epidemiology in the province of Girona, Spain 1990-2004: incidence, mortality, survival and screening history Background And Objective: The epidemiological surveillance of cervical cancer (CC) should encompass everything from screening to mortality. The objective of this work is to provide an overall picture of CC in the province of Girona (Spain) between the years 1990-2004. It is a longitudinal study that includes incidence of in situ (ISCC) and invasive (ICC) carcinoma of the cervix, mortality, relative survival rates and the screening history of one third of the affected women. Patients And Method: The crude and age-adjusted annual and five-year incidence rates for ISCC and ICC and the mortality for ICC were calculated. Joinpoint regression models were used to estimate the annual percentage change (APC) and the changes in the trend. The five-year relative survival rate was estimated. Cytology screening smears were reviewed. Results: Three hundred seventy-three incident cases of ICC and 1,093 cases of ISCC were diagnosed. After an initial rise of the incidence in both types, the adjusted rate of ISCC stabilized around 28 cases per 100,000 and that of ICC declined to 5.8 cases per 100,000 women in the period 2000-2004. The mortality remained steady and the survival rate showed a non-significant increase. Among the cases with a screening history, almost 50% of the women diagnosed with ICC had no previous cytology, in contrast with the high percentage of women with ISCC who had one (92% in 2000-2004). Conclusions: Cervical cancer incidence is decreasing in Girona, although the data from the screening histories suggest that it can be further reduced. abstract_id: PUBMED:9139836 International incidence rates of invasive cervical cancer before cytological screening. Huge differences in incidence rates of invasive cervical cancer occur among populations. These differences reflect the influences of both etiological environmental factors and removal of precursor lesions detected upon screening. The purposes of this article are (i) to describe similarities and differences in the shapes and magnitudes of age-specific incidence rates of invasive cervical cancer before screening had an effect, (ii) to provide baseline data for further global study of screening effects, and (iii) to provide baseline incidence data for the design of optimal screening programs. To eliminate the impact of screening effects, we have selected age-specific incidence rates from times when and from populations in which screening was insignificant. The selected rates were suitably scaled and compared regarding age at onset of increase in incidence, age at peak incidence, and rate of subsequent decline. Despite a 16-fold difference in incidence rates, all curves had the same basic structure, with an increase to a peak followed by a decline or a plateau. Although all populations but one had an onset around age 25, 7 European countries showed an earlier peak age (mean = 46 vs. 59) and a more rapid decline after the peak than most other populations. The common basic shape of the age-specific incidence curve, overall, suggests a relatively similar development of invasive cervical cancer in different populations. These results illustrate the underlying similarities in the markedly different age-specific incidence rates of invasive cervical cancer. They also provide a basis for studying screening effects and for optimizing screening programs in specific geographic areas. 
abstract_id: PUBMED:21852701 Narrowing the equity gap: the impact of organized versus opportunistic cancer screening in Catalonia (Spain). Objectives: To assess the impact on equity of access of an organized breast cancer screening programme, compared with opportunistic breast and cervical cancer screening activities. Methods: Two cross-sectional health interview surveys conducted in 1994 and 2006 in Catalonia (Spain), with 6382 and 7653 women participating in the two surveys, respectively. The main outcome measures were having undergone regular mammography, and regular cytology. Age-standardized prevalence rates for both screening tests were computed using the direct method. The relative inequality index was computed to measure changes over time in inequality in screening utilization. Results: Participation among women aged 50-69 has increased after the introduction of the organized breast screening programme; the greatest impact has been observed among those women with lower educational levels (from 17% in 1994 to 79% in 2006). Equity of access by education was particularly increased in the target group for breast cancer screening. Conclusion: This study indicates that an organized screening programme could improve participation and equity of access. abstract_id: PUBMED:23845269 Coverage and costs of opportunistic screening for cervical cancer in Cantabria (Spain) Objective: To analyze cervical cancer screening performed in Cantabria by evaluating the coverage and costs of screening and by calculating the available direct costs of the disease and the indirect costs of cervical cancer mortality. Methods: Screening for cervical cancer is performed in women aged between 21 and 65. According to the census for 2011 in Cantabria, there were 189,111 women in this age group. We performed a cross-sectional, quantitative and qualitative study of the Pap smears performed and analyzed the direct and indirect costs of cervical cancer. Results: Between 2006 and 2011, 51% of women studied had one Pap smear, and 26% had two. In 2011, 31,554 Pap smears were performed in opportunistic screening in Cantabria, and the direct cost was 2,904,760 €. The annual direct cost of cervical cancer (average 2008-2010) was 567,567 €. The annual indirect costs (average 2001-2008) of cervical cancer were 386,122.02 €, in the reference scenario considered. Conclusions: Only 26% of women in Cantabria attended screening within the intervals recommended in the current opportunistic protocol. The cost data provided in this study may be useful for future economic evaluations. abstract_id: PUBMED:9096657 Treatment of pre-invasive conditions during opportunistic screening and its effectiveness on cervical cancer incidence in one Norwegian county. Norway had until recently no organized screening programme for cervical cancer, but opportunistic screening was common. This study focuses on the effectiveness of treatment of pre-malignant cervical conditions (CIN III) on cervical-cancer incidence in the county of Sør-Trøndelag in Norway, prior to the introduction of organized mass screening. The study is based on cervical-cancer incidence rates during the years 1965-92 and treatment data for CIN III. The expected number of cervical-cancer cases prevented due to early intervention was expressed in a regression model with 2 unknown parameters: the probability, p, of cancer development in case of CIN III, and the time lag, t, between treatment and when clinical cancer would otherwise have been diagnosed. 
The estimated probability that a patient treated for CIN III would have developed cervical cancer if not treated was found to be approximately 20%, and the mean time delay was around 16 years. In the last period of study (1988-92), the incidence was reduced by nearly 40% of what would have been expected without early intervention. Based on equal treatment rates as in 1990, parameter estimates were used to predict future incidence reduction. Maximum effectiveness will be achieved around the year 2005, with a nearly 70% reduction. Opportunistic screening and treatment of CIN III seems to have had considerable influence on cervical-cancer incidence. The costs, however, are substantial over-treatment, since our results indicate that 4 of 5 women treated for CIN III would not progress into the invasive state. abstract_id: PUBMED:22129658 Trend in cervical cancer screening in Spain (2003-2009) and predictors of adherence. Cervical cancer remains a public health problem in developed countries. Early detection of both premalignant lesions and cervical cancer through an appropriate screening programme may decrease its incidence and mortality. High rates of participation are essential to ensure the desired impact on the population. The aims of this study were to assess the use of Pap smears in Spain in 2009 to identify factors associated with screening adherence (predictors) and assess the trend from 2003 to 2009. We included women surveyed in the European Health Interview Survey for Spain. Cervical cancer screening included self-reported Pap smears over the last 3 years. The target age range was 25-64 years. The following independent variables were analysed: sociodemographic variables, chronic conditions, and lifestyle. Predictors of Pap smear adherence were explored using multivariate logistic regression. The screening coverage in the target population was 66.1% (95% confidence interval: 64.8-67.4). Undergoing Pap smears was associated positively with the following: being married, higher levels of education and income, having visited a general practitioner in previous weeks, and suffering from musculoskeletal disease. Belonging to an older age group (55-64 years) and obesity were associated with nonadherence to Pap smears. We did not find significant differences when we compared cervical cancer screening adherence over time since 2003. Adherence to cervical cancer screening in Spain does not seem to be improving. An effort must be made, and the implementation of population-based programmes instead of opportunistic screening could be considered, to recruit women who are less likely to undergo screening. abstract_id: PUBMED:18606440 Current techniques in screening for cervical cancer in Spain: updated recommendations. A population-based survey (AFRODITA Study) was conducted in Spain in order to estimate the coverage and factors associated with cervical cancer cytological screening. The results of this survey indicate that the rate of screening for cervical cancer in Spain is 75.6% in women between 18 and 65 years. This high rate of opportunistic cervical cancer screening possibly has increased in the last 5 years. However, screening participation still needs to be improved in older women, women living in rural areas, women at a low socioeconomic level, and women living in certain autonomous regions. Conversely, an overuse of cytology has been observed in Spain, as a result of opportunistic screening. 
A survey in 2005, carried out in 14 public and private Spanish cytological laboratories, showed that among 409,443 women, the mean rate of abnormal cytology (a diagnosis of at least atypical cells of undetermined significance on a Pap smear) was 3.5% with a range of 0.5%-7.0% in Spain. We believe that this low rate of abnormal Pap smears is the result of repeated annual opportunistic screening in a low-risk population of women. A new Spanish consensus protocol for screening for cervical carcinoma was developed in 2006 by the Spanish Society of Gynaecology and Obstetrics, the Spanish Association of Cervical Pathology and Colposcopy, the Spanish Society of Cytology, and the Spanish Society of Anatomic Pathology. In order to rationalize the use of cervical cancer screening in Spain, the recommendations of the new Spanish consensus screening protocol must be followed. abstract_id: PUBMED:24129395 Trends in incidence of, and mortality from, cervical lesions in Ireland: baseline data for future evaluation of the national cervical screening programme. Aim: To investigate incidence and mortality trends for cervical lesions in Ireland in the period 1994-2008. Methods: We used data from the National Cancer Registry, Ireland and national death registration data to calculate age-standardised rates for the periods of interest. We used standardised rate ratios to test whether incidence was associated with socio-demographic variables and used Joinpoint to examine trends by morphology grouping. Results: Incidence of cervical cancer and cervical intraepithelial neoplasia (CIN3) rose over the period 1994-2008. The annual percentage change for cervical cancer was 1.8% and that for CIN3 was 3.8%. Women resident in the most deprived areas had invasive cervical cancer incidence almost twice as high as those resident in the least deprived areas (standardised rate ratio (SRR)=1.8). Comparing incidence in Ireland to England and Wales, Northern Ireland and Scotland in the three years 2005-2007, the SRRs (other areas vs. Ireland) were 0.70, 0.88 and 0.84 respectively. Cervical cancer rates have fallen in these countries in the same period that there is a rise demonstrated in Ireland. Conclusion: Incidence rates of cervical cancer rose in Ireland steadily, albeit modestly, during 1994-2008, most likely due to long-term changes in patterns of sexual behaviour and contraceptive use. A more pronounced rise in CIN3 rates point to considerable levels of opportunistic screening during this period. Mortality rates have changed little over the past four decades, in contrast to trends in countries with well-organised screening programmes. Answer: The moderate decrease in invasive cervical cancer (ICC) incidence rates in Spain from 1980 to 2004 suggests a limited success of opportunistic screening. The study by PUBMED:20427362 indicates that ICC incidence showed a moderate decrease with an annual percentage change of -0.9%. This trend is consistent with the preventive effect of widespread opportunistic screening. However, the decrease was not uniform across all age groups or histological types. While rates among younger women initially rose sharply until 1990 and then began to descend, rates among women aged ≥45 years declined steadily. Additionally, in the last decade of the study period, divergent trends were observed for squamous cell carcinoma (SCC) and adenocarcinoma (AC), with SCC rates decreasing and AC rates increasing. 
The findings from other regions in Spain, such as Girona (PUBMED:21051058), also show a decrease in ICC incidence, with the adjusted rate of ICC declining to 5.8 cases per 100,000 women in the period 2000-2004. However, the mortality remained steady, and the survival rate had a non-significant increase. Notably, almost 50% of the women diagnosed with ICC had no previous cytology, contrasting with the high percentage of women with in situ cervical cancer (ISCC) who had undergone screening. The limited success of opportunistic screening in Spain is further highlighted by the fact that adherence to cervical cancer screening does not seem to be improving, with no significant differences in adherence over time since 2003 (PUBMED:22129658). Moreover, the high rate of opportunistic cervical cancer screening in Spain may have increased in the last 5 years, but participation still needs improvement among certain demographics (PUBMED:18606440). In summary, while there has been a moderate decrease in ICC incidence rates in Spain, the success of opportunistic screening appears to be limited, with significant room for improvement in screening adherence and equity of access (PUBMED:21852701). The divergent trends in SCC and AC, as well as the steady mortality rates, suggest that the impact of opportunistic screening on reducing ICC incidence and mortality could be enhanced with more organized and targeted screening efforts.
Instruction: Are pain location and physical examinations useful in locating a tear site of the rotator cuff? Abstracts: abstract_id: PUBMED:16219939 Are pain location and physical examinations useful in locating a tear site of the rotator cuff? Background: Pain is the most common symptom of patients with rotator cuff tendinopathy, but little is known about the relationship between the site of pain and the site of cuff pathologic lesions. Also, accuracies of physical examinations used to locate a tear by assessing the muscle strength seem to be affected by the threshold for muscle weakness, but no studies have been reported regarding the efficacies of physical examinations in reference to their threshold. Hypothesis: Pain location is useful in locating a tear site. Efficacies of physical examinations to evaluate the function of the cuff muscles depend on the threshold for muscle weakness. Study Design: Case series; Level of evidence, 4. Methods: The authors retrospectively reviewed the clinical charts of 160 shoulders of 149 patients (mean age, 53 years) with either rotator cuff tears (140 shoulders) or cuff tendinitis (20 shoulders). The location of pain was recorded on a standardized form with 6 different areas. The diagnostic accuracies of the following tests were assessed with various thresholds for muscle weakness: supraspinatus test, the external rotation strength test, and the lift-off test. Results: Lateral and anterior portions of the shoulder were the most common sites of pain regardless of existence of tear or tear location. The supraspinatus test was most accurate when it was assessed to have positive results with the muscle strength less than manual muscle testing grade 5, whereas the lift-off test was most accurate with a threshold less than grade 3. The external rotation strength test was most accurate with a threshold of less than grade 4+. Conclusion: The authors conclude that pain location is not useful in locating the site of a tear, whereas the physical examinations aiming to locate the tear site are clinically useful when assessed to have positive results with appropriate threshold for muscle weakness. abstract_id: PUBMED:23306597 Rotator cuff tear: physical examination and conservative treatment. Rotator cuff tear is one of the most common shoulder diseases. It is interesting that some rotator cuff tears are symptomatic, whereas others are asymptomatic. Pain is the most common symptom of patients with a tear. Even in patients with an asymptomatic tear, it may become symptomatic with an increase in tear size. Physical examination is extremely important to evaluate the presence, location, and extent of a tear. It also helps us to understand the mechanism of pain. Conservative treatment often works. Patients with well-preserved function of the supraspinatus and infraspinatus are the best candidates for conservative treatment. After a successful conservative treatment, the symptom once disappeared may come back again. This recurrence of symptoms is related to tear expansion. Those with high risk of tear expansion and those with less functional rotator cuff muscles are less likely to respond to conservative treatment. They may need a surgical treatment. abstract_id: PUBMED:27385156 Surgery or conservative treatment for rotator cuff tear: a meta-analysis. Purpose: Comparative evidence on treating rotator cuff tear is inconclusive. 
The objective of this review was to evaluate the evidence on the effectiveness of tendon repair in reducing pain and improving function of the shoulder when compared with conservative treatment of symptomatic rotator cuff tear. Method: Search on CENTRAL, MEDLINE, EMBASE, CINAHL, Web of Science and PEDro databases. Randomised controlled trials (RCT) comparing surgery and conservative treatment of rotator cuff tear. Study selection and extraction based on the Cochrane Handbook for Systematic Reviews of Interventions. Random effects meta-analysis. Results: Three identified RCTs involved 252 participants (123 cases and 129 controls). The risk of bias was considered low for all three RCTs. For the Constant score, the effect size was a statistically non-significant 5.6 points (95% CI -0.41 to 11.62) at 1-year follow-up, favouring surgery but below the level of minimal clinically important difference. The respective difference in pain reduction was -0.93 (95% CI -1.65 to -0.21) cm on a 0-10 pain visual analogue scale favouring surgery. The difference was statistically significant (p = 0.012) at 1-year follow-up but below the level of minimal clinically important difference. Conclusion: There is limited evidence that surgery is not more effective in treating rotator cuff tear than conservative treatment alone. Thus, a conservative approach is advocated as the initial treatment modality. Implications for Rehabilitation: There is limited evidence that surgery is not more effective in treating rotator cuff tear than conservative treatment alone. There was no clinically significant difference between surgery and active physiotherapy at 1-year follow-up in improving Constant score or reducing pain caused by rotator cuff tear. As physiotherapy is less prone to complications and less expensive than surgery, a conservative approach is advocated as the initial treatment modality for rotator cuff tears. abstract_id: PUBMED:29767810 A novel remaining tendon preserving repair technique leads to improved outcomes in special rotator cuff tear patterns. Introduction: The purpose of this study was to identify the tear pattern that could be anatomically repaired by preserving the remaining tendon on the footprint and to evaluate clinical outcomes of patients who underwent remaining tendon preserving cuff repair. Materials And Methods: Of 523 patients with full-thickness rotator cuff tears who underwent arthroscopic repair, 41 (7.8%) patients had a repairable rotator cuff tear while preserving the remaining tendon. Among them, 31 patients were followed up for more than 2 years, including 26 patients with posterior L-shaped tear and 5 patients with transtendinous tear patterns. Clinical outcomes were evaluated using ASES and Constant score, SANE score for patient satisfaction, and VAS for pain. MRI was taken for tendon integrity 6 months postoperatively. Results: Of the 31 patients, 11 (35.5%) had previous injury history before rotator cuff tear, including 7 (26.9%) of the 26 patients with posterior L-shaped tear and 4 (80%) of the 5 patients with transtendinous tear. The average size of preoperative cuff tear was 17.8 ± 6.8 mm in the anterior-to-posterior direction and 15.2 ± 5.1 mm in the medial-to-lateral direction. ASES and Constant score, SANE score, and VAS for pain were significantly (p < 0.001) improved after remaining tendon preserving rotator cuff repair. Rotator cuff tendons of 22 (84.6%) patients with posterior L-shaped tear and 4 (80%) patients with transtendinous tear patterns were healed. 
Conclusion: Patients who underwent rotator cuff repair with preservation of the remaining tendon on the footprint obtained satisfactory functional outcomes. Rotator cuff tears in patients who had posterior L-shaped tear extending between supraspinatus and infraspinatus tendons or transtendinous tear pattern with substantial remaining tendon could be repaired using remaining tendon preserving repair technique. Anatomic reduction of torn cuff tendon without undue tension could be achieved using the remaining tendon preserving repair technique. abstract_id: PUBMED:32055877 Anterior rotator cable disruption does not affect outcomes in rotator cuff tear with subscapularis involvement. Purpose: The purpose of this study was to compare clinical and radiological outcomes after arthroscopic repair of two different rotator cuff tear configurations: anterosuperior rotator cuff tear and rotator cuff tears with subscapularis involvement. It was hypothesized that, although both tear configurations would show significant improvement in clinical outcomes after arthroscopic repair, the rotator cuff tears with subscapularis involvement where the anterior rotator cable maintains its integrity would have better clinical outcomes and structural integrity. Methods: This study included 226 patients who underwent arthroscopic repair of anterosuperior rotator cuff tears (n = 107, group A) and rotator cuff tears with subscapularis involvement (n = 119, group B). The visual analog scale (VAS) pain score, subjective shoulder value (SSV), American Shoulder and Elbow Surgeons (ASES) score, University of California at Los Angeles (UCLA) shoulder score, and active range of motion (ROM) were assessed. Modified belly press test was performed to assess the strength of the subscapularis muscle. Cuff integrity was evaluated using magnetic resonance arthrography or computed tomographic arthrography at 6 months after operation. Results: At 3-year follow-up, the VAS score, SSVs, ASES scores, UCLA shoulder scores, active ROM, and modified belly press test showed significant improvement in both groups (p < 0.001). However, these improvements showed no statistical significance between the two groups. On follow-up radiologic evaluations, no significant difference in re-tear rates between group A (25 of 107, 23.4%) and group B (23 of 119, 19.3%) was observed. Conclusions: The presence of anterior cable involvement of the anterosuperior rotator cuff tear did not affect postoperative clinical outcomes and re-tear rate compared to rotator cuff tears with subscapularis involvement where the anterior cable integrity was maintained, although the anterosuperior rotator cuff tear was associated with more significant preoperative supraspinatus fatty infiltration. Therefore, the present study determined that it would not be necessary to differentiate treatment protocols between these patterns. Level Of Evidence: Level III. abstract_id: PUBMED:35141677 Patient-Reported Outcomes Measurements Information System (PROMIS) upper extremity and pain interference do not significantly predict rotator cuff tear dimensions. Background: Proper diagnosis of rotator cuff tears is typically established with magnetic resonance imaging (MRI); however, studies show that MRI-derived measurements of tear severity may not align with patient-reported pain and shoulder function. 
The purpose of this study is to investigate the capacity for the Patient-reported Outcomes Measurements Information System (PROMIS) computer adaptive tests to predict rotator cuff tear severity by correlating preoperative tear morphology observed on MRI with PROMIS upper extremity (UE) and pain interference (PI) scores. This is the first study to investigate the relationship between tear characteristics and preoperative patient-reported symptoms using PROMIS. Considering the essential roles MRI and patient-reported outcomes play in the management of rotator cuff tears, the findings of this study have important implications for both treatment planning and outcome reporting. Methods: Two PROMIS-computer adaptive test forms (PROMIS-UE and PROMIS-PI) were provided to all patients undergoing rotator cuff repair by one of three fellowship-trained surgeons at a single institution. Demographic information including age, sex, race, employment status, body mass index, smoking status, zip code, and preoperative PROMIS-UE and -PI scores was prospectively recorded. A retrospective chart review of small to large full- or partial-thickness rotator cuff tears between May 1, 2017 and February 27, 2019 was used to collect each patient's MRI-derived tear dimensions and determine tendon involvement. Results: Our cohort consisted of 180 patients (56.7% male, 43.3% female) with an average age of 58.9 years (standard deviation, 9.0). There was no significant difference in PROMIS-UE or -PI scores based on which rotator cuff tendons were involved in the tear (P > .05). Neither PROMIS-UE nor PROMIS-PI significantly correlated with tear length or retraction length of the supraspinatus tendon (P > .05). The sum of tear lengths in the anterior-posterior and medial-lateral directions was weakly correlated with PROMIS-UE (P = .042; r = -0.152, r2 = 0.031) and PROMIS-PI (P = .027; r = 0.165, r2 = 0.012). Conclusion: Rotator cuff tear severity does not significantly relate to preoperative PROMIS-UE and -PI scores. This finding underscores the importance of obtaining a balanced preoperative assessment of rotator cuff tears that acknowledges the inconsistent relationship between rotator cuff tear characteristics observed on MRI and patient-reported pain and physical function. abstract_id: PUBMED:34890864 Weight-bearing shoulder and rotator cuff tear. Two percent of the general population are wheelchair-dependent. The shoulder takes on the weight-bearing locomotor function, and tends with age to develop degenerative pathologies, notably in the rotator cuff. The association between weight-bearing shoulder and rotator cuff tear raises several questions: what are the mechanisms by which wheelchair propulsion and transfer overload the shoulder, and what specificities do the lesions display? They occur in younger patients than in the rest of the population, after about 15 years' fairly constant wheelchair use. As well as the classical supraspinatus damage, an anterior cuff extension is the most frequent case. Is there a particular clinical presentation of cuff tear in this population? As the shoulder cannot be functionally protected, pain is constant and asymptomatic cuff tear is rare. Any pain after 12 years' wheelchair use requires morphologic exploration. How does treatment strategy differ for the weight-bearing shoulder? 
More than the classic quest for compensation, in the weight-bearing shoulder, the rotator cuff should be protected against any local aggression by systematic surgical recalibration of the subacromial space, and all lesions should be repaired, to avoid extension. Are functional results to be expected to be poorer in this population? On condition that the suture is protected by postponing any transfer beyond 4 months and that the shoulder is protected on a daily basis, analytic results and healing rates are comparable to those in the general population. However, general complications and skin complications are more frequent and require postoperative care in a specialized center. abstract_id: PUBMED:37915870 Osteopathic Manipulative Treatment for a Chronic Rotator Cuff Tear: A Case Report. Rotator cuff tears, particularly involving the supraspinatus muscle and/or tendon, are highly prevalent among individuals engaged in repetitive shoulder motions. Occupations demanding constant and repetitive shoulder movements are especially susceptible to rotator cuff injuries, potentially leading to prolonged joint wear and tear and an increased likelihood of joint replacement. Considering the impact of social determinants of health, including access to healthcare and socioeconomic status, it is imperative to explore conservative treatment modalities that alleviate financial burdens and reduce lengthy recovery periods. In this report, we present a case of a 64-year-old female hairdresser diagnosed with a chronic partial thickness rotator cuff tear who remained unresponsive to physical therapy and chiropractic manipulation but exhibited improvement following osteopathic manipulative treatment. Additionally, osteopathic considerations and pertinent literature are reviewed to provide insight into the broader context of shoulder pain management. abstract_id: PUBMED:33276984 Acromioclavicular cyst with geyser sign - An uncommon presentation of massive rotator cuff tear. Rotator cuff muscle tear is a common finding among adults and acromioclavicular cyst is a rare secondary manifestation. This case report describes the clinical presentation and workup diagnosis of a patient with acromioclavicular cyst in context of massive rotator cuff tear. Woman, 83-year-old developed a tumefaction over the left acromioclavicular joint. She had pain, limitation on active range of motion and function limitation of the left shoulder. The X-ray revealed superior humeral head displacement and signs of arthropathy. The MRI revealed "geyser sign" and identified an acromioclavicular cyst secondary to cuff tear arthropathy. Aspiration was not performed due to high recurrence rate and surgical removal was decided. Clinicians should be aware of this rare complication of rotator cuff tear, demanding exclusion of other possible causes of acromioclavicular cyst and offer suitable treatment options. abstract_id: PUBMED:28444541 Reverse shoulder arthroplasty for massive irreparable rotator cuff tears and cuff tear arthropathy: a systematic review. Purpose: To report the outcomes and complications of reverse shoulder arthroplasty (RSA) in massive irreparable rotator cuff tears (MIRCT) and cuff tear arthropathy (CTA). Methods: A systematic review of the literature contained in Medline, Cochrane, EMBASE, Google Scholar and Ovid databases was conducted on May 1, 2016, according to PRISMA guidelines. 
The key words "reverse total shoulder arthroplasty" or "reverse total shoulder prostheses" with "rotator cuff tears"; "failed rotator cuff surgery"; "massive rotator cuff tears"; "irreparable rotator cuff tears"; "cuff tear arthropathy"; "outcomes"; "complications" were matched. All articles reporting outcomes and complications of RSA for the management of MIRCT or CTA were included. The comparison between preoperative and postoperative clinical scores, as well as range of motion (ROM), was performed using the Wilcoxon-Mann-Whitney test. P values lower than 0.05 were considered statistically significant. Results: Seven articles were included in our qualitative synthesis. A statistically significant improvement in all clinical scores and ROM was found comparing the preoperative value with the postoperative value. The degrees of retroversion of the humeral stem of the RSA do not influence the functional outcomes in a statistically significant fashion. There were 17.4% of complications. The most frequent was heterotopic ossification, occurring in 6.6% of patients. Revision surgery was necessary in 7.3% of patients. Conclusions: RSA restores pain-free ROM and improves function of the shoulder in patients with MIRCT or CTA. However, complications occur in a high percentage of patients. The lack of level I studies limits the real understanding of the potentials and limitations of RSA for the management of MIRCT and CTA. Answer: Pain location is not useful in locating the site of a rotator cuff tear. A study found that the lateral and anterior portions of the shoulder were the most common sites of pain regardless of the existence of a tear or the tear location, suggesting that pain location does not correlate well with the specific site of a rotator cuff tear (PUBMED:16219939). However, physical examinations that evaluate the function of the cuff muscles can be clinically useful in locating the tear site when assessed with appropriate thresholds for muscle weakness. The accuracy of tests such as the supraspinatus test, the external rotation strength test, and the lift-off test depends on the threshold set for muscle weakness (PUBMED:16219939). Physical examination is extremely important to evaluate the presence, location, and extent of a rotator cuff tear, as well as to understand the mechanism of pain (PUBMED:23306597). Therefore, while pain location alone may not be a reliable indicator of the tear site, the combination of physical examinations and understanding the patient's symptoms and functional limitations can provide valuable information for the diagnosis and management of rotator cuff tears.
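A short technical aside on the pooled estimates cited above: the systematic review among the abstracts (PUBMED:27385156) combines three RCTs with a random-effects meta-analysis to obtain figures such as the 5.6-point Constant score difference (95% CI -0.41 to 11.62). The sketch below shows, as an illustration only, how a DerSimonian-Laird random-effects pooled estimate and confidence interval are computed; the three study-level mean differences and standard errors are invented for the example and are not the review's actual data.

```python
# Illustrative only: DerSimonian-Laird random-effects pooling of the kind used
# in the meta-analysis above (PUBMED:27385156). The three study-level mean
# differences (Constant score points) and their standard errors are invented.
import numpy as np

effects = np.array([4.0, 7.5, 5.0])   # hypothetical mean differences, surgery vs conservative
se = np.array([4.2, 3.8, 5.1])        # hypothetical standard errors
w_fixed = 1.0 / se**2                 # fixed-effect (inverse-variance) weights

# Between-study heterogeneity (DerSimonian-Laird tau^2)
theta_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
Q = np.sum(w_fixed * (effects - theta_fixed)**2)
df = len(effects) - 1
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights, pooled estimate and 95% CI
w_re = 1.0 / (se**2 + tau2)
theta_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled difference = {theta_re:.2f} "
      f"(95% CI {theta_re - 1.96 * se_re:.2f} to {theta_re + 1.96 * se_re:.2f})")
```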
Instruction: Should all patients undergo postoperative duplex imaging to detect a deep vein thrombosis after varicose vein surgery? Abstracts: abstract_id: PUBMED:18034276 Should all patients undergo postoperative duplex imaging to detect a deep vein thrombosis after varicose vein surgery? Objectives: Deep vein thrombosis (DVT) is a serious complication of varicose vein surgery, with attendant risks of pulmonary embolization. Prospective duplex screening identifies DVT in 5% of patients compared to clinical incidence of approximately 1%. Universal duplex screening is costly, and the benefits of diagnosing subclinical DVT are unproven. This study evaluates whether a policy of using clinical indications (leg swelling) to determine the need for duplex imaging is safe after varicose vein surgery. Methods: Patients undergoing varicose vein surgery over a 4-year period were studied. Postoperative venous duplex imaging was performed if leg swelling occurred within 6 weeks of surgery. Long-term follow-up was performed to detect any missed occurrence of clinical DVT or pulmonary embolism. Results: A total of 411 patients had 491 leg operations with 80 bilateral procedures (27%); 29 patients with leg swelling underwent duplex imaging, 5 of whom had duplex-proven DVT. No patient without early clinical signs went on to develop clinical DVT on long-term follow-up. Conclusion: A policy of using clinical signs as a triage for duplex imaging detected all clinically significant DVTs and generated manageable workloads for our vascular laboratory. abstract_id: PUBMED:1861089 Assessment of the patency of deep leg veins with duplex. To evaluate the accuracy of venous duplex, results obtained in 101 patients are compared with venography. A first group consisted of 48 patients with clinically suspected deep vein thrombosis. In 30 of them a positive duplex scan was obtained and all had subsequently a positive venography. Eighteen patients with a normal duplex scan had a normal venography. Another group of 53 patients were tested preoperatively for varicose vein surgery. No obstruction of the venous system was withheld with duplex but 3 patients had an old thrombosis on venography. Thus duplex is a highly reliable method to detect proximal thrombosis in clinically suspected patients but detection of late sequelae of thrombosis may be more difficult. abstract_id: PUBMED:12027985 Persistent popliteal fossa reflux following saphenopopliteal disconnection. Background: This was a retrospective assessment of the technical adequacy of saphenopopliteal disconnection (SPD). Methods: Patients scheduled for SPD underwent preoperative colour-coded duplex imaging to localize the saphenopopliteal junction (SPJ). Operations were conducted with the patients under general anaesthesia in the prone position with planned full popliteal fossa exposure. Patients were reimaged 6 weeks after operation. Results: Some 69 patients had surgery over 4 years; complete data were available for 59 (27 men and 32 women, median age 55 (range 27-78) years). There were eight staged bilateral procedures. 
Postoperative duplex scans identified 23 (39 per cent) with ideal results (incompetent SPJ successfully disconnected), 12 (20 per cent) with satisfactory results (incompetent SPJ disconnected successfully but persisting venous reflux in superficial veins), eight duplex failures (14 per cent) (previously identified incompetent SPJ persisted but was competent after operation) and 13 surgical failures (22 per cent) (incompetent SPJ completely missed during surgery). There were three major postoperative complications (5 per cent) (two deep vein thromboses and one popliteal vein injury) and one patient suffered a sural nerve palsy. Conclusion: Despite preoperative duplex localization of the SPJ, SPD proved an unreliable technique in this series. abstract_id: PUBMED:15386324 Incidence of deep vein thrombosis after varicose vein surgery. Background: Varicose vein surgery is generally considered to have little risk of postoperative deep vein thrombosis (DVT). This prospective study examined the incidence of DVT in patients undergoing varicose vein surgery. Methods: Lower leg veins were assessed before operation by duplex ultrasonography in 377 patients, and reassessed 2-4 weeks after surgery, and again at 6 and 12 months. Patients were instructed to contact a physician if symptoms consistent with DVT occurred before the scheduled follow-up appointment. Preoperative prophylaxis (a single dose of subcutaneous heparin) was left to the discretion of the vascular surgeon. Results: DVT was detected in 20 (5.3 per cent) of the 377 patients. Of these, only eight were symptomatic and no patient developed symptoms consistent with pulmonary embolus. Eighteen of the 20 DVTs were confined to the calf veins. Subcutaneous heparin did not alter the outcome. No propagation of thrombus was observed and half of the DVTs had resolved without deep venous reflux at 1 year. Conclusion: The incidence of DVT following varicose vein surgery was higher than previously thought, but these DVTs had minimal short- or long-term clinical significance. abstract_id: PUBMED:7495065 Varicose vein stripping--a prospective study of the thrombotic risk and the diagnostic significance of preoperative color coded duplex sonography. Insufficiency of epifascial veins promotes venous ulceration and increases thromboembolic risk in general surgery patients. Epifascial varicose vein stripping is therefore considered the most effective prophylactic procedure. Thromboembolic risk of patients undergoing this surgical procedure has not yet been prospectively evaluated but appears to be lower than in general surgery patients. The gold standard of preoperative assessment of varicose surgery patients is ascending pressure phlebography, but this technique is invasive, time consuming and costly. We prospectively investigated 100 consecutive varicose vein surgery patients for postoperative thrombosis. Ascending pressure phlebography (APP) and colour coded duplex sonography (CCDS) were performed before and 10 to 21 days after the stripping operation in 100 and 70 patients, respectively. APP revealed no postoperative deep vein thrombosis in all 100 limbs investigated (0 percent; 95 percent confidence interval: 0 to 2.95). With regard to epifascial vein reflux there was good agreement between APP and CCDS (quadratic weighted kappa: 0.76). In 67 out of 73 superficial veins investigated excellent agreement of diagnostic accuracy was found for both diagnostic procedures (91.78 percent; 95 percent confidence interval: 82.96 to 96.92). 
We conclude that thrombotic risk of varicose vein surgery is low in properly selected patients. CCDS provides a high degree of accuracy in diagnosis of reflux and regular vein morphology and should therefore replace APP; however, APP does remain essential in the preoperative workup of atypical anatomical variants. abstract_id: PUBMED:23482541 Prevention of venous thromboembolism in patients undergoing surgical treatment of varicose veins. Introduction: There is no consensus among surgeons with regard to prevention of venous thromboembolism (VTE) in patients undergoing surgical treatment of varicose veins. We performed a systematic review of the available literature. Methods: We systematically searched the online database from PubMed for studies about the incidence of VTE and thromboprophylaxis in varicose vein surgery. We included 13 papers for review. Results: The incidence of VTE after varicose vein surgery remains unclear. Most retrospective case series report an incidence of deep venous thrombosis (DVT) of approximately 1%, based on a clinical diagnosis. However, three prospective studies have systematically detected DVT by means of duplex ultrasound and showed that the true incidence might be 5-10 times higher than expected on a clinical basis. Discussion More data on the incidence of VTE, and the need for postoperative thromboprophylaxis are necessary to formulate evidence-based clinical guidelines. Therefore, high-quality randomised clinical trials, with high numbers of included patients, and ideally comparing prophylaxis to placebo are warranted. abstract_id: PUBMED:23993109 Low-molecular-weight heparin for prevention of venous thromboembolism after varicose vein surgery in moderate-risk patients: a randomized, controlled trial. Background: Deep vein thrombosis (DVT) after varicose vein surgery (VVS) is not well recognized. Observational studies have yielded variable estimates of the risk, but evidence from randomized trials is lacking. Our aim was to compare the use of thromboprophylaxis with and without low-molecular-weight heparin (LMWH). Methods: We prospectively randomized moderate-risk patients scheduled for VVS in two arms. The first group of patients received bemiparin for 10 days at a prophylactic dose, early ambulation, and compression therapy for 3 months; the second group received early ambulation and compression therapy alone. The primary efficacy outcome was the composite of DVT (symptomatic or asymptomatic detected by mandatory, bilateral duplex scan). Secondary efficacy and primary safety end points were superficial venous thrombosis, postoperative bleeding, and clinical pulmonary embolism (PE). We assessed transient and permanent risk factors for venous thromboembolism. Results: Two-hundred sixty-two patients were eligible for evaluation. There were no cases of DVT. There were also no cases of clinical PE, death, or major bleeding. No significant differences were seen between groups in the rates of bleeding episodes. Conclusion: The data show no superiority of a short-term regimen of LMWH and early ambulation and compression therapy, as compared with early ambulation and compression therapy alone, in patients undergoing VVS in a moderate-risk population. abstract_id: PUBMED:10449068 Safety of perioperative subcutaneous heparin for prophylaxis of venous thromboembolism in patients undergoing craniotomy. 
Objective: To determine whether perioperative subcutaneous heparin is safe to use for patients undergoing craniotomy and to determine the incidence of venous thromboembolism in patients undergoing craniotomy. Methods: Perioperative prophylaxis with subcutaneous heparin, 5000 U every 12 hours, was begun at induction of anesthesia for craniotomy and continued for 7 days postoperatively or until the patient was ambulating. Entry criteria to the study included patient age over 18 years and no evidence of deep vein thrombosis (DVT) preoperatively as judged by lower limb duplex ultrasound. Patients were excluded if they had duplex evidence of DVT or clinical evidence of pulmonary embolus (PE) preoperatively, had hypersensitivity to heparin or related products, had sustained a penetrating head injury, or refused informed consent. Any patient undergoing craniotomy was eligible, including patients with a ruptured aneurysm or arteriovenous malformation and those with spontaneous intracranial hemorrhage. Patients underwent duplex study 1 week after surgery and 1 month of clinical follow-up. Records were also kept on 68 nonstudy patients who refused consent. All patients were treated with lower limb pneumatic compression devices. Results: One hundred six patients were treated. No differences were noted between study and nonstudy patients in some individual risk factors for DVT or PE, such as obesity, smoking, paralysis, infection, pregnancy or postpartum state, varicose veins, heart failure, or previous DVT or PE. Significantly more (43 of 106) patients in the study group had a history of risk factors for DVT or PE, particularly malignancy, however, compared with nonstudy patients (20 of 68 patients; chi2, P < 0.01). There were no differences between groups in intraoperative blood loss, transfusion requirements, or postoperative platelet counts. Four clinically significant hemorrhages occurred during surgery in patients receiving heparin. Three resulted from intraoperative aneurysm rupture and one from intraventricular bleeding during resection of an arteriovenous malformation. These events were believed to be related to known complications of these operations, not to heparin. Of the study patients, two developed symptomatic DVT and one developed a nonfatal PE during the 1-month postoperative period. One additional study patient developed DVT below the popliteal veins, which was not treated. Four study patients developed DVT 1 to 2 months after surgery. In nonstudy patients, three developed DVT and two developed PE (one fatal, one nonfatal). Conclusion: Perioperative heparin may be safe to administer to patients undergoing craniotomy, but a larger study is needed to demonstrate efficacy. abstract_id: PUBMED:19197095 Day surgery varicose vein treatment using endovenous laser. Objective: To examine the safety and efficacy of endovenous laser obliteration to treat varicose vein in a day surgery setting, using sedation and local anaesthesia. Design: Prospective study. Setting: Day surgery centre in a regional hospital in Hong Kong. Patients: A total of 24 patients with duplex-confirmed long saphenous vein insufficiency underwent endovenous laser (940 nm) varicose vein treatment from July to November 2007 in a single day surgery centre. Adjuvant phlebectomy and injection sclerotherapy were performed in the same session if indicated. All patients had postprocedural venous duplex scan and clinic assessment on day 7 and day 10 respectively. 
Main Outcome Measures: Procedure success rate, unplanned hospital admissions and re-admissions, major complications, and long saphenous vein obliteration rate. Results: A total of 31 limbs of the 24 patients were treated with endovenous laser varicose vein treatment under local anaesthesia and sedation. The procedural success rate was 100%. All but two patients were admitted on the day of treatment and none were re-admitted. The patients' mean visual analogue pain score for the whole procedure was 2.3 (standard deviation, 1.5; range, 0-5). Post-procedural duplex scans showed 100% thrombosis of the treated long saphenous veins with no deep vein thrombosis. There were no skin burns or instances of thrombophlebitis. Induration of the treated long saphenous vein was relatively common (54%). The majority of the patients (54%) experienced mild discomfort in the early postoperative period. Conclusion: Endovenous laser varicose vein treatment performed under local anaesthesia and sedation in a day surgery setting is safe, and yields satisfactory clinical and duplex outcomes. abstract_id: PUBMED:27982166 Randomized trial of radiofrequency ablation versus conventional surgery for superficial venous insufficiency: if you don't tell, they won't know. Objectives:: This study compared radiofrequency ablation versus conventional surgery in patients who had not undergone previous treatment for bilateral great saphenous vein insufficiency, with each patient serving as his own control. Method:: This was a randomized controlled trial that included 18 patients and was carried out between November 2013 and May 2015. Each of the lower limbs of each patient was randomly assigned to undergo either radiofrequency ablation or conventional surgery. Clinical features (hyperpigmentation, hematoma, aesthetics, pain, skin burn, nerve injury, and thrombophlebitis) were evaluated at one week, one month, and six months postoperatively. Hemodynamic assessments (presence of resection or occlusion of the great saphenous vein and recurrent reflux in the sapheno-femoral junction and in the great saphenous vein) were performed at one month, six months, and 12 months postoperatively. The independent observer (a physician not involved in the original operation), patient, and duplex ultrasonographer were not made aware of the treatment done in each case. Clinicaltrials.gov: NCT02588911. Results:: Among the clinical variables analyzed, only the aesthetic evaluation by the physicians was significant, with radiofrequency ablation being considered better than conventional surgery (average, 0.91 points higher: standard deviation: 0.31; 95% confidence interval: -1.51, -0.30; p=0.003). However, in our study, we observed primary success rates of 80% for radiofrequency ablation and 100% for conventional surgery. Conclusions:: If the physician is not required to inform the patient as to the technique being performed, the patient will not be able to identify the technique based on the signs and symptoms. Our study revealed that both techniques led to high levels of patient satisfaction, but our results favor the choice of conventional surgery over radiofrequency ablation, as patients who underwent conventional surgery had better hemodynamic assessments. Answer: The question of whether all patients should undergo postoperative duplex imaging to detect deep vein thrombosis (DVT) after varicose vein surgery is debated in the literature. 
Some studies suggest that the incidence of DVT following varicose vein surgery is higher than previously thought, with prospective duplex screening identifying DVT in 5% of patients compared to a clinical incidence of approximately 1% (PUBMED:15386324). However, the clinical significance of these DVTs is unclear, as many are confined to the calf veins and may resolve without long-term consequences (PUBMED:15386324). A study evaluating the use of clinical indications (leg swelling) to determine the need for duplex imaging after varicose vein surgery found that this approach detected all clinically significant DVTs and generated manageable workloads for vascular laboratories (PUBMED:18034276). This suggests that routine postoperative duplex imaging for all patients may not be necessary if clinical signs are used as a triage tool. Duplex ultrasonography is a highly reliable method to detect proximal thrombosis in clinically suspected patients, but it may be more difficult to detect late sequelae of thrombosis (PUBMED:1861089). Additionally, despite preoperative duplex localization of the saphenopopliteal junction, saphenopopliteal disconnection proved to be an unreliable technique in one series, with a significant number of surgical failures and postoperative complications, including DVT (PUBMED:12027985). The use of thromboprophylaxis with low-molecular-weight heparin (LMWH) in moderate-risk patients undergoing varicose vein surgery did not show superiority over early ambulation and compression therapy alone in preventing DVT (PUBMED:23993109). This indicates that routine postoperative pharmacological prophylaxis may not be necessary in all patients, which could also influence the decision on whether to perform routine postoperative duplex imaging. In conclusion, while the incidence of DVT after varicose vein surgery may be higher than clinically observed, the necessity of routine postoperative duplex imaging for all patients is not supported by all studies. A selective approach based on clinical indications may be sufficient and more cost-effective (PUBMED:18034276). However, more data and high-quality randomized clinical trials are needed to formulate evidence-based guidelines on this issue (PUBMED:23482541).
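One figure in the abstracts above deserves a brief arithmetical note: PUBMED:7495065 reports 0 postoperative deep vein thromboses in 100 limbs with a 95% confidence interval of 0 to 2.95 per cent. That upper bound is consistent with a one-sided exact binomial limit for zero observed events (closely related to the "rule of three"); the short sketch below reproduces the arithmetic, on the assumption that this is indeed how the interval was computed.

```python
# Illustrative only: the "0 per cent (95% CI 0 to 2.95)" reported for 0 DVTs in
# 100 limbs (PUBMED:7495065) matches a one-sided exact binomial upper bound:
# solve (1 - p)^n = alpha for p when zero events are observed.
n = 100
alpha = 0.05
upper = 1 - alpha ** (1 / n)           # exact one-sided 95% upper bound
rule_of_three = 3 / n                  # common approximation for zero events
print(f"exact upper bound = {100 * upper:.2f}%")          # ~2.95%
print(f"rule of three     = {100 * rule_of_three:.2f}%")  # 3.00%
```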
Instruction: Can the gateway hypothesis, the common liability model and/or, the route of administration model predict initiation of cannabis use during adolescence? Abstracts: abstract_id: PUBMED:21185527 Can the gateway hypothesis, the common liability model and/or, the route of administration model predict initiation of cannabis use during adolescence? A survival analysis--the TRAILS study. Purpose: There is substantial research linking tobacco and alcohol use to subsequent cannabis use, yet the specificity of this relationship is still under debate. The aim of this study was to examine which substance use model--the gateway hypothesis, the common liability (CL) model and/or the route of administration model--best explains the relationship between early onset of tobacco and alcohol use and subsequent cannabis use initiation. Methods: We used data from 2,113 (51% female) Dutch adolescents who participated in three consecutive assessment waves (mean age: 11.09, 13.56, and 16.27 years, respectively) of the TRacking Adolescents' Individual Lives Survey study. (Pre)adolescent cannabis, tobacco and alcohol use was assessed using the Youth Self-Report and a TRacking Adolescents' Individual Lives Survey developed questionnaire. Results: We found that, during adolescence, early onset of tobacco use does not pose a significantly higher risk of initiating cannabis use than early onset alcohol use. Therefore, we can rule out the route of administration model. Moreover, we found that adolescents who reported early onset comorbid use of both tobacco and alcohol have a higher likelihood to initiate cannabis use than adolescents who have tried either tobacco or alcohol. The gateway hypothesis is not broad enough to explain this finding. Therefore, the CL model best predicts our findings. Conclusion: Future research on adolescent cannabis initiation should focus on testing the robustness of the CL model. Furthermore, identifying adolescents who use both tobacco and alcohol, before the age of 13, may help to curtail the onset of cannabis use. abstract_id: PUBMED:26431216 The Gateway Hypothesis, Common Liability to Addictions or the Route of Administration Model A Modelling Process Linking the Three Theories. Background: The aim of this study was to describe the transitions between tobacco (T), cannabis (C) and other illicit drugs (OIDs) initiations, to simultaneously explore several substance use theories: gateway theory (GT), common liability model (CLM) and route of administration model (RAM). Methods: Data from 2 French nationwide surveys conducted in 2005 and 2010 were used (16,421 subjects aged 18-34). Using reported ages at initiations, we reconstituted a retrospective cohort describing all initiation sequences between T, C and OID. Transition probabilities between the substances were computed using a Markov multi-state model that also tested the effect of 2 latent variables (item response theory scores reflecting propensity for early onset and further substance use) on all transitions. Results: T initiation was associated with increased likelihood of subsequent C initiation, but the reverse relationship was also observed. While the most likely initiation sequence among subjects who initiated the 3 groups of substances was the 'gateway' sequence T → C → OID, this pattern was not associated with substance use propensity more than alternative sequences. Early use propensity was associated with the 'gateway' sequence but also with some alternative ones beginning with T, C or OID.
Conclusion: Although the gateway sequence appears as the most likely pattern, in line with GT, the effects of early onset and substance use propensities were also observed for some alternative sequences, which is more in line with CLM. RAM could explain reciprocal interactions observed between T and C. This suggests shared influences of individual (personality traits) and environmental (substance availability, peer influence) characteristics. abstract_id: PUBMED:34590374 Using Mendelian randomization to explore the gateway hypothesis: possible causal effects of smoking initiation and alcohol consumption on substance use outcomes. Background And Aims: Initial use of drugs such as tobacco and alcohol may lead to subsequent more problematic drug use-the 'gateway' hypothesis. However, observed associations may be due to a shared underlying risk factor, such as trait impulsivity. We used bidirectional Mendelian randomization (MR) to test the gateway hypothesis. Design: Our main method was inverse-variance weighted (IVW) MR, with other methods included as sensitivity analyses (where consistent results across methods would raise confidence in our primary results). MR is a genetic instrumental variable approach used to support stronger causal inference in observational studies. Setting And Participants: Genome-wide association summary data among European ancestry individuals for smoking initiation, alcoholic drinks per week, cannabis use and dependence, cocaine and opioid dependence (n = 1749-1 232 091). Measurements: Genetic variants for exposure. Findings: We found evidence of causal effects from smoking initiation to increased drinks per week [(IVW): β = 0.06; 95% confidence interval (CI) = 0.03-0.09; P = 9.44 × 10^-6], cannabis use [IVW: odds ratio (OR) = 1.34; 95% CI = 1.24-1.44; P = 1.95 × 10^-14] and cannabis dependence (IVW: OR = 1.68; 95% CI = 1.12-2.51; P = 0.01). We also found evidence of an effect of cannabis use on the increased likelihood of smoking initiation (IVW: OR = 1.39; 95% CI = 1.08-1.80; P = 0.01). We did not find evidence of an effect of drinks per week on other substance use outcomes, except weak evidence of an effect on cannabis use (IVW: OR = 0.55; 95% CI = 0.16-1.93; P-value = 0.35). We found weak evidence of an effect of opioid dependence on increased drinks per week (IVW: β = 0.002; 95% CI = 0.0005-0.003; P = 8.61 × 10^-3). Conclusions: Bidirectional Mendelian randomization testing of the gateway hypothesis reveals that smoking initiation may lead to increased alcohol consumption, cannabis use and cannabis dependence. Cannabis use may also lead to smoking initiation and opioid dependence to alcohol consumption. However, given that tobacco and alcohol use typically begin before other drug use, these results may reflect a shared risk factor or a bidirectional effect for cannabis use and opioid dependence. abstract_id: PUBMED:34554350 A Gateway That Never Was. This brief communication responds to the article by Rajabi et al., recently published in Behavior Genetics. To test the hypothesis of cigarette smoking as a "gateway" for subsequent opium use and contrast it with the common liability model, Mendelian randomization analysis was applied to data obtained from an Iranian sample, using CHRNA3 rs1051730 as an instrumental variable. It is doubtful, however, if the assumptions of instrumental variable analysis hold in this case. The authors misstate both the gateway hypothesis and the common liability model.
The article has many other deficiencies that diminish the veracity of its categorical conclusions that accept the causal interpretation of the "gateway hypothesis" and reject the common liability model, with which the data are fully consistent. abstract_id: PUBMED:26826476 Relationships between substance initiation sequence and further substance use: A French nationwide retrospective study. The Gateway theory (GT) proposes that tobacco or alcohol use lead to cannabis use, which can itself be followed by other illicit drugs (OID) onset. Aim of this study was to evaluate if the order of initiation sequence could influence further substance use. Data from a 2010 population-based survey were used (22,774 subjects aged 15-64). Using reported ages at initiations, 7 sequences were identified: initiation of tobacco only (T), cannabis or OID only, tobacco followed by cannabis (T-C), cannabis followed by tobacco (C-T), alternative 2-substance sequences, gateway sequence (T-C-OID) and 3-substance alternative sequences. Logistics regressions were performed to study the impact of sequence on further use (tobacco, alcohol, cannabis and OID), and substance use disorders (SUD) (tobacco, alcohol and cannabis). The most observed sequences were T (45.5%), T-C (20.5%), C-T (5.1%) and T-C-OID (3.5%). Further use and SUD likelihoods, whatever the substance considered, increased with the number of substances previously initiated. However, for a same number of substances initiated, current use and SUD likelihoods did not significantly vary according to sequence. Polysubstance initiation appears as a better predictor of further use and SUD than the initiation sequence, questioning the GT and being more in line with a common liability to substance use. abstract_id: PUBMED:28511099 The social exigencies of the gateway progression to the use of illicit drugs from adolescence into adulthood. Background: There is limited empirical integration between peer clustering theory and the Gateway framework. The goal of the present study was to test the hypothesis that friendship associations partly predict gateway escalations in the use of drugs from adolescence to adulthood. Method: This longitudinal study analyzed 3 waves of data from a community sample of 711 male and female participants without a history of illicit drug use reporting drug use at age 17, 22, and 27. Substance use assessments including tobacco, alcohol, cannabis, onset and abuse/dependence tendency of illicit drugs other than cannabis (i.e., cocaine, methamphetamine, and opiates), and friends' reported use of illicit drugs. Structural equation modeling was used to test the hypothesized model. Results: Participants' cannabis use level at age 17 was positively associated with perceived friends' drug use at age 22, which in turn predicted participants' onset of illicit drug use between ages 22 and 27. Moreover, progression of tobacco use throughout age 17 to 22 was associated with an increased onset of illicit drug use between ages 22 and 27. Apart for an effect of cannabis use at age 22 on abuse and dependence tendency to various drugs at age 28, results were similar. Conclusions: During this period of development, the availability and selection of drug-using friends contributes to the progression to potentially more rewarding and damaging illicit drugs. These findings suggest the need to attend to the peer ecology in prevention and support the common practice of using abstaining peers in treatment for drug dependence. 
abstract_id: PUBMED:30797762 Examination of the Gateway Hypothesis in a rat model. The Gateway Hypothesis is based on epidemiological data and states there is a progression of drug use from use of a softer drug (e.g., nicotine) to use of a harder drug (e.g., morphine). It has been suggested that this sequence is causal and is relevant to drug prevention policies and programs. The present experiment used an animal model to investigate whether the Gateway Hypothesis involves a causal progression. Subjects were 16 female and 16 male Sprague-Dawley rats with ages comparable to late adolescence/emerging adulthood in humans. Subjects received nicotine (6 mg/kg/day) or saline for 21 days SC via osmotic minipump and subsequently were allowed to self-administer IV morphine (0.5 mg/kg/injection, 3 h/day) for 10 days. Results did not confirm the Gateway Hypothesis. In fact, rats pre-exposed to nicotine self-administered significantly less morphine than did rats pre-exposed to saline. These findings may be relevant to future drug use prevention policies and programs. abstract_id: PUBMED:29430806 Cannabis use is associated with lower rates of initiation of injection drug use among street-involved youth: A longitudinal analysis. Introduction And Aims: Street-involved youth are known to be at elevated risk of initiating injection drug use. However, the impact of so-called 'gateway' drugs, such as cannabis, on injection initiation is unknown. The objective of this study was to examine the association between cannabis use and initiation of injection drug use among a prospective cohort of street-involved youth in Vancouver, Canada. Design And Methods: Data for this study were collected from the At-Risk Youth Study. From September 2005 to May 2015, participants aged 14-26 who reported illicit drug use were recruited into this open prospective cohort study. An extended Cox regression model with time-updated covariates was used to identify factors independently associated with injection initiation. Results: During the study period, 481 street-involved youth were included in this study. Of these, 228 (47.4%) reported at least daily cannabis use, and 103 (21.4%) initiated injection drug use. In a multivariable analysis, ≥daily cannabis use was associated with slower rates of injection initiation (adjusted relative hazard 0.66, 95% confidence interval 0.45-0.98; P = 0.038). Sub-analyses revealed that cannabis use was negatively associated with initiation of injection stimulants but not initiation of injection opioids. Discussion And Conclusions: Given the expansion of cannabis legalisation throughout North America, it is encouraging that cannabis use was associated with slower time to initiation of injection drug use in this cohort. This finding challenges the view of cannabis as a gateway substance that precipitates the progression to using harder and more addictive drugs. abstract_id: PUBMED:35410577 Prevalence of Cannabis Use and Cannabis Route of Administration among Massachusetts Adults in Buprenorphine Treatment for Opioid Use Disorder. Background: Recent prevalence estimates of cannabis use among individuals receiving medication treatment for OUD (MOUD) are lacking, and no study has characterized cannabis route of administration (cROA) in this population. These knowledge gaps are relevant because cannabis' effects and health outcomes vary by cROA and the availability and perceptions of cROA (e.g., vaping devices) are changing. 
Methods: The Vaping In Buprenorphine-treated patients Evaluation (VIBE) cross-sectional survey assessed the prevalence and correlates of cannabis use and cROA among adults receiving buprenorphine MOUD from 02/20 to 07/20 at five community health centers in Massachusetts, a state with legal recreational and medical cannabis use. Results: Among the 92/222 (41%) respondents reporting past 30-day cannabis use, smoking was the most common cROA (75%), followed by vaping (38%), and eating (26%). Smoking was more often used as a single cROA vs. in combination others (p = 0.01), whereas vaping, eating, and dabbing were more often used in combination with another cROA (all p < 0.05). Of the 39% of participants reporting multiple cROA, smoking and vaping (61%), and smoking and eating (50%), were the most prevalent combinations. Nonwhite race (vs. white) and current cigarette smoking (vs. no nicotine use) were associated with past 30-day cannabis use in multiple logistic regression. Conclusions: Prevalence of past 30-day cannabis use among individuals receiving buprenorphine MOUD in Massachusetts in 2020 was nearly double the prevalence of cannabis use in Massachusetts' adult general population in 2019 (21%). Our data are consistent with state and national data showing smoking as the most common cROA. abstract_id: PUBMED:37224582 Testing the cannabis gateway hypothesis in a national sample of Spanish adolescents. Background: The gateway hypothesis holds that the use of legal substances (i.e., tobacco and alcohol) increases the risk of initiating in cannabis use which, in turn, increases the chances of using other illegal substances. The validity of this hypothesis has been the subject of intense debate in recent years, finding sequences with a different order. Moreover, this pattern has been scarcely studied in Spain, where characteristics related to cannabis use are meaningfully different to other countries. This study aims to examine the gateway effects of cannabis towards other legal and illegal substances in Spanish adolescents. Material And Methods: Data were obtained from the Ministry of Health in Spain, through a representative survey of addictive behaviors of 36,984 Spanish adolescents (Mage = 15.7, SD = 1.2, 51.4% females). Results: Lifetime cannabis use increased the likelihood of later legal substance use, both tobacco (OR = 2.0; 95%CI 1.81, 2.22) and alcohol (OR = 1.93; 95%CI 1.61, 2.31), as well as illegal substances (OR = 5.36; 95%CI 4.80, 5.98) and polysubstance (OR = 18.24; 95%CI 14.63, 22.73). Early age of cannabis use onset significantly increased the likelihood of subsequent legal and illegal substance use (ORs between 1.82 and 2.65). Conclusions: These findings confirm and expand the available evidence on cannabis as a gateway substance. These results can help to drive preventive strategies for substance use in Spanish adolescents. Answer: The gateway hypothesis, common liability model, and the route of administration model are three theories that have been proposed to explain the initiation of cannabis use during adolescence. The gateway hypothesis suggests that the use of less harmful drugs like tobacco and alcohol may lead to the use of more harmful drugs such as cannabis and other illicit substances (PUBMED:26431216). However, the common liability model posits that there is a shared underlying risk factor, such as trait impulsivity, that predisposes individuals to substance use, rather than a causal progression from one drug to another (PUBMED:34554350). 
The route of administration model suggests that the method by which a substance is consumed may influence the likelihood of progressing to other substances (PUBMED:26431216). Research has shown mixed results regarding these models. A study using data from the TRacking Adolescents' Individual Lives Survey (TRAILS) found that early onset of tobacco use does not pose a significantly higher risk of initiating cannabis use than early onset alcohol use, ruling out the route of administration model. It also found that adolescents who reported early onset comorbid use of both tobacco and alcohol have a higher likelihood to initiate cannabis use than adolescents who have tried either substance alone, suggesting that the common liability model best predicts the initiation of cannabis use (PUBMED:21185527). Another study using French nationwide surveys found that while the 'gateway' sequence (tobacco → cannabis → other illicit drugs) was the most likely pattern, the effects of early onset and substance use propensities were also observed for some alternative sequences, which aligns more with the common liability model. The route of administration model could explain reciprocal interactions observed between tobacco and cannabis (PUBMED:26431216). Mendelian randomization studies have also been used to explore these hypotheses, with one study finding that smoking initiation may lead to increased alcohol consumption, cannabis use, and cannabis dependence, while cannabis use may lead to smoking initiation (PUBMED:34590374). However, these results may reflect a shared risk factor or a bidirectional effect for cannabis use and opioid dependence. In summary, while the gateway hypothesis has some support, the common liability model appears to be a more robust predictor of cannabis initiation during adolescence, and the route of administration model may have some explanatory power but is not the primary predictor (PUBMED:21185527; PUBMED:26431216; PUBMED:34590374).
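Several of the abstracts above (notably PUBMED:34590374) rest on inverse-variance weighted (IVW) Mendelian randomization. Below is a minimal sketch of how a fixed-effect IVW estimate is combined from per-variant summary statistics; the SNP-level numbers are synthetic placeholders, not values from the cited study.

import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    # Per-variant Wald ratios and their first-order standard errors.
    beta_exposure = np.asarray(beta_exposure, dtype=float)
    beta_outcome = np.asarray(beta_outcome, dtype=float)
    se_outcome = np.asarray(se_outcome, dtype=float)
    wald = beta_outcome / beta_exposure
    wald_se = se_outcome / np.abs(beta_exposure)
    # Fixed-effect inverse-variance weighting across variants.
    weights = 1.0 / wald_se ** 2
    beta_ivw = np.sum(weights * wald) / np.sum(weights)
    se_ivw = np.sqrt(1.0 / np.sum(weights))
    return beta_ivw, (beta_ivw - 1.96 * se_ivw, beta_ivw + 1.96 * se_ivw)

# Synthetic SNP-level associations (exposure: smoking initiation; outcome: cannabis use).
beta_x = [0.020, 0.030, 0.015, 0.025]
beta_y = [0.010, 0.018, 0.006, 0.012]
se_y = [0.004, 0.005, 0.003, 0.004]

b, ci = ivw_estimate(beta_x, beta_y, se_y)
print(f"IVW log-OR = {b:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f}); OR = {np.exp(b):.2f}")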
Instruction: Ventricular arrhythmias and implantable cardioverter-defibrillator therapy in patients with continuous-flow left ventricular assist devices: need for primary prevention? Abstracts: abstract_id: PUBMED:24862733 Clinical experience of combined HeartWare ventricular assist device and implantable cardioverter defibrillator therapy. Introduction: The HeartWare continuous flow ventricular assist device (HVAD) is used in an increasing number of heart failure patients. In those patients, ventricular arrhythmias (VAs) are common and, consequently, many patients already have an implanted implantable cardioverter defibrillator (ICD) in place or receive ICD implantation after left ventricular assist device implantation. However, limited data on feasibility and necessity of combined ICD and HVAD therapy are available. In this study we present our technical and clinical experience. Methods And Results: Between 01/2010 and 06/2013, 41 patients received HVAD implantation. Twenty-six HVAD patients who already had an ICD device placed prior to HVAD implantation or received ICD implantation afterwards were enrolled in this study. Peri- and postoperative complications as well as ICD interrogations were documented and analyzed retrospectively. Mean patients age was 58.4 ± 12.6 years; 88.5% of patients were male. During mean follow-up of 12.2 ± 8.9 months, appropriate ICD interventions occurred in 9 patients (34.6%) due to ventricular tachyarrhythmia (n = 7) or ventricular fibrillation (n = 2). An inappropriate ICD intervention was seen in 1 patient (3.9%) due to tachycardic atrial fibrillation. Patients on HVAD with a history of VAs (n = 13) had a significantly higher incidence of ICD interventions compared to patients with primary prophylactic indication for ICD (n = 13; 53.8% vs. 7.7%; P = 0.015). No disturbance of ICD function was seen after HVAD implantation. Conclusion: Combined ICD and HVAD therapy was safe and feasible, without electromagnetic interference between ICD and ventricular assist device. The incidence of ICD interventions was high in patients with a history of VAs, but low in patients with ICD implantation for primary prevention. abstract_id: PUBMED:31768918 Current Review of Implantable Cardioverter Defibrillator Use in Patients With Left Ventricular Assist Device. Purpose Of Review: While use of implantable cardioverter defibrillator (ICD) in patients with heart failure (HF) and reduced ejection fraction is recommended, their role in patients with left ventricular assist device (LVAD) remains controversial, especially with continuous flow (CF) devices. Recent Findings: Ventricular arrhythmias (VA) in LVAD patients are frequent and associated with worse outcomes, especially postoperatively. Data on the role of ICDs in LVAD patients are scarce, based on case reports or small retrospective series. While the majority of LVAD patients currently have an ICD, it seems that some might not derive any benefit, with additional risks in terms of inappropriate shocks, psychological distress, and infections. Some CF-LVAD patients are at high risk of VA and hemodynamic collapse; under those circumstances, an ICD might provide benefits. A randomized-controlled trial of routine ICD implantation in CF-LVAD would be needed to clarify their impact on survival in low risk patients. abstract_id: PUBMED:30455836 Ineffective ICD Shocks for Ventricular Fibrillation in a Patient with a Left Ventricular Assist Device: Continuous Flow During the Electrical Storm. 
Ventricular arrhythmias are life-threatening and can serve as a precursor to sudden death. They are a common presentation in patients with severely reduced left ventricular (LV) function. The use of an implantable cardioverter defibrillator (ICD) is seen as an acceptable therapy against malignant ventricular arrhythmias. In patients with LV heart failure, a left ventricular assist device (LVAD) can provide pulsatile flow to mimic the cardiac systolic and diastolic function. We report a case of a 38-year-old male with a LVAD who presented to the emergency department due to syncope and frequent ICD discharges. There were documented episodes of ventricular fibrillation and a failed defibrillator threshold test. abstract_id: PUBMED:23643502 Ventricular arrhythmias and implantable cardioverter-defibrillator therapy in patients with continuous-flow left ventricular assist devices: need for primary prevention? Objectives: This study sought to evaluate the prevalence and significance of ventricular arrhythmia (VA) and the role of an implantable cardioverter-defibrillator (ICD) in patients supported by a continuous-flow left ventricular assist device (CF-LVAD). Background: VAs are common in patients supported by CF-LVADs but prospective data to support the routine use of ICDs in these patients are lacking. Methods: All patients supported by long-term CF-LVAD receiving care at our institution were enrolled. The ICDs were interrogated at baseline and throughout prospective follow-up. The VA was defined as ventricular tachycardia/fibrillation lasting >30 s or effectively terminated by appropriate ICD tachytherapy. The primary outcome was the occurrence of VA >30 days after CF-LVAD implantation. Results: Ninety-four patients were enrolled; 77 had an ICD and 17 did not. Five patients with an ICD had it deactivated or a depleted battery not replaced during the study. Twenty-two patients had a VA >30 days after LVAD implantation. Pre-operative VA was the major predictor of post-operative arrhythmia. Absence of pre-operative VA conferred a low risk of post-operative VA (4.0% vs. 45.5%; p < 0.001). No patients discharged from the hospital without an ICD after CF-LVAD implantation died during 276.2 months of follow-up (mean time without ICD, 12.7 ± 12.3 months). Conclusions: Patients with pre-operative VA are at risk of recurrent VA while on CF-LVAD support and should have active ICD therapy to minimize sustained VA. Patients without pre-operative VA are at low risk and may not need active ICD therapy. abstract_id: PUBMED:29475875 Prospective Randomized Evaluation of Implantable Cardioverter-Defibrillator Programming in Patients With a Left Ventricular Assist Device. Background: Ventricular arrhythmias are common in patients with left ventricular assist devices (LVADs) but are often hemodynamically tolerated. Optimal implantable cardioverter defibrillator (ICD) tachy-programming strategies in patients with LVAD have not been determined. We sought to determine if an ultra-conservative ICD programming strategy in patients with LVAD affects ICD shocks. Methods And Results: Adult patients with an existing ICD undergoing continuous flow LVAD implantation were randomized to standard ICD programming by their treating physician or an ultra-conservative ICD programming strategy utilizing maximal allowable intervals to detection in the ventricular fibrillation and ventricular tachycardia zones with use of ATP. Patients with cardiac resynchronization therapy (CRT) devices were also randomized to CRT ON or OFF. 
Patients were followed a minimum of 6 months. The primary outcome was time to first ICD shock. Among the 83 patients studied, we found no statistically significant difference in time to first ICD shock or total ICD shocks between groups. In the ultra-conservative group 16% of patients experienced at least one shock compared with 21% in the control group (P=0.66). There was no difference in mortality, arrhythmic hospitalization, or hospitalization for heart failure. In the 41 patients with CRT ICDs fewer shocks were observed with CRT-ON but this was not statistically significant: 10% of patients with CRT-ON (n=21) versus 38% with CRT-OFF (n=20) received shocks (P=0.08). Conclusions: An ultra-conservative programming strategy did not reduce ICD shocks. Programming restrictions on ventricular tachycardia and ventricular fibrillation zone therapy should be reconsidered for the LVAD population. The role of CRT in patients with LVAD warrants further investigation. Clinical Trial Registration: URL: https://www.clinicaltrials.gov. Unique identifier: NCT01977703. abstract_id: PUBMED:30236390 Predictors and Clinical Impact of Late Ventricular Arrhythmias in Patients With Continuous-Flow Left Ventricular Assist Devices. Objectives: This study aimed to evaluate the incidence, clinical impact, and predictors of late ventricular arrhythmias (VAs) in left ventricular assist device (LVAD) recipients aiming to clarify implantable cardioverter-defibrillator (ICD) indications. Background: The arrhythmic risk and need for ICD in patients implanted with an LVAD are not very well known. Methods: This observational study was conducted in 19 centers between 2006 and 2016. Late VAs were defined as sustained ventricular tachycardia or fibrillation occurring >30 days post-LVAD implantation, without acute reversible cause and requiring appropriate ICD therapy, external electrical shock, or medical therapy. Results: Among 659 LVAD recipients, 494 (median 58.9 years of age; mean left ventricular ejection fraction 20.7 ± 7.4%; 73.1% HeartMate II, 18.6% HeartWare, 8.3% Jarvik 2000) were discharged alive from hospital and included in the final analysis. Late VAs occurred in 133 (26.9%) patients. Multivariable analysis identified 6 independent predictors of late VAs: VAs before LVAD implantation, atrial fibrillation before LVAD implantation, idiopathic etiology of the cardiomyopathy, heart failure duration >12 months, early VAs (<30 days post-LVAD), and no angiotensin-converting enzyme inhibitors during follow-up. The "VT-LVAD score" was created, identifying 4 risk groups: low (score 0 to 1), intermediate (score 2 to 4), high (score 5 to 6), and very high (score 7 to 10). The rates of VAs at 1 year were 0.0%, 8.0%, 31.0% and 55.0%, respectively. Conclusions: Late VAs are common after LVAD implantation. The VT-LVAD score may help to identify patients at risk of late VAs and guide ICD indications in previously nonimplanted patients. (Determination of Risk Factors of Ventricular Arrhythmias [VAs] after implantation of continuous flow left ventricular assist device with continuous flow left ventricular assist device [CF-LVAD] [ASSIST-ICD]; NCT02873169). abstract_id: PUBMED:35587303 Primary prevention implantable cardioverter-defibrillator use in non-ischemic dilated cardiomyopathy based on arrhythmic risk stratification and left ventricular reverse remodeling prediction. 
Sudden cardiac death (SCD) and significant ventricular arrhythmias in patients with dilated cardiomyopathy (DCM) have been markedly reduced over the last couple of decades as a result of the advances in pharmacological and non-pharmacological treatment. Primary prevention implantable cardioverter-defibrillator (ICD) plays an important role in the treatment of patients at risk of SCD caused by ventricular arrhythmias. However, the arrhythmic risk stratification in patients with DCM remains extremely challenging, and the decision for primary prevention ICD implantation based on left ventricular ejection fraction (LVEF) solely appears to be insufficient. This review provides an update on current evidence for primary prevention ICD implantation, arrhythmic risk stratification, and left ventricular reverse remodeling (LVRR) prediction in patients with DCM in addition to most recent guideline recommendations for primary prevention ICD implantation in DCM patients and a proposed multiparametric algorithm based on arrhythmic risk stratification and left ventricular reverse remodeling (LVRR) prediction to better identify patients who are likely to benefit from primary prevention ICD. abstract_id: PUBMED:37332585 Prognostic impact of implantable cardioverter defibrillators and associated adverse events in patients with continuous flow left ventricular assist devices. Objectives: Implantation of implantable cardioverter defibrillators (ICD) reduces the risk of all-cause mortality in symptomatic heart failure (HF) patients with severe left ventricular (LV) dysfunction. Nevertheless, the prognostic impact of ICD therapy in continuous flow left ventricular assist device (LVAD) recipients remains controversial. Methods: 162 consecutive HF patients, who underwent LVAD implantation at our institution between 2010 and 2019, were categorized according to the presence (n = 94, ICD-group) or absence (n = 68, Control-group) of ICDs. Apart from clinical baseline and follow-up parameters, adverse events (AEs) related to ICD therapy and overall survival rates were retrospectively analyzed. Results: Out of 162 consecutive LVAD recipients 79 patients (48.8%) were preoperatively categorized as Interagency Registry for Mechanically Assisted Circulatory Support (INTERMACS) profile ≤2. The prevalence of severe HF symptoms and preoperative use of short-term circulatory support devices (54.4% vs. 13.8%, p < 0.001) was higher within the Control-group, although baseline severity of LV and RV dysfunction was similar. Apart from an increased prevalence of perioperative right heart failure (RHF) within the Control-group (45.6% vs. 17.0%; p < 0.001), procedural characteristics and perioperative outcome were similar. Overall-survival during a median follow-up of 14 (3.0-36.5) months was similar within both groups (p = 0.46). During the first 2 years after LVAD implantation 53 ICD-related AEs occurred within the ICD-group. Thereof, lead-dysfunction occurred in 19 patients and unplanned ICD-reintervention in 11 patients. Furthermore, in 18 patients appropriate shocks without loss of consciousness occurred, whereas inappropriate shocks occurred in 5 patients. Conclusion: ICD therapy in LVAD recipients was not associated with a survival benefit or reduced morbidity after LVAD implantation. Conservative ICD-programming seems to be justified to avoid ICD-related complications and "awake shocks" after LVAD implantation. 
abstract_id: PUBMED:34113891 Ineffective implantable cardioverter-defibrillator shocks among patients on continuous left ventricular assist device support: Clinical characteristics and management. Background: Placement of a left ventricular assist device (LVAD) has been described to compromise the implantable cardioverter-defibrillator (ICD) defibrillation threshold (DFT). An elevated DFT has negative consequences and increases the risk of ineffective ICD shocks, morbidity, and mortality. Despite this, DFT testing is not routinely performed in clinical practice. Objective: We describe the clinical characteristics of 7 LVAD patients who presented with multiple ineffective ICD shocks, along with the management strategy in such patients. Methods: Seven patients (5 male, mean age 52.2 ± 9 years, 85.7% nonischemic cardiomyopathy) with an ICD in situ who progressed to NYHA class IV, ACC/AHA stage D chronic systolic congestive heart failure and underwent successful LVAD implantation presented to our institution in the setting of ventricular tachyarrhythmia and ineffective ICD shocks. Six patients underwent implantation of azygos and subclavian coils with subsequent DFT testing. The remaining patient was made comfort care. Results: Five patients had successful DFT testing with azygos (n = 4) and subclavian (n = 1) defibrillation coil implantation. One patient had unsuccessful DFT testing despite evaluation of multiple shock vectors. There were no major or minor vascular complications in any of the cases. There were no procedure-related deaths. Conclusion: This case series highlights the need for a systematic approach to management of ICDs and DFT testing in LVAD patients. The addition of new shock vectors with azygos and subclavian coil implantation may reduce DFT, shock burden, morbidity, and mortality. abstract_id: PUBMED:38013210 Managing ventricular arrhythmias and implantable cardiac defibrillator shocks after left ventricular assist device implantation. Continuous flow left ventricular assist devices (CF-LVADs) have been shown to reduce mortality and morbidity in patients with advanced heart failure with reduced ejection fraction. However, ventricular arrhythmias (VA) are common, are mostly secondary to underlying myocardial scar, and have a higher incidence in patients with pre-LVAD VA. Sustained VA is well tolerated in the LVAD patient but can result in implantable defibrillator (ICD) shocks, right ventricular failure, hospitalizations, and reduced quality of life. There are limited data regarding best practices for the medical management of VA as well as the role for procedural interventions in patients with uncontrolled VA and/or ICD shocks. The vast majority of CF-LVAD patients have a preexisting cardiovascular implantable electronic device (CIED), and ICD and/or cardiac resynchronization therapies are continued in many. Several questions, however, remain regarding the efficacy of ICD and CRT following CF-LVAD. Moreover, optimal CIED programming after CF-LVAD implantation remains to be defined. Therefore, the primary objective of this review article is to provide the most up-to-date evidence and to provide guidance on the clinical significance, pathogenesis, predictors, and management strategies for VA and ICD therapies in the CF-LVAD population. We also discuss knowledge gaps as well as areas for future research.
Answer: The need for primary prevention of ventricular arrhythmias (VAs) with implantable cardioverter defibrillator (ICD) therapy in patients with continuous flow left ventricular assist devices (CF-LVADs) is a topic of ongoing debate and research. Several studies have investigated the prevalence, significance, and management of VAs in this patient population, as well as the role of ICDs. Patients with CF-LVADs commonly experience VAs, which are associated with worse outcomes, especially postoperatively (PUBMED:31768918). The presence of pre-operative VAs is a major predictor of post-operative arrhythmias, and patients with a history of VAs are at risk of recurrent VAs while on CF-LVAD support. These patients should have active ICD therapy to minimize sustained VAs (PUBMED:23643502). However, patients without pre-operative VAs are at low risk and may not need active ICD therapy (PUBMED:23643502). A study on combined ICD and HeartWare ventricular assist device (HVAD) therapy found it to be safe and feasible without electromagnetic interference. The incidence of ICD interventions was high in patients with a history of VAs but low in patients with ICD implantation for primary prevention (PUBMED:24862733). Another study created a "VT-LVAD score" to identify patients at risk of late VAs and guide ICD indications in previously nonimplanted patients (PUBMED:30236390). Despite these findings, the prognostic impact of ICD therapy in LVAD recipients remains controversial. Some studies suggest that ICD therapy in LVAD recipients is not associated with a survival benefit or reduced morbidity after LVAD implantation (PUBMED:37332585). Furthermore, ineffective ICD shocks for ventricular fibrillation have been reported in patients with LVADs, highlighting the need for a systematic approach to the management of ICDs and defibrillation threshold testing in these patients (PUBMED:34113891). In conclusion, while there is evidence to support the use of ICDs in CF-LVAD patients with a history of VAs, the need for primary prevention with ICDs in patients without such a history is less clear. Prospective data and randomized-controlled trials would be beneficial to clarify the impact of routine ICD implantation in CF-LVAD patients, particularly those at low risk of VAs (PUBMED:31768918).
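One concrete output of this literature is the VT-LVAD score from PUBMED:30236390, which bands patients into four risk groups with reported 1-year late-VA rates of 0%, 8%, 31% and 55%. The abstract gives the score bands and event rates but not the per-predictor point weights, so the sketch below assumes the total score has already been computed and only maps it to the published strata.

def vt_lvad_risk(score: int):
    # Score bands and 1-year late ventricular arrhythmia rates as reported
    # in the abstract of PUBMED:30236390.
    if not 0 <= score <= 10:
        raise ValueError("VT-LVAD score ranges from 0 to 10")
    if score <= 1:
        return "low", 0.00
    if score <= 4:
        return "intermediate", 0.08
    if score <= 6:
        return "high", 0.31
    return "very high", 0.55

group, rate = vt_lvad_risk(5)
print(f"Score 5 -> {group} risk, ~{rate:.0%} late VA at 1 year")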
Instruction: Are older adults up-to-date with cancer screening and vaccinations? Abstracts: abstract_id: PUBMED:15963306 Are older adults up-to-date with cancer screening and vaccinations? Introduction: Public health organizations in the United States emphasize the importance of providing routine screening for breast cancer, cervical cancer, and colorectal cancer, as well as vaccinations against influenza and pneumococcal disease among older adults. We report a composite measure of adults aged 50 years and older who receive recommended cancer screening services and vaccinations. Methods: We analyzed state data from the 2002 Behavioral Risk Factor Surveillance System, which included 105,860 respondents aged 50 and older. We created a composite measure that included colonoscopy or sigmoidoscopy within 10 years or a fecal occult blood test in the past year, an influenza vaccination in the past year, a Papanicolaou test within 3 years for women with an intact cervix, a mammogram, and for adults aged 65 and older, a pneumonia vaccination during their lifetime. We performed separate analyses for four age and sex groups: men aged 50 to 64, women aged 50 to 64, men aged 65 and older, and women aged 65 and older. Results: The percentage of each age and sex group that was up-to-date according to our composite measure ranged from 21.1% of women aged 50 to 64 (four tests) to 39.6% of men aged 65 and older (three tests). For each group, results varied by income, education, race/ethnicity, insurance status, and whether the respondent had a personal physician. Conclusion: These results suggest the need to improve the delivery of cancer screenings and vaccinations among adults aged 50 and older. We propose continued efforts to measure use of clinical preventive services. abstract_id: PUBMED:34287279 What Should We Recommend for Colorectal Cancer Screening in Adults Aged 75 and Older? The current recommendation to stop colorectal cancer screening for older adults is based on a lack of evidence due to systematic exclusion of this population from trials. Older adults are a heterogenous population with many available strategies for patient-centered assessment and decision-making. Evolutions in management strategies for colorectal cancer have made safe and effective options available to older adults, and the rationale to screen for treatable disease more reasonably, especially given the aging Canadian population. In this commentary, we review the current screening guidelines and the evidence upon which they were built, the unique considerations for screening older adults, new treatment options, the risks and benefits of increased screening and potential considerations for the new guidelines. abstract_id: PUBMED:29129211 Individualized Approach to Cancer Screening in Older Adults. The primary goal of cancer screening is early detection of cancer to reduce cancer-specific mortality and morbidity. The benefits of screening in older adults are uncertain due to paucity of evidence. Extrapolating data from younger populations, evidence suggests that the benefit occurs years later from the time of initial screening and therefore may not be applicable in those older adults with limited life expectancy. Contrast this with the harms of screening, which are more immediate and increase with age and comorbidities. An individualized approach to cancer screening takes these factors into consideration, allowing for thoughtful decision making for older adults. abstract_id: PUBMED:26614858 Cancer Screening in Older Adults. 
Cancer screening is an important tool for reducing morbidity and mortality in the elderly. In this article, performance characteristics of commonly used screening tests for colorectal, lung, prostate, breast, and cervical cancers are discussed. Guidelines are emphasized and key issues to consider in screening older adults are highlighted. abstract_id: PUBMED:33962825 Psychosocial and clinical predictors of continued cancer screening in older adults. Objective: Many older adults (aged 75+) continue cancer screening despite guidelines suggesting they should not. Using mixed-methods, we examined psychosocial and clinical factors associated with continued breast/prostate screening. Methods: We conducted an online, scenario-based, randomized study in Australia with participants aged 65+ years. The primary outcome was screening intention (10-point scale, dichotomized: low (1-5) and high (6-10)). We also measured demographic, psychosocial, and age-related clinical variables. Participants provided reason/s for their screening intentions in free-text. Results: 271 eligible participants completed the survey (aged 65-90 years, 71% adequate health literacy). Those who reported higher cancer anxiety, were men, screened more recently, had family history of breast/prostate cancer and were independent in activities of daily living, were more likely to intend to continue screening. Commonly reported reasons for intending to continue screening were grouped into six themes: routine adherence, the value of knowing, positive screening attitudes, perceived susceptibility, benefits focus, and needing reassurance. Conclusions: Psychosocial factors may drive continued cancer screening in older adults and undermine efforts to promote informed decision-making. Practice Implications: When communicating benefits and harms of cancer screening to older adults, both clinical and psychosocial factors should be discussed to support informed decision-making. abstract_id: PUBMED:33099456 Cancer Screening in Older Adults: Individualized Decision-Making and Communication Strategies. Cancer screening decisions in older adults can be complex due to the unclear cancer-specific mortality benefits of screening and several known harms including false positives, overdiagnosis, and procedural complications from downstream diagnostic interventions. In this review, we provide a framework for individualized cancer screening decisions among older adults, involving accounting for overall health and life expectancy, individual values, and the risks and benefits of specific cancer screening tests. We then discuss strategies for effective communication of recommendations during clinical visits that are considered more effective, easy to understand, and acceptable by older adults and clinicians. abstract_id: PUBMED:36606360 Breast and prostate cancer screening rates by cognitive status in US older adults. Introduction: For most older adults with dementia, the short-term harms and burdens of routine cancer screening likely outweigh the delayed benefits. We aimed to provide a more updated assessment of the extent that US older adults with dementia receive breast and prostate cancer screenings. Methods: Using the Health and Retirement Study (HRS) Wave 12 (2014-2015) linked to Medicare, we examine rates of breast and prostate cancer screenings in adults 65+ years by cognitive status. We used claims data to identify eligibility for screening and receipt of screening. We used a validated method using HRS data to define cognitive status. 
Results: The analytic sample included 2439 women in the breast cancer screening cohort and 1846 men in the prostate cancer screening cohort. Average ages were 76.8 years for women and 75.6 years for men, with 9.0% and 7.6% with dementia in each cohort, respectively. Among women with dementia, 12.3% were screened for breast cancer. When stratified by age, 10.6% of those 75+ and have dementia were screened for breast cancer. When stratified by predicted life expectancy, 10.4% of those with predicted life expectancy of <10 years and have dementia were screened for breast cancer. Among men with dementia, 33.9% were screened for prostate cancer. When stratified by age, 30.9% of those 75+ and have dementia were screened for prostate cancer. When stratified by predicted life expectancy, 34.4% of those with predicted life expectancy of <10 years and have dementia were screened for prostate cancer. Using multivariable logistic regression, dementia was associated with lower odds of receiving breast cancer screening (OR 0.36, 95% CI 0.23-0.57) and prostate cancer screening (OR 0.58, 95% CI 0.36-0.96). Discussion: Our results suggest potential over-screening in older adults with dementia. Better supporting dementia patients and caregivers to make informed cancer screening decisions is critical. abstract_id: PUBMED:32803486 Cancer Screening Among Older Adults: a Geriatrician's Perspective on Breast, Cervical, Colon, Prostate, and Lung Cancer Screening. Purpose Of Review: We summarize the evidence of benefits, harms, and tools to assist in individualized decisions among older adults in screening for breast, prostate, colon, lung, and cervical cancer. Recent Findings: The benefits of cancer screening in older adults remain unclear due to minimal inclusion of adults > 75 years old in most randomized controlled trials. Indirect evidence suggests that the benefits of screening seen in younger adults (< 70 years old) can be extrapolated to older adults when they have an estimated life expectancy of at least 10 years. However, older adults, especially those with limited life expectancy, may be at increased risk for experiencing harms of screening, including overdiagnosis of clinically unimportant diseases, complications from diagnostic procedures, and distress after false positive test results. We provide a framework to integrate key factors such as health status, risks and benefits of specific tests, and patient preferences to guide clinicians in cancer screening decisions in older adults. abstract_id: PUBMED:32880888 Cancer Prevention and Screening for Older Adults: Part 1. Lung, Colorectal, Bladder, and Kidney Cancer. The incidence of most cancers increases with age. Cancer is the second most common cause of death in older adults after cardiovascular disease. Many common cancers in older adults can be prevented from occurring or can be identified at an early stage and treated effectively. The prevention and identification of cancer in its early stages, in an attempt to reduce discomfort and disability associated with advanced cancer and cancer treatment, is also a priority. Overscreening for cancer in older adults can lead to unnecessary diagnostic testing and unnecessary treatment. Both older adults and their healthcare providers need guidance on the appropriate use of cancer prevention and screening interventions. This first of a two-part review addresses special considerations regarding cancer prevention for adults aged 65 and older. 
Screening decisions and the impact of limited life expectancy and an older adult's ability to tolerate cancer treatment are also addressed. Guidance is provided regarding the prevention and early identification of lung, colorectal, bladder, and kidney cancer in older adults. The prevention of breast, prostate, and female urogenital cancers are addressed in Part 2. J Am Geriatr Soc 68:2399-2406, 2020. abstract_id: PUBMED:30403555 Primary Care Providers' Attitudes and Practices Regarding Cancer Screening in Older Adults. Cancer screening decisions for older adults should be individualized. However, conducting such complex shared decisions may be challenging for primary care providers (PCPs). Additionally, there is little information about how PCPs make these decisions. This study consisted of a provider survey and chart review to assess current PCP approaches to breast and colorectal cancer (CRC) screening for patients ages ≥75 years. PCP survey questions: panel age, comfort with discussion of screening harms and benefits, screening decision-making process, and discussion style. One hundred charts were chosen from a random sample of male and female patients ages ≥75 with a recent office visit. Chart reviews assessed whether providers recommended screening for breast and/or CRC in patients ages ≥75, if there was a documented screening discussion, and if screening was completed. Fifty-one PCPs completed the survey. PCPs varied in the proportions of older adults they recommended for breast and CRC screening; 90.2% reported feeling very (43.1%) or somewhat (47.1%) comfortable discussing reasons for/against screening with older patients. Top screening considerations: life expectancy (84.3%), patient preference (82.4%), and severity of medical conditions (70.6%). Three-quarters (74.55%) reported a shared decision-making approach with discussion of harms/benefits. Of 61 eligible patients, 8 (13.1%) had a documented discussion regarding mammography; of 58 patients eligible for CRC screening, 7 (12.1%) had a documented discussion. Findings showed inconsistency in PCP approaches to cancer screening in older adults and in documentation of discussion. There is ample room for improvement in standardizing approaches and documentation of cancer screening discussions with older patients. Answer: The data suggests that older adults may not be consistently up-to-date with cancer screening and vaccinations. According to a study analyzing state data from the 2002 Behavioral Risk Factor Surveillance System, which included 105,860 respondents aged 50 and older, the percentage of adults who were up-to-date with recommended cancer screenings and vaccinations ranged from 21.1% of women aged 50 to 64 to 39.6% of men aged 65 and older. The results varied by income, education, race/ethnicity, insurance status, and whether the respondent had a personal physician (PUBMED:15963306). Furthermore, there is evidence that many older adults (aged 75+) continue cancer screening despite guidelines suggesting they may not need to, influenced by psychosocial factors such as cancer anxiety, routine adherence, and a focus on the benefits of screening (PUBMED:33962825). This indicates that while some older adults may be up-to-date, others may be undergoing screenings that are not recommended for their age group or health status. 
Additionally, a study using the Health and Retirement Study (HRS) Wave 12 (2014-2015) linked to Medicare data found that among older adults with dementia, 12.3% of women were screened for breast cancer and 33.9% of men were screened for prostate cancer, suggesting potential over-screening in this population (PUBMED:36606360). Overall, the evidence points to a need for improved delivery of cancer screenings and vaccinations among adults aged 50 and older, as well as better support for informed decision-making, particularly among those with dementia or limited life expectancy (PUBMED:15963306, PUBMED:33962825, PUBMED:36606360).
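The composite "up-to-date" measure in PUBMED:15963306 is essentially an all-of-the-applicable-services check. A minimal sketch follows; the field names are hypothetical, and the service definitions follow the abstract (endoscopy within 10 years or FOBT within 1 year, influenza vaccination within 1 year, Pap test within 3 years for women with an intact cervix, a mammogram for women, and a lifetime pneumococcal vaccination for adults 65 and older; the mammogram interval is not specified in the abstract).

def up_to_date(age: int,
               sex: str,
               colorectal_screen_current: bool,   # colonoscopy/sigmoidoscopy <= 10 y or FOBT <= 1 y
               flu_vaccine_past_year: bool,
               mammogram_current: bool = False,   # interval not specified in the abstract
               pap_current_or_no_cervix: bool = True,
               pneumococcal_ever: bool = False) -> bool:
    if age < 50:
        raise ValueError("composite measure is defined for adults aged 50 and older")
    required = [colorectal_screen_current, flu_vaccine_past_year]
    if sex == "female":
        required += [mammogram_current, pap_current_or_no_cervix]
    if age >= 65:
        required.append(pneumococcal_ever)
    return all(required)

# A 70-year-old woman current on everything except pneumococcal vaccination is not up to date.
print(up_to_date(70, "female",
                 colorectal_screen_current=True,
                 flu_vaccine_past_year=True,
                 mammogram_current=True,
                 pap_current_or_no_cervix=True,
                 pneumococcal_ever=False))

This grouping matches the abstract's tally of four applicable services for women aged 50 to 64 and three for men aged 65 and older.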
Instruction: Does total knee arthroplasty modify flexion axis of the knee? Abstracts: abstract_id: PUBMED:24836651 Changes in the functional flexion axis of the knee before and after total knee arthroplasty using a navigation system. Long term satisfaction of patients with total knee arthroplasty (TKA) has lagged behind that of total hip arthroplasty. One possible reason is the failure of the artificial joint to recreate natural kinematics of the knee. This study evaluated the pre and post implant functional flexion axis in the knees of 285 total knee arthroplasty patients using a surgical navigation system. Results showed that post-implant there was less femoral rollback early in flexion on the lateral side of the joint than pre-implant. Designing future generations of knee implants to allow for this motion may give patients a more 'natural' feeling knee and may benefit outcomes. abstract_id: PUBMED:26438246 Changes in the orientation of knee functional flexion axis during passive flexion and extension movements in navigated total knee arthroplasty. Purpose: Recently, the functional flexion axis has been considered to provide a proper rotational alignment of the femoral component in total knee arthroplasty. Several factors could influence the identification of the functional flexion axis. The purpose of this study was to analyse the estimation of the functional flexion axis by separately focusing on passive flexion and extension movements and specifically assessing its orientation compared to the transepicondylar axis, in both the axial plane and the frontal plane. Methods: Anatomical and kinematic acquisitions were performed using a commercial navigation system on 79 patients undergoing total knee arthroplasty with cruciate substituting prosthesis design. The functional flexion axis was estimated from passive movements, between 0° and 120° of flexion and back. Intra-observer agreement and reliability, internal-external rotation and the angle with the surgical transepicondylar axis, in axial and frontal planes, were separately analysed for flexion and extension, in pre- and post-implant conditions. Results: The analysis of reliability and agreement showed good results. The identification of the functional flexion axis showed statistically significant differences both in relation to flexion and extension and to pre- and post-implant conditions, both in frontal plane and in axial plane. The analysis of internal-external rotation confirmed these differences in kinematics (p < 0.05, between 25° and 35° of flexion). Conclusions: The identification of the functional flexion axis changed in relation to passive flexion and extension movements, above all in frontal plane, while it resulted more stable and reliable in axial plane. These findings supported the possible clinical application of the functional flexion axis in the surgical practice by implementing navigated procedures. However, further analyses are required to better understand the factors affecting the identification of the functional flexion axis. Level Of Evidence: IV. abstract_id: PUBMED:37729737 Comparison of finite helical axis of the knee joint before and after bi-cruciate-preserving knee arthroplasty. Background: Difference in the finite helical axis of the knee joints before and after anterior cruciate ligament-preserving knee arthroplasties such as unicompartmental knee arthroplasty and bi-cruciate-retaining total knee arthroplasty remains unknown. 
This study compared the knee finite helical axes before and after bi-cruciate-preserving knee arthroplasty. Methods: Patients undergoing medial unicompartmental knee arthroplasty and bi-cruciate-retaining total knee arthroplasty were included. Under fluoroscopy, participants performed a deep knee bend before and after surgery. A two/three-dimensional registration technique was employed to measure tibiofemoral kinematics. Femoral finite helical axis was calculated in a flexion range of 0-120° using 30° windows (early-, mid-, late-, and deep-flexion phases). Findings: In unicompartmental knee arthroplasty, the preoperative knee vertical angle was larger than the postoperative vertical angle in mid- and deep-flexion phases. The postoperative knee vertical angle was smaller in unicompartmental knee arthroplasty than in bi-cruciate-retaining total knee arthroplasty. In unicompartmental knee arthroplasty, the preoperative horizontal angle was smaller than the postoperative horizontal angle in the early-flexion phase. However, in bi-cruciate-retaining total knee arthroplasty, the preoperative horizontal angle was larger than the postoperative horizontal angle in mid- and deep-flexion phases. The horizontal angle was smaller before unicompartmental knee arthroplasty than that before bi-cruciate-retaining total knee arthroplasty in early-, mid-, and deep-flexion phases. However, the vertical angle was larger after unicompartmental knee arthroplasty than that after bi-cruciate-retaining total knee arthroplasty in the early-flexion phase. Interpretation: The knee finite helical axes before and after unicompartmental knee arthroplasty differed from those before and after bi-cruciate-retaining total knee arthroplasty. abstract_id: PUBMED:24832694 Does total knee arthroplasty modify flexion axis of the knee? Purpose: To prospectively investigate whether preoperative functional flexion axis in patients with osteoarthritis- and varus-alignment changes after total knee arthroplasty and whether a correlation exists both between preoperative functional flexion axis and native limb deformity. Methods: A navigated total knee arthroplasty was performed in 108 patients using a specific software to acquire passive joint kinematics before and after implant positioning. The knee was cycled through three passive range of motions, from 0° to 120°. Functional flexion axis was computed using the mean helical axis algorithm. The angle between the functional flexion axis and the surgical transepicondylar axis was determined on frontal (α (F)) and axial (α (A)) plane. The pre- and postoperative hip-knee-ankle angle, related to femur mechanical axis, was determined. Results: Postoperative functional flexion axis was different from preoperative only on frontal plane, while no differences were found on axial plane. No correlation was found between preoperative α (A) and native limb deformity, while a poor correlation was found in frontal plane, between α (F) and preoperative hip-knee-ankle angle. Conclusions: Total knee arthroplasty affects functional flexion axis only on frontal plane while has no effect on axial plane. Preoperative functional flexion axis is in a more varus position respect to the transepicondylar axis both in pre- and postoperative conditions. Moreover, the position of the functional axis on frontal plane in preoperative conditions is dependent on native limb alignment, while on axial plane is not dependent on the amount of preoperative varus deformity. 
abstract_id: PUBMED:34539072 Factors affecting early knee-flexion range of motion after total knee arthroplasty. [Purpose] To investigate the factors affecting the knee-flexion range of motion in the early period after total knee arthroplasty. [Participants and Methods] Ninety-nine patients who had undergone total knee arthroplasty at our hospital between 2016 and 2019 were allocated into two groups based on the presence of a 110° knee-flexion range of motion at 14 days post-surgery. From medical records, we extracted data for the participants' basic attributes and preoperative/postoperative physical function (knee-flexion range of motion, Timed Up & Go Test results, resting/walking pain according to a numerical rating scale, and knee-extension muscle strength). Postoperative physical function was measured 14 days post-surgery. [Results] Preoperative knee-flexion range of motion, preoperative femorotibial angle, postoperative knee-extensor strength, and postoperative Timed Up & Go Test value differed significantly as factors related to achieving a 110° knee-flexion range of motion. Through further statistical analyses, we selected the preoperative knee-flexion range of motion, preoperative femorotibial angle, preoperative Timed Up & Go Test result, and postoperative knee-extension strength as factors affecting the knee-flexion range of motion at 14 days post-surgery. [Conclusion] Preoperative knee-flexion range of motion, preoperative femorotibial angle, preoperative Timed Up & Go Test result, and postoperative knee-extension strength influence knee-flexion range of motion at 14 days after total knee arthroplasty, and our findings indicate the effectiveness of active physiotherapy interventions. abstract_id: PUBMED:25209210 Unicompartmental knee arthroplasty cannot restore the functional flexion axis of a living knee to normal. Purpose: The purpose of this study was to investigate the hypothesis that a medial unicompartmental knee arthroplasty might restore the functional flexion axis of a knee to normal. The flexion axis can be indirectly identified by tracking the vertical translation of anatomic landmarks that basically move around the flexion axis during a knee motion. If a unicompartmental knee could help restore the normal flexion axis, the anatomic landmarks after the arthroplasty would show the vertical translation similar to those of normal knees during a knee flexion. Methods: While performing a squatting motion, the kinematics of 17 knees were determined before and after a medial unicompartmental arthroplasty to calculate the vertical translation of a clinical epicondylar axis, using a three- to two-dimensional registration technique through a single-plane fluoroscopic system incorporating a biplanar static radiography. The results were compared with a normal data, and a statistical analysis including a two-way repeated-measured analysis of variance was performed. Results: For the medial end, from 10° to 100° knee flexion, normal, osteoarthritic, and unicompartmental knees had the average superior vertical translation of 7.3 ± 4.2, 4.3 ± 7.2, and 2.4 ± 3.1 mm, respectively, with statistical significance between normal and unicompartmental knees (p < 0.001). The vertical translation did not return to normal post-implantation. Conclusions: A unicompartmental knee could not reproduce the normal flexion axis. As for clinical relevance, the changes of the implant design and surgical procedure may be necessary to obtain the normal flexion axis reproducing a normal motion. 
Level Of Evidence: IV. abstract_id: PUBMED:35093564 Contralateral knee flexion predicts postoperative knee flexion in unilateral total knee arthroplasty: A retrospective study. Background: Postoperative flexion angle is associated with preoperative flexion in the same knee after total knee arthroplasty (TKA). However, whether the preoperative contralateral knee flexion angle could influence postoperative flexion remains unknown. This study aimed to test the effect of contralateral knee flexion on final flexion in the postoperative knee after unilateral TKA. We hypothesized that the preoperative contralateral knee flexion angle affects postoperative knee flexion, as does preoperative flexion, in the operated knee. Patients And Methods: Ninety-five patients with knee osteoarthritis (OA) who underwent cruciate-retaining TKA were included in this study, and their maximum passive knee flexion angle was evaluated preoperatively and at final follow-up on the operative and contralateral sides. Patients were divided according to their preoperative flexion angles: Group C, those whose contralateral knee flexion angle was greater than the operative knee flexion angle; and Group O, those whose contralateral knee flexion angle was equal to or smaller than the operative knee flexion angle. We performed multiple regression analysis, with postoperative flexion and improvement in the flexion angle as objective variables and preoperative predictors (age, sex, body mass index, preoperative flexion, and contralateral knee flexion) as explanatory variables. The results of the flexion angles were compared between the two groups using unpaired t-tests. Results: Postoperative flexion angle was significantly affected by the preoperative contralateral knee flexion angle (β=0.35, p=0.002), and an improvement in flexion was negatively affected by preoperative flexion of the operated knee (β=-0.83, p<0.001) and positively affected by contralateral knee flexion (β=0.35, p=0.001). Group O achieved poorer postoperative flexion than Group C (110.4±16.6° vs 119.6±16.5°, p=0.0017). Conclusion: Patients with less flexion of the contralateral knee are less likely to obtain a greater degree of knee flexion after TKA. Level Of Evidence: III; retrospective comparative study. abstract_id: PUBMED:34998082 Effect of weight-bearing in bicruciate-retaining total knee arthroplasty during high-flexion activities. Background: To evaluate the effect of weight-bearing on the kinematics of the bicruciate-retaining total knee arthroplasty design during high knee flexion activities. Methods: The kinematics of 21 bicruciate-retaining total knee arthroplasties were evaluated under fluoroscopy, with two- and three-dimensional image registrations, during squatting (weight-bearing) and active-assisted knee flexion (non-weight-bearing). The following variables were measured: knee range of motion, axis of femoral rotation and varus-valgus angle relative to the tibial component, anteroposterior translation of the medial and lateral contact points, and the kinematic pathway of the joint surfaces. Findings: From 20° to 100° of flexion, the femoral external rotation during weight-bearing was larger than that during non-weight-bearing. There were no differences in the varus-valgus angles between the two conditions. From 10° to 50° of flexion, the medial contact point during weight-bearing was located posterior to the point of contact during non-weight-bearing; this difference between the two weight-bearing conditions was significant. 
From 0° to 90° of flexion, the lateral contact point in weight-bearing was located posterior to the contact point in non-weight-bearing; this difference between the two weight-bearing conditions was also significant. Interpretation: The anteroposterior position of the medial and lateral contact points of the bicruciate-retaining total knee arthroplasty design was significantly more posterior in the mid-range of knee flexion in weight-bearing than in non-weight-bearing. However, no anterior translation of the bicruciate-retaining total knee arthroplasty design was observed. Therefore, bicruciate-retaining total knee arthroplasty appears to have good anteroposterior stability throughout the range of knee flexion, regardless of the weight-bearing condition. abstract_id: PUBMED:25854497 Alignment in knee flexion position during navigation-assisted total knee arthroplasty. Purpose: The aim of this study was to demonstrate the lower limb alignment in knee flexion position after navigation-assisted total knee arthroplasty using the gap technique and to identify the correlative factors. Methods: One hundred and twenty consecutive osteoarthritic knees (120 patients) were prospectively enrolled for intraoperative data collection. All TKA surgeries were performed using the navigation system (OrthoPilot™, version 4.0; B. Braun Aesculap, Tuttlingen, Germany). Before and after final prosthesis implantation, the lower limb navigation alignment in both knee extension (0°) and knee flexion (90°) position was recorded. The knee flexion alignment was divided into three groups: varus, neutral and valgus alignment. To determine the factors of the alignment in knee flexion position, preoperative demographics, radiologic and intraoperative data were obtained. Pearson's correlation (r) analysis was performed to find the correlation. The Knee Society Score and Western Ontario and McMaster Universities Osteoarthritis Index were compared between groups. Results: Although all postoperative extension alignment was within neutral position (between -2° and +2°), postoperative knee flexion alignment was divided into three groups: varus (≤-3°), 24 cases (20%); neutral (between -2° and +2°), 85 cases (70.8%) and valgus (≥+3°) alignment, 11 cases (9.2%). There was a good correlation of alignment in knee flexion position with the rotation of the femoral component relative to the posterior condylar axis (r = -0.502, p = 0.000) and weak correlations with posterior femoral cut thickness (lateral condyle) (r = 0.207, p = 0.026) and medial flexion (90°) gap after femoral component rotation adjustment (r = 0.276, p = 0.003). Other variables did not show correlations. There were no statistically significant clinical differences between varus, neutral and valgus knee flexion alignment groups. Conclusion: About 30% of the cases showed malalignment of more than 3° in knee flexion position although with neutral alignment in extension position. The knee flexion alignment had a good correlation with the rotation of the femoral component relative to the posterior condylar axis. Neutral alignment in knee flexion position may be adjusted by femoral component rotation, especially with the use of a navigation system. Level Of Evidence: IV. abstract_id: PUBMED:31141758 Medial knee stability at flexion increases tibial internal rotation and knee flexion angle after posterior-stabilized total knee arthroplasty.
Background: Soft-tissue balance is an important element for the success of total knee arthroplasty; however, the influence of intraoperative soft-tissue balance on knee kinematics in posterior-stabilized-total knee arthroplasty remains unknown. We investigated whether intraoperative soft-tissue balance could influence knee kinematics and flexion angle after posterior-stabilized-total knee arthroplasty. Methods: This study included 30 patients with knee osteoarthritis and varus alignment who underwent posterior-stabilized total knee arthroplasty; intraoperative soft-tissue balance parameters, such as varus ligament balance and joint component gap, were assessed by an offset-type tensor at knee flexion angles of 0°, 10°, 30°, 60°, 90°, 120°, and 135°. Medial and lateral compartment gaps were calculated. The amount of tibial internal rotation was determined between 60° and 135° flexion by the navigation system. Simple linear regression analysis was used to analyze the effects of intraoperative soft-tissue balance on tibial internal rotation. Linear regression analysis was used to assess correlation between tibial internal rotation and postoperative knee flexion angle. Findings: Medial compartment gaps at 60° (r = -0.57, P < 0.05) and 90° (r = -0.60, P < 0.05) of flexion were significantly negatively correlated with tibial internal rotation. Moreover, tibial internal rotation showed a significant correlation with the 2-year postoperative knee flexion angle (r = 0.50, P < 0.05) and improvement in knee flexion angle (r = 0.61, P < 0.05). Interpretation: Thus, smaller medial compartment gaps at 60° and 90° of flexion play an important role in achieving medial pivot motion with tibial internal rotation; moreover, tibial internal rotation provides a better flexion angle after posterior-stabilized total knee arthroplasty. Answer: Yes, total knee arthroplasty (TKA) does modify the flexion axis of the knee. Studies have shown that the functional flexion axis, which is important for the natural kinematics of the knee, changes after TKA. One study found that the postoperative functional flexion axis was different from the preoperative axis on the frontal plane, although no differences were found on the axial plane. This suggests that TKA affects the functional flexion axis only on the frontal plane (PUBMED:24832694). Another study reported that after TKA, there was less femoral rollback early in flexion on the lateral side of the joint than pre-implant, indicating a change in the knee's kinematics post-surgery (PUBMED:24836651). Additionally, the identification of the functional flexion axis showed statistically significant differences in relation to passive flexion and extension movements, particularly in the frontal plane, after TKA (PUBMED:26438246). Moreover, it was observed that unicompartmental knee arthroplasty could not restore the functional flexion axis of a knee to normal, as the vertical translation of anatomic landmarks post-implantation did not return to normal (PUBMED:25209210). This further supports the notion that TKA alters the knee's flexion axis. Furthermore, the finite helical axis of the knee joints before and after bi-cruciate-preserving knee arthroplasty also showed differences, indicating a change in the knee's kinematics following surgery (PUBMED:37729737). In summary, TKA does modify the flexion axis of the knee, which can affect the natural kinematics and potentially the overall satisfaction of patients with their knee function post-surgery.
Instruction: Can contraction patterns predict neonatal outcomes? Abstracts: abstract_id: PUBMED:24228752 Can contraction patterns predict neonatal outcomes? Objective: To estimate the association between contraction patterns in labor and neonatal outcomes. Methods: A nested case-control study within a consecutive term birth cohort included women in labor with intrauterine pressure catheters (IUPCs) who reached the second stage. Cases were women delivering neonates with composite morbidity: special care or intensive care unit admission, umbilical artery pH ≤ 7.1 or 5-min Apgar < 7. The control group delivered without any components of the composite morbidity. Contraction frequency, duration, relaxation time, Montevideo units (MVUs) and baseline tone in the last 30 min prior to delivery were compared. We used logistic regression to adjust for potential confounders and receiver operating characteristic curves to evaluate the ability of contraction parameters to predict adverse neonatal outcomes. Results: There were 183 cases of adverse neonatal outcomes and 2172 controls without the composite outcome. Contraction duration, relaxation time, MVUs and baseline tone did not significantly differ between the groups. Tachysystole was more common in women with the adverse neonatal outcome (21% versus 15%, p = 0.01). A model including tachysystole, oxytocin use and nulliparity did not adequately predict the adverse outcome (AUC = 0.61). Conclusions: Although tachysystole is associated with adverse neonatal outcomes, uterine activity cannot be used to predict neonatal outcome. abstract_id: PUBMED:36238451 Dietary patterns and birth outcomes of healthy Lebanese pregnant women. Background: The aim of this study was to define the dietary patterns (DPs) of a sample of Lebanese pregnant women and to establish their correlation with maternal and neonatal outcomes. Methods: A cross-sectional study was conducted among 358 Lebanese pregnant women. Maternal socio-demographic variables, anthropometric measurements, gestational weight gain and neonatal outcomes such as weight, length, head circumference and Apgar score were collected by qualified dietitians. Dietary intake was assessed by a validated food frequency questionnaire and three 24-h dietary recalls. DPs were determined, a posteriori, by a factor analysis to distinguish the inter-correlations between the food groups and a cluster analysis method to assemble the participants into groupings based on similarities in food consumption. Results: The identified DPs were not exclusively composed of specific food groups, since some components were overlapping in the DPs. The first one was characterized by a high consumption of starchy vegetables, unsaturated fats and unhealthy foods, the second was rich in fruits and vegetables, seeds, rice and pasta, and in fried local meals and the third was mainly based on protein-rich foods like poultry, fish, eggs and dairy products. They were named respectively as "Westernized," "Mixed" and "Neo-Mediterranean" by the research team. Women having the lowest pre-gestational body mass index (BMI) and higher gestational age followed mainly the "Neo-Mediterranean" eating pattern. Conclusion: The three identified DPs among Lebanese pregnant women were correlated with the pre-gestational BMI, and some maternal variables. However, neither the maternal nor the neonatal outcomes were correlated with the DP adopted by the mothers. 
abstract_id: PUBMED:29621921 Temporal and quantitative associations of electronic fetal heart rate monitoring patterns and neonatal outcomes. Objective: The objective of this study is to evaluate the associations of electronic fetal heart rate monitoring (EFM) patterns and adverse neonatal outcomes. Study design: From 2013 to 2016, 12,067 term, singleton deliveries in labor ≥2 h with abnormal EFM, defined as absent accelerations, variable, late or prolonged decelerations, tachycardia, bradycardia, or minimal variability, were analyzed as any documentation during labor, in the first hour and last hour of labor. Outcomes were composite neonatal adverse outcomes, neonatal intensive care unit (NICU) admission, neonatal hypoxia, neonatal hypoglycemia, umbilical artery pH, and base excess. Independent associations were ascertained using regression analysis. Results: Significant independent associations occurred between any abnormal EFM during the last hour and five adverse neonatal outcomes, and between abnormal EFM at any time and one adverse neonatal outcome, while there was none with the first hour of labor. In the last hour, accelerations had significant negative associations with three adverse neonatal outcomes, while prolonged decelerations, late decelerations, tachycardia, and bradycardia had significant positive associations with three adverse neonatal outcomes. Throughout labor, increasing acceleration events were significantly negatively correlated with all adverse neonatal outcomes, while increasing frequency of late, variable, and prolonged decelerations was positively associated with five adverse neonatal outcomes. Hierarchical analysis showed that bradycardia/tachycardia contributed only 0.8%, while all EFM periodic changes contributed 1%; the addition of the frequencies of abnormal EFM events contributed 0.6% to the variance in umbilical artery pH and base excess. Conclusions: Terminal EFM patterns are independently associated with neonatal outcomes. Accelerations are protective of adverse neonatal outcomes. Increasing frequency of EFM patterns over time contributes to neonatal outcome. abstract_id: PUBMED:37882870 A point-of-care urine test to predict adverse maternal and neonatal outcomes in Asian women with suspected preeclampsia. Objectives: To assess the clinical utility of the urine Congo red dot test (CRDT) in predicting composite adverse maternal and neonatal outcomes in women with suspected preeclampsia (PE). Methods: CRDT result and pregnancy outcomes were prospectively documented in women with new onset or pre-existing hypertension, new or pre-existing proteinuria, PE symptoms and suspected PE-related fetal growth restriction or abnormal Doppler presenting from 20 weeks' gestation between January 2020 and December 2022. Participants and clinicians were blinded to the CRDT result and managed according to internally agreed protocols. Composite maternal outcome was defined as PE, postpartum hemorrhage, intensive care unit admission, and maternal death. Composite neonatal outcome was defined as small for gestational age, preterm birth, 5-min Apgar score < 7, neonatal intensive care unit admission, and neonatal death. Results: Two hundred and forty-four women out of two hundred and fifty-one (97.2%) had a negative CRDT. All seven women with positive CRDT had both adverse maternal and neonatal outcomes, giving positive predictive values (PPV) of 100%.
Rates of composite adverse maternal and neonatal outcomes in CRDT negative women were 103/244 [42.2%, 95% confidence interval (CI) 36.2%-48.5%] and 170/244 (69.7%, 95% CI 63.6%-75.1%), respectively. CRDT negative predictive values (NPV) for adverse maternal and neonatal outcomes were, respectively, 141/244 (57.8%, 95% CI 48.6%-68.2%) and 74/244 (30.3%, 95% CI 23.8%-38.1%). Conclusion: CRDT had low NPV but high PPV for adverse maternal and neonatal outcomes in women with suspected PE. Its role in clinical management and triage of women with suspected PE is limited as it cannot identify those at low risk of developing adverse outcomes. abstract_id: PUBMED:33742192 Maternal diet patterns during early pregnancy in relation to neonatal outcomes. Background: Research has established that maternal diet influences fetal growth and preterm birth, but most studies only evaluate single nutrients. Relations between dietary patterns and neonatal outcomes are understudied. Objective: We evaluated associations of neonatal outcomes with maternal diet patterns derived using 3 a priori diet scores [Alternative Healthy Eating Index-2010 (AHEI-2010), alternate Mediterranean diet score (aMed), and Dietary Approaches to Stop Hypertension (DASH)] as well as principal components analysis (PCA). Methods: We studied 1948 women from the Eunice Kennedy Shriver National Institute of Child Health and Human Development Fetal Growth Studies-Singletons, a racially diverse multisite cohort of pregnant women in the USA (2009-2013). Diet in the past 3 mo was assessed using a self-administered FFQ at 8-13 weeks of gestation. Birthweight was abstracted from medical records and neonatal anthropometry measured postdelivery using standardized protocols. Results: All 3 a priori scores were significantly associated with increased birthweight, and aMed was also associated with reduced odds of low birthweight [quartile 4 versus 1: ORadj = 0.42; 95% CI: 0.18, 1.00 (P-trend = 0.02)]. Greater aMed and DASH scores were significantly associated with increased length [aMed: quartile 4 versus 1: 0.54 cm; 95% CI: 0.10, 0.99 (P-trend = 0.006); DASH: quartile 4 versus 1: 0.62 cm; 95% CI: 0.25, 0.99 (P-trend = 0.006)] and upper arm length. Neither diet pattern derived from PCA was significantly associated with birthweight. Conclusion: Among mostly low-risk pregnant women, pre- and early pregnancy healthful diet quality indices, particularly the aMed score, were associated with larger neonatal size across the entire birthweight distribution. In the absence of generally accepted pregnancy-specific diet quality scores, these results provide evidence for an association between maternal diet patterns and neonatal outcomes. abstract_id: PUBMED:32731425 Main Fetal Predictors of Adverse Neonatal Outcomes in Pregnancies with Gestational Diabetes Mellitus. The objectives of this study were to (a) assess the utility of fetal anthropometric variables to predict the most relevant adverse neonatal outcomes in a treated population with gestational diabetes mellitus (GDM) beyond the known impact of maternal anthropometric and metabolic parameters and (b) to identify the most important fetal predictors. A total of 189 patients with GDM were included. The fetal predictors included sonographically assessed fetal weight centile (FWC), FWC > 90% and <10%, and fetal abdominal circumference centile (FACC), FACC > 90% and < 10%, at 29 0/7 to 35 6/7 weeks.
Neonatal outcomes comprising neonatal weight centile (NWC), large and small for gestational age (LGA, SGA), hypoglycemia, prematurity, hospitalization for neonatal complication, and (emergency) cesarean section were evaluated. Regression analyses were conducted. Fetal variables predicted anthropometric neonatal outcomes, prematurity, cesarean section and emergency cesarean section. These associations were independent of maternal anthropometric and metabolic predictors, with the exception of cesarean section. FWC was the most significant predictor for NWC, LGA and SGA, while FACC was the most significant predictor for prematurity and FACC > 90% for emergency cesarean section. In women with GDM, third-trimester fetal anthropometric parameters have an important role in predicting adverse neonatal outcomes beyond the impact of maternal predictors. abstract_id: PUBMED:34044480 Cognitive outcomes in late childhood and adolescence of neonatal hypoxic-ischemic encephalopathy. Hypoxic-ischemic encephalopathy (HIE) is the most common cause of neonatal encephalopathy with a global incidence of approximately 1 to 8 per 1,000 live births. Neonatal encephalopathy can cause neurodevelopmental and cognitive impairments in survivors of hypoxic-ischemic insults with and without functional motor deficits. Normal neurodevelopmental outcomes in early childhood do not preclude cognitive and behavioral difficulties in late childhood and adolescence because cognitive functions are not yet fully developed at this early age. Therapeutic hypothermia has been shown to significantly reduce death and severe disabilities in term newborns with HIE. However, children treated with hypothermia therapy remain at risk for cognitive impairments and follow-up is necessary throughout late childhood and adolescence. Novel adjunctive neuroprotective therapies combined with therapeutic hypothermia may enhance the survival and neurodevelopmental outcomes of infants with HIE. The extent and severity of brain injury on magnetic resonance imaging might predict neurodevelopmental outcomes and lead to targeted interventions in children with a history of neonatal encephalopathy. We provide a summary of the long-term cognitive outcomes in late childhood and adolescence in children with a history of HIE and the association between pattern of brain injury and neurodevelopmental outcomes. abstract_id: PUBMED:34024787 Prevalence and predictors of adverse outcomes in neonatal seizures. Objectives: Neonatal seizures are a significant cause of neonatal mortality and morbidity. The current study was planned to assess the prevalence of adverse outcomes in neonatal seizures and identify their predictors. Methods: This observational descriptive study was carried out on 220 neonates with seizures. Neonates who succumbed to illness/death before investigations, or whose maternal records were incomplete, were excluded. Blood sugar, serum calcium, serum electrolytes, and USG skull were done in all patients. CT scan, MRI and inborn errors of metabolism profile were done as and when indicated. Adverse outcomes were defined as death, phenobarbitone non-response, or abnormal examination at discharge. Antenatal, perinatal and neonatal predictors of adverse outcomes in neonatal seizures were evaluated. Results: Out of 220 neonates with seizures, 76 (34.5%) had adverse outcomes.
Very low birth weight babies (≤1500 gm) [OR 1.27 (CI 0.57-2.84)], microcephaly [OR 5.93 (CI 0.55-64.41)], Apgar score ≤3 at 5 minutes [OR 11.28 (CI 14.18-30.45)], seizure onset within 24 hours [OR 5.99 (CI 12.43-14.78)], meningitis [OR 2.63 (CI 0.08-6.39)], septicemia [OR 1.22 (CI 0.45-3.31)] and abnormal cranial USG [OR 7.95 (CI 12.61-24.22)] were significant predictors of adverse outcomes in neonates with seizures. Conclusion: Prematurity, very low birth weight, birth asphyxia, meningitis, septicemia and abnormal USG could predict adverse outcomes in neonatal seizures. Improved antenatal and neonatal clinical practices may help reduce adverse outcomes in these patients. abstract_id: PUBMED:26106032 Eclampsia: Maternal and neonatal outcomes. The objective of our study is to define the maternal and neonatal outcomes associated with eclampsia. This retrospective cohort study was performed using the Consortium on Safe Labor database from 12 clinical centers, including 19 hospitals, from 2002-2008. All patients admitted with a diagnosis of eclampsia or seizure in labor and delivery or postpartum were included in the analysis. Patients with a history of seizure disorder were excluded. Maternal and neonatal outcomes were compared to outcomes of women with preeclampsia and their neonates. Statistical analyses were performed using SAS. Chi square and t-test were used for categorical and continuous variables, respectively. Logistic regression and general linear regression were used to calculate odds ratios and 95% confidence intervals. p<0.05 was considered significant. The eclampsia prevalence was 0.08% (n=191) in our population; the preeclampsia cohort had 7012 women. There were significantly more eclamptic women (49%) delivered by cesarean section, as compared to preeclamptic women (36%), OR 1.7 (1.28-2.28). These women were more likely to have an ICU admission, OR 12.9 (7.0-23.7). The mean gestational ages and birthweights were lower in the neonates of the eclampsia group. A multivariate analysis revealed that low cord arterial pH, low 5-min Apgar score, respiratory distress syndrome OR 5.5 (1.11-27.66) and seizures OR 10.3 (3.12-33.68), p<0.05, were significantly elevated in the eclampsia cohort. The prevalence of eclampsia in our contemporary obstetrics population was 0.08%. Both mothers as well as neonates of eclamptics are at significant risk for complications due to their illness.
Increased newborn time in quiet sleep predicted worse 18-month cognitive and motor scores (robust regression models, adjusted r2 = 0.22, p = .007, and 0.27, .004, respectively). Decreased 0.5-2 Hz electroencephalograph (EEG) power during quiet sleep predicted worse 18-month language and motor scores (adjusted r2 = 0.25, p = .0005, and 0.33, .001, respectively). Predictive values remained significant after adjustment for neonatal Thompson scores or exposure to phenobarbital. Similarly, an attenuated difference in FTOE, between neonatal wakefulness and quiet sleep, predicted worse 18-month cognitive, language, and motor scores in adjusted analyses (each p < .05). Conclusions: These prospective, longitudinal data suggest that inefficient neonatal sleep (as quantified by increased time in quiet sleep, lower electroencephalogram delta power during that stage, and muted differences in FTOE between quiet sleep and wakefulness) may improve prediction of adverse long-term outcomes for newborns with neurological dysfunction. Answer: Contraction patterns in labor, specifically uterine activity, have been studied to determine if they can predict neonatal outcomes. However, the evidence suggests that while certain patterns, such as tachysystole, are associated with adverse neonatal outcomes, these contraction patterns alone are not sufficient to predict neonatal outcomes effectively. A study that examined contraction frequency, duration, relaxation time, Montevideo units (MVUs), and baseline tone found no significant differences between groups with and without adverse neonatal outcomes, except for a higher incidence of tachysystole in the group with adverse outcomes. Despite this association, a model including tachysystole, oxytocin use, and nulliparity did not adequately predict adverse neonatal outcomes, indicating that uterine activity parameters are not reliable predictors (PUBMED:24228752). In summary, while there is an association between certain contraction patterns, such as tachysystole, and adverse neonatal outcomes, these patterns cannot be used reliably to predict neonatal outcomes. Other factors and clinical assessments are likely necessary to improve the prediction of neonatal outcomes.
Instruction: Malpractice in invasive cardiology: is angiography of abdominal aorta or subclavian artery appropriate in patients undergoing coronary angiography? Abstracts: abstract_id: PUBMED:16322917 Malpractice in invasive cardiology: is angiography of abdominal aorta or subclavian artery appropriate in patients undergoing coronary angiography? A meta analysis. Background: Identification of peripheral vascular disease by angiography in patients undergoing coronary angiography may be considered as malpractice but sometimes seems to be justified under clear entry criteria. The present meta-analysis aims to analyze the appropriateness and results of screening angiography of the subclavian artery or abdominal aorta performed at the time of coronary angiography. Methods: A search of published literature for peripheral angiography in patients undergoing coronary angiography over the last 10 years was performed using the MEDLINE database. No language restriction was employed. Only studies enrolling more than 100 patients for abdominal aortography and 50 patients for subclavian/internal mammary artery angiography were considered. Reference lists from identified studies were also reviewed to identify other potentially relevant references. Results: Twenty-nine studies were retrieved: 8 articles about subclavian artery (SA) and internal mammary (IMA) angiography and 21 about renal (RA) and aortoiliac (AOI) angiography. The total number of patients enrolled was 27,936. Nine studies out of 29 were prospective. Defined entry criteria were reported in 24 out of 29 studies. Significant SA and IMA stenosis were reported in 5.5 and 9% of patients, respectively. RA stenosis >50% was present in 12.7% of patients with CAD. Finally, undetected AOI disease was reported in 35.5% of patients undergoing coronary angiography. Mean complication rate was 0.8 +/- 0.6%. Predictors of SA and IMA stenosis were unclear. Age, multi-risk profile, multi-vessel CAD, history of PVD or carotid disease, severe hypertension, unexplained renal dysfunction or decreased creatinine clearance have been reported most frequently as predictors of RA and AOI disease in patients undergoing coronary angiography. Conclusions: Consistent evidence of the appropriateness of renal angiography in selected patients undergoing coronary angiography has been produced in the literature. IMA and AOI angiography seem not to be justified unless they are part of SA angiography in patients scheduled for an arterial conduit with brachial differential pressure, thoracic irradiation or surgery, or of abdominal angiography to detect RA stenosis in laboratories with radiological digital peripheral equipment. abstract_id: PUBMED:32787591 Iatrogenic aortic dissection following transradial coronary angiography in a patient with an aberrant right subclavian artery. An aberrant right subclavian artery is a congenital aortic arch anomaly in which the right subclavian artery originates from the proximal descending aorta. The presence of an aberrant right subclavian artery can make right transradial coronary interventions more difficult and even lead to complications. Iatrogenic intramural hematomas and dissection of aberrant right subclavian arteries during transradial coronary angiography have been previously reported. We herein report a case of iatrogenic aortic dissection following attempts to perform right transradial coronary angiography in a patient with an aberrant right subclavian artery.
Clinicians should be vigilant for the presence of an aberrant right subclavian artery during right transradial coronary angiography and ensure gentle manipulation of wires and catheters to avoid complications. abstract_id: PUBMED:36239814 Frequency of subclavian artery stenosis in patients with mammarian artery coronary bypass and suspected coronary artery disease progression. We retrospectively analyzed patient records of all patients with a history of internal mammarian artery (IMA) coronary bypass undergoing coronary angiography at two cardiovascular centers between January 1st 1999 and December 31st 2019. A total of 11,929 coronary angiographies with or without percutaneous coronary intervention were carried out in 3921 patients. Our analysis revealed 82 (2%) patients with documented subclavian artery stenosis. Of these, 8 (10%) patients were classified as having mild, 18 (22%) moderate, and 56 (68%) severe subclavian artery stenosis. In 7 (9%) patients with subclavian artery stenosis, angiography revealed occlusion of the IMA graft. 26 (32%) patients with severe subclavian artery stenosis underwent endovascular or surgical revascularization of the subclavian artery. In this retrospective multicenter study, subclavian artery stenosis was a relevant finding in patients with an internal mammarian artery coronary bypass graft undergoing coronary angiography. The development of dedicated algorithms for screening and ischemia evaluation in affected individuals may improve treatment of this potentially underdiagnosed and undertreated condition. abstract_id: PUBMED:19381625 An isolated left subclavian artery supplied by a collateral artery from the abdominal aorta. An isolated left subclavian artery is a rare anomaly. We report a 9-month-old boy with an isolated left subclavian artery associated with tetralogy of Fallot and the right aortic arch. MRI and angiography show that the blood supply through the left subclavian artery was maintained by a large tortuous collateral artery from the abdominal aorta. This type of collateral artery structure is unique. abstract_id: PUBMED:18405481 Coronary subclavian steal syndrome: two cases after coronary artery bypass grafting. Reverse flow in the internal mammary artery (IMA) graft due to stenosis or occlusion of the proximal ipsilateral subclavian artery causes coronary subclavian steal syndrome (CSSS). We describe two patients who were diagnosed with CSSS following CABG. Patient A presented with angina pectoris, was diagnosed with CSSS and treated by transposition of the proximal IMA from the subclavian artery to the aorta. Patient B was diagnosed with CSSS by control angiography. Myocardial scintigraphy showed reversible silent ischemia. He was offered treatment, but refused as he was symptom-free. abstract_id: PUBMED:9267355 Coronary artery bypass grafting of Takayasu's disease with calcified aorta and subclavian artery obstruction. The lesions of Takayasu's disease, which may involve the aorta and major arterial branches, may pose technical difficulties during coronary artery bypass grafting (CABG). Because the aorta is often thick and calcified, the coronary arterial lesion is often located near the coronary orifice, and the internal thoracic arteries may not be suitable for grafting due to the lesions in the subclavian arteries. A 63-year-old man with a known history of Takayasu's disease was referred to our department with a chest pain complaint. Coronary angiography revealed obstruction of the left main coronary artery.
CT scan showed that he had a thickened and calcified aorta. Aortography showed that both subclavian arteries had obstructive disease. Instead of using ordinary arterial grafts or vein grafts, a piece of prosthetic patch carrying saphenous vein grafts was sewn to the ascending aorta to construct the proximal anastomosis of CABG. Distal anastomosis was made on the left anterior descending artery and the obtuse marginal branch in the usual fashion. The patient had an uneventful recovery, and a postoperative coronary angiogram showed patent grafts and uncomplicated ventricular performance. This technique is useful when the aorta is not suitable for vein graft anastomosis and arterial grafts are not available, as in this case of aortitis. abstract_id: PUBMED:15353836 Complete revascularization of total obstruction of both subclavian arteries and descending abdominal aorta by combined surgery and percutaneous transluminal angioplasty. A 48-year-old man was admitted with chest pain, intermittent claudication and right upper extremity weakness. Magnetic resonance angiography revealed total occlusion of both subclavian arteries and of the descending abdominal aorta below the renal artery. End-to-side aorta to bifemoral graft surgery and percutaneous transluminal angioplasty of both subclavian arteries were performed. Follow-up angiography after 6 months revealed a patent left subclavian artery but a 70% occluded right subclavian artery. Therefore, re-intervention was performed at the right subclavian artery. abstract_id: PUBMED:30610680 Appropriate use of elective coronary angiography in patients with suspected stable coronary artery disease. Introduction: The American College of Cardiology Foundation (ACCF), with other professional societies, developed appropriate use criteria (AUC) for diagnostic coronary angiography. This study aims to examine the appropriate use of elective coronary angiography for patients with suspected stable coronary artery disease in an Irish tertiary referral center. Methods: A retrospective chart review of all elective outpatient diagnostic coronary angiograms performed in Cork University Hospital in 2014 was carried out. Data on demographics, risk factors, clinical presentation, and prior non-invasive investigations were extracted. Each patient was evaluated according to the 2012 ACCF appropriate use criteria and classified as appropriate, uncertain, or inappropriate. Univariable and multivariable logistic regression analysis was performed to investigate the factors that predict patients undergoing appropriate diagnostic coronary angiography. Results: Of 417 elective outpatient diagnostic coronary angiograms, 259 (62%) were included in the analysis. Of these, 108 (42%) were classified as appropriate, 97 (37%) as uncertain, and 54 (21%) as inappropriate. In multivariable analysis, those with private health insurance were more likely to have an appropriate indication for coronary angiogram (OR 2.1, 95% CI 1.2-3.7) as were those with atrial fibrillation (OR 3.9, 95% CI 1.2-12.8). Ever smokers were less likely to have an appropriate indication for coronary angiogram than never smokers (OR 0.5, 95% CI 0.3-0.9). Conclusion: This is the first Irish study to evaluate the appropriate use of elective coronary angiography. Rates were similar to those of studies in other jurisdictions; however, there remains significant potential for quality improvement. abstract_id: PUBMED:16321663 Vascular profile of patients with multivessel coronary artery disease.
Purpose: Early and accurate diagnosis of peripheral atherosclerosis is of paramount importance for global management of patients with known coronary artery disease (CAD). We sought to retrospectively evaluate the clinical relevance of significant subclavian artery stenosis (SAS) and abdominal vessel stenosis or aneurysm (AVA) in patients undergoing coronary angiography. Subjects And Methods: Medical records of consecutive patients who underwent coronary angiography at two public institutions over a 12-month period were evaluated. Angiographic results of patients who underwent coincident diagnostic abdominal aorta angiography to evaluate abdominal vessels, or subclavian angiography on the basis of clinical and angiographic criteria, were analyzed. Results: During the study period, 724 consecutive patients (535 males, mean age 68.1+/-11 years) were enrolled: significant atherosclerotic disease of at least one arterial segment was observed in 198 patients (27.3%). Angiographically significant SAS was observed in 18/220 patients (8.1%) undergoing concomitant subclavian artery angiography as candidates for coronary surgery using the internal mammary artery. Ten patients with subclavian artery stenosis and upper limb ischemia underwent subclavian artery angioplasty and stenting and bypass surgery using LIMA. AVA were reported in 180 (35.7%) of 504 consecutive patients: renal artery stenosis was found in 13.1% of cases (66 patients), aortoiliac artery disease in 13.7% (69 patients), and aortic aneurysmal disease in 8.9% (45 patients). Logistic regression analyses revealed ≥3-vessel CAD (odds ratio [OR] 9.917, p=0.002), age >60 years (OR 3.817, p=0.036), and ≥3 risk factors (OR 2.8, p=0.048) as independent predictors of SAS and AVA. Conclusion: Patients with multivessel CAD are frequently aged, have a high-risk profile and multiple vascular atherosclerotic distributions, suggesting the usefulness of a more global and comprehensive cardiovascular approach. abstract_id: PUBMED:12667131 Subclavian artery reconstruction in patients undergoing coronary artery bypass grafting. The presence of occlusive disease of the subclavian artery (SCA) proximal to the origin of the internal thoracic artery (ITA) influences the operative strategy and the outcome of coronary artery bypass grafting (CABG). Of 780 patients who underwent CABG, concomitant SCA occlusive lesions were reconstructed in 13 patients (nine males, four females). The affected SCAs were left-sided in 11 patients, and right-sided and bilateral in one each. An aortoaxillary bypass utilizing an 8-mm PTFE graft was constructed in nine patients and a carotid-subclavian (C-S) transposition in two, simultaneously with CABG. Percutaneous balloon angioplasty with a stent was performed in two patients prior to CABG. With follow-up periods ranging from 4 to 8.4 years (mean, 6.3 years), aortoaxillary bypass grafts were patent in all patients. Other reconstructive procedures, including a C-S transposition and balloon angioplasty, were performed safely and effectively in off-pump CABG patients. In six patients, the left internal thoracic artery (LITA) could be used as a graft to the coronary artery after SCA reconstruction. Aortoaxillary bypass using an 8-mm PTFE graft is a safe and effective way for simultaneous subclavian reconstruction in patients undergoing CABG. Mid-term patency of the graft is satisfactory. The LITA can be used as a graft to the coronary arteries in selected patients.
Preoperative brachial angiography is mandatory in these patients. Answer: The appropriateness of performing angiography of the abdominal aorta or subclavian artery during coronary angiography is a matter of clinical judgment based on specific indications and patient profiles. According to a meta-analysis, the screening angiography of the subclavian or abdominal aorta at the time of coronary angiography may sometimes be justified under clear entry criteria (PUBMED:16322917). The study found that significant subclavian artery and internal mammary artery stenosis were reported in 5.5% and 9% of patients, respectively, and undetected aortoiliac disease was reported in 35.5% of patients undergoing coronary angiography. The appropriateness of renal angiography in selected patients undergoing coronary angiography has been consistently supported in the literature. However, internal mammary artery and aortoiliac angiography are not justified unless they are part of subclavian angiography in patients scheduled for arterial conduit with brachial differential pressure, thoracic irradiation, or surgery, or part of abdominal angiography to detect renal artery stenosis in laboratories with radiological digital peripheral equipment (PUBMED:16322917). Additionally, the presence of an aberrant right subclavian artery can complicate right transradial coronary interventions and lead to complications such as iatrogenic aortic dissection (PUBMED:32787591). Subclavian artery stenosis is also a relevant finding in patients with an internal mammarian artery coronary bypass graft undergoing coronary angiography, suggesting the need for dedicated screening and ischemia evaluation algorithms (PUBMED:36239814). Furthermore, cases of coronary subclavian steal syndrome after coronary artery bypass grafting highlight the clinical relevance of subclavian artery stenosis (PUBMED:18405481; PUBMED:9267355). In conclusion, while not all patients undergoing coronary angiography may require additional angiography of the abdominal aorta or subclavian artery, it can be appropriate and justified in selected cases based on patient history, risk factors, and clinical indications. It is not considered malpractice when performed with clear justification and in accordance with established criteria to improve patient outcomes.
Instruction: Is Temporal Lobe Epilepsy with childhood febrile seizures a distinctive entity? Abstracts: abstract_id: PUBMED:21167748 Is Temporal Lobe Epilepsy with childhood febrile seizures a distinctive entity? A comparative study. Objective: Pharmacoresistance continues to be a major challenge in Temporal Lobe Epilepsies (TLE). A key to overcoming pharmacoresistance is to identify subgroups among the TLE and disclose their specific molecular pathways. This will facilitate a tailored pharmacological treatment and improve outcome. There is growing evidence in favor of the theory that TLE with childhood febrile seizures (TLE-FS) may represent one distinctive subgroup among the TLE. Material And Methods: We compared clinical features from 102 TLE-FS patients with 105 TLE patients without FS. We also conducted a logistic regression analysis to adjust for possible confounders caused by overrepresentation of patients with Mesial Temporal Lobe Epilepsy with Hippocampal Sclerosis (MTLE-HS) in the TLE-FS group. Results: MTLE-HS was overrepresented in patients with TLE-FS (p=0.043). Age at epilepsy onset was lower in patients with TLE-FS (p<0.001). TLE-FS patients had a higher frequency of first-degree family members with FS (p=0.003, adjusted for MTLE-HS: p=0.002). They were more frequently plagued with simple partial seizures (p=0.015, adjusted: p=0.038), and especially with vertiginous symptoms (p=0.004; adjusted: p=0.006). They also had a higher frequency of autonomic symptoms (p=0.003; adjusted: p=0.012), and more generalized tonic-clonic seizures (p=0.034; adjusted: p=0.038). Conclusion: We identified TLE-FS as a phenotype that can be delineated from other TLE. None of the characteristics is specific, but we disclosed a set of features that held even after adjustment for MTLE-HS. abstract_id: PUBMED:20662892 Psychological outcome profiles in childhood-onset temporal lobe epilepsy. Purpose: To examine the effect of childhood-onset temporal lobe epilepsy (TLE) on long-term psychological function and to identify outcome profiles related to the natural course and treatment of TLE. Methods: Psychological function was studied in a prospective, community-based cohort of childhood-onset TLE, approximately 13 years following seizure onset. Fifty-three patients were assessed using a semi-structured psychosocial interview, supplemented by self-report questionnaires measuring quality-of-life, depression, self-esteem, and anxiety. Results: Common patterns were observed, giving rise to four distinct patient groups and psychological outcomes: (1) patients who experienced spontaneous remission of their seizures fared best; their psychological profile was characterized by heightened worry about the possibility of seizure recurrence; (2) patients who progressed to surgery and were seizure free reported adjustment difficulties associated with learning to become "well"; (3) patients who progressed to surgery and were not seizure free had the poorest psychological outcomes, with depression featuring prominently; and (4) patients with ongoing intractable epilepsy reported psychological and social features consistent with the effects of their chronic illness. Discussion: Patients with childhood-onset TLE face distinctive long-term psychological challenges. The specific nature of these challenges can be understood in terms of the natural evolution and treatment of their epilepsy. abstract_id: PUBMED:15009232 Benign focal epileptiform discharges of childhood and hippocampal sclerosis.
Purpose: Benign focal epileptiform discharges of childhood (BFEDCs) are common EEG findings between ages 4 and 14 years. This epoch of maturational development overlaps with the age at presentation of temporal lobe epilepsy (TLE) due to hippocampal sclerosis (HS) in children. Methods: From our series of 17 preadolescent children who eventually underwent anteromesial temporal resection for medically refractory TLE due to HS, we identified two children, plus one thereafter, who were initially dismissed as candidates for epilepsy surgery because of abundant extratemporal sharp waves, which were bilateral in two cases. The sharp waves had the distinctive morphology, distribution, and sleep activation suggestive of BFEDCs, but the medical intractability and seizure symptoms were unusual for benign focal epilepsy of childhood. Results: In each case, surgical candidacy was clarified when magnetic resonance imaging (MRI) showed unilateral HS and video-EEG demonstrated seizure onset in the ipsilateral anteromesial temporal region. The postoperative freedom from seizures in each case (follow-up, 2 to 4 years) confirmed that HS was the primary epileptogenic process, and that the BFEDCs were incidental or an atypical secondary manifestation. Conclusions: These cases illustrate the need for more extensive study of children with BFEDCs when medical intractability and seizure symptoms speak against a simple diagnosis of benign focal epilepsy of childhood. In addition, we observed that the BFEDCs in two of our children had an unusual bilateral occipitofrontal distribution, and we speculate that the coexistence of the BFEDCs in children with HS may not be an incidental finding. abstract_id: PUBMED:19606910 Sudden death, febrile seizures, and hippocampal and temporal lobe maldevelopment in toddlers: a new entity. Recently, we reported hippocampal and temporal lobe abnormalities in 5 toddlers with sudden unexplained death in childhood (SUDC). The association of these anomalies with a high incidence (40%) of individual/family histories of simple febrile seizures in the cases raised concern that febrile seizures can be associated with death. In a series of 64 toddlers with sudden death, we tested the hypothesis that an SUDC subset is characterized by hippocampal and temporal lobe maldevelopment and an individual and/or family history of simple febrile seizures. Cases of sudden and unexplained death in children aged 1.0 to 5.9 years (median 1.7 years) were divided into groups based upon a history of febrile or nonfebrile seizures, familial febrile seizures, and autopsy classification of cause of death. Forty-nine of the 64 cases (77%) were classified as SUDC, of which 40% had an individual/family history of febrile seizures. Of the 26 SUDC cases with available hippocampal sections, 62% (16/26) had hippocampal and temporal lobe anomalies, including 82% (9/11) of cases with an individual/family history of febrile seizures. Cases with these anomalies were all found dead during a sleep period, typically in the prone (87%) position. We conclude that a potential new entity may account for the majority of SUDC in toddlers, defined by sleep-related death in the prone position, individual/family history of febrile seizures, and hippocampal and temporal lobe anomalies. The mechanism of death appears analogous to sudden death in (temporal lobe) epilepsy, with a putative unwitnessed seizure during sleep leading to airway occlusion and death.
This study mandates further research into the potential link between simple febrile seizures and death. abstract_id: PUBMED:33773226 Transition from pediatric to adult care in a Japanese cohort of childhood-onset epilepsy: prevalence of epileptic syndromes and complexity in the transition. Aim: We retrospectively examined patients with childhood-onset epilepsy who transitioned from pediatric to adult care to reveal the clinical characteristics and evaluate the complexity of transitioning. Methods: The subjects were 220 patients (89 males, 131 females) who had been treated at our pediatric epilepsy clinic and had transferred to adult care between 2014 and 2018 without attending a transition clinic or program. The demographic data of the patients were retrospectively analyzed. Results: The ages at transition ranged from 15 to 54 years (median: 27 years). There were 91 patients with focal epilepsies (FEs) and 129 patients with generalized epilepsies [genetic generalized epilepsy (GGE) n = 30, generalized epilepsy of various etiologies (GEv) n = 99]. The most frequent epileptic syndrome was temporal lobe epilepsy followed by frontal lobe epilepsy in FEs, GTCS only followed by juvenile myoclonic epilepsy in GGE, and Lennox-Gastaut syndrome followed by Dravet syndrome in GEv. At the age of transition, a total of 77 of the 96 patients with developmental and epileptic encephalopathies (DEE) had pharmacoresistant seizures, which was positively correlated with a late transition age (P≤0.05). More than monthly seizures and greater than moderate disabilities were noted in 45% and 55% of the patients, respectively. Conclusion: The patients with childhood-onset epilepsy who transitioned to adult care from the hospital-based pediatric epilepsy clinic were characterized by a predominance of generalized over focal epilepsy, a frequent complication of DEE, more than monthly seizures, and worse than moderate intellectual disabilities. The complication of DEE made a smooth transition difficult and delayed the transition age. abstract_id: PUBMED:12552029 The pathological basis of temporal lobe epilepsy in childhood. Objective: To characterize the pathologic findings of temporal lobe epilepsy (TLE) in children undergoing temporal lobectomy for refractory seizures and to correlate these findings with clinical presentation. Methods: The authors reviewed the charts of all children who underwent anterior temporal lobectomy for refractory TLE from 1979 through 1999. A new neuropathologic analysis was performed blinded to clinical features and outcome. Results: Twenty-two children met inclusion criteria. Mean age at onset of epilepsy was 3 years, 7 months (range 1 month to 10 years). Mean age at surgery was 10 years, 11 months (range 1 to 18 years). All patients had complex partial seizures, 48% with secondary generalization. Most had daily seizures. Auras were reported in 45% of patients. Post-resection follow-up averaged 5 years, 2 months (range 2 to 19 years). Seizure-free status was achieved in 41% of patients, and 14% had residual auras only. The most frequent neuropathologic abnormalities were cortical dysplasia (CD) of the temporal neocortex (14 of 22) and mesial temporal sclerosis (MTS) (12 of the 15 children with available hippocampal tissue). These two findings coexisted in seven children. MTS was associated with extra-hippocampal pathology in 8 of 12 cases (67%). Conclusions: MTS occurs frequently in association with CD in this population of children.
The high incidence of dual pathology could explain the early age of seizure onset and high seizure frequency rate observed. TLE in childhood may constitute a different entity than in adults, from both the clinical and neuropathologic perspectives. abstract_id: PUBMED:23315620 Is there anything distinctive about epileptic deja vu? Background: Déjà vu can occur as an aura of temporal lobe epilepsy and in some psychiatric conditions but is also common in the general population. It is unclear whether any clinical features distinguish pathological and physiological forms of déjà vu. Methods: 50 epileptic patients with ictal déjà vu, 50 non-epileptic patients attending general neurology clinics and 50 medical students at Edinburgh University were recruited. Data were collected on demographic factors, the experience of déjà vu using a questionnaire based on Sno's Inventory for Déjà Vu Experiences Assessment, symptoms of anxiety and depression using the Hospital Anxiety and Depression Scale as well as seizure characteristics, anti-epileptic medications, handedness, EEG and neuroimaging findings for epileptic patients. Results: 73.5% of neurology patients, 88% of students and (by definition) all epilepsy patients had experienced déjà vu. The experience of déjà vu itself was similar in the three groups. Epileptic déjà vu occurred more frequently and lasted somewhat longer than physiological déjà vu. Epilepsy patients were more likely to report prior fatigue and concentrated activity, associated derealisation, olfactory and gustatory hallucinations, physical symptoms such as headaches, abdominal sensations and fear. After controlling for study group, anxiety and depression scores were not associated with déjà vu frequency. Conclusions: Déjà vu is common and qualitatively similar whether it occurs as an epileptic aura or normal phenomenon. However ictal déjà vu occurs more frequently and is accompanied by several distinctive features. It is distinguished primarily by 'the company it keeps'. abstract_id: PUBMED:17963043 Neuropsychological deficits in childhood epilepsy syndromes. Seizure disorders are relatively common in childhood, and the International League Against Epilepsy (ILAE) provides a hierarchical classification system to define seizure types. At the final level of classification, specific epilepsy syndromes are defined that represent a complex of signs and symptoms unique to an epilepsy condition. The present review discusses the issues related to several of these epilepsy syndromes in childhood, including those classified as generalized idiopathic epilepsies (e.g., childhood absence epilepsy, juvenile absence epilepsy, juvenile myoclonic epilepsy), focal epilepsies (benign rolandic epilepsy, occipital epilepsy, temporal lobe epilepsy, frontal lobe epilepsy) and the "epileptic encephalopathies," including Dravet's Syndrome, West Syndrome, Lennox-Gastaut Syndrome, Myoclonic Astatic Epilepsy, and Landau-Kleffner Syndrome. For each syndrome, the epidemiology, clinical manifestations, treatments, and neuropsychological findings are discussed. abstract_id: PUBMED:11151722 Dysgenetic mesial temporal sclerosis: an unrecognized entity. Mesial temporal sclerosis (MTS) is the most frequently encountered lesion in adult patients with intractable temporal epilepsy; it is found in isolation in approximately two-thirds of surgically treated cases. 
Although the exact etiology of MTS is still controversial, several reports suggest that this pathologic lesion is both the cause and the consequence of chronic seizures and develops progressively during childhood secondary to recurrent seizures. In order to evaluate the clinical importance of MTS in children, we retrospectively reviewed the clinical charts of children who underwent surgery for medically intractable temporal epilepsy and report cases presenting amygdalo-hippocampal dual pathology. Six children aged 1.5-16 years (mean +/- SD: 7.5 +/- 3 years) presenting with partial complex seizures (5 cases) or extension spasms (1 case), with onset from 6 months to the age of 8.5 years (mean seizure onset +/- SD: 3 +/- 5 years) underwent anterior temporal lobectomy including resection of the amygdala and hippocampus. All patients exhibited variable degrees of severity of neuronal loss and gliosis in the amygdala and/or hippocampus. The pathological picture of MTS was not isolated, however. Careful pathological examination has thus shown foci of amygdalo-hippocampal neuronal dysplasia in six patients, with concomitant bilaminated fascia dentata in two cases. Postoperatively, no mortality or morbidity was encountered. After a mean follow-up of 2.5 years, four patients are seizure free. One patient had an 80% rate of improvement in seizure frequency, though still having occasional febrile convulsions. In another patient, complex partial seizures resolved, but rare episodes of absence were still observed. These data are in keeping with the hypothesis that MTS could be secondary to repeated seizures. The analysis of this series of patients could suggest that mesiotemporal dysplastic lesions within the amygdalo-hippocampal structures induce seizures, which, in turn, will favor the development of MTS during childhood. MTS could then lead to synaptic reorganization, which can express abnormal hyperexcitability and result in more recurrent seizures. In this way a vicious circle is set up, which may explain the progression of seizures in some patients. abstract_id: PUBMED:28255730 Medial temporal lobe epilepsy associated with hippocampal sclerosis is a distinctive syndrome. Epileptic syndromes are distinctive disorders with specific features, which, when taken together, permit a specific diagnosis. There is currently debate about whether medial temporal lobe epilepsy with hippocampal sclerosis is an epileptic syndrome. To address this issue, we searched for discriminative semiological features between temporal lobe epilepsy patients with hippocampal sclerosis (TLE-HS patients or group 1), TLE patients with a medial structural lesion other than hippocampal sclerosis, or MRI-negative cases with medial onset on further investigation (group 2), and lateral TLE patients (LTLE or group 3). We retrospectively collected data from medical and EEG-video records of 523 TLE patients, referred for surgery to the Pitié-Salpêtrière Epileptology Unit between 1991 and 2014. We identified 389 patients belonging to group 1, 61 patients belonging to group 2, and 73 patients belonging to group 3 and performed a comparative analysis of their clinical data and surgical outcomes. TLE-HS patients (group 1): (1) began epilepsy earlier (11 ± 9 vs. 20 ± 10 vs.
15 ± 9 years); (2) exhibited more frequently early febrile convulsions (FC) (59 vs 7 vs 5%); (3) presented more: ictal gestural automatisms (90 vs 54 vs 67%), dystonic posturing (47 vs 20 vs 23%), and secondary generalized tonic-clonic seizures (GTCS) (70 vs 44% vs 48%) as compared to both groups 2 and 3 patients (all p < 0.001). With respect to auras, abdominal visceral auras were more reported by TLE-HS than by LTLE patients (49 vs 16%). Three cardinal criteria correctly classified 94% of patients into TLE-HS group: history of FC, dystonic posturing, and secondary GTCS. Postoperative outcome was significantly better in TLE-HS group than in the two other groups (p = 0.03 and 0.003). Our study demonstrates that cardinal criteria are reliably helpful to distinguish patients with TLE-HS from those with other TLE and may allow considering TLE-HS as a distinctive syndrome. Answer: Yes, Temporal Lobe Epilepsy with childhood febrile seizures (TLE-FS) appears to be a distinctive entity. A comparative study found that TLE-FS can be delineated from other types of Temporal Lobe Epilepsies (TLE) based on a set of clinical features, even when adjusted for the presence of Mesial Temporal Lobe Epilepsy with Hippocampal Sclerosis (MTLE-HS). Patients with TLE-FS were more likely to have a lower age at epilepsy onset, a higher frequency of first-grade family members with febrile seizures, a higher frequency of simple partial seizures with vertiginous symptoms, autonomic symptoms, and more generalized tonic-clonic seizures (PUBMED:21167748). Additionally, there is evidence suggesting that TLE associated with hippocampal sclerosis (TLE-HS) is a distinctive syndrome with specific features such as an earlier onset of epilepsy, frequent early febrile convulsions, and particular seizure characteristics like ictal gestural automatisms, dystonic posturing, and secondary generalized tonic-clonic seizures. These features, along with better postoperative outcomes, support the notion that TLE-HS, which often begins with childhood febrile seizures, may be considered a distinctive syndrome (PUBMED:28255730). Moreover, the presence of hippocampal and temporal lobe anomalies in a subset of toddlers with sudden unexplained death in childhood (SUDC) who had a history of febrile seizures suggests a potential link between febrile seizures and significant developmental abnormalities, further indicating that TLE-FS could represent a unique clinical entity (PUBMED:19606910). In summary, the evidence from these studies supports the concept that TLE with childhood febrile seizures constitutes a distinctive entity with specific clinical and pathological characteristics.
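Several of the abstracts above report both crude and confounder-adjusted results; for example, PUBMED:21167748 re-tests each TLE-FS comparison after adjusting for the overrepresentation of MTLE-HS. The short Python sketch below illustrates, on purely synthetic, hypothetical data (it is not the study's dataset, code, or variable set; numpy, pandas and statsmodels are assumed to be available), how such an adjustment is typically made with logistic regression and how the adjusted odds ratio is read off the fitted model.

# Minimal sketch with synthetic data: adjusting an exposure-outcome association
# for a binary confounder using logistic regression (statsmodels).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
mtle_hs = rng.integers(0, 2, n)                    # hypothetical confounder (0/1)
fs = rng.binomial(1, 0.3 + 0.3 * mtle_hs)          # exposure more common when confounder present
logit_p = -1.0 + 0.8 * fs + 0.8 * mtle_hs          # outcome depends on both variables
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"outcome": outcome, "fs": fs, "mtle_hs": mtle_hs})

unadj = smf.logit("outcome ~ fs", data=df).fit(disp=False)
adj = smf.logit("outcome ~ fs + mtle_hs", data=df).fit(disp=False)

print("Crude OR for fs:   ", np.exp(unadj.params["fs"]))
print("Adjusted OR for fs:", np.exp(adj.params["fs"]))
print(np.exp(adj.conf_int().loc["fs"]))            # 95% CI for the adjusted OR

If the odds ratio for the exposure changes little after the confounder is added, the association is unlikely to be explained by that confounder alone; this is the logic behind the "adjusted" p-values quoted in the TLE-FS abstract.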
Instruction: Active treatment given in the last weeks of life: poor quality cancer care or justifiable behavior? Abstracts: abstract_id: PUBMED:24817616 Active treatment given in the last weeks of life: poor quality cancer care or justifiable behavior? Purpose: Chemotherapy near the end of life is frequently considered an indicator of inappropriate aggressiveness. We were interested in revising our prescribing habits and in analyzing the reasons for offering active treatment to patients with advanced cancer. Methods: We examined the electronic medical records of all the cancer patients who died in the Italian Region of Valle d'Aosta in a 1-year period and extracted all the available clinical data. From the 350 deceased patients, we selected the 141 to whom active treatment had been given during the natural history of their disease. Results: Among the patients undergoing any active treatment, the median number of days from the last administration to death was 75. Thirty-seven patients (26.2 %) had their last treatment administration during the 4 weeks before death and 20 (14.2 %) during the last 2 weeks. Fourteen patients (9.9 %) started treatment during the last 4 weeks. When the patients undergoing treatment in the last 4 weeks of life were compared with those subject to earlier withdrawal, only age and pretreatment were statistically significantly different. Most of the treatment choices were considered appropriate, and earlier treatment withdrawal could have been advised only in a minority of the cases. Conclusions: Our data were at the lower range when compared with the available literature. Uncertainties in prognostication and the possibility of response to treatment can justify chemotherapy prescriptions in selected cases. We suggest that the focus should move to the provision of adequate and timely supportive care. abstract_id: PUBMED:35790195 Management of Physical Symptoms in Patients with Advanced Cancer during the Last Weeks and Days of Life. Patients with advanced cancer are faced with many devastating symptoms in the last weeks and days of life, such as pain, delirium, dyspnea, bronchial hypersecretions (death rattle) and intractable seizures. Symptom management in the last weeks of life can be particularly challenging because of the high prevalence of delirium complicating symptom assessment, high symptom expression secondary to psychosocial and spiritual factors, limited life-expectancy requiring special considerations for prognosis-based decision-making, and distressed caregivers. There is a paucity of research involving patients in the last weeks of life, contributing to substantial variations in clinical practice. In this narrative review, we shall review the existing literature and provide a practical approach to in-patient management of several of the most distressing physical symptoms in the last weeks to days of life. abstract_id: PUBMED:27738795 The role of palliative care in the last month of life in elderly cancer patients. Introduction: One major health care issue encountered in elderly cancer patients is the alteration of the quality of life. The purpose of our study is to evaluate the administration of chemotherapy in the last month of life (CLML) and to evaluate the impact of the palliative care consult (PCC) in elderly patients. Methods: We conducted a retrospective observational study that included elderly patients diagnosed with end-stage cancer who died between the 1st of January 2012 and the 31st of December 2015.
Patient medical records were reviewed for patients' characteristics and management during the last month of life. Results: This study enrolled 231 patients that fulfilled the eligibility criteria. CLML was administered in 91 patients (39.4 %) among which 43 patients (47.3 %) had their treatment within the last 2 weeks of life. Seventy-seven patients (33.3 %) had a palliative care consult (PCC) with a median duration of follow up of 13 days (range 2-56 days). Overall, PCC failed to decrease CLML administration, the duration of hospitalization, and ICU admissions. However, CLML administration decreased by 69 % among patients that had their PCC before receiving treatment (OR = 0.31; 95 % CI 0.15-0.63). PCC also led to a change in the pattern of treatment administered in the last month of life with less cytotoxic therapy (OR = 0.27 CI 95 % 0.09-0.9, p = 0.02) and higher rates of oral agents being prescribed (OR = 3.8; 95 % CI 1.3-11.3, p = 0.014). Conclusion: Our elderly patients seem to receive aggressive management similar to the general oncology population. Early PCC was shown throughout our results to decrease the aggressiveness of cancer treatment in elderly patients which seems to improve the quality of care of our patients. abstract_id: PUBMED:24020919 Unmet quality indicators for metastatic cancer patients admitted to intensive care unit in the last two weeks of life. Background: The majority of U.S. cancer patients express the desire to die at home, though most do not, and are often subjected to ineffective therapies near the end of life (EOL). In 2008 the U.K. Department of Health and National Health Care Service implemented a set of 10 quality markers to improve EOL care (EOLC) for seriously ill persons. Data show that this intervention has had a positive impact on patients' EOLC. Purpose: The study assessed the quality of EOLC received by terminally ill cancer patients admitted to the intensive care unit (ICU) in the last two weeks of life. Design: A retrospective chart review was done of the electronic medical record (EMR) of 2498 patients admitted to the ICU from January to August 2011. The six U.K. quality indicators pertaining to patients were used to assess quality of EOLC. Setting: The setting was a tertiary academic medical center with 663 beds and 66 adult ICU beds in northern California. Patients: EMR analysis identified 2498 patients admitted to the ICU during the study period--232 died within two weeks of admission. Sixty-nine decedents had metastatic cancer. Of the patients, 58% were male, average age 59.8 years (range 25-91). Results: Quality indicators were met in a relatively small percentage of patients admitted to the ICU in the last two weeks of life: prognostication 67%, advance care planning 32%, goals of care 42%, caregiver needs 0%, coordination of care across organizational boundaries 7%, and standardized care pathway implementation 58%. Palliative care consultations occurred in 28 patients. Conclusions: Quality indicators for EOLC were unmet in cancer patients admitted to the ICU in the last two weeks of life. Hospital-wide provider education about the need for early advance care planning with all seriously ill patients has been implemented in an effort to improve quality of care at EOL. abstract_id: PUBMED:12505207 Quality of life of palliative care patients in the last two weeks of life. Quality of life (QOL) is the main consideration in caring for advanced cancer patients, yet little is known about the QOL in the terminal phase. 
We profiled the QOL of 58 advanced cancer patients during their last 2 weeks of life using the McGill QOL questionnaire-Hong Kong version. The patients provided ratings of QOL an average of 5.6 (median 6) days pre-death. Palliative care services were successful in maintaining the total QOL score during the dying phase. The mean score was 7.0 of 10. Among the various domains, the physical and existential domains scored relatively poorly at 5.9 and 6 of 10, respectively. The worst physical symptom and meaning of life were the individual items with the poorest scores (4.8 and 5.4 of 10, respectively). Compared with admission, there was statistically significant improvement in the worst physical symptom (P = 0.02) and eating item (P = 0.002), but deterioration in physical well-being (P = 0.03), meaning of existence (P = 0.007), and satisfaction with oneself (P = 0.04). In conclusion, QOL evaluation during the terminal phase identifies important aspects requiring improvement during the last two weeks of life. Physical and existential domains of dying cancer patients needed more attention. abstract_id: PUBMED:38471241 Aggressive anticancer treatment in the last 2 weeks of life. Background: There is a concern that terminally ill cancer patients may be aggressively treated due to the rapidly growing possibilities of anticancer treatment. The aim of this study was to evaluate the use of anticancer treatment at the end of life (EoL). Materials And Methods: This retrospective study included adult patients with advanced solid cancers who were treated at the Institute of Oncology Ljubljana and died of cancer between January 2015 and December 2019. A multiple logistic regression model was used to assess an association between the aggressiveness of anticancer treatment (i.e. systemic therapy, radiotherapy and surgery) in the last 2 weeks of life and year of death, age at death, sex, prognosis of cancer and enrolment into the specialist palliative care (SPC). Results: We included 1736 patients in our analysis. Overall, 13.7% of patients were enrolled into the SPC and 14.4% received anticancer treatment in the last 2 weeks of life. The odds of receiving anticancer treatment significantly increased over time [odds ratio (OR) 1.15, 95% confidence interval (CI) 1.04-1.27]. There was an increased use of novel systemic therapy (e.g. small-molecule targeted therapy and immunotherapy) at the EoL. Older patients had significantly lower odds to receive anticancer treatment in the last 2 weeks of life as compared to younger patients (OR 0.96, 95% CI 0.95-0.98). As compared to patients receiving only a standard oncology care, those also enrolled into the SPC had significantly lower odds for anticancer treatment in the last 2 weeks of life (OR 0.22, 95% CI 0.12-0.43). Conclusions: Terminally ill cancer patients have increased odds for receiving anticancer treatment, especially novel systemic therapies, in the last 2 weeks of life. Younger patients and those not enrolled into the SPC are at particular risk for anticancer treatment at the EoL. abstract_id: PUBMED:32576290 Health- and social care in the last year of life among older adults in Sweden. Background: In the last years of life, burden of disease and disability and need of health- and social care often increase. Social, functional and psychological factors may be important in regard to social- and health care utilization. 
This study aims to describe use of health- and social care during the last year of life among persons living in ordinary housing or in assisted living facilities. Methods: A retrospective study examining health- and social care utilization during the last year of life, using a subsample from the Swedish twin registries individually linked to several Swedish national quality registries (NQR). Persons who died during 2008-2009 and 2011-2012 (n = 1518) were selected. Results: Mean age at death was 85.9 ± 7.3 years (range 65.1-109.0). Among the 1518 participants (women n = 888, 58.5%), 741 (49%) were living in assisted living facilities and 1061 (69.9%) had at least one hospitalization during the last year of life. The most common causes of death were cardiovascular disease (43.8%) and tumors (15.3%). A multivariable logistic regression revealed that living in ordinary housing, younger age and higher numbers of NQRs increased the likelihood of hospitalization. Conclusions: Persons in their last year of life consumed a high amount of health- and social care, although 12% did not receive any home care. Married persons received less home care than never-married persons. Persons living in ordinary housing had higher numbers of hospitalizations compared to participants in assisted living facilities. Older persons and persons registered in fewer NQRs were hospitalized less often. abstract_id: PUBMED:32321483 Early palliative care versus standard care in haematologic cancer patients at their last active treatment: study protocol of a feasibility trial. Background: Patients with advanced haematological malignancies suffer from a very high symptom burden and psychological, spiritual, social and physical symptoms comparable with patients with metastatic non-haematological malignancy. Referral to palliative care services for these patients remains limited or often confined to the last days of life. We developed a palliative care intervention (PCI) integrated with standard haematological care. The aim of the study was to explore the feasibility of the intervention for patients, professionals and caregivers and to assess its preliminary efficacy. Methods/design: This is a mixed-methods phase 2 trial. The Specialist Palliative Care Team (SPCT) will follow each patient on a monthly basis in the outpatient clinic or will provide consultations during any hospital admission. SPCT and haematologists will discuss active patient issues to assure a team approach to the patient's care. This quantitative study is a monocentric parallel-group superiority trial with balanced randomisation comparing the experimental PCI plus haematological standard care versus haematological standard care alone. The primary endpoint will be adherence to the planned PCI, measured as the percentage of patients randomised to the experimental arm who attend all the planned palliative care visits in the 24 weeks after randomisation. The qualitative study follows the methodological indications of a concurrent nested design and was aimed at exploring the acceptability of the PCI from the point of view of patients, caregivers and physicians. Discussion: In this trial, we will test the feasibility of an integrated palliative care approach starting when the haematologist decides to propose the last active treatment to the patient, according to his/her clinical judgement. We decided to test this criterion because it is able to intercept a wide range of patients' needs.
The feasibility of this approach requires that we enrol at least 60 patients and that more than 50% of them be followed by the palliative care team for at least 24 weeks. The trial will include integrated qualitative data analysis; to give essential information on feasibility and acceptability. Trial Registration: ClinicalTrials.gov: NCT03743480 (November 16, 2018). abstract_id: PUBMED:30086748 Trends in treatment during the last stages of life in end-stage gynecologic cancer patients who received active palliative chemotherapy: a comparative analysis of 10-year data in a single institution. Background: Palliative chemotherapy should be used with caution when attempting to alleviate symptoms in patients with end-stage cancer. However, palliative chemotherapy continues to be utilized in cancer patients during their last stages of life. In this study, we analyzed the pattern of chemotherapy administered during the last 6 months of life in patients with end-stage gynecologic cancer who were treated with active palliative chemotherapy for the past 10 years. Method: We retrospectively analyzed the data for patients with gynecologic cancer who died after undergoing active palliative chemotherapy without receiving hospice management at Asan Medical Center from 2006 to 2015. Patients were divided into two groups: those who died between 2006 and 2010, and those who died between 2011 and 2015. Based on the electronic medical records, the demographic and baseline characteristics of the patients, hospital admission during the last 6 months, invasive procedures, palliative chemotherapy patterns, and the time of the last chemotherapy session were confirmed. Results: A total of 193 patients with gynecologic cancer were eligible for this study. 92 patients died during 2006 to 2010, and 101 patients died during 2011 to 2015. The mean frequency of admission during the last 6 months was 5.12 for those who died in 2006-2010 and 6.06 for those who died during 2011-2015 (p = 0.003); similarly, the mean frequency of palliative chemotherapy during the last 6 months was 3.84 (2006-2010) vs. 4.93 times (2011-2015; p < 0.001). The proportion of patients undergoing invasive procedures during the last 3 months was 41.3% (2005-2010) vs. 56.4% (2011-2015; p = 0.044). Conclusions: The frequency of palliative chemotherapy and the rate of invasive procedures have increased in patients with end-stage gynecologic cancer who were treated aggressively without hospice management over 2011-2015 when compared to 2006-2010, along with an increase in the mean frequency of admission during the last 6 months at our institution. Gynecologic oncologists need to evaluate whether active palliative chemotherapy is beneficial to patients at the end-of-life stage, and if not helpful, should communicate with the patients and caregivers about when the palliative chemotherapy should be discontinued. abstract_id: PUBMED:16119667 Assessment of quality of life, pain and effectiveness of treatment in palliative care patients. Purpose: Evaluation of quality of life, appraisal of pain quality and intensity, assessment of treatment and care effectiveness in palliative care patients treated at the in-patient Palliative Care Department in Czestochowa Province Hospital. Material And Methods: The study was performed in 50 randomly chosen patients at the in-patient Palliative Care Department in Czestochowa Province Hospital. The studied group comprised 22 women and 28 men. 
The trial lasted from October 2003 to April 2004 and was a longitudinal study. At the first assessment patients completed the Modified Sheet Pain Assessment, the Support Team Assessment Schedule (STAS) and the Rotterdam Symptom Checklist (RSCL). At the second, third and fourth appraisals patients completed the RSCL and STAS. Results: Among patients surveyed with the STAS at the second assessment, 52% had very high scores (poor effectiveness of treatment and care), 32% had high scores (unsatisfactory treatment and care), and 15% had average scores (average treatment and care). Results of the RSCL indicate a decrease in physical activity and global quality of life in terminal patients. At the fourth assessment, after 4 weeks of treatment, nearly 80% of patients assessed their physical state as low. Conclusions: The results indicate that patients have poor performance status, no effective treatment is provided, psychological state is significantly impaired, and patients were forced to withdraw from social life because of cancer progression. Answer: Active treatment given in the last weeks of life is a complex issue that can be viewed as either poor quality cancer care or justifiable behavior, depending on various factors and perspectives. On one hand, chemotherapy near the end of life is often considered an indicator of inappropriate aggressiveness. A study examining the prescribing habits for patients with advanced cancer found that a significant proportion of patients received active treatment in the last weeks of life, with some starting treatment during the last four weeks. However, most of these treatment choices were deemed appropriate, and earlier treatment withdrawal was advised only in a minority of cases. The study suggests that uncertainties in prognostication and the possibility of response to treatment can justify chemotherapy prescriptions in selected cases, and emphasizes the importance of providing adequate and timely supportive care (PUBMED:24817616). Conversely, aggressive management, including chemotherapy in the last month of life (CLML), is common among elderly cancer patients, similar to the general oncology population. Early palliative care consults (PCC) have been shown to decrease the aggressiveness of cancer treatment in elderly patients, which seems to improve the quality of care (PUBMED:27738795). Moreover, aggressive anticancer treatment in the last two weeks of life has been associated with increased use of novel systemic therapies, with younger patients and those not enrolled in specialist palliative care (SPC) being at particular risk for such treatment at the end of life (PUBMED:38471241). Quality indicators for end-of-life care (EOLC) were found to be unmet in cancer patients admitted to the ICU in the last two weeks of life, suggesting a need for early advance care planning to improve the quality of care at the end of life (PUBMED:24020919). Additionally, palliative care services have been successful in maintaining the total quality of life (QOL) score during the dying phase, although physical and existential domains required more attention (PUBMED:12505207). In summary, while there are justifiable reasons for active treatment in the last weeks of life, such as potential response to treatment and uncertainties in prognostication, the trend towards aggressive management and the underutilization of palliative care services indicate that there is room for improvement in end-of-life care practices.
The focus should be on individualized decision-making, early integration of palliative care, and ensuring that treatment aligns with the patient's goals and quality of life.
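One statistic cited above, the odds ratio of 1.15 per calendar year for anticancer treatment in the last 2 weeks of life (PUBMED:38471241), is easier to grasp when compounded over several years. The snippet below is a small illustrative calculation only; the 5-year horizon is an arbitrary choice for illustration, not a figure from the study, and the compounding assumes the model's log-linear trend holds over that period.

# Illustrative arithmetic: how a per-year odds ratio compounds over time,
# assuming the log-linear trend of the regression model holds.
per_year_or = 1.15   # per-year OR reported in PUBMED:38471241
years = 5            # hypothetical horizon, chosen only for illustration
print(per_year_or ** years)   # ~2.01: the odds roughly double over five years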
Instruction: Are breastfeeding problems related to incorrect breastfeeding technique and the use of pacifiers and bottles? Abstracts: abstract_id: PUBMED:9534504 Are breastfeeding problems related to incorrect breastfeeding technique and the use of pacifiers and bottles? Background: In Western countries during the 1960s and 1970s, sore nipples and insufficient milk were common problems that made it hard for mothers to maintain breastfeeding for long. This study investigated the relationship of breastfeeding problems to nursing behavior and pacifier use. Methods: Fifty-two healthy mother-infant pairs with breastfeeding problems were referred for observation of nursing behavior to a breastfeeding clinic at the Department of Pediatrics of Malmö General Hospital, Malmö, Sweden, from August 1987 to July 1989. The infants ranged in age from 1 to 17 weeks. A faulty nursing pattern was corrected as necessary. Forty mother-infant pairs with no breastfeeding problems provided a control group. Results: In most cases the nursing problems were related to incorrect sucking technique. The difference in technique between the study group and the control group was significant (p = 0.0001). The continuation of breastfeeding was poorer if the infant had already become used to bottle-feeding. Pacifier use was more common in conjunction with breastfeeding problems and in cases with a faulty superficial nipple-sucking technique. Conclusion: Breastfeeding problems may be prevented by the adoption of hospital routines that do not interfere with the start of breastfeeding and by the avoidance of extensive use of pacifiers. abstract_id: PUBMED:21256098 Use of pacifiers and breastfeeding. Pacifiers are widely used in developed societies. They are used to soothe infants, reduce their stress and pain during procedures, and help them sleep. The use of pacifiers has been associated, however, with a shorter duration and exclusivity of breastfeeding, with higher rates of otitis media and dental problems, as well as a higher risk of accidents during infancy. Recent studies have also described a relationship between pacifier use and SIDS, especially when used during the infant's sleep. Other described benefits are analgesia and stimulation of non-nutritive sucking in preterm and term infants. There is, at present, wide debate and controversy on whether or not to recommend its use; thus it seems important for professionals and parents to be aware of the risks and benefits associated with its use, particularly related to breastfeeding. Given the existing controversy in the scientific findings, the Committee on Breastfeeding of the Spanish Paediatrics Society publishes this review, which summarises the present evidence and, after analysing the scientific results and existing recommendations, makes recommendations regarding the use of the pacifier in the breastfed infant. abstract_id: PUBMED:9271978 Breastfeeding and the use of pacifiers. Background: This study was designed to test the hypothesis that incorrect sucking technique and pacifier use are factors contributing to breastfeeding failure. Method: The nursing patterns of 82 exclusively breastfeeding mother-infant pairs were observed 4 to 5 days postpartum on the maternity ward at University Hospital in Malmö, Sweden. The breastfeeding outcome and pacifier use were assessed by regular telephone contacts during a 4-month follow-up. Results: The breastfeeding rate at 4 months was 91 percent in the nonpacifier group and 44 percent in the pacifier group (p = 0.03).
An incorrect superficial nipple-sucking technique at the breast from the start combined with pacifier use resulted in early weaning in most cases. Conclusion: To promote successful breastfeeding and to reduce nursing problems, an incorrect sucking technique should be prevented or corrected, and the use of pacifiers should be avoided or restricted. abstract_id: PUBMED:14581850 Does the use of pacifiers shorten breastfeeding duration in infants? Breastfeeding has a number of benefits for both mother and baby, which appear to increase with longer duration of breastfeeding. The use of pacifiers has been associated with shortened breastfeeding duration. The aim of this review was to determine whether the use of pacifiers actually shortens breastfeeding duration in infants. The Cochrane Library, Medline, CINAHL and Embase databases were searched for systematic reviews, randomized controlled trials and cohort studies examining the effect of pacifier use on breastfeeding duration. After applying appropriate inclusion criteria, only two cohort studies were identified. In one, the overall breastfeeding duration was longer in non-pacifier-using infants compared to those given pacifiers (10 months vs. 7.5 months). The mean breastfeeding duration in children who were given pacifiers before 6 weeks of age was 168.5 days compared to 196.0 days for children who had a pacifier introduced later or never. However, there is no evidence to suggest any effect from occasional pacifier use, and it remains unclear whether pacifiers are an independent causal factor for reducing breastfeeding duration in infants. abstract_id: PUBMED:19278381 How are effective breastfeeding technique and pacifier use related to breastfeeding problems and breastfeeding duration? Background: Inconsistent findings leave uncertainty about the impact of pacifier use on effective breastfeeding technique. The purpose of this study was to investigate how breastfeeding technique and pacifier use were related to breastfeeding problems and duration of breastfeeding. Methods: Data were collected from the intervention group of a randomized trial in which health visitors followed up with mothers for 6 months after childbirth. The health visitors classified the breastfeeding technique at approximately 1 week after birth and repeated the observation if a correction was necessary. Effective technique included positioning, latch, sucking, and milk transfer. Data on breastfeeding problems and pacifier use were obtained from self-reported questionnaires. The study population included 570 mother-baby pairs with complete information on breastfeeding technique and pacifier use. The primary outcome was duration of exclusive breastfeeding. Results: One-half of the mothers showed ineffective breastfeeding technique at the first observation, most frequently ineffective position (61%) and latch (52%). In the unadjusted analysis, only sucking and milk transfer were associated with breastfeeding duration. In the adjusted analysis, ineffective technique was significantly associated with mothers reporting early breastfeeding problems, which thereby influenced the breastfeeding duration. Pacifier use had an independent negative impact on duration of breastfeeding. A single correction of the breastfeeding technique was not associated with duration or occurrence of problems.
Conclusions: Observation of breastfeeding technique may help mothers in the stage when they are establishing breastfeeding to avoid early and later problems, but breastfeeding technique is less useful in predicting breastfeeding duration. Use of a pacifier should be avoided in the first weeks after birth. abstract_id: PUBMED:28841401 Pacifiers and Exclusive Breastfeeding: Does Risk for Postpartum Depression Modify the Association? Background: Postpartum depression is associated with lower rates of breastfeeding. Evidence describing the effect of pacifiers on breastfeeding is inconsistent, and previous research suggests that pacifiers may help vulnerable mothers breastfeed. Research aim: This study aimed to determine (a) how receiving a pacifier in the hospital affects exclusive breastfeeding (EBF) at 1 week and 3 months postpartum and (b) whether this association is modified by risk for postpartum depression (PPD). Methods: Data were derived from Listening to Mothers III. We included participants (n = 1,349) who intended to breastfeed and delivered at term. Mothers were considered at high risk for PPD if they reported feeling the need to receive treatment for depression during pregnancy. We used weighted multivariable logistic regression models to adjust for covariates. Results: Receiving a pacifier in the hospital was not significantly associated with EBF at 1 week (odds ratio [OR] = 0.84, 95% confidence interval [CI] [0.62, 1.12]) but was significantly associated with lower odds of EBF at 3 months postpartum (OR = 0.72, 95% CI [0.54, 0.95]). Risk for PPD modified this association. Among mothers at high risk for PPD, receiving a pacifier was significantly associated with increased odds of EBF (OR = 3.31, 95% CI [1.23, 8.97] at 1 week and OR = 5.27, 95% CI [1.97, 14.12] at 3 months); however, among mothers who were at lower risk for PPD, receiving a pacifier was associated with decreased odds of EBF (OR = 0.75, 95% CI [0.56, 1.02] at 1 week and OR = 0.62, 95% CI [0.46, 0.82] at 3 months). Conclusion: Pacifiers may help protect against early cessation of EBF among mothers at high risk for depression. Additional research is needed to better understand this association. abstract_id: PUBMED:29434650 Breastfeeding technique and associated factors among breastfeeding mothers in Harar city, Eastern Ethiopia. Background: Ineffective breastfeeding technique is one of the factors contributing to mothers practicing non-exclusive breastfeeding. Inappropriate breastfeeding technique is the leading cause of nipple pain in Ethiopia, as in other countries. However, no studies have examined breastfeeding technique and associated factors in eastern Ethiopia. Therefore, this study was conducted with the aim of assessing breastfeeding technique and associated factors among breastfeeding mothers. Method: We conducted an institution-based cross-sectional study in selected governmental health facilities of Harar city, Eastern Ethiopia, from January to February 2017. A systematic random sampling technique was used to select 422 study participants. Data were collected using a pretested observational checklist and interviewer-administered questionnaires containing sociodemographic, maternal and infant characteristics. The variables positioning, attachment and suckling were used to assess the outcome variable, breastfeeding technique. Descriptive, bivariate and multivariate logistic regression analysis was done to identify independent predictors of breastfeeding technique (BFT) after controlling for confounding variables.
Result: The proportion of mothers practicing an effective breastfeeding technique was 43.4% (179/412). Effective breastfeeding technique was 2.3 times more common among mothers with at least secondary school educational status compared to mothers with no formal education (Adjusted Odds Ratio [AOR] 2.3; 95% Confidence Interval [CI] 1.1, 3.9). The practice of effective breastfeeding technique was significantly associated with mothers who had immediate breastfeeding technique counseling after birth (AOR 1.7, 95% CI 1.1, 2.8) and at least two postnatal visits (AOR 5.9; 95% CI 2.1, 15.9) compared to one visit. Absence of breast problems and having previous breastfeeding experience were also associated with the likelihood of effective breastfeeding technique practice (AOR 4.0; 95% CI 1.4, 10.9) and (AOR 3.3; 95% CI 1.1, 10.7) respectively. Conclusion: The practice of effective breastfeeding technique was low. Effective breastfeeding technique practice was associated with higher educational status, previous information about breastfeeding technique, previous breastfeeding experience, absence of breast problems, receiving breastfeeding technique counseling immediately after birth and at least two postnatal visits. Therefore; health services should provide education about effective breastfeeding techniques and ensure postnatal care for all women, particularly primipara. abstract_id: PUBMED:37550129 The effects of the breastfeeding problems management model on breastfeeding problems, breastfeeding motivation and breastfeeding success: A randomized controlled trial. Purpose: The majority of women experience breastfeeding problems and need professional support during the lactation period. We developed a Nurse-led Breastfeeding Problems Management Model to address this issue. The components of the model included meeting women face-to-face, giving education using a booklet, providing breastfeeding consultancy on the phone, and conducting home visits. This study aims to determine the effects of the Breastfeeding Problems Management Model (BPMM) on breastfeeding problems, motivation, and success. Design And Methods: This randomized controlled trial included 50 women (25 intervention and 25 control). The intervention group received BPMM under the leadership of a nurse during the early postnatal period, before discharge, and in the 1st, 2nd, 6th, and 8th weeks of the postpartum period. The control group received routine care. The Breastfeeding Experience Scale (BES) was applied to both groups in the 1st, 2nd, 6th, and 8th weeks of the postpartum period. LATCH Assessment Tool was administered to both groups before discharge and in the 8th week of the postpartum period. The Breastfeeding Motivation Scale (BMS) was administered to both groups in the 8th week of the postpartum period. Results: The LATCH post-test scores were significantly higher in the intervention group than in the control group (intervention 9.72 ± 0.54, control 8.60 ± 1.25). In all postpartum weeks, BES scores were significantly lower in the intervention group than in the control group. Except for the introjected regulation-social approval and introjected regulation-social pressure sub-scale mean scores of the BMS, all the other sub-scales in the intervention group were significantly higher than in the control group. Conclusions: The BPMM was found to decrease and eliminate breastfeeding problems and increase breastfeeding motivation and breastfeeding success in women. 
Practice Implications: The BPMM, under the leadership of a nurse, should be considered part of routine postpartum care for preventing and eliminating breastfeeding problems and increasing breastfeeding motivation and breastfeeding success. Clinical Trials Registration: The study was registered at ClinicalTrials.gov (NCT05543902). abstract_id: PUBMED:28449646 When is the use of pacifiers justifiable in the baby-friendly hospital initiative context? A clinician's guide. Background: The use of pacifiers is an ancient practice, but often becomes a point of debate when parents and professionals aim to protect and promote breastfeeding as the most appropriate way of nurturing infants. We discuss the current literature available on pacifier use to enable critical decision-making regarding justifiable use of pacifiers, especially in the Baby-Friendly Hospital Initiative context, and we provide practical guidelines for clinicians. Discussion: Suck-swallow-breathe coordination is an important skill that every newborn must acquire for feeding success. In most cases the development and maintenance of the sucking reflex is not a problem, but sometimes the skill may be compromised due to factors such as mother-infant separation or medical conditions. In such situations the use of pacifiers can be considered therapeutic and even provide medical benefits to infants, including reducing the risk of sudden infant death syndrome. The argument opposing pacifier use, however, is based on potential risks such as nipple confusion and early cessation of breastfeeding. The Ten Steps to Successful Breastfeeding as embedded in the Baby-Friendly Hospital Initiative initially prohibited the use of pacifiers in a breastfeeding-friendly environment to prevent potential associated risks. This article provides a summary of the evidence on the benefits of non-nutritive sucking, risks associated with pacifier use, an identification of the implications regarded as 'justifiable' in the clinical use of pacifiers and a comprehensive discussion to support the recommendations for safe pacifier use in healthy, full-term, and ill and preterm infants. The use of pacifiers is justifiable in certain situations and will support breastfeeding rather than interfere with it. Justifiable conditions have been identified as: low-birth-weight and premature infants; infants at risk for hypoglycaemia; preterm infants in need of oral stimulation to develop, maintain and mature the sucking reflex; and the achievement of neurobehavioural organisation. Medical benefits associated with the use of pacifiers include providing comfort, contributing towards neurobehavioural organisation, and reducing the risk of sudden infant death syndrome. Guidelines are presented for assessing and guiding safe pacifier use, for specific design to ensure safety, and for cessation of use to ensure normal childhood development. abstract_id: PUBMED:12227560 Using pacifiers: what are breastfeeding mothers doing? The objective of this study was to identify the impact of pacifier use on the duration of breastfeeding amongst Australian women. A cohort of 556 mothers who delivered in Perth, Western Australia, was recruited to study their infant feeding practices. The mothers were interviewed in hospital and again at 2, 6, 10, 14, 18 and 24 weeks postpartum, or until they ceased to breastfeed. At two weeks, 62% of breastfed babies were using a pacifier, increasing to a peak of 78% at six weeks.
Infants who were using a pacifier had slightly fewer feeds each day at every age period (for example 6.9 versus 7.4 feeds at six weeks of age), but there was no difference in the number of night feeds. A recent study suggested that the mothers resorted to the use of pacifiers when they were having problems breastfeeding, and any impact of pacifiers on breastfeeding duration was due to confounding factors. However in this longitudinal study, after adjusting for the presence of breastfeeding problems, the use of a pacifier at two weeks was associated with reduced likelihood of breastfeeding to six months (odds ratio 0.40; 95%CI 0.25-0.63). Based on the results of this study we concluded that the use of a pacifier at two weeks of age reduced the likely duration of breastfeeding to six months. A possible mechanism of action was the reduced number of daily feeds in breastfed infants that would reduce breast stimulation. If mothers choose to use a pacifier they should introduce it later and use it infrequently. Answer: Yes, breastfeeding problems are related to incorrect breastfeeding technique and the use of pacifiers and bottles. Studies have shown that most nursing problems are related to incorrect sucking technique, and the use of pacifiers was more common in conjunction with breastfeeding problems and cases with a faulty superficial nipple-sucking technique (PUBMED:9534504). Additionally, the use of pacifiers has been associated with a shorter duration and exclusivity of breastfeeding (PUBMED:21256098). An incorrect superficial nipple-sucking technique at the breast from the start combined with pacifier use resulted in early weaning in most cases (PUBMED:9271978). Furthermore, the use of pacifiers has been negatively associated with breastfeeding duration, although the evidence is not conclusive on whether pacifiers are an independent causal factor for reducing breastfeeding duration in infants (PUBMED:14581850). Ineffective breastfeeding technique has been significantly associated with mothers reporting early breastfeeding problems, which thereby influenced the breastfeeding duration, and pacifier use had an independent negative impact on the duration of breastfeeding (PUBMED:19278381). However, the relationship between pacifier use and breastfeeding may be more complex, as one study found that among mothers at high risk for postpartum depression, receiving a pacifier was significantly associated with increased odds of exclusive breastfeeding, suggesting that pacifiers may help protect against early cessation of breastfeeding in this particular group (PUBMED:28841401). In summary, breastfeeding problems can be related to incorrect breastfeeding technique and the use of pacifiers and bottles, and these factors can negatively impact breastfeeding success and duration. However, the context and individual circumstances, such as the mother's risk for postpartum depression, may modify the association between pacifier use and breastfeeding outcomes.
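Many of the breastfeeding studies above summarize their findings as odds ratios with 95% confidence intervals, for example the OR of 0.40 (95% CI 0.25-0.63) for pacifier use at two weeks in PUBMED:12227560. As a reminder of where such figures come from, the following Python sketch computes an unadjusted odds ratio and Wald confidence interval from a 2x2 table; the counts are hypothetical and are not taken from any of the cited studies.

# Minimal sketch: unadjusted odds ratio and 95% Wald CI from a 2x2 table.
# All counts below are hypothetical, for illustration only.
import math

a, b = 30, 20   # pacifier users:     weaned early / still breastfeeding
c, d = 15, 35   # non-pacifier users: weaned early / still breastfeeding

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")

Adjusted estimates, such as the AORs reported in PUBMED:29434650 and the adjusted analysis in PUBMED:12227560, come from logistic regression rather than a single 2x2 table, but the odds ratio is interpreted in the same way.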
Instruction: Variability of surgical knot tying techniques: do we need to standardize? Abstracts: abstract_id: PUBMED:32532699 Optimizing Assessment of Surgical Knot Tying Skill. Objectives: Knot tying is a fundamental surgical skill. Existing knot tying models assess tying efficiency and errors but do not address respect for tissue. Development of a model that assesses tissue displacement during knot tying may provide a good surrogate for respect for tissue, allow detection of expertise, and offer an improved training platform for skill acquisition. This study aimed to develop a novel, low-cost, knot tying board (KTB) that collects objective, automated metrics of knot displacement, and assesses knot displacement by level of surgical expertise. Design: The novel KTB was developed in collaboration between engineering students and surgical educators. Joystick potentiometers were incorporated on 2 parallel rubber tubes to measure vertical and horizontal displacement while tying. Participants used a standardized technique to tie 1- and 2-handed knots. Differences in time and tubing displacement were compared among junior residents (postgraduate year 1-2), senior residents (postgraduate year 3-5), and attending surgeons; p < 0.05 was considered statistically significant. Setting: This study was conducted at the Indiana University Surgical Skills Center in Indianapolis, IN. Participants: Forty-seven residents and faculty participated in the study (26 juniors, 14 seniors, 7 attendings). Results: KTB development required 100 hours and $70.00. The attending surgeons tied 2-handed knots faster and with significantly more vertical tubing displacement than residents. Senior residents tied knots significantly faster but with similar tubing displacement as juniors. Similar trends were found for 1-handed knots. Conclusions: A novel, low-cost KTB was developed to measure knot displacement as a surrogate for tissue handling. The new performance metric of vertical knot displacement proved more sensitive in detecting performance differences among groups compared with horizontal knot displacement. This board and its novel metrics may promote the development of robust knot tying skill by residents. abstract_id: PUBMED:38035223 Nodo-Tie: an innovative, 3-D printed simulator for surgical knot-tying skills development. Introduction: Clinical simulators are an important resource for medical students seeking to improve their fundamental surgical skills. Three-dimensional (3-D) printing offers an innovative method to create simulators due to its low production costs and reliable printing fidelity. We aimed to validate a 3-D printed knot-tying simulator named Nodo-Tie. Methods: We designed a 3-D printed knot-tying simulator integrated with a series of knot-tying challenges and a designated video curriculum made accessible via a quick-response (QR) code. The Nodo-Tie, which costs less than $1 to print and assemble, was distributed to second-year medical students starting their surgical clerkship. Participants were asked to complete a survey gauging the simulator's usability and educational utility. The time between simulator distribution and survey completion was eight weeks. Results: Students perceived the Nodo-Tie as easy-to-use (4.6 ± 0.8) and agreed it increased both their motor skills (4.5 ± 0.9) and confidence (4.5 ± 0.8) for tying surgical knots in the clinical setting. 
Many students agreed the Nodo-Tie provided a stable, durable surface for knot-tying practice (83.7%, n = 41) and that they would continue to use it beyond their participation in the study period (91.7%, n = 44). Discussion: Medical students found this interactive, 3-D printed knot-tying simulator to be an effective tool to use for self-directed development of their knot-tying skills. Given the Nodo-Tie's low cost, students were able to keep the Nodo-Tie for use beyond the study period. This increases the opportunity for students to engage in the longitudinal practice necessary to master knot-tying as they progress through their medical education. Key Messages: Clinical simulators provide proactive learners with reliable, stress-free environments to engage in self-directed surgical skills development. The Nodo-Tie, a 3-D printed simulator, serves as a cost-effective, interactive tool for medical students to develop their knot-tying abilities beyond the clinical setting. abstract_id: PUBMED:33549063 Differences of tensile strength in knot tying technique between orthopaedic surgical instructors and trainees. Background: Knot tying technique is an extremely important basic skill for all surgeons. Clinically, knot slippage or suture breakage will lead to wound complications. Although some previous studies described the knot-tying technique of medical students or trainees, little information has been reported on the knot-tying technique of instructors. The objective of the present study was to assess surgeons' manual knot tying techniques and to investigate the differences of tensile strength in knot tying technique between surgical instructors and trainees. Methods: A total of 48 orthopaedic surgeons (postgraduate year: PGY 2-18) participated. Surgeons were requested to tie surgical knots manually using the same suture material. They were divided into two groups based on career stage: instructors and trainees. Although four open conventional knots with four throws were chosen and done with self-selected methods, knot-tying practice aimed at producing appropriate square knots was provided as education only to trainees before the actual trial. The knots were placed over a 30 cm long, custom-made, smooth polished surface with two cylindrical rods. All knots were tested for tensile strength using a tensiometer. The surgical loops were loaded until the knot slipped or the suture broke. The tensile strength of each individual knot was defined as the force (N) required to result in knot failure. Simultaneously, knot failure was evaluated based on knot slippage or suture rupture. In terms of tensile strength or knot failure, statistical comparison was performed between groups using the two-tailed Mann-Whitney U test or Fisher's exact probability test, respectively. Results: Twenty-four instructors (PGY6-PGY18) and 24 trainees (PGY2-PGY5) were enrolled. Tensile strength was significantly greater in trainees (83.0 ± 27.7 N) than in instructors (49.9 ± 34.4 N, P = 0.0246). The ratio of slippage was significantly larger in instructors than in trainees (P < 0.001). Knot slippage (31.8 ± 17.7 N) was significantly worse than suture rupture (89.9 ± 22.2 N, P < 0.001) in tensile strength. Conclusions: Mean tensile strength of knots done by trainees after practice was judged to be greater than that done by instructors in the present study. Clinically, knot slippage can lead to wound dehiscence, compared to suture rupture.
abstract_id: PUBMED:29650503 Instructional Video and Medical Student Surgical Knot-Tying Proficiency: Randomized Controlled Trial. Background: Many senior medical students lack simple surgical and procedural skills such as knot tying. Objective: The aim of this study was to determine whether viewing a Web-based expert knot-tying training video, in addition to the standard third-year medical student curriculum, will result in more proficient surgical knot tying. Methods: At the start of their obstetrics and gynecology clerkship, 45 students were videotaped tying surgical knots for 2 minutes using a board model. Two blinded female pelvic medicine and reconstructive surgery physicians evaluated proficiency with a standard checklist (score range 0-16) and anchored scoring scale (range 0-20); higher numbers represent better skill. Students were then randomized to either (1) expert video (n=26) or (2) nonvideo (n=24) groups. The video group was provided unlimited access to an expert knot-tying instructional video. At the completion of the clerkship, students were again videotaped and evaluated. Results: At initial evaluation, preclerkship cumulative scores (range 0-36) on the standard checklist and anchored scale were not significantly different between the nonvideo and video groups (mean 20.3, SD 7.1 vs mean 20.2, SD 9.2, P=.90, respectively). Postclerkship scores improved in both the nonvideo and video groups (mean 28.4, SD 5.4, P<.001 and mean 28.7, SD 6.5, P=.004, respectively). Increased knot board practice was significantly correlated with higher postclerkship scores on the knot-tying task, but only in the video group (r=.47, P<.05). Conclusions: The addition of a Web-based expert instructional video to a standard curriculum, coupled with knot board practice, appears to have a positive impact on medical student knot-tying proficiency. abstract_id: PUBMED:25697509 An objective assessment tool for basic surgical knot-tying skills. Objective: To determine if a knot-tying checklist can provide a valid score and if the checklist can be used by novice surgeons in a reliable manner. Methods: This study was conducted at the Surgical Skills Center at the University of California, San Francisco. A knot-tying checklist was developed from a kinesthetic knot-tying curriculum. Novice (67 first-year medical students) and experienced surgeons (8 residents postgraduate year 3 and higher and 2 attending physicians) were videotaped performing 4 knot-tying tasks, and the videotapes were rated with a global score and a checklist by interns (n = 3) and experienced (n = 3) surgeons. Results: Both interns and experienced surgeons can use the knot-tying checklist with acceptable reliabilities (>0.8 with 3 raters). The checklist is able to differentiate between novice and experienced surgeons, when used by both interns and experienced raters. The expert knot-tying score correlated with the global score overall (r = 0.88) and for each task (r was 0.82 for task 1, 0.85 for task 2, 0.80 for task 3, and 0.81 for task 4). Conclusions: The knot-tying checklist provides a valid score for basic surgical knot-tying and can be used by novice and experienced raters. Its use supports peer assessment of performance in a surgical skills laboratory setting. abstract_id: PUBMED:28017412 Intracorporal knot tying techniques - which is the right one? Introduction: Intracorporal knot tying (ICKT) and suturing in minimal invasive surgery (MIS) represent a key skill for advanced procedures such as Nissen fundoplication. 
Suture placement under tension is particularly challenging during MIS. The aim of this study was to compare ICKT of the common surgical square knot and the slipping knot on a simulated suture placement under tension. Furthermore, we objectively assessed the surgical skill of ICKT following 3 hours of hands-on training. Methods: A box trainer was used for laparoscopic knot tying with predefined openings. A 12 cm suture was placed in central view. We used a standardized silicone suture pad with a defined wound dehiscence of 0.5 cm and marked needle entrance and exit targets next to the incision. Twenty participants were divided among three groups in this study. The first group (n=5) consisted of senior physicians. The second group (n=5) was made up of surgical residents in the first to fourth year of residency training. The third group (n=10) contained medical students between their third and sixth year of study without any prior experience in laparoscopic surgery. Residents and students received a 3-hour hands-on training session in surgical square and slipping knot tying. Each participant tied two of each knot type before and after the hands-on training. Knot quality, performance, total time and suture placement accuracy were the parameters for assessment in this study. Results: The knot quality was greater for the slipping knot compared with the square knot in all groups. There were no significant intragroup differences in knot tying performance, task time and accuracy of both suture methods. Students and residents improved in all categories for both ICKT techniques after training. Discussion: We compared ICKT of the surgical square knot with the slipping knot on a simulated suture placement under tension during a standardized training program for medical students and surgical residents. In our study, the average quality of the slipping knot was significantly superior to the square knot in all participants. The knot tying performance, task time, and accuracy of students and residents after hands-on training were not significantly different between both suture methods. This suggests that the two ICKT techniques have similar properties except for the quality advantage of the slipping knot. Although students and residents improved in all categories for both ICKT techniques after training, they did not achieve expert level for task times and accuracy. Conclusions: In this pilot trial, the quality of the slipping knot was superior to the surgical square knot during intracorporal suturing under tension. Our 3-hour sequential training program improved laparoscopic suturing and knot tying skills in all participants. Our findings suggest that training centers for MIS using validated training programs should be a continuous and integral part of surgical education. Level Of Evidence: USPSTF Level III: monocentric, prospective, pilot trial. abstract_id: PUBMED:28769832 Supporting Third Year Medical Students' Skill Acquisition and Self-Efficacy with Coping Models and Process Feedback during Laparoscopic Knot Tying Simulation. Background: During the third year general surgery clerkship, medical students are required to develop laparoscopic knot-tying skills. Knot-tying skills studies often rely on objective variables (e.g., time, materials used, number of iterations) that lend themselves to correlational analysis of pre- and post-intervention skill level.
This study differs by examining how instructional interventions (role modeling and feedback) affect medical students' skill acquisition and self-efficacy during a laparoscopic surgical simulation training session. Methods: Seventy-eight surgical clerkship students were assigned randomly to one cell of a 2x2 factorial design. Participants observed one of two types of role modeling (expert vs. coping) and received either process-oriented or outcome-oriented feedback during a 30-min laparoscopic training session. Participants also completed several surveys that assessed their interest in surgery and their self-efficacy for laparoscopic knot tying. Results: Coping model groups tended to perform better on the knot tying task, though this was less the case in the presence of outcome feedback. Expert model groups slightly outperformed the coping model group on the peg transfer task, but in the presence of outcome feedback they reported the lowest satisfaction with their performance and the lowest self-efficacy for the knot tying task. The coping model combined with process feedback had a positive influence on students' efficiency in learning the task, on their satisfaction with their performance, and on their self-efficacy for laparoscopic knot tying. Conclusions: Results are discussed relative to self-regulated learning theory. abstract_id: PUBMED:35467247 A knot-pusher technique with hand-tying precision through a mini-thoracotomy. This study documents the application of a knot-pusher technique via a mini-thoracotomy with the traditional one-handed knot-tying rationale using an existing long-shaft knot-pushing device. This technique achieved the typical hand-tying precision and secured tight knots in minimally invasive cardiac surgery. abstract_id: PUBMED:26622113 Towards a better knot: Using mechanics methods to evaluate three knot-tying techniques in laparo-endoscopic single-site surgery. Introduction: Knot tying is difficult but important for laparo-endoscopic single-site surgery (LESS). There are several techniques for LESS knot-tying. However, objective assessment of these skills has not yet been established. The aim of this study was to assess three different knot-tying techniques in LESS using mechanical methods. Materials And Methods: The subject tied 24 knots, eight knots with each of the three techniques in an inanimate box laparoscopic trainer while the movements of their instruments were evaluated using a LESS mechanical evaluation platform. The operations were assessed on the basis of the time and the average load of the dominant hand. Then, the forces that caused the knots to rupture were measured using a material testing system and used to compare the knots' strength. Results: The intracorporeal one-hand knot-tying technique presented significantly better time and average load scores than the extracorporeal knot-tying technique (P < 0.01), and the intracorporeal side winding technique was more time and average load consuming in comparison to other techniques during the performance of knot-tying (P < 0.01). The intracorporeal one-handed knot-tying knots can tolerate greater distraction forces compared with the intracorporeal side winding knot-tying knots and the extracorporeal knot-tying knots (P < 0.05). Conclusions: The intracorporeal one-hand knot-tying technique and knots showed better results than the intracorporeal "side winding" technique and the extracorporeal knot-tying technique in terms of the time, average load taken and the force that caused the knot to rupture.
abstract_id: PUBMED:29460691 A New Needle Holder Design to Facilitate Intracorporeal Knot Tying. Purpose: Intracorporeal knot tying in laparoscopic surgery continues to be a problem especially for beginners and inexperienced surgeons. A wide-angle needle holder was designed to make the knot maneuver easier while also ensuring that the knot does not come out of the needle holder. In this study, it was planned to compare the wide-angle needle holder with the classic needle holder in regard to knot tying time. Material And Method: A total of 11 male volunteers were randomly selected from freshmen students of the faculty of medicine, who had no experience of surgery or laparoscopic surgery. After the required training and practice, candidates were asked to tie 3 knots each in the training box using a classic needle holder and a wide-angle needle holder. Their knot tying times were recorded. Results: Although the students had no experience, it was observed that they tied knots more easily and more comfortably using the wide-angle needle holder. It was found that the knot tying times with the wide-angle needle holder were quite short compared with the classic needle holder in all candidates. This difference was also statistically significant ( P = .01). Conclusion: We believe and claim that the use of a wide-angle needle holder during knot tying in laparoscopic surgery can facilitate knot tying and shorten the duration of the knotting, especially for inexperienced surgeons. Answer: The variability in surgical knot-tying techniques and the need for standardization is a topic of ongoing discussion in the medical community. The studies provided offer insights into different aspects of surgical knot tying, including the development of assessment tools, the impact of training and skill levels, and the introduction of innovative devices and methods to facilitate learning and performance. The development of a novel, low-cost knot tying board (KTB) that measures knot displacement as a surrogate for tissue handling suggests that there is room for improvement in assessing surgical knot-tying skills, with a focus on detecting performance differences among various levels of surgical expertise (PUBMED:32532699). Similarly, the Nodo-Tie, a 3-D printed knot-tying simulator, demonstrates the potential for innovative, cost-effective tools to aid in the self-directed development of knot-tying skills for medical students (PUBMED:38035223). Studies have also shown differences in tensile strength between knots tied by surgical instructors and trainees, indicating that there may be variations in technique that affect the clinical outcomes, such as knot slippage or suture breakage (PUBMED:33549063). Additionally, the use of instructional videos has been shown to improve medical students' knot-tying proficiency when coupled with practice (PUBMED:29650503). The use of a knot-tying checklist as an objective assessment tool suggests that there is a need for standardized metrics to evaluate basic surgical knot-tying skills (PUBMED:25697509). Furthermore, the comparison of intracorporeal knot-tying techniques highlights the importance of selecting the right technique for specific surgical situations, with some techniques showing superior quality or performance after training (PUBMED:28017412). 
Interventions such as role modeling and feedback during training sessions have been studied for their effects on skill acquisition and self-efficacy, suggesting that instructional methods can influence learning outcomes (PUBMED:28769832). Moreover, the application of a knot-pusher technique via a mini-thoracotomy has been documented, which achieves hand-tying precision in minimally invasive cardiac surgery (PUBMED:35467247). Finally, a study evaluating three knot-tying techniques in laparo-endoscopic single-site surgery using mechanical methods indicates that certain techniques may be more efficient and produce stronger knots than others (PUBMED:26622113).
Instruction: Is rectal disimpact always necessary in children with chronic constipation? Abstracts: abstract_id: PUBMED:20414661 Is rectal disimpact always necessary in children with chronic constipation? Evaluation with pelvic ultrasound. Purpose: The aim of the study was to evaluate if pelvic ultrasound can be useful in managing children with chronic idiopathic constipation. Methods: A total of 270 children with idiopathic chronic constipation were enrolled in the study. At baseline and at monthly checkups, children were evaluated by clinical score and pelvic ultrasound (US). Patients were divided into 2 groups based on pelvic US results: group A with a rectal diameter >3 cm, group B with a rectal diameter <3 cm or rectum not visualized. Both groups were subsequently randomly divided into two subgroups (A1, A2, B1, B2) on the basis of the prescribed treatment (disimpaction for the first week and daily laxative or only daily laxative). Results: After 1 month of therapy, all clinical features improved in group A1 and, at pelvic US, rectal size reduced and became not visualized; group A2 showed poor clinical response and the transverse diameter of the rectum did not change significantly; B1 and B2 groups showed significant improvement only after 2-3 months. Conclusions: Rectal disimpaction is necessary only in the presence of megarectum (MR). Pelvic US is useful to diagnose MR and to set up the most appropriate treatment protocol for different chronic constipation cases. abstract_id: PUBMED:36536742 Treatment of internal rectal prolapse in children: A cross sectional study. Objectives: Internal rectal prolapse in children is one of the causes of annoying and drug-resistant constipation. The disease causes a type of obstructive constipation due to the appearance of mucosal folds in the distal rectum. If the diagnosis is made in time according to the methods proposed by the author, the treatment can also be done with simple techniques. In this article, the treatment methods for this well-known disease are presented. Methods: This is a prospective cross-sectional study. Sixty children (36 girls and 24 boys) between the ages of 6 months and 15 years, who had specific symptoms of this disease at different degrees, were treated, after definitive diagnosis under anesthesia with Delshad-mash, with a sclerosing solution injected under the rectal mucosa or with excision of the prolapse. Results: Depending on the severity of the disease, treatment with a subcutaneous injection of sclerosing solution responds well. In high degrees of prolapse, the condition resolves with mucosal excision, even after years of suffering. Conclusion: Diagnosis of internal rectal prolapse can be made in children based on specific symptoms and barium enema radiology before the examination under anesthesia; this determines the patient's treatment path and prevents additional operations, such as colostomy and pull-through, resulting from misdiagnosis of Hirschsprung disease. abstract_id: PUBMED:27610352 Solitary rectal ulcer syndrome: Is it really a rare condition in children? Aim: To evaluate the clinicopathologic characteristics of the children with solitary rectal ulcer. Methods: Fifty-five children with a confirmed diagnosis of solitary rectal ulcer were studied over a period of 11 years, from March 2003 to March 2014. All data were collected from the patients, their parents and medical records in the hospital. Results: Of the 55 studied patients, 41 were male (74.5%) and 14 female (25.5%).
The mean age of the patients was 10.4 ± 3.7 years and the average time period from the beginning of symptoms to diagnosis of solitary rectal ulcer was 15.5 ± 11.2 mo. The most common clinical symptoms in our patients were rectal bleeding (n = 54, 98.2%) and straining during defecation or forceful defecation (n = 50, 90.9%). Other symptoms were as follows: sense of incomplete evacuation (n = 34, 61.8%), mucorrhea (n = 29, 52.7%), constipation (n = 14, 25.4%), tenesmus and cramping (n = 10, 18.2%), diarrhea (n = 9, 16.4%), and rectal pain (n = 5, 9.1%). The colonoscopic examination revealed 67.3% ulcer, 12.7% polypoid lesions, 10.9% erythema, 7.3% both polypoid lesions and ulcer, and 1.8% normal. Most of the lesions were in the rectosigmoid area at a distance of 4-6 cm from the anal margin. Finally, 69.8% of the patients recovered successfully with conservative, medical and surgical management. Conclusion: The study revealed that solitary rectal ulcer is not so uncommon despite what was seen in previous studies. As the most common symptom was rectal bleeding, clinicians and pathologists should be familiar with this disorder and common symptoms in order to prevent its complications with early diagnosis. abstract_id: PUBMED:32766189 A Clinical Entity Often Missed-Solitary Rectal Ulcer Syndrome in Children. Background: Solitary Rectal Ulcer Syndrome (SRUS) is a relatively uncommon and easily misdiagnosed clinical entity in children. The diagnosis of this condition is often delayed due to lack of clinical suspicion. Only case series were available and no definitive treatment was postulated. Here, we share our experience of SRUS in our institute and review the literature published so far. Aim: To study the clinical profile and treatment response of Solitary Rectal Ulcer Syndrome in Children (SRUS). Materials: The clinical profile and 1-year follow-up response of the diagnosed cases of SRUS over a period of 5 years were retrospectively collected from the medical record department. Results: The median age of presentation among 24 children was 8 years, with the majority (75%) above 5 years. All children presented with intermittent rectal bleeding with a median duration of 5.5 months. The other presenting symptoms documented were hard stool (79%), mucorrhea (70%), and abdominal pain (58%). One child presented with rectal prolapse. On colonoscopy, 46% had a single ulcer while another 46% had multiple ulcers and 8% had a polypoidal lesion. All lesions were within the distal rectum and had a characteristic histological pattern. All children were treated with conventional treatment such as dietary fiber and laxatives along with toilet training. About 75% of children attained remission and 25% relapsed but responded to corticosteroid enema. None required surgery. Conclusion: Conventional treatment itself induces and maintains remission in most SRUS patients if instituted early. Thus, early suspicion and diagnosis is needed to achieve remission. abstract_id: PUBMED:27086003 Rectal Prolapse in Children: Significance and Management. Rectal prolapse is a herniation of the rectum through the anus. It is rare in children. When it does occur, it is usually prior to 4 years of age and due to anatomical variants. A few conditions predispose children to rectal prolapse, the most common being constipation. Cystic fibrosis used to be commonly associated with rectal prolapse, but with the advent of cystic fibrosis newborn screening, this association is no longer as frequently seen.
Many recent case reports, detailed in this chapter, describe conditions previously unknown to be associated with rectal prolapse. Management is usually supportive; however, rectal prolapse requires surgical management in certain situations. This review details the presentation of rectal prolapse, newly described clinical manifestations, and associated conditions, and up-to-date medical and surgical management. abstract_id: PUBMED:33936361 Solitary rectal ulcer syndrome in children and adolescents: a descriptive clinicopathologic study. Solitary rectal ulcer syndrome (SRUS) is an uncommon disorder of the rectum. While benign, it can cause concern for patients and affect quality of life. Reported studies on SRUS worldwide are scarce. The aim of this study is to describe the clinicopathologic characteristics of SRUS in a cohort of children based in Saudi Arabia. In this study, children with a confirmed diagnosis of SRUS at King Abdulaziz University Hospital (KAUH) were included, during the period November 2003 to November 2017. Data were collected from hospital medical records. The study comprised twenty-one patients: 17 males (81%) and 4 females (19%); the median age was 11.4 years (range, 5.43-17.9 years). The most common presenting symptoms were rectal bleeding in 21 patients (100%), passage of mucus in 16 (76.1%), abdominal pain in 14 (66.6%), constipation in 13 (61.9%), straining in 9 (42.9%), and rectal prolapse in 5 (23.8%). The most common finding at initial colonoscopy was a single ulcer in 7 patients (33.3%), multiple ulcers in 6 (28.5%), polypoid lesions in 5 (23.8%), and hyperemic mucosa in 3 (14.2%). All patients received medical treatment and 14 (81%) continued to manifest one or more of the symptoms following treatment, which required subsequent modification of the treatment course. None of the patients required surgery. In conclusion, the study found rectal bleeding to be the most common presentation, with a single ulcer being the most prevalent lesion in endoscopy. Treatment response was variable, but almost half of patients reported relief of symptoms following treatment. abstract_id: PUBMED:28712179 Clinical Spectrum Of Solitary Rectal Ulcer In Children Presenting With Per-Rectal Bleed. Background: Solitary rectal ulcer syndrome (SRUS) is a benign and chronic disorder well known in young adults and less common in children. The objective of this study was to determine the frequency and clinical spectrum of solitary rectal ulcer in children with bleeding per rectum. Methods: This study was conducted in the Department of Paediatric Gastroenterology Hepatology&amp; Nutrition; The Children's Hospital &amp; The Institute of Child Health, Lahore, from January-December 2015. Total 187 children presenting with per-rectal bleeding who underwent colonoscopy were entered in the study. Demographic and presenting clinical features; colonoscopy and histopathology findings were recorded. Data was analysed using SPSS-20. Results: Out of a total of 187 children with bleeding per rectum, 21 (11.23%) were diagnosed with solitary rectal ulcer. Males were 15 (71.43%) and females were 6 (28.57%) with age range 8-12 years. Mucus in stool 14 (66.7%), constipation 12 (57.1%) and tenesmus 10 (47.6%) were the most common clinical presentations. 
Regarding colonoscopic findings, a solitary erythematous ulcerative lesion was seen in 8 (38.09%) children, multiple ulcerative lesions in the colon in 6 (28.57%), multiple ulcerative lesions in the rectum in 5 (23.81%), and a polypoidal growth in the colon and hyperaemic rectal mucosa in 1 (4.76%) each. Histopathological findings were consistent with SRUS in all the cases. Conclusions: The frequency of SRUS was high (19.6%) in patients with per-rectal bleed. Mucus in stool, constipation and tenesmus were the most common clinical presentations. Colonoscopic and histopathological findings were helpful in the confirmation of the underlying aetiology. abstract_id: PUBMED:3067883 Rectal prolapse in children. Review of 260 cases. Rectal prolapse is mostly seen in constipated children after prolonged straining at stool. In Morocco, it also occurs in children suffering from digestive parasitosis or in hypotrophic children following episodes of acute diarrhea. In order to evaluate the respective role of these factors and consequently adapt the management of this condition, a prospective study of 260 cases was conducted between 1979 and 1986, addressing etiologic factors on the one hand and assessing the efficacy of medical treatment and of sclerosing injections on the other. As regards etiology, diarrhea in connection with digestive parasitosis appears as a predominating factor (62%). A deficient condition was also noted in 67 out of 160 children (42%). Medical treatment was successful only in 28% of cases, whereas 98% of our patients were cured by sclerosing injections without the need for surgical treatment. abstract_id: PUBMED:23473901 Rectal examination in children: digital versus transabdominal ultrasound. Purpose: We investigated 2 diagnostic tests to assess the rectal filling state. Materials And Methods: The rectal filling state was assessed with transabdominal ultrasound or with digital rectal examination by 2 independent investigators in children with urological problems before a scheduled diagnostic or surgical urological procedure. A dilated rectum filled with stool or large amounts of (usually) hard stool were both considered a rectal fecal mass. All investigations were performed with the patient under general anesthesia. The kappa test was used to evaluate agreement between transabdominal ultrasound and digital rectal examination. Results: A total of 84 children (54 boys) with a median (p25-p75) age of 9.0 years (6.4-11) were eligible candidates. A rectal mass was found on transabdominal ultrasound and digital rectal examination in 32% and 41% of all children, respectively, with agreement between the 2 tests in 82.5%. Cohen's kappa showed good agreement of 0.62 (95% CI 0.45-0.79) between transabdominal ultrasound and digital rectal examination. The median (IQR) diameter of the rectum was 3.3 cm (2.8-3.9) in children with a full rectum, and 2.5 cm (1.8-2.8) and 2.0 cm (1.5-2.2) in patients with a half filled and empty rectum, respectively. Conclusions: Transabdominal ultrasound is a noninvasive and reliable alternative to assess the rectal filling state, and might replace digital rectal examination in the evaluation of children with constipation. abstract_id: PUBMED:34882271 Evaluation of diagnostic factors used to refer children with constipation for rectal biopsies. Purpose: Children with constipation and suspected Hirschsprung's disease are referred for rectal biopsy. Since this is an invasive procedure, appropriate indications should be applied to minimize the number of "unnecessary" biopsies.
Methods: We reviewed all constipated children who underwent a rectal biopsy to diagnose a possible Hirschsprung's disease at a tertiary referral hospital over a 6-year period (2013-2018). We registered clinical and demographic factors in these children and conducted correlation and multivariate regression analysis to evaluate the relation between these factors and a diagnosis of Hirschsprung's disease. Results: We identified 225 children, aged 0-17 years. In total, Hirschsprung's disease was diagnosed in only 49/225 (22%). Among the 49 children with Hirschsprung's disease, 29 (59%) were diagnosed in the neonatal period. Among girls, HD was confirmed in only 10/101 (10%) children, and only 1 of these 10 girls was older than 6 months at the time of the biopsy. The following factors correlated significantly with Hirschsprung's disease diagnosis in children older than 1 month: "male sex", "failure to thrive", "gross abdominal distention plus vomiting" and "fulfils the Rome 4 criteria for functional constipation". Conclusion: In children referred for rectal biopsy, the factors most indicative of Hirschsprung's disease were "male sex", "failure to thrive", "gross abdominal distention plus vomiting" and "fulfils the Rome 4 criteria for functional constipation". Notably, the prevalence of Hirschsprung's disease decreased with the increasing age of the children. Girls referred for a biopsy rarely had Hirschsprung's disease, especially those older than 1 month. Answer: No, rectal disimpact is not always necessary in children with chronic constipation. According to a study evaluating the use of pelvic ultrasound in managing children with chronic idiopathic constipation, rectal disimpaction was deemed necessary only in the presence of a rectal diameter greater than 3 cm as measured by pelvic ultrasound. The study divided children into two groups based on pelvic ultrasound results: group A with a rectal diameter >3 cm and group B with a rectal diameter <3 cm or rectum not visualized. Both groups were further divided into subgroups based on the prescribed treatment, which included disimpaction for the first week and daily laxative or only daily laxative. The results showed that after 1 month of therapy, all clinical features improved in group A1 (who received disimpaction and laxative) and the rectal size reduced and became not visualized at pelvic ultrasound. In contrast, group A2 (who received only laxative) showed poor clinical response and the transverse diameter of the rectum did not modify significantly. Groups B1 and B2 (with smaller rectal diameter) showed significant improvement only after 2-3 months. The study concluded that pelvic ultrasound is useful to diagnose megarectum (MR) and to set up the most appropriate treatment protocol for different chronic constipation cases, suggesting that disimpaction is necessary only in the presence of MR (PUBMED:20414661).
Instruction: Laryngeal effect of experimental postnatal hypothyroidism: do thyroid hormone receptors change? Abstracts: abstract_id: PUBMED:27107603 Laryngeal effect of experimental postnatal hypothyroidism: do thyroid hormone receptors change? Objectives: In this study, we aimed to investigate the laryngeal histopathological alterations and thyroid hormone receptors in rats with experimentally-induced postnatal hypothyroidism. Materials And Methods: In this prospective, randomized study, pregnant Wistar albino rats were followed and newborn 20 Wistar albino rat pups were included in the study. The pups were randomly divided into two groups: In group 1 (methimazole (MMI)-induced hypothyroidism group), the mothers and pups were given MMI added water up to 90th day, as the pups were fed with breast milk for 19 to 22 days. In group 2 (control group), the mothers and pups were fed with MMI-free water up to 90th days. When the pups were 90 days of age, they were decapitated and their larynx was removed. Their larynx was evaluated for edema, inflammation, goblet cells, and thyroid hormone receptors (TR-α, TR-β). Results: Nine larynx samples for group 1 and eight for group 2 were studied. There was a significant difference in inflammation between the groups with slightly lower in the hypothyroidism group (p=0.009). The TR-α, TR-β, and edema were significantly higher in the hypothyroidism group (p=0.002, p=0.029, p=0.029). There was no significant difference in the Goblet cells between the groups (p=0.637). Conclusion: Histopathologically increased laryngeal edema and increased thyroid hormone receptors were found, shedding light on the mechanism of voice changes in hypothyroidism. abstract_id: PUBMED:27404574 Inflammation severely alters thyroid hormone signaling in the central nervous system during experimental allergic encephalomyelitis in rat: Direct impact on OPCs differentiation failure. Differentiation of oligodendrocyte precursor cells (OPCs) into myelinating oligodendrocytes is severely impaired by inflammatory cytokines and this could lead to remyelination failure in inflammatory/demyelinating diseases. Due to the role of thyroid hormone in the maturation of OPCs and developmental myelination, in this study we investigated (i) the possible occurrence of dysregulation of thyroid hormone signaling in the CNS tissue during experimental neuroinflammation; (ii) the possible impact of inflammatory cytokines on thyroid hormone signaling and OPCs differentiation in vitro. The disease model is the experimental allergic encephalomyelitis in female Dark-Agouti rats, whereas in vitro experiments were carried out in OPCs derived from neural stem cells. The main results are the following: (i) a strong upregulation of cytokine mRNA expression level was found in the spinal cord during experimental allergic encephalomyelitis; (ii) thyroid hormone signaling in the spinal cord (thyroid hormone receptors; deiodinase; thyroid hormone membrane transporter) is substantially downregulated, due to the upregulation of the thyroid hormone inactivating enzyme deiodinase 3 and the downregulation of thyroid hormone receptors, as investigated at mRNA expression level; (iii) when exposed to inflammatory cytokines, deiodinase 3 is upregulated in OPCs as well, and OPCs differentiation is blocked; (iv) deiodinase 3 inhibition by iopanoic acid recovers OPCs differentiation in the presence on inflammatory cytokines. 
These data suggest that cellular hypothyroidism occurs during experimental allergic encephalomyelitis, possibly impacting on thyroid hormone-dependent cellular processes, including maturation of OPCs into myelinating oligodendrocytes. GLIA 2016;64:1573-1589. abstract_id: PUBMED:12126244 Mechanism of the prorelaxing effect of thyroxine on the sphincter of Oddi. Background: Disturbances in the function of sphincter of Oddi (SO) may prevent normal bile flow and thus enhance the probability of common bile duct stone (CBDS) formation. Previously, we have shown increased prevalence of hypothyroidism in CBDS patients. Methods: In animal (pig) experiments, thyroxine (T4) and triiodothyronine have a specific inhibitory effect on SO contractility, which raises the possibility that the lack of this prorelaxing effect in hypothyroidism could, at least in part, explain the increased prevalence of CBDS. The aims of the present study were to investigate, whether human SO reacts similarly to T4, and to study the mechanisms of the T4 prorelaxing effect. Results: We found that T4 had similar inhibitory effects on both human and pig SO contractions. The T4 effect was dose-dependent, and maximum was observed in 30 min. The maximal prorelaxing effect was achieved with 0.1 nM T4 concentration, the effect of the physiological T4 concentration (0.01 nM) being about half of the maximal effect. Addition of alpha-adrenoceptor antagonist phentolamine, beta-adrenoceptor antagonist propranolol, nitric oxide (NO)-synthesis inhibitor L-NAME, nerve conductance blocker tetrodotoxin, or cyclooxygenase inhibitor diclofenac did not affect the T4-induced inhibition of contraction. Addition of transcription inhibitor actinomycin D or translation inhibitor cyclophosphamide partially reversed the T4-induced inhibition of contraction. Addition of K+ channel blocker glibenclamide totally reversed the T4-induced inhibition of contraction. In Western blotting, the thyroid hormone receptor (TR) antibody recognized 53 kDa and 58 kDa proteins, corresponding to beta1 and beta2 isoforms of TR, in the human SO tissue. Conclusions: We conclude that T4 has a direct prorelaxing effect on human SO that expresses TR beta1 and beta2. This effect is mediated through a transcriptional mechanism that requires new mRNA and protein synthesis and subsequently leads to the activation of K+ channels. abstract_id: PUBMED:21707794 Triiodothyronine administration ameliorates the demyelination/remyelination ratio in a non-human primate model of multiple sclerosis by correcting tissue hypothyroidism. Remyelination failure is a key landmark in chronic progression of multiple sclerosis (MS), the most diffuse demyelinating disease in human, but the reasons for this are still unknown. It has been shown that thyroid hormone administration in the rodent models of acute and chronic demyelinating diseases improved their clinical course, pathology and remyelination. In the present study, we translated this therapeutic attempt to experimental allergic encephalomyelitis (EAE) in the non-human primate Callithrix Jacchus (marmoset). We report that short protocols of triiodothyronine treatment shifts the demyelination/remyelination balance toward remyelination, as assessed by morphology, immunohistochemistry and molecular biology, and improves the clinical course of the disease. 
We also found that severely ill animals display hypothyroidism and severe alteration of deiodinase and thyroid hormone receptor mRNA expression in the spinal cord, which was completely corrected by thyroid hormone treatment. We therefore suggest that thyroid hormone treatment improves myelin sheath morphology in marmoset EAE by correcting the dysfunction of thyroid hormone cellular effectors. abstract_id: PUBMED:3777160 Brown fat nuclear triiodothyronine receptors in rats. Brown adipose tissue (BAT) thermogenesis is important in the adaptation of body energy expenditure to cold exposure, fasting, and overfeeding and in the pathogenesis of obesity. Thyroid hormones are required for the normal functioning of BAT. The mechanism of their effect on BAT, however, has not yet been elucidated. Since most thyroid hormone effects are mediated via the binding of T3 to nuclear receptors (NT3R), the present studies were performed to investigate whether BAT contained NT3R and whether these NT3R were affected by cold exposure, fasting, overfeeding, or hypo- or hyperthyroidism. Rat BAT was found to contain NT3R with a maximum binding capacity (MBC) of 0.28 ng T3/mg DNA and a dissociation constant (Kd) of 3.2 × 10^-10 M. These parameters were unaffected by any of the experimental conditions studied. A major alteration of BAT NT3R MBC or Kd is thus not a causal factor in the changes in BAT thermogenesis induced by the above experimental conditions. In contrast, 3-wk overfeeding increased (+48%) and fasting decreased (-29%) the MBC of hepatic NT3R. Overfeeding increased serum T3 (+110%), while fasting decreased it (-37%). abstract_id: PUBMED:2998571 Effect of cycloheximide on nuclear triiodothyronine receptors in the pituitary of the normal and hypothyroid rat. Cycloheximide (Cy), an inhibitor of protein synthesis, was found to provoke a dose-dependent decrease of the hypophyseal T3 nuclear receptor (T3nR) concentration in normal rats. In thyroidectomized rats, the reduced T3nR density was found to be normalized within 3 hrs. after a single injection of T3. Pretreatment with Cy inhibited the T3 effect on its own receptors, whereas Cy given after T3 was partially effective or ineffective. These data suggest that the half-life of T3nR in the hypophysis is short (about 3 hrs.), and that it depends on protein neosynthesis. abstract_id: PUBMED:33202032 Thyroid gland dysfunction and its effect on the cardiovascular system: a comprehensive review of the literature. There is an intimate and functional relationship between the cardiovascular system and the thyroid gland, from sharing the same embryologic origin to modulating each of the components of the heart for normal function. Due to this relationship, patients suffering from cardiovascular diseases often undergo a thyroid function test to rule out hypo- or hyperthyroidism. The signs and symptoms of hyper- and hypothyroidism are clinically relevant and profound. The cardiac function changes can be explained through the cellular mechanism of the thyroid hormone action on the heart. Minor alterations of thyroid hormone can change vascular resistance, cardiac contractility, blood pressure, and heart rhythm, because of the presence of thyroid hormone receptors in these tissues. A better understanding of the impact of thyroid hormones on the cardiovascular system is paramount for physicians to make a quick decision and initiate a treatment plan, because treatment has been shown to reverse some of the cardiac changes, such as systolic and diastolic dysfunction.
With this literature review, we aim to describe the holistic effect of thyroid hormones on the cardiovascular system, from their effects at the cellular level to changes in cardiac functions in subclinical and overt hypo/hyperthyroidism. Additionally, we will describe the effects of thyroid drug treatment regimens on cardiac function. abstract_id: PUBMED:22023791 Mechanisms in endocrinology: primary HT and risk for breast cancer: a systematic review and meta-analysis. Objective: The association between hypothyroidism and breast cancer has been described from very early on. Breast and thyroid tissue are interconnected on a molecular level mainly through activation of thyroid hormone receptors expressed on cells of the mammary gland as well as on the plasma membrane of breast cancer cells. Despite the experimental evidence, the true value of hypothyroidism as a risk factor for breast cancer remains controversial. Methods: We searched the PubMed database through February 2011 to identify studies that evaluated the association between hypothyroidism and risk for breast cancer as well as the effect of thyroid hormone replacement therapy on breast cancer incidence. Results: A meta-analysis performed in 12 studies showed that hypothyroidism was not associated with risk for breast cancer (pooled risk ratio (RR)=1.06, 95% confidence intervals (CIs) 0.82-1.35, P = 0.672). The effect of treatment was assessed in seven studies and no evidence for an association between thyroid hormone replacement and breast cancer was observed with an overall RR of 0.99 (95% CI 0.73-1.35, P = 0.965). Conclusions: Our meta-analysis showed that hypothyroidism is not associated with increased risk for breast cancer and thyroid hormone replacement therapy does not reduce breast cancer prevalence; however, the heterogeneity of the studies analyzed precludes firm conclusions. abstract_id: PUBMED:37597171 Effect of Roxadustat on Thyroid Function in Patients With Renal Anemia. Context: Roxadustat, a hypoxia-inducible factor prolyl hydroxylase (HIF-PH) inhibitor belonging to a recently developed class of drugs for treatment of anemia in chronic kidney disease (CKD), is reported to have a structure unlike that of other HIF-PH inhibitors but similar to that of triiodothyronine and to bind to the thyroid hormone receptor in vitro. However, reports on the effects of roxadustat on thyroid function are limited and not detailed, and it remains unknown whether other HIF-PH inhibitors also affect thyroid function. Objective: To compare the effect of roxadustat with daprodustat, another HIF-PH inhibitor, on thyroid function in patients with renal anemia in CKD. Methods: This retrospective observational study included a total of 26 patients with anemia in CKD who were treated with roxadustat or daprodustat; thyroid-stimulating hormone (TSH) and free thyroxine (FT4) were measured before and after treatment with the drugs. Results: After initiation of roxadustat, TSH showed a significant decrease (2.4732 [1.7858-4.9016] μIU/mL before treatment and 0.659 [0.112-2.005] μIU/mL after treatment, P < .05); FT4 showed a significant decrease (0.93 [0.84-1.05] ng/dL before treatment and 0.70 [0.53-0.85] ng/dL after treatment, P < .01). After daprodustat initiation, neither TSH nor FT4 showed a significant change (TSH: 3.044 [1.853-4.171] μIU/mL before treatment and 2.893 [1.866-4.894] μIU/mL after treatment, P = .635; FT4 was 0.93 [0.81-1.00] ng/dL before treatment and 0.97 [0.87-1.05] ng/dL after treatment, P = .328).
Conclusion: Roxadustat decreases TSH and FT4 levels while daprodustat does not. abstract_id: PUBMED:8793521 Distinct sexual dimorphism in the effect of hypothyroidism on the expression of the growth hormone receptor and growth hormone-binding protein gene in rat liver. Impairment of growth is a hallmark of hypothyroidism in animals. The ability of the thyroid hormone-thyroid hormone receptor complex to regulate gene transcription may be relevant to the growth impairment associated with hypothyroidism. To study the role of thyroid hormone in the expression of the GH receptor (GHR) and GH-binding protein (GHBP) gene, we examined the serum and liver tissue of female and male hypothyroid (thyroidectomized), thyroxine-treated thyroidectomized and euthyroid control rats. Compared to the control and to the thyroxine-treated group, the hypothyroid rats had significantly lower serum levels of thyroxine, increased levels of TSH, and decreased rates of weight gain. GHR and GHBP mRNA levels in liver were estimated by ribonuclease protection assays. In female rats, the levels of hepatic GHR and GHBP mRNA were increased in the hypothyroid group compared to euthyroid controls (p < 0.001 for GHR and p < 0.05 for GHBP). In contrast, in males the hypothyroid state was associated with decreased GHR (p < 0.001) and GHBP (p < 0.001) mRNA levels compared to euthyroid controls. In both females and males, administration of thyroxine for a period of 2 weeks to the thyroidectomized rats prevented these changes in GHR and GHBP mRNA levels in liver. The differences observed between females and males could not be attributed to differences in the circulating levels of GH at sacrifice (female vs. male: 9.9 ± 1.3 vs. 13.9 ± 6.5 ng/ml). We conclude that (1) thyroid hormone affects the transcription of the GHR/GHBP gene; (2) there is a distinct sexual dimorphism in the effect of hypothyroidism on the expression of the GHR/GHBP gene, and (3) this effect is reversible following amelioration of the hypothyroid state. We speculate that regulation of expression of the GHR/GHBP gene by thyroid hormones involves multiple thyroid response elements that have opposite effects depending on the status of other factors such as sex hormones. Answer: The study on the laryngeal effect of experimental postnatal hypothyroidism aimed to investigate the histopathological alterations and thyroid hormone receptors in rats with induced postnatal hypothyroidism. The study included two groups of Wistar albino rat pups: one group with methimazole (MMI)-induced hypothyroidism and a control group without MMI. At 90 days of age, the pups were decapitated, and their larynxes were evaluated for various factors including edema, inflammation, goblet cells, and thyroid hormone receptors (TR-α, TR-β). The results showed that there was a significant difference in inflammation between the groups, with slightly lower inflammation in the hypothyroidism group (p=0.009). Importantly, the thyroid hormone receptors TR-α and TR-β, as well as edema, were significantly higher in the hypothyroidism group (p=0.002, p=0.029, p=0.029, respectively). There was no significant difference in the number of goblet cells between the groups (p=0.637). The conclusion drawn from this study was that histopathologically, there was an increase in laryngeal edema and thyroid hormone receptors in the hypothyroidism group.
This finding suggests that changes in thyroid hormone receptors may be part of the mechanism behind voice changes observed in hypothyroidism (PUBMED:27107603).
Instruction: Is the benefit of early recanalization sustained at 3 months? Abstracts: abstract_id: PUBMED:12624293 Is the benefit of early recanalization sustained at 3 months? A prospective cohort study. Background And Purpose: Early arterial recanalization can lead to dramatic recovery (DR) during intravenous tissue plasminogen activator (tPA) therapy. However, it remains unclear whether this clinical recovery is sustained 3 months after stroke. Methods: We studied consecutive patients treated with intravenous tPA (0.9 mg/kg within 3 hours) who had M1 or proximal M2 middle cerebral artery occlusion on pretreatment transcranial Doppler according to previously validated criteria. Patients were continuously monitored for 2 hours after tPA bolus to determine complete, partial, or no early recanalization with the Thrombolysis in Brain Ischemia (TIBI) flow grading system. A neurologist obtained the National Institutes of Health Stroke Scale (NIHSS) and modified Rankin Scale (mRS) scores independently of transcranial Doppler results. DR was defined as a total NIHSS score of 0 to 3 points, and early recovery (ER) was defined as improvement by ≥10 points at 2 hours after tPA bolus. Good long-term outcome was defined as an NIHSS score of 0 to 2 or an mRS score of 0 to 1 at 3 months. Results: Fifty-four patients with proximal middle cerebral artery occlusion had a median prebolus NIHSS score of 16 (range, 6 to 28; 90% with ≥10 points). The tPA bolus was given at 130±32 minutes (median, 120 minutes; 57% treated within the first 2 hours). DR+ER was observed in 50% of patients with early complete recanalization (n=18), 17% with partial recanalization (n=18), and 0% with no early recanalization (n=18) (P=0.025). Overall, DR+ER was observed in 12 patients (22%), and 9 (75%) had good outcome at 3 months in terms of NIHSS (P=0.009) and mRS (P=0.006) scores compared with non-DR and non-ER patients. If early recanalization was complete, 50% of these patients had good outcome at 3 months, and 78% with DR+ER sustained early clinical benefit. If recanalization was partial, 44% had good long-term outcome, and 66% of patients with DR+ER sustained the benefit. If no early recanalization occurred, 22% had good long-term outcome despite the lack of DR within 2 hours of tPA bolus (P=0.046). Mortality was 11%, 11%, and 39% in patients with complete, partial, and no early recanalization, respectively (P=0.025). Reasons for not sustaining DR in patients with early recanalization were subsequent symptomatic intracranial hemorrhage and recurrent ischemic stroke. Conclusions: DR or ER after recanalization within 2 hours after tPA bolus was sustained at 3 months in most patients (75%) in our study. Complete or partial early recanalization leads to better outcome at 3 months after stroke. Fewer patients achieve good long-term outcome without early recanalization. abstract_id: PUBMED:37991143 Recanalization status and temporal evolution of early ischemic changes following stroke thrombectomy. Introduction: Present-day computed tomography (CT) scanners have excellent spatial resolution and signal-to-noise ratio and are instrumental in detecting early ischemic changes (EIC) in the brain. We assessed the temporal changes of EIC based on the recanalization status after thrombectomy. Patients And Methods: The cohort comprises consecutive patients with acute ischemic stroke in the anterior circulation treated with thrombectomy in a tertiary referral hospital.
All baseline and follow-up scans were screened for any ischemic changes and further classified using the Alberta Stroke Program Early CT Score (ASPECTS). Generalized linear mixed models were used to analyze the impact of recanalization status using the modified Thrombolysis in Cerebral Infarction (mTICI) grade on temporal evolution of ischemic changes. Results: We included 614 patients with ICA, M1, or M2 occlusions. The median ASPECTS score was 9 (IQR 7-10) at baseline and 7 (5-8) at approximately 24 h. mTICI 3 was achieved in 207 (33.8%), 2B in 241 (39.3%), 2A in 77 (12.6%), and 0-1 in 88 (14.3%) patients. Compared to patients with mTICI 3, those with mTICI 0-1 and 2A had less favorable temporal changes of ASPECTS (p < 0.001). An effect of recanalization was noted in the cortical regions of ICA/M1 patients, but not in their deep structures or in patients with M2 occlusions. All ischemic changes detected at baseline were also present on all follow-up images, regardless of the recanalization status. Conclusions: Temporal evolution of the ischemic changes and ASPECTS are related to the success of the recanalization therapy in cortical regions of ICA/M1 patients, but not in their deep brain structures or M2 patients. In none of the patients did EIC revert in any brain region after successful recanalization. abstract_id: PUBMED:37839571 Progression from Early Minor Recanalization to Major Recanalization in Ruptured Intracranial Aneurysms After Successful Coil Embolization: Risk Factors and Proposal of a Novel Predicting Variable. Background: The risk factors for the progression from early minor recanalization to major recanalization are not well established. Herein, we evaluated ruptured intracranial aneurysms (IAs) with minor recanalization within 1 year of coiling and their progression to major recurrence. Methods: We retrospectively reviewed our database of coiled IAs and searched for ruptured saccular IAs that were successfully embolized without residual sacs. We selected IAs with minor recanalization confirmed on radiological studies within 1 year of coil embolization. All the IA cases had a follow-up period longer than 36 months based on the radiological results. Results: Minor recanalization occurred in 45 IAs within 1 year of coil embolization. Among them, 14 IAs (31.1%) progressed to major recanalization, and 31 remained stable. Progression to major recanalization was detected 12 months after minor recanalization in 2 patients, 24 months in 7 patients, and 36 months in 5 patients. Moreover, the progression to major recanalization occurred more frequently in IAs at the posterior location (P = 0.024, odds ratio 11.20) and IAs with a proportional forced area > 9 mm2 (P = 0.002, odds ratio 17.13), which was a newly proposed variable in the present study. Conclusions: Our results showed that approximately one third of the ruptured IAs with early minor recanalization after coiling progressed to major recanalization within 3 years. Physicians should focus on the progression of ruptured IAs from minor to major recanalization, especially those with a posterior circulation location and a proportional forced area >9 mm2. abstract_id: PUBMED:37892733 Clinical Benefit of First-Pass Recanalization Is Time-Dependent in Endovascular Treatment of Acute Ischemic Stroke. Clinical benefit can be time-dependent even after first-pass recanalization (FPR) in endovascular treatment of acute stroke. This study aimed to evaluate the association between favorable outcome and FPR under a specific time frame.
Patients who underwent mechanical thrombectomy were retrospectively reviewed. Recanalization status was categorized into four groups based on FPR and dichotomized time from groin puncture to recanalization (P-to-R time). Favorable outcomes were compared between groups. A total of 458 patients were included. As the cutoff of P-to-R time for favorable outcome was 30 min, recanalization status was categorized into FPR (+) with a P-to-R time ≤ 30 min (Group 1), FPR (-) with a P-to-R time ≤ 30 min (Group 2), FPR (+) with a P-to-R time > 30 min (Group 3), and FPR (-) with a P-to-R time > 30 min (Group 4). Favorable outcomes in Group 3 (37.5%) were significantly less frequent than those in Group 1 (60.4%, p = 0.029) and Group 2 (59.5%, p = 0.033) but were not significantly different from those in Group 4 (35.7%, p = 0.903). Compared to Group 1, Group 3 (adjusted odds ratio, 0.30 [95% confidence interval, 0.12-0.76]; p = 0.011) and Group 4 (0.25 [0.14-0.48]; p < 0.001) were adversely associated with favorable outcomes. FPR was associated with functional outcome in a time-dependent manner. Even for patients who have achieved FPR, their functional outcome might not be favorable if the P-to-R time is >30 min. abstract_id: PUBMED:21577354 Spontaneous recanalization of the basilar artery with conservative management months after symptom onset. Basilar artery occlusion may be associated with a poor prognosis in the absence of recanalization. Choices in aggressive treatment for this potentially fatal condition vary from intra-arterial or intravenous thrombolysis, endovascular removal, or a combination of the two, with adjunct anti-coagulation therapy. These therapies have proven to be effective in recanalization, whereas conservative management with anti-coagulants alone has had more limited success in the literature. We report a case of basilar artery occlusion managed conservatively with unfractionated heparin, resulting in complete recanalization 3.5 months after symptom onset. Conservative management of basilar artery occlusion with unfractionated heparin was associated with complete recanalization long after symptom onset. abstract_id: PUBMED:38044641 Preliminary study on a novel biological scaffold loaded with Apelin-13 sustained-release microcapsules for promoting fallopian tube recanalization in rabbits. Objectives: Tubal factor infertility severely impairs the natural fertility of women, and there is a need for genuine tubal recanalization, including restoration of both the anatomy and function of the diseased fallopian tubes. Currently, there is no effective treatment available. This study aims to explore methods for promoting the repair and recanalization of fallopian tubes from these 2 aspects. Methods: Apelin-13 sustained-release microspheres and poly (lactic-co-glycolic acid) (PLGA) three-dimensional (3D) biodegradable scaffolds were prepared. The basic characteristics and in vivo degradation (mass loss rate) of the biodegradable scaffolds were tested, along with the in vitro drug release (cumulative release rate), the in vivo drug release (Apelin-13 plasma concentration), and in vitro degradation (degradation rate) of the microspheres. The Apelin-13 microspheres (microsphere group)/PLGA 3D scaffolds loaded with Apelin-13 sustained-release microspheres (scaffold-microcapsule group) were injected/placed into the fallopian tubes of New Zealand rabbit models of chronic salpingitis.
The patency, microscopic structure, and positive expression of estrogen receptor and progesterone receptor of the fallopian tubes in the control group, the model group, the microcapsule group, and the scaffold-microcapsule group were observed and compared. Results: At the 4th week post-operation, the mass loss rate of the PLGA 3D scaffolds, the degradation rate of the microspheres, and the Apelin-13 sustained-release microspheres-generated cumulative release rate in vitro over 30 days were 98.66%, 70.58%, and 98.68% respectively. The plasma concentration of Apelin-13 reached its peak within 5 days and remained stable for 25 days. Compared with the model and microsphere groups, the scaffold-microsphere group showed a milder inflammatory reaction within the tubal lumen, a higher rate of fallopian tube patency, and higher expression levels of estrogen and progesterone receptors (all P<0.05). The indicators of the scaffold-microsphere group were close to those of the control group. Conclusions: The PLGA 3D scaffolds loaded with Apelin-13 sustained-release microspheres can comprehensively repair the anatomical structure and physiological function of the fallopian tubes and hold promise for truly effective tubal recanalization. abstract_id: PUBMED:32144482 Delayed Progression to Major Recanalization in Coiled Aneurysms with Minor Recanalization at 36-Month Follow-up: Incidence and Related Risk Factors. Purpose: The aim was to monitor aneurysms that show imaging evidence of minor recanalization 36 months after coil embolization and to determine the rate and related risk factors of major recanalization during more prolonged observation. Methods: A total of 54 patients with 55 aneurysms showing minor recanalization at 36-month follow-up imaging between 2011 and 2013 were retrospectively reviewed. Medical records and radiological data accumulating in the course of extended monitoring (mean 83.9 ± 21.5 months) were assessed. Incidence and average annual risk of progression to major recanalization were then calculated. Univariate and multivariate regression analyses were applied to determine possible risk factors for progression to major recanalization. Results: In the 55 aneurysms studied, 26 showed sustained minor recanalization since month 6 of follow-up, whereas minor recanalization of 29 completely occluded coiled lesions appeared in follow-up images after 6-24 months. Only 8 coiled aneurysms (14.5%) with minor recanalization at 36 months progressed to major recanalization during 219.3 aneurysm-years of observation (3.6% per aneurysm-year), 2 surfacing within 72 months and 6 developing thereafter. Additional embolization was performed in six of these patients. By multivariate analysis, no clinical or anatomic factors were statistically linked to such progression, but younger age showed marginal significance (hazard ratio, HR = 1.076; p = 0.099). Conclusion: Most coiled aneurysms (85.5%) showing minor recanalization at 36 months postembolization proved to be stable in extended observation. Given the low probability but seriousness of delayed major recanalization, careful monitoring is still warranted in this setting but at less frequent intervals (every 2-3 years) beyond 36 months. abstract_id: PUBMED:36912985 Fallopian tube recanalization for the management of infertility. Infertility is a world-wide problem, defined as failure to achieve pregnancy after 12 months of regular unprotected sexual intercourse. There are multiple causes for infertility involving both male and female factors.
Fallopian tube occlusion is a common reason for female infertility. The initial attempts to treat proximal obstruction involved the use of a whalebone bougie positioned in the uterine cornua to dilate the proximal tube by Smith as early as 1849. Fluoroscopic fallopian tube recanalization for the treatment of infertility was first described in 1985. Since that time, there have been over 100 papers describing various methods for recanalization of occluded fallopian tubes. Fallopian tube recanalization is a minimally invasive procedure which is performed on an outpatient basis. It should be a first line therapy for patients with proximal occlusion of fallopian tubes. abstract_id: PUBMED:28986149 Factors associated with early dramatic recovery following successful recanalization of occluded artery by endovascular treatment in anterior circulation stroke. Endovascular treatment (EVT) significantly increases the recanalization rate and improves functional outcomes in acute ischemic stroke. However, despite successful recanalization by EVT, some stroke patients demonstrate no early dramatic recovery (EDR). We assessed factors associated with EDR following recanalization by EVT. We included subjects with anterior circulation stroke treated with EVT who met the following criteria: Thrombolysis in Cerebral Ischemia scores (TICI) 2b-3 after EVT, lesion volume <70 mL as seen on the pre-treatment diffusion-weighted imaging (DWI) scan and a baseline NIHSS score ≥6. EDR was defined as a ≥8-point reduction in the NIHSS score, or NIHSS score of 0 or 1 measured 24 h following treatment. Multivariate regression analyses were performed to identify the predictors associated with EDR. Of the 102 patients (mean age, 64.3 years; median National Institutes of Health Stroke Scale score, 14), EDR was achieved in 39 patients (38.2%). The median DWI lesion volume was 12 mL (interquartile range, 5-25 mL). Median onset-to-recanalization time in these patients was 320 min (interquartile range, 270-415 min). Logistic regression analysis identified a higher initial NIHSS score (OR 1.17, 95% CI 1.03-1.33, P=0.016) and shorter time from onset to recanalization (OR 0.99, 95% CI 0.986-0.997, P=0.003), to be independently associated with EDR. In the setting of pretreatment DWI lesion volume <70 mL, a higher initial NIHSS score and faster time from onset to recanalization may be important predictors of EDR following successful EVT. abstract_id: PUBMED:33037513 Prediction of mTICI 3 recanalization and clinical outcomes in endovascular thrombectomy for acute ischemic stroke: a retrospective study in the Taiwan registry. Purpose: Early recanalization for acute ischemic stroke (AIS) due to large vessel occlusion (LVO) by endovascular thrombectomy (EVT) is strongly related to improved functional outcomes. With data obtained from the Taiwan registry, the factors associated with mTICI 3 recanalization and clinical outcomes in EVT are investigated. Methods: From January 2014 to September 2016, 108 patients who underwent EVT for AIS due to LVO in 11 medical centers throughout Taiwan were included. Complete recanalization is defined as achieving modified thrombolysis in cerebral infarction (mTICI) grade 3. Good clinical outcomes are defined by the modified Rankin scale (mRS) 0-2 at 3 months after EVT. Clinical and imaging parameters for predicting mTICI 3 recanalization and good clinical outcomes are analyzed. Results: Of the 108 patients who received EVT, 54 (50%) patients had mTICI 3 recanalization.
Having received aspiration only and the use of IV-tPA are shown to be significant predictors for mTICI 3 recanalization with odds ratios of 2.61 and 2.53 respectively. Forty-six (42.6%) patients experienced good 3-month clinical outcomes (mRS 0-2). Pretreatment collateral statuses, NIHSS scores, time lapses between symptoms to needle, and the occurrence of hemorrhage at 24 h are all significant predictors for good outcomes with odds ratios of 2.88, 0.91, 0.99, and 0.31 respectively. Conclusions: Prediction of mTICI 3 recanalization and clinical outcomes offer valuable clinical information for treatment planning in EVT. Answer: The benefit of early recanalization appears to be sustained at 3 months in most patients. A prospective cohort study found that dramatic recovery (DR) or early recovery (ER) after recanalization within 2 hours after tPA bolus was sustained at 3 months in 75% of patients. Complete or partial early recanalization led to better outcomes at 3 months after stroke, with 50% of patients with complete early recanalization and 44% with partial recanalization having good long-term outcomes. Fewer patients achieved good long-term outcomes without early recanalization (PUBMED:12624293). Another study on endovascular treatment of acute ischemic stroke showed that clinical benefit can be time-dependent even after first-pass recanalization (FPR). Patients who achieved FPR within 30 minutes had significantly more favorable outcomes compared to those with FPR taking longer than 30 minutes, indicating that the timing of recanalization is also crucial for sustaining benefits (PUBMED:37892733). Furthermore, a study in the Taiwan registry for acute ischemic stroke due to large vessel occlusion found that 50% of patients who received endovascular thrombectomy (EVT) had complete recanalization (mTICI 3), and 42.6% experienced good 3-month clinical outcomes (mRS 0-2), suggesting that early recanalization is associated with sustained benefits (PUBMED:33037513). However, it is important to note that not all ischemic changes revert after successful recanalization. A study assessing the temporal changes of early ischemic changes (EIC) based on recanalization status after thrombectomy found that while temporal evolution of ischemic changes and ASPECTS are related to the success of recanalization therapy in cortical regions of ICA/M1 patients, none of the patients had EIC revert in any brain region after successful recanalization (PUBMED:37991143). In summary, the benefit of early recanalization is generally sustained at 3 months, particularly when recanalization occurs rapidly. However, the degree of benefit can vary based on the extent of recanalization and the timing of the intervention.
Instruction: Should epidural drain be recommended after supratentorial craniotomy for epileptic patients? Abstracts: abstract_id: PUBMED:19608006 Should epidural drain be recommended after supratentorial craniotomy for epileptic patients? Background: ED was once and is still commonly applied to prevent mainly EH and subgaleal CSF collection. We designed this study to observe if ED could decrease the incidence and volume of EH and subgaleal CSF collection after supratentorial craniotomy in epileptic patients. Methods: Three hundred forty-two epileptic patients were divided into 2 groups according to their first craniotomy date (group 1 on odd dates and group 2 on even dates). Patients in group 1 had ED and those in group 2 had no ED. The patient numbers and volumes of EH and subgaleal CSF collections in both groups were recorded and statistically analyzed. Results: There were 22 EHs in group 1 and 20 EHs in group 2. There were 11 and 10 subgaleal CSF collections in groups 1 and 2, respectively. The average volume of EH was 13.5 ± 8.12 and 14.65 ± 7.72 mL in groups 1 and 2, respectively. The average volume of subgaleal CSF collection was 42.76 ± 12.09 and 43.75 ± 11.44 mL in groups 1 and 2, respectively. There were no statistical differences in the incidence and average volume of EH and subgaleal CSF collection between the 2 groups. Conclusions: ED cannot decrease the incidence and volume of EH and subgaleal CSF collection. ED should not be recommended after supratentorial epileptic craniotomy. abstract_id: PUBMED:33029384 Advantages of using a prophylactic epidural closed drain and non-watertight dura suture in a craniotomy near the "parietal site". Background: In neurosurgery, the necessity of having a drainage tube is controversial. Subgaleal fluid collection (SFC) often occurs, especially in a craniotomy near the "parietal site". This study aimed to reassess the benefit of using a prophylactic epidural drainage (ED) and non-watertight dura suture in a craniotomy near the parietal site. Methods: A retrospective review was conducted on 63 consecutive patients who underwent a craniotomy near the parietal site. The patients were divided into two groups according to the time period of surgery. The treated group received ED and a non-watertight dura suture (drain group, DG), while the control group did not (non-drain group, NDG). Complications and patient recovery were evaluated and analysed. Results: Three of 26 patients (11.5%) in DG and 20 of 37 patients (54.1%) in NDG presented with SFC (p < 0.05). One patient (3.8%) in DG and three patients (8.1%) in NDG presented with subdural tensile hydrops (STH) (p > 0.05). Six patients in NDG developed an infection (four intracranial infections, one abscess, one pulmonary infection), while none in DG did (p > 0.05). Three (11.5%) cases in DG and one (2.7%) case in NDG had muscle strength that improved postoperatively (p > 0.05). Fifteen (57.7%) in DG and 14 (37.8%) in NDG had epileptic seizures less frequently postoperatively (p < 0.05). The average temperature (37.4 °C vs 37.6 °C, p > 0.05), the maximum temperature (37.9 °C vs 38.1 °C, p > 0.05) on 3 PODs, the postoperative hospital stay (7.5 days vs 8.0 days, p > 0.05), and the postoperative medicine fee (¥29762.0 vs ¥28321.0, p > 0.05) were analysed. Conclusion: In patients who undergo a craniotomy near the parietal site, the prophylactic use of ED and a non-watertight dura suture helps reduce SFC, infection, and control epilepsy.
abstract_id: PUBMED:38100760 Awake craniotomy for supratentorial tumors or epileptogenic lesions in pediatric patients: a 16-year retrospective cohort study. Objective: Awake craniotomy with intraoperative mapping is the widely accepted procedure for adult patients undergoing supratentorial tumor or epileptogenic focus resection near eloquent cortex. In children, awake craniotomies are notably less common due to concerns for compliance and emotional or psychological repercussions. Despite this, successfully tolerated awake craniotomies have been reported in patients as young as 8 years of age, with success rates comparable to those of adults. The authors sought to describe their experience with pediatric awake craniotomies, including insight regarding feasibility and outcomes. Methods: A retrospective review was completed for all pediatric (age &lt; 18 years) patients at Children's Wisconsin for whom an awake craniotomy was attempted from January 2004 until March 2020. Institutional review board approval was granted. Results: Candidate patients had intact verbal ability, cognitive profile, and no considerable anxiety concerns during neuropsychology assessment. Nine patients presented with seizure. Five patients were diagnosed with tumor and secondary epilepsy, 3 with tumor only, and 3 with epilepsy only. All patients who underwent preoperative functional MRI successfully completed and tolerated testing paradigms. A total of 12 awake craniotomies were attempted in 11 patients, with 1 procedure aborted due to intraoperative bleeding. One patient had a repeat procedure. The mean patient age was 15.5 years (range 11.5-17.9 years). All patients returned to or maintained baseline motor and speech functions by the latest follow-up (range 14-130 months). Temporary deficits included transient speech errors, mild decline in visuospatial reasoning, leg numbness, and expected hemiparesis. Of the 8 patients with a diagnosis of epilepsy prior to surgery, 7 patients achieved Engel class I designation at the 1-year follow-up, 6 of whom remained in class I at the latest follow-up. Conclusions: This study analyzes one of the largest cohorts of pediatric patients who underwent awake craniotomy for maximal safe resection of tumor or epileptogenic lesions. For candidate patients, awake craniotomy is safe, feasible, and effective in carefully selected children. abstract_id: PUBMED:27854111 The risk of hypotension and seizures in patients receiving prophylactic anti-epileptic drugs for supratentorial craniotomy. Background: Perioperative seizure prophylaxis with antiepileptic drugs (AED) has been advocated in patients undergoing supratentorial craniotomy. The practice remains controversial. The reasoning presupposes that the possibility of an adverse drug reaction from the AED is lower than the probability of harm from a seizure. Even short periods of hypotension during the operation can lead to acute kidney and myocardial injury. We retrospectively evaluated cardiovascular effects and tolerability of levetiracetam (LEV) alone, LEV and lacosamid (LCM) as compared to phenytoin (PHT). Methods: After IRB approval, the charts of individuals who underwent craniotomy from April 2007 to September 2011 were reviewed. Those receiving PHT were compared to those receiving LEV alone and LEV/LCM. The patient data included demographic, indication and procedure related data. 
The cumulative dose of norepinephrine (NET), atropine (ATR) and the change in systolic blood pressure during and after the administration of the AED were analyzed. Results: Five hundred thirty-eight patients were screened of which 122 were included for analysis. 40 patients with primary or secondary supratentorial brain tumors received LEV (19 female, 21 male; mean age 56 years), 41 patients received LEV/ LCM (16 female, 25 male; mean age 56 years) and 41 patients received PHT (15 female, 26 male; mean age 50 years). The commonest indications for craniotomy were glioblastoma (N.=14 vs. N.=12 vs. N.=15), meningiomas (N.=9 vs. N.=7 vs. N.=10), low-grade gliomas (N.=6 vs. N.=13 vs. N.=6) and brain metastases (N.=5 vs. N.=4 vs. N.=5). 1 LEV/LCM patient (2%) and 4 PHT patients (4.5%) had a seizure despite prophylaxis. Possible side effects were observed in 2 patients associated with PHT. During anesthesia there was a significant drop in systolic blood pressure in the PHT group after administration of the AED perioperatively when compared to LEV (P=0.001) and LEV/LCM (P≤0.0001) respectively. The mean cumulative doses of NET and ATR over the course of the operation did not differ significantly. Conclusions: LEV alone and in combination with LCM for patients without and with symptomatic epilepsy as seizure prophylaxis provides a safe and feasible alternative to PHT. PHT was associated with an unfavorable drop in blood pressure during anesthesia and more adverse reactions. abstract_id: PUBMED:10981750 Persistent headache after supratentorial craniotomy. Objective: Although the incidence, possible causes, and treatment of persistent headache after suboccipital craniotomy have been discussed extensively, few data have been published regarding persistent headache after supratentorial craniotomy. Methods: We retrospectively analyzed the senior author's patient series of 145 consecutive anterior temporal lobectomies for intractable epilepsy performed during a 9-year period. To eliminate confounding causes of headache, all patients studied were seizure-free, none had progressive mass lesions or persisting vascular anomalies, and none had major complications of surgery. With the permission of the institutional review board, information on headache was obtained from patient records, the patients' neurologists, and the patients themselves. Results: Of the 145 consecutive patients who underwent operations, 126 patients had adequate follow-up for analysis. Of the 126 patients, 104 did not have headaches and were not using regular analgesics 2 months postoperatively. Twenty-two patients had headaches persisting beyond 2 months. Seven (5.6%) of the 126 patients had headaches that lasted more than 2 months but less than 1 year, and they were free of recurrent headaches 1 year postoperatively. Fifteen (11.9%) of the 126 patients had ongoing headaches 1 year after surgery. Regarding headache severity, 4.0% of patients had medically uncontrolled headaches, and 3.2% continued to require prescription drugs for headaches 1 year postoperatively. Conclusion: Although the incidence of persistent head pain after supratentorial craniotomy is lower than that reported for posterior fossa procedures, the incidence is not trivial. The cause of persistent pain deserves further investigation. abstract_id: PUBMED:12763353 Cerebellar haemorrhage following supratentorial craniotomy. Distant cerebellar haemorrhage is a rare complication occurring in approximately 0.3-0.6% of all supratentorial craniotomy. 
A Medline and Pubmed search revealed only 98 cases in the English literature. We report three cases from our institution. An overall review of these 101 cases demonstrated that this complication commonly presented early in the postoperative course as decreased level of consciousness following aneurysm repair surgery or lobectomy for epilepsy. Asymptomatic presentation due to small haemorrhage was not uncommon. A transtentorial pressure gradient set up by excessive CSF loss is generally held responsible for disrupting the cerebellar venous blood flow and consequently leading to venous haemorrhage. Perioperative hypertension may also play a role. The outcome of patients who survived the complication was generally good, though fatality resulting from the mass effect of extensive cerebellar haemorrhage was not infrequent and demanded vigilance in its management. abstract_id: PUBMED:17994199 Remote cerebellar hemorrhage and iliofemoral vein thrombosis after supratentorial craniotomy. Introduction: Cerebellar hemorrhage following supratentorial craniotomy is rare. Its clinical symptoms are often mild and transient. Discussion: Here, we report a case of cerebellar hemorrhage associated with iliofemoral vein thrombosis as a complication of anterior temporal lobectomy and amygdalohippocampectomy for refractory medial temporal epilepsy. abstract_id: PUBMED:27438472 Epilepsy in Adults with Supratentorial Glioblastoma: Incidence and Influence Factors and Prophylaxis in 184 Patients. Aim: To analyze the incidence of epilepsy in adult patients with supratentorial glioblastoma, assess the factors influencing the development of epilepsy in these cases, and evaluate patients' response to antiepileptic drugs (AEDs) in a series of 184 patients. Methods: We retrospectively analyzed the 184 adult patients diagnosed with supratentorial glioblastoma. All subjects were treated within our hospital and subsequently died between 2003 and 2013. The incidence of epilepsy was assessed before and after initial resection and reexamined every 2 months thereafter. We evaluated the efficacy of prophylactic AEDs in this patient population based on the gathered incidence data. Results: Of 184 patients, 43 (23.37%) were diagnosed with epilepsy before their initial resection. The total incidence of epilepsy (both pre- and postoperative) was 68.48%. The prevalence of active epilepsy reached over 80% in patients with epilepsy and survival of greater than 13 months postoperatively. Patients with glioblastoma in the frontal and/or temporal lobes had a higher prevalence of epilepsy. In the 43 patients with preoperative epilepsy, total resection of glioblastoma resulted in significantly lower seizure frequency. Patients who received epilepsy prophylaxis with AEDs for at least 6 months had significantly fewer seizures and higher Karnofsky scores than those receiving AEDs for less than one month or not at all. Conclusion: The incidence of epilepsy in adult patients with glioblastoma was high and responded poorly to AEDs in the short term. However, when taken for longer periods, AEDs can reduce the frequency of seizures in patients with glioblastoma. abstract_id: PUBMED:16677527 Supratentorial cavernous angiomas in children Objective: To discuss the clinical features, treatment, and prognosis of supratentorial cavernous angiomas in children.
Methods: The clinical data of 25 pediatric patients with supratentorial cavernous angioma, 17 boys and 6 girls, aged 9.6 years (range 1-15), were reviewed retrospectively, and the relationship between the duration of epilepsy before surgery and prognosis was analyzed. Results: The 14 patients with preoperative seizures underwent surgery. All (8/8) of the patients suffering from preoperative seizures with a history of less than one year were seizure free after operation, while only half (3/6) of the patients suffering from preoperative seizures with a history of more than one year were seizure free after operation. Ten of the 11 patients with hemorrhage and hemorrhage-related neurological symptoms as the initial symptoms underwent surgery, and no symptoms of re-bleeding and seizure occurred postoperatively. Conclusion: Craniotomy for lesionectomy results in good seizure control and re-bleeding avoidance. Better results with regard to seizure control are associated with shorter duration of symptoms before surgery. abstract_id: PUBMED:16733760 Risk factors for developing epilepsy after craniotomy in children. Introduction: We performed a retrospective analysis of children undergoing supratentorial craniotomy, attempting to identify possible risk factors for postoperative epilepsy and the need for prophylactic anticonvulsant therapy. Materials And Methods: We analysed 107 consecutive patients (55% males) who had supratentorial craniotomy for a variety of diagnoses (tumours, trauma, infection, vascular malformations and others) during 1995-1999. Mean age at operation was 89 months (range: 1-180 months). Patients who presented with epilepsy were excluded. Postoperative epilepsy was considered present if patients required systematic pharmacological treatment, at a minimum follow-up of 6 months. Linear regression was used to analyse the effect of sex, anticonvulsant prophylaxis, duration of operation, closure of dura, postoperative infection, the diagnosis, anatomical region of brain affected, operation type (craniotomy/craniectomy) and the need for brain resection. Results: Prophylactic anticonvulsants were given to 52% of the patients; 97% had craniotomy; in five patients, the dura was left open; in 33%, some kind of brain tissue resection had been performed; two patients (1.8%) developed postoperative infection; one patient died. Only 13 patients (12%) developed postoperative epilepsy. The only two factors with statistical significance were female sex (p=0.045) and the absence of dural closure (p=0.001). All other factors were not significant (p>0.258). Conclusions: Postoperative epilepsy after supratentorial craniotomy is uncommon in children, incidence being 12%. The administration of prophylactic anticonvulsants does not appear to influence the risk of epilepsy. Surprisingly, females have statistically higher risk. Lack of dural closure has higher risk of epilepsy, but this may reflect the type of pathology.
This study involved 342 epileptic patients divided into two groups, with one group receiving ED and the other not, and found no statistical differences in the incidence and average volume of EH and subgaleal CSF collection between the two groups. In contrast, another study (PUBMED:33029384) found that the prophylactic use of ED and a non-watertight dura suture in craniotomy near the parietal site helped reduce subgaleal fluid collection (SFC), infection, and control epilepsy. This retrospective review of 63 patients showed that fewer patients in the drain group (DG) presented with SFC compared to the non-drain group (NDG), and no infections were observed in the DG. The other abstracts provided do not directly address the use of ED after supratentorial craniotomy in epileptic patients but discuss related topics such as the safety and feasibility of awake craniotomy in pediatric patients (PUBMED:38100760), the cardiovascular effects and tolerability of different prophylactic anti-epileptic drugs (PUBMED:27854111), persistent headache after supratentorial craniotomy (PUBMED:10981750), distant cerebellar hemorrhage as a complication (PUBMED:12763353 and PUBMED:17994199), the incidence of epilepsy in patients with supratentorial glioblastoma (PUBMED:27438472), the clinical features and prognosis of supratentorial cavernous angiomas in children (PUBMED:16677527), and risk factors for developing epilepsy after craniotomy in children (PUBMED:16733760). In conclusion, the decision to recommend ED after supratentorial craniotomy for epileptic patients should be made on a case-by-case basis, considering the potential benefits and risks. The conflicting evidence from the studies suggests that further research may be needed to establish clear guidelines.
Instruction: Do job demands and job control affect problem-solving? Abstracts: abstract_id: PUBMED:22699186 Do job demands and job control affect problem-solving? Objective: The Job Demand Control model presents combinations of working conditions that may facilitate learning, the active learning hypothesis, or have detrimental effects on health, the strain hypothesis. To test the active learning hypothesis, this study analysed the effects of job demands and job control on general problem-solving strategies. Participants: A population-based sample of 4,636 individuals (55% women, 45% men) with the same job characteristics measured at two times with a three year time lag was used. Methods: Main effects of demands, skill discretion, task authority and control, and the combined effects of demands and control were analysed in logistic regressions, on four outcomes representing general problem-solving strategies. Results: Those reporting high on skill discretion, task authority and control, as well as those reporting high demand/high control and low demand/high control job characteristics were more likely to state using problem solving strategies. Conclusions: Results suggest that working conditions including high levels of control may affect how individuals cope with problems and that workplace characteristics may affect behaviour in the non-work domain. abstract_id: PUBMED:31957967 Problem solving and affect as mechanisms linking daily mindfulness to task performance and job satisfaction. This study investigated the daily relationships between mindfulness and job satisfaction via affect and mindfulness and task performance via problem solving confidence. Participants were 57 full-time and part-time employees who completed a baseline survey and five daily diary entries. Data were analysed using multilevel modelling. On days when they experienced greater mindfulness, employees also experienced (a) greater daily job satisfaction via high-activation positive affect (but not low-activation positive affect nor negative affect) and (b) greater daily self-reported task performance via problem solving confidence. Our findings suggest that high-activation positive affect and problem solving confidence may be important daily mechanisms of mindfulness at work. Growing evidence links mindfulness to work-related outcomes; however, limited research has investigated the mechanisms involved. The present study is one of the first to collect empirical observations regarding how state mindfulness operates at work on a daily basis, providing support for both affective and cognitive pathways. Advances in this area can underpin a robust theoretical framework to guide the implementation of daily mindfulness practice and interventions targeting specific work-related outcomes. abstract_id: PUBMED:31282780 The Relationships between Problem-Solving, Creativity, and Job Burnout in Iranian Occupational Therapists. Cognitive resources are assumed to have the potential to buffer the negative influence of job demands. In this study, the relationship between burnout, a consequence of job demands, and cognitive resources of problem-solving and creativity was investigated. There were 50 occupational therapists that completed three standardized instruments assessing problem-solving, creativity, and burnout. 
Using multiple regression analysis and the Independent-Samples t-test to analyze the data, results showed that cognitive resources were inversely associated with burnout [R² = 0.38, F(4, 45) = 6.94, p < 0.001] and comparison analysis showed significant differences in creativity between males and females (p = 0.04). Significant positive correlations were also found between creativity and working experience whereby higher levels of creativity were associated with more working years. Problem-solving and creativity require attention in occupational therapy because they may mitigate against burnout as the consequence of job demands. A possible implication would be the necessity of empowering occupational therapists' cognitive resources. abstract_id: PUBMED:30625034 Resources Matter: Combined Influence of Job Demands and Job Control on Creative Process Engagement. Considerable research has been conducted on employee creativity, but it has generally focused on creative outcomes and rarely paid attention to employees' engagement in creative processes, that is, creative process engagement. Further, job characteristics as antecedents are little understood. Drawing on the job demands-resources model and the conservation of resources theory, this study examines the relationship between challenge-hindrance demands and creative process engagement at different levels of job control. In total, 572 questionnaires from China were used to test our proposed model through hierarchical regression analysis. The results show that when job control is high, the effect of challenge demands on creative process engagement is U-shaped, while that of hindrance demands is negative. When job control is low, both challenge and hindrance demands are not related to creative process engagement. abstract_id: PUBMED:35805798 Effects of Job Demands and Resources on Positive and Negative Affect of Delivery Drivers in China. The delivery workforce in China has grown substantially within the past decade. As this industry has grown, job demands (JD) have also increased to ensure productivity. Accordingly, delivery drivers are increasingly facing volatile and stressful work conditions that could influence their Positive and Negative Affect (PANA), which has been an important predictor of their health and well-being. This study utilizes a sample of modern delivery drivers in Beijing, China (n = 240) to study how the job demands-resources theory (JD-R) was related to PANA. The results indicate delivery drivers experience relatively high JD and moderate JR at their places of employment. As expected, JR were positively associated with PA and negatively associated with NA. Meanwhile, JD were positively related to NA; however, JD were also positively associated with PA within the sample. Further interaction analysis showed that only drivers with high JD and high JR were positively associated with PA and that drivers with high JD and low JR were associated with high NA and had no effect on PA. These findings call for support for Chinese delivery drivers who face high job stress and high job demands. abstract_id: PUBMED:19594240 An experience sampling study of learning, affect, and the demands control support model. The demands control support model (R. A. Karasek & T. Theorell, 1990) indicates that job control and social support enable workers to engage in problem solving. In turn, problem solving is thought to influence learning and well-being (e.g., anxious affect, activated pleasant affect).
Two samples (N = 78, N = 106) provided data up to 4 times per day for up to 5 working days. The extent to which job control was used for problem solving was assessed by measuring the extent to which participants changed aspects of their work activities to solve problems. The extent to which social support was used to solve problems was assessed by measuring the extent to which participants discussed problems to solve problems. Learning mediated the relationship between changing aspects of work activities to solve problems and activated pleasant affect. Learning also mediated the relationship between discussing problems to solve problems and activated pleasant affect. The findings indicated that how individuals use control and support to respond to problem-solving demands is associated with organizational and individual phenomena, such as learning and affective well-being. abstract_id: PUBMED:31925427 Job demands, resources and work-related well-being in UK firefighters. Background: There is evidence that firefighters are at risk of work-related stress and mental health problems, but little is known about the organizational hazards they experience. Insight is needed into the work-related factors that are most likely to threaten or protect their work-related well-being. Aims: To identify levels of job demands and resources (including demands relating to workload, work patterns and the working environment, relationship conflicts, control, support, role clarity and change management) among firefighters, and to use a job demands-resources framework to examine their impacts on work-related well-being. The role played by recovery strategies in predicting work-related well-being was also considered. Methods: Job demands and resources were assessed by the Health &amp; Safety Executive (HSE) Management Standards Indicator Tool. Validated scales measured recovery strategies (detachment, affective rumination and problem-solving pondering) and work-related well-being (anxiety-contentment and depression-enthusiasm). The impact of job demands, resources and recovery strategies was tested by multiple linear regression. Results: The sample comprised 909 firefighters across seven Fire and Rescue Services in the UK (85% male). Levels of job demands and resources did not meet HSE benchmarks. The main risk factors for poor work-related well-being were relationship conflicts and affective rumination, but resources such as role clarity and job control and the use of problem-solving pondering and detachment were beneficial. Conclusions: Interventions that aim to reduce relationship conflicts at work and promote problem-solving rather than affective rumination, and detachment from work when off-duty, are likely to improve work-related well-being. Attention to enhancing job resources may also be beneficial. abstract_id: PUBMED:24123665 Job demands × job control interaction effects: do occupation-specific job demands increase their occurrence? Despite evidence that the accurate assessment of occupational health should include measures of both generic job demands and occupation-specific job demands, most research includes only generic job demands. The inclusion of more focused occupation-specific job demands is suggested to explain a larger proportion of variance for both direct effects and job demands × job control/support interaction effects, as compared with the inclusion of generic job demands. 
This research tested these two propositions via a self-report survey assessing key psychological job characteristics administered twice to a sample of correctional workers (N = 746). The research clearly identified that the assessment of correctional-specific job demands (CJD) was more strongly associated with job satisfaction, work engagement, turnover intentions and psychological strain, as compared with an assessment of generic job demands. However, the CJD did not produce a greater proportion of significant job demands × job control/support interaction effects, as compared with the generic job demands measure. The results thereby provide further support for the acknowledged 'elusiveness' of these theoretical interactions. Overall, however, the results did support the inclusion of occupation-specific measures of job demands for the accurate assessment of the health and job performance of high-risk workers. The implications for theoretical discussions that describe how high job demands are moderated by job resources are discussed. abstract_id: PUBMED:34248738 Job Demands and Job Resources of Academics in Higher Education. Too many job demands and not enough job resources can negatively influence the well-being of employees. Currently, limited information exists surrounding the job demands and resources as experienced by academic employees in the higher education sector. Therefore, the aim of this study was to identify the job demands and job resources experienced by academic employees using qualitative methods. Semi-structured interviews were conducted with 23 academic employees, using an Interpretative Phenomenological Approach. Thematic analysis, specifically template analysis was used to categorize the themes. Job demands were divided into three categories: quantitative (publication pressure, overburdened with the load, and competing time demands), qualitative (work/home balance, complexity of student support, organizational politics, and lack of mental health support) and organizational demands (using technology-mediated learning and lack of structural resources). Job resources were organized into two categories: organizational (social support) and personal resources (autonomy, meaningful work, and personal support). Participant experiences are highlighted to provide a better understanding of the job demands and job resources encountered. The framework of job demands and job resources gleaned from the study could be used for further research to manage and monitor motivational processes for academic staff, and to reduce strain due to high job demands. abstract_id: PUBMED:35115957 Job Demands and Resources, Positive and Negative Affect, and Psychological Distress of Social Workers in China. Globally, human service professionals, like social workers, experience significant job demands (JD) which can lead to outcomes like psychological distress, burnout, and high turnover rates. This is especially true in China, where the social work profession has grown substantially in recent decades. Because social workers play a crucial role in supporting vulnerable communities, there is a need to understand how their work conditions affect outcomes like psychological distress. This study applies the job demands and resources (JD-R) model to study this relation, along with the mediational effects of positive affect (PA) and negative affect (NA), in social workers from Chengdu, China (n = 897). The results of structural equation modeling indicate that JD-R differentially affect psychological distress. 
PA and NA partially mediate these relations. Job resources (JR) reduced psychological distress by reducing NA and increasing PA. JD did not have any effect on PA but significantly increased NA, which was associated with higher psychological distress. The magnitudes of each estimate suggest that JR has a greater effect on PA and NA, relative to the effects of JD on PA and NA. Interventions that seek to promote PA and reduce NA may be able to work with existing JR to buffer against the effects of JD in social workers. Answer: Yes, job demands and job control do affect problem-solving. The study represented in abstract PUBMED:22699186 found that individuals who reported high levels of skill discretion, task authority, and control, as well as those in high demand/high control and low demand/high control job characteristics, were more likely to use problem-solving strategies. This suggests that working conditions that include high levels of control may influence how individuals cope with problems, potentially affecting behavior outside of the work domain as well. Additionally, the demands control support model, as discussed in abstract PUBMED:19594240, indicates that job control and social support enable workers to engage in problem-solving. The study found that the way individuals use control and support to respond to problem-solving demands is associated with learning and affective well-being. Furthermore, abstract PUBMED:31957967 links mindfulness at work to greater job satisfaction and task performance through high-activation positive affect and problem-solving confidence, respectively. This implies that cognitive states like mindfulness can enhance problem-solving abilities in the workplace. In the context of creative engagement, abstract PUBMED:30625034 shows that high job control can modulate the relationship between job demands and creative process engagement, which is a form of problem-solving. Lastly, abstract PUBMED:31282780 suggests that cognitive resources such as problem-solving and creativity, which are inversely associated with burnout, may serve as buffers against the negative influence of job demands. This indicates that problem-solving abilities can mitigate the detrimental effects of high job demands. In summary, job demands and job control have a significant impact on problem-solving, with higher job control being associated with better problem-solving strategies and outcomes.
Instruction: Is surveillance colonoscopy necessary for patients with sporadic gastric hyperplastic polyps? Abstracts: abstract_id: PUBMED:25874940 Is surveillance colonoscopy necessary for patients with sporadic gastric hyperplastic polyps? Background: Gastric polyps, such as adenomas and hyperplastic polyps, can be found in various colonic polyposis syndromes. Unlike in sporadic gastric adenomas, in which the increased risk of colorectal neoplasia has been well characterized, information in sporadic gastric hyperplastic polyps was limited. Aim: To evaluate the association of sporadic gastric hyperplastic polyps with synchronous colorectal neoplasia in a large cohort. Methods: Patients with sporadic gastric hyperplastic polyps who underwent colonoscopy simultaneously or within six months were consecutively enrolled. Each patient was compared with two randomly selected age and sex matched controls without gastric polyps who also underwent colonoscopy in the same period. Data of patients' demographics and characteristics of the gastrointestinal polyps were documented. Results: A total of 261 cases in 118,576 patients who underwent esophagogastroduodenoscopy were diagnosed as sporadic gastric hyperplastic polyps, and 192 of 261 (73.6%) patients underwent colonoscopy. Colorectal neoplasias were identified in 46 (24.0%) of 192 cases and in 40 (10.4%) of 384 controls (P<0.001). The mean size and distribution of colorectal neoplasias were not significantly different between the two groups. There was a significantly higher rate of colorectal adenoma (odds ratio [OR] 3.2, 95% confidence interval [CI] 1.9-5.3) in the gastric hyperplastic polyps group than in the control group, while the prevalence of colorectal cancer was similar in the two groups. Logistic regression analysis also suggested that the presence of gastric hyperplastic polyps (OR 2.5, 95% CI 1.5-4.0) was an independent risk factor for colorectal neoplasias. Conclusion: The risk of colorectal adenoma increases in patients with sporadic gastric hyperplastic polyps, and surveillance colonoscopy for these patients should be considered. abstract_id: PUBMED:18441865 Colonoscopy surveillance after polypectomy and colorectal cancer resection. This article describes a joint update of guidelines by the American Cancer Society and the U.S. Multi-Society Task Force on Colorectal Cancer delineating evidence-based surveillance recommendations for patients after polypectomy and colorectal cancer resection. Although there are some qualifying conditions, the following general guidelines apply: after colonoscopic polypectomy, patients with hyperplastic polyps should be considered to have normal colonoscopies, and subsequent colonoscopy is recommended at 10 years. Patients with one or two small (less than 1 cm) tubular adenomas, including those with only low-grade dysplasia, should have their next colonoscopy in five to 10 years. Patients with three to 10 adenomas, any adenoma 1 cm or larger, or any adenoma with villous features or high-grade dysplasia should have their next colonoscopy in three years. Following curative resection of colorectal cancer, patients should undergo a colonoscopy at one year, with subsequent follow-up intervals determined by the results of this examination. 
Adoption of these guidelines will have a dramatic impact on the quality of care provided to patients after a colorectal cancer diagnosis, will assist in shifting available resources from intensive surveillance to screening, and will ultimately decrease suffering and death related to colorectal cancer. abstract_id: PUBMED:25266407 Overuse of colonoscopy for colorectal cancer screening and surveillance. Background: Ongoing efforts to increase colorectal cancer (CRC) screening rates have raised concerns that these exams may be overused, thereby subjecting patients to unnecessary risks and wasting healthcare resources. Objective: Our aim was to measure overuse of screening and surveillance colonoscopies among average-risk adults, and to identify correlates of overuse. Design, Setting, And Participants: Our approach was a retrospective cohort study using electronic health record data for patients 50-65 years old with no personal history of CRC or colorectal adenomas with an incident CRC screening colonoscopy from 2001 to 2010 within a multispecialty physician group practice. Main Outcome Measures: We measured time to next screening or surveillance colonoscopy and predictors of overuse (exam performed more than one year earlier than guideline recommended intervals) of colonoscopies. Key Results: We identified 1,429 adults who had an incident colonoscopy between 2001 and 2010, and they underwent an additional 871 screening or surveillance colonoscopies during a median follow-up of 6 years. Most follow-up screening colonoscopies (88%) and many surveillance colonoscopies (49%) repeated during the study represented overuse. Time to next colonoscopy after incident screening varied by exam findings (no polyp: median 6.9 years, interquartile range [IQR]: 5.1-10.0; hyperplastic polyp: 5.7 years, IQR: 4.9-9.7; low-risk adenoma: 5.1 years, IQR: 3.3-6.3; high-risk adenoma: 2.9 years, IQR: 2.0-3.4, p &lt; 0.001). In logistic regression models of colonoscopy overuse, an endoscopist recommendation for early follow-up was strongly associated with overuse of screening colonoscopy (OR 6.27, 95% CI: 3.15-12.50) and surveillance colonoscopy (OR 13.47, 95% CI 6.61-27.46). In a multilevel logistic regression model, variation in the overuse of screening colonoscopy was significantly associated with the endoscopist performing the previous exam. Conclusions: Overuse of screening and surveillance exams are common and should be monitored by healthcare systems. Variations in endoscopist recommendations represent targets for interventions to reduce overuse. abstract_id: PUBMED:24909388 Risk of adenomas with high-risk characteristics based on two previous colonoscopy. Background And Aim: The recommended intervals between surveillance colonoscopies are based on the most recent examination findings. However, whether the two previous colonoscopies affect second surveillance colonoscopic findings is not established. The aim of this study is to estimate the risk of obtaining high-risk findings (HRF) on the next surveillance colonoscopy using the results of two previous colonoscopies, and to estimate the appropriate time interval for the next surveillance colonoscopy. Methods: Among subjects who underwent screening colonoscopy during January 2002-December 2009, patients who underwent second surveillance colonoscopy before June 2012 were enrolled. 
"No adenoma" was defined as a hyperplastic polyp or no polyp, "low-risk findings (LRF)" as one or two small (&lt; 1 cm) tubular adenomas, and "HRF" as advanced adenoma, cancer, or any sized multiple (≥ 3) adenomas. Results: Among enrolled 852 subjects, 65 (7.6%) had HRF at second surveillance colonoscopy. Multivariate analysis showed that HRF on second surveillance colonoscopy were associated with male and HRF on screening colonoscopy (all, P &lt; 0.01). In subjects with LRF on first surveillance colonoscopy, HRF on the screening colonoscopy significantly affected the detection of HRF on second surveillance colonoscopy (P &lt; 0.01). Patients with HRF on screening colonoscopy and LRF on the first surveillance colonoscopy had no different risk of HRF on second surveillance colonoscopy from those with HRF on first surveillance colonoscopy (P &gt; 0.05). Conclusions: The HRF on second surveillance are significantly associated with previous two colonoscopic results. In patients with LRF on first surveillance, screening colonoscopic findings should be considered to determine the optimal surveillance interval. abstract_id: PUBMED:19598329 Colonic adenoma patients have higher incidence of hyperplastic polyps on surveillance colonoscopy. Objectives: To determine the incidence of hyperplastic polyps in patients undergoing surveillance colonoscopy and to compare with the prevalence in individuals undergoing index colonoscopy. Patients And Methods: This prospective observational study included patients with index colonoscopy findings of adenomas, adenoma with concomitant hyperplastic polyps and advanced neoplasia who subsequently underwent surveillance colonoscopy. Patient characteristics, indications for colonoscopy, interval between examinations and findings were recorded. These findings were compared with our control group that comprised of patients who underwent index colonoscopy during the same period. Results: Study group consisted of 108 patients (median age 61.5 years) who underwent index colonoscopy for indications including, weight loss, rectal bleeding, altered bowel habits and anaemia. On index colonoscopy, adenomas, adenoma with concomitant hyperplastic polyps and advanced neoplasia were present in 61, 35 and 12 patients, respectively. Surveillance colonoscopy was performed after a mean follow-up period of 29 +/- 17 months. Findings on surveillance examination included hyperplastic polyps in 35 and 57% of patients with past adenomas and adenoma with concomitant hyperplastic polyps, respectively. Control group included 3852 symptomatic patients who had index colonoscopy (median age, 65 years). Hyperplastic polyps, adenomas and advanced neoplasia were found in 155 (4%), 388 (10%) and 60 (1.5%) of patients, respectively. Hyperplastic polyps and adenoma were significantly higher in study group as compared with control group (P &gt;0.5). Conclusion: Incidence of hyperplastic polyps is significantly higher on surveillance colonoscopy as compared with the prevalence on index colonoscopy. This may signify a continuous spectrum of biological evolution between hyperplastic polyps and adenomas. abstract_id: PUBMED:16737947 Guidelines for colonoscopy surveillance after polypectomy: a consensus update by the US Multi-Society Task Force on Colorectal Cancer and the American Cancer Society. Adenomatous polyps are the most common neoplastic findings uncovered in people who undergo colorectal screening or have a diagnostic workup for symptoms. 
It was common practice in the 1970s for these patients to have annual follow-up surveillance examinations to detect additional new adenomas as well as missed synchronous adenomas. As a result of the National Polyp Study report in 1993, which demonstrated clearly in a randomized design that the first postpolypectomy examination could be deferred for 3 years, guidelines published by a gastrointestinal consortium in 1997 recommended that the first follow-up surveillance be 3 years after polypectomy for most patients. In 2003, these guidelines were updated, colonoscopy was recommended as the only follow-up examination, and stratification at baseline into lower and higher risk for subsequent adenomas was suggested. The 1997 and 2003 guidelines dealt with both screening and surveillance. However, it has become increasingly clear that postpolypectomy surveillance is now a large part of endoscopic practice, draining resources from screening and diagnosis. In addition, surveys have demonstrated that a large proportion of endoscopists are conducting surveillance examinations at shorter intervals than recommended in the guidelines. In the present paper, a careful analytic approach was designed addressing all evidence available in the literature to delineate predictors of advanced pathology, both cancer and advanced adenomas, so that patients can be more definitely stratified at their baseline colonoscopy into those at lower or increased risk for a subsequent advanced neoplasia. People at increased risk have either three or more adenomas, or high-grade dysplasia, or villous features, or an adenoma ≥1 cm in size. It is recommended that they have a 3-year follow-up colonoscopy. People at lower risk who have one or two small (< 1 cm) tubular adenomas with no high-grade dysplasia can have a follow-up in 5 to 10 years, whereas people with hyperplastic polyps only should have a 10-year follow-up as average-risk people. Recent papers have reported a significant number of missed cancers by colonoscopy. However, high-quality baseline colonoscopy with excellent patient preparation and adequate withdrawal time should minimize this and reduce clinicians' concerns. These guidelines were developed jointly by the US Multi-Society Task Force on Colorectal Cancer and the American Cancer Society to provide a broader consensus and thereby increase utilization of the recommendations by endoscopists. Adoption of these guidelines nationally can have a dramatic impact on shifting available resources from intensive surveillance to screening. It has been shown that the first screening colonoscopy and polypectomy produces the greatest effects on reducing the incidence of colorectal cancer in patients with adenomatous polyps. abstract_id: PUBMED:21957689 Inappropriate colonoscopic surveillance of hyperplastic polyps. Colonoscopic surveillance of hyperplastic polyps alone is controversial and may be inappropriate. The colonoscopy surveillance register at a university teaching hospital was audited to determine the extent of such hyperplastic polyp surveillance. The surveillance endoscopy records were reviewed, those patients with hyperplastic polyps were identified, their clinical records were examined and contact was made with each patient. Of the 483 patients undergoing surveillance for colonic polyps 113 (23%) had hyperplastic polyps alone on last colonoscopy. 104 patients remained after exclusion of those under appropriate surveillance. 87 of the 104 patients (84%) were successfully contacted.
37 patients (8%) were under appropriate colonoscopic surveillance for a significant family history of colorectal carcinoma. 50 (10%) patients with hyperplastic polyps alone and no other clinical indication for colonoscopic surveillance were booked for follow-up colonoscopy. This represents not only a budgetary but more importantly a clinical opportunity cost, the removal of which could liberate valuable colonoscopy time for more appropriate indications. abstract_id: PUBMED:25575106 Serrated adenoma prevalence in inflammatory bowel disease surveillance colonoscopy, and characteristics revealed by chromoendoscopy and virtual chromoendoscopy. Background: Sessile or nonpolypoid neoplastic lesions, including sessile serrated adenomas (SSAs), are difficult to detect in patients with inflammatory bowel disease (IBD). Objectives: To assess the prevalence and endoscopic features of SSA in IBD patients undergoing surveillance colonoscopy using novel endoscopic techniques. Methods: Histology results of biopsies from a cohort of 87 patients (47 men; median age 51.4 years; median duration of disease 16.9 years; ulcerative colitis [n=40], Crohn disease [n=43], ischemic colitis [n=4]) with longstanding colonic IBD undergoing surveillance colonoscopy were reviewed. Lesions of dysplasia (adenoma-like mass, or dysplasia-associated lesion or mass), SSAs, adenoma-like polyps, hyperplastic polyps and inflammatory polyps were identified. Surveillance colonoscopy using high-definition alone, or with iScan (Pentax, USA) dye-sprayed or virtual chromoendoscopy was performed. Lesion characteristics were described before histological diagnosis. Results: Fourteen SSAs were detected in 87 (11%) IBD patients. The endoscopic characteristics of SSA lesions were: nonpolypoid appearance (86%), predominant localization in the proximal colon (79%), >6 mm in size (79%), cloudy cover (64%), Kudo pit pattern modified type IIO (86%) and irregular spiral vascular pattern (79%). Among the 44 SSAs and hyperplastic polyps found in the present study, the above characteristics of SSA at colonoscopy had a sensitivity of 92.86% (95% CI 66.06% to 98.8%) and specificity of 93.33% (95% CI 77.89% to 98.99%) in predicting a histological diagnosis of SSA (positive predictive value 86.67%, negative predictive value 96.55%). Conclusion: SSAs are a common finding at surveillance colonoscopy in IBD and have several characteristic features. Further studies are needed to evaluate the natural history of these lesions in IBD patients. abstract_id: PUBMED:36895974 Sporadic hyperplastic polyp associated with above-average risk of developing metachronous colorectal cancer. Background: Post-colonoscopy surveillance interval for colorectal polyps depends on the size, number, and pathological classification of removed polyps. The risk of sporadic hyperplastic polyps (HPs) for developing colorectal adenocarcinoma remains debatable due to limited data. We aimed to evaluate the risk of metachronous colorectal cancer (CRC) in patients with sporadic HPs. A total of 249 patients with historical HP(s) diagnosed in 2003 were included as the disease group, and 393 patients without any polyp as the control group. All historical HPs were reclassified into SSA or true HP based on the recent 2010 and 2019 World Health Organization (WHO) criteria. Polyp size was measured under a light microscope. Patients who developed CRC were identified from the Tumor Registry database. Each tumor was tested for DNA mismatch repair proteins (MMR) by immunohistochemistry.
Results showed that 21 (8%) and 48 (19%) historical HPs were reclassified as SSAs based on the 2010 and 2019 WHO criteria, respectively. The mean polyp size of SSAs (6.7 mm) was significantly larger than HPs (3.3 mm) (P<0.0001). For polyp size ≥5 mm, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for diagnosing SSA was 90%, 90%, 46%, and 99%, respectively. Left-sided polyps with size <5 mm were 100% of HPs. Five of 249 (2%) patients developed metachronous CRC during the 14-year follow-up from 2003 to 2017, including 2 of 21 (9.5%) patients with SSA diagnosed at intervals of 2.5 and 7 years, and 3 of 228 (1.3%) patients with HP(s) at 7, 10.3, and 11.9 years. Two of 5 cancers showed MMR deficiency with concurrent loss of MLH1/PMS2. Based on the 2019 WHO criteria, the rate of developing metachronous CRC in patients with SSA (P=0.0116) and HP (P=0.0384) was significantly higher than the control group, and no significant difference was observed between patients with SSA and with HP (P=0.241) in this cohort. Patients with either SSA or HP also had higher risk of CRC than average-risk US population (P=0.0002 and 0.0001, respectively). Our data add a new line of evidence that patients with sporadic HP are associated with above-average risk of developing metachronous CRC. Post-polypectomy surveillance for sporadic HP may be adjusted in future practice given the low but increased risk of developing CRC. abstract_id: PUBMED:36912828 Association of Life Expectancy With Surveillance Colonoscopy Findings and Follow-up Recommendations in Older Adults. Importance: Surveillance after prior colon polyps is the most frequent indication for colonoscopy in older adults. However, to our knowledge, the current use of surveillance colonoscopy, clinical outcomes, and follow-up recommendations in association with life expectancy, factoring in both age and comorbidities, have not been studied. Objective: To evaluate the association of estimated life expectancy with surveillance colonoscopy findings and follow-up recommendations among older adults. Design, Setting, And Participants: This registry-based cohort study used data from the New Hampshire Colonoscopy Registry (NHCR) linked with Medicare claims data and included adults in the NHCR who were older than 65 years, underwent colonoscopy for surveillance after prior polyps between April 1, 2009, and December 31, 2018, and had full Medicare Parts A and B coverage and no Medicare managed care plan enrollment in the year prior to colonoscopy. Data were analyzed from December 2019 to March 2021. Exposures: Life expectancy (<5 years, 5 to <10 years, or ≥10 years), estimated using a validated prediction model. Main Outcomes And Measures: The main outcomes were clinical findings of colon polyps or colorectal cancer (CRC) and recommendations for future colonoscopy. Results: Among 9831 adults included in the study, the mean (SD) age was 73.2 (5.0) years and 5285 (53.8%) were male. A total of 5649 patients (57.5%) had an estimated life expectancy of 10 or more years, 3443 (35.0%) of 5 to less than 10 years, and 739 (7.5%) of less than 5 years. Overall, 791 patients (8.0%) had advanced polyps (768 [7.8%]) or CRC (23 [0.2%]). Among the 5281 patients with available recommendations (53.7%), 4588 (86.9%) were recommended to return for future colonoscopy. Those with longer life expectancy or more advanced clinical findings were more likely to be told to return.
For example, among patients with no polyps or only small hyperplastic polyps, 132 of 227 (58.1%) with life expectancy of less than 5 years were told to return for future surveillance colonoscopy vs 940 of 1257 (74.8%) with life expectancy of 5 to less than 10 years and 2163 of 2272 (95.2%) with life expectancy of 10 years or more (P < .001). Conclusions And Relevance: In this cohort study, the likelihood of finding advanced polyps and CRC on surveillance colonoscopy was low regardless of life expectancy. Despite this observation, 58.1% of older adults with less than 5 years' life expectancy were recommended to return for future surveillance colonoscopy. These data may help refine decision-making about pursuing or stopping surveillance colonoscopy in older adults with a history of polyps. Answer: Surveillance colonoscopy may be considered for patients with sporadic gastric hyperplastic polyps. A study found that the risk of colorectal adenoma increases in patients with sporadic gastric hyperplastic polyps, suggesting that surveillance colonoscopy should be considered for these patients (PUBMED:25874940). This is in contrast to the general guidelines that recommend surveillance colonoscopy at 10 years for patients with hyperplastic polyps after colonoscopic polypectomy, as they are considered to have normal colonoscopies (PUBMED:18441865). However, the risk of metachronous colorectal cancer in patients with sporadic hyperplastic polyps has been shown to be above average, indicating that post-polypectomy surveillance for sporadic hyperplastic polyps may need to be adjusted in future practice given the low but increased risk of developing colorectal cancer (PUBMED:36895974). It is important to note that overuse of screening and surveillance colonoscopies is common and should be monitored by healthcare systems, with variations in endoscopist recommendations representing targets for interventions to reduce overuse (PUBMED:25266407). Additionally, the risk of adenomas with high-risk characteristics on second surveillance colonoscopy is significantly associated with the results of the two previous colonoscopies, suggesting that in patients with low-risk findings on the first surveillance, screening colonoscopic findings should be considered to determine the optimal surveillance interval (PUBMED:24909388). In summary, while hyperplastic polyps are generally considered low risk and associated with longer surveillance intervals, the presence of sporadic gastric hyperplastic polyps may warrant consideration for more frequent surveillance due to an increased risk of colorectal adenoma and potentially colorectal cancer. However, the decision should be individualized based on the patient's overall risk profile and the characteristics of the polyps, and efforts should be made to avoid overuse of surveillance procedures.
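As a rough illustration of the diagnostic-accuracy arithmetic reported for the ≥5 mm size cutoff in PUBMED:36895974 above, the following minimal Python sketch computes sensitivity, specificity, PPV and NPV from a 2x2 table. The counts are hypothetical, chosen only so the outputs land near the reported percentages; they are not the study's actual data.

```python
# Minimal sketch of sensitivity/specificity/PPV/NPV from a 2x2 table.
# Counts are illustrative placeholders, not data from PUBMED:36895974.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # detected SSAs among all SSAs
    specificity = tn / (tn + fp)   # correctly excluded non-SSAs
    ppv = tp / (tp + fp)           # chance a >=5 mm polyp is truly an SSA
    npv = tn / (tn + fn)           # chance a <5 mm polyp is truly not an SSA
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_metrics(tp=18, fp=21, fn=2, tn=190)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```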
Instruction: Does stapled functional end-to-end anastomosis affect recurrence of Crohn's disease after ileocolonic resection? Abstracts: abstract_id: PUBMED:14571753 Does stapled functional end-to-end anastomosis affect recurrence of Crohn's disease after ileocolonic resection? Background/aims: Crohn's disease is characterized by a high incidence of perianastomotic recurrence after ileocolonic resection. The influence of anastomotic configuration on the incidence of reoperation was evaluated in patients undergoing resection for Crohn's disease. Methodology: 106 patients affected by ileocolonic Crohn's disease were divided in two groups: group I with a hand-sewn end-to-side or side-to-side isoperistaltic anastomosis (30 patients) and group II with functional end-to-end anastomosis made with linear staplers (76 patients). Results: The morbidity was 4.7%: 3 complications and a postoperative death occurred in group I and two complications occurred in group II. There were 5 recurrences (16.7%) in the hand-sewn group and 2 recurrences (2.6%) in the stapled group, with a recurrence rate of 3.54 and 0.92, respectively. Conclusions: Our non-randomized study suggests a better trend toward the functional stapled technique, in terms of recurrence rate. These observations need further investigation with randomized controlled trials, to compare the two different anastomotic procedures. abstract_id: PUBMED:10466883 Stapled functional end-to-end anastomosis versus sutured end-to-end anastomosis after ileocolonic resection in Crohn disease. Background: The aim of this retrospective study was to compare complications and anastomotic recurrence rates after stapled functional end-to-end versus conventional sutured end-to-end anastomosis after ileocolonic resection in Crohn disease. Methods: Between 1988 and 1997, 123 patients underwent ileocolonic resection for Crohn disease. Forty-five patients underwent stapled functional end-to-end anastomosis (stapled group), and 78 underwent sutured end-to-end anastomosis (sutured group). Results: The stapled anastomosis has been more frequently used during the past 3 years; between 1995 and 1997 it was used in 33 (83%) of 40 patients, compared with only 12 (14%) of 83 patients between 1988 and 1994. There was one anastomotic leak (2%) in the stapled group, compared with six (8%) in the sutured group. The overall complication rate was significantly lower in the stapled group (7% versus 23%, P = 0.04). In the stapled group only one patient required reoperation for ileocolonic anastomotic recurrence, compared with 26 in the sutured group. The cumulative 1-, 2- and 5-year rates for ileocolonic recurrences requiring surgery in the stapled group were 0%, 0%, and 3%, which were significantly lower than the 5%, 11%, and 24% in the sutured group (P = 0.007 by log-rank test). Conclusions: Although the follow-up duration was short in the stapled group, these results suggest that stapled functional end-to-end ileocolonic anastomosis is associated with a lower incidence of complications and that early anastomotic recurrence is less common than after sutured end-to-end anastomosis. However, a randomized trial would be necessary to draw clear conclusions. abstract_id: PUBMED:10452253 Stapled functional end-to-end anastomosis in Crohn's disease. The aim of this study was to review our experience of stapled functional end-to-end anastomosis following a bowel resection for Crohn's disease. 
Between 1988 and 1997, 62 patients underwent 63 stapled functional end-to-end anastomoses, including 54 ileocolonic, 8 ileoileal, and 1 colocolonic. One patient developed an anastomotic leak and required a laparotomy. There were no anastomotic complications in any other patients. The median duration of follow-up was 21 (range: 4-122) months. Only one patient developed recurrence at the stapled ileocolonic anastomosis 35 months after the operation. However, this patient required no further surgical treatment. Stapled functional end-to-end anastomosis is associated with a low incidence of complications and a low incidence of early recurrence. This technique may therefore be the anastomotic configuration of choice following a bowel resection for Crohn's disease. abstract_id: PUBMED:26988855 Is "functional end-to-end anastomosis" really functional? A review of the literature on stapled anastomosis using linear staplers. Purposes: Anastomosis is one of the basic skills of a gastrointestinal surgeon. Stapling devices are widely used because stapled anastomosis (SA) can shorten operation times. Antiperistaltic stapled side-to-side anastomosis (SSSA) using linear staplers is a popular SA technique that is often referred to as "functional end-to-end anastomosis (FEEA)." The term "FEEA" has spread without any definite validation of its "function." The aim of this review is to show the heterogeneity of SA and conventional hand-sewn end-to-end anastomosis (HEEA) and to advocate the renaming of "FEEA." Methods: We conducted a narrative review of the literature on SSSA. We reviewed the literature on ileocolic and small intestinal anastomosis in colonic cancer, Crohn's disease and ileostomy closure due to the simplicity of the technique. Results: The superiority of SSSA in comparison to HEEA has been demonstrated in previous clinical studies concerning gastrointestinal anastomosis. Additionally, experimental studies have shown the differences between the two anastomotic techniques on peristalsis and the intestinal bacteria at the anastomotic site. Conclusions: SSSA and HEEA affect the postoperative clinical outcome, electrophysiological peristalsis, and bacteriology in different manners; no current studies have shown the functional equality of SSSA and HEEA. However, the use of the terms "functional end-to-end anastomosis" and/or "FEEA" could cause confusion for surgeons and researchers and should therefore be avoided. abstract_id: PUBMED:38408943 Optimising surgical anastomosis in ileocolic resection for Crohn's disease with respect to recurrence and functionality: two international parallel randomized controlled trials comparing handsewn (END-to-end or Kono-S) to stapled anastomosis (HAND2END and the End2End STUDIES). Background: The most common intestinal operation in Crohn's disease (CD) is an ileocolic resection. Despite optimal surgical and medical management, recurrent disease after surgery is common. Different types of anastomoses with respect to configuration and construction can be made after resection, for example handsewn (end-to-end and Kono-S) and stapled (side-to-side). The various types of anastomoses might affect endoscopic recurrence and its assessment, the functional outcome, and costs. The objective of the present study is to compare the three types of anastomoses with respect to endoscopic recurrence at 6 months, gastrointestinal function, and health care consumption.
Methods: This is a randomized controlled multicentre superiority trial, allocating patients either to side-to-side stapled anastomosis as advised in current guidelines or a handsewn anastomosis (an end-to-end or Kono-S). It is hypothesized that handsewn anastomoses do better than stapled, and end-to-end perform better than the saccular Kono-S. Two international studies with a similar setup will be conducted mainly in the Netherlands (End2End) and Italy (HAND2END). Patients diagnosed with CD, aged over 16 years in the Netherlands and 18 years in Italy requiring (re)resection of the (neo)terminal ileum are eligible. The first part of the study compares the two handsewn anastomoses with the stapled anastomosis. To detect a clinically relevant difference of 25% in endoscopic recurrence, a total of 165 patients will be needed in the Netherlands and 189 patients in Italy. Primary outcome is postoperative endoscopic recurrence (defined as Rutgeerts score ≥ i2b) at 6 months. Secondary outcomes are postoperative morbidity, gastrointestinal function, quality of life (QoL) and costs. Discussion: The research question addresses a knowledge gap within the general practice elucidating which type of anastomosis is superior in terms of endoscopic and clinical recurrence, functionality, QoL and health care consumption. The results of the proposed study might change current practice in contrast to what is advised by the guidelines. Trial Registration: NCT05246917 for HAND2END and NCT05578235 for End2End (http://www.clinicaltrials.gov/). abstract_id: PUBMED:10512947 Strategy for surgical management of ileocolonic anastomotic recurrence in Crohn's disease. After resection for ileocecal or ileocolonic Crohn's disease anastomotic recurrence is common, and many patients require further surgery. This study reviews our overall experience of surgery for ileocolonic anastomotic recurrence of Crohn's disease so we can propose a strategy for management. A series of 109 patients who underwent surgery for anastomotic recurrence after ileocecal or ileocolonic resection for Crohn's disease between 1984 and 1997 were reviewed. Ileocolonic recurrence was treated by strictureplasty in 39 patients and resection in 70 (with sutured end-to-end anastomosis, 48; stapled side-to-side anastomosis, 22). Stapled anastomosis has been frequently used between 1995 and 1997. Short recurrence was mainly treated by strictureplasty, and long or perforating disease was resected. Coexisting small bowel disease was more common in the patients having strictureplasty. Septic complications (leak/fistula/abscess) related to the ileocolonic procedure occurred in 1 of 39 patients (3%) after strictureplasty, in 6 of 48 (13%) after resection with sutured anastomosis, and in none of 22 after resection with stapled anastomosis. The median duration of follow-up was 90 months after strictureplasty, 105 months after resection with sutured anastomosis, and 22 months after resection with stapled anastomosis. Altogether 18 of 39 patients (46%) after strictureplasty, 22 of 48 (46%) after resection with sutured anastomosis, and none of 22 after resection with stapled anastomosis required further surgery for suture line recurrence. In conclusion, strictureplasty is useful for short ileocolonic recurrence in patients with multifocal small bowel disease or previous extensive resection. Stapled side-to-side anastomosis was associated with a low incidence of complications, and early recurrence was not observed, although the duration of follow-up was short.
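The HAND2END/End2End protocol (PUBMED:38408943) is powered to detect a 25% difference in endoscopic recurrence between anastomosis types. As a hedged illustration of the kind of two-proportion sample-size calculation behind such a figure, the sketch below uses the standard normal-approximation formula; the baseline recurrence rate, alpha and power are assumptions for illustration only and are not taken from the trial protocol.

```python
# Minimal sketch of a two-proportion sample-size calculation (normal approximation).
# p1, p2, alpha and power below are illustrative assumptions, not protocol values.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)          # two-sided significance level
    z_b = norm.ppf(power)                  # desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)    # sum of binomial variances
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# e.g. 50% recurrence in one arm vs 25% in the other (a 25% absolute difference)
print(round(n_per_group(0.50, 0.25)))      # participants needed per arm
```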
abstract_id: PUBMED:36515520 Stapled Antimesenteric Functional End-to-End Anastomosis Following Intestinal Resection for Crohn's Disease. Background: Anastomotic recurrence after bowel resection is problematic in patients with Crohn's disease. Antimesenteric functional end-to-end handsewn (Kono-S) anastomosis is associated with a low risk of anastomotic recurrence in patients with Crohn's disease. Impact Of Innovation: Kono-S anastomosis is effective but may be time-consuming. This study aimed to describe stapled antimesenteric functional end-to-end anastomosis for patients with Crohn's disease. Technology Materials And Methods: The mesentery of the affected bowel segment was divided. A 5-cm-wide stapled functional end-to-end anastomosis was performed approximately 6 cm from the affected segment. The bowel was divided transversely exactly 90° to the intestinal lumen and the mesentery, and a supporting column was then constructed. Preliminary Results: From January 2018 to June 2021, 17 stapled antimesenteric functional end-to-end anastomoses were performed. The mean operative time was 106 (range, 80-135) minutes, and the time to construct the stapled antimesenteric functional end-to-end anastomosis was 21 (range, 18-28) minutes. The mean follow-up time was 8.9 (range, 1-15) months. In total, 10 patients underwent surveillance endoscopy. The average Rutgeerts score was 0.8 (range, 0-4), and the incidence of endoscopic recurrence was 11.8%. No postoperative mortality or anastomotic leakage was observed. Conclusion: Stapled antimesenteric functional end-to-end anastomosis may be a safe and time-saving procedure for patients with Crohn's disease. Future Directions: Further prospective studies with a large sample size are warranted. abstract_id: PUBMED:27943564 Factors affecting the incidence of early endoscopic recurrence after ileocolonic resection for Crohn's disease: a multicentre observational study. Aim: Early endoscopic recurrence is frequently observed in patients following resection for Crohn's disease (CD). However, factors affecting the incidence of an early postoperative endoscopic recurrence (EPER) have not been fully determined. The aim of this study was to evaluate risk factors for EPER after ileocolonic resection for CD. Method: This was a retrospective, international multicentre study, in which 127 patients with a first ileocolonoscopy conducted between 6 and 12 months after ileocolonic resection for CD were included. Endoscopic recurrence was defined as a Rutgeerts score of ≥ i2. The following variables were investigated as potential risk factors for EPER: gender, age at surgery, location and behaviour of CD, smoking, concomitant perianal lesions, preoperative use of steroids, immunomodulators and biologics, previous resection, blood transfusion, surgical procedure (open vs laparoscopic approach), length of resected bowel, type of anastomosis (side-to-side vs end-to-end), postoperative complications, granuloma and postoperative biological therapy. Variables related to the patient, disease and surgical procedure were investigated as potential risk factors for EPER, with univariate and multivariate (logistic regression) analyses. Results: 43/127 (34%) patients had EPER at the time of the first postoperative ileocolonoscopy. In univariate analysis, only preoperative steroid use was significantly associated with a higher rate of EPER [21/45 patients (47%) on steroids and 22/82 patients (27%) without steroids (P = 0.04)]. 
In multivariate analysis, only preoperative steroid use was a significant independent risk factor for EPER (odds ratio 3.28, 95% confidence interval: 1.30-8.28; P = 0.01). Conclusions: This study found that only preoperative steroid use was a significant risk factor for EPER after ileocolonic resection for CD. Prospective studies are necessary to evaluate precisely the impact of perioperative medications on EPER rates. abstract_id: PUBMED:33162837 Surgical Treatment for Crohn's Disease: A Role of Kono-S Anastomosis in the West. More than 80% of patients with Crohn's disease (CD) will require surgical intervention during their lifetime, with high rates of anastomotic recurrence and stenosis necessitating repeat surgery. Current data show that pharmacotherapy has not significantly improved the natural history of postoperative clinical and surgical recurrence of CD. In 2003, antimesenteric hand-sewn functional end-to-end (Kono-S) anastomosis was first performed in Japan. This technique has yielded very desirable outcomes in terms of reducing the incidence of anastomotic surgical recurrence. The most recent follow-up of these patients showed that very few had developed surgical recurrence. This new approach is superior to stapled functional end-to-end anastomosis because the stumps are sutured together to create a stabilizing structure (a "supporting column"), serving as a supportive backbone of the anastomosis to help prevent distortion of the anastomotic lumen due to disease recurrence and subsequent clinical symptoms. This technique requires careful mesenteric excision for optimal preservation of the blood supply and innervation. It also results in a very wide anastomotic lumen on the antimesenteric side. The Kono-S technique has shown efficacy in preventing surgical recurrence and the potential to become the new standard of care for intestinal CD. abstract_id: PUBMED:30353491 Surgical Recurrence at Anastomotic Site After Bowel Resection in Crohn's Disease: Comparison of Kono-S and End-to-end Anastomosis. Background: Anastomotic surgical recurrence after bowel resection in Crohn's disease patients is problematic. This study was performed to evaluate the increased risk of anastomotic surgical recurrence. Methods: From 2006 to 2016, we performed anastomoses in 215 consecutive Crohn's disease patients. The cohort was divided into two groups: Kono-S anastomosis (n = 117) and end-to-end anastomosis (n = 98). Multivariate analysis of predictors of anastomotic surgical recurrence and Kaplan-Meier analysis for the 5-year anastomotic surgical recurrence rate were evaluated. Results: The two groups showed no statistically significant differences in patient backgrounds. During a median follow-up of 54 months, 28 patients required anastomotic surgical recurrence [4 (3.4%) in the Kono-S group and 24 (24.4%) in the end-to-end group]. Six leaks (5.1%) were detected in the Kono-S group and 17 leaks (17.3%) in the end-to-end group; all were successfully treated conservatively. End-to-end anastomosis, leakage, age < 45 years, and body mass index of ≥ 18 kg/m2 at the first surgery had a higher risk of anastomotic surgical recurrence. Kono-S anastomosis significantly reduced the risk of anastomotic surgical recurrence after 1 year (odds ratio, 0.14). Anastomotic leakage influenced anastomotic surgical recurrence within 1 year (odds ratio, 4.84).
The 5-year surgery-free survival rate at the anastomosis site with Kono-S anastomosis (95.0%) was significantly higher than that with end-to-end anastomosis (81.3%; P < 0.001). Conclusions: Anastomotic leakage after bowel resection in Crohn's disease patients increased anastomotic surgical recurrence within 1 year, and Kono-S anastomosis is associated with a low risk of anastomotic surgical recurrence after > 1 year. Answer: The available literature suggests that stapled functional end-to-end anastomosis (FEEA) may reduce recurrence of Crohn's disease after ileocolonic resection. Several studies have reported lower rates of perianastomotic recurrence and complications with stapled anastomosis compared to hand-sewn techniques. One non-randomized study indicated a lower recurrence rate with the stapled technique (2.6%) compared to the hand-sewn group (16.7%) (PUBMED:14571753). Another retrospective study found that the stapled group had significantly lower complication rates and early anastomotic recurrence rates than the sutured end-to-end anastomosis group (PUBMED:10466883). Similarly, a review of stapled functional end-to-end anastomosis in Crohn's disease reported a low incidence of complications and early recurrence, suggesting that this technique may be the anastomotic configuration of choice following bowel resection for Crohn's disease (PUBMED:10452253). However, it is important to note that the term "functional end-to-end anastomosis" has been questioned for its functional equivalence to hand-sewn end-to-end anastomosis, and the literature suggests avoiding this term due to potential confusion (PUBMED:26988855). Furthermore, ongoing randomized controlled trials are comparing different types of anastomoses, including handsewn (end-to-end and Kono-S) and stapled (side-to-side), to determine their impact on endoscopic recurrence, gastrointestinal function, and health care consumption (PUBMED:38408943). The Kono-S anastomosis, a hand-sewn technique, has also been reported to have a low risk of anastomotic surgical recurrence (PUBMED:33162837; PUBMED:30353491). In summary, the evidence suggests that stapled functional end-to-end anastomosis may be associated with a lower incidence of recurrence and complications in Crohn's disease after ileocolonic resection, but further randomized controlled trials are needed to confirm these findings and to compare the efficacy of different anastomotic techniques.
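Several of the abstracts above (PUBMED:10466883, PUBMED:30353491) report cumulative recurrence-free rates estimated with Kaplan-Meier/log-rank methods. The sketch below is a minimal, self-contained product-limit estimator to show how such cumulative rates are built from censored follow-up data; the follow-up times and event flags are made up for illustration and are not taken from either study.

```python
# Minimal sketch of the Kaplan-Meier product-limit estimator, S(t) = prod(1 - d_i/n_i).
# Follow-up times (months) and event flags are illustrative, not study data.
def kaplan_meier(times, events):
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:                      # recurrence observed at this time
            surv *= 1.0 - 1.0 / at_risk
            curve.append((times[i], surv))
        at_risk -= 1                       # subject leaves the risk set (event or censored)
    return curve

times  = [6, 12, 18, 24, 30, 36, 48, 60]   # months to recurrence or last follow-up
events = [1,  0,  1,  0,  1,  0,  0,  1]   # 1 = surgical recurrence, 0 = censored
print(kaplan_meier(times, events))          # stepwise recurrence-free survival estimates
```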
Instruction: Short recovery time after percutaneous liver biopsy: should we change our current practices? Abstracts: abstract_id: PUBMED:16234032 Short recovery time after percutaneous liver biopsy: should we change our current practices? Background & Aims: Percutaneous liver biopsy is the gold standard in the diagnosis and staging of a wide variety of hepatic disorders. Complications, post-procedure monitoring, and recovery time have limited the ability for liver biopsies to be performed in a busy gastroenterology community practice. The aim of this study was to determine whether ambulatory patients requiring percutaneous liver biopsy can be safely discharged after a short recovery time period. Methods: All ambulatory patients undergoing a percutaneous liver biopsy at the University of Florida between February 1995 and June 2004 were evaluated in this study. A 15-gauge Jamshidi needle was used after percussion (before February 2002) or ultrasound guidance (starting February 2002). Major complications were defined as those events that required either immediate or delayed hospitalization or resulted in death within 2 weeks after the liver biopsy. Results: Three thousand two hundred fourteen outpatient liver biopsies were performed at our institution from March 1995 to June 2004. During this time, our recovery time was gradually decreased from 6 hours before 1997 to 1 hour in 2002. The majority of the complications occurred within 1 hour of the observation period or within 24 hours after discharge. The major complication rate was ≤1.7%, regardless of the observation period. Conclusions: A shorter observation time after ambulatory percutaneous liver biopsy is safe and might facilitate the physician's ability to optimally utilize procedural space and ancillary staff in a busy ambulatory care unit. abstract_id: PUBMED:26738646 Safety of reducing the recovery time after percutaneous and laparoscopic liver biopsy. Background: Liver biopsy is the main diagnostic tool for the study of the liver, and as such, its inherent complications have been minimised as much as possible over the years, through the modification of several factors regarding its procedure, including post-biopsy recovery time. The aim of this study was to evaluate the safety in the reduction of post-liver biopsy recovery time. Material And Methods: A non-blinded, randomised clinical trial was conducted in the "Hermanos Ameijeiras" Hospital from November 2011 to October 2012, on 128 patients in order to assess safety when reducing post-biopsy recovery times. The patients were randomised into 2 groups. Group A was allowed a 6-hour recovery time, while Group B was allowed a 2-hour recovery time after liver biopsy. Complications were fully recorded. The Chi-squared test of homogeneity and Student's t test were used as appropriate in the statistical analysis; a significance level of 0.05 was set. Results: The main biopsy indication was elevated plasma transaminases. Pain in the puncture site was the most recurrent complication (67.2%), and the most serious complication was subcapsular liver haematoma in two cases (1.6%). There were no differences regarding the liver biopsy technique that could have caused complications in any group. Conclusions: There were no significant differences between 2 hours and 6 hours post-liver biopsy recovery time in terms of complications, so it is considered that after two hours the patient is incorporated more quickly into their activities, and the institution spends less material and human resources.
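The trial in PUBMED:26738646 compared complication rates between the 6-hour and 2-hour recovery arms with a chi-squared test of homogeneity. The following is a minimal sketch of that comparison; the 2x2 counts are hypothetical placeholders, not the trial's data.

```python
# Minimal sketch of a chi-squared test of homogeneity on a 2x2 complication table.
# Counts are hypothetical, not data from PUBMED:26738646.
from scipy.stats import chi2_contingency

#            complication  no complication
table = [[22, 42],    # group A: 6-hour recovery (hypothetical)
         [21, 43]]    # group B: 2-hour recovery (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p:.3f}")  # a large p-value means no detectable difference
```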
abstract_id: PUBMED:15017549 Functional liver recovery parallels autologous gut salvage in short bowel syndrome. Background/purpose: Parenteral nutrition (PN) is life saving in short bowel syndrome. However, long-term parenteral nutrition is frequently complicated by a syndrome of progressive cholestatic liver disease that is considered to be irreversible beyond the early stages of cholestasis, particularly in the presence of any degree of fibrosis in the liver. The purpose of this study was to examine apparent improvement in PN-associated liver dysfunction in a cohort of children with short bowel syndrome. Methods: A retrospective case-record review of all patients managed within a dedicated Intestinal Rehabilitation Program (IRP) identified 13 patients with short bowel who had PN-associated liver dysfunction, defined for this purpose as hyperbilirubinemia or an abnormal liver biopsy. Results: At referral, 12 of the 13 patients were exclusively on PN, and one was on 50% PN. At current follow-up, 3 patients have achieved complete enteral autonomy from PN, and 7 patients have had smaller decrements in PN requirements. Specific operative procedures to improve intestinal function were undertaken in 11 patients; 4 patients also underwent cholecystectomies with biliary irrigation at the time of intestinal reconstruction. The median highest bilirubin level in these 13 patients was 10.7 mg% (range, 3.2 to 24.5 mg%). Liver biopsy results indicated that 5 patients were cirrhotic, 3 had bridging fibrosis, and 4 had severe cholestasis or lesser degrees of fibrosis. Of 10 survivors in this series, 9 patients currently have a serum bilirubin less than 1 mg% with a median bilirubin in the group of 0.6 mg% (range, 0.3 to 6.4 mg%). Twelve of the 13 patients in this series were initially referred for liver-small bowel transplantation. Conclusions: This preliminary experience suggests that PN-dependent patients with advanced liver dysfunction in the setting of the short bowel syndrome may, in some instances, experience functional and biochemical liver recovery. The latter appears to parallel autologous gut salvage in most cases. As a corollary, the authors believe that even advanced degrees of liver dysfunction should not preclude attempts at autologous gut salvage in very carefully selected patients. Such a policy of "aggressive conservatism" may help avoid the need for liver/intestinal transplantation in some patients who appear to be not responding to PN. abstract_id: PUBMED:37637354 A Multi-Hospital Survey of Current Practices for Supporting Recovery From Sepsis. Sepsis survivors are at increased risk for morbidity and functional impairment. There are recommended practices to support recovery after sepsis, but it is unclear how often they are implemented. We sought to assess the current use of recovery-based practices across hospitals. Design: Electronic survey assessing the use of best practices for recovery from COVID-related and non-COVID-related sepsis. Questions included four-point Likert responses of "never" to "always/nearly always." Setting: Twenty-six veterans affairs hospitals with the highest (n = 13) and lowest (n = 13) risk-adjusted 90-day sepsis survival. Subjects: Inpatient and outpatient clinician leaders. Interventions: None. Measurements And Main Results: For each domain, we calculated the proportion of "always/nearly always" responses and mean Likert scores. We assessed for differences by hospital survival, COVID versus non-COVID sepsis, and sepsis case volume. 
Across eight domains of care, the proportion of "always/nearly always" responses ranged from 80.7% (social support) and 69.8% (medication management) to 22.5% (physical recovery and adaptation) and 0.0% (emotional support). Higher-survival hospitals more often performed screening for new symptoms/limitations (49.2% vs 35.1% "always/nearly always," p = 0.02) compared with lower-survival hospitals. There was no difference in "always/nearly always" responses for COVID-related versus non-COVID-related sepsis, but small differences in mean Likert score in four domains: care coordination (3.34 vs 3.48, p = 0.01), medication management (3.59 vs 3.65, p = 0.04), screening for new symptoms/limitations (3.13 vs 3.20, p = 0.02), and anticipatory guidance and education (2.97 vs 2.84, p < 0.001). Lower case volume hospitals more often performed care coordination (72.7% vs 43.8% "always/nearly always," p = 0.02), screening for new symptoms/limitations (60.6% vs 35.8%, p < 0.001), and social support (100% vs 74.2%, p = 0.01). Conclusions: Our findings show variable adoption of practices for sepsis recovery. Future work is needed to understand why some practice domains are employed more frequently than others, and how to facilitate practice implementation, particularly within rarely adopted domains such as emotional support. abstract_id: PUBMED:29159569 Determinants of short and long door-to-balloon time in current primary percutaneous coronary interventions. Primary percutaneous coronary interventions (PCI) have been developed to improve clinical outcomes in patients with ST-segment elevation myocardial infarction (STEMI). In primary PCI, the door-to-balloon time (DTBT) is closely associated with mortality and morbidity. The purpose of this study was to find determinants of short and long DTBT. From our hospital record, we included 214 STEMI patients and divided them into the short DTBT group (DTBT < 60 min, n = 60), the intermediate DTBT group (60 min ≤ DTBT ≤ 120 min, n = 121) and the long DTBT group (DTBT > 120 min, n = 33). In-hospital mortality was highest in the long DTBT group (24.2%), followed by the intermediate DTBT group (5.8%), and lowest in the short DTBT group (0%) (< 0.001). Transfers from local clinics or hospitals (OR 3.43, 95% CI 1.72-6.83, P < 0.001) were significantly associated with short DTBT, whereas Killip class 3 or 4 (vs. Killip class 1 or 2: OR 0.20, 95% CI 0.06-0.64, P = 0.007) was inversely associated with short DTBT in multivariate analysis. In conclusion, transfer from local clinics/hospitals was associated with short DTBT. Our results may suggest the current limitation of the ambulance system, which does not include a pre-hospital ECG system, in Japan. The development of a pre-hospital ECG system would be needed for better management in STEMI. abstract_id: PUBMED:27709857 Distribution of Diseases Causing Liver Function Test Abnormality in Children and Natural Recovery Time of the Abnormal Liver Function. Although liver function test abnormality is frequently noted in children, there is no report about the distribution of the etiology and natural recovery time of the abnormal liver function. From March 2005 to February 2014, clinical information was retrospectively collected from 559 children who had abnormal liver function and were hospitalized or visited the outpatient clinic at the Jeju National University Hospital. The etiology of abnormal liver function was classified into groups and the natural recovery time of abnormal liver function was analyzed.
The etiological groups of 559 patients included 'nonspecific hepatitis' in 42 (7.5%), 'infection' in 323 (57.8%), 'rheumatologic and autoimmune' in 66 (11.8%), 'nonalcoholic fatty liver disease' in 57 (10.2%), 'anatomic' in 12 (2.1%), 'toxic' in 13 (2.1%), 'metabolic' in 8 (1.4%), 'hematologic' in 7 (1.3%), 'hemodynamic' in 4 (0.7%), and 'others' in 27 (4.8%). Among the 'infection' group (57.8%), the 'viral infection in the respiratory tract' subgroup, which had 111 patients (19.8%), was the most common. The natural recovery time of the abnormal liver function was 27 days (median) in 'nonspecific hepatitis', 13 days (median) in 'viral respiratory tract disease', 16 days (median) in 'viral gastroenteritis', 42 days (median) in 'viral febrile illness', and 7 days (median) in 'Kawasaki disease'. The information on the natural recovery time of abnormal liver function may help the physician to perform good clinical consultation for patients and their parents. abstract_id: PUBMED:31728246 One-hour Recovery Time in Subjects Undergoing Percutaneous Liver Biopsy: A Quality Improvement and Patient Safety Project. Introduction: Liver biopsy is the gold standard in diagnosing, staging and guiding clinical management in liver disease. There are currently no standard guidelines for liver biopsy recovery time. The aims of this project are to study the safety of a one-hour recovery time after percutaneous liver biopsies and to measure the rate of complications and identify risk factors. Methods: A total of 500 consecutive subjects who underwent a percutaneous liver biopsy at a single-center teaching institution (Brooke Army Medical Center) were enrolled between December 2016 and October 2018. Biopsies were performed using a 14-gauge Bard® Monopty® core biopsy needle using bedside ultrasound. Complications were defined as: Pain level > 5 out of 10, hospitalizations, emergency department visits, or other. Major complications were defined as: hospitalizations and emergency department visits. Results: The only complication that required hospitalization was identified during the first hour of recovery. Liver biopsies of subjects with body mass index (BMI) ≥35 kg/m2 were not associated with more complications when compared to patients less than 30 kg/m2. Using a spinal needle (3.5'') to anesthetize the liver capsule in subjects with excess subcutaneous tissue did not result in more complications when compared to the standard 1.5'' needle. Only 3% of the patients who received lidocaine alone for the biopsy required post-procedure medications. Conclusion: Ultrasound-guided percutaneous liver biopsies, using a 14-gauge needle, were overall found to be safe. A one-hour post recovery period is adequate to identify all immediate major complications. abstract_id: PUBMED:38091982 Bifidobacterium longum promotes postoperative liver function recovery in patients with hepatocellular carcinoma. Timely liver function recovery (LFR) is crucial for postoperative hepatocellular carcinoma (HCC) patients. Here, we established the significance of LFR on patient long-term survival through retrospective and prospective cohorts and identified a key gut microbe, Bifidobacterium longum, depleted in patients with delayed recovery. Fecal microbiota transfer from HCC patients with delayed recovery to mice similarly impacted recovery time post hepatectomy. However, oral gavage of B. longum improved liver function and repair in these mice. In a clinical trial of HCC patients, orally administering a probiotic bacteria cocktail containing B.
longum reduced the rates of delayed recovery, shortened hospital stays, and improved overall 1-year survival. These benefits, attributed to diminished liver inflammation, reduced liver fibrosis, and hepatocyte proliferation, were associated with changes in key metabolic pathways, including 5-hydroxytryptamine, secondary bile acids, and short-chain fatty acids. Our findings propose that gut microbiota modulation can enhance LFR, thereby improving postoperative outcomes for HCC patients. abstract_id: PUBMED:33028240 Potential predictors for prognosis and postpartum recovery time of acute fatty liver of pregnancy. Background: Acute fatty liver of pregnancy (AFLP) is a potentially lethal condition of pregnant women with a high mortality rate. Potential predictors related to postpartum recovery time and prognostic factors of AFLP are still unclear. This study aimed to evaluate potential predictors for prognosis and postpartum recovery time of AFLP. Methods: We retrospectively analyzed the clinical data of 76 AFLP patients in our hospital from 2002 to 2017 and investigated potential predictors using univariate analysis and multivariate logistic regression analysis. Results: Hepatic encephalopathy (HE) was found to be associated with prognosis in AFLP patients (P = 0.005, OR = 26.844). The postpartum recovery time analysis showed that AFLP patients with age < 25 had the shortest recovery time, although the difference was not significant (P = 0.134, OR = 5.952). The postpartum recovery time of patients with liver failure (LF) was significantly prolonged compared to those without LF (P = 0.036, OR = 10.052). Cryoprecipitate and plasma infusion showed no significant effect on prognosis or recovery time. Artificial liver support therapy (ALST) had no effect on prognosis, but it might affect postpartum recovery time with no statistical significance (P = 0.128, OR = 5.470). Conclusion: HE is a potential predictor for prognosis of AFLP. LF is a potential predictor for postpartum recovery time. abstract_id: PUBMED:7542898 Changes in the use of percutaneous liver biopsy over time. Aim: To study, through the pathology diagnosis obtained, the current use of percutaneous liver biopsy and its change with time. Materials And Methods: Pathology diagnoses of 759 liver biopsies performed during the last 5 years are reviewed and compared with our previously published experience. Results: Neoplasms have decreased from 16% of all pathologic diagnoses obtained in 1961-81 to less than 1% in the last 5 years. In 1985-87, liver cirrhosis was the final diagnosis in 60% of all biopsies, while currently it is only 18%, with a steady absolute number of cirrhosis each year. The most remarkable change is the number of biopsy diagnoses of chronic hepatitis, which has increased from 19% in 1985-87 to 51% in the last 5 years. Ten percent of all biopsies were performed in HIV-positive patients; granulomatous hepatitis and tuberculosis were more prevalent in them. Conclusions: Percutaneous liver biopsy has been displaced for the diagnosis of liver cirrhosis and has been focussed on the diagnosis of chronic hepatitis. HIV-positive patients represent an important population for those who perform liver biopsy. Answer: The question of whether current practices for recovery time after percutaneous liver biopsy should be changed is addressed in several studies. The study conducted at the University of Florida evaluated the safety of discharging ambulatory patients after a short recovery period following percutaneous liver biopsy.
The recovery time was gradually decreased from 6 hours to 1 hour, and the major complication rate was found to be ≤1.7%, regardless of the observation period. Most complications occurred within 1 hour of the observation period or within 24 hours after discharge. This suggests that a shorter observation time is safe and could facilitate the utilization of procedural space and staff in busy ambulatory care units (PUBMED:16234032). Another study conducted at the "Hermanos Ameijeiras" Hospital compared a 6-hour recovery time with a 2-hour recovery time after liver biopsy. The results showed no significant differences in complications between the two recovery times, indicating that a 2-hour recovery time is safe and allows for quicker patient activity resumption and reduced resource use by the institution (PUBMED:26738646). A quality improvement and patient safety project at Brooke Army Medical Center also supported the safety of a one-hour recovery time after percutaneous liver biopsies. The study found that all immediate major complications were identified within the one-hour post-recovery period, suggesting that a one-hour recovery is adequate for identifying major complications (PUBMED:31728246). These studies collectively indicate that shortening the recovery time after percutaneous liver biopsy to as little as one hour can be safe and may improve the efficiency of healthcare delivery without increasing the risk of complications. Therefore, based on the evidence from these studies, there is a strong argument for changing current practices to allow for shorter recovery times after percutaneous liver biopsy.
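The University of Florida series reports a major complication rate of at most 1.7% across 3,214 outpatient biopsies (PUBMED:16234032). As a hedged illustration of how a confidence interval around such a rare-event proportion can be obtained, the sketch below computes a Wilson score interval; the event count is inferred from the quoted percentage for illustration only.

```python
# Minimal sketch of a Wilson score interval for a complication proportion.
# The event count (55/3214 ~ 1.7%) is an assumed, illustrative figure.
from math import sqrt
from scipy.stats import norm

def wilson_ci(events, n, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    p = events / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

lo, hi = wilson_ci(events=55, n=3214)
print(f"95% CI for the complication rate: {lo:.3%} to {hi:.3%}")
```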
Instruction: Does notch size predict ACL insertion site size? Abstracts: abstract_id: PUBMED:21468613 Does notch size predict ACL insertion site size? Purpose: The primary purpose of the current study is to identify a possible correlation between the femoral intercondylar notch size and the ACL insertion site size. The secondary purpose is to determine if there is a difference between male and female notch widths and insertion site sizes. Methods: For this study, 82 patients (41 men and 41 women) with an average age of 24.1 ± 10.0 years (range 13-58 years) undergoing anterior cruciate ligament (ACL) reconstruction were included. Arthroscopic measurements were taken at the base, middle, and top of the notch. Additionally, the notch height was measured at the highest point. The insertion sites of the ACL were identified, marked using electrocautery, and measured. The correlation between notch width and ACL insertion site size was calculated. In addition, differences between men and women with regard to the notch width and ACL insertion site size were determined. Results: Significant positive correlations were found between the notch widths and ACL insertion site measurements and ranged from 0.222 to 0.379 (P < 0.05). There were significant differences between men and women with regard to notch and insertion site size. Conclusion: The results of this study show that there is a significant, but weak correlation between the notch width and the ACL insertion site size. Women had a smaller notch and a smaller insertion site than men. This knowledge could influence pre-operative decision-making with regard to graft choice, single- or double-bundle surgery, and graft size. abstract_id: PUBMED:28342722 The effects of graft size and insertion site location during anterior cruciate ligament reconstruction on intercondylar notch impingement. Background: Intercondylar notch impingement is detrimental to the anterior cruciate ligament (ACL). Notchplasty is a preventative remodeling procedure performed on the intercondylar notch during ACL reconstruction (ACLR). This study investigates how ACL graft geometry and both tibial and femoral insertion site location may affect ACL-intercondylar notch interactions post ACLR. A range of ACL graft sizes are reported during ACLR, from six millimeters to 11 mm in diameter. Variability of three millimeters in ACL insertion site location is reported during ACLR. This study aims to determine the post-operative effects of minor variations in graft size and insertion location on intercondylar notch impingement. Methods: Several 3D finite element knee joint models were constructed using three ACL graft sizes and polar arrays of tibial and femoral insertion locations. Each model was subjected to flexion, tibial external rotation, and valgus motion. Impingement force and contact area between the ACL and intercondylar notch compared well with experimental cadaver data from literature. Results: A three millimeter anterior-lateral tibial insertion site shift of the maximum size ACL increased impingement force by 242.9%. A three millimeter anterior-proximal femoral insertion site shift of the maximum size ACL increased impingement by 346.2%. Simulated notchplasty of five millimeters eliminated all impingement for the simulation with the greatest impingement. For the kinematics applied, small differences in graft size and insertion site location led to large increases in impingement force and contact area. Conclusions: Minor surgical variations may increase ACL impingement.
The results indicate that notchplasty reduces impingement during ACLR. Notchplasty may help to improve ACLR success rates. abstract_id: PUBMED:31111185 The Chinese ACL injury population has a higher proportion of small ACL tibial insertion sizes than Western patients. Purpose: The study purpose is to characterize the sizes of the anterior cruciate ligament (ACL) insertion site and intercondylar notch in Chinese patients undergoing ACL surgery. The findings will provide a reference for individualized clinical treatment of ACL rupture. Methods: For this study, 137 patients (102 males, 35 females) with an average age of 30.3 ± 9.5 years (range 14-52 years) undergoing ACL reconstruction were included. The tibial ACL insertion site length and width and the intercondylar notch width were measured on MRI and arthroscopically using a ruler. Descriptive statistics of the patients, the distribution of the measurements and the differences between males and females were calculated. Results: The ACL tibial insertion size and intercondylar notch width in Chinese patients with ACL injuries, as obtained by MRI and intra-operatively, exhibited significant individual variability. The tibial ACL insertion site had a mean length of 13.5 ± 2.1 mm and width of 10.9 ± 1.5 mm as measured on MRI and a mean length of 13.3 ± 2.1 mm and width of 11.0 ± 1.6 mm as measured intra-operatively. The mean intercondylar notch width was 15.2 ± 2.4 mm on MRI and the mean length was 15.0 ± 2.5 mm intra-operatively. The inter-rater reliability between MRI and intra-operative measurements confirmed that the two methods were consistent. In 65.7% of individuals, the ACL tibial insertion length was < 14 mm. Conclusion: The distribution of tibial footprint size in Chinese patients is different from that in Western populations. There is a higher proportion of subjects with a tibial footprint size < 14 mm among Chinese patients with ACL injury. Therefore, great care should be taken when treating this population with the double-bundle technique or larger graft options. Level of evidence IV. abstract_id: PUBMED:31526322 ACL Size and Notch Width Between ACLR and Healthy Individuals: A Pilot Study. Background: Given the relatively high risk of contralateral anterior cruciate ligament (ACL) injury in patients with ACL reconstruction (ACLR), there is a need to understand intrinsic risk factors that may contribute to contralateral injury. Hypothesis: The ACLR group would have smaller ACL volume and a narrower femoral notch width than healthy individuals after accounting for relevant anthropometrics. Study Design: Cross-sectional study. Level Of Evidence: Level 3. Methods: Magnetic resonance imaging data of the left knee were obtained from uninjured (N = 11) and unilateral ACL-reconstructed (N = 10) active, female, collegiate-level recreational athletes. ACL volume was obtained from T2-weighted images. Femoral notch width and notch width index were measured from T1-weighted images. Independent-samples t tests examined differences in all measures between healthy and ACLR participants. Results: The ACLR group had a smaller notch width index (0.22 ± 0.02 vs 0.25 ± 0.01; P = 0.004; effect size, 1.41) and ACL volume (25.6 ± 4.0 vs 32.6 ± 8.2 mm3/(kg·m)-1; P = 0.025; effect size, 1.08) after normalizing by body size. Conclusion: Only after normalizing for relevant anthropometrics, the contralateral ACLR limb had smaller ACL size and narrower relative femoral notch size than healthy individuals.
These findings suggest that risk factor studies of ACL size and femoral notch size should account for relevant body size when determining their association with contralateral ACL injury. Clinical Relevance: The present study shows that the method of the identified intrinsic risk factors for contralateral ACL injury could be used in future clinical screening settings. abstract_id: PUBMED:35131449 Correlation between ACL size and dimensions of bony structures in the knee joint. It is difficult to measure the dimensions of the anterior cruciate ligament (ACL) in vivo, which makes choosing an individualized graft size for ACL reconstruction particularly troublesome. The morphology and function of porcine ACL have been reported to be similar to the native human ACL. This study aimed to identify bony morphological parameters on X-ray images that were significantly correlated with features of the native ACL. Anteroposterior X-ray images of 19 porcine knees were obtained. The width, height and area of the femoral notch, the widths of the femoral and tibial condyles and the width and area of the interspinal fossa of the tibia were measured. ACL length was measured using a caliper. The ACL was then resected and the outline of the bone insertion sites were marked and photographed for measuring the areas of the insertion sites. The excised ACL substance was scanned using X-ray microscopy and reconstructed to measure the medial-lateral (ML) and anterior-posterior (AP) widths, the long and short axes and cross-sectional area of the ACL isthmus. Linear regression analyses showed that the area of the interspinal fossa of the tibia was significantly correlated with the long axis and ML width of the ACL, and with the AP / ML width (p value = 0.038, 0.032 and 0.016 respectively). The width of the femoral condyle was significantly correlated with the area of the tibial insertion and the length and ML width of the ACL (p value = 0.013, 0.000 and 0.038 respectively). The area of the interspinal fossa of the tibia / notch width index (NWI) was significantly correlated with the long axis / short axis of the ACL (p value = 0.016). The width of the interspinal fossa of tibia / NWI was significantly correlated with the short axis of the ACL isthmus (p value = 0.009). However, the dimensions of the femoral intercondylar notch and the width of the interspinal fossa of the tibia were not significantly correlated with any of the ACL parameters. In conclusion, the dimension of the interspinal fossa of the tibia and the width of the femoral condyle measured from X-ray images might be used to estimate the shape and size of the ACL, which might be helpful for choosing a suitable graft size for ACL reconstruction. DATA STATEMENT: All data relevant to the study are included in the article. abstract_id: PUBMED:27541734 Area of the tibial insertion site of the anterior cruciate ligament as a predictor for graft size. Purpose: To determine the distribution of different sizes of the area of the tibial insertion site among the population and to evaluate whether preoperative MRI measurements correlate with intraoperative findings to enable preoperative planning of the required graft size to cover the tibial insertion site sufficiently. The hypothesis was that the area of the tibial insertion site varies among individuals and that there is good agreement between MRI and intraoperative measurements. Methods: Intraoperative measurements of the tibial insertion site were taken on 117 patients. 
Three measurements were taken in each plane, building a grid to cover the tibial insertion site as closely as possible. The mean of the three measurements in each plane was used for determination of the area. Two orthopaedic surgeons, who were blinded to the intraoperative measurements, took magnetic resonance imaging (MRI) measurements of the area of the tibial insertion site at two different time points. Results: The intraoperatively measured mean area was 123.8 ± 21.5 mm2. The mean area was 132.8 ± 15.7 mm2 (rater 1) and 136.7 ± 15.4 mm2 (rater 2) when determined using MRI. The size of the area was approximately normally distributed. Inter-rater (0.89; 95 % CI 0.84, 0.92; p < 0.001) and intrarater reliability (rater 1: 0.97; 95 % CI 0.95, 0.98; p < 0.001; rater 2: 0.95; 95 % CI 0.92, 0.96; p < 0.001) demonstrated excellent test-retest reliability. There was good agreement between MRI and intraoperative measurement of tibial insertion site area (ICCs rater 1: 0.80; 95 % CI 0.71, 0.87; p < 0.001; rater 2: 0.87; 95 % CI 0.81, 0.91; p < 0.001). Conclusion: The tibial insertion site varies in size and shape. Preoperative determination of the area using MRI is repeatable and enables planning of graft choice and size to optimally cover the tibial insertion site. Level Of Evidence: III. abstract_id: PUBMED:31552484 Correlation between the mid-substance cross-sectional anterior cruciate ligament size and the knee osseous morphology. Introduction: One of the final goals of anatomical anterior cruciate ligament (ACL) reconstruction is the restoration of native anatomy. It is essential to obtain more accurate predictors of mid-substance ACL size before surgery. However, to the best of our knowledge, no study has reported correlation between the mid-substance cross-sectional ACL size and the knee osseous morphology. The purpose of this study was to reveal correlation between the mid-substance cross-sectional ACL size and the knee osseous morphology. Materials And Methods: We used 39 non-paired formalin fixed Japanese cadaveric knees. All surrounding muscles, ligaments and soft tissues in the knee were resected. After soft tissue resection, the knee was flexed at 90°, a tangential plane of the femoral posterior condyles was marked, and the ACL was cut along this plane. Femoral ACL footprint size, Blumensaat's line length, lateral wall of the femoral intercondylar notch size, lateral wall of the femoral intercondylar notch height, tibial ACL footprint size, tibia plateau size, the whole anterior-posterior (AP) length, the medial and the lateral AP length of the tibia plateau, and the medial-lateral (ML) length of the tibia plateau were measured. Pearson's product-moment correlation was calculated to reveal correlation between the mid-substance cross-sectional ACL size and the measured parameters of the knee osseous morphology. Results: The measured mid-substance cross-sectional ACL size was 49.9 ± 16.3 mm2. The tibial ACL footprint size, the tibia plateau size, the whole AP length of the tibia plateau, the lateral AP length of the tibia plateau and the ML length of the tibia plateau were significantly correlated with the mid-substance cross-sectional ACL size. Conclusions: For clinical relevance, some tibial sides of the knee osseous morphology were significantly correlated with the mid-substance cross-sectional ACL size. It might be possible to predict the mid-substance ACL size by measuring these parameters.
abstract_id: PUBMED:22552621 ACL footprint size is correlated with the height and area of the lateral wall of femoral intercondylar notch. Purpose: The purpose of this study was to reveal the correlation between the size of the native anterior cruciate ligament (ACL) footprint and the size of the lateral wall of femoral intercondylar notch. Methods: Eighteen non-paired human cadaver knees were used. All soft tissues around the knee were resected except the ACL. The ACL was cut in the middle, and the femoral bone was cut at the most proximal point of the femoral notch. The ACL was carefully dissected, and the periphery of the ACL insertion site was outlined on both the femoral and tibial sides. An accurate lateral view of the femoral condyle and the tibial plateau was photographed with a digital camera, and the images were downloaded to a personal computer. The size of the femoral and tibial ACL footprints, length of Blumensaat's line, and the height and area of the lateral wall of femoral intercondylar notch were measured with Image J software (National Institutes of Health). Results: The sizes of the native femoral and tibial ACL footprints were 84 ± 25.3 and 144.7 ± 35.9 mm(2), respectively. The length of Blumensaat's line and the height and area of the lateral wall of femoral intercondylar notch were 29.4 ± 2.8 mm, 17.1 ± 2.7 mm, and 392.4 ± 86 mm(2), respectively. Both the height and the area of the lateral wall of femoral intercondylar notch were significantly correlated with the size of the ACL footprint on both the femoral and tibial sides. Conclusion: For clinical relevance, the height and area of the lateral wall of femoral intercondylar notch can be a predictor of native ACL size prior to surgery. However, the length of Blumensaat's line showed no significant correlation with native ACL size. abstract_id: PUBMED:33717966 The relationship of anterior cruciate ligament injuries with MRI based calculation of femoral notch width, notch width index, notch shape - A randomized control study. Background: ACL injuries are infamously known for disability in young adults and require surgical reconstruction. The need of time is to predict predisposing factors and prevent ACL injuries. The incidence of ACL injuries has been associated with various factors related to the morphology of distal femur and proximal tibia. Hence, the purpose of this study was to assess the relationship of morphology of distal femur by assessing Notch Width (NW), Notch Width Index (NWI), and Notch shape calculated preoperatively on MRI in association with an ACL tear. Methods: The following randomized control study had 60 patients enrolled with non-contact injury to the knee who were equally divided into 2 groups i.e. ACL injury group and control group. ACL group had patients who had MRI proven ACL tear along with clinical findings suggestive of ACL tear whereas control contained patients with intact ACL. Demographic data was collected and NW, NWI and Notch shape were determined on coronal sections of MRI sequences. Results: Positive correlation of ACL tear was seen with NW, BCW, NWI, NWP, and NWJ. Smaller Notch Width showed higher incidence of ACL tear (p = 0.019). The mean NWI in the injured and control knee was 0.31 ± 0.01 and 0.27 ± 0.01, respectively, and was statistically significant (p < 0.001).
An 'A'-shaped notch (60%) was commonly seen in the ACL tear group and a U-shaped notch (73.3%) was commonly seen in the control group. We found that the cut-off value of NWI for the prediction of ACL tear was 0.29, with a sensitivity of 90% and specificity of 86.7%. Conclusion: ACL injuries in the given population showed a higher incidence with a narrow femoral intercondylar notch, a smaller notch width index, and an 'A'-shaped femoral notch. If any of the above findings are present on MRI, it is important to counsel the subjects about their increased risk of ACL injuries and to take preventive measures. abstract_id: PUBMED:34988237 ACL Size, but Not Signal Intensity, Is Influenced by Sex, Body Size, and Knee Anatomy. Background: Little is known about sex-based differences in anterior cruciate ligament (ACL) tissue quality in vivo or the association of ACL size (ie, volume) and tissue quality (ie, normalized signal intensity on magnetic resonance imaging [MRI]) with knee anatomy. Hypothesis: We hypothesized that (1) women have smaller ACLs and greater ACL normalized signal intensity compared with men, and (2) ACL size and normalized signal intensity are associated with age, activity levels, body mass index (BMI), bicondylar width, intercondylar notch width, and posterior slope of the lateral tibial plateau. Study Design: Cross-sectional study; Level of evidence, 3. Methods: Knee MRI scans of 108 unique ACL-intact knees (19.7 ± 5.5 years, 62 women) were used to quantify the ACL signal intensity (normalized to cortical bone), ligament volume, mean cross-sectional area, and length. Independent t tests were used to compare the MRI-based ACL parameters between sexes. Univariate and multivariate linear regression analyses were used to investigate the associations between normalized signal intensity and size with age, activity levels, BMI, bicondylar width, notch width, and posterior slope of the lateral tibial plateau. Results: Compared with men, women had significantly smaller mean ACL volume (men vs women: 2028 ± 472 vs 1591 ± 405 mm3), cross-sectional area (49.4 ± 9.6 vs 41.5 ± 8.6 mm2), and length (40.8 ± 2.8 vs 38.1 ± 3.1 mm) (P < .001 for all), even after adjusting for BMI and bicondylar width. There was no difference in MRI signal intensity between men and women (1.15 ± 0.24 vs 1.12 ± 0.24, respectively; P = .555). BMI, bicondylar width, and intercondylar notch width were independently associated with a larger ACL (R2 > 0.16, P < .001). Younger age and steeper lateral tibial slope were independently associated with shorter ACL length (R2 > 0.03, P < .04). The combination of BMI and bicondylar width was predictive of ACL volume and mean cross-sectional area (R2 < 0.3). The combination of BMI, bicondylar width, and lateral tibial slope was predictive of ACL length (R2 = 0.39). Neither quantified patient characteristics nor anatomic variables were associated with signal intensity. Conclusion: Men had larger ACLs compared with women even after adjusting for BMI and knee size (bicondylar width). No sex difference was observed in signal intensity, suggesting no difference in tissue quality. The association of the intercondylar notch width and lateral tibial slope with ACL size suggests that the influence of these anatomic features on ACL injury risk may be partially explained by their effect on ACL size. Registration: NCT02292004 and NCT02664545 (ClinicalTrials.gov identifier).
Answer: The correlation between femoral intercondylar notch size and the ACL insertion site size has been investigated in several studies. The results indicate that there is a significant, albeit weak, correlation between the notch width and the ACL insertion site size (PUBMED:21468613). This suggests that while notch size may provide some predictive value regarding ACL insertion site size, it is not a strong predictor on its own. Additionally, the study by PUBMED:31111185 found that the Chinese ACL injury population has a higher proportion of small ACL tibial insertion sizes compared to Western patients, indicating that there may be population-specific differences that could affect the predictability of ACL insertion site size based on notch size. Furthermore, the study by PUBMED:22552621 found that the size of the ACL footprint is correlated with the height and area of the lateral wall of the femoral intercondylar notch, suggesting that certain aspects of notch morphology may be more predictive of ACL size than others. However, it is important to note that other factors, such as graft size and insertion site location, can affect outcomes like intercondylar notch impingement after ACL reconstruction (PUBMED:28342722). Therefore, while there may be a correlation between notch size and ACL insertion site size, it is one of many factors that should be considered in pre-operative planning and decision-making. In conclusion, notch size may provide some indication of ACL insertion site size, but it is not a definitive predictor, and individual variability, as well as other anatomical and surgical factors, must be taken into account.
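A minimal numeric sketch of why the answer above calls the notch-insertion relationship significant but weak: it converts the correlation range reported in PUBMED:21468613 (r = 0.222 to 0.379) into explained variance. The Python snippet is illustrative only and uses no data beyond those two published coefficients.

# Illustrative only: squares the correlation coefficients reported in PUBMED:21468613
# to show how little of the insertion-site variance notch width explains on its own.
reported_correlations = [0.222, 0.379]  # range of r between notch width and insertion site size

for r in reported_correlations:
    r_squared = r ** 2
    print(f"r = {r:.3f} -> r^2 = {r_squared:.3f} ({r_squared * 100:.1f}% of variance explained)")

Even at the upper end of the reported range, notch width accounts for roughly 14% of the variance in insertion-site size, which is consistent with the conclusion that it cannot serve as a stand-alone predictor.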
Instruction: Digital ischemia in a gardener: is rose cutting a vascular risk factor? Abstracts: abstract_id: PUBMED:12483737 Cigarette smoking as a significant risk factor for digital vascular disease in patients with systemic sclerosis. Objective: Patients with systemic sclerosis (SSc) are at high risk for digital vascular complications, including amputation and gangrene. Cigarette smoking is an important risk factor for vascular disease in the general population. We investigated the influence of cigarette smoking on digital ischemia in patients with SSc. Methods: We studied 101 patients with SSc (87 women and 14 men, median age 53 years, median disease duration 13 years). Smoking history was defined in terms of current smoking status and total number of pack-years. Digital ischemic events were classified as debridement, hospital admission for intravenous (IV) administration of vasodilators, and digital amputation. The influence of smoking on digital ischemic events was examined using logistic regression, adjusting for age, sex, and disease duration. Results are expressed as the odds ratio (OR) and 95% confidence interval (95% CI). Results: Of the 101 patients, 21 (21%) were current smokers, 37 (37%) were ex-smokers, and 43 (43%) had never smoked. After adjusting for age, sex, and disease duration, current smokers were significantly more likely than never-smokers to have had debridement (OR 4.5, 95% CI 1.1-18.3) or admission for IV vasodilators (OR 3.8, 95% CI 1.1-12.9). Patients smoking at higher intensity were more likely to require admission for IV vasodilators. Conclusion: Among patients with SSc, current smokers are 3-4 times more likely than never-smokers to incur digital vascular complications. Resources should be directed to supporting smoking cessation in patients with SSc. abstract_id: PUBMED:25449678 A PRISMA-driven systematic review for predictive risk factors of digital ulcers in systemic sclerosis patients. Vasculopathy has a major role in the pathogenesis and tissue injury in systemic sclerosis (SSc). Raynaud's phenomenon (RP) is frequently the first clinical manifestation of SSc preceding by years other clinical manifestations. RP in SSc patients is frequent, often very severe and long lasting. The repeated bouts of RP lead to prolonged digital ischemia that may progress to digital ulceration or in extreme to critical digital ischemia with gangrene. Digital ulcers (DU) are a true burden for all patients. They are very painful, with a long and slow healing course, have high risk of infection and are extremely disabling. In adults, up to 40-50% of patients will experience at least one DU in the course of the disease and of these 31-71% will have recurrent ulcers. In order to try to identify predictive risk factors for DU in SSc patients, an extensive literature review was conducted, according to the guidelines proposed at the PRISMA statement. MEDLINE database (PubMed) and Thomson Reuters Web of Knowledge platform were searched for articles published in peer-reviewed journals since 1990 with the last search run on June 2014 and published in English language. The keyword search terms included: digital ulcer/s, systemic sclerosis, scleroderma, digital scars, ischemic complications, autoantibodies, biomarkers, endothelium dysfunction, endothelin-1, vascular endothelial growth factor (VEGF), endostatin, ADMA, endoglin, angiostatin, and capillaroscopy. 
The following criteria were included: (1) cohorts of SSc patients including patients with DU, (2) endothelium dysfunction and angiogenesis biomarkers compared with a healthy control group, (3) autoantibodies, capillary morphology and distribution, endothelium dysfunction and angiogenesis biomarkers compared between patients with and without digital ulcers, (4) detailed description of the statistical methods used to conclude for predictive factors, and (5) English language. Our search provided a total of 376 citations. Of these, 297 studies were discarded for not meeting the criteria proposed. abstract_id: PUBMED:18841520 Digital ischemia in a gardener: is rose cutting a vascular risk factor? History And Admission Findings: A 48-year-old right-handed gardener presented with a white discoloration and numbness of her left ring finger. She reported cutting her roses without protective gloves, so repetitive scratch lesions occurred, especially on her left hand. Investigations: On examination the pulse of the left radial artery was absent. Allen's test showed a dominant ulnar supply of the palmar arch. Duplex ultrasound demonstrated an occluded aneurysm of the distal portion of the left radial artery. Furthermore, there were occlusions of the first and fourth digital artery on MR angiography, probably due to distal emboli of the radial aneurysm. Treatment And Course: After exclusion of systemic disease or vasculitis, repetitive trauma due to rose thorns was presumed to be the cause of the radial aneurysm. Anticoagulation therapy was initiated and infusion of prostaglandin E1 was performed over 7 days. The digital ischemia resolved within a few days. Therefore, a surgical procedure was not recommended. Conclusions: In the presence of a radial artery aneurysm an occupational and recreational trauma history should be sought. abstract_id: PUBMED:26242908 Predictive value of vascular disease biomarkers for digital ulcers in systemic sclerosis patients. Objectives: To investigate the role of endothelial dysfunction and angiogenesis vascular biomarkers as risk factors and their predictive value for digital ulcers in systemic sclerosis patients. Methods: Endothelin-1 (ET-1), asymmetric dimethylarginine (ADMA), vascular endothelial growth factor (VEGF), endostatin and endoglin were measured in an observational prospective cohort of 77 SSc patients. The primary outcome was the occurrence of one or more new ischaemic digital ulcers during a planned 3-year follow-up. Results: After the 3-year follow-up, 40 patients developed new digital ulcers. Logistic regression confirmed VEGF (HR 1.128, 95% CI 1.010-1.260, p=0.033) and ADMA (HR 0.995, 95% CI 0.991-0.998, p=0.006) as independent predictors of new digital ulcers. Patients with serum levels of ET-1>11.9 pmol/ml (p<0.001) and VEGF<422.47 pg/ml (p=0.028) had significantly more DU in the 3-year follow-up. Although not significant, a trend towards increased serum levels of endoglin>4.215 ng/ml (p=0.053) was associated with a new DU episode. No predictive serum value was found for ADMA (p=0.075) and endostatin (p=0.130). Conclusions: Endothelial dysfunction and angiogenic vascular biomarkers have an important role in the underlying microvascular disease and in its progression in systemic sclerosis. Increased serum levels of ET-1, ADMA and VEGF are strong predictors of severe microangiopathy complications, namely ischaemic digital ulcers.
abstract_id: PUBMED:28645730 The relation of digital vascular function to cardiovascular risk factors in African-Americans using digital tonometry: the Jackson Heart Study. Digital vascular tone and function, as measured by peripheral arterial tonometry (PAT), are associated with cardiovascular risk and events in non-Hispanic whites. There are limited data on relations between PAT and cardiovascular risk in African-Americans. PAT was performed on a subset of Jackson Heart Study participants using a fingertip tonometry device. Resting digital vascular tone was assessed as baseline pulse amplitude. Hyperemic vascular response to 5 minutes of ischemia was expressed as the PAT ratio (hyperemic/baseline amplitude ratio). Peripheral augmentation index (AI), a measure of relative wave reflection, also was estimated. The association of baseline pulse amplitude (PA), PAT ratio, and AI to risk factors was assessed using stepwise multivariable models. The study sample consisted of 837 participants from the Jackson Heart Study (mean age, 54 ± 11 years; 61% women). In stepwise multivariable regression models, baseline pulse amplitude was related to male sex, body mass index, and diastolic blood pressure (BP), accounting for 16% of the total variability of the baseline pulse amplitude. Age, male sex, systolic BP, diastolic BP, antihypertensive medication, and prevalent cardiovascular disease contributed to 11% of the total variability of the PAT ratio. Risk factors (primarily age, sex, and heart rate) explained 47% of the total variability of the AI. We confirmed in our cohort of African-Americans, a significant relation between digital vascular tone and function measured by PAT and multiple traditional cardiovascular risk factors. Further studies are warranted to investigate the utility of these measurements in predicting clinical outcomes in African-Americans. abstract_id: PUBMED:23929745 Serum brain-derived neurotrophic factor and vascular endothelial growth factor levels are associated with risk of stroke and vascular brain injury: Framingham Study. Background And Purpose: Brain-derived neurotrophic factor (BDNF), a major neurotrophin and vascular endothelial growth factor (VEGF) have a documented role in neurogenesis, angiogenesis, and neuronal survival. In animal experiments, they impact infarct size and functional motor recovery after an ischemic brain lesion. We sought to examine the association of serum BDNF and VEGF with the risk of clinical stroke or subclinical vascular brain injury in a community-based sample. Methods: In 3440 Framingham Study participants (mean age, 65±11 years; 56% women) who were free of stroke/transient ischemic attack (TIA), we related baseline BDNF and logVEGF to risk of incident stroke/TIA. In a subsample with brain MRI and with neuropsychological tests available (n=1863 and 2104, respectively; mean age, 61±9 years, 55% women, in each), we related baseline BDNF and logVEGF to log-white matter hyperintensity volume on brain MRI, and to visuospatial memory and executive function tests. Results: During a median follow-up of 10 years, 193 participants experienced incident stroke/TIA. In multivariable analyses adjusted for age, sex, and traditional stroke risk factors, lower BDNF and higher logVEGF levels were associated with an increased risk of incident stroke/TIA (hazard ratio comparing BDNF Q1 versus Q2-Q4, 1.47; 95% confidence interval, 1.09-2.00; P=0.012 and hazard ratio/SD increase in logVEGF, 1.21; 95% confidence interval, 1.04-1.40; P=0.012). 
Persons with higher BDNF levels had less log-white matter hyperintensity volume (β±SE=-0.05±0.02; P=0.025), and better visual memory (β±SE=0.18±0.07; P=0.005). Conclusions: Lower serum BDNF and higher VEGF concentrations were associated with increased risk of incident stroke/TIA. Higher levels of BDNF were also associated with less white matter hyperintensity and better visual memory. Our findings suggest that circulating BDNF and VEGF levels modify risk of clinical and subclinical vascular brain injury. abstract_id: PUBMED:14975549 Vascular endothelial growth factor: the link between cardiovascular risk factors and microalbuminuria? Background: Microalbuminuria, i.e. slightly elevated urinary albumin excretion, is associated with increased cardiovascular risk factors and cardiovascular morbidity in the general population. Microalbuminuria has been proposed to indicate increased endothelial permeability. Unknown are the mechanisms underlying this increased vascular permeability. Vascular endothelial growth factor (VEGF), also known as vascular permeability factor, increases endothelial permeability. We hypothesised that plasma VEGF levels may be associated with microalbuminuria in a large sample of the general population. Methods: Out of a large sample of the general population, we studied 189 control subjects (urinary albumin excretion (UAE): 0-30 mg/24 h) and 194 microalbuminuric subjects (UAE: 30-300 mg/24 h), matched for age, sex and the presence of ischemia on the electrocardiogram. Results: Subjects with microalbuminuria had significantly higher plasma levels of VEGF (p<0.05). The correlations between plasma levels of VEGF and systolic and diastolic blood pressure, cholesterol, glucose, diabetes and body mass index were statistically significant. Using logistic regression analysis, microalbuminuria was significantly associated with VEGF (odds ratio 1.62; 95% confidence interval: 1.15-2.27; p<0.01). This association was dependent on cardiovascular risk factors. Conclusion: This study suggests a relation between increased plasma VEGF levels and subsequent occurrence of microalbuminuria. abstract_id: PUBMED:26523153 Raynaud's phenomenon and digital ischemia: a practical approach to risk stratification, diagnosis and management. Digital ischemia is a painful and often disfiguring event. Such an ischemic event often leads to tissue loss and can significantly affect the patient's quality of life. Digital ischemia can be secondary to a vasculopathy, vasculitis, embolic disease, trauma, or extrinsic vascular compression. It is an especially serious complication in patients with scleroderma. Risk stratification of patients with scleroderma at risk for digital ischemia is now possible with clinical assessment and autoantibody profiles. Because there are a variety of conditions that lead to digital ischemia, it is important to understand the pathophysiology underlying each ischemic presentation in order to target therapy appropriately. Significant progress has been made in the last two decades in defining the pathophysiological processes leading to digital ischemia in rheumatic diseases. In this article we review the risk stratification, diagnosis, and management of patients with digital ischemia and provide a practical approach to therapy, particularly in scleroderma. abstract_id: PUBMED:28707510 Outcomes of sympathectomy and vascular bypass for digital ischaemia in connective tissue disorders.
All patients (36 hands) with connective tissue disorders who underwent periarterial sympathectomy of the hand alone or in conjunction with vascular bypass at our institution between 1995-2013 were reviewed. The durable resolution of ulcers was significantly higher in patients treated by periarterial sympathectomy and bypass than in patients treated by periarterial sympathectomy alone. Although there were more digital amputations in patients treated by periarterial sympathectomy alone, the difference was not statistically significant. Vascular bypass in conjunction with sympathectomy may be better than sympathectomy alone in patients with digital ischaemia related to connective tissue disorders. Level Of Evidence: IV. abstract_id: PUBMED:11563889 The effect of vascular endothelial growth factor on angiogenesis: an experimental study. Objective: to evaluate the effects of exogenous vascular endothelial growth factor (VEGF) on angiogenesis in a rabbit model of persistent hind limb ischaemia. Materials And Methods: ischaemia was induced in the hind limbs of 42 New Zealand white rabbits divided into six groups, each of 7 animals. Group 1a and 1b received intramuscular injections of 1 and 2 mg VEGF/day, respectively, into the ischaemic hind limb for 10 days beginning on postoperative 11th day, and group 1c received IM injections of saline only. Group 2a and 2b received similar regimen of VEGF, but administered intra-arterially. Group 2c served as controls. Perfusion of the ischaemic limb was evaluated by thigh blood pressure and thigh circumference at 10, 25 and 40 days following limb ischaemia in all animals and by digital subtraction angiography, perfusion scans, histological examination of capillary density in 2 animals from each group. Results: thigh pressure index and thigh circumference improved significantly in the VEGF treated animals (Groups 1a,b and 2a,b). Collateral formation, as assessed by angiography, scintigraphy and by histological examination, indicated marked formation of collaterals in the VEGF treated animals as compared with the controls. This was most pronounced in groups receiving the highest dose of VEGF. Conclusion: these data suggest that VEGF promotes angiogenesis, that the route of administration is unimportant, but that a dose-response relationship is present in this experimental ischaemic hind limb model. Answer: The case of digital ischemia in a gardener who presented with white discoloration and numbness of her left ring finger after cutting roses without protection gloves suggests that repetitive trauma from rose thorns could be a vascular risk factor. The gardener had an occluded aneurysm of the distal portion of the left radial artery and occlusions of the first and fourth digital artery on MR angiography, likely due to distal emboli from the radial aneurysm. The absence of systemic disease or vasculitis led to the conclusion that repetitive trauma from rose thorns was the cause of the radial aneurysm. The gardener was treated with anticoagulation therapy and infusion of prostaglandin E1, which resolved the digital ischemia within a few days, thus avoiding the need for surgical intervention. This case highlights the importance of considering occupational and recreational trauma history in the presence of a radial artery aneurysm (PUBMED:18841520).
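As a worked illustration of the effect sizes quoted in PUBMED:12483737 (for example, debridement in current smokers versus never-smokers, OR 4.5, 95% CI 1.1-18.3), the sketch below shows how an odds ratio and its Wald 95% confidence interval are derived from a 2x2 table. The counts are hypothetical, and the published estimates were additionally adjusted for age, sex and disease duration in a logistic regression, so this only illustrates the arithmetic rather than reproducing the study.

# Hypothetical 2x2 table; not the study's data. Rows: exposure, columns: outcome.
import math

a, b = 6, 15   # current smokers: debridement yes / no (assumed counts)
c, d = 4, 39   # never-smokers:   debridement yes / no (assumed counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Wald standard error on the log scale
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f}, 95% CI {ci_low:.1f}-{ci_high:.1f}")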
Instruction: Auditory sensory dysfunction in schizophrenia: imprecision or distractibility? Abstracts: abstract_id: PUBMED:11115328 Auditory sensory dysfunction in schizophrenia: imprecision or distractibility? Background: Schizophrenia is associated with large effect-size deficits in auditory sensory processing, as reflected in impaired delayed-tone matching performance. The deficit may reflect either impaired sensory precision, which would be indicative of neural dysfunction within auditory sensory (temporal) regions, or increased distractibility, which would be indicative of impaired prefrontal function. The present study evaluates susceptibility of schizophrenic subjects to same-modality distraction to determine whether patients fit a "bitemporal" or "prefrontal" model of sensory dysfunction. Methods: Tone-matching ability was evaluated in 15 first-episode patients, 18 outpatients with chronic illness, and 21 patients in long-term residential care, relative to 32 nonpsychiatric controls of a similar age. A staircase procedure determined individual thresholds for attaining criterion level correct performance. Results: Tone-matching thresholds in the absence of distractors were significantly elevated in patients in long-term residential care relative to all other groups (P<.001). The effect size (d) of the difference relative to controls was extremely large (SD, 1.95). Schizophrenic patients, even those with elevated tone-matching thresholds, showed no increased susceptibility to auditory distraction (P = .42). Deficits in tone-matching performance in subjects with chronic illness could not be attributed to medication status or level of symptoms. Conclusions: These findings suggest that sensory processing dysfunction in schizophrenia is particularly severe in a subgroup of patients who can be considered poor-outcome based on their need for long-term residential treatment. Furthermore, the absence of increased auditory distractibility argues against prefrontal dysfunction as an origin for auditory sensory imprecision in schizophrenia. Arch Gen Psychiatry. 2000;57:1149-1155. abstract_id: PUBMED:28109666 Diminished auditory sensory gating during active auditory verbal hallucinations. Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS).
The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. abstract_id: PUBMED:34285443 Triple model of auditory sensory processing: a novel gating stream directly links primary auditory areas to executive prefrontal cortex. The generally accepted model of sensory processing of visual and auditory stimuli assumes two major parallel processing streams, ventral and dorsal, which comprise functionally and anatomically distinct but interacting processes in which the ventral stream supports stimulus identification, and the dorsal stream is involved in recognizing the stimulus spatial location and sensori-motor integration functions. However, recent studies suggest the existence of a third, very fast sensory processing pathway, a gating stream that directly links the primary auditory cortices to the executive prefrontal cortex within the first 50 milliseconds after presentation of a stimulus, bypassing hierarchical structure of the ventral and dorsal pathways. The gating stream propagates the sensory gating phenomenon, which serves as a basic protective mechanism preventing irrelevant, repeated information from recurrent sensory processing. The goal of the present paper is to introduce the novel 'three-stream' model of auditory processing, including the new fast sensory processing stream, i.e. gating stream, alongside the well-affirmed dorsal and ventral sensory processing pathways. The impairments in sensory processing along the gating stream have been found to be strongly involved in the pathophysiological sensory processing in Alzheimer's disease and could be the underlying issue in numerous neuropsychiatric disorders and diseases that are linked to the pathological sensory gating inhibition, such as schizophrenia, post-traumatic stress disorder, bipolar disorder or attention deficit hyperactivity disorder. abstract_id: PUBMED:24613239 Auditory sensory processing deficits in sensory gating and mismatch negativity-like responses in the social isolation rat model of schizophrenia. Patients with schizophrenia exhibit disturbances in information processing. These disturbances can be investigated with different paradigms of auditory event related potentials (ERP), such as sensory gating in a double click paradigm (P50 suppression) and the mismatch negativity (MMN) component in an auditory oddball paradigm. The aim of the current study was to test if rats subjected to social isolation, which is believed to induce some changes that mimic features of schizophrenia, display alterations in sensory gating and MMN-like response. Male Lister-Hooded rats were separated into two groups; one group socially isolated (SI) for 8 weeks and one group housed (GH). Both groups were then tested in a double click sensory gating paradigm and an auditory oddball (MMN-like) paradigm. It was observed that the SI animals showed reduced sensory gating of the cortical N1 amplitude. Furthermore, the SI animals showed significant reduction in cortical MMN-like response compared with the GH animals.
No deficits in sensory gating or MMN-like response were observed in the hippocampus (CA3) of the SI animals compared with GH animals. In conclusion, the change in sensory gating of the N1 amplitude supports previous findings in SI rats, and the reduced MMN-like response is similar to the deficits of MMN seen in patients with schizophrenia. Since reduced auditory MMN amplitude is believed to be more selectively associated with schizophrenia than other measures of sensory gating deficits, the current study supports the face validity of the SI-reared rat model for schizophrenia. abstract_id: PUBMED:33491212 Oscillatory connectivity as a mechanism of auditory sensory gating and its disruption in schizophrenia. Although innumerable studies using an auditory sensory gating paradigm have confirmed that individuals with schizophrenia (SZ) show less reduction in brain response to the second in a pair of clicks, this large literature has not yielded consensus on the circuit(s) responsible for gating nor for the gating difference in SZ. Clinically stable adult inpatients (N = 157) and matched community participants (N = 90) participated in a standard auditory sensory gating protocol. Responses to paired clicks were quantified as peak-to-peak amplitude from a response at approximately 50 ms to a response at approximately 100 ms in MEG-derived source waveforms. For bilateral sources in each of four regions near Heschl's gyrus, the gating ratio was computed as the response to the second stimulus divided by the response to the first stimulus. Spectrally resolved Granger causality quantified effective connectivity among regions manifested in alpha-band oscillatory coupling before and during stimulation. Poorer sensory gating localized to A1 in SZ than in controls confirmed previous results, here found in adjacent brain regions as well. Spontaneous, stimulus-independent effective connectivity within the hemisphere from angular gyrus to portions of the superior temporal gyrus was lower in SZ and correlated with gating ratio. Significant involvement of frontal and subcortical brain regions previously proposed as contributing to the auditory gating abnormality was not found. Findings point to endogenous connectivity evident in a sequence of activity from angular gyrus to portions of superior temporal gyrus as a mechanism contributing to normal and abnormal gating in SZ and potentially to sensory and cognitive symptoms. abstract_id: PUBMED:35348805 N-methyl-D-aspartate receptor antagonism impairs sensory gating in the auditory cortex in response to speech stimuli. Deficits in early auditory sensory processing in schizophrenia have been linked to N-methyl-D-aspartate receptor (NMDAR) hypofunction, but the role of NMDARs in aberrant auditory sensory gating (SG) in this disorder is unclear. This study, conducted in 22 healthy humans, examined the acute effects of a subanesthetic dose of the NMDAR antagonist ketamine on SG as measured electrophysiologically by suppression of the P50 event-related potential (ERP) to the second (S2) relative to the first (S1) of two closely paired (500 ms) identical speech stimuli. Ketamine induced impairment in SG indices at sensor (scalp)-level and at source-level in the auditory cortex (as assessed with eLORETA). Together with preliminary evidence of modest positive associations between impaired gating and dissociative symptoms elicited by ketamine, these results tentatively support a model of NMDAR hypofunction underlying disturbances in auditory SG in schizophrenia.
abstract_id: PUBMED:38246250 Sensory attenuation deficit and auditory hallucinations in schizophrenia: a causal mechanism or a risk factor? Evidence from meta-analyses on N1 ERP-EEG component. Background: Sensory attenuation (SA), the dampened perception of self-generated sensory information, is typically associated with reduced ERP signals, like for the N1 component of auditory ERPs. SA, together with efficient monitoring of intentions and actions, should facilitate the distinction between self-generated and externally-generated sensory events, optimizing the interaction with the world. According to many, SA is deficient in schizophrenia. The question arises whether altered SA reflects a sufficient mechanism to explain positive symptoms like auditory hallucinations. A systematic association of reduced SA in hallucinating patients would support this hypothesis. Methods: We conducted a series of meta-analyses on 15 studies on auditory SA in which the N1 component of ERP-EEG signals was measured during talking (self-generated sensory signals condition) or when listening to pre-recorded vocalizations (externally-generated sensory signals condition). Results: We found that schizophrenic individuals do show some auditory SA, as their N1 signal is significantly attenuated in talking conditions with respect to listening conditions. Yet, the magnitude of such attenuation is reduced with respect to healthy controls. This phenomenon generalizes independently from the stage of the disease, from the severity of positive symptoms, and regardless of whether patients have auditory hallucinations or not. Conclusions: These findings suggest that a reduced SA cannot be a sufficient mechanism for explaining positive symptoms like auditory hallucinations in schizophrenia. As a reduced SA was also present in subjects at risk of schizophrenia, reduced SA may represent a risk factor for the disorder. We discuss the implications of these results for clinical-cognitive models of schizophrenia. abstract_id: PUBMED:31444224 Large-Scale Networks for Auditory Sensory Gating in the Awake Mouse. The amplitude of the brain response to a repeated auditory stimulus is diminished as compared to the response to the first tone (T1) for interstimulus intervals (ISI) lasting up to hundreds of milliseconds. This adaptation process, called auditory sensory gating (ASG), is altered in various psychiatric diseases including schizophrenia and is classically studied by focusing on early evoked cortical responses to the second tone (T2) using 500-ms ISI. However, mechanisms underlying ASG are still not well-understood. We investigated ASG in awake mice from the brainstem to cortex at variable ISIs (125-2000 ms) using high-density EEG and intracerebral recordings. While ASG decreases at longer ISIs, it is still present at durations (500-2000 ms) far beyond the time during which brain responses to T1 could still be detected. T1 induces a sequence of specific stable scalp EEG topographies that correspond to the successive activation of distinct neural networks lasting about 350 ms. These brain states remain unaltered if T2 is presented during this period, although T2 is processed by the brain, suggesting that ongoing networks of brain activity are active for longer than early evoked-potentials and are not overwritten by an upcoming new stimulus. 
Intracerebral recordings demonstrate that ASG is already present at the level of ventral cochlear nucleus (vCN) and inferior colliculus and is amplified across the hierarchy in bottom-up direction. This study uncovers the extended stability of sensory-evoked brain states and long duration of ASG, and sheds light on generators of ASG and possible interactions between bottom-up and top-down mechanisms. abstract_id: PUBMED:32733293 Deficits in Auditory and Visual Sensory Discrimination Reflect a Genetic Liability for Psychosis and Predict Disruptions in Global Cognitive Functioning. Sensory discrimination thresholds (i.e., the briefest stimulus that can be accurately perceived) can be measured using tablet-based auditory and visual sweep paradigms. These basic sensory functions have been found to be diminished in patients with psychosis. However, the extent to which worse sensory discrimination characterizes genetic liability for psychosis, and whether it is related to clinical symptomatology and community functioning remains unknown. In the current study we compared patients with psychosis (PSY; N=76), their first-degree biological relatives (REL; N=44), and groups of healthy controls (CON; N=13 auditory and visual/N=275 auditory/N=267 visual) on measures of auditory and visual sensory discrimination, and examined relationships with a battery of symptom, cognitive, and functioning measures. Sound sweep thresholds differed among the PSY, REL, and CON groups, driven by higher thresholds in the PSY compared to CON group, with the REL group showing intermediate thresholds. Visual thresholds also differed among the three groups, driven by higher thresholds in the REL versus CON group, and no significant differences between the REL and PSY groups. Across groups and among patients, higher thresholds (poorer discrimination) for both sound and visual sweeps strongly correlated with lower global cognitive scores. We conclude that low-level auditory and visual sensory discrimination deficits in psychosis may reflect genetic liability for psychotic illness. Critically, these deficits relate to global cognitive disruptions that are a hallmark of psychotic illnesses such as schizophrenia. abstract_id: PUBMED:25928147 State-dependent changes in auditory sensory gating in different cortical areas in rats. Sensory gating is a process in which the brain's response to a repetitive stimulus is attenuated; it is thought to contribute to information processing by enabling organisms to filter extraneous sensory inputs from the environment. To date, sensory gating has typically been used to determine whether brain function is impaired, such as in individuals with schizophrenia or addiction. In healthy subjects, sensory gating is sensitive to a subject's behavioral state, such as acute stress and attention. The cortical response to sensory stimulation significantly decreases during sleep; however, information processing continues throughout sleep, and an auditory evoked potential (AEP) can be elicited by sound. It is not known whether sensory gating changes during sleep. Sleep is a non-uniform process in the whole brain with regional differences in neural activities. Thus, another question arises concerning whether sensory gating changes are uniform in different brain areas from waking to sleep. To address these questions, we used the sound stimuli of a Conditioning-testing paradigm to examine sensory gating during waking, rapid eye movement (REM) sleep and Non-REM (NREM) sleep in different cortical areas in rats. 
We demonstrated the following: 1. Auditory sensory gating was affected by vigilant states in the frontal and parietal areas but not in the occipital areas. 2. Auditory sensory gating decreased in NREM sleep but not REM sleep from waking in the frontal and parietal areas. 3. The decreased sensory gating in the frontal and parietal areas during NREM sleep was the result of a significant increase in the test sound amplitude. Answer: Auditory sensory dysfunction in schizophrenia has been a subject of investigation to understand whether it is due to imprecision in auditory processing or increased distractibility. The research suggests that the dysfunction is more likely related to imprecision rather than distractibility. A study evaluating tone-matching ability in schizophrenic patients found that those in long-term residential care had significantly elevated tone-matching thresholds in the absence of distractors, indicating severe sensory processing dysfunction. Importantly, these patients did not show increased susceptibility to auditory distraction, which argues against the idea that prefrontal dysfunction and increased distractibility are the primary causes of auditory sensory imprecision in schizophrenia (PUBMED:11115328). Further supporting the notion of imprecision, another study linked the onset of auditory verbal hallucinations (AVH) with a failure of auditory sensory gating, an empirically-defined auditory inhibition system. During active AVH, patients exhibited higher gating ratios, suggesting incomplete suppression of auditory stimuli and pointing to a failure in early auditory sensation control processes (PUBMED:28109666). Additionally, a novel model of auditory processing proposes a third sensory processing pathway, the gating stream, which directly links primary auditory cortices to the executive prefrontal cortex within the first 50 milliseconds after stimulus presentation. This gating stream is responsible for sensory gating, a protective mechanism that prevents irrelevant, repeated information from recurrent sensory processing. Impairments in this gating stream have been implicated in the pathophysiological sensory processing in schizophrenia (PUBMED:34285443). Moreover, studies using animal models, such as the social isolation rat model of schizophrenia, have shown deficits in sensory gating and mismatch negativity-like responses, which are similar to the deficits observed in patients with schizophrenia (PUBMED:24613239). In conclusion, the evidence points towards auditory sensory dysfunction in schizophrenia being characterized by imprecision in processing auditory information rather than by increased distractibility. This imprecision may be due to deficits in sensory gating mechanisms and disruptions in the gating stream pathway, which are crucial for filtering and processing auditory stimuli accurately.
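The gating ratio that the answer above and PUBMED:28109666 rely on is simply the S2/S1 peak-amplitude quotient, computed separately for each ERP component (P50, N100, P200). The snippet below illustrates that computation; the amplitude values are invented for demonstration and do not come from any of the cited studies.

# Illustrative paired-click gating ratios: second-click (S2) peak amplitude divided by
# first-click (S1) peak amplitude; higher ratios indicate weaker sensory gating.
example_amplitudes_uv = {        # hypothetical peak amplitudes in microvolts
    "P50":  {"S1": 3.2, "S2": 1.4},
    "N100": {"S1": 6.0, "S2": 2.1},
    "P200": {"S1": 5.5, "S2": 2.5},
}

for component, amps in example_amplitudes_uv.items():
    ratio = amps["S2"] / amps["S1"]
    print(f"{component}: gating ratio = {ratio:.2f}")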
Instruction: Predictive factors for non-sentinel lymph node involvement in breast cancer patients with a positive sentinel node: should we consider sentinel node-related factors? Abstracts: abstract_id: PUBMED:29848724 Factors Predictive of Sentinel Lymph Node Involvement in Primary Breast Cancer. Background/aim: Sentinel lymph node biopsy (SLNB) has replaced axillary lymph node dissection (ALND) for axillary staging in patients with early-stage breast cancer. The need for therapeutic ALND is the subject of ongoing debate especially after the publication of the ACOSOG Z0011 trial. In a retrospective trial with univariate and multivariate analyses, factors predictive of sentinel lymph node involvement should be analyzed in order to define tumor characteristics of breast cancer patients, where SLNB should not be spared to receive important indicators for adjuvant treatment decisions (e.g. thoracic wall irradiation after mastectomy with or without reconstruction). Patients And Methods: Between 2006 and 2010, 1,360 patients with primary breast cancer underwent SLNB with/without ALND with evaluation of tumor localization, multicentricity and multifocality, histological subtype, tumor size, grading, lymphovascular invasion (LVI), and estrogen receptor, progesterone receptor and human epidermal growth factor receptor 2 status. These characteristics were retrospectively analyzed in univariate and multivariate logistic regression models to define significant predictive factors for sentinel lymph node involvement. The multivariate analysis demonstrated that tumor size and LVI (p<0.001) were independent predictive factors for metastatic sentinel lymph node involvement in patients with early-stage breast cancer. Conclusion: Because of the increased risk for metastatic involvement of axillary sentinel nodes in cases with larger breast cancer or diagnosis of LVI, patients with these breast cancer characteristics should not be spared from SLNB in a clinically node-negative situation in order to avoid false-negative results with a high potential for wrong indication of primary breast reconstruction or wrong non-indication of necessary post-mastectomy radiation therapy. The prognostic impact of avoidance of axillary staging with SLNB is analyzed in the ongoing prospective INSEMA trial. abstract_id: PUBMED:35123939 Predictive factors of non-sentinel lymph node disease in breast cancer patients with positive sentinel lymph node. Introduction: Management of positive sentinel lymph node biopsy (SLNB) in breast cancer remains a matter of debate. Our aim was to evaluate the incidence and identify predictive factors of non-sentinel lymph node metastases. Methods: Retrospective review of all cN0 breast cancer patients treated between January 2013 and December 2017, with positive SLNB that were submitted to ALND. Results: Of the 328 patients included, the majority of tumors were cT1 or cT2, with lymphovascular invasion in 58.4% of cases. The mean number of nodes isolated in SLNB was 2.7, with a mean of 1.6 positive nodes, 60.7% with extracapsular extension. Regarding ALND, a mean of 13.9 nodes were isolated, with a mean of 2.1 positive nodes. There was no residual disease in the ALND in 50.9% of patients, with 18.9% having ≥4 positive nodes. In the multivariate analysis, lymphovascular invasion, extracapsular extension in SLN, largest SLN metastasis size (>10 mm) and ratio of positive SLN (>50%) were independent predictors of non-sentinel lymph node metastases.
These four factors were used to build an unweighted score to predict the probability of a positive ALND after a positive SLNB. The AUC of the model was 0.69, and 81% of patients with score = 0 and 65.6% with score = 1 had no additional disease in ALND. Conclusion: The absence of non-sentinel lymph node metastases in the majority of patients with 1-2 positive SLN with a low risk score questions the need for ALND in this population. The identified predictive factors may help select patients in whom ALND can be omitted. abstract_id: PUBMED:26124908 Predictive Factors for Non-Sentinel Lymph Node Metastasis in the Case of Positive Sentinel Lymph Node Metastasis in Two or Fewer Nodes in Breast Cancer. Background: In breast cancer, recent clinical trials have shown that sentinel lymph node biopsy (SLNB) alone without axillary lymph node dissection results in excellent prognosis if there is sentinel lymph node (SLN) metastasis in two or fewer nodes. The aim of the present study was to investigate the association between non-SLN metastasis and clinicopathological factors in case of SLN metastasis in two or fewer nodes in breast cancer. Methods: Patients who underwent SLNB for invasive breast cancer and were found to have positive SLN in two or fewer nodes were evaluated. The associations between non-SLN metastasis and clinicopathological factors were examined. Statistical analyses were performed using the Mann-Whitney and Chi-square tests, with statistical significance set at P < 0.05. Results: A total of 358 patients were enrolled during the study period and all of these patients were female and 54 patients had SLN metastasis (15%). Positive SLN in two or fewer nodes was identified in 44 patients (81.5%). Among these patients, 17 (38.6%) were found to have non-SLN metastasis. Non-SLN metastasis was associated with invasive tumor size (P = 0.015) and lymphatic involvement (P = 0.035). Multivariate analysis showed that tumor size (P = 0.011) and lymphatic involvement (P = 0.019) remained significant independent predictors of non-SLN metastasis, and that an invasive tumor size cut-off point of 28 mm was useful for dividing patients with positive SLN in two or fewer nodes into non-SLN-positive and non-SLN-negative groups. Conclusions: Non-SLN metastasis was found in more than 30% of patients with SLN metastasis present in two or fewer nodes. Large tumor size and the presence of lymphatic involvement were significantly associated with non-SLN metastasis. abstract_id: PUBMED:25399977 Risk factors of non-sentinel lymph node metastasis in breast cancer patients with positive sentinel lymph node ≤ 2 Objective: To explore retrospectively the risk factors of non-sentinel lymph node (NSLN) metastasis in breast cancer patients with sentinel lymph node metastasis ≤ 2 and examine the likelihood of non-sentinel lymph node prediction. Methods: A sentinel lymph node biopsy database containing 455 breast cancer patients admitted between July 2005 and February 2012 at Beijing Cancer Hospital was analyzed retrospectively. The patients had ≤ 2 positive sentinel lymph nodes and complete axillary lymph node dissection. Results: The SLNS⁺/SLNS ratio (P = 0.001), histological grade (P = 0.075), size of mass (P = 0.023) and onset age (P = 0.074) were correlated with NSLN metastases. Only SLNS⁺/SLNS (OR 0.502, 95% CI 0.322-0.7844), histological grade ratio (histological grade ratio II, III and others vs grade I OR 2.696, 2.102, 3.662) were significant independent predictors for NSLN metastases.
The ROC value was 0.62 (0.56, 0.68). Conclusions: For ≤ 2 positive sentinel lymph nodes of breast cancer, ratio of SLNS⁺/SLNS and histological grading are independent factors affecting NSLN metastases. However, the results remain unsatisfactory for predicting the status of NSLN. abstract_id: PUBMED:14672402 Predictive factors for the status of non-sentinel nodes in breast cancer patients with tumor positive sentinel nodes. In patients with tumor positive sentinel nodes, axillary lymph node dissection is routinely performed while a majority of these patients have no tumor involvement in the non-sentinel nodes. The authors tried to identify a subgroup of patients with a tumor positive sentinel node without non-sentinel node tumor involvement. In 135 consecutive patients with tumor positive sentinel nodes and axillary lymph node dissection performed, the incidence of non-sentinel node involvement according to tumor and sentinel node related factors was examined. The size of the sentinel node metastasis, size of primary tumor and number of tumor positive sentinel nodes were the three factors significantly predicting the status of the non-sentinel nodes. The size of the sentinel node metastasis was the strongest predictive factor (P < 0.0001). In a subgroup of 41 patients with a stage T1 tumor and micrometastatic involvement in the sentinel node only 2 patients (5%) had non-sentinel node involvement. In patients with small primary tumors and micrometastatic involvement of the sentinel nodes, the chance of non-sentinel node involvement is small but cannot be discarded. Because the clinical relevance of micrometastases in lymph nodes is still unclear it is not advisable to omit axillary lymph node dissection even in these patients. abstract_id: PUBMED:30197795 Analysis of factors related to non-sentinel lymph node metastasis in 296 sentinel lymph node-positive Chinese breast cancer patients. Objective: Axillary lymph node dissection (ALND) may be unnecessary in 20%-60% of breast cancer patients with sentinel lymph node (SLN) metastasis. The aim of the present study was to review the medical records of Chinese patients with early-stage breast cancer and positive SLN metastasis to identify clinicopathological characteristics as risk factors for non-SLN metastasis. Methods: The medical records of 2008 early-stage breast cancer patients who received intraoperative sentinel lymph node biopsy (SLNB) between 2006 and 2016 were retrospectively reviewed. These patients were clinically and radiologically lymph node-negative and had no prior history of receiving neoadjuvant chemotherapy or endocrinotherapy. The clinicopathological characteristics of patients with positive SLN metastasis who underwent ALND were investigated. Results: In the present study, 296 patients with positive SLN metastases underwent ALND. Positive non-SLN metastases were confirmed in 95 patients (32.1%). On univariate analysis, ≥ 3 positive SLN metastases (P <0.01), SLN macrometastases (P = 0.023), and lymphovascular invasion (P = 0.04) were associated with non-SLN metastasis (P <0.05). In multivariate analysis, the number of positive SLNs was the most significant predictor of non-SLN metastasis. For patients with 0, 1, 2, or 3 associated risk factors, the non-SLN metastatic rates were 11.5%, 22.5%, 35.2%, and 73.1%, respectively. Conclusions: The number of positive SLNs, SLN macrometastases, and lymphovascular invasion were correlated with non-SLN metastasis.
The number of positive SLNs was an independent predictor for non-SLN metastasis. When 2 or 3 risk factors were present in one patient, the probability of non-SLN metastasis was higher than that in the American College of Surgeons Oncology Group Z0011 trial (27.3%); thus, avoiding ALND should be considered carefully. abstract_id: PUBMED:24768478 Prediction of non-sentinel lymph node involvement in breast cancer patients with a positive sentinel lymph node. Completion axillary lymph node dissection (cALND) is the golden standard if breast cancer involves the sentinel lymph node (SLN). However, most non-sentinel lymph nodes (NSLN) are not involved; cALND has a considerable complication rate and does not improve outcome. We here present and validate our predictive model for positive NSLNs in the cALND if the SLN is positive. Consecutive early breast cancer patients from one center undergoing cALND for a positive SLN were included. We assessed demographic and clinicopathological variables for NSLN involvement. Uni- and multivariate analysis was performed. A predictive model was built and validated in two external centers. 21.9% of 470 patients had at least one involved NSLN. In univariate analysis, seven variables were significantly correlated with NSLN involvement: tumor size, grade, lymphovascular invasion (LVI), number of positive and negative SLNs, size of SLN metastasis and intraoperative positive SLN. In multivariate analysis, LVI, number of negative SLNs, size of SLN metastasis and intraoperative positive pathological evaluation were independent predictors for NSLN involvement. The calculated risk resulted in an AUC of 0.76. Applied to the external data, the model was accurate and discriminating for one (AUC = 0.75) and less for the other center (AUC = 0.58). A discriminative predictive model was constructed to calculate the risk of NSLN involvement in case of a positive SLN. External validation of our model reveals differences in performance when applied to data from other institutions concluding that such a predictive model requires validation prior to use. abstract_id: PUBMED:27065658 Factors Influencing Non-sentinel Node Involvement in Sentinel Node Positive Patients and Validation of MSKCC Nomogram in Indian Breast Cancer Population. Current guidelines recommend completion axillary lymph node dissection (ALND) when the sentinel lymph node (SLN) contains metastatic tumor deposit. In the consequent ALND, the sentinel node is the only node involved by tumor in 40-70 % of cases. Recent studies demonstrate the oncologic safety of omitting completion ALND in low risk patients. Several nomograms (MSKCC, Stanford, MD Anderson score, Tenon score) have been developed to predict the likelihood of additional nodal metastatic involvement. We evaluated accuracy of MSKCC nomogram and other clinicopathologic variables associated with additional lymph node metastasis in our patients. A total of 334 patients with primary breast cancer underwent SLN biopsy during the period Jan 2007 to June 2014. Clinicopathologic variables were prospectively collected. Completion ALND was done in 64 patients who had tumor deposit in SLN. The discriminatory accuracy of nomogram was analyzed using Area under Receiver operating characteristic curve (ROC). SLN was the only node involved with tumor in 69 % (44/64) of our patients. Additional lymph node metastasis was seen in 31 % (20/64).
On univariate analysis, extracapsular infiltration in sentinel node and multiple sentinel nodes positivity were significantly associated (p &lt; 0.05) with additional lymph node metastasis in the axilla. Area under ROC curve for nomogram was 0.58 suggesting poor performance of the nomogram in predicting NSLN involvement. Sentinel nodes are the only nodes to be involved by tumor in 70 % of the patients. Our findings indicate that multiple sentinel node positivity and extra-capsular invasion in sentinel node significantly predicted the likelihood of additional nodal metastasis. MSKCC nomogram did not reliably predict the involvement of additional nodal metastasis in our study population. abstract_id: PUBMED:29019543 Predictive factors of axillary metastasis in patients with breast cancer and positive sentinel lymph node biopsy. Objective: to evaluate the risk factors for the presence of non-sentinel axillary metastatic disease in patients with breast cancer and positive sentinel node biopsy. Methods: retrospective cross-sectional study of women with breast cancer operated at the Cancer Institute of Ceará between 2002 and 2012 and submitted to sentinel lymph node biopsy. Results: Among 946 breast cancer patients, 331 underwent sentinel lymph node biopsy, which was positive in 83. These patients underwent axillary lymphadenectomy and 39 (46%) had metastases in other axillary lymph nodes. The variables that were significant for additional axillary disease included Ki67&gt;14 (p=0.043), angiolymphatic invasion (p=0.01) and tumor size (p=0.027). No association was observed with estrogen, progesterone, tumor grade and Her-2 receptors. Discussion: the presence of angiolymphatic invasion and tumor size have also been related to additional axillary metastasis in other studies. In addition to these variables, the same predictive effect was observed when we evaluated Ki67. The validation of these results may allow the customization of breast cancer treatment, which may reduce its morbidity. Conclusion: angiolymphatic invasion, tumor size (T3/T4) and Ki67&gt;14 were factors predictive of axillary metastasis involvement in addition to the sentinel lymph node. abstract_id: PUBMED:25207151 Sentinel lymph node biopsy in breast cancer: predictors of axillary and non-sentinel lymph node involvement. Background: Sentinel lymph node biopsy is a standard method for the evaluation of axillary status in patients with T1-2N0M0 breast cancers. Aims: To determine the prognostic significance of primary tumour-related clinico-histopathological factors on axillary and non-sentinel lymph node involvement of patients who underwent sentinel lymph node biopsy. Study Design: Retrospective clinical study. Methods: In the present study, 157 sentinel lymph node biopsies were performed in 151 consecutive patients with early stage breast cancer between June 2008 and December 2011. Results: Successful lymphatic mapping was obtained in 157 of 158 procedures (99.4%). The incidence of larger tumour size (2.543±1.21 vs. 1.974±1.04), lymphatic vessel invasion (70.6% vs. 29.4%), blood vessel invasion (84.2% vs. 15.8%), and invasive lobular carcinoma subtype (72.7% vs. 27.3%) were statistically significantly higher in patients with positive SLNs. Logistic stepwise regression analysis disclosed tumour size (odds ratio: 1.51, p=0.0021) and lymphatic vessel invasion (odds ratio: 4.68, p=0.001) as significant primary tumour-related prognostic determinants of SLN metastasis. 
Conclusion: A close relationship was identified between tumour size and lymphatic vessel invasion of the primary tumour and axillary lymph node involvement. However, the positive predictive value of these two independent variables is low and there is no compelling evidence to recommend their use in routine clinical practice. Answer: Yes, sentinel node-related factors should be considered when predicting non-sentinel lymph node involvement in breast cancer patients with a positive sentinel node. Several studies have identified sentinel node-related factors as significant predictors for non-sentinel lymph node metastasis. These factors include: 1. The size of the sentinel node metastasis, which was found to be the strongest predictive factor for the status of the non-sentinel nodes (PUBMED:14672402). 2. The ratio of positive sentinel lymph nodes to the total number of sentinel nodes removed (SLNS⁺/SLNS ratio) and histological grade were identified as independent factors affecting non-sentinel lymph node metastases (PUBMED:25399977). 3. The presence of extracapsular extension in the sentinel lymph node and the size of the largest sentinel lymph node metastasis were also significant predictors (PUBMED:35123939). 4. Lymphovascular invasion (LVI) was consistently identified as a significant predictor of non-sentinel lymph node metastasis in multiple studies (PUBMED:29848724, PUBMED:35123939, PUBMED:26124908, PUBMED:30197795). 5. The number of positive sentinel lymph nodes was the most significant predictor of non-sentinel lymph node metastasis in one study (PUBMED:30197795). 6. Other factors such as the number of negative sentinel lymph nodes and intraoperative positive pathological evaluation of the sentinel lymph node were also found to be independent predictors for non-sentinel lymph node involvement (PUBMED:24768478). In conclusion, sentinel node-related factors are important in predicting non-sentinel lymph node involvement and should be considered alongside primary tumor-related factors when making clinical decisions regarding further axillary treatment in breast cancer patients with a positive sentinel node.
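To make the unweighted four-factor score described in PUBMED:35123939 concrete, the following minimal Python sketch assigns one point per adverse factor named in that abstract (lymphovascular invasion, extracapsular extension in the sentinel node, largest sentinel node metastasis >10 mm, and >50% of sentinel nodes positive). The function and the example values are illustrative assumptions, not the authors' published implementation; only the factor definitions and the reported score-0/score-1 figures come from the abstract.

```python
# Illustrative sketch of an unweighted four-factor score (one point per adverse factor).
# Factor names and cut-offs are taken from the PUBMED:35123939 abstract; the function
# itself and the example call are hypothetical.

def non_sentinel_risk_score(lvi: bool,
                            extracapsular_extension: bool,
                            largest_sln_metastasis_mm: float,
                            positive_sln_ratio: float) -> int:
    """Return 0-4: one point per adverse factor reported in the abstract."""
    points = int(lvi)                                # lymphovascular invasion
    points += int(extracapsular_extension)           # extracapsular extension in the SLN
    points += int(largest_sln_metastasis_mm > 10)    # largest SLN metastasis > 10 mm
    points += int(positive_sln_ratio > 0.5)          # > 50% of sentinel nodes positive
    return points

# Reported in the abstract: 81% of patients with score 0 and 65.6% with score 1
# had no additional disease on completion ALND (model AUC 0.69).
print(non_sentinel_risk_score(lvi=False, extracapsular_extension=True,
                              largest_sln_metastasis_mm=12.0, positive_sln_ratio=0.33))  # -> 2
```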
Instruction: Body mass index and acute coronary syndromes: paradox or confusion? Abstracts: abstract_id: PUBMED:24923461 Body mass index and acute coronary syndromes: paradox or confusion? Background: A better prognosis in obese patients has been described in acute coronary syndromes (ACS). However, this evidence is mostly based on retrospective studies and has provided conflicting results. No study reported cause-specific mortality according to body mass index (BMI) in ACS. We aimed to prospectively assess the impact of BMI on mortality and its specific causes in ACS patients. Methods: We included non-selected ACS patients admitted in a tertiary care coronary unit, collecting baseline characteristics, management and clinical course. Patients were stratified into five clinically meaningful BMI subgroups of <20, 20-24.9, 25-29.9, 30-35, >35 kg/m2. The primary outcome was 1 year mortality, its causes and its association with BMI. This association was assessed by the Cox regression method. Results: We included 2040 patients in our study with a mean age of 62.1 years. Low weight patients (BMI <20) were older, with fewer cardiovascular risk factors, higher prevalence of chronic obstructive pulmonary disease and worse renal function. Mean follow up was 334 days. The unadjusted analysis showed lower all-cause mortality in all subgroups as compared to low weight patients. After adjusting for potential confounders, this association remained significant for patients with a BMI 20-24.9. Cardiac mortality was similar across BMI subgroups. In contrast, the adjusted analysis showed a significantly lower non-cardiac mortality in patients with a BMI 20-24.9, 25-29.9 and 30-35 as compared to low weight patients. Conclusions: Baseline characteristics in ACS patients significantly differ according to their BMI status. The prognostic impact of BMI seems mostly related to extra-cardiac causes in low weight patients. abstract_id: PUBMED:32190137 Takotsubo syndrome - fatal prognosis of patients with low body mass index in 5-year follow-up. Introduction: The clinical courses of takotsubo syndrome (TS) and of acute coronary syndromes (ACS) seem to be very similar. However, there is limited knowledge about risk factors of poor outcomes. Low body mass index worsens the prognosis of patients with cardiovascular diseases, especially those undergoing surgical treatment. The aim of the study was to evaluate the influence of the body mass index (BMI) on the prognosis in patients diagnosed with TS. Material And Methods: Eighty patients aged 15-89 (mean: 67.9 years), 74 women and 6 men with TS diagnosis according to Mayo Clinic diagnostic criteria were divided into 3 groups: low body mass (BMI < 18.5 kg/m2), normal body mass (18.5 ≤ BMI < 25 kg/m2) and excessive body mass (BMI ≥ 25 kg/m2). Results: Patients with low BMI were older, but with less prevalent risk factors such as hypertension, hypercholesterolemia and positive family history of coronary artery disease and more frequent risk factors such as cigarette smoking, chronic obstructive pulmonary disease (COPD), depressive and anxiety disorders as well as malignancy. They also had higher haemoglobin, lower troponin, creatine kinase, C-reactive protein and lipid fractions. The highest annual, 3-year and 5-year mortality was observed in the group with BMI < 18.5 kg/m2. None of the patients with low BMI survived the 5-year follow-up period (100% vs. 25% vs. 15.2%; p < 0.0001).
In group III, mortality among overweight patients (25 ≤ BMI < 30 kg/m2) was 8.3%, and in obese people (BMI ≥ 30 kg/m2) 1 out of 5 patients died during follow-up. Conclusions: The majority of typical cardiovascular risk factors are less frequently observed in patients with TS and low body mass. Early prognosis for TS patients with low BMI is relatively favourable, whereas the 5-year follow-up is associated with extremely high mortality. Overweight patients have the best prognosis in the long-term follow-up. abstract_id: PUBMED:32511741 Health-related quality of life and cardiac rehabilitation: Does body mass index matter? Objective: To investigate the relation between body mass index class and changes in health-related quality of life in patients participating in cardiac rehabilitation. Design: Prospective cohort study. Patients: A total of 503 patients with acute coronary syndrome. Methods: Data from the OPTICARE trial were used, in which health-related quality of life was measured with the MacNew Heart Disease HRQOL Instrument at the start, directly after, and 9 months after completion of cardiac rehabilitation. Patients were classed as normal weight, overweight, or obese. Results: During cardiac rehabilitation, global health-related quality of life improved in patients in all classes of body mass index. Patients classed as overweight had a significantly greater improvement in social participation than those classed as normal weight (5.51-6.02 compared with 5.73-5.93, respectively; difference in change 0.30, p = 0.025). After completion of cardiac rehabilitation, health-related quality of life continued to improve similarly in patients in all classes of body mass index. Conclusion: Health-related quality of life improved during cardiac rehabilitation in patients of all classes of body mass index. Patients classed as overweight showed the greatest improvement. The beneficial effects were maintained during extended follow-up after completion of cardiac rehabilitation. abstract_id: PUBMED:30721655 Body Mass Index, Intensive Blood Pressure Management, and Cardiovascular Events in the SPRINT Trial. Background: It is unclear whether intensive blood pressure management is well-tolerated and affects risk uniformly across the body mass index (BMI) spectrum. Methods: The randomized, controlled Systolic Blood Pressure Intervention Trial (SPRINT) included 9361 individuals ≥50 years of age at high cardiovascular risk, without diabetes mellitus, with systolic blood pressure between 130 and 180 mmHg. Participants were randomized to intensive vs standard antihypertensive treatment and evaluated for the primary composite efficacy endpoint of acute coronary syndromes, stroke, heart failure, or cardiovascular death. The primary safety endpoint was serious adverse events. We used restricted cubic splines to determine the relationship between BMI, response to intensive blood pressure lowering, and clinical outcomes in SPRINT. Results: Body mass index could be calculated for 9284 (99.2%) individuals. Mean BMI was similar between the 2 treatment groups (intensive group 29.9±5.8 kg/m2 vs standard group 29.8±5.7 kg/m2; P = 0.39). Median follow-up was 3.3 years (range 0-4.8 years). Body mass index had a significant, J-shaped association with risk of all-cause mortality, stroke, and serious adverse events (P < .05 for all), but these were no longer significant after accounting for key clinical factors (P > .05 for all).
Intensive blood pressure lowering reduced the primary efficacy endpoint and increased the primary safety endpoint compared with standard targets, consistently across the BMI spectrum (Pinteraction &gt; .05). Conclusion: The overall efficacy and safety of intensive blood pressure lowering did not appear to be modified by baseline BMI among high-risk older adults. abstract_id: PUBMED:27090286 Management of rivaroxaban in relation to bodyweight and body mass index. Being overweight or obese is associated with a higher individual risk of venous thromboembolism and poorer postprocedural outcomes after hip or knee replacement surgery. In addition, there is evidence that obesity represents a significant driving factor for the current and projected prevalence of atrial fibrillation. Rivaroxaban and other direct oral anticoagulants offer fixed-dose regimens for these indications. They do not require therapeutic drug monitoring or dose adjustment according to the weight of the patient. However, primary care physicians seem to be hesitant to accept the concept of a fixed-dose regimen for patients at extremes of weight, perhaps because of familiarity with weight-based dosing of other drugs including low molecular weight heparins. The main concerns related to unadjusted dosing are increased exposure in underweight patients leading to a risk of excessive bleeding and conversely to underanticoagulation of overweight patients. Rivaroxaban has shown similar efficacy and a similar or better safety profile compared with standard treatment for several venous and arterial indications, including venous thromboembolism, nonvalvular atrial fibrillation, and acute coronary syndrome. Prespecified subgroup analyses of patients stratified by weight or body mass index demonstrated outcomes that were consistent with the overall analysis and within each weight and body mass index group. The results suggest that standard-dose rivaroxaban can be safely prescribed in adult patients of all weights. abstract_id: PUBMED:30107907 Varying Effects of Body Mass Index and Mortality in Different Risk Groups. It is unclear why high body mass index (BMI) is associated with a lower risk of death in patients with cardiovascular disease (CVD). We hypothesized that the impact of higher BMI varies by the patient's baseline risk of death. We analyzed data from 14 prospective studies conducted by us in 50 countries of 170,470 patients: 22.8% without CVD, 29.4% with stable CVD and 47.8% with acute coronary syndromes (ACS). Compared with overweight (BMI 25 to 29.9 kg/m2), normal weight (BMI 20 to 24.9 kg/m2), and underweight (BMI &lt; 20 kg/m2) were associated with higher mortality in all cohorts. Compared with overweight, the risk of death among those with obesity (BMI ≥ 30 kg/m2) varied depending on the population studied: HR (95% confidence interval) in those without CVD, with stable CVD, and with ACS were 1.20 (1.01 to 1.45; p = 0.04), 1.08 (1.02 to 1.15; p = 0.01), and 1.01 (0.93 to 1.10; p = 0.72), respectively. The BMI associated with lowest mortality increased as cohort risk increased (no-CVD 27.2 kg/m2, stable CVD 28.1 kg/m2, and ACS 30.9 kg/m2; p &lt;0.001). Within each cohort, the optimal BMI value was higher in the high-risk subgroup. In the ACS cohort, low-risk patients had an optimal BMI value of 29.1 kg/m2 while in the high-risk group higher BMI was associated with lower risk of death (interaction-p &lt;0.001). 
In conclusion, between and within cohorts of people without CVD, stable CVD, and ACS, higher BMI was associated with lower risk of death in higher risk groups. The benefits of increased body fat (i.e., increased caloric reserve) may counteract the hazards in high-risk groups, suggesting an alternative explanation of the obesity paradox. abstract_id: PUBMED:28657464 Body mass index and mortality after acute coronary syndromes: a systematic review and meta-analysis. Objective: The association between body mass index (BMI) and mortality after acute coronary syndromes (ACS) is controversial. The objective of this analysis is to summarize the available evidence of this association and perform meta-analysis using adjusted estimates. Methods And Results: Systematic review from MEDLINE and EMBASE through May 2015 was performed. Studies were considered eligible if they described the association between BMI and all-cause mortality after ACS, and those reporting adjusted estimates were included in the meta-analysis. We included 35 articles with 316,455 participants, with overall poor to moderate quality. No study reported that overweight, type-I or type-II obesity was related to an increased risk of mortality compared to normal weight. Pooled adjusted estimates from 18 studies (137,975 participants) showed lower adjusted mortality both overweight (RR: 0.83; 95% CI: 0.75-0.91; p &lt; .001; I2 51%) and obese (RR: 0.79; 95% CI: 0.71-0.88; p &lt; .001; I2 33%) categories when compared to normal weight. Heterogeneity was not explained in pre-specified subgroups analysis. Conclusions: Increased BMI was associated with increased adjusted survival after ACS when compared to normal BMI. Unexplained heterogeneity and suboptimal quality of studies limit the strength of the results. This seemingly paradoxical finding needs to be confirmed with further research. abstract_id: PUBMED:25433776 Body mass index and prognosis in the elderly patients with acute coronary syndromes Background And Objective: Little information exists about the prognostic impact of body mass index (BMI) in the elderly with acute coronary syndromes (ACS). We aimed to prospectively assess the association between BMI and midterm mortality in consecutive elderly patients with ACS. Patients And Method: We included consecutive ACS patients aged 75 years or older admitted to the Coronary Care Unit. Patients were stratified into 5 BMI subgroups (&lt;20, 20-24.9, 25-29.9, 30-35, &gt; 35 kg/m(2)). Primary endpoint was overall midterm mortality. Analyses were performed by Cox regression method. Results: During the study period 600 patients were admitted, of whom 579 had known BMI values. Low weight patients (BMI&lt;20 kg/m(2)) were older, with higher prevalence of vasculopathy and lower creatinine clearance. Median follow up was 315 days. Patients with BMI&lt; 20 kg/m(2) had higher mortality as compared to the rest of groups. Adjusted analysis showed an attenuation of these differences, while maintaining significance for the groups of BMI 20-24.9 and 30-35 kg/m(2). Conclusions: The association between BMI and mortality was mainly due to worse prognosis in low weight patients. Prospective assessment of comorbidities and other variables linked to ageing should contribute to better understand the association between BMI and mortality. abstract_id: PUBMED:33506916 Body mass index and mortality in patients with cardiovascular disease: an umbrella review of meta-analyses. 
Objective: Although many previous meta-analyses of epidemiological studies have demonstrated a relationship between body mass index (BMI) and mortality, inconsistent findings among cardiovascular disease patients have been observed. Thus, we performed an umbrella review to understand the strength of evidence and validity of claimed associations between BMI and mortality in patients with cardiovascular diseases. Materials And Methods: We comprehensively re-analyzed the data of meta-analyses of observational studies and randomized controlled trials on associations between BMI and mortality among patients with cardiovascular diseases. We also assessed the strength of evidence of the re-analyzed outcomes, which were determined from the criteria including statistical significance of the p-value of random-effects, as well as fixed-effects meta-analyses, small-study effects, between-study heterogeneity, and a 95% prediction interval. Results: We ran a comprehensive re-analysis of the data from the 21 selected studies, which contained a total of 108 meta-analyses; 23 were graded as convincing evidence and 12 were suggestive, 42 were weak, and 23 were non-significant. Conclusions: Underweight increased mortality in acute coronary syndrome (ACS), heart failure, and after therapeutic intervention for patients with cardiovascular diseases. Overweight, on the other hand, decreased mortality in patients with ACS, atrial fibrillation, and heart failure, with convincing evidence. abstract_id: PUBMED:34961732 Body mass index and efficacy and safety of ticagrelor versus prasugrel in patients with acute coronary syndromes. Introduction And Objectives: The efficacy and safety of ticagrelor vs prasugrel in patients with acute coronary syndromes (ACS) according to body mass index (BMI) remain unstudied. We assessed the efficacy and safety of ticagrelor vs prasugrel in patients with ACS according to BMI. Methods: Patients (n=3987) were grouped into 3 categories: normal weight (BMI <25 kg/m2; n=1084), overweight (BMI ≥ 25 to <30 kg/m2; n=1890), and obesity (BMI ≥ 30 kg/m2; n=1013). The primary efficacy endpoint was the 1 year incidence of all-cause death, myocardial infarction, or stroke. The secondary safety endpoint was the 1 year incidence of Bleeding Academic Research Consortium type 3 to 5 bleeding. Results: The primary endpoint occurred in 63 patients assigned to ticagrelor and 39 patients assigned to prasugrel in the normal weight group (11.7% vs 7.5%; HR, 1.62; 95%CI, 1.09-2.42; P=.018), 78 patients assigned to ticagrelor and 58 patients assigned to prasugrel in the overweight group (8.3% vs 6.2%; HR, 1.36; 95%CI, 0.97-1.91; P=.076), and 43 patients assigned to ticagrelor and 37 patients assigned to prasugrel in the obesity group (8.6% vs 7.3%; HR, 1.18; 95%CI, 0.76-1.84; P=.451). The 1-year incidence of bleeding events did not differ between ticagrelor and prasugrel in patients with normal weight (6.5% vs 6.6%; P=.990), overweight (5.6% vs 5.0%; P=.566) or obesity (4.4% vs 2.8%; P=.219). There was no significant treatment arm-by-BMI interaction regarding the primary endpoint (Pint=.578) or secondary endpoint (Pint=.596). Conclusions: In patients with ACS, BMI did not significantly impact the treatment effect of ticagrelor vs prasugrel in terms of efficacy or safety. Clinical Trial Registration: NCT01944800. Answer: The relationship between body mass index (BMI) and outcomes in patients with acute coronary syndromes (ACS) has been a subject of debate, often referred to as the "obesity paradox."
This term describes the counterintuitive observation that overweight and obese patients with ACS may have better prognoses compared to their normal-weight counterparts. However, the evidence is mixed and has led to confusion in the medical community. Several studies have investigated this paradox. One study found that patients with ACS and lower BMI had higher all-cause mortality compared to those with higher BMI, but this association was mostly related to non-cardiac causes in low-weight patients (PUBMED:24923461). Another study on Takotsubo syndrome, which has clinical presentations similar to ACS, found that patients with low BMI had a significantly higher mortality rate in a 5-year follow-up, with none of the low BMI patients surviving, suggesting a worse long-term prognosis for underweight patients (PUBMED:32190137). In the context of cardiac rehabilitation, health-related quality of life improved for ACS patients across all BMI classes, with overweight patients showing the greatest improvement (PUBMED:32511741). The SPRINT trial indicated that intensive blood pressure management was effective and safe across the BMI spectrum in high-risk older adults without diabetes (PUBMED:30721655). Rivaroxaban, an anticoagulant, has been shown to have similar efficacy and safety profiles across all weight groups, suggesting that standard dosing can be safely prescribed without adjustment for BMI (PUBMED:27090286). Another study found that higher BMI was associated with lower risk of death in higher-risk groups of patients with cardiovascular disease, including those with ACS (PUBMED:30107907). A systematic review and meta-analysis also supported the finding that increased BMI was associated with increased survival after ACS compared to normal BMI, although the quality of the studies was variable (PUBMED:28657464). In elderly patients with ACS, low weight was associated with higher mortality, but this association was attenuated after adjusting for confounders (PUBMED:25433776). An umbrella review of meta-analyses found that underweight increased mortality in various cardiovascular conditions, while overweight decreased mortality with convincing evidence (PUBMED:33506916).
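Several of the abstracts above stratify patients into BMI subgroups before fitting survival models. As a minimal sketch, assuming the five strata used in PUBMED:24923461 and PUBMED:25433776 (other cited studies use different cut-points, e.g. the WHO categories in PUBMED:34961732), BMI computation and group assignment could look like this; the functions are illustrative and not taken from any of the studies.

```python
# Illustrative only: BMI calculation and the five-stratum grouping reported in
# PUBMED:24923461 / PUBMED:25433776. Not a universal classification scheme.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m2."""
    return weight_kg / (height_m ** 2)

def bmi_stratum(value: float) -> str:
    """Return the BMI subgroup label used in the two cited ACS cohorts."""
    if value < 20:
        return "<20"
    if value < 25:
        return "20-24.9"
    if value < 30:
        return "25-29.9"
    if value <= 35:
        return "30-35"
    return ">35"

print(bmi_stratum(bmi(62.0, 1.76)))  # about 20.0 kg/m2 -> "20-24.9"
```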
Instruction: Do the incentives in 3-tier pharmaceutical benefit plans operate as intended? Abstracts: abstract_id: PUBMED:15697096 Do the incentives in 3-tier pharmaceutical benefit plans operate as intended? Results from a physician leadership survey. Background: Three-tier pharmaceutical benefit systems use graded co-payments to steer patients toward "preferred" formulary medications. Objectives: To evaluate physicians' knowledge of formularies and out-of-pocket costs in such systems, as well as their perceived responsibility for helping patients manage out-of-pocket costs. Study Design: Self-administered written survey. Methods: Physician leaders participating in the California Medical Association Leadership Conference were surveyed. Results: A total of 133 responses were received from 205 participants (65% response rate). Physicians reported that they were often unaware of patients' out-of-pocket costs at the time of prescribing. Fifty-nine percent of physicians reported that they never or seldom were aware of patients' "preferred" (lower cost) formulary options when prescribing, and 70% never or seldom were aware of patients' out-of-pocket costs when prescribing. Although 88% of physicians agreed that it is important that patients' out-of-pocket costs for prescription drugs are managed, only 25% strongly or somewhat agreed that it is their "responsibility" to help. Instead, 69% of physicians believed that it is the responsibility of the pharmacist to be familiar with patients' out-of-pocket costs. Physicians reported that they receive phone calls from pharmacists concerning formulary issues after 18.6% of the prescriptions they write. Conclusions: Physician leaders reported that they often do not possess the knowledge to assist patients in managing out-of-pocket costs for prescription drugs and they depend on pharmacists to communicate patient preferences in making prescribing decisions. As a result, price preferences are communicated indirectly, likely less efficiently, rather than intentionally when prescribing decisions are made. abstract_id: PUBMED:15002642 The changing face of pharmacy benefit design. Employers, health plans, and pharmacy benefit managers-seeking to reduce rapid growth in pharmacy spending-have embraced multi-tier pharmacy benefit packages that use differential copayments to steer beneficiaries toward low-cost drugs. The consensus of fifteen pharmacy benefit design experts whom we interviewed is that such plans will become more prevalent and that the techniques these plans use to promote low-cost drugs will intensify. The effect on health outcomes depends on whether the high-cost drugs whose use is being discouraged have close, low-cost substitutes. abstract_id: PUBMED:18479407 Effects of a consumer driven health plan on pharmaceutical spending and utilization. Objectives: To compare pharmaceutical spending and utilization in a consumer driven health plan (CDHP) with a three-tier pharmacy benefit design, and to examine whether the CDHP creates incentives to reduce pharmaceutical spending and utilization for chronically ill patients, generic or brand name drugs, and mail-order drugs. Study Design: Retrospective insurance claims analysis from a large employer that introduced a CDHP in 2001 in addition to a point of service (POS) plan and a preferred provider organization (PPO), both of which used a three-tier pharmacy benefit. Methods: Difference-in-differences regression models were estimated for drug spending and utilization. 
Control variables included the employee's income, age, and gender, number of covered lives per contract, election of flexible spending account, health status, concurrent health shock, cohort, and time trend. Results: CDHP pharmaceutical expenditures were lower than those in the POS cohort in 1 year without differences in the use of brand name drugs. We find limited evidence of less drug consumption by CDHP enrollees with chronic illnesses, and some evidence of less generic drug use and more mail-order drug use among CDHP members. Conclusions: The CDHP is cost-neutral or cost-saving to both the employer and the employee compared with three-tier benefits with no differences in brand name drug use. abstract_id: PUBMED:17850042 Perverse incentives in the Medicare prescription drug benefit. This paper analyzes some of the perverse incentives that may arise under the current Medicare prescription drug benefit design. In particular, risk adjustment for a stand-alone prescription drug benefit creates perverse incentives for prescription drug plans when making coverage decisions and/or for pharmaceutical companies when setting prices. This problem is new in that it does not arise with risk adjustment for other types of health care coverage. For this and other reasons, Medicare's drug benefit requires especially close regulatory oversight, now and in the future. We also consider a relatively minor change in financing the benefit that could lead to significant changes in how the benefit functions. In particular, if all plans were required to charge the same premium, there would be less diversity in quality, but also less need to regulate formulary composition, less budgetary uncertainty, and less upward pressure on drug prices. abstract_id: PUBMED:929351 Employee-benefit plans, 1975. Despite the national decline in the number of employed workers in 1975, the number of workers covered under most types of employee-benefit plans increased slightly. Contributions and benefit payments underwent the greatest change. Total contributions increased 17 percent and benefit payments rose 14 percent. In part this change can be attributed to inflation, but it also reflects real growth in contributions and payments for most types of plans, particularly those for retirement and health care benefits. Review of the private pension plans shows that an increasing number were terminated in 1975, but the reason has not been determined; contributions to these retirement plans showed one of the largest annual increases ever. abstract_id: PUBMED:34056986 Incentives to promote the US pharmaceutical innovation: empirical research based on the case of Epogen. Objective: This study aims at emphasizing the significant impact of the incentives to promote the United States (US) pharmaceutical innovation. Methods: We carried out a retrospective single-case study. We analyzed the innovation process of Epogen: basic research, applied research, regulatory, and marketing. Results: Incentives and policies of pharmaceutical innovation significantly facilitate the entire life cycle of Epogen. The transfer of patent presented by the Bayh-Dole Act allowed Amgen to purchase the research results of Epogen. Relying on the intellectual property mechanisms and financing incentives, Amgen raised the funds needed for Epogen in applied research. Special review shortened the regulatory review of Epogen. Epogen obtained orphan drug designation twice and 8 years of market exclusivity. Tax deduction and research funding provided direct economic compensation.
The patent system enabled Epogen to obtain 32 years of patent protection (1983-2015). Monopoly pricing was a significant determinant to increase the sales of Epogen through pricing strategies. Conclusion: We pointed out that Amgen has developed the successful innovation of Epogen taking advantage of the incentives. Effective and flexible incentives and policies are essential to support the entire life cycle of new drugs, ultimately forming a sustainable driver for the long-run pharmaceutical innovation. abstract_id: PUBMED:36429914 Pharmaceutical Enterprises' R&D Innovation Cooperation Moran Strategy When Considering Tax Incentives. Drug R&D innovation contributes to the high-quality development of the pharmaceutical industry, which is related to people's life and health, economic development, and social stability. Tax incentives and industry cooperation are conducive to promoting pharmaceutical enterprises' innovation. Therefore, this paper constructs a Moran process evolutionary game model and analyzes the evolutionary trajectory of N pharmaceutical enterprises' drug R&D innovation strategic choice and considers the choice of R&D innovation strategy and non-R&D innovation strategy. We obtain the conditions for the two strategies to achieve evolutionary stability under the dominance of external factors, the dominance of expected revenue, and the dominance of super expected revenue. The evolutionary process is simulated by MATLAB 2021b. The results show that, firstly, when the number of pharmaceutical enterprises is higher than a threshold, the market is conducive to pharmaceutical enterprises choosing an R&D innovation strategy. Secondly, the higher the tax incentives, the higher the probability of pharmaceutical enterprises choosing an R&D innovation strategy. Thirdly, when the R&D success rate increases, pharmaceutical enterprises gradually change from choosing a non-R&D innovation strategy to choosing an R&D innovation strategy. Fourthly, the threshold of strategy change of pharmaceutical enterprises is the same under the dominance of expected revenue and super expected revenue. This paper puts forward some countermeasures and suggestions for promoting the R&D innovation of pharmaceutical enterprises in practice. abstract_id: PUBMED:33715778 Development and Validation of a Measure Assessing Sustainability of Tier 2 and 3 Behavior Support Systems. To identify the most effective strategies for implementing and sustaining Tier 2 and 3 behavior support systems, a measure of general and tier-specific factors hypothesized to predict sustained implementation is needed. To address this need, we conducted two studies examining the construct validity of the Advanced Level Tier Interventions Treatment Utilization and Durability Evaluation (ALTITUDE) measure: one assessing the content aspects of construct validity (Study 1) and one assessing the structural and external aspects of construct validity (Study 2). In Study 1, participants included an expert panel of 26 members who provided iterative feedback during measure development. The results showed strong content representativeness (content validity index = .93) for assessing elements indicating sustainability. In Study 2, participants were school personnel from 646 schools who completed the measure.
The results showed model fit was good for both a three-factor correlated model (CFI = .98, RMSEA = .06, SRMR = .05) and Bifactor S-1 model with correlated Tier 2 and Tier 3 specific residual factors (CFI = .98, RMSEA = .06, SRMR = .05). In addition, ALTITUDE latent factors were found to have both convergent and discriminant evidence in relation to concurrent fidelity of school-wide positive behavioral interventions and supports (SWPBIS) implementation at Tiers 1, 2, and 3. This construct validity evidence will support the use of the ALTITUDE in identifying the strongest tier-general and tier-specific predictors of sustained implementation of Tier 2 and 3 behavior support systems. abstract_id: PUBMED:33725296 The Effect of Participation in Tier 3 Services on the Uptake of Bariatric Surgery. Background: Despite the recognised advantages of bariatric and metabolic surgery, only a small proportion of patients receive this intervention. In the UK, weight management systems are divided into four tiers. Tier 3 is a clinician-lead weight loss service while tier 4 considers surgery. While there is little evidence that tier 3 has any long-term benefits for weight loss, this study aims to determine whether tier 3 improves the uptake of surgery. Method: A retrospective cohort study of all referrals to our unit between 2013 and 2016 was categorised according to source-tier 3, directly from the general practitioner (GP) or from another speciality. The likelihood of surgery was calculated using a regression model after considering patient demographics, comorbidities and distance from our hospital. Results: Of the 399 patients, 69.2% were referred directly from the GP, 21.3% from tier 3, and 9.5% from another speciality of which 69.4%, 56.2%, and 36.8% progressed to surgery (p = 0.01). On regression analysis, patients from another speciality or GP were more likely to decide against surgery (OR 2.44 CI 1.13-6.80 p = 0.03 and OR 1.65 CI 1.10-3.12 p = 0.04 respectively) and more likely to be deemed not suitable for surgery by the MDT (OR 6.42 CI 1.25-33.1 p = 0.02 and OR 3.47 CI 1.11-12.9 p = 0.03) compared with tier 3 referrals. Conclusion: As patients from tier 3 were more likely to undergo bariatric and metabolic surgery, this intervention remains a relevant step in the pathway. Such patients are likely to be better informed about the benefits of surgery and risks of severe obesity. abstract_id: PUBMED:2395031 Using employee benefit plans to fight the nursing shortage. Employee benefit plans can be a viable tool for recruiting and retaining nurses. By understanding the needs of their staff and the alternatives available, nurse administrators can help their institutions select the benefits that will be most beneficial to their staff. Certain employee benefit plans increase employee after-tax income at little cost to the hospital. The authors discuss these nontaxable and tax-deferred benefits as well as the advantages and disadvantages associated with each. Answer: The incentives in 3-tier pharmaceutical benefit plans are designed to steer patients toward lower-cost "preferred" formulary medications by using graded co-payments. However, the effectiveness of these incentives in practice appears to be mixed based on the information provided in the abstracts. 
Physicians often lack awareness of patients' out-of-pocket costs and "preferred" formulary options at the time of prescribing, with 59% reporting that they never or seldom knew of lower-cost options and 70% being unaware of out-of-pocket costs when prescribing (PUBMED:15697096). Although the majority of physicians agree that managing patients' out-of-pocket costs is important, only 25% feel it is their responsibility to help, instead relying on pharmacists to be familiar with these costs (PUBMED:15697096). Experts believe that multi-tier pharmacy benefit packages will become more prevalent and the techniques used to promote low-cost drugs will intensify (PUBMED:15002642). The impact on health outcomes depends on whether discouraged high-cost drugs have close, low-cost substitutes (PUBMED:15002642). A study comparing a consumer-driven health plan (CDHP) with a three-tier pharmacy benefit design found that CDHP pharmaceutical expenditures were lower than those in the point of service (POS) cohort in one year without differences in the use of brand name drugs (PUBMED:18479407). There was limited evidence of less drug consumption by CDHP enrollees with chronic illnesses, and some evidence of less generic drug use and more mail-order drug use among CDHP members (PUBMED:18479407). In summary, while the intent of 3-tier pharmaceutical benefit plans is to encourage the use of lower-cost medications, the actual operation of these incentives may not always align with their intended purpose. Physicians' lack of awareness and responsibility for managing out-of-pocket costs, along with the complexity of patient preferences and the nuances of drug substitutions, can lead to less efficient communication of price preferences and potentially suboptimal prescribing decisions (PUBMED:15697096).
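The incentive in a 3-tier design is simply the copayment gap between tiers. The short sketch below illustrates that arithmetic; the tier labels mirror the design discussed above, but the dollar amounts are invented for the example and do not come from any of the cited studies.

```python
# Hypothetical illustration of the steering incentive in a 3-tier pharmacy benefit.
# Copay amounts are made up; only the tiered structure reflects the abstracts above.

TIER_COPAYS = {
    "tier1_generic": 10.0,          # preferred generic (hypothetical copay)
    "tier2_preferred_brand": 25.0,  # preferred brand (hypothetical copay)
    "tier3_nonpreferred": 50.0,     # non-preferred brand (hypothetical copay)
}

def annual_out_of_pocket(tier: str, fills_per_year: int = 12) -> float:
    """Patient cost for one maintenance drug filled monthly."""
    return TIER_COPAYS[tier] * fills_per_year

# The copay gap is the saving a fully informed prescriber could pass on to the patient.
savings = annual_out_of_pocket("tier3_nonpreferred") - annual_out_of_pocket("tier1_generic")
print(savings)  # 480.0 per year in this made-up example
```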
Instruction: Is serum ferritin an additional cardiovascular risk factor for all postmenopausal women? Abstracts: abstract_id: PUBMED:15583466 Is serum ferritin an additional cardiovascular risk factor for all postmenopausal women? Background: Most of the studies on cardiovascular disease (CVD) risk factors in menopause have focused on serum lipid (lipoprotein) abnormalities and were conducted in populations which were not well controlled for several important influential factors. Methods: Two homogenous groups of 30 apparently healthy Caucasian premenopausal women and 3-5 years postmenopausal women who were nonobese, nonsmoking and not using estrogen were compared in a well-controlled cross-sectional design. Fasting serum ferritin and plasma total homocysteine (tHcy) were evaluated concomitantly to classical serum lipid (lipoprotein) risk factors. Relationships between risk factors and the influence of other contributing variables such as diet and body weight were also examined. Results: Serum total cholesterol (p < 0.01), low-density lipoproteins (LDL; p < 0.05) and triglycerides (p < 0.05) of postmenopausal women were greater than those of their menstruating counterparts, even though they ate a CVD-preventive diet, had similar body weight and body fat distribution. Their serum ferritin was almost 3-fold greater (p < 0.0001) but was still within normal limits, except for the 38.5% of postmenopausal women who exhibited values above the 80 µg/l limit that has been associated with sharp increases in the rate of heart disease in either gender. Serum ferritin was low in one third of the postmenopausal group (as low as in the premenopausal control group, whose dietary iron intake was slightly below the nutritional recommendation). The mean plasma tHcy of the postmenopausal group was almost twice as elevated (p < 0.0001). Both ferritin and tHcy were found to be linked to serum cholesterol. The correlation between tHcy and triglycerides was also significant. Conclusion: Early menopause is not associated with blood iron overload and CVD risk factor in an important proportion of women. abstract_id: PUBMED:8199174 Serum ferritin, sex hormones, and cardiovascular risk factors in healthy women. The protective effect of endogenous sex hormones is commonly believed to explain the gender gap in the risk of coronary heart disease and the diminished protection in women when menopause occurs. Recent reports indicate that iron overload, due to cessation of menstrual bleeding, may be an important factor. We therefore investigated iron stores by serum ferritin measurements in healthy premenopausal (n = 113) and postmenopausal (n = 46) women. Ferritin levels were higher in postmenopausal than in premenopausal women, both in blood donors (43.4 versus 23.1 micrograms/L, P < .001) and in nondonors (71.7 versus 32.8 micrograms/L, P < .001). Serum ferritin was positively correlated with age (r = .36, P < .001). After age adjustment, serum ferritin was positively correlated with hemoglobin, hematocrit, serum total cholesterol, and low-density lipoprotein (LDL) cholesterol. Total cholesterol was correlated with age (r = .66, P < .001), as were LDL cholesterol (r = .60, P < .01) and high-density lipoprotein cholesterol (r = .32, P < .01). Neither ferritin nor serum lipids were directly associated with female sex hormone levels. The mutual relation between ferritin, hemoglobin, and hematocrit probably only indicates their usefulness as measures of body iron.
The parallel rise in serum ferritin, total cholesterol, and LDL cholesterol might contribute to the increased risk of coronary heart disease among postmenopausal women. abstract_id: PUBMED:26057821 Effects of long-term risedronate treatment on serum ferritin levels in postmenopausal women with osteoporosis: the impact of cardiovascular risk factor load. Objective: Although ferritin has been considered as a possible link between accelerated bone loss and atherosclerosis, the long-term impact of therapeutic agents widely used to treat osteoporosis, such as bisphosphonates, on ferritin levels has not been investigated. The present study investigated the effects of risedronate on serum ferritin levels in postmenopausal women with osteoporosis. Methods: In an open-label, prospective, uncontrolled study, 68 postmenopausal women with osteoporosis were evaluated. Study participants received risedronate orally at a dose of 35 mg/week during a 6-month treatment period. Blood sampling for lipid profile, hemoglobin A1c, insulin, fibrinogen, C-reactive protein, osteoprotegerin, and ferritin was performed at baseline and after 6 months of treatment. Pulse-wave velocity and augmentation index at baseline were determined using SphygmoCor version 7.1 (AtCor Medical, Sydney, Australia). Results: Mean (SD) serum ferritin decreased significantly from 62.1 (44.8) to 46.7 (29.4) μg/dL (P &lt; 0.0001) during the treatment period. On multiple linear regression analysis, the significant predictors of Δferritin were pulse-wave velocity (P = 0.04; effect size, 0.188), C-reactive protein (P = 0.021; effect size, 0.043), insulin (P = 0.011; effect size, 0.100), and high-density lipoprotein cholesterol (P = 0.046; effect size, 0.132) at baseline. Conclusions: Risedronate treatment is associated with significantly decreased serum ferritin levels in postmenopausal women with osteoporosis and cardiovascular risk factors. abstract_id: PUBMED:16002760 Serum ferritin is a risk factor for stroke in postmenopausal women. Background And Purpose: Iron is an essential element for the human body. It has, however, been suggested that excessive iron stores may increase the risk of vascular disease. So far, epidemiologic studies on stroke are sparse. Methods: We studied the association between iron status and stroke risk in a population-based cohort of 11 471 Dutch postmenopausal women between 49 and 70 years of age. Women were included between 1993 and 1997 and followed up until January 1, 2000, for cerebrovascular events. We conducted a case-cohort study by using all stroke cases (n=63) and a random sample of the baseline cohort (n=1134). Serum ferritin, serum iron, and transferrin saturation were measured as markers of iron status. A weighted Cox proportional-hazards model was used to estimate crude and multivariate-adjusted hazard ratios for tertiles of different iron parameters in relation to stroke. Results: In a multivariate model, the highest tertile of serum ferritin concentration was associated with an increased risk of stroke (hazard ratio [HR], 1.45; 95% confidence interval [CI], 0.87 to 2.42) compared with the lowest tertile. For ischemic stroke, the increase was more pronounced (HR, 2.23; 95% CI, 1.05 to 4.73) and reached statistical significance. Conclusions: Neither serum iron nor transferrin saturation was associated with an increased stroke risk. However, higher serum ferritin concentrations in postmenopausal women are associated with an increased risk of ischemic stroke. 
abstract_id: PUBMED:29744010 Prevention of Cardiovascular Disease in Women. Cardiovascular diseases are the leading cause of morbidity and mortality among women worldwide. The pathophysiological basis of cardiovascular health among men and women is not identical. This leads to variable cardiovascular responses to stimulus and presentation of cardiovascular disease symptoms, both of which can have a direct effect on treatment outcomes. Traditionally, the enrollment of women in clinical trials has been minimal, resulting in a lack of gender-specific analysis of clinical trial data and, therefore, the absence of concrete risk factor assessment among women. However, scientific progress in the past decade has identified a spectrum of risk factors for cardiovascular diseases that may be specific to women. These risk factors, which may include menopause, hypertensive disease of pregnancy, and depression, confer additional risk in women besides the traditional risk factors. The current state of knowledge and awareness about these risk factors is suboptimal at this time. Therefore, although the treatment of cardiovascular diseases is similar in both genders, appropriate risk stratification may be limited in women compared to men. The purpose of this review is to describe the recent trends in identifying female-specific risk factors for cardiovascular diseases, their utility in risk stratification, and current pharmacological options for women with regard to cardiovascular disease prevention. abstract_id: PUBMED:30106314 Association of serum ferritin levels with non-alcoholic fatty liver disease in postmenopausal women. Objective: This study aimed to investigate the association between serum ferritin levels and the presence of non-alcoholic fatty liver disease (NAFLD) in postmenopausal women. Methods: Two hundred and forty-one postmenopausal women who participated in a heath examination program were enrolled in the present study. Serum ferritin tertiles were categorized as follows: T1, ≤46.4 ng/ml; T2, 46.5-76.1 ng/ml; and T3, ≥76.2 ng/ml. The odds ratios (ORs) and 95% confidence intervals (CIs) for NAFLD were calculated after adjusting for confounding variables across serum ferritin tertiles using multiple logistic regression analysis. Results: The overall prevalence of NAFLD was 41.4% and was significantly increased in accordance with the serum ferritin tertiles as follows: 30.0% for T1, 40.7% for T2, and 54.3% for T3, respectively. As compared with the lowest tertile, the OR (95% CI) for NAFLD in the highest tertile was 2.69 (1.16-5.28) after adjusting for age, body mass index, regular exercise, mean arterial pressure, fasting plasma glucose, triglyceride, high-density lipoprotein cholesterol, alanine aminotransferase, and leukocyte count. Conclusion: The serum ferritin level was positively and independently associated with NAFLD in postmenopausal women and could be a useful additional measure in assessing the risk of NAFLD in postmenopausal women. abstract_id: PUBMED:22230289 Serum ferritin levels associated with increased risk for developing CHD in a low-income urban population. Objective: The present study examined the association of serum ferritin with CHD risk using the Framingham Heart Study's 10-year risk algorithm. Design: Ordinal logistic regression modelling was used to interpret risk. Proportional odds modelling assessed four divisions of ranked CHD risk (4, high; 3, increased; 2, slight; 1, minimal), separately by sex. Setting: Baltimore, MD, USA. 
Subjects: African-American and white participants (n 1823) from baseline of the Healthy Aging in Neighborhoods of Diversity across the Life Span (HANDLS) study, aged 30-64 years. Results: For men, there was a 0·5 % increase in risk for every 10-unit rise in serum ferritin (pmol/l). Other significant predictors included increased BMI, white race, unemployment and C-reactive protein ≥9·5 mg/l. For women, there was a 1·5 % [corrected] increase in risk per 10-unit rise in serum ferritin (pmol/l). Other significant predictors included increased BMI, lower education, unemployment and C-reactive protein ≥9·5 mg/l. Conclusions: Serum ferritin is a significant predictor of 10-year hard CHD risk for HANDLS study participants, a low-income, urban population. Serum ferritin, independent of elevated C-reactive protein, was associated with increased 10-year CHD risk for HANDLS participants. To our knowledge, these data provide the first evidence of the role of serum ferritin as a risk factor for hard CHD in African-American and white postmenopausal women in the USA. Future research on cardiovascular events from this prospective study may confirm the association. abstract_id: PUBMED:16253641 Association among serum ferritin, alanine aminotransferase levels, and metabolic syndrome in Korean postmenopausal women. We examined the relationships among serum ferritin, alanine aminotransferase (ALT) levels, and cardiovascular risk factors of metabolic syndrome in Korean postmenopausal women. We conducted a cross-sectional study of 959 postmenopausal women without an apparent cause of liver disease. Metabolic syndrome was defined as the presence of at least 3 of the following: elevated blood pressure, low high-density lipoprotein cholesterol, elevated serum triglycerides, elevated plasma glucose, and abdominal obesity. Serum ferritin and ALT levels were found to be correlated (r=0.374, P<.001) and to be associated with the components of metabolic syndrome. Subjects with metabolic syndrome showed significantly higher serum ferritin (74.7+/- 2.0 vs 59.6+/- 2.0 ng/mL, P<.001) and ALT levels (21.3+/-1.6 vs 18.7+/-1.5 IU/L, P<.001). Moreover, the greater the number of metabolic syndrome components present, the higher were the serum ferritin and ALT levels (P<.001). Multiple regression analysis showed that serum ALT levels are significantly associated with serum ferritin levels, waist circumference, fasting blood glucose, age, and white blood cell count (adjusted R(2)=0.147). Elevated iron stores were positively associated with serum ALT levels and metabolic syndrome in Korean postmenopausal women. abstract_id: PUBMED:38358069 Serum iron and ferritin levels in female patients with gingivitis and periodontitis. Aim: The aim of the study was to compare serum ferritin and iron levels and periodontal status in pre- and post-menopausal female patients. Materials And Methods: A total of 108 systemically healthy patients were included in the study, 63 patients in the pre-menopause group and 45 patients in the post-menopause group. The periodontal diagnosis was made according to the new periodontal disease classification of the American Academy of Periodontology and the European Federation of Periodontology. Serum Iron (μg/dL) and Serum Ferritin (ng/mL) values were analysed in patients divided into groups according to menopausal status and periodontal status.
Result: The mean ferritin in pre-menopausal women (34.96 ± 27.87 ng/mL) was lower than the mean ferritin in post-menopausal women (64.24 ± 36.05 ng/mL), and this difference was found to be statistically significant (P<0.001). There was no statistically significant difference between the averages of iron and ferritin according to periodontal diagnosis in pre-menopausal women (P = 0.200 and P = 0.858). A statistically significant difference was found between serum ferritin averages according to periodontal diagnosis in post-menopausal women (P < 0.001). Serum ferritin at % Interdental bone loss = 0 in Post-M was 33.93 ± 8.65 ng/mL; 47.51 ± 16.17 ng/mL in Stage 1 periodontitis; 72.66 ± 18.12 ng/mL in Stage 2 periodontitis; and 87.4 ± 47.1 ng/mL in Stage 3-4 periodontitis. Conclusion: Serum ferritin values might vary depending on the presence or severity of periodontitis in post-menopausal women. abstract_id: PUBMED:16899440 Managing cardiovascular risk in menopausal women. Blood pressure control and prevention of glucose intolerance are primary factors in overcoming the increased cardiovascular risks in menopausal women. This heightened risk may partially be explained by the metabolic syndrome - a precursor of type 2 diabetes - in which the renin-angiotensin-aldosterone system may play a pivotal role. Once diabetes occurs, the cardiovascular risk is considerably greater in postmenopausal women than in men - especially if hypertension is also present. An additional risk factor, weight gain, is common in postmenopausal women not treated with hormone replacement therapy. Rigorous control of blood pressure has been shown to be particularly beneficial in women with metabolic syndrome; a reduction in blood pressure can reduce the mortality rate of ischemic stroke. The administration of hormone replacement therapy can also reduce the likelihood of coronary heart disease in postmenopausal women; therefore therapy should be started early in the menopausal transition to maximize cardiovascular protection. As such, an ideal hormone replacement therapy that can overcome hypertension, prevent body weight gain and control serum triglycerides offers an important advance in cardiovascular risk management during the menopause. Answer: The relationship between serum ferritin levels and cardiovascular risk in postmenopausal women is complex and appears to be influenced by various factors. Serum ferritin, a marker of iron stores, has been associated with cardiovascular risk factors and diseases in several studies, but it may not be an additional risk factor for all postmenopausal women. Some studies have found that postmenopausal women have higher serum ferritin levels compared to premenopausal women, and these elevated levels have been linked to an increased risk of cardiovascular disease (CVD) (PUBMED:8199174, PUBMED:16002760). For instance, higher serum ferritin concentrations have been associated with an increased risk of ischemic stroke in postmenopausal women (PUBMED:16002760). Additionally, serum ferritin levels have been positively correlated with age, hemoglobin, hematocrit, serum total cholesterol, and low-density lipoprotein (LDL) cholesterol, which might contribute to the increased risk of coronary heart disease among postmenopausal women (PUBMED:8199174). However, not all postmenopausal women with elevated serum ferritin levels are at equal risk.
The study by PUBMED:15583466 suggests that early menopause is not associated with blood iron overload and CVD risk factor in a significant proportion of women. Moreover, the study by PUBMED:26057821 indicates that risedronate treatment, a medication used to treat osteoporosis, is associated with significantly decreased serum ferritin levels in postmenopausal women with osteoporosis and cardiovascular risk factors. Furthermore, the study by PUBMED:30106314 found that serum ferritin levels were positively and independently associated with non-alcoholic fatty liver disease (NAFLD) in postmenopausal women, suggesting that serum ferritin could be a useful additional measure in assessing the risk of NAFLD in this population. In summary, while serum ferritin has been associated with increased cardiovascular risk in some postmenopausal women, it is not a universal risk factor for all. Factors such as the presence of other cardiovascular risk factors, the use of certain medications, and individual health conditions can influence the relationship between serum ferritin levels and cardiovascular risk (PUBMED:15583466, PUBMED:8199174, PUBMED:26057821, PUBMED:16002760, PUBMED:30106314). Therefore, serum ferritin should be considered in the context of a broader cardiovascular risk assessment rather than as an isolated risk factor for all postmenopausal women.
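As an illustrative note on how the tertile-based estimate cited above is obtained (the indicator coding is assumed here for illustration; the figures themselves come from the abstract): the adjusted odds ratio of 2.69 (95% CI 1.16-5.28) for NAFLD in the highest versus lowest ferritin tertile (PUBMED:30106314) corresponds to a multiple logistic regression of the form logit P(NAFLD) = b0 + b2·I(T2) + b3·I(T3) + covariate terms, where I(T2) and I(T3) indicate membership in the middle and top tertiles; the reported odds ratio is exp(b3) ≈ 2.69, meaning that, after adjustment, the odds of NAFLD in the top ferritin tertile are roughly 2.7 times the odds in the bottom tertile.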
Instruction: Self-care activities and glycated haemoglobin in Iranian patients with type 2 diabetes: can coping styles and social support have a buffering role? Abstracts: abstract_id: PUBMED:25104430 Self-care activities and glycated haemoglobin in Iranian patients with type 2 diabetes: can coping styles and social support have a buffering role? Background: Diabetes self-care is a key element in the overall management of diabetes. However, the importance of psychosocial factors for successful disease management is under investigated. This study aimed at exploring the role of coping styles and social support in the relationship between self-care activities and glycated haemoglobin in patients with type 2 diabetes. Methods: One hundred adults (60% female, aged 40-70 years) with type 2 diabetes completed questionnaires assessing self-care activities, coping styles and social support. In addition, a blood test was performed to obtain glycated haemoglobin levels. Results: Result showed significant relationships of glycated haemoglobin with self-care activities, coping styles and social support. Regression analysis indicated that social support had a moderating role on the relationship between self-care activities and glycated haemoglobin, such that, at very high levels of social support the association, between Self-Care and HbA1c disappears. Conclusions: Findings indicate that health care providers, within the context of the Iranian social and cultural situation, should pay more attention to psychosocial factors when addressing self-care activities. Delineation of the role of coping styles and social support might be useful for identifying patients in need of particular counselling and support for improving self-care activities and HbA1c levels. abstract_id: PUBMED:32504379 How do self-care maintenance, self-care monitoring, and self-care management affect glycated haemoglobin in adults with type 2 diabetes? A multicentre observational study. Purpose: To evaluate how self-care maintenance, self-care monitoring, and self-care management affect glycated haemoglobin in type 2 diabetes mellitus patients and to set cut-off points of the Self-Care of Diabetes Inventory scales using glycated haemoglobin as outcome of interest. Methods: A secondary analysis of a previous multicentre observational cross-sectional study was conducted. Overall, 540 adults with type 2 diabetes mellitus confirmed diagnosis were involved. Socio-demographic and clinical data were collected. Self-care maintenance, self-care monitoring, and self-care management were measured by the Self-Care of Diabetes Inventory. Linear regression models were performed to assess the relationship between self-care maintenance, self-care monitoring, and self-care management and glycated haemoglobin. Receiver operating characteristics curves were carried out to identify the best cut-off score for each self-care scale considering glycated haemoglobin &gt;7% as outcome of interest. Results: Self-care monitoring and self-care management were associated to glycated haemoglobin in both patients without (self-care monitoring p = 0.0008; self-care management p = 0.0178) and with insulin therapy (self-care monitoring p = 0.0007; self-care management p = 0.0224). Self-care maintenance was associated to glycated haemoglobin in patients without insulin therapy (p = 0.0118). Cut-off scores providing the best performance were 70 points for self-care maintenance and self-care monitoring, and 60 points for self-care management. 
Conclusions: Self-care maintenance, self-care monitoring, and self-care management differently affect glycated haemoglobin in patients with type 2 diabetes mellitus. Clinicians could implement tailored interventions to improve glycaemic control considering the lacking area of self-care. abstract_id: PUBMED:35966954 The relationship between self-care activities, social support, and glycemic control in primary healthcare patients with type 2 diabetes. Objective: Egyptian studies in assessing the relationship between diabetes self-care, social support, and glycemic control in primary healthcare (PHC) are limited. Therefore, this study aimed to assess this relationship, and to evaluate the associated factors of diabetes self-care, social support, and glycemic control in Egyptian PHC patients with type 2 diabetes (T2DM). Methods: A cross-sectional study was conducted on 320 T2DM patients at four PHC settings in Port Said city, affiliated with the General Authority of Healthcare. A semi-structured questionnaire was used to collect data, including demographic characteristics, socioeconomic status scale, disease profile, the Arabic versions of the Summary of Diabetes Self-Care Activities, and the received social support scales. Data were collected from January 2020 to June 2020. Results: Diabetes self-care activities, and self-monitoring of blood glucose had a very weak negative correlations with glycated hemoglobin (HbA1c) levels (rho = - 0.125, p = 0.025, rho = - 0.112, p = 0.044, respectively). Receiving social support on following a meal correlated positively and very weakly with HbA1c levels (rho = 0.145, p = 0.010). Hardly positive correlation was found between receiving emotional support on feelings about diabetes, and following a specific diet (rho = 0.169, p = 0.002). Diabetes self-care activities were positively associated with higher education levels, and elevated BMI. Received social support was negatively associated with having coronary artery disease, and marital status e.g. divorced and widow. Increased age, and female gender were the predictors of good glycemic control. Conclusion: Diabetes self-care activities were linked with reduced HBA1c levels. Further studies are needed to evaluate the buffering effect of social support on glycemic outcomes in PHC patients with T2DM. abstract_id: PUBMED:25976591 The role of social support on the relationship of depressive symptoms to medication adherence and self-care activities in adults with type 2 diabetes. Aim: To examine the mediating role of social support on the relationship of depressive symptoms to medication adherence and self-care activities in Korean adults. Background: Recent evidence suggests that higher levels of social support are associated with improved medication adherence and self-care activities; however, the role of social support on the relationship of depressive symptoms to medication adherence and self-care activities is less well understood. Design: A cross-sectional survey. Methods: The data were collected from 311 Korean adults with type 2 diabetes who were taking hypoglycaemic agents in the period 2012-2013. Depressive symptoms, social support, medication adherence and self-care activities were assessed using structured questionnaires. Multiple regression analysis with adjustment for covariates and the Sobel test were used to examine the mediating effect of social support on the relationship of depressive symptoms to medication adherence and self-care activities. 
Results: There were statistically significant differences by subgroups with and without depressive symptoms in social support, medication adherence and self-care activities of diet, physical activity and stress management. The Sobel test confirmed that social support mediated the effect of depressive symptoms on medication adherence and self-care activities of diet, physical activity and stress management. Conclusion: The evidence from this study suggests that social support-enhancing interventions that also manage depressive symptoms may be more timely and effective than interventions that target depressive symptoms alone in promoting adherence to medication adherence and self-care activities in this population. abstract_id: PUBMED:31871703 Factors associated with subsequent diabetes-related self-care activities: The role of social support and optimism. Aim: This study aimed to explore how social support (external factor), optimism (internal factor) and their interaction associated with diabetes-related self-care activities (DRSCA) over 3 months among people with type 2 diabetes mellitus (T2DM). Design: Both questionnaire-based and telephone-based survey were used. The data were collected face to face, the first time by questionnaire and the second time by telephone. Methods: One hundred and fifty-five patients completed valid survey questionnaires (response rate was about 70% in the first and 62% in the second round). The association of social support and optimism with subsequent DRSCA was examined after adjusting for demographics and disease information. Results: Based on results, optimism was significantly associated with subsequent DRSCA. In the dimensions of social support, objective social support and support use were significantly associated with subsequent DRSCA. The results showed that the mediation of optimism between the dimensions of social support and DRSCA was not significant after controlling for covariates. The results also indicated that social support and optimism played directly an important role in improving diabetes-related self-care activities. abstract_id: PUBMED:34562349 The Role of Social Support in Diabetes Self-Management Activities Among African-Americans. Understanding the association between social support and diabetes self-management activities in low income African-Americans provides knowledge for increasing culturally competent care. Diabetes mellitus poses a huge socioeconomic burden on the African-American population.  Diabetes self-management is a complex process that requires the ability to manage symptoms, treatment, and physical and psychosocial changes in lifestyle. Norbeck's Social Support Questionnaire (NSSQ) and the Summary of Diabetes Self-Care Activities (SDSA) were used to examine the relationships between types and strengths of social support and selected diabetes self-care activities (e.g., diet, exercise, daily blood sugar testing, foot care, medications) in a random sample of 85 adult African-Americans with diabetes. Results of this study show inconsistent associations between social support and diabetes self-management activities. Significant positive correlations were found between participants washing their feet and emotional support, instrumental support, and functional support, r = .255, .253, .257 (ps &lt; 0.05). However, the self-care activity of foot care was negatively associated with emotional social support, r = -.220, p (two-tailed) = .043 and total functional support, r = .214, p (two-tailed) = 0.05. 
abstract_id: PUBMED:34828652 Social Support, Self-Care Behaviour and Self-Efficacy in Patients with Type 2 Diabetes during the COVID-19 Pandemic: A Cross-Sectional Study. Diabetes mellitus is a major public health issue that considerably impacts mortality, morbidity, and healthcare costs worldwide. The COVID-19 pandemic has created havoc in diabetes management, too, like other spectrums of life. A descriptive, cross-sectional study was adopted to determine the effect of Social Support, Self-Care Behaviour and Self-Efficacy in Type 2 Diabetes Mellitus (T2D) during this COVID-19 pandemic. Two hundred T2D patients who satisfied the inclusion criteria were chosen using a convenient sampling procedure. The tool consists of four sections, including socio-demographic characteristics, Multidimensional Scale of Perceived Social Support (MSPSS), revised Summary of Diabetes Self-Care Activities (SDSCA) Scale and modified Diabetes Management Self-Efficacy Scale (DMS). Descriptive and inferential statistics were used to analyze the obtained data. The mean and SD of diabetic management self-efficacy is 5.74 (1.95) and 4.37 (1.4), respectively, for patients with HbA1c &lt; 6.5% and HbA1c ≥ 6.5%. The self-care activities of the patients who had good glycemic control were 4.31 (2.06) compared to 3.50 (1.73) who did not. The social support received by the patients was 6.13 (2.13) vs. 5.31 (1.67) among patients with glycemic control vs. no control. The results show that social support (p = 0.04), self-efficacy (p =0.01) and self-care activities (p = 0.001) were significantly related to the level of glycemic control of the T2D patients. A significant relationship was also identified between gender (p = 0.036), age (p = 0.001) and education status (p = 0.000) with HbA1c control of the participants. This study demonstrates a significant relationship between social support, self-care behaviours, self-efficacy and glycemic management in T2D patients. During this COVID-19 pandemic, interventions to enhance the self-care activities like exercise and social support to boost their self-efficacy; for better diabetes management, reducing diabetes complications or prolonging their onset are the need of the hour. abstract_id: PUBMED:36733216 Systematic review of the effectiveness of nurse-led care in reducing glycated haemoglobin in adults with Type 1 or 2 diabetes. Background: Type 1 and 2 diabetes care, especially within primary health-care settings, has traditionally involved doctor-led clinics. However, with increasing chronic disease burden, there is scope for nurses to expand their role in assisting diabetes self-management. Aims: This study aimed to determine the effectiveness of nurse-led care in reducing glycated haemoglobin in adults with Type 1 or 2 diabetes. Methods: Methodology from the Joanna Briggs Institute Method for Systematic Review Research and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed, including identifying publications, assessing study quality, summarizing evidence and interpreting findings. The search strategy involved using the Medical Subject Headings and keyword variations when searching MEDLINE (Ovid), Scopus, PubMed and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. Inclusion criteria were samples with Type 1 or 2 diabetes, mean age of ≥18 years, English language studies and publication date of January 2011-December 2021. Results: Overall, 34 articles from 16 countries met inclusion criteria. 
Though not always clinically significant, results indicated that nurse-led care had beneficial impacts on glycated haemoglobin values, with reductions from 0.03% to 2.0%. This was evident when nurses received formal training, used treatment algorithms, had limited medical support, utilized technology and offered defined culturally sensitive and appropriate diabetes care. Conclusions: Findings support nurse-led Type 1 and 2 diabetes care. Although further research is required, changes may necessitate increased recognition of nurse-led care and funding. Nurse-led care models should differ according to health-care settings. abstract_id: PUBMED:23411653 Spiritual and religious beliefs and practices and social support's relationship to diabetes self-care activities in African Americans. Purpose: The purpose of this study is to investigate the relationship among spiritual and religious beliefs and practices, social support, and diabetes self-care activities in African Americans with type 2 diabetes, hypothesizing that there would be a positive association. Method: This cohort study used a cross-sectional design that focused on baseline data from a larger randomized control trial. Diabetes self-care activities (summary of diabetes self-care activities) and sociodemographic characteristics were assessed, in addition to spiritual and religious beliefs and practices and social support based on the systems of belief inventory subscales I (beliefs and practices) and II (social support). Results: There were 132 participants: most were women, middle-aged, obese, single, high school educated, and not employed. Based on Pearson correlation matrices, there were significant relationships between spiritual and religious beliefs and practices and general diet. Additional significant relationships were found for social support with general diet, specific diet, and foot care. Based on multiple linear regression, social support was a significant predictor for general diet, specific diet, and foot care. Sex was a significant predictor for specific diet, and income was a significant predictor for blood glucose testing. Conclusions: The findings of this study highlight the importance of spiritual and religious beliefs and practices and social support in diabetes self-care activities. Future research should focus on determining how providers integrate patients' beliefs and practices and social support into clinical practice and include those in behavior change interventions. abstract_id: PUBMED:33258148 The buffering effect of social support on diabetes distress and depressive symptoms in adults with Type 1 and Type 2 diabetes. Background: Social support may buffer or decrease the negative effects of diabetes distress (DD) and depressive symptoms on diabetes outcomes. We assessed the buffering role of social support in the relationship between DD and self-care and depressive symptoms and self-care in adults with Type 1 (T1D) and Type 2 (T2D) diabetes. Methods: Participants completed the Diabetes Distress Scale for T2D or T1D, the Patient Health Questionnaire-9, the Medical Outcomes Study Social Support Survey and the Self-Care Inventory-Revised. We conducted hierarchical multiple regression models using SPSS version 26.0. Results: A total of 325 adults (median age = 40.5 years, 62.2% women, 86.5% White; 59.7% T2D, A1C = 59 ± 6 mmol/mol or 7.5 ± 1.6%; median duration = 11.0 years) participated. 
Greater social support buffered the negative effects of DD on self-care (R2 Δ = 0.015, p = 0.024) as well as depressive symptoms on self-care (R2 Δ = 0.024, p = 0.004) in participants with T1D and T2D. Both regression models recorded medium effect sizes (F2 = 0.220, F2 = 0.234 respectively). Social support subscale analyses showed tangible support (R2 Δ = 0.016, p = 0.018) and affectionate support (R2 Δ = 0.016, p = 0.020) buffered DD and self-care, and emotional support (R2 Δ = 0.015, p = 0.022), tangible support (R2 Δ = 0.020, p = 0.009), affectionate support (R2 Δ = 0.025, p = 0.004) and positive interaction support (R2 Δ = 0.017, p = 0.018) buffered depressive symptoms and self-care. Conclusions: Findings suggest that social support buffers the impact of DD and depressive symptoms on self-care in adults with T1D and T2D. Additional research is needed to confirm the buffering role of social support on DD and depressive symptoms. Greater understanding of these interactions may help improve clinical care and outcomes. Answer: The study conducted on Iranian patients with type 2 diabetes found that there were significant relationships between glycated haemoglobin (HbA1c) levels and self-care activities, coping styles, and social support. The regression analysis indicated that social support had a moderating role in the relationship between self-care activities and HbA1c levels. Specifically, at very high levels of social support, the association between self-care activities and HbA1c disappeared, suggesting that social support can indeed have a buffering role in the management of type 2 diabetes among Iranian patients (PUBMED:25104430). This implies that healthcare providers should consider psychosocial factors, including coping styles and social support, when addressing self-care activities to improve HbA1c levels in this population.
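To make the reported moderation concrete, a minimal sketch of the kind of model such an analysis implies (the exact specification is not given in the abstract, so the interaction form here is an assumption): HbA1c = b0 + b1·SelfCare + b2·Support + b3·(SelfCare × Support) + error. Moderation corresponds to b3 being non-zero, and the effective slope of self-care on HbA1c at a given level of support is b1 + b3·Support; the buffering pattern described for the Iranian sample is one in which this simple slope shrinks toward zero as social support increases, so that at very high levels of support the association between self-care and HbA1c is no longer detectable (PUBMED:25104430).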
Instruction: Triangular cord sign in detection of biliary atresia: is it a valuable sign? Abstracts: abstract_id: PUBMED:36276196 'Triangular Cord' Sign in Biliary Atresia Biliary atresia is an unknown etiology of extrahepatic bile duct obstruction with a 'fibrous ductal remnant,' which represents the obliterated ductal remnant in the porta hepatis. The sonographic 'triangular cord' (TC) sign has been reported to indicate a fibrous ductal remnant in the porta hepatis. In this review, we discuss the correlations among surgicopathological and sonographic findings of the porta hepatis and the definition, objective criteria, diagnostic accuracy, and differential diagnosis of the TC sign in biliary atresia. abstract_id: PUBMED:34524521 The triangular cord ratio and the presence of a cystic lesion in the triangular cord. Suggested new ultrasound findings in the early diagnosis of Biliary Atresia. Background: In biliary atresia (BA), the ultrasonic triangular cord (TC) sign is positive at ≥ 3 mm, but sometimes there is BA even if it is ≤ 3 mm. For improving the ultrasonographic diagnosis, we have established a new evaluation, adding the ratio of the anterior/posterior thickness (TC ratio) in the hyperechoic area and the presence of a cystic lesion in the triangular cord (TCC). Methods: We examined 24 cases of suspected BA who demonstrated acholic stools from 2006 to 2020. We retrospectively reviewed the timing of ultrasonographic diagnosis, the gallbladder diameter, gallbladder mucosal irregularity, the TC sign, TCC, and the TC ratio. Results: In the BA group (n = 10) vs the Non-BA group (n = 14), the age at ultrasonography was 75 ± 41.7 vs. 81 ± 39.1 days (p = 0.72), the gallbladder diameter was 12.1 ± 9.7 vs. 24.2 ± 6.96 mm (p = 0.02), irregularity of gallbladder mucosa was 7 cases vs. 1 case (p < 0.01), and TC sign was 3.9 ± 1.3 vs. 2.0 ± 0.49 mm (p = 0.01), respectively. TCC was observed in 8/10 cases in the BA group and none in the Non-BA group (p < 0.01). TC ratio was 3.40 ± 0.68 (BA group) and 1.59 ± 0.41 (Non-BA group) (p < 0.01). Conclusion: The ultrasonic TC ratio improves the diagnostic accuracy of BA. TCC is a specific finding in the BA group. abstract_id: PUBMED:12890244 Triangular cord sign and ultrasound features of the gall bladder in infants with biliary atresia. The aim of this study was to reassess the accuracy of the triangular cord sign, the triangular cord sign coupled with abnormal gall-bladder length, and an irregular gall-bladder wall in the diagnosis of biliary atresia. The ultrasonograms of 46 infants with cholestatic jaundice were reviewed for the triangular cord sign, gall-bladder length and gall-bladder wall without knowledge of the clinical data. Of the 23 infants with biliary atresia, 22 had the triangular cord sign whereas 17 infants with other causes of cholestatic jaundice did not have the triangular cord sign. The sensitivity, specificity, accuracy and positive predictive value of the triangular cord sign in the diagnosis of biliary atresia were 95.7, 73.9, 84.8 and 78.6%, respectively. The sensitivity, specificity, accuracy and positive predictive value of the triangular cord sign coupled with abnormal gall-bladder length in the diagnosis of biliary atresia were all 95.7%. Gall-bladder wall irregularity was seen in seven of 14 infants (50%) with biliary atresia whose gall bladders contained bile on ultrasound and in two of 22 infants (9.1%) without biliary atresia whose gall bladders contained bile on ultrasound.
At the medical centre where this study was performed and where infants present with cholestatic jaundice at an advanced stage, the ultrasonographic triangular cord sign coupled with abnormal gall-bladder length is more reliable than the ultrasonographic triangular cord sign alone or gall-bladder wall irregularity in the diagnosis of biliary atresia. abstract_id: PUBMED:27553925 The porta hepatis microcyst: an additional sonographic sign for the diagnosis of biliary atresia. Objectives: To describe and evaluate an additional sonographic sign in the diagnosis of biliary atresia (BA), the microcyst of the porta hepatis, in comparison with previously described signs. Methods: Ultrasound performed in 321 infants (mean age 55 days) with cholestasis was retrospectively analyzed. BA was surgically confirmed in 193 patients and excluded in 128. US evaluated gallbladder type (1: normal; 2: consistent with BA; 3: suspicious), triangular cord sign (TCS), microcyst and macrocyst, polysplenia syndrome, portal hypertension, and bile duct dilatation. T test and Pearson χ2 test were used to compare US signs between the two groups, followed by univariate regression analysis. Results: The highest specificity and sensitivity for BA (p < 0.001) were respectively obtained with non-visible gallbladder (100 %-13 %), macrocyst (99 %-10 %), polysplenia (99 %-11 %), microcyst (98 %-20 %), type 2 gallbladder (98 %-34 %), and TCS (97 %-30 %). Combination of signs (macro or microcyst; cyst and no bile duct dilatation; microcyst and/or TCS; type 2 gallbladder and/or cyst) provided better sensitivities (25-49 %) with similar specificities (95-98 %) (p < 0.001). On univariate analysis, the single US signs most strongly associated with BA were polysplenia (odds ratio, OR 16.3), macrocyst (OR 14.7), TCS (OR 13.4) and microcyst (OR 8). Conclusions: Porta hepatis microcyst is a reliable US sign for BA diagnosis. Key Points: • The porta hepatis microcyst is a specific sign of biliary atresia. • It was found in 31 (16.1 %) of 193 patients with biliary atresia. • Its specificity was 98 % (p < 0.001). • High frequency transducer and color Doppler can show the porta hepatis microcyst. abstract_id: PUBMED:28556285 Diagnostic Performance of Sonographic Features in Patients With Biliary Atresia: A Systematic Review and Meta-analysis. Objectives: To evaluate the diagnostic performance of sonographic features of biliary atresia in patients with infantile cholestasis. Methods: The Ovid-MEDLINE and EMBASE databases were searched for studies of the diagnostic performance of sonographic features of biliary atresia in patients with infantile cholestasis. A meta-analysis was performed to evaluate the diagnostic performance of the triangular cord sign and other sonographic features in patients with biliary atresia. Results: Seventeen eligible studies with 1444 patients were included. The triangular cord sign had high accuracy for diagnosing biliary atresia: the meta-analytic summary sensitivity and specificity were 85% (95% confidence interval, 77%-90%) and 97% (95% confidence interval, 94%-99%), respectively. The area under the hierarchical summary receiver operating characteristic curve was 0.97. Meta-regression analysis revealed that the cutoff thickness of the triangular cord sign (3 or 4 mm) was a significant factor affecting study heterogeneity (P < .01). The proportions of nonidentification of the gallbladder (GB) ranged from 0% to 53%.
Abnormal GB morphologic characteristics, nonvisualization of the common bile duct, and the presence of hepatic subcapsular flow showed relatively high sensitivities and specificities. Conclusions: The triangular cord sign and other sonographic features, including abnormal GB morphologic characteristics, nonvisualization of the common bile duct, and the presence of hepatic subcapsular flow, had high diagnostic performance for the diagnosis of biliary atresia. abstract_id: PUBMED:27010179 Ultrasound for the Diagnosis of Biliary Atresia: A Meta-Analysis. Objective: The purpose of this meta-analysis was to summarize the evidence on the accuracy of various ultrasound findings for excluding a diagnosis of biliary atresia. Materials And Methods: We searched MEDLINE and the Web of Science databases for the period from January 1990 to May 2015. To be included, studies had to satisfy two criteria. First, the data needed to include 2 × 2 contingency data on the diagnostic accuracy of ultrasound in identifying biliary atresia in at least 10 patients with and 10 patients without disease. Second, the study needed to use surgery or biopsy for biliary atresia and surgery, biopsy, clinical follow-up, or some combination of the three as the reference standard for the exclusion of biliary atresia. The methodologic quality of each study was assessed with version 2 of the Quality Assessment of Diagnostic Accuracy Studies tool. Estimated sensitivity and specificity of each ultrasound characteristic were calculated using a random-effects model. Results: Twenty-three studies published during 1998-2015 were included. Summary sensitivity and specificity were 0.85 (95% CI, 0.76-0.91) and 0.92 (95% CI, 0.81-0.97), respectively, for gallbladder abnormalities in 19 studies; 0.74 (95% CI, 0.61-0.84) and 0.97 (95% CI, 0.95-0.99), respectively, for triangular cord sign in 20 studies; and 0.95 (95% CI, 0.70-0.99) and 0.89 (95% CI, 0.79-0.94), respectively, for the combination of the triangular cord sign and gallbladder abnormalities in five studies. Subgroup analysis of an absent gallbladder in 10 studies yielded a summary specificity of 0.99 (95% CI, 0.93-1.00). Conclusion: The triangular cord sign and gallbladder abnormalities are the two most accurate and widely accepted ultrasound characteristics for diagnosing or excluding biliary atresia. Other ultrasound characteristics are less valuable for diagnosis or exclusion of biliary atresia. abstract_id: PUBMED:19579927 Ultrasound findings in biliary atresia: the role of triangular cord sign. In this prospective study, 49 infants with prolonged jaundice were studied between January 2005 and April 2007 in Tabriz Children's Hospital. All of these infants were evaluated with ultrasonography and isotope scan, and finally biopsy was done in all of them under the guidance of sonography. In the sonographic evaluation, the absence or presence of the gall bladder, its size and its evacuation after feeding were checked, and in addition the triangular cord sign was assessed. The PPV and specificity of the triangular cord sign in the diagnosis of extrahepatic biliary atresia (EHBA) were 100%, but its NPV was only 78.5%. The overall accuracy, PPV, NPV, specificity and sensitivity of ultrasonographic findings in the diagnosis of EHBA were about 94, 92, 94, 97 and 86%, respectively. Ultrasonography is a reliable screening method in early diagnosis of EHBA and the triangular cord sign increases its accuracy, especially its specificity.
abstract_id: PUBMED:19229615 Triangular cord sign in detection of biliary atresia: is it a valuable sign? Background: Early detection of biliary atresia (BA) has a vital role in prevention of liver cirrhosis in these patients. There is some evidence that the triangular cord (TC) sign, i.e., a triangular structure located cranial to the portal vein bifurcation on ultrasonographic examination, is suggestive of BA in suspected cases. The aim of this study is to evaluate and compare the sensitivity, specificity, and accuracy of the TC sign with other methods of diagnosis such as hepatobiliary scan. Methods: Fifty-eight infants referred to the pediatric gastroenterology ward with a diagnosis of infantile cholestasis from March 2004 to March 2008 were evaluated to find the cause of cholestasis. Diagnosis was made by means of history, clinical examination, hepatobiliary scan, and liver biopsy. Ultrasonographic examination was focused on the presence of the TC sign in patients. If the diagnosis was in favor of BA, the patient was sent for direct cholangiography as a gold-standard test for confirmation of the diagnosis. The sensitivity, specificity, and accuracy of the tests were compared with the gold standard. Results: Among 58 infants with infantile cholestasis, BA was diagnosed and confirmed in 10 infants (17.2%). Hepatobiliary scintigraphy had 80% sensitivity, 72.9% specificity, and 74.1% accuracy. TC sign had 70% sensitivity, 95.8% specificity, and 91.3% accuracy. Conclusion: TC sign is more accurate than hepatobiliary scan and has acceptable sensitivity and specificity for diagnosis of BA. abstract_id: PUBMED:11521178 Technical innovation for noninvasive and early diagnosis of biliary atresia: the ultrasonographic "triangular cord" sign. In this article, we introduce our experience regarding a new and noninvasive diagnostic tool, using ultrasonography, for the early and definite diagnosis of biliary atresia. We have focussed on the ultrasonographic image of the cone-shaped periportal fibrous mass in infants with biliary atresia since 1992, and have finally identified a triangular or band-like periportal echogenicity ("triangular cord" sign), mainly cranial to the portal vein. Based on our experience and other reports from Japan and Singapore, the ultrasonographic triangular cord sign is a simple, time-saving, highly reliable, and definite tool in the diagnosis of biliary atresia from infantile intrahepatic cholestasis, representing a positive predictive value greater than 95%. We have proposed a new diagnostic strategy in the evaluation of infantile cholestasis, with emphasis on the ultrasonographic triangular cord sign. abstract_id: PUBMED:10663514 Making the diagnosis of biliary atresia using the triangular cord sign and gallbladder length. Background: To evaluate the accuracy and utility of the triangular cord sign and gallbladder length in diagnosing biliary atresia by sonography. Materials And Methods: Sixty fasted infants with cholestatic jaundice aged 2-12 weeks were examined sonographically using a 5-10 MHz linear array transducer, focusing on the triangular cord sign (as described by Choi et al. [1]), the gallbladder, and ducts. The triangular cord is defined as a triangular or tubular echogenic density seen immediately cranial to the portal vein bifurcation; it represents the fibrotic remnant of the obliterated cord in biliary atresia. The findings were blinded to blood chemistry, 99mTc-DISIDA hepatobiliary scintigraphy, and liver biopsy. Diagnosis of biliary atresia was confirmed at surgery and histology.
Non-biliary atresia infants resolved medically. A comparison of the charges of the various investigations was made. Results: Twelve infants had biliary atresia, and ten demonstrated a definite triangular cord. The two false-negatives had small or nonvisualized gallbladders. No false-positives were recorded. Gallbladder length ranged from 0-1.45 cm with a mean of 0.52 cm in biliary atresia compared to a mean of 2.39 cm in nonbiliary atresia infants. 99mTc-DISIDA hepatobiliary scintigraphy showed no excretion (false-positive) in 23 % of nonbiliary atresia cases. Scintigraphy and liver biopsy charges were 2 and 6 times that of sonography, respectively. Conclusion: The triangular cord sign and gallbladder length together are noninvasive, inexpensive, and very useful markers for biliary atresia. Answer: Yes, the triangular cord sign is considered a valuable sign in the detection of biliary atresia. The triangular cord sign, which is a sonographic finding, has been reported to indicate a fibrous ductal remnant in the porta hepatis, which is characteristic of biliary atresia (PUBMED:36276196). Studies have shown that the triangular cord sign has high diagnostic accuracy for biliary atresia, with a meta-analytic summary sensitivity and specificity of 85% and 97%, respectively (PUBMED:28556285). Additionally, the presence of the triangular cord sign coupled with abnormal gall-bladder length has been found to be more reliable than the triangular cord sign alone or gall-bladder wall irregularity in the diagnosis of biliary atresia (PUBMED:12890244). Further research has suggested that the triangular cord ratio and the presence of a cystic lesion in the triangular cord can improve the ultrasonographic diagnosis of biliary atresia (PUBMED:34524521). The triangular cord sign has also been associated with a positive predictive value (PPV) and specificity of 100% in one study, indicating its reliability as a screening method for early diagnosis of extrahepatic biliary atresia (PUBMED:19579927). Another study found that the triangular cord sign had a sensitivity of 70%, a specificity of 95.8%, and an accuracy of 91.3%, making it more accurate than hepatobiliary scan for the diagnosis of biliary atresia (PUBMED:19229615). Overall, the triangular cord sign is considered a noninvasive, time-saving, and highly reliable diagnostic tool for distinguishing biliary atresia from infantile intrahepatic cholestasis, with a positive predictive value greater than 95% (PUBMED:11521178). When combined with gallbladder length measurement, the triangular cord sign can serve as a very useful marker for biliary atresia (PUBMED:10663514).
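As a quick arithmetic check of the accuracy figures cited above, using the counts reported for the triangular cord sign in PUBMED:12890244 (22 of 23 infants with biliary atresia showed the sign; 17 of 23 infants without biliary atresia did not, so 6 false positives are inferred as 23 - 17): sensitivity = 22/23 ≈ 95.7%, specificity = 17/23 ≈ 73.9%, positive predictive value = 22/(22 + 6) ≈ 78.6%, and overall accuracy = (22 + 17)/46 ≈ 84.8%, which reproduces the published values for the sign used alone.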
Instruction: Are manic symptoms that 'dip' into depression the essence of mixed features? Abstracts: abstract_id: PUBMED:26717522 Are manic symptoms that 'dip' into depression the essence of mixed features? Background: Three symptoms of (hypo)mania that clinically represent mood disorders mixed states have been omitted from the DSM-5 mixed features specifier because 'they fail to discriminate between manic and depressive syndromes'. Therefore, the present study examined the role of distractibility, irritability and psychomotor agitation (DIP) in characterising mixed depressive states. Methods: Fifty in-patients at a specialist mood disorders unit underwent a detailed longitudinal clinical evaluation (3-6 weeks) and were assessed on a range of standardized measures to characterise their illness according to depression subtype, duration of illness and clinical features-including specifically depressive and manic symptoms and the context in which these occur. Results: 49 patients met criteria for major depressive episode, and of these, 34 experienced at least one dip symptom. Patients who endorsed distractibility were more likely to be diagnosed with Bipolar Disorder than Major Depressive Disorder; patients who endorsed irritable mood were more likely to have non-melancholic depression (admixture of depressive and anxiety symptoms), and patients who reported psychomotor agitation experienced a significantly greater number of distinct periods of (hypo)manic symptoms compared with those who did not. Limitations: The present study used a modest sample size and did not control for medication or comorbid illness. Although this is inevitable when examining real-world patients in a naturalistic setting, future research needs to allow for comorbidity and its impact, specifically anxiety. Conclusions: The present findings suggest that all 3 symptoms that have been excluded from DSM-5 may be cardinal features of mixed states, as they 'dip' into depressive symptoms to create a mixed state. abstract_id: PUBMED:37485405 Prevalence of mixed features in patients with first-episode depression: An exploratory multicentric study. Background And Aim: There is limited literature on the prevalence of mixed features in patients with depression, especially from countries in Asia. Our aim was to evaluate the prevalence of "mixed features" in patients with first-episode depression. Materials And Methods: Patients with first-episode depression were evaluated for the presence of mixed features as per the Diagnostic and Statistical Manual (DSM)-5 criteria. They were additionally evaluated on Hamilton Depression Rating Scale (HDRS) and Young Mania Rating Scale (YMRS). Results: About one-sixth (16%) of the patients fulfilled the DSM-5 criteria for the mixed features specifier. The most common manic/hypomanic clinical feature was increased talkativeness or pressure of speech, followed by elevated expansive mood (12.5%), and inflated self-esteem or grandiosity was the least common feature (8.7%). Those with mixed features had higher prevalence of comorbid tobacco dependence and psychotic symptoms. In terms of frequency of depressive symptoms as assessed on HDRS, compared to those without mixed features, those with mixed features had higher frequency of symptoms such as depressed mood, insomnia during early hours of morning, work and activities, agitation, gastrointestinal somatic symptoms, genital symptoms, hypochondriasis, and poorer insight. 
Conclusion: Mixed features specifier criteria were fulfilled by 16% of patients with first-episode depression. This finding suggests that the extension of this specifier to depression can be considered a useful step in understanding the symptom profile of patients with depression. abstract_id: PUBMED:27869049 Exclusion of overlapping symptoms in DSM-5 mixed features specifier: heuristic diagnostic and treatment implications. This article focuses on the controversial decision to exclude the overlapping symptoms of distractibility, irritability, and psychomotor agitation (DIP) with the introduction of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) mixed features specifier. In order to understand the placement of mixed states within the current classification system, we first review the evolution of mixed states. Then, using Kraepelin's original classification of mixed states, we compare and contrast his conceptualization with modern day definitions. The DSM-5 workgroup excluded DIP symptoms, arguing that they lack the ability to differentiate between manic and depressive states; however, accumulating evidence suggests that DIP symptoms may be core features of mixed states. We suggest a return to a Kraepelinian approach to classification (with mood, ideation, and activity as key axes) and reintegration of DIP symptoms as features that are expressed across presentations. An inclusive definition of mixed states is urgently needed to resolve confusion in clinical practice and to redirect future research efforts. abstract_id: PUBMED:32008688 Mixed Features in Depression: The Unmet Needs of Diagnostic and Statistical Manual of Mental Disorders Fifth Edition. The Diagnostic and Statistical Manual of Mental Disorders Fifth Edition introduced the specifier "with mixed features" including 3 or more nonoverlapping typical manic symptoms during a major depressive episode in bipolar disorder type I or II or unipolar major depressive disorder. Excluding overlapping excitatory symptoms, which are frequently observed in mixed depression, leaves many patients with mixed depression undiagnosed. As a consequence, alternative diagnostic criteria have been proposed, calling for the inclusion of the following symptoms in the rubric of mixed features: psychomotor agitation, mood lability, and aggressiveness. A deeper diagnostic reconsideration of mixed features in depression should be provided by the new nosologic classification systems. abstract_id: PUBMED:24856554 Efficacy of olanzapine monotherapy in the treatment of bipolar depression with mixed features. Background: This analysis investigated the correlations between the efficacy of olanzapine monotherapy and the number of concurrent manic symptoms in patients treated for bipolar depression. Methods: Pooled data from 2 placebo-controlled olanzapine studies in patients with bipolar I depression were analyzed (total 1214 patients; 690 olanzapine monotherapy patients and 524 placebo patients). Patients were categorized for mixed features by the number of concurrent manic symptoms at baseline (0, 1 or 2, and ≥3, respectively, as measured by a Young Mania Rating Scale item score ≥1). Efficacy was evaluated by change in Montgomery-Åsberg Depression Rating Scale (MADRS) total score from baseline to 6 weeks. Results: Least-squares mean differences between olanzapine and placebo in the change of MADRS total scores were -3.76 (p=0.002), -3.20 (p<0.001), and -3.44 (p=0.002) for mixed features 0, 1 or 2, and ≥3, respectively.
The response rates for olanzapine versus (vs.) placebo were 52.6% vs. 39.8%, 50.3% vs. 40.0%, and 42.2% vs. 33.7% for mixed features 0, 1 or 2, and ≥3, respectively. The remission rates for olanzapine vs. placebo group were 46.1% vs. 34.3%, 39.5% vs. 32.0%, and 34.8% vs. 24.1% for mixed features 0, 1 or 2, and ≥3, respectively. No significant interaction between mixed features and treatment was seen in the MADRS changes or response and remission rates. Limitations: Post hoc analyses of the data from 2 previous randomized clinical studies. Conclusions: Olanzapine monotherapy was shown to be effective in the treatment of bipolar depression irrespective of the presence of concurrent manic symptoms. abstract_id: PUBMED:27349656 Depression with Mixed Features in Adolescent Psychiatric Patients. Depression with mixed features is poorly understood, especially in pediatric samples. This study compares symptoms and correlates of depressed adolescent inpatients with mixed features to inpatients with bipolar disorder and major depression. 407 adolescents were administered diagnostic interviews and self-reports, and 262 were categorized as Depression with Mixed Features (MXD; n = 38), Consensus Bipolar (CB; n = 79), or Depression Only (DO; n = 145). Demographic and morbidity information were collected via chart reviews. MXD adolescents evidenced elevated mania-related symptoms compared to DO adolescents. MXD adolescents had elevated Unusually Energetic symptoms and increases for six additional category B mania-related symptoms compared to CB adolescents. MXD adolescents met criteria for more comorbid disorders and reported elevated suicidality, anger, and trauma symptoms compared to CB and DO adolescents. Overall, MXD adolescents evidenced elevated symptomatology compared to other groups, suggesting mixed depression may represent a unique constellation of symptoms meriting further investigation. abstract_id: PUBMED:36645972 A comparative study of prevalence of mixed features in patients with unipolar and bipolar depression. Background: There is a lack of data on the mixed specifier from developing countries like India. Aim: In this background, the present study aimed to evaluate the prevalence of "mixed specifier" in patients with unipolar depression and bipolar depression. The additional aim was to evaluate the sociodemographic and clinical correlates of the mixed specifier. Methodology: 110 patients (51 diagnosed with current episode unipolar depression and 59 diagnosed with current episode bipolar depression) were evaluated on DSM-5 criteria for mixed specifier for depression, Clinically Useful Depression Outcome Scale, Koukopoulos Mixed Depression Rating Scale, Hamilton depression rating scale (HDRS) and Young mania rating scale. Result: According to DSM-5, 11 (21.56%) out of the 51 patients with unipolar depression fulfilled at least 3 out of the 7 criteria for the mixed specifier for depression, and 14 (23.72%) out of 59 patients with bipolar depression fulfilled the criteria for the mixed specifier, with no significant difference in the prevalence across the 2 groups. There was no significant difference in the sociodemographic and clinical profile of those with and without mixed features in both unipolar and bipolar depression groups. However, those with mixed and without mixed features differ on certain depressive symptoms as assessed on HDRS. Conclusions: About one-fifth of patients with unipolar and bipolar depression have mixed features during the acute phase of depression. 
abstract_id: PUBMED:28462772 Mixed features in major depressive disorder: diagnoses and treatments. For the first time in 20 years, the American Psychiatric Association (APA) updated the psychiatric diagnostic system for mood disorders in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). Perhaps one of the most notable changes in the DSM-5 was the recognition of the possibility of mixed symptoms in major depression and related disorders (MDD). While MDD and bipolar and related disorders are now represented by 2 distinct chapters, the addition of a mixed features specifier to MDD represents a structural bridge between bipolar and major depression disorders, and formally recognizes the possibility of a mix of hypomania and depressive symptoms in someone who has never experienced discrete episodes of hypomania or mania. This article reviews historical perspectives on "mixed states" and the recent literature, which proposes a range of approaches to understanding "mixity." We discuss which symptoms were considered for inclusion in the mixed features specifier and which symptoms were excluded. The assumption that mixed symptoms in MDD necessarily predict a future bipolar course in patients with MDD is reviewed. Treatment for patients in a MDD episode with mixed features is critically considered, as are suggestions for future study. Finally, the premise that mood disorders are necessarily a spectrum or a gradient of severity progressing in a linear manner is argued. abstract_id: PUBMED:30055852 Assessment of the psychometric properties of the 17- and 6-item Hamilton Depression Rating Scales in major depressive disorder, bipolar depression and bipolar depression with mixed features. Assessing therapeutic response in depression requires scales that adequately measure the core symptoms of depressive symptomatology. The main goal of this study was to assess the psychometric properties of the 17-item Hamilton Depression Rating Scale (HAM-D17) and the 6-item Hamilton Depression Rating Scale (HAM-D6) in patients with Major Depressive Disorder (MDD), bipolar depression and bipolar depression with mixed features. We conducted a reanalysis of a pragmatic clinical trial in an outpatient clinic for mood disorders that included eight weeks of follow-up. A Mokken analysis was performed to evaluate the unidimensionality of the HAM-D17 and HAM-D6, and the Spearman correlation was used to assess concurrent validity between the HAM-D17, the HAM-D6 and quality of life scale (SF-36 and WHOQOL-BREF) scores. A total of 237 patients with a mean age of 40.2 years (±11.7) were included. According to the DSM-IV criteria, 58 (24.5%) were diagnosed with MDD and 73 (30.8%) were diagnosed with bipolar depression. Bipolar depression with mixed features was diagnosed in 106 (44.7%) patients according to the DSM-IV and supplemented by the Cincinnati criteria. Only the HAM-D6 scale proved to be unidimensional, showing strong homogeneity for evaluating MDD, moderate homogeneity for bipolar depression and weak homogeneity for bipolar depression with mixed features. Both the HAM-D17 and the HAM-D6 had inverse, significant correlations at baseline with SF-36 and WHOQOL-BREF scores. This is the first study to include bipolar depression patients with mixed features in an assessment of HAM-D6 unidimensionality. abstract_id: PUBMED:36580584 Distractibility, anxiety, irritability, and agitation symptoms are associated with the severity of depressive and manic symptoms in mixed depression. 
Objective: To explore whether there is an association between distractibility, anxiety, irritability, and agitation (DAIA) symptoms and the severity of depressive and manic symptoms. Methods: Patients with unipolar and bipolar disorder (I and II) and mixed depression were evaluated. DAIA symptoms were assessed using previously described definitions. Results: The full analysis set comprised 100 patients. The severity of depressive symptoms in mixed depression, assessed by Montgomery-Åsberg Depression Rating Scale (MADRS), was significantly associated with the presence of two or more DAIA symptoms in the bipolar sample, influenced mainly by anxiety. The severity of manic symptoms in mixed depression, assessed by Young Mania Rating Scale (YMRS), was significantly associated with the presence of two or more DAIA symptoms in the bipolar sample and three or four DAIA symptoms in the unipolar sample. Conclusion: DAIA symptoms were associated with greater severity of manic symptoms in mixed depression. DAIA symptoms must be evaluated in all patients with mixed features and are associated with the severity of depressive and manic symptoms in mixed depression. Answer: The concept of manic symptoms that 'dip' into depression is indeed central to the understanding of mixed features in mood disorders. The study by PUBMED:26717522 suggests that distractibility, irritability, and psychomotor agitation (DIP) are cardinal features of mixed states, as they 'dip' into depressive symptoms to create a mixed state. This finding challenges the DSM-5 decision to exclude these symptoms from the mixed features specifier due to their failure to discriminate between manic and depressive syndromes. The study's results indicate that patients with these symptoms may experience a greater number of distinct periods of (hypo)manic symptoms, suggesting their relevance in characterizing mixed depressive states. The exclusion of overlapping symptoms such as DIP in the DSM-5 mixed features specifier has been a subject of controversy, as discussed in PUBMED:27869049. The article suggests that these symptoms may be core features of mixed states and advocates for a return to a Kraepelinian approach to classification, which would include mood, ideation, and activity as key axes and reintegrate DIP symptoms as features expressed across presentations. Furthermore, PUBMED:32008688 highlights the unmet needs in the DSM-5's approach to mixed features in depression, suggesting that excluding overlapping excitatory symptoms frequently observed in mixed depression leaves many patients undiagnosed. The article calls for a deeper diagnostic reconsideration of mixed features in depression by future nosologic classification systems. In support of the significance of mixed features, PUBMED:37485405 reports that about 16% of patients with first-episode depression fulfilled the DSM-5 criteria for the mixed features specifier, indicating that the extension of this specifier to depression is useful in understanding the symptom profile of patients with depression. Additionally, PUBMED:36645972 found that approximately one-fifth of patients with unipolar and bipolar depression have mixed features during the acute phase of depression, further emphasizing the prevalence of mixed features in mood disorders. 
Overall, the evidence from these studies suggests that manic symptoms that 'dip' into depression are indeed essential to the essence of mixed features, and there is a need for diagnostic systems to recognize and include these symptoms for accurate characterization and treatment of mixed states.
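For readers who want to see how a post hoc moderator analysis of this kind is typically set up, the following is a minimal illustrative sketch (not the authors' actual analysis) of a logistic regression testing a treatment-by-mixed-features interaction on a binary response outcome. The data frame, column names and values are invented placeholders.

```python
# Illustrative sketch only: testing a treatment-by-mixed-features interaction
# on a binary response outcome with logistic regression (statsmodels).
# All column names and values below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Toy patient-level data standing in for trial data
df = pd.DataFrame({
    "treatment":   ["olanzapine", "placebo"] * 50,
    "mixed_group": (["0", "1-2", ">=3"] * 34)[:100],  # manic-symptom strata
    "response":    [1, 0, 1, 1, 0] * 20,              # 1 = responder
})

# Main effects plus interaction; a non-significant interaction term is what a
# report of "no significant interaction between mixed features and treatment" refers to.
model = smf.logit("response ~ C(treatment) * C(mixed_group)", data=df).fit(disp=0)
print(model.summary())
```

In practice one would also report a likelihood-ratio test comparing the models with and without the interaction terms, and apply the same approach to remission as the outcome.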
Instruction: Is there a relation between histopathologic characteristics of pterygium and recurrence rates? Abstracts: abstract_id: PUBMED:23335315 Is there a relation between histopathologic characteristics of pterygium and recurrence rates? Purpose: To explore the interrelationships of histopathologic characteristics of pterygium and postoperative recurrence. Methods: Consecutive patients with primary pterygium or recurrent pterygium treated in our tertiary center between January 2007 and January 2010 were included in the study. All the patients were surgically treated by limbal-conjunctival autograft transplantation and postoperatively followed up for at least 1 year. Histopathologic changes were classified as inflammation intensity, degree of vascularization, and fibrinoid change. The results of examination of histopathologic and clinical characteristics of pterygium were comparatively analyzed. A total of 101 consecutive patients were included in the study. Ninety eyes of 90 patients who had primary pterygium (PP group) were compared with 11 eyes of 11 patients who had recurrent pterygium (RP group). In the PP group, 7 of the 90 (7.8%) patients had evidence of recurrence, while in the RP group, 2 of the 11 (18.2%) patients had evidence of recurrence (p=0.254). Results: No significant difference was found in inflammation intensity, degree of vascularization, and fibrinoid change between PP and RP groups (p&gt;0.05). In the PP group, no significant difference was found for inflammation intensity, degree of vascularization, and fibrinoid change between patients with (7/90) and without recurrences (83/90) (p&gt;0.05). In the RP group, no significant difference was detected for inflammation intensity, degree of vascularization, and fibrinoid change between patients with (2/11) and without recurrences (9/11) (p&gt;0.05). Conclusions: No significant correlation between the histology of pterygium and recurrence rate could be established. abstract_id: PUBMED:23474249 Peripheral hypertrophic subepithelial corneal degeneration: nomenclature, phenotypes, and long-term outcomes. Purpose: To describe the long-term outcomes of peripheral hypertrophic subepithelial corneal degeneration. Design: Retrospective case series. Participants: Twenty-two patients under the care of the External Disease Service, Moorfields Eye Hospital. Methods: All patients matching clinical diagnostic criteria were included. Symptomatic patients were managed either conservatively or were offered superficial keratectomy in progressive cases where symptom control was inadequate. All excised tissue was examined histologically. Main Outcome Measures: Clinical phenotype, symptoms, recurrence rate after surgery, and histopathologic results. Results: Twenty-two white patients (20 women and 2 men; age range, 27-88 years; median age, 42 years) had peripheral, usually bilateral (20/22 [91%]), elevated circumferential peripheral subepithelial corneal opacities and adjacent abnormal limbal vasculature, with or without pseudopterygia in 9 patients (41%), among whom 7 (32%) patients had bilateral disease. Apart from the abnormal vasculature, there were no signs of chronic ocular surface inflammation. Symptoms were ocular surface discomfort in 10 patients (45%), reduced vision in 4 patients (18%), and both of these in 5 patients (23%). Three patients (14%) were asymptomatic. 
There was no treatment in 6 patients (27%), topical lubricants in 8 patients (36%), spectacles in 1 patient (5%), and superficial keratectomy in 7 patients (32%; 5 bilateral), which was repeated for incomplete primary excision in 1 eye of 2 patients (9%) and for a bilateral recurrence in 1 patient (5%). Median follow-up after excision was 5 years (range, 1-11 years). Histopathologic changes were similar to those observed in pterygia (vascular component) and Salzmann's nodular degeneration (corneal component). Conclusions: Peripheral hypertrophic subepithelial corneal degeneration is an uncommon, usually bilateral, idiopathic disorder, occurring mostly in white women with a distinct phenotype. The condition was first described in 2003. Further cases among some patients have been described in more recent case series of Salzmann's nodular degeneration and diffuse keratoconjunctival proliferation, as well as in the previously described familial pterygoid corneal degeneration. Surgical excision was required in 30% of patients in this series, with infrequent short-term recurrences. The cause is uncertain. Financial Disclosure(s): The author(s) have no proprietary or commercial interest in any materials discussed in this article. abstract_id: PUBMED:21211842 Progressive keratolysis with pseudopterygium associated with erythema elevatum diutinum. Objective: To report a patient with a rare case of progressive keratolysis with pseudopterygium associated with erythema elevatum diutinum (EED). Design: Interventional case report and literature review. Participant: One patient with ocular presentation of EED was identified. Methods: A 64-year-old woman presented with recurrent pterygium and painful corneal thinning with impending perforation of the left eye. Examination revealed intensely inflamed pseudopterygium, paracentral and peripheral corneal ulcerations, and descemetocele with a pinpoint leakage. Main Outcome Measures: Clinical course and laboratory and histopathologic findings. Results: Disseminated skin lesions developed during hospital admission. Examination revealed multiple, well-demarcated, discrete, red to violaceous, firm nodules and plaques on the knuckles and dorsum of the hands, and the extensor surface of the forearms, elbows, knees, right ankle, and buttocks. A skin biopsy of the recent lesions showed a dense and diffuse inflammatory cell infiltration, predominantly neutrophilic infiltrates and nuclear dust (leukocytoclasis), located around small blood vessels in the dermis, consistent with the diagnosis of EED. The diagnosis of progressive keratolysis with pseudopterygium associated with EED was made. Spontaneous, rapid resolution of both cutaneous and ocular lesions occurred after dapsone therapy. Conclusions: Erythema elevatum diutinum is a rare, chronic, and recurrent disease that has both dermatologic and ocular manifestations. Peripheral ulcerative keratitis seems to be the most common ocular finding and may be the initial presenting feature of EED. Ophthalmologists should be aware of this rare entity in the differential diagnosis of ulcerative or nonulcerative peripheral keratitis. abstract_id: PUBMED:9186439 Comparison of conjunctival autografts, amniotic membrane grafts, and primary closure for pterygium excision. Objective: The purpose of the study is to determine whether amniotic membrane can be used as an alternative to conjunctival autograft after pterygium excision. 
Design: A prospective study of amniotic membrane grafts (group A) and primary closure (group B) was compared retrospectively with conjunctival autografts (group C) in patients with pterygia. Participants: Group A included 46 eyes with primary pterygia and 8 eyes with recurrent pterygia, group B had 20 eyes with primary pterygia, and group C consisted of 78 eyes with primary and 44 eyes with recurrent pterygia. Intervention: For the above three different surgeries, the amount of tissue removed was estimated from histopathologic analysis, and the result was evaluated by clinical examination. Main Outcome Measures: Recurrence, survival analysis, and final appearance were compared. Results: In group A, the recurrence rate was 10.9%, 37.5%, and 14.8% for primary, recurrent, and all pterygia, respectively (mean follow-up, 11 months). These three rates were significantly higher than 2.6%, 9.1%, and 4.9% noted in group C (mean follow-up, 23 months) (P < 0.001, 0.018, and 0.01, respectively). However, the latter recurrence rate was significantly lower than 45% (mean follow-up, 5.2 months) in group B for primary pterygia (P < 0.001). The onset of recurrence was delayed significantly in group C as compared with that of groups A and B. Conclusions: The relatively low recurrence rate for primary pterygia allows one to use amniotic membrane transplantation as an alternative first choice, especially for advanced cases with bilateral heads or those who might need glaucoma surgery later. abstract_id: PUBMED:23775605 The study of etiological and demographic characteristics of pterygium recurrence: a consecutive case series study from Pakistan. To determine the agents of recurrence, the demographic features, and the management and outcomes of pterygium cases treated at Aga Khan University Hospital (AKUH), Karachi. This hospital-based descriptive study included 147 patients with pterygium (147 eyes) from both sexes, who presented at the Ophthalmology Department at AKUH between January 1, 2002 and December 31, 2010. Data regarding their demographic profile and other factors were collected in a well-structured proforma. Data were analyzed using frequencies, proportions, group means, standard deviations, and Pearson's chi squared test. The male to female ratio in our study was 2:1, with factors such as younger age (<50 years) at presentation (P = 0.04, Pearson's chi squared test) and the presence of secondary pterygium (P = 0.050) correlated significantly with the recurrence of pterygium. There was a definite trend of decreased recurrence seen with lower grade of pterygium (P = 0.06). The recurrence rate in our study was 8.8%. These results suggest that a younger age coupled with a higher grade of pterygium as well as a history of multiple recurrences leads to increased susceptibility of recurrence after surgical excision. Early diagnosis and appropriate management will lead to a better prognosis. abstract_id: PUBMED:38031689 Clinical characteristics and management outcomes in isolated nail lichen planus: A retrospective case series. Background Isolated nail lichen planus (NLP) without mucocutaneous involvement is rare. Literature about the clinical profile and management is scarce. Aims/Objective We attempted to characterize the clinico-demographic profile and analyze the management outcome of patients with isolated nail lichen planus.
Methods Records of 15 patients were evaluated, and their demographic profile, clinical features of the nail matrix and nail bed disease, onychoscopy findings, histopathological features, treatment given, and follow-up progress were analysed. Results Data from 15 patients were collected. The mean age of the patients was 28.6 ± 19.0 years (range 3.5 years to 56 years). The gender ratio was 0.8 with 7 males and 8 females. The average disease duration at presentation was 2.8 ± 2.6 years (6 months-9 years). The average number of nails involved was 16.4 ± 4.6 (8-20 nails). All 20 nails were involved in 9 (60%) patients. Nail matrix -involvement was seen in all, with onychorrhexis being the most common manifestation, which was seen in 11 (73.3%) patients. Nail bed involvement was seen in 11 (73.3%) patients, with onycholysis being the most common presentation. Severe nail disease was seen in 7 (46.7%) patients, and 5 (33.3%) had pterygium involving an average of 3.4 nails. Moderate to good improvement was observed in 9 (60%) patients after an average of 6.1 ± 2.4 (3-9) treatment sessions with intramuscular and intramatricial triamcinolone acetonide injection. Of these, 2/9 (22.2%) developed disease recurrence in a few nails after an interval of 1 and 1.5 years, respectively. Two patients achieved complete clinical cures that persisted beyond 2 years of follow-up. Limitations Retrospective nature of the series and the small sample size are the major limitations. Conclusion The risk of permanent disfigurement is high in NLP and calls for an early diagnosis and prompt treatment. Intralesional and intramuscular steroids are first-line therapeutic options depending upon the number of nails involved. abstract_id: PUBMED:33975560 Characteristics and recurrence of pterygium in Saudi Arabia: a single center study with a long follow-up. Background: This study described the clinical features of patients with pterygium and analyzed the recurrence rate of conjunctival autografting alone, conjunctival autografting combined with intraoperative mitomycin C, and amniotic membrane grafting. Methods: A retrospective cohort study of primary pterygium was conducted between January 2017 and February 2020. Factors associated with pterygium severity and recurrence were analyzed by univariate analysis and logistic regression models. Results: The study included 292 patients with an average age of 53.3 ± 14.1 years, while the number of operated cases was 94. Pterygia involving the cornea were observed in 55 % of the cases. The overall rate of recurrence for the three procedures was 17 %. The average time of recurrence was 14.2 ± 11.9 months, with 37 % of the recurrences occurring after the first year. The only factor associated with a significant risk of recurrence was dry eye disease in both univariate (p = 0.021) and multivariate analysis (p = 0.026). The recurrence rates following conjunctival autografting with and without mitomycin C were 15.6 and 15.8 %, respectively. The recurrence rate following the amniotic membrane graft was twofold (OR= 2.02) (27 %) that following the conjunctival autograft (15.8 %). Conclusions: The only factor associated with the recurrence of pterygium was dry eye disease. More than one-third of recurrences developed after the first year, which stresses the importance of a long follow-up. The recurrence rate in our study following conjunctival graft was slightly higher compared to the literature mainly due to differences in study areas, populations, and follow-up periods. 
abstract_id: PUBMED:32174734 Pterygium excision with conjunctival autograft fixed with sutures, glue, or autologous blood. Aims And Objectives: This retrospective study compared the surgical outcome of pterygium excision with conjunctival autograft fixed with sutures, tissue glue or autologous blood in relation to recurrence rate and surgical complications. Materials And Methods: Surgical records of 148 patients operated for excision of primary nasal pterygium with conjunctival autograft were reviewed retrospectively for the period between January 2015 and June 2018. Based on the surgical technique used to fix the graft, patients were divided into three groups. In Group A, 8-0 Vicryl sutures were used to fix the graft in 90 patients. In Group B, fibrin glue was used to fix the graft in 23 patients. In Group C, autologous blood was used to fix the graft in 35 patients. Patients who were operated on by a single surgeon and had a minimum of six months of follow-up were included in the study. Results: Group A had recurrence in 7 cases (7.78%), whereas Groups B and C had no recurrence. However, in Group C two patients (5.71%) lost their graft. The overall recurrence rate in the study was 4.72%. Conclusion: Among the three techniques used in the study, recurrence was seen in the suture group and the autologous blood group had graft loss. The fibrin glue group was free of complications. abstract_id: PUBMED:8664228 Surgery for pterygium using a conjunctival pedunculated flap slide. Eight hundred and eighty patients (913 eyes) with primary pterygium who were surgically treated from 1983-93 were followed up for 5.7 years on average. Based on the large number of cases and a 10-year period of practice, it was found that pterygium excised with a pedunculated conjunctival flap slide was effective and safe in the treatment of primary pterygium. The recurrence rate of 1.6% (15 out of 913 eyes) in this series compared favourably with other reports. The characteristics and techniques concerning the operating process are described in detail. abstract_id: PUBMED:26060460 Young patient's age determines pterygium recurrence after surgery. Background: It is not clear whether demographic or pterygium characteristics or limbal stem cell deficiency determine pterygium recurrence after surgery. Purpose: To determine whether demographic factors, pterygium characteristics, or limbal stem cell deficiency determine pterygium recurrence after excision. Methods: Of 190 patients operated on and followed up for 6 months, 101 and 89 underwent free conjunctival autotransplant (CAT) or limbal conjunctival autotransplant (LCAT), respectively. The age, gender, occupation, grade of pterygium extent and degree of fleshiness, and laterality were compared between recurrent and non-recurrent pterygia. Multivariate analysis was performed to determine the predictors of pterygium recurrence. Recurrence rates after surgery were compared between CAT and LCAT. Results: The age range of the 190 patients was 22-65 years, mean ± SD 46.4 ± 10.8 years. Pterygium recurred in 52 (27.4%). Thirty-nine (75%) of 52 patients with pterygia that recurred were aged <50 years (young) vs. 72 (52%) of the 138 patients with no recurrence; odds ratio (OR) = 1.54; 95% confidence interval (95% CI) = 0.70-3.36; p = 0.28. Thirty-one (60%) of 52 participants with post-surgical recurrent pterygia had large pre-operative pterygium (grade ≥3) vs. 130 (94%) of 138 patients with large pterygia that did not recur; OR = 0.11; 95% CI = 0.04-0.28; p < 0.001.
Of 101 patients undergoing CAT, 29 (28.7%) experienced recurrence vs. 23 (25.8%) of 89 undergoing LCAT; p = 0.66. Conclusions: Young age seems to be associated with pterygium recurrence after excision followed by conjunctival graft. Large pterygia were protective. Answer: Based on the information provided in the abstracts, there does not appear to be a significant correlation between the histopathologic characteristics of pterygium and recurrence rates. A study included in the abstracts (PUBMED:23335315) specifically investigated the relationship between histopathologic features such as inflammation intensity, degree of vascularization, and fibrinoid change in primary and recurrent pterygium groups. The results showed no significant difference in these histopathologic characteristics between patients with primary pterygium and those with recurrent pterygium. Additionally, within the primary pterygium group, no significant difference was found between patients with and without recurrences concerning these histopathologic features. Similarly, no significant difference was detected in the recurrent pterygium group between patients with and without recurrences. The conclusion drawn from this study was that no significant correlation between the histology of pterygium and recurrence rate could be established.
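Several of the abstracts above summarize recurrence comparisons as odds ratios with 95% confidence intervals (for example, OR = 2.02 for amniotic membrane grafts versus conjunctival autografts, or OR = 1.54 for younger age). As a hedged illustration of the arithmetic behind such figures, the short sketch below computes an odds ratio and a Woolf (logit-based) confidence interval from a 2x2 table; the counts are placeholders, not data from any of the cited studies.

```python
# Illustrative sketch of odds-ratio arithmetic; the counts are made-up placeholders.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a/b = recurrences/non-recurrences in group 1, c/d = in group 2.
    Returns (OR, lower, upper) using Woolf's logit-based confidence interval."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Placeholder counts: 8/30 recurrences after procedure X vs. 5/40 after procedure Y
print(odds_ratio_ci(8, 22, 5, 35))
```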
Instruction: Citrulline levels in a paediatric age group: does measurement on dried blood spots have additional value? Abstracts: abstract_id: PUBMED:21129371 Citrulline levels in a paediatric age group: does measurement on dried blood spots have additional value? Background: Citrulline is considered to be a marker of absorptive enterocyte mass. Citrulline levels can be measured in plasma or dried blood spot (DBS) samples. The purpose of this study is to calculate reference intervals for plasma and DBS citrulline concentrations in children and to examine the effect of age and gender. Methods: In 151 healthy subjects ranging from 1 month to 20 years of age, plasma and DBS citrulline concentration were determined by using Liquid Chromatography-tandem Mass Spectrometry. Citrulline concentrations were examined in relation to age and gender. Reference values were calculated according to the guidelines of the International Federation of Clinical Chemistry and the National Committee on Clinical Laboratory Standards. Results: No significant influence of age and gender could be discerned on plasma or DBS citrulline concentration. In children, the reference intervals for citrulline bounded by the 2.5 and 97.5 percentiles are 13.31-69.05 μmol/L and 23.70-49.04 μmol/L for plasma and DBS samples respectively. Conclusions: The reference intervals for citrulline levels in healthy children are widely dispersed. Measuring citrulline concentrations in dried blood spots delivers no additional value to plasma measurements for the calculation of reference intervals in children. abstract_id: PUBMED:25040362 Citrulline concentration in routinely collected neonatal dried blood spots cannot be used to predict necrotising enterocolitis. Aim: Low citrulline concentration is a marker of low functional enterocyte mass, which may predispose neonates to necrotising enterocolitis (NEC). We hypothesised that citrulline could be used to assess the NEC risk that could not be accounted for by gestational age and birthweight. This study investigated whether citrulline concentrations routinely measured in neonatal dried blood spots (DBS) could predict NEC. Methods: We used national Danish registries to retrospectively identify all 361 babies born between 2003 and 2009 who were diagnosed with NEC and had a valid citrulline concentration measured from a DBS sample. The control group comprised 1083 healthy newborns, with three controls for every newborn with NEC, matched for birthweight and gestational age. Results: Neonatal dried blood spots were collected between 2 and 21 days of life, with a median of 8 days. The results showed that NEC was not associated with low citrulline concentration, either in a direct comparison between the NEC and control groups or in a multivariate logistic regression (p = 0.73). Conclusion: The findings of this study show that the citrulline concentrations found in routine DBS samples between 2003 and 2009 did not predict NEC in newborn babies. abstract_id: PUBMED:37489487 Important Lessons on Long-Term Stability of Amino Acids in Stored Dried Blood Spots. Residual heel prick Dried Blood Spots (DBS) are valuable samples for retrospective investigation of inborn metabolic diseases (IMD) and biomarker analyses. Because many metabolites suffer time-dependent decay, we investigated the five-year stability of amino acids (AA) in residual heel prick DBS. 
In 2019/2020, we analyzed 23 AAs in 2170 residual heel prick DBS from the Dutch neonatal screening program, stored from 2013-2017 (one year at +4 °C and four years at room temperature), using liquid chromatography mass-spectrometry. Stability was assessed by AA changes over the five years. Hydroxyproline could not be measured accurately and was not further assessed. Concentrations of 19 out of the remaining 22 AAs degraded significantly, ranked from most to least stable: aspartate, isoleucine, proline, valine, leucine, tyrosine, alanine, phenylalanine, threonine, citrulline, glutamate, serine, ornithine, glycine, asparagine, lysine, taurine, tryptophan and glutamine. Arginine, histidine and methionine concentrations were below the limit of detection and were likely to have been degraded within the first year of storage. AAs in residual heel prick DBS stored at room temperature are subject to substantial degradation, which may cause incorrect interpretation of test results for retrospective biomarker studies and IMD diagnostics. Therefore, retrospective analysis of heel prick blood should be done in comparison to similarly stored heel prick blood from controls. abstract_id: PUBMED:26427641 Reduced early dried blood spot citrulline levels in preterm infants with meconium obstruction of prematurity. Background: Citrulline is a non-protein amino acid synthesized in the enterocytes of the small bowel. Recent studies have reported that plasma citrulline levels correlate with functional enterocyte mass. Aims: This study aimed to determine the normal dried blood spot (DBS) citrulline levels and to determine the existence of a correlation between citrulline levels and meconium obstruction of prematurity (MOP). Study Design And Subjects: A retrospective cohort study was performed involving 285 infants born at less than 32weeks gestation who were admitted to the neonatal intensive care unit between Oct 2009 and Aug 2014. Outcome Measures: We analyzed the DBS citrulline levels, which are routinely measured via newborn screening at 7days following birth, using liquid chromatography-MS/MS. We investigated the relationship between DBS citrulline levels and clinical parameters such as gestational age (GA), body measurements at birth, gender, or the presence or absence of either necrotizing enterocolitis or MOP. Results: A total of 229 infants with a median GA of 29.6weeks and a median birth weight of 1160g were included. DBS citrulline levels were not associated with GA, body measurements at birth or gender. DBS citrulline levels were significantly decreased when patients presented with MOP (p=0.037). Conclusions: Early DBS citrulline levels were not associated with either GA or body measurements at birth but were reduced among preterm infants with MOP compared with the control infants. These results may be indicative of abnormal fetal intestinal development and reduced functional enterocyte mass among preterm infants with MOP. abstract_id: PUBMED:30581393 A Subset of Patients With Autism Spectrum Disorders Show a Distinctive Metabolic Profile by Dried Blood Spot Analyses. Autism spectrum disorder (ASD) is currently diagnosed according to behavioral criteria. Biomarkers that identify children with ASD could lead to more accurate and early diagnosis. ASD is a complex disorder with multifactorial and heterogeneous etiology supporting recognition of biomarkers that identify patient subsets. 
We investigated an easily testable blood metabolic profile associated with ASD diagnosis using high throughput analyses of samples extracted from dried blood spots (DBS). A targeted panel of 45 ASD analytes including acyl-carnitines and amino acids extracted from DBS was examined in 83 children with ASD (60 males; age 6.06 ± 3.58, range: 2-10 years) and 79 matched, neurotypical (NT) control children (57 males; age 6.8 ± 4.11 years, range 2.5-11 years). Based on their chronological ages, participants were divided into two groups: younger or older than 5 years. Two-sided T-tests were used to identify significant differences in measured metabolite levels between groups. A Naïve Bayes algorithm trained on the identified metabolites was used to profile children with ASD vs. NT controls. Of the 45 analyzed metabolites, nine (20%) were significantly increased in ASD patients including the amino acid citrulline and acyl-carnitines C2, C4DC/C5OH, C10, C12, C14:2, C16, C16:1, C18:1 (P < 0.001). A Naïve Bayes algorithm using the acyl-carnitine metabolites identified as significantly abnormal showed the highest performance for classifying ASD in children younger than 5 years (n: 42; mean age 3.26 ± 0.89) with 72.3% sensitivity (95% CI: 71.3;73.9), 72.1% specificity (95% CI: 71.2;72.9) and a diagnostic odds ratio of 11.25 (95% CI: 9.47;17.7). Re-test analyses as a measure of validity showed an accuracy of 73% in children with ASD aged ≤ 5 years. This easily testable, non-invasive profile in DBS may support recognition of metabolic ASD individuals aged ≤ 5 years and represents a potential complementary tool to improve diagnosis at earlier stages of ASD development. abstract_id: PUBMED:33210481 Effects of delivery and storage conditions on concentrations of amino acids and carnitines in neonatal dried blood spots. Objective: To explore effects of different delivery and storage conditions on concentrations of amino acids and carnitines in neonatal dried blood spots (DBS), so as to provide evidence for improving accurate and reliable detection by tandem mass spectrometry. Methods: A total of 1 254 616 newborn DBS samples in the Newborn Screening Center of Zhejiang Province were delivered and stored at room temperature (group A, n=338 467), delivered by cold-chain logistics system and stored at low temperature (group B, n=480 021), or delivered by cold-chain logistics system and stored at low temperature and low humidity (group C, n=436 128), respectively. The concentrations of amino acids and carnitines in DBS were detected by tandem mass spectrometry. Data analysis was performed by SPSS 24.0 to explore the influence of temperature and humidity on the concentrations of amino acids and carnitines. Results: The concentrations of amino acids and carnitines in the three groups were skewed, and the differences in amino acid and carnitine concentrations among groups were statistically significant (all P < 0.01). The median concentration of tyrosine was lower in group A than those in group B and group C by 18% and 16%, respectively, while there was no significant difference between the last two groups. The median concentrations of methionine were lower in group A and group B than that in group C by 15% and 11%, respectively. The median concentrations of arginine were lower in group A and group B than that in group C by 12% and 25%, respectively. The median concentration of free carnitine (C0) was higher in group A than that in group C by 12%, while there was no significant difference between group A and group B.
The median concentrations of acetylcarnitine (C2), propionyl carnitine (C3), C3DC+C4OH, C5DC+C6OH and hexadecanoyl carnitine (C16) were lower in group A than those in group B and group C by 21%-64%. The concentrations of other amino acids and acylcarnitines differed little among three groups. The monthly median coefficients of variation of other amino acids and carnitines in group A were higher than those in group B and group C except for citrulline, C4DC+C5OH and isovalerylcarnitine (C5). Conclusions: Cold-chain logistics system and storage in low temperature and low humidity can effectively reduce degradation of some amino acids and carnitines in DBS, improve the accuracy and reliability of detection, and thus ensures the quality of screening for neonatal metabolic diseases. abstract_id: PUBMED:36832540 Neuroactive Amino Acid Profile in Autism Spectrum Disorder: Results from a Clinical Sample. Biological bases of autism spectrum disorder (ASD) include both genetic and epigenetic causes. Patients with ASD show anomalies in the profile of certain plasma amino acids, including neuroactive amino acids. Monitoring plasma amino acids may be relevant for patient care and interventions. We evaluated the plasma amino acid profile in samples extracted from dry blood spots by electrospray ionization-tandem mass spectrometry. Fourteen amino acids and eleven amino acid ratios were examined in patients with ASD and intellectual disability (ID), and neurotypical control subjects (TD). The amino acid profile in the ASD group showed reduced levels of ornithine (p = 0.008), phenylalanine (p = 0.042) and tyrosine (p = 0.013). The statistically significant amino acid ratios were Leu+Val/Phe+Tyr (p = 0.002), Tyr/Leu (p = 0.007) and Val/Phe (p = 0.028), such differences remaining significant only in the comparison between ASD and TD. Finally, a positive correlation emerged between the score of the restricted and repetitive behavior on ADOS-2 and the citrulline levels in the ASD group (p = 0.0047). To conclude, patients with ASD may show a distinguishable metabolic profile useful for studying their metabolic pathways in order to develop screening tests and targeted therapies. abstract_id: PUBMED:21114968 Effect of specimen storage conditions on newborn dried blood spots used to assess Toxoplasma gondii immunoglobulin M (IgM). Background: Newborn screening programs store-under varying conditions-residual dried blood spots (DBS). Residual DBS were used to investigate the contribution of congenital infection with Toxoplasma gondii to the etiology of hydrocephalus and as a key step, we assessed the effect of storage conditions on the stability of newborn screening biomarkers. Methods: Infants with hydrocephalus (410 cases) were identified using population-based birth defects surveillance systems in California, North Carolina, and Texas. Infants without birth defects (448 controls) were randomly selected from the same geographic areas and time periods. California stores DBS with controlled temperature, while North Carolina and Texas store DBS under ambient conditions. After removal of personal identifiers, DBS were tested for Toxo-specific immunoglobulin-M (Toxo-IgM). Because of poor elution of DBS stored in ambient conditions, additional biomarkers were tested on a specimen subset. Results: Among 858 DBS tested, Toxo-IgM was found in 3 cases and no controls from California (N=515) and in no specimens from North Carolina or Texas (N=343). 
Among the 98 specimens tested for selected biomarkers, statistically significant differences were found for California vs. combined North Carolina and Texas DBS (thyroid stimulating hormone, phenylalanine, methionine, leucine and citrulline p < 0.0001; tyrosine and valine p < 0.001). Conclusions: Storage conditions for residual DBS had an effect on the ability to extract, recover, and accurately measure Toxo-IgM and other biomarkers from the filter paper matrix. abstract_id: PUBMED:33190319 The role of orotic acid measurement in routine newborn screening for urea cycle disorders. Urea cycle disorders (UCDs), including OTC deficiency (OTCD), are life-threatening diseases with a broad clinical spectrum. Early diagnosis and initiation of treatment based on a newborn screening (NBS) test for OTCD with high specificity and sensitivity may contribute to reduction of the significant complications and high mortality. The efficacy of incorporating orotic acid determination into routine NBS was evaluated. Combined measurement of orotic acid and citrulline in archived dried blood spots from newborns with urea cycle disorders and normal controls was used to develop an algorithm for routine NBS for OTCD in Israel. Clinical information and genetic confirmation results were obtained from the follow-up care providers. About 1,147,986 newborns underwent routine NBS including orotic acid determination, 25 of whom were ultimately diagnosed with a UCD. Of 11 newborns with OTCD, orotate was elevated in seven but normal in two males with early-onset and two males with late-onset disease. Orotate was also elevated in archived dried blood spots of all seven retrospectively tested historical OTCD patients, only three of whom had originally been identified by NBS with low citrulline and elevated glutamine. Among the other UCDs, three CPS1D cases emerged, along with an additional three retrospective CPS1D cases, although CPS1D is otherwise reported as a very rare condition. Combined levels of orotic acid and citrulline in routine NBS can enhance the detection of UCDs, especially by increasing the screening sensitivity for OTCD and differentiating it from CPS1D. Our data and the negligible extra cost for orotic acid determination might contribute to the discussion on screening for proximal UCDs in routine NBS. abstract_id: PUBMED:17272485 Long-term stability of amino acids and acylcarnitines in dried blood spots. Background: Dried blood filter cards, collected for newborn screening, are often stored for long periods of time. They may be suitable for the retrospective diagnosis of inborn errors of metabolism, but no data are currently available on the long-term stability of amino acids and acylcarnitine species. Methods: We analyzed amino acids and acylcarnitines by tandem mass spectrometry in 660 anonymous, randomly selected filter cards from 1989 through 2004. We assessed long-term stability of metabolites by linear regression and estimated the annual decrease of concentration for each metabolite. Results: Concentrations of free carnitine increased by 7.6% per year during the first 5 years of storage and decreased by 1.4% per year thereafter. Alanine, arginine, leucine, methionine, and phenylalanine decreased by 6.5%, 3.3%, 3.1%, 7.3%, and 5.7% per year, respectively. Acetylcarnitine, propionylcarnitine, citrulline, glycine, and ornithine decreased by 18.5%, 27.4%, 8.1%, 14.7%, and 16.3% per year during the first 5 years, respectively; thereafter the decline was more gradual.
Tyrosine decreased by 1.7% per year during the first 5 years and 7.9% per year thereafter. We could not analyze medium- and long-chain acylcarnitine species because of low physiological concentrations. Conclusions: Estimation of the annual decrease of metabolites may allow for the retrospective diagnosis of inborn errors of metabolism in filter cards that have been stored for long periods of time. Answer: The measurement of citrulline levels in dried blood spots (DBS) does not appear to offer additional value over plasma measurements for establishing reference intervals in healthy children. A study conducted on 151 healthy subjects ranging from 1 month to 20 years of age found no significant influence of age and gender on plasma or DBS citrulline concentration. The reference intervals for citrulline levels in children were widely dispersed, and the conclusion was that measuring citrulline concentrations in DBS does not deliver additional value to plasma measurements for the calculation of reference intervals in children (PUBMED:21129371). Furthermore, the use of citrulline concentration in routinely collected neonatal DBS to predict necrotizing enterocolitis (NEC) was investigated and found not to be associated with NEC. This study, which included 361 babies diagnosed with NEC and 1083 healthy newborns as controls, showed that citrulline concentrations in routine DBS samples did not predict NEC in newborn babies (PUBMED:25040362). However, it is important to note that the stability of amino acids, including citrulline, in DBS can be affected by storage conditions. A study on the long-term stability of amino acids in stored DBS found that concentrations of most amino acids, including citrulline, degraded significantly over five years, which may impact the interpretation of test results for retrospective biomarker studies and inborn metabolic disease diagnostics (PUBMED:37489487). In summary, while DBS are a convenient and less invasive sample type, the current evidence suggests that for the purpose of establishing reference intervals for citrulline levels in healthy children, DBS measurements do not provide additional value compared to plasma measurements. Additionally, the stability of citrulline in DBS can be compromised over time, which is an important consideration for long-term storage and retrospective analyses.
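The reference-interval figures discussed above (bounds at the 2.5th and 97.5th percentiles of values from 151 healthy children) reflect the usual nonparametric approach described in IFCC/NCCLS guidance. The sketch below is a minimal illustration of that calculation; the simulated citrulline values are placeholders, and the guidelines additionally require adequate sample sizes and confidence intervals around the limits, which are omitted here.

```python
# Minimal illustration of a nonparametric reference interval (2.5th-97.5th percentiles).
# The simulated values are placeholders, not real measurements.
import numpy as np

rng = np.random.default_rng(0)
citrulline_umol_l = rng.normal(loc=30.0, scale=8.0, size=151)  # hypothetical healthy cohort

lower, upper = np.percentile(citrulline_umol_l, [2.5, 97.5])
print(f"reference interval: {lower:.1f}-{upper:.1f} umol/L")
```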
Instruction: Are micromegakaryocytes specific for refractory cytopenia of childhood (RCC)? Abstracts: abstract_id: PUBMED:27285854 Are micromegakaryocytes specific for refractory cytopenia of childhood (RCC)? A study of 38 pediatric patients with thrombocytopenia unrelated to RCC. Background: Micromegakaryocytes (microMKs) are considered the most reliable dysplastic feature for myelodysplastic syndrome (MDS), particularly refractory cytopenia of childhood (RCC); there is no minimal threshold for the diagnosis of RCC. Since most RCC patients present with thrombocytopenia, the presence of microMKs should raise concern for MDS/RCC. This study attempted to investigate the prevalence of microMKs and associated marrow fibrosis in patients with thrombocytopenia unrelated to MDS/RCC and the need for establishing a threshold for microMKs for the diagnosis of MDS/RCC. Design: Bone marrow biopsies of pediatric patients with thrombocytopenia unrelated to RCC were examined for microMKs and fibrosis by CD61 immunohistochemical and reticulin stains respectively. Result: Thirty eight patients (1-18 years old) were included: 33 immune thrombocytopenia (ITP), 3 chronic thrombocytopenia, and 2 inherited macrothrombocytopenia. Fourteen cases (37%) had microMKs; four cases showed increased marrow fibrosis associated with microMKs (two had ITP and two had macrothrombocytopenia). All patients are alive and none developed MDS (follow up: 3months to 4 years). Conclusion: MicroMKs can be seen in pediatric patients with thrombocytopenia unrelated to RCC. Hence the mere presence of microMKs is insufficient for the diagnosis of RCC in the pediatric population, and a quantitative threshold needs to be established. abstract_id: PUBMED:28679999 Bone marrow failure in childhood: central pathology review of a nationwide registry. Refractory cytopenia of childhood (RCC) was proposed as a provisional entity in the 2008 WHO classification of myelodysplastic syndromes (MDS). It is defined as a childhood MDS featuring persistent cytopenia without increase blasts in bone marrow (BM) or peripheral blood (PB). Because the majority of RCC cases feature hypocellularity and pancytopenia, it is quite challenging to differentiate RCC from acquired aplastic anemia (AA) and many kinds of inherited bone marrow failure syndromes (IBMFS). Diagnosis of RCC requires BM histology of characteristic features such as isolated erythroid islet with left shift, abnormal localization and micromegakaryocytes. The Japanese Society of Pediatric Hematology/Oncology has opened the central registry review system since 2009 to evaluate childhood cases of bone marrow failure (BMF). It has reviewed cytology and BM pathology of all registered BMF cases, which number more than 1,700. In the evaluation of BMF, BM pathology is important to assess the mechanism of hematopoiesis. Pathological dysplasia should be differentiated from cytological dysplasia. A central review system is important for rare diseases, such as pediatric BMF. Standardization of pathological diagnosis should be established upon consensus findings, descriptions, and diagnostic approaches. In this review, the pathology of pediatric BMF syndromes is summarized. abstract_id: PUBMED:30062702 Differences in the bone marrow histology between childhood myelodysplastic syndrome with multilineage dysplasia and refractory cytopenia of childhood without multilineage dysplasia. 
Aims: Refractory cytopenia of childhood (RCC) is subdivided into myelodysplastic syndrome with multilineage dysplasia (MDS-MLD) and RCC without (w/o) multilineage dysplasia (RCC without MLD). Although RCC is a histomorphological distinct entity, the bone marrow (BM) histology of RCC is not yet characterised in relation to multilineage dysplasia. We investigated the BM histological features of RCC to clarify the characteristics of BM histology of MDS-MLD in childhood compared to RCC without MLD. Methods And Results: The BM histology and cytology in 60 RCC patients from the nationwide registry of Japanese Childhood AA-MDS Study Group were reviewed retrospectively. Although a thorough genetic assessment, including GATA2 and/or SAMD9, was not performed, inherited BM failure disorders were excluded by a cytogenetic test, a chromosome fragility test and a telomere length measurement along with careful clinical assessments. Among the 60 patients, 20 (33%) of MDS-MLD and 40 (67%) of RCC w/o MLD were classified according to their BM cytology. We then investigated the BM histological features and compared them between the two groups. The BM cellularity, distribution pattern of haematopoiesis, frequency of left-shifted granulopoiesis, numbers of micromegakaryocytes and p53 immunostaining-positive cells were significantly different between the groups. The BM histology of MDS-MLD in childhood showed higher cellularity, the more common occurrence of diffuse distribution pattern, more frequently left-shifted granulopoiesis and more micromegakaryocytes and p53 immunostaining-positive cells than RCC without MLD. Conclusions: Our results showed that MDS-MLD in childhood had a characteristic BM histology compared to RCC without MLD. The clinical relevance of MDS-MLD in childhood needs to be evaluated. abstract_id: PUBMED:22160017 Classification of childhood aplastic anemia and myelodysplastic syndrome. Hypoplastic BM disorders in children and adolescents comprise a broad spectrum of disorders. Acquired severe aplastic anemia (SAA), refractory cytopenia of childhood (RCC), a subtype of myelodysplastic syndrome (MDS), and inherited BM failure (IBMF) disorders are the main and most difficult hematological differential diagnoses. Whereas IBMF disorders can often be diagnosed by their clinical features and/or underlying genetic aberrations, the morphological distinction between SAA and hypocellular RCC has been controversial. The histopathological pattern of RCC consists of islands of immature erythroid precursors accompanied by sparsely distributed granulocytic cells. Megakaryocytes are significantly decreased or absent and, rarely, micromegakaryocytes are detected on immunohistochemistry. Because fatty tissue between areas of hematopoiesis can mimic SAA, 2 biopsies are recommended to facilitate the detection of representative BM spaces. Recent data indicate that the response to immunosuppressive therapy is inferior in RCC compared with SAA. Furthermore, approaches to allogeneic hematopoietic transplantation differ. Controlled prospective clinical studies in patients with hypoplastic BM failure disorders will require comprehensive guidelines for diagnosing SAA, RCC, and the different IBMF disorders. 
abstract_id: PUBMED:32610381 Differences in the bone marrow histopathology between pediatric acquired aplastic anemia and refractory cytopenia of childhood. Objective: To study the differences in the bone marrow histopathology between acquired aplastic anemia (AAA) in children and refractory cytopenia of childhood (RCC) to facilitate their diagnoses and differential diagnosis. Methods: The clinical data and bone marrow biopsies of the RCC and AAA cases diagnosed from January 2008 to December 2018 in Xinhua Hospital, Shanghai Jiaotong University School of Medicine and Shanghai Children's Medical Center affiliated to Shanghai Jiaotong University School of Medicine were analyzed. Results: A total of 71 AAA and 79 RCC cases were analyzed. There were 52 males and 19 females, with ages ranging from 1.0 to 15.0 years (median, 8.9 years), in the AAA group, and 53 males and 26 females, with ages ranging from 0.5 to 16.0 years (median, 5.0 years), in the RCC group. All the biopsy specimens of AAA patients had severe hypocellularity; the cellularity of 88.7% (63/71) of specimens was under 5.0%, and that of 11.3% (8/71) was 5%-24%. None of the AAA specimens showed any dysplastic change. All the biopsy specimens of RCC patients had hypocellularity, including 94.9% (75/79) of the specimens with a cellularity of 5%-50%. All of the RCC specimens showed a patchy distribution of hematopoiesis. A dysplastic change of erythroid cells and micromegakaryocytes was found in 40.5% (32/79) and in 60.8% (48/79) of the RCC cases, respectively. Conclusions: The degree of hypocellularity, the distribution pattern of hematopoiesis, the cell composition and localization of erythroid cell clusters and the appearance of micromegakaryocytes could help the diagnosis and differential diagnosis of AAA and RCC. abstract_id: PUBMED:36445482 The International Consensus Classification (ICC) of hematologic neoplasms with germline predisposition, pediatric myelodysplastic syndrome, and juvenile myelomonocytic leukemia. Updating the classification of hematologic neoplasia with germline predisposition, pediatric myelodysplastic syndrome (MDS), and juvenile myelomonocytic leukemia (JMML) is critical for diagnosis, therapy, research, and clinical trials. Advances in next-generation sequencing technology have led to the identification of an expanding group of genes that predispose to the development of hematolymphoid neoplasia when mutated in germline configuration and inherited. This review encompasses recent advances in the classification of myeloid and lymphoblastic neoplasia with germline predisposition, summarizing important genetic and phenotypic information, relevant laboratory testing, and pathologic bone marrow features. Genes are organized into three major categories including (1) those that are not associated with constitutional disorder and include CEBPA, DDX41, and TP53; (2) those associated with thrombocytopenia or platelet dysfunction including RUNX1, ANKRD26, and ETV6; and (3) those associated with constitutional disorders affecting multiple organ systems including GATA2, SAMD9, and SAMD9L, inherited genetic mutations associated with classic bone marrow failure syndromes and JMML, and Down syndrome. A provisional category of germline predisposition genes is created to recognize genes with growing evidence that may be formally included in future revised classifications as substantial supporting data emerges.
We also detail advances in the classification of pediatric myelodysplastic syndrome (MDS), expanding the definition of refractory cytopenia of childhood (RCC) to include early manifestation of MDS in patients with germline predisposition. Finally, updates in the classification of juvenile myelomonocytic leukemia are presented which genetically define JMML as a myeloproliferative/myelodysplastic disease harboring canonical RAS pathway mutations. Diseases with features overlapping with JMML that do not carry RAS pathway mutations are classified as JMML-like. The review is based on the International Consensus Classification (ICC) of Myeloid and Lymphoid Neoplasms as reported by Arber et al. (Blood 140(11):1200-1228, 2022). abstract_id: PUBMED:34729770 Prospective validation of the provisional entity of refractory cytopenia of childhood, proposed by the World Health Organization. In 2008, the World Health Organization proposed a new entity of childhood myelodysplastic syndrome (MDS), which was referred to as refractory cytopenia of childhood (RCC). However, whether this morphological classification reflects clinical outcomes remains unclear. We performed a prospective evaluation of bone marrow morphology in 252 children with acquired bone marrow failure between 2009 and 2013. Of 252 patients, 63 were diagnosed with aplastic anaemia (AA), 131 with RCC without multilineage dysplasia (RCC-w/o-MLD) and 58 with RCC with MLD (RCC-MLD). One patient with AA, three with RCC-w/o-MLD and nine with RCC-MLD presented with chromosomal abnormalities at diagnosis (P = 0·001). The response rates to immunosuppressive therapy (IST) at 6 months and the cumulative incidence of clonal evolution at 5 years did not significantly differ among the three groups. A multivariate analysis revealed that the morphological classification of RCC-MLD was a significant risk factor for secondary graft failure after haematopoietic cell transplantation (HCT) (P = 0·003). In view of these findings, RCC could be divided into two categories, RCC-w/o-MLD and RCC-MLD, because children with this condition exhibited a distinct morphology, frequent chromosomal abnormalities at diagnosis and a high frequency of secondary graft failure after HCT. abstract_id: PUBMED:26427728 Comparison of immune manifestations between refractory cytopenia of childhood and aplastic anemia in children: A single-center retrospective study. This retrospective single-center study assessed the incidence and clinical features of immune manifestations of refractory cytopenia of childhood (RCC) and childhood aplastic anemia (AA). We evaluated 72 children with RCC and 123 with AA between February 2008 and March 2013. RCC was associated with autoimmune disease in 4 children, including 1 case each with autoimmune hemolytic anemia, rheumatoid arthritis, systemic lupus erythematosus, and anaphylactoid purpura. No children with AA were diagnosed with autoimmune diseases. Immune abnormalities were common in both RCC and AA; the most significant reductions were in the relative numbers of CD3-CD56+ subsets found in RCC. Despite the many similar immunologic abnormalities in AA and RCC, the rate of autoimmune disease was significantly lower in childhood AA than RCC (p=0.008, χ2=6.976). The relative numbers of natural killer cells were significantly lower in RCC patients than AA patients. By month 6, there was no significant difference in autoimmune manifestations between RCC and AA in relation to the response to immunosuppressive therapy (p=0.907, χ2=0.014). 
The large overlap of analogous immunologic abnormalities indicates that RCC and childhood AA may share the same pathogenesis. abstract_id: PUBMED:36198526 Morphological diagnosis of childhood bone marrow failure syndromes. In 2008, the World Health Organization proposed a provisional entity of childhood myelodysplastic syndrome (MDS) without a blast increase, which was referred to as refractory cytopenia of childhood (RCC). We performed a central review of bone marrow morphology in 252 children with acquired bone marrow failure syndromes to clarify the clinical relevance of RCC. RCC was divided into two categories, namely, RCC without multilineage dysplasia (MLD) and RCC with MLD, which is similar to MDS with MLD in adult MDS. Furthermore, the clinical outcomes were investigated for cases diagnosed with aplastic anemia, RCC without MLD, and RCC with MLD. The response rates to immunosuppressive therapy and the incidence of new chromosomal aberrations did not significantly differ among the three groups. RCC with MLD can be adopted within childhood MDS, since children with this condition exhibited frequent chromosomal aberrations at the time of diagnosis and a high frequency of secondary graft failure after hematopoietic cell transplantation. abstract_id: PUBMED:25580823 Pediatric aplastic anemia and refractory cytopenia: A retrospective analysis assessing outcomes and histomorphologic predictors. Pediatric acquired aplastic anemia (AA) is a bone marrow disorder that is difficult to distinguish from inherited bone marrow failure syndromes and hypocellular refractory cytopenia of childhood (RCC). Historically, patients with hypocellular RCC have been given the diagnosis of AA. To assess the clinical and histologic distinction between RCC and AA, we performed a retrospective analysis of 149 patients previously diagnosed with AA between 1976 and 2010. We evaluated event-free survival (EFS), overall survival (OS), response rates to immunosuppressive therapy, treatment-related toxicities and clonal evolution. The 5-year EFS and OS were 50.8% ± 5.5% and 73.1% ± 4.7%, respectively. Patients with very severe AA had worse OS compared to patients with severe and moderately severe AA. Seventy-two patients had diagnostic pathology specimens available for review. Three pediatric hematopathologists reviewed and reclassified these specimens as AA, RCC or Other based on the 2008 WHO criteria. The concordance between pathologists in the diagnosis of AA or RCC was modest. RCC was associated with a trend toward improved OS and EFS and was not prognostic of immunosuppression therapy treatment failure. There was a low rate of clonal evolution exclusively associated with moderately severe AA. Our findings indicate that a diagnosis of RCC is difficult to establish with certainty and does not predict outcomes, calling into question the reproducibility and clinical significance of the RCC classification and warranting further studies. Answer: Micromegakaryocytes (microMKs) are not specific for refractory cytopenia of childhood (RCC). While microMKs are considered a reliable dysplastic feature for myelodysplastic syndrome (MDS), particularly RCC, and their presence should raise concern for MDS/RCC in patients presenting with thrombocytopenia, they can also be seen in pediatric patients with thrombocytopenia unrelated to RCC (PUBMED:27285854).
This suggests that the mere presence of microMKs is insufficient for the diagnosis of RCC in the pediatric population, and a quantitative threshold for microMKs needs to be established for the diagnosis of MDS/RCC (PUBMED:27285854). Moreover, the study found that 37% of cases with thrombocytopenia unrelated to RCC had microMKs, indicating that microMKs are not exclusive to RCC (PUBMED:27285854).
Instruction: Can a school-based intervention increase children's fruit and vegetable consumption in the home setting? Abstracts: abstract_id: PUBMED:24215014 Can a school-based intervention increase children's fruit and vegetable consumption in the home setting? Aims: Although previous research has shown the Food Dudes programme increases children's fruit and vegetable consumption at school, the evidence for its effectiveness in the home setting is more equivocal. The school environment is identified as a logical setting for targeting children's fruit and vegetable consumption; however, to produce sustainable changes in behaviour, it is equally important that interventions target consumption in the home setting. This study aimed to establish whether the Food Dudes intervention can influence home consumption of fruit and vegetables and the extent to which any changes in eating behaviour following the intervention were maintained in the long term. Methods: A total of 34 children aged 4-11 years from eight primary schools (four intervention and four control groups) in the West Midlands, United Kingdom, completed a 7-day photographic food diary at baseline (prior to the intervention), a 3-month follow-up (post-intervention) and a 12-month follow-up. Results: The Food Dudes programme did not influence either short- or long-term changes in children's consumption of fruit and vegetables at home during weekdays or at the weekend. Conclusions: The Food Dudes programme had no effect on changing children's fruit and vegetable consumption in the home environment. Further development of the programme could consider how parental and home environmental factors may be combined with the principles of the Food Dudes programme to influence children's fruit and vegetable consumption in this setting. abstract_id: PUBMED:27123430 Pertinence of the recent school-based nutrition interventions targeting fruit and vegetable consumption in the United States: a systematic review. Background: Schools are the major locations for implementing children's dietary behavior-related educational or interventional programs. Recently, there has been an increase in school-based nutrition interventions. The objective of this systematic review was to review the evidence for the effectiveness of school-based nutrition interventions on fruit and vegetable consumption. Methods: PubMed was used to search for articles on school-based nutrition interventions that measured students' fruit and vegetable consumption. Our search yielded 238 articles. An article was included if it was published in a peer-reviewed journal, written in English, administered in the United States, and conducted among a population-based sample of children in Kindergarten through eighth grade. A total of 14 publications met the inclusion criteria. Results: Eight articles showed a positive effect on increasing fruit and/or vegetable consumption, while the other six did not. Several factors, including (but not limited to) intervention duration, type of theory used, style of intervention leadership, and positively affecting antecedents of fruit and vegetable consumption were compared; however, no dominant factor was found to be shared among the studies with significant findings. Given that the criteria for selection were high, the lack of consistency between interventions and positive outcomes was surprising.
Conclusion: With high levels of scrutiny and budget constraints on school nutrition, it is imperative that more research be conducted to identify the effective intervention components. abstract_id: PUBMED:36235849 Increasing Fruit and Vegetable Intake of Primary School Children in a Quasi-Randomized Trial: Evaluation of the Three-Year School-Based Multicomponent Intervention. Insufficient consumption of fruit and vegetables was found in primary school children. To address this problem, a three-year school-based multicomponent intervention was conducted in 14 primary schools in the City of Zagreb. The aim of the study was therefore to evaluate one of the primary goals of the intervention—the increase in fruit and vegetable intake among primary school children. A total of 681 children were allocated to the intervention (n = 300 in the control group and n = 381 in the intervention group). The intervention included 23 interactive classroom workshops, 10 cross-curricular activities, 13 homework challenges, visual exposure with educational posters in classrooms, parent education via the website, and the implementation of new dishes into the school food system. Fruit and vegetable intake was assessed using a semi-quantitative food frequency questionnaire at baseline and after the intervention. Data were analyzed by per-protocol analysis. The study involved 259 children (50.2% girls; age 7.7 ± 0.4 years; n = 116 in the control group and n = 143 in the intervention group) who completed a food intake frequency questionnaire at both time points. Children in the intervention group showed a significant increase (p &lt; 0.001) in total daily fruit and vegetable intake (before: 332.1 ± 164.9 g; after: 430.1 ± 186.7 g) compared to the control group (before: 350.2 ± 187.5; after: 382.6 ± 196.8) after the intervention. The increase in fruit and vegetable intake was achieved in 89% of children, while 25% more children reached the daily recommendation of 400 g. The use of the multicomponent intervention showed potential to increase fruit and vegetable intake in primary school children. abstract_id: PUBMED:33304772 Long-term impact of a school-based nutrition intervention on home nutrition environment and family fruit and vegetable intake: A two-year follow-up study. Long-term data on maintenance of intervention effects of health promotion programs targeting fruit and vegetable (F&amp;V) intake in children is lacking. We conducted a two-year follow-up of Brighter Bites, a school-based nutrition education and food co-op intervention found to be effective in increasing child intake of F&amp;V. A one-group, pre-post evaluation design was used to assess the two-year post intervention impact of the program on child and parent dietary intake and home nutrition environment. In 2016-2017 school year, we conducted a follow up of 262 parent-child dyads who had previously participated in Brighter Bites in a 2013-2015 evaluation study in six low-income Texas elementary schools. Child dietary intake was measured using a parent-reported food frequency questionnaire, and surveys measured parent F&amp;V intake, and home nutrition environment. Results of a multi-level regression analysis showed that, two years post-intervention, as compared to baseline, there was a significant increase in child intake of fruit, vegetable, and fiber, and significant decreases in total fat intake and percent daily calories from sugary beverages (p &lt; 0.05). 
Parent dietary data showed significant increases in fruit intake, and intake of F&amp;V combined (p &lt; 0.05). Changes in home nutrition environment included: increased frequency of cooking behaviors, increased usage of nutrition facts labels in making grocery purchasing decisions, and increased food availability of F&amp;V (p &lt; 0.05). This study demonstrates potential long-term sustained impact of a comprehensive school-based intervention among low-income children and their families. abstract_id: PUBMED:38269621 Mediators and moderators of the effects of a school-based intervention on adolescents' fruit and vegetable consumption: the HEIA study. Objective: To examine whether targeted determinants mediated the effects of the HEalth In Adolescents (HEIA) intervention on fruit and vegetable (FV) consumption and explore if these mediating effects were moderated by sex, parental education or weight status. Design: Cluster-randomised controlled trial. Setting: The HEIA study (2007-2009) was a Norwegian 20-month multi-component school-based intervention to promote healthy weight development. FV consumption and targeted determinants were self-reported at baseline, mid-way (8 months) and post-intervention (20 months). Participants: Adolescents (11-13-year-old) in twenty-five control schools (n 746) and twelve intervention schools (n 375). Results: At post-intervention, more adolescents in the intervention group compared with the control group had knowledge of the FV recommendations (OR: 1·4, 95 % CI 1·1, 1·9) and reported a decreased availability of vegetables at home (β: -0·1, 95 % CI -0·2, 0·0). Availability/accessibility of FV at home, availability of vegetables at dinner, taste preferences for different types of FV and knowledge of the FV recommendations were positively associated with the consumption of FV. However, none of the post-intervention determinants significantly mediated the intervention effects on FV consumption. Although no moderating influences by sex, parental education or weights status were observed on the mediating effects, exploratory analyses revealed significant moderations in the b-paths. Conclusions: Since none of the targeted determinants could explain the increase in FV consumption, it remains unclear why the intervention was effective. Reporting on a wide range of mediators and moderators in school-based interventions is needed to reveal the pathways through which intervention effects are achieved. abstract_id: PUBMED:33811943 Strategies to increase primary school children's fruit and vegetable intake during 10AM snack time. In this observational study, children's fruit and vegetable (F&amp;V) consumption during the 10AM school break was assessed for two promising strategies: a 5-day-a-week F&amp;V policy and free provision of F&amp;V presented in an attractive fruit bowl. Schools without a school food policy served as reference group. A secondary aim was to explore the feasibility of the two strategies. A total of 569 children aged 6-9 years participated. Children's F&amp;V consumption at school was assessed on two different week days via observations and weighing snack portions. Parents completed an online-questionnaire on their perceptions and experiences with the two strategies. Main outcomes were the proportion of children and frequency of eating F&amp;V, and children's F&amp;V portion sizes. Data were analysed via ANOVA and Chi-Square (p &lt; 0.05). 
Average F&amp;V consumption differed significantly between the three situations (p &lt; 0.001): 250 g in the fruit bowl situation, 130 g in the 5-day-policy situation and 60 g in the no-policy situation. The proportion of children eating F&amp;V was high for the 5-day-policy (97-98%) and fruit bowl situation (87-97%), whereas it was substantially lower (50-60%; p &lt; 0.001) in the no-policy situation. The majority of parents considered both strategies feasible. A 5-day-a-week F&amp;V policy seems an effective, feasible and structural strategy to support children's fruit and vegetable consumption at school. The fruit bowl strategy with an additional eating moment may enhance children's intake even further, although additional requirements are needed for structural implementation at school. abstract_id: PUBMED:34165028 Fruit and vegetable intake at and away from school during participation in the FIT Game. Background: The FIT Game is a multicomponent school-based incentive program aimed at increasing children's fruit and vegetable (FV) intake. There has been no previous report on how playing the game at school influences FV intake away from school. Aim: To examine children's (n=37) FV intake away from school while participating in the FIT Game program at school. Methods: FV intake away from school was assessed using the ASA24-Kids-2014 Dietary Assessment Tool. Paired samples t-tests and the generalized linear model repeated measures analysis of variance were used to examine the difference in children's mean FV intake away from school. Results: During the final three days of the FIT Game intervention, we observed no change in FV consumption away from school (p=0.30). Similarly, no differences were observed between FV intake away from school before the implementation of the FIT Game and during the final three days (p=0.81). Conclusions: The FIT Game produced a modest, statistically nonsignificant decrease in the children's FV intake away from school. Our previous report showed an increase in children's FV intake at school; thus, the net effect of the game was a significant increase in total daily FV intake. abstract_id: PUBMED:28202096 Potential in-class strategies to increase children's vegetable consumption. Objective: The Crunch&amp;Sip programme is a school-based nutrition initiative designed to increase the fruit, vegetable and water intakes of primary-school children. In recognition of the notable deficits in children's vegetable consumption, the present study explored the receptivity of school staff to a realignment of the Crunch&amp;Sip programme to feature a primary focus on vegetable consumption. This involved investigating school staff members' perceptions of relevant barriers, motivators and facilitators. Design: A multi-method approach was adopted that involved four focus groups and a survey (administered in paper and online formats) containing a mixture of open- and closed-ended items. Setting: Western Australia. Subjects: Staff from Western Australian schools participated in the focus groups (n 37) and survey (n 620). Results: School staff were strongly supportive of modifying the Crunch&amp;Sip programme to focus primarily on children's vegetable consumption, and this was generally considered to be a feasible change to implement. Possible barriers identified included children's taste preferences and a perceived lack of parental support.
Suggested strategies to overcome these barriers were education sessions for parents and children, teachers modelling vegetable consumption for their students and integrating vegetable-related topics into the school curriculum. Conclusions: School staff are likely to support the introduction of school-based nutrition programmes that specifically encourage the consumption of vegetables. Potential barriers may be overcome through strategies to engage parents and children. abstract_id: PUBMED:36330727 A home gardening intervention to improve vegetable consumption among urban poor children: A study protocol for randomised controlled trial. Background: Key benefits of vegetable consumption include obtaining sufficient micronutrients and dietary fibre and preventing childhood obesity. Most Malaysian children do not meet the recommended vegetable intake, and the urban poor population is especially vulnerable due to food insecurity. Efforts are needed to promote vegetable consumption where it falls short of the recommended intake level. Aim: This trial aims to examine the effectiveness of the "GrowEat" project, a nutrition intervention programme integrated with home gardening activities, in improving vegetable consumption among urban poor children in Kuala Lumpur, Malaysia. Methods: This is a single-blinded parallel two-arm cluster randomised controlled trial (RCT) that will include 134 children. Two zones in Kuala Lumpur will be randomly selected, and three low-cost housing flats from each zone will be selected as the intervention and control groups, respectively. The trial is designed based on the social cognitive theory (SCT). Children from the intervention group (n = 67) will attend a 12-week programme, which consists of home garden-based activities and gardening and nutrition education sessions. Assessment will be conducted for both groups at three time points: baseline, post-intervention and a follow-up phase at 3 months after the intervention. Conclusion: We anticipate positive changes in vegetable consumption and its related factors after the implementation of the "GrowEat" project. The current intervention may also serve as a model that can be extended to other urban poor populations in the future to improve vegetable consumption and awareness of agriculture and nutrition. abstract_id: PUBMED:34886055 Can the School Fruit and Vegetable Scheme Be an Effective Strategy Leading to Positive Changes in Children's Eating Behaviours? Polish Evaluation Results. Background: The School Fruit and Vegetable Scheme (SFVS) was developed to form the habit of eating fruit and vegetables (F&amp;V) among children. The survey aimed to identify both the strengths of the scheme and areas that required support and strengthening in the further implementation of school schemes. Methods: The study was conducted from 2012 to 2015 among students of 85 randomly selected primary schools that participated in the programme (intervention group) or did not participate therein (control group). The F&amp;V consumption among the students was evaluated based on the 3-day food record method. Other behaviours were evaluated via frequency and preference questionnaires. Results: Over the three years of implementing SFVS, fruit consumption significantly increased by approximately 30 g/day, i.e., by 18%. In the control group, it increased only by approximately 4%. At the same time, no increase in vegetable consumption was observed.
A number of other positive effects of SFVS were also found. These concerned students' nutritional attitudes and behaviours, such as a further increase in the children's knowledge on the health aspects of F&amp;V consumption, the levels of their consumption and an increased preference for fruit in general. Conclusions: The results indicate that providing F&amp;V in schools free of charge can be an effective strategy for enhancing F&amp;V consumption among children, in particular by raising the awareness of the health importance of F&amp;V consumption and gradually influencing children's eating habits, especially when it comes to the habit of fruit consumption. The issue of vegetable consumption is an area for intervention enhancement. There is also a need for further, in-depth analyses, taking into account the impact of potential confounding factors. Answer: The evidence regarding the effectiveness of school-based interventions in increasing children's fruit and vegetable consumption in the home setting is mixed. Some studies have found that school-based interventions do not significantly influence children's consumption of fruits and vegetables at home. For instance, the Food Dudes programme, which was successful in increasing fruit and vegetable consumption at school, did not lead to short- or long-term changes in children's consumption at home (PUBMED:24215014). Similarly, a systematic review of school-based nutrition interventions in the United States found that while some interventions successfully increased fruit and/or vegetable consumption, there was no dominant factor shared among the studies with significant findings, and the lack of consistency between interventions and positive outcomes was surprising (PUBMED:27123430). However, other studies have shown more promising results. A three-year school-based multicomponent intervention in Zagreb demonstrated a significant increase in total daily fruit and vegetable intake among primary school children, with 89% of children in the intervention group showing an increase and 25% more children reaching the daily recommendation of 400g (PUBMED:36235849). Additionally, a two-year follow-up study of the Brighter Bites programme found a significant increase in child intake of fruits, vegetables, and fiber, as well as positive changes in the home nutrition environment, suggesting a potential long-term sustained impact of comprehensive school-based interventions (PUBMED:33304772). Other studies have explored specific strategies to increase fruit and vegetable intake, such as a 5-day-a-week policy and free provision of fruits and vegetables presented in an attractive fruit bowl, which showed promising results in increasing consumption during snack time at school (PUBMED:33811943). However, the FIT Game, a school-based incentive program, did not significantly change children's fruit and vegetable intake away from school, although it did increase overall daily intake when combined with school consumption (PUBMED:34165028). In conclusion, while some school-based interventions have not been effective in changing children's fruit and vegetable consumption at home, others have shown potential for increasing overall intake and making positive changes in the home nutrition environment.
Instruction: Do we need to do overcorrection in Blount's disease? Abstracts: abstract_id: PUBMED:24817156 Do we need to do overcorrection in Blount's disease? Purpose: In order to prevent recurrent deformity, overcorrection in Blount's disease has been a common practice by most paediatric orthopaedic surgeons. However, some patients have persistent valgus alignment resulting in awkward deformity. The femoro-tibial angle (FTA) was measured in this series of cases to determine the necessity of such practice. Method: During 1998-2010, patients with Blount's disease stage 2 by Langenskiold, aged 30 to 40 months, in whom bracing had failed and who underwent valgus osteotomy, were included. Seventeen legs had postoperative FTA 7-13° (group 1) and 48 legs had postoperative FTA more than 13° (group 2). An ROC curve was used to determine the FTA suitable to prevent recurrence. Results: Four legs had recurrence (28.6%) in group 1 and six legs (12.5%) had recurrence in group 2. The chi-square test showed no statistically significant difference in recurrence between the two groups (p = 0.434). Age and BMI did not differ significantly between the recurrent and non-recurrent groups. The ROC curve showed that overcorrection of more than 15° provided no benefit in preventing recurrence in Blount's stage 2. Conclusion: Our study showed no statistically significant difference in recurrence between the overcorrection and non-overcorrection groups, and overcorrection beyond 15° of valgus has no benefit in preventing recurrence. abstract_id: PUBMED:34209445 Deformity Reconstruction Surgery for Blount's Disease. Blount's disease is an idiopathic developmental abnormality affecting the medial proximal tibial physis, resulting in a multi-planar deformity with pronounced tibia varus. A single cause is unknown, and it is currently thought to result from a multifactorial combination of hereditary, mechanical, and developmental factors. Relationships with vitamin D deficiency, early walking, and obesity have been documented. Regardless of the etiology, the clinical and radiographic findings are consistent within the two main groups. Early-onset Blount's disease is often bilateral and affects children in the first few years of life. Late-onset Blount's disease is often unilateral and can be sub-categorized as juvenile tibia vara (ages 4-10) and adolescent tibia vara (ages 11 and older). Early-onset Blount's disease progresses to more severe deformities, including depression of the medial tibial plateau. Additional deformities in both groups include proximal tibial procurvatum, internal tibial torsion, and limb length discrepancy. Compensatory deformities in the distal femur and distal tibia may occur. When non-operative treatment fails, the deformities progress through skeletal maturity and can result in pain, gait abnormalities, premature medial compartment knee arthritis, and limb length discrepancy. Surgical options depend on the patient's age, weight, extent of physeal involvement, severity, and number of deformities. They include growth modulation procedures such as guided growth for gradual correction with hemi-epiphysiodesis and physeal closure to prevent recurrence and equalize limb lengths, physeal bar resection, physeal distraction, osteotomies with acute correction and stabilization, gradual correction with multi-planar dynamic external fixation, and various combinations of all modalities.
The goals of surgery are to restore normal joint and limb alignment, equalize limb lengths at skeletal maturity, and prevent recurrence. The purpose of this literature review is to delineate basic concepts and reconstructive surgical treatment strategies for patients with Blount's disease. abstract_id: PUBMED:29263762 The challenges of a comprehensive surgical approach to Blount's disease. Purpose: The purpose of this study is to evaluate a surgical technique that aims to address the multi-planar deformities occurring in early onset Blount's disease. Methods: Seven patients (eight lower limbs) with early onset Blount's disease were treated with a surgical technique that used an external fixator to simultaneously address all aspects of deformity. Presenting radiographic parameters including mechanical axis deviation (MAD), lateral distal femoral angle (LDFA), medial proximal tibial angle (MPTA) and medial plateau angle (MPA) were compared with final follow-up and the unaffected limb. Data was also collected on surgical complications and functional outcomes. Results: After a mean interval of 4.6 years (2.2 to 9), the MAD of the affected limb improved from 4.85 cm to 1.88 cm, MPTA from 43.58° to 75.46° and MPA from 43.41° to 20.71°. The mean time in the external fixator was 5.6 months (3.5 to 10.4). Recurrence was noted in three patients; two had evidence of nerve injury and one patient had premature fibular consolidation. Conclusion: The surgical technique described in this study has attempted to address the complex multiaxial deformity that is encountered in Blount's disease. It identifies difficulties and limitations in developing a comprehensive surgical protocol and highlights complications associated with this approach. abstract_id: PUBMED:27583129 Case Report: 'Z' osteotomy - a novel technique of treatment in Blount's disease. Blount's disease is a progressive form of genu varum due to asymmetrical inhibition of the postero medial portion of the proximal tibial epiphysis. The surgical treatments involved in correction of Blount's disease are often technically demanding, complicated procedures. These procedures can lead to prolonged recovery times and poor patient compliance. In such a context we are suggesting "fibulectomy with Z osteotomy" of the proximal tibia, a relatively simple and highly effective technique. This technique is based on correcting the mechanical axis of the lower limb thereby restoring growth from the medial physis of proximal tibia. We have used a new surgical technique, which includes fibulectomy followed by a Z-shaped osteotomy. We have used this simple technique in a 5 year-old boy with unilateral Blount's disease. The femoro-tibial angle was corrected from 18.2° of varus to 4.2° of valgus. The angular correction obtained after operation was 22°. There were no postoperative complications. This technique has the advantages of correcting both angular and rotational deformities simultaneously. The purpose of this case study is to introduce a new surgical technique in the treatment of Blount's disease. abstract_id: PUBMED:25932193 Melt-metaphyseal and diphyseal osteotomy for correction of infantile Blount's disease: a long-term follow-up study. Blount's disease is an uncommon disorder of postero-medial proximal tibial physis. Blount described infantile and adolescent types. This study aims to describe using femur, tibia and fibula osteotomies to treat infantile Blount's disease. 
From May 1992 to May 2005, 7 patients with Blount's disease (3 males, 4 females) were included, whose ages ranged from 17 to 62 months. Femorotibial angle (FTA) was 31 ± 6° (range from 27° to 41°). Metaphyseal-diaphyseal angle (MDA) was 16 ± 4° (range from 13° to 24°). The femoral vara angle was 10 ± 4° (range from 2° to 23°). According to Langenskiold's classification, 3 patients were in stage II, 7 patients in stage III, and 2 patients in stage IV. Five cases were affected bilaterally and 2 unilaterally; all were treated by femur, tibia and fibula valgus osteotomies, and a hip spica cast was used for 6 weeks after the operation. Results indicated that all patients were followed up for 3 to 16 years. FTA, MDA and femoral diaphysis were measured: FTA was 2 ± 7° valgus (from 4° vara to 13° valgus), MDA was 1 ± 2° valgus (range from 0° to 12°), and femoral diaphyseal angle was 1 ± 3° valgus (range from 3° vara to 7° valgus). Six patients could walk without any knee pain; one patient with bilateral disease felt discomfort in his left knee after prolonged standing or work. His MDA was 12°, and FTA was -4°. In conclusion, femur, tibia and fibula osteotomies are useful for correction of Blount's disease. Recurrence and complication rates were lower than those previously reported for this condition. abstract_id: PUBMED:29481866 Blount disease. Blount disease is an asymmetrical disorder of proximal tibial growth that produces a three-dimensional deformity. Tibia vara is the main component of the deformity. Blount disease exists as two clinical variants, infantile or early-onset, and adolescent or late-onset, defined based on whether the first manifestations develop before or after 10 years of age. The pathophysiological mechanisms are unclear. In the Americas and Caribbean, Blount disease chiefly affects black obese children. Without treatment, the prognosis is often severe, particularly in the infantile form due to the development of medial tibial epiphysiodesis at about 6 to 8 years of age. In other parts of the world, the associations with black ethnicity and obesity are less obvious and the prognosis is often less severe. A consensus exists about the optimal treatment in two situations: before 4 years of age, progressive Blount disease should be corrected, preferably by a simple osteotomy; and once medial tibial epiphysiodesis has developed, both a complementary epiphysiodesis and gradual external fixator correction of the other alignment abnormalities, rotational deformity, and limb length are required. After 4 years of age, the outcome in the individual patient is difficult to predict. Magnetic resonance imaging supplies information on the morphology and vascularisation of the growth regions, thereby helping to guide treatment decisions. In the adolescent form, morbid obesity limits the treatment options. Untreated Blount disease in adults is rarely encountered. A more common occurrence is the presence of residual abnormalities at skeletal maturity in patients treated for Blount disease in childhood. Premature osteoarthritis may develop. In this situation, osteotomy may delay the need for total knee arthroplasty. abstract_id: PUBMED:34622139 Blount Disease. Teaching point: MRI allows assessment of the deformity and viability of the tibia in Blount disease. abstract_id: PUBMED:25606156 Infantile blount disease: a case report. Blount disease is an acquired growth disorder of the medial aspect of the proximal tibial physis, epiphysis and metaphysis. Infantile Blount disease presents with bowing and length discrepancy in the lower limbs.
The deformed medial tibial metaphysis presents as a nontender bony protuberance that can be palpated along the medial aspect of the proximal tibia. Here, we present an 18-month-old boy who presented with complaints of bilateral bowing of the lower limbs; biochemical investigations were normal, and the radiological survey revealed Blount disease. abstract_id: PUBMED:35800658 Racial differences in late-onset Blount disease. Purpose: Blount disease is most common among obese Black children. The reason for Blount's racial predisposition is unclear. Given that obesity is a risk factor for Blount disease and the known associations between race, obesity, and socioeconomic status in the United States, we hypothesized that socioeconomic status and severity of obesity differ between Black and non-Black children with late-onset Blount disease. We additionally examined differences in treatment types between Black and non-Black children. Methods: One hundred twenty-five patients from two institutions were included. Age at presentation, age of onset, body mass index, race, sex, and treatment type were recorded. These variables were compared between Black and non-Black children. Insurance type and estimated household income were used as markers of socioeconomic status. Results: Of the 125 patients with late-onset Blount disease, body mass index percentiles were higher for Black patients (96th ± 12th percentile) than non-Black patients (89th ± 22nd percentile) (p = 0.04). Black patients also had lower estimated incomes (US$48,000 ± US$23,000 vs US$62,000 ± US$30,000) (p = 0.01) and much higher rates of Medicaid enrollment (69% vs 24%) (p &lt; 0.01) than did non-Black patients. Regarding treatment types, osteotomy was more common among Black patients (60%) than non-Black patients (38%) (p = 0.033). Conclusion: The race-related associations we found between obesity and socioeconomic status suggest that non-genetic factors may contribute to observed racial differences in the prevalence of Blount disease. Level Of Evidence: level III. abstract_id: PUBMED:35582157 Comparing complications of outpatient management of slipped capital femoral epiphysis and Blount's disease: A database study. Background: Current trends in pediatric orthopaedics have seen an increase in surgeries being successfully completed in an outpatient setting. Two recent examples include slipped capital femoral epiphysis (SCFE) and Blount's disease. Surgical indications are well-studied for each pathology, but to our knowledge, there is an absence of literature analyzing the safety and efficacy of inpatient vs outpatient management of either condition. We believed there would be no increase in adverse outcomes associated with outpatient treatment of either condition. Aim: To investigate whether outpatient surgery for SCFE and Blount's disease is associated with increased risk of adverse outcomes. Methods: The 2015-2017 American College of Surgeons National Surgical Quality Improvement Program Pediatric Registries were used to compare patient characteristics, rates of complications, and readmissions between outpatient and inpatient surgery for SCFE and Blount's disease. Results: A total of 1788 SCFE database entries were included; 30% of procedures were performed in an outpatient setting. In situ pinning was used in 98.5% of outpatient surgeries and 87.8% of inpatient surgeries (P &lt; 0.0001). Inpatients had a greater percentage of total complications than outpatients (2.57% and 1.65%, respectively).
Regarding Blount's disease, outpatient surgeries constituted 41.2% of the 189 procedures included in our study. The majority of inpatients were treated with a tibial osteotomy, while the majority of outpatients had a physeal arrest (P &lt; 0.0001). Complications were encountered in 7.4% of patients, with superficial surgical site infections and wound dehiscence being the most common. 1.6% of patients had a readmission. No differences in complication and readmission risks were found between inpatients and outpatients. Conclusion: The current trend is shifting towards earlier discharges and performing procedures in an outpatient setting. This can be safely performed for a large portion of children with SCFE and Blount's disease without increasing the risk of complications or readmissions. Osteotomies are more commonly performed in an inpatient setting where monitoring is available. Answer: The necessity of overcorrection in Blount's disease has been questioned based on the findings from a study that included patients with Blount's disease stage 2 by Langenskiold. The study compared two groups of patients who underwent valgus osteotomy: one with a postoperative femoro-tibial angle (FTA) of 7-13° (group 1) and another with an FTA of more than 13° (group 2). The results showed that four legs had recurrence (28.6%) in group 1 and six legs (12.5%) had recurrence in group 2. The difference in recurrence rates between the two groups was not statistically significant (p = 0.434). Additionally, the study found that overcorrection more than 15° did not show a benefit in preventing recurrence in Blount's stage 2 (PUBMED:24817156). Therefore, based on this study, overcorrection of more than 15° does not appear to provide a benefit in preventing recurrence of Blount's disease, and the practice of overcorrection may not be necessary. It is important to note that this conclusion is specific to patients with Blount's disease stage 2 by Langenskiold and may not necessarily apply to all cases or stages of Blount's disease. Other factors such as age, BMI, and the severity of the deformity may also play a role in the decision-making process for the treatment of Blount's disease.
Instruction: Histopathology of coronary lesions with early loss of minimal luminal diameter after successful percutaneous transluminal coronary angioplasty: is thrombus a significant contributor? Abstracts: abstract_id: PUBMED:9812074 Histopathology of coronary lesions with early loss of minimal luminal diameter after successful percutaneous transluminal coronary angioplasty: is thrombus a significant contributor? Background: Early loss of minimal luminal diameter of &gt;0.3 mm after successful percutaneous transluminal coronary angioplasty (PTCA) is associated with a higher incidence of restenosis. The underlying mechanism of this early loss is unknown and thrombus may be a contributing factor. Methods: We performed a prospective study using quantitative computerized planimetry on coronary tissue specimens obtained by directional coronary atherectomy of 24 lesions in which early loss occurred 22+/-9 minutes after successful PTCA. Results: Thrombus was present in 9 (37%) of 24 coronary specimens. Segmental areas (mm2) and percentage of total area were distributed as follows: sclerotic tissue, 4.07+/-0.7 mm2 (63%+/-6%); fibrocellular tissue, 0.97+/-0.27 mm2 (16%+/-4%); hypercellular tissue, 0.99+/-0.29 mm2 (12%+/-3%); atheromatous gruel, 0.18+/-0.07 mm2 (3%+/-0.1%); and thrombus, 0.24+/-0.15 mm2 (6%+/-0.4%). There was no difference in the relative early loss index between lesions with or without thrombus (35%+/-7% vs 26%+/-2%, respectively; P= .87). Multiple stepwise regression analysis did not identify any histologic predictors of relative early loss index. Conclusion: Histopathologic analysis of coronary lesions with early loss after successful PTCA suggests that thrombus may not play a significant role in this angiographic phenomenon. abstract_id: PUBMED:7484897 Time course and mechanism of early luminal diameter loss after percutaneous transluminal coronary angioplasty. To assess the time course and mechanism of early minimal luminal diameter (MLD) loss, serial angiographic observations were performed. Seventy-four patients (with 74 severe narrowings [ &gt; or = 70%]) with acute ischemic syndromes who had an early loss in MLD of &gt; 0.3 mm at 24 hours after percutaneous transluminal coronary angioplasty (PTCA) also underwent 1 hour post-PTCA angiography. In 12 consecutive patients with early loss 1 hour after PTCA, angioscopy was also performed to assess the mechanism of early loss. The percent diameter stenosis for the 74 lesions was 16.8 +/- 8.4% immediately after PTCA, 35.1 +/- 14.2% 1 hour after PTCA (p &lt; 0.002 vs immediately after), and 41.4 +/- 13.2% at 24 hours (p &lt; 0.10 vs 1 hour after). The MLD also showed similar differences: 2.6 +/- 0.3 mm immediately after to 2.0 +/- 0.4 mm 1 hour after(p &lt; 0.002) to 1.8 +/- 0.4 mm 24 hours after PTCA (p &lt; 0.10 vs 1 hour). In 60 patients (81%), the &gt; 0.3 mm loss was detected 1 hour after PTCA. These 60 patients had no further decreases in MLD at 24 hours (1.9 +/- 0.4 vs 1.8 +/- 0.4 mm at 1 and 24 hours, respectively, p = NS). Adequate angioscopic images available in 11 patients showed that red thrombus was present in 1, minor or multiple dissection in 5, and neither thrombus nor dissection in 5 other patients (consistent with early wall recoil). Thus, in narrowings demonstrating early loss in MLD at 24 hours, 81% showed that the early loss occurred within 1 hour after PTCA. Early loss is not related to thrombus but usually to dissection or recoil. 
abstract_id: PUBMED:9143764 Exaggerated luminal loss a few minutes after successful percutaneous transluminal coronary angioplasty in patients with recent myocardial infarction compared with stable angina: an intracoronary ultrasound study. This study investigates the mechanisms of exaggerated acute luminal loss after successful coronary angioplasty in patients with recent myocardial infarction compared with stable angina by angiography and intracoronary ultrasound (ICUS). We studied 15 consecutive patients (group 1) who, after a successful thrombolysis for myocardial infarction, underwent delayed (8 +/- 2 days after the myocardial infarction) successful balloon coronary angioplasty. Group 1 patients were individually matched with 15 stable angina patients (group 2). The percentage of stenosis and acute luminal loss were measured by quantitative coronary analysis. The ultrasound characteristics of lumen pathology were described as soft, hard, calcified, eccentric, concentric, thrombotic, and dissection lesions. Matching by stenosis location, reference diameter, sex, and age resulted in 2 comparable groups of 15 lesions with identical baseline characteristics. Immediately after percutaneous transluminal coronary angioplasty (PTCA), the minimal luminal diameter increased from 0.5 +/- 0.3 mm to 2.4 +/- 0.3 mm and from 0.5 +/- 0.2 mm to 2.4 +/- 0.3 mm in groups 1 and 2, respectively. Similar balloon sizes were used in both groups. The acute luminal loss (the difference between the maximal dilated balloon diameter and the minimal luminal diameter) immediately after PTCA was 0.4 +/- 0.2 mm and 0.3 +/- 0.3 mm (14 +/- 8% and 10 +/- 11% of balloon size) (P = not significant [NS]) in groups 1 and 2, respectively. After ICUS (mean 24 min after the last balloon deflation), the acute luminal loss was 0.9 +/- 0.3 mm and 0.5 +/- 0.4 mm (29 +/- 11% and 17 +/- 8% of balloon size) (P = 0.01) in groups 1 and 2, respectively. There was a significantly higher prevalence of intracoronary thrombus formation as detected by ICUS in group 1 compared with group 2 (80% vs. 20%; P &lt; 0.001). In matched groups of successfully treated coronary angioplasty, patients with recent myocardial infarction had a similar magnitude of acute gained luminal loss immediately after the procedure. However, an exaggerated luminal loss a few minutes after the last balloon deflation in patients with recent myocardial infarction was noted because of mural thrombus formation compared with patients with stable angina. abstract_id: PUBMED:2387929 Angiographic patterns of balloon inflation during percutaneous transluminal coronary angioplasty: role of pressure-diameter curves in studying distensibility and elasticity of the stenotic lesion and the mechanism of dilation. There are few in vivo data concerning the mechanisms of balloon inflation during coronary angioplasty. To characterize how lesions dilate, videodensitometry was used to measure the diameter of the inflated balloon across 29 coronary lesions in 27 patients. Pressure-diameter curves for each lesion were derived with use of a standardized incremental inflation protocol in which pressures between 2 and 6 atm in 3 mm low profile balloons approximated normal vessel diameter. The diameter of coronary stenosis before and after angioplasty was also measured. Pressure-diameter curves showed that the most improvement in luminal caliber occurred at low inflation pressure. 
A distensibility factor was defined as the ratio of the amount of balloon inflation at 2 atm compared with the balloon diameter at 6 atm. Eccentric irregular lesions (n = 11) had a greater distensibility factor (0.49 +/- 0.17) than did lesions (n = 18) without this configuration (0.33 +/- 0.14) (p less than 0.02). The former were soft, presumably because of thrombus in these lesions. In addition, there were no differences in patterns of balloon inflation for lesions requiring additional inflation or for dilations resulting in an intimal crack or dissection after angioplasty. There was often a loss of luminal caliber when balloon diameter at 6 atm was compared with the diameter after angioplasty. This was defined as elasticity or recoil. There was a significant direct correlation between the amount of elasticity and the extent of balloon inflation at 6 atm (that is, lesions more fully dilated at 6 atm showed more elasticity).(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:7586347 Relation of coronary angioscopic findings at coronary angioplasty to angiographic restenosis. Background: Discordant results have been reported regarding morphological predictors of restenosis after percutaneous transluminal coronary angioplasty (PTCA). These discrepancies may be related to the limitations of angiography in the study of plaque morphology. Methods And Results: We studied 117 consecutive patients who underwent successful PTCA and who underwent coronary angioscopy before and immediately after the procedure. Angiographic follow-up was performed in 99 (85%) patients. We analyzed the relationship between angioscopic variables at the time of PTCA and the occurrence of restenosis assessed by quantitative coronary angiography. Plaque shape and color had no effect on late loss in luminal diameter (late loss: smooth lesions, 0.55 +/- 0.68 mm; complex lesions, 0.76 +/- 0.60 mm; white plaques, 0.51 +/- 0.56 mm; yellow plaques, 0.65 +/- 0.72 mm; P = NS). An angioscopic protruding thrombus at the PTCA site was associated with significantly greater loss in luminal diameter (late loss: no thrombus, 0.47 +/- 0.54 mm; lining thrombus, 0.59 +/- 0.67 mm; protruding thrombus, 1.07 +/- 0.77 mm; P &lt; .05). Dissection assessed by angioscopy immediately after PTCA had no effect on late loss in luminal diameter (late loss: no dissection, 0.60 +/- 0.60 mm; simple dissection, 0.82 +/- 0.75 mm; complex dissection, 0.57 +/- 0.80 mm; P = NS). Conclusions: These results show that coronary angioscopy may be helpful in predicting the risk of restenosis after PTCA. The high rate of angiographic recurrence observed when PTCA is performed at thrombus-containing lesions supports a role for thrombus in the process of luminal renarrowing after PTCA. abstract_id: PUBMED:8353925 Luminal narrowing after percutaneous transluminal coronary angioplasty. A study of clinical, procedural, and lesional factors related to long-term angiographic outcome. Coronary Artery Restenosis Prevention on Repeated Thromboxane Antagonism (CARPORT) Study Group. Background: The renarrowing process after successful percutaneous transluminal coronary angioplasty (PTCA) is now believed to be caused by a response-to-injury vessel wall reaction. The magnitude of this process can be assessed by the change in minimal lumen diameter (MLD) at follow-up angiography. The aim of the present study was to find independent patient-related, lesion-related, and procedure-related risk factors for this luminal narrowing process. 
A model that accurately predicts the amount of luminal narrowing could be an aid in patient or lesion selection for the procedure, and it could improve assessment of medium-term (6 months) prognosis. Modification or control of the identified risk factors could reduce overall restenosis rates, and it could assist in the selection of patients at risk for a large loss in lumen diameter. This population could then constitute the target population for pharmacological intervention studies. Methods And Results: Quantitative angiography was performed on 666 successfully dilated lesions at angioplasty and at 6-month follow-up. Multivariate linear regression analysis was performed to obtain variables with an independent contribution to the prediction of the absolute change in minimal lumen diameter. Diabetes mellitus, duration of angina &lt; 2.3 months, gain in MLD at angioplasty, pre-PTCA MLD, lesion length &gt; or = 6.8 mm, and thrombus after PTCA were independently predictive of change in MLD. Overall prediction of the model was poor, however, percentage-correct classification for a predicted change between -0.1 to -0.4 mm was approximately 10%. Lesions showing no change or regression (change &gt; -0.1 mm) and lesions showing large progression (&lt; or = -0.4 mm) were more predictable (correct classification, 59.5% and 49.7%, respectively). Conclusions: Renarrowing after successful PTCA as determined with contrast angiography is a process that cannot be accurately predicted by simple clinical, morphological, and lesion characteristics. abstract_id: PUBMED:34950765 Propensity-score-matched comparison of safety, efficacy, and outcome of intravascular lithotripsy versus high-pressure PTCA in coronary calcified lesions. Background: Data regarding safety, efficacy, and outcome of intravascular lithotripsy (IVL) in comparison to standard techniques are lacking. This study sought to compare IVL with non-compliant high-pressure balloon percutaneous coronary angioplasty (PTCA). Methods And Results: We performed a retrospective propensity-score-matched study to compare procedural success in 57 consecutive patients who received IVL-guided PCI in calcified coronary lesions with 171 matched patients who were treated with high-pressure PTCA with a non-compliant (NC)-balloon. The mean minimal lumen diameter (MLD) for the IVL group was 1.08 ± 0.51 mm, and the median percent diameter stenosis on quantitative angiography was 70.2% (interquartile range, 60.2-78.6%). MLD in the high-pressure dilatation group was 0.97 ± 0.43 mm, and the median percent diameter stenosis was 71.5% (interquartile range, 58.5-77.0%). IVL-guided PCI reduced median stenosis to 17.5% (interquartile range, 9.3-19.8%) with an acute gain of 0.93 ± 0.7 mm. High-pressure dilatation resulted in a final median stenosis of 19.3% (interquartile range, 13.33-28.5%). Procedural success was significantly higher (82.5% vs. 61.4%; p: 0.0035) in the IVL group. MACE through 12 months occurred in 10.5% of cases in the IVL group and in 11.1% of the high-pressure group (p = 0.22). Angiographic complications (coronary dissection, slow or no reflow, new coronary thrombus formation, abrupt vessel closure) were very low (0.2% vs. 0.12%). Conclusion: IVL resulted in a significantly higher rate of procedural success compared to high- pressure NC-balloon dilatation in patients with calcified coronary lesions. The rate of MACE through 12 months was similar to the standard therapy. 
abstract_id: PUBMED:9398106 Coronary artery stenting in cardiac allograft vascular disease. Cardiac allograft vascular disease is characterized by diffuse and multifocal heterogeneous myointimal hyperplasia with or without vascular remodeling. Catheter-based interventions are indicated in selected patients. This study documents our experience with percutaneous transluminal coronary angioplasty and coronary stents (n = 48) in a group of 27 patients 5.7 +/- 2.9 years after heart transplantation. Early and intermediate results were controlled by angiography and intravascular ultrasound. Conventional percutaneous transluminal coronary angioplasty resulted in a mild and mostly inadequate gain in luminal dimensions (lumen area: 3.17 +/- 0.92 mm2 to 3.70 +/- 1.21 mm2; minimal lumen diameter: 1.84 +/- 0.23 mm to 2.04 +/- 0.36 mm). Coronary stenting led to a further improvement of luminal gain (lumen area: 3.70 +/- 1.21 mm2 to 5.86 +/- 1.76 mm2; minimal lumen diameter: 2.04 +/- 0.36 mm to 2.53 +/- 0.38 mm). These results were stabilized by application of aspirin and ticlopidine only. There were no stent thromboses or bleeding complications, and early hospital discharge of the patients was possible. At follow-up (mean follow-up period 7.72 +/- 5.45 months (range 0.50 to 23.13 months) all patients were clinically event free. In six of 24 stented vessels (25%) in 16 patients, significant restenosis (&gt;50%) was found by intravascular ultrasound (n = 20) or by angiography (n = 4) 6 months after stent placement. We conclude that in eligible cardiac allograft vascular disease lesions primary stenting may be the method of choice. However, further evaluation of the modalities of stent application and different stent designs with respect to long-term survival is necessary. abstract_id: PUBMED:1752620 Percutaneous transluminal coronary angioplasty of distal lesions: comparison of results with proximal lesions. The results of percutaneous transluminal coronary angioplasty (PTCA) of 57 distal lesions were compared with 55 proximal lesions in 42 patients, aged 31 to 66 years (mean +/- SD: 51 +/- 9 yrs). Twenty nine (69%) had multivessel and 13 (31%) single vessel disease. The lesions classified as distal were located in left anterior descending (LAD) artery beyond the origin of second diagonal (D2), left circumflex (LCx) after the main obtuse marginal (OM) and right coronary artery (RCA) after the origin of acute marginal branch. Also included in this category were lesions in the second diagonal and obtuse marginal branches, two centimeters from their origin and stenosis in the posterior descending and posterolateral left ventricular branches of RCA. Out of 57 distal lesions 18 were 'complex' because of tandem location (5 patients), ulceration (6 lesions) and intraluminal thrombi (2 lesions). There was no significant difference in the mean luminal diameter stenosis between distal and proximal lesions, before and after PTCA. The primary success rate of angioplasty was 89.5% for the distal and 94.5% for the proximal lesions (P = NS). Inability to position the balloon across the lesion accounted for more failures in distal (3) compared to proximal (1) location. There were no major complications. Our results show that PTCA of distal lesions can be performed with a high rate of success, which is comparable to those with classical proximal lesions in the same patients. 
abstract_id: PUBMED:2081617 Evaluation of lesions suitable for percutaneous transluminal coronary angioplasty after thrombolytic therapy of acute myocardial infarction. Forty-two patients of acute myocardial infarction (AMI) and clinically successful thrombolysis underwent coronary angiography 7.6 +/- 3.6 days after the AMI. The infarct related artery was patent in 33 of 42 (78.5%) patients, and 27 of these 33 (82%) had residual diameter stenosis of 70 per cent or more. Arteries showing more than 70 per cent luminal diameter narrowing were considered suitable for percutaneous transluminal coronary angioplasty (PTCA) if the lesion was less than 1 cm in length and there was no significant left main or distal lesion. Based on the above criteria, 22 of the 33 patients (66%) with recanalised infarct-related artery were found to have lesions suitable for PTCA. Thus, after successful thrombolysis, significant proportion of patients of acute myocardial infarction have residual lesions that are suitable for PTCA. Answer: The histopathology of coronary lesions with early loss of minimal luminal diameter after successful percutaneous transluminal coronary angioplasty (PTCA) does not indicate that thrombus is a significant contributor. A prospective study using quantitative computerized planimetry on coronary tissue specimens obtained by directional coronary atherectomy of 24 lesions with early loss occurring shortly after successful PTCA found that thrombus was present in 37% of the specimens. However, there was no significant difference in the relative early loss index between lesions with or without thrombus, and multiple stepwise regression analysis did not identify any histologic predictors of relative early loss index (PUBMED:9812074). Additionally, another study assessing the time course and mechanism of early minimal luminal diameter (MLD) loss through serial angiographic observations and angioscopy in patients with early loss in MLD at 24 hours after PTCA found that early loss is not related to thrombus but usually to dissection or recoil. In this study, 81% of the early loss occurred within 1 hour after PTCA (PUBMED:7484897). Furthermore, a study comparing patients with recent myocardial infarction to those with stable angina found that while patients with recent myocardial infarction had a similar magnitude of acute gained luminal loss immediately after the procedure, they experienced an exaggerated luminal loss a few minutes after the last balloon deflation due to mural thrombus formation compared with patients with stable angina (PUBMED:9143764). This suggests that while thrombus may not be a significant contributor to early loss in general, it can have an impact in specific patient populations, such as those with recent myocardial infarction. In summary, while thrombus formation can occur in coronary lesions post-PTCA, the evidence does not support it as a significant contributor to the early loss of minimal luminal diameter in most cases.
Instruction: National health insurance scheme enrolment and antenatal care among women in Ghana: is there any relationship? Abstracts: abstract_id: PUBMED:24219504 National health insurance scheme enrolment and antenatal care among women in Ghana: is there any relationship? Objectives: The objective of this study was to examine whether enrolment in the National Health Insurance Scheme (NHIS) affects the likelihood and timing of utilising antenatal care among women in Ghana. Methods: Data were drawn from the Ghana Demographic and Health Survey, a nationally representative survey collected in 2008. The study used a cross-sectional design to examine the independent effects of NHIS enrolment on two dependent variables (frequency and timing of antenatal visits) among 1610 Ghanaian women. Negative binomial and logit models were fitted given that count and categorical variables were employed as outcome measures, respectively. Results: Regardless of socio-economic and demographic factors, women enrolled in the NHIS make more antenatal visits compared with those not enrolled; however, there was no statistical association with the timing of the crucial first visit. Women who are educated, living in urban areas and are wealthy were more likely to attend antenatal care than those living in rural areas, uneducated and from poorer households. Conclusion: The NHIS should be strengthened and resourced as it may act as an important tool for increasing antenatal care attendance among women in Ghana. abstract_id: PUBMED:30624875 Predictors of Enrolment in the National Health Insurance Scheme Among Women of Reproductive Age in Nigeria. Background: Despite the implementation of the National Health Insurance Scheme (NHIS) since 2005 in Nigeria, the level of health insurance coverage remains low. The study aims to examine the predictors of enrolment in the NHIS among women of reproductive age in Nigeria. Methods: Secondary data from the 2013 Nigeria Demographic and Health Survey (NDHS) were utilized to examine factors influencing enrolment in the NHIS among women of reproductive age (n=38 948) in Nigeria. Demographic and socio-economic characteristics of women were determined using univariate, bivariate and multivariate analyses. Data analysis was performed using STATA version 12 software. Results: We found that 97.9% of women were not covered by health insurance. Multivariate analysis indicated that factors such as age, education, geo-political zone, socio-economic status (SES), and employment status were significant predictors of enrolment in the NHIS among women of reproductive age. Conclusion: This study concludes that health insurance coverage among women of reproductive age in Nigeria is very low. Additionally, demographic and socio-economic factors were associated with enrolment in the NHIS among women. Therefore, policy-makers need to establish a tax-based health financing mechanism targeted at women who are young, uneducated, from poorest households, unemployed and working in the informal sector of the economy. Extending health insurance coverage to women from poor households and those who work in the informal sector through a tax-financed non-contributory health insurance scheme would accelerate progress towards universal health coverage (UHC). abstract_id: PUBMED:30460332 Trends and characteristics of enrolment in the National Health Insurance Scheme in Ghana: a quantitative analysis of longitudinal data. 
Background: In 2004, Ghana started experimenting with a National Health Insurance Scheme (NHIS) to reduce out-of-pocket payment for healthcare. Like many other social health insurance schemes in Africa, the NHIS is striving for universal health coverage (UHC). This paper examines trends and characteristics of enrolment in the scheme to inform policy decisions on attainment of UHC. Methods: We conducted trend analysis of longitudinal enrolment data of the NHIS for the period 2010-2017. Descriptive statistics were used to examine trends and characteristics of enrolment by geographical region and member groups. Results: Over the 8-year period, the population enrolled in the scheme increased from 33% (8.2 million) to 41% (11.3 million) between 2010 and 2015 and dropped to 35% (10.3 million) in 2017. Members who renewed their membership increased from 44% to 75.4% between 2010 and 2013 and then dropped to 73% in 2017. On average, the urban regions had a significantly higher number of new enrolments than the rural ones. Similarly, the urban and peri-urban regions recorded a significantly higher number of renewals than the other regions. In addition, persons below the age of 18 years and informal sector workers had significantly higher enrolment than any other member group. Conclusions: Enrolment in the NHIS is declining and there are significant differences among geographical regions and member groups. Managers of the NHIS need to enforce the mandatory enrolment provision in the Act governing the scheme, employ innovative strategies such as mobile phone applications for registration and renewals, and address delays in healthcare provider claims to improve enrolment. abstract_id: PUBMED:29177126 Migrant female head porters' enrolment in and utilisation and renewal of the National Health Insurance Scheme in Kumasi, Ghana. Purpose: As a social protection policy, Ghana's National Health Insurance Scheme (NHIS) aims to improve access to healthcare, especially for the vulnerable. Migrant female head porters (kayayoo), who are part of the informal economic workforce, are underscored as an ethnic minority and vulnerable group in Ghana. This study aimed to analyse the factors associated with enrolment in and renewal and utilisation of the NHIS among migrant female head porters in the Kumasi Metropolis. Method: We purposively sampled 392 migrant female head porters in the Kejetia, Asafo and Bantama markets. We used a binary logit regression model to estimate associations among baseline characteristics, convenience and benefit factors and enrolment in and renewal and utilisation of the NHIS. Result: Age and income significantly increased the probability of NHIS enrolment, renewal and utilisation. Long waiting times at NHIS offices significantly reduced the likelihood of renewal, while provision of drugs highly significantly increased the tendency for migrant female head porters to enrol in, renew and use the NHIS. Consulting and surgery also significantly increased renewal and utilisation of the NHIS. Conclusion: Political commitment is imperative for effective implementation of the decentralisation policy of the NHIS through the National Health Insurance Authority in Kumasi. We argue that retail offices should be well equipped with logistic facilities to ensure convenience in NHIS initial enrolment and renewal processes by the citizenry, and by vulnerable groups in particular.
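As a rough consistency check on the enrolment figures quoted above (PUBMED:30460332) — an illustrative back-of-the-envelope calculation, not part of the cited study — the enrolled counts and coverage percentages imply the following population denominators:

# Back out the implied population denominator from members enrolled and coverage share.
coverage = {2010: (8.2, 33), 2015: (11.3, 41), 2017: (10.3, 35)}  # (millions enrolled, % covered)
for year, (millions, percent) in coverage.items():
    implied_total = millions / (percent / 100)
    print(f"{year}: {millions:.1f}M enrolled at {percent}% coverage "
          f"-> implied population of about {implied_total:.1f}M")
print(f"Change in coverage, 2015 to 2017: {35 - 41} percentage points")

The implied denominator rises from roughly 25 million to roughly 29 million over the period, so the post-2015 fall in coverage reflects both a smaller membership (11.3 to 10.3 million) and a growing population.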
abstract_id: PUBMED:31885056 Health insurance coverage, type of payment for health insurance, and reasons for not being insured under the National Health Insurance Scheme in Ghana. Background: Ghana's National Health Insurance Scheme has improved access to care, although equity and sustainability issues remain. This study examined health insurance coverage, type of payment for health insurance and reasons for being uninsured under the National Health Insurance Scheme in Ghana. Methods: The 2014 Ghana Demographic Health Survey datasets with information for 9396 women and 3855 men were analyzed. The study employed cross-sectional, nationally representative data. The frequency distribution of socio-demographics and health insurance coverage differentials among men and women is first presented. Further statistical analysis applies a two-stage probit Heckman selection model to determine socio-demographic factors associated with type of payment for insurance and reasons for not being insured among men and women under the National Health Insurance Scheme in Ghana. The selection equation in the Heckman selection model also shows the association between insurance status and socio-demographic factors. Results: About 66.0% of women and 52.6% of men were covered by health insurance. Wealth status determined insurance status, with the poorest, poorer and middle-income groups being less likely to pay for insurance themselves. Women never in union and widowed women were less likely to be covered relative to married women, although this group was more likely to pay NHIS premiums themselves. Wealth status (poorest, poorer and middle-income) was associated with non-affordability as a reason for being uninsured. Geographic disparities were also found. Rural men and nulliparous women were also more likely to mention no need of insurance as a reason for being uninsured. Conclusion: Tailored policies to reduce delays in membership enrolment, improve positive perceptions and awareness of the National Health Insurance Scheme in reducing catastrophic spending, and address financial barriers to enrolment among some groups can be positive precursors to improving trust and enrolment and addressing broad equity concerns regarding the National Health Insurance Scheme. abstract_id: PUBMED:29728110 Mundane? Demographic characteristics as predictors of enrolment onto the National Health Insurance Scheme in two districts of Ghana. Background: In 2003, Ghana passed a law to establish a National Health Insurance Scheme (NHIS) to serve as the main vehicle for achieving universal health coverage. Over 60% of the population had registered by 2009. Current active membership is however 40%. The stagnation in growth has been recorded across all the membership categories. Clearly, the Scheme is falling short of its core objective. This analysis is a critical thematic contextual examination of the effects of demographic factors on enrolment onto the Scheme. Methods: Demographic secondary data for 625 respondents, collected using a structured questionnaire during a cross-sectional household survey in an urban district (Ashaiman) and a rural district (Adaklu), were analyzed in univariate and multivariate logistic regression models using the Statistical Package for the Social Sciences (SPSS). Statistical significance was set at P-value < 0.05. Variables included in the analysis were age, gender, education, occupation and knowledge about the NHIS. Results: Seventy-nine percent of the survey respondents have ever enrolled onto the NHIS, with three-fifths being females.
Of the ever enrolled, 63% had valid cards. Age, gender and educational level were significant predictors of enrolment in the multivariate analysis. Respondents aged 41-60 years were twice (p = 0.05) as likely to be enrolled onto a district Scheme as respondents aged 21-40 years. Females were three times (p = 0.00) as likely to enroll as males. Respondents educated to the tertiary level (five times, p = 0.02) and post-graduate level (four times, p = 0.05) were more likely to enroll than non-educated respondents. No significant association was observed between occupation and enrolment. Conclusion: Uptake of the scheme is declining despite high awareness and knowledge. Leadership, innovation and collaboration are required at the district Scheme level to curtail issues of low self-enrolment and to grow membership. Otherwise, the goal of universal coverage under the NHIS will become merely a slogan and equity in financial access to health care for all Ghanaians will remain elusive. abstract_id: PUBMED:34909095 Health care payment practice, perception and awareness of national health insurance scheme by market women in Enugu Metropolis South-East Nigeria. Introduction: Health insurance is at the forefront of health financing and the achievement of universal health coverage for all. It provides a means of coping with some of the risks faced by individuals in achieving optimal healthcare. Women are vital in the family, especially when it comes to the health of their children. We therefore set out in this study to assess the healthcare payment methods of women in the informal sector and their awareness of the National Health Insurance Scheme (NHIS) in Nigeria. Methods: The study was a cross-sectional descriptive survey involving women traders aged 18 years and above. Three hundred and fifty-three (353) women were recruited using a semi-structured interviewer-administered questionnaire and data were analyzed using SPSS version 22. Results: The prevalence of awareness of the NHIS among the women was 57.5%, and educational status was a contributor to awareness of the scheme. Also, only 9.9% of the women were registered under the NHIS, and out-of-pocket payment for healthcare was practiced by as much as 88.7% of the respondents. Most respondents had a poor perception of the scheme, with 60% believing that the government cannot be trusted to keep its end of the bargain with regards to the NHIS. Conclusion: The need for awareness of the NHIS, especially among women in the informal sector, is brought to the fore. Policy-makers should take into account how women in informal sectors can be captured to enrol in the NHIS in order to expand its coverage, and this enrolment should be compulsory. abstract_id: PUBMED:34969387 Association between health insurance enrolment and maternal health care service utilization among women in Ethiopia. Background: Health insurance is considered the third global health transition, which can increase access to health care services by eliminating monetary obstacles to maternal health care use, particularly in emerging nations. Hence, this study aimed to assess the association between health insurance enrolment and maternal health care service utilization among women in Ethiopia. Methods: A cross-sectional study was conducted using the 2016 Ethiopia Demographic and Health Survey (EDHS) data set. About 4278 mothers who had delivered at least one child in the last five years of the survey were selected in the study.
Multivariate logistic regression analysis was performed to measure the relationship between health insurance enrolment and maternal health care service utilization while controlling for confounders. Adjusted odds ratios with 95% confidence intervals and p-values < 0.05 were used to identify significant associations. Results: The overall health insurance coverage among the women was 4.7%. About 18.1% of women from households in the poorest wealth quintile had no health insurance coverage for maternal health care services. Moreover, 84% of women living in rural areas were not covered by health insurance. According to the multivariate logistic regression, the likelihood of ANC utilization was 1.54 times (AOR: 1.54; 95% CI: 1.06-2.25) higher among mothers who were enrolled in health insurance compared to their counterparts. In the same vein, the likelihood of being attended by a skilled birth attendant was 1.84 times (AOR: 1.84; 95% CI: 1.1-3.08) higher among mothers who were enrolled in health insurance. Conclusions: This study has shown that women enrolled in health insurance were more likely to have skilled delivery and recommended ANC utilization than women who did not enroll in health insurance. Health insurance enrolment initiatives must be available to all pregnant women, particularly those of poorer socioeconomic rank. abstract_id: PUBMED:36002934 Exploring heterogeneity of national health insurance scheme enrolment among persons in the informal sector. The Ghanaian government began implementing the National Health Insurance Scheme (NHIS) in 2005, anchored on the universal health coverage principle. However, informal sector workers contribute to the low enrolment into the scheme. This paper examines the factors that influence enrolment status in Ghana's NHIS among individuals in the informal sector. The study employed a fixed-effects logit model, using the Ghana Socioeconomic Panel Survey datasets. The findings revealed that gender, age, education, marital status, household size, physical activity, income, savings and subjective social welfare determine the enrolment decisions of persons in the informal sector. Drivers of NHIS enrolment for male and female household heads were secondary education and household size. Heterogeneously, age, Junior High School education, being married, physical activity, subjective social welfare and savings influenced only male-headed households' enrolment status. The study shows differences in enrolment status among rural and urban residents, as age, education, marital status, household size, and physical activity influenced rural residents, while for urban inhabitants, savings determined their NHIS enrolment decisions. However, gender, obesity, income and subjective social welfare predicted enrolment decisions for both rural and urban residents. Finally, two variables, gender (male) and physical activity, predicted the NHIS enrolment decision among the poor, whereas subjective social welfare and savings influenced the enrolment decisions of the wealthiest. For Ghana to achieve the Sustainable Development Goal of universal health coverage, the study recommends enhanced public education on NHIS benefits among informal workers to promote enrolment. abstract_id: PUBMED:31266841 Is enrolment in the national health insurance scheme in Ghana pro-poor? Evidence from the Ghana Living Standards Survey.
Objectives: This article examines equity in enrolment in the Ghana National Health Insurance Scheme (NHIS) to inform policy decisions on progress towards realisation of universal health coverage (UHC). Design: Secondary analysis of data from the sixth round of the Ghana Living Standards Survey (GLSS 6). Setting: Household based. Participants: A total of 16,774 household heads participated in the GLSS 6, which was conducted between 18 October 2012 and 17 October 2013. Analysis: Equity in enrolment was assessed using concentration curves and bivariate and multivariate analyses to determine associated factors. Main Outcome Measure: Equity in NHIS enrolment. Results: Survey participants had a mean age of 46 years and mean household size of four persons. About 71% of households interviewed had at least one person enrolled in the NHIS. Households in the poorest wealth quintile (73%) had enrolled significantly (p<0.001) more than those in the richest quintile (67%). The concentration curves further showed that enrolment was slightly disproportionally concentrated among poor households, particularly those headed by males. However, multivariate logistic analyses showed that the likelihood of NHIS enrolment increased from poorer to richest quintile, low to high level of education and young adults to older adults. Other factors including sex, household size, household setting and geographic region were significantly associated with enrolment. Conclusions: From 2012 to 2013, enrolment in the NHIS was higher among poor households, particularly male-headed households, although multivariate analyses demonstrated that the likelihood of NHIS enrolment increased from poorer to richest quintile and from low to high level of education. Policy-makers need to ensure equity within and across gender as they strive to achieve UHC. Answer: Yes, there is a relationship between enrolment in the National Health Insurance Scheme (NHIS) and antenatal care among women in Ghana. A study examining the independent effects of NHIS enrolment on the frequency and timing of antenatal visits among Ghanaian women found that women enrolled in the NHIS make more antenatal visits compared with those not enrolled. However, the study did not find a statistical association with the timing of the crucial first visit. It was also noted that women who are educated, living in urban areas, and are wealthy were more likely to attend antenatal care than those living in rural areas, uneducated, and from poorer households (PUBMED:24219504). This suggests that NHIS enrolment is associated with increased utilization of antenatal care services, although other socio-economic and demographic factors also play a significant role in antenatal care attendance.
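Several of the abstracts above (e.g., PUBMED:24219504, PUBMED:34969387, PUBMED:31266841) report adjusted odds ratios with 95% confidence intervals from logit models. As a minimal sketch of how such figures are obtained — the coefficient and standard error below are invented for illustration and do not come from any of the cited studies — a fitted log-odds coefficient b with standard error se is exponentiated to give the odds ratio, with Wald-type limits exp(b +/- 1.96*se):

import math

# Hypothetical logit output for an NHIS-enrolment indicator (values invented).
b, se = 0.60, 0.15

odds_ratio = math.exp(b)
ci_low, ci_high = math.exp(b - 1.96 * se), math.exp(b + 1.96 * se)
print(f"adjusted OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
# An OR above 1, with a CI excluding 1, indicates higher odds of the outcome
# (e.g., attending antenatal care) among enrolled women, all else being equal.

With these invented inputs the output is an OR of about 1.82 with a CI of roughly 1.36-2.44; the same transformation underlies the adjusted ORs quoted in the abstracts.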
Instruction: Is poor self-rated health associated with low-grade inflammation in 43,110 late adolescent men of the general population? Abstracts: abstract_id: PUBMED:27113233 Is poor self-rated health associated with low-grade inflammation in 43,110 late adolescent men of the general population? A cross-sectional study. Objective: Self-rated health is a powerful predictor of long-term health and mortality, hence the importance of a better understanding of its biological determinants. Previous studies have shown that low-grade inflammation is associated with poor self-rated health in clinical and healthy populations, but the evidence is sparse in men and completely lacking for men in late adolescence. The aim of this study was to investigate the association between low-grade inflammation and self-rated health among conscripts. It was hypothesised that high levels of inflammatory factors would be associated with poor self-rated health. Design: Data from 49,321 men (18-21 years) conscripted for military service in 1969 and 1970 were used. Inflammation had been measured through erythrocyte sedimentation rate (ESR). Self-rated health had been assessed on a five-point scale, and was dichotomised into Good ('Very good'/'Good'/'Fair') versus Poor ('Poor'/'Very poor'). Data from 43,110 conscripts with normal levels of ESR who reported self-rated health remained after exclusion of those with ESR <1 and >11 mm/h. Associations were calculated using logistic regression analyses. Adjustments were made for body mass index, socioeconomic position, inflammatory disease, emotion regulation, smoking, risky use of alcohol and physical activity. Results: High levels of ESR were associated with higher odds for poor self-rated health (OR: 1.077 for each unit mm/h increase in ESR, 95% CI 1.049 to 1.105). Conclusions: The present study shows for the first time a significant association between a marker of inflammation and self-rated health in late adolescent men, adding to evidence of an association between low-grade inflammation and subjective health perception also in men, as previously demonstrated in women. Further support for inflammation as part of a general psychobiological process that underpins subjective health perception is hereby provided. abstract_id: PUBMED:22850264 Poor self-rated health is significantly associated with elevated C-reactive protein levels in women, but not in men, in the Japanese general population. Objective: Self-rated health (SRH) is associated with risk for mortality, but its biological basis is poorly understood. We examined the association between SRH and low-grade inflammation in a Japanese general population. Methods: A total of 5142 men and 11,114 women aged 40 to 69 years were enrolled. SRH was assessed by a single question and classified into four categories: good, rather good, neither good nor poor, and poor. Serum high-sensitivity C-reactive protein (hsCRP) levels were measured by the latex-enhanced immunonephelometric method. Elevated CRP was defined as an hsCRP level of 1.0 mg/L or higher. The association between SRH and elevated CRP was evaluated by using logistic regression with adjustment for age, socioeconomic status (job status, education and marital status), health-related behaviors (smoking status, drinking status, exercise habits and sleep duration), and cardiovascular risk factors (body mass index, systolic blood pressure, total- and HDL-cholesterol, HbA1c and prevalent stroke and/or myocardial infarction).
Results: Compared to persons with good SRH, persons with poor SRH had significantly higher risk for elevated CRP: age-adjusted ORs (95% CIs) were 1.33 (1.01-1.76) in men and 1.66 (1.36-2.02) in women. The significant association remained even after adjustment for socioeconomic status, health-related behaviors and cardiovascular risk factors in women, whereas the significance disappeared in men. Conclusion: Poor SRH is associated with low-grade inflammation in both sexes. In women, but not in men, the association is independent of potential confounders. These findings provide an insight into the biological background of SRH in a general population. abstract_id: PUBMED:29349273 The contribution of three dimensions of allostatic load to racial/ethnic disparities in poor/fair self-rated health. Objective: This study evaluates whether different dimensions of physiological dysregulation, modeled individually rather than additively, mediate racial/ethnic disparities in self-reported health. Methods: Using data from the National Health and Nutrition Examination Survey (2005-2010) and the Karlson, Holm, and Breen (KHB) mediation model, this paper explores which operationalizations of biomarker data most strongly mediate racial/ethnic disparities in poor/fair self-rated health (SRH) among adults in the United States, net of demographic, socioeconomic, behavioral, and medication controls. Results: Non-Hispanic blacks and Hispanics had significantly higher odds of reporting poor/fair self-rated health in comparison to non-Hispanic whites. Operationalizations of allostatic load that disaggregate three major dimensions of physiological dysregulation mediate racial/ethnic disparities strongly between non-Hispanic blacks and non-Hispanic whites, but not between Hispanics and non-Hispanic whites. Disaggregating these dimensions explains racial/ethnic disparities in poor/fair SRH better than the continuous score. Analyses of sex-specific disparities indicate differences in how individual dimensions of allostatic load contribute to racial/ethnic disparities in poor/fair SRH. All individual dimensions are strong determinants of poor/fair SRH for males. In contrast, for females, the only dimension that is significantly associated with poor/fair SRH is inflammation. For the analytic sample, additive biomarker scores fit the data as well or better than other approaches, suggesting that this approach is most appropriate for explaining individual differences. However, in sex-specific analyses, the interactive approach models fit the data best for men and women. Conclusions: Future researchers seeking to explain racial/ethnic disparities in full or sex-stratified samples should consider disaggregating allostatic load by dimension. abstract_id: PUBMED:30558565 The association between self-rated health and high-sensitivity C-reactive protein level: a cross-sectional and 5-year longitudinal study. Background: Although self-rated health (SRH) independently predicts mortality, the biological background of this association remains unexplained. This study aimed to examine the association between SRH and serum high-sensitivity C-reactive protein (hsCRP) level. Methods: Subjects were 899 participants aged 35-69 years (237 men and 662 women) in the Daiko Study, part of the Japan Multi-Institutional Collaborative Cohort Study. They were enrolled from 2008 to 2010. Of the subjects, 666 participated in a second survey 5 years later. Lifestyle factors and SRH were assessed using a self-administered questionnaire.
Serum hsCRP level was measured using a latex-enhanced immunonephelometric assay. The association between SRH and serum hsCRP level was evaluated using a general linear model with covariates. We further longitudinally investigated whether a higher serum hsCRP level at baseline predicts poor SRH after 5 years using an unconditional logistic regression model. Results: A higher serum hsCRP level was significantly associated with poor SRH at baseline after adjusting for covariates (p for trend = 0.023). The age- and sex-adjusted odds ratio and 95% confidence interval (CI) for poor SRH after 5 years was 1.45 (95% CI: 0.76-2.78) for the highest tertile compared with the lowest tertile of serum hsCRP level at baseline, with a significant linear trend (p for trend = 0.033), although the risk increase disappeared after adjustment for other covariates. Conclusions: The present study demonstrated that poor SRH is cross-sectionally associated with higher serum hsCRP level. However, the longitudinal data did not support the relationship between serum hsCRP level at baseline and future SRH. Further longitudinal studies that include data on mortality and multiple inflammatory markers are warranted to elucidate the possible role of low-grade inflammation in the association between SRH and mortality risk. abstract_id: PUBMED:31745859 Self-rated health and venous thromboembolism among middle-aged women: a population-based cohort study. Venous thromboembolism (VTE) is one of the most common types of cardiovascular diseases (CVDs) and is associated with increased mortality risk. Poor self-rated health (SRH) has been associated with elevated inflammatory markers and CVDs. However, little is known about it as a predictor of incident VTE. The aim was to examine the association between self-rated health, lifestyle and incident VTE among middle-aged women. The cohort comprised 6917 women aged 50-64 years, followed for 20 years in the Women's Health In the Lund Area (WHILA) study. After exclusion of those who were medicated with anticoagulants, were living in nursing homes or suffered from cancer, stroke, VTE or CHD before baseline, a cohort of 5626 women remained. Cox regression was used to analyse the relationship between self-rated health and time to VTE, censored for any of the previously mentioned diseases during follow-up. Data were collected by questionnaires, physical examinations and Swedish registers. In total, 220 women were affected by VTE, corresponding to an incidence rate of 3.9 per 1000 person-years. After adjustment, self-rated health did not significantly predict incident VTE, and neither did any of the lifestyle-related habits (e.g. physical activity and dietary habits including alcohol consumption), besides smoking. This study supports previous results with varicose veins and waist circumference as strong predictors of VTE. Poor self-rated health does not seem to be a valid predictor of VTE. Among lifestyle-related parameters, smoking was significantly associated with risk of VTE. We could also confirm the effect of the other already known risk factors. abstract_id: PUBMED:27490853 Low-Grade Inflammation and Ambulatory Cortisol in Adolescents: Interaction Between Interviewer-Rated Versus Self-Rated Acute Stress and Chronic Stress. Objective: To determine whether the association between self-rated or interviewer-rated recent acute stress exposures and low-grade inflammation and daily cortisol production in adolescents is moderated by chronic stress ratings.
Methods: Acute and chronic stress exposures were assessed in 261 adolescents aged 13 to 16 years using a semistructured life stress interview. The negative impact of acute stressors was independently rated by both adolescents (self-rated) and interviewers (interviewer-rated). Markers of inflammation (interleukin (IL)-6, IL-1ra, C-reactive protein) were measured from peripheral blood samples obtained via antecubital venipuncture. Participants collected 4 saliva samples at home on each of 6 consecutive days for the analysis of diurnal salivary cortisol profiles. Results: There were no main effects of acute stressors (self- and interviewer-rated) and chronic family or peer stress on adolescent inflammation markers and cortisol (p values > .10). However, the interaction between interviewer-rated acute stress and chronic family stress was significantly associated with adolescent inflammation markers (IL-6, IL-1ra). Specifically, as chronic family stress increased, the association between acute stressor impact (interviewer-rated) and inflammation markers became more positive (IL-6 (B = .054, SE = .023, p = .022); IL-1ra (B = .030, SE = .014, p = .034)). Interactions between self-rated acute stress and chronic family stress were not associated with any biological measures (p values > .10). Interactions between acute stressor impact (both self- and interviewer-rated) and chronic peer stress were also not significantly associated with any biological measures (p values > .05). Conclusions: Among adolescents, interviewer-based ratings of acute stressor impact may allow for better prediction of health-relevant inflammation markers than adolescents' own ratings. abstract_id: PUBMED:15272103 Self-rated health is related to levels of circulating cytokines. Objective: Self-rated health is a powerful and independent predictor of long-term health, but its biological basis is unknown. Because factors associated with poor self-rated health (e.g., pain, daily discomforts, and low energy and fitness) resemble symptoms of a generalized cytokine-induced sickness response, we examined the relationship between circulating cytokines and self-rated health. Methods: In 265 consecutive primary health care patients (174 women and 91 men), we examined self-rated and physician-rated health, circulating levels of interleukin (IL)-1beta, IL-1 receptor antagonist (IL-1ra), IL-6, and tumor necrosis factor (TNF)-alpha as determined from plasma samples using high-sensitivity enzyme-linked immunoassay. Results: Self-rated health correlated with levels of IL-1beta (r = 0.27; p < .001), IL-1ra (r = 0.19; p < .05) and TNF-alpha (r = 0.46; p < .001) in women but not in men. Thus, poorer subjective health was associated with higher levels of inflammatory cytokines. Even when controlling for age, education, physical health, and diagnoses in multiple regression analyses, self-rated health was an independent and more robust predictor of cytokine levels than physician-rated health. Conclusions: The present findings suggest that an individual's health perception may be coupled to circulating cytokines. Because epidemiological research established that self-rated health predicts morbidity and mortality, the biological correlates and mechanisms of self-rated health need to be understood. abstract_id: PUBMED:22496462 Inflammation and positive affect are associated with subjective health in women of the general population. Poor subjective health has been associated with higher levels of inflammatory cytokines.
We investigated whether such an association would apply to women of the general population. Levels of cytokines, affect and subjective health were assessed in 347 women of the general population aged 45 to 90 years. Higher levels of interleukin-6 were associated with poor subjective health, especially in participants over 65 years of age. Positive affect was a more robust determinant of subjective health than negative affect. The presence of low-grade inflammation and absence of positive affect, rather than presence of negative affect, may be important determinants of subjective health. abstract_id: PUBMED:21601365 Poorer self-rated health is associated with elevated inflammatory markers among older adults. Objective: Self-rated health is a strong independent predictor of mortality after accounting for objective health status, behavioral risk factors, and sociodemographic characteristics. However, mechanisms underlying this association are largely unexplained. Inflammation has been associated with increased risk of morbidity and mortality in the elderly. The current study aimed to: (1) examine associations between self-rated health and serum inflammatory markers in older adults; (2) examine the relative strength of these associations for self-rated health versus self-rated change in recent health; (3) examine components of self-rated health that may underlie the association between inflammation and global self-rated health. Methods: Self-rated health, as measured by the RAND health survey, and serum interleukin (IL)-6 and C-reactive protein (CRP) were assessed among 250 generally healthy older adults (185 women, 65 men; average age=63.8±13.7 years). Results: A series of linear regression analyses demonstrated that poorer self-rated health was significantly associated with higher IL-6 and CRP. These relationships remained after controlling for age, body mass index, gender, and objective health conditions. These associations also remained after controlling for depressive symptoms, neuroticism, perceived change in health over the past year, and health behaviors (smoking, sleep quality, and physical activity). Analyses of RAND component measures demonstrated that poorer physical functioning was significantly associated with IL-6; the relationship between global self-rated health and both IL-6 and CRP remained after accounting for perceived physical functioning. Conclusions: Poorer self-rated health is associated with elevated serum inflammatory markers among generally healthy older adults. The relationship of self-rated health with inflammatory markers is not secondary to depressive symptoms, neuroticism, or recent changes in perceived health. Subjective ratings of health provide important clinical information regarding inflammatory status, beyond traditional objective risk factors, even among generally healthy individuals. abstract_id: PUBMED:28927291 Differentially conserved transcriptomic response to adversity related to self-rated health in the multi-ethnic study of atherosclerosis. Self-rated health (SRH) is considered a strong indicator of well-being and clinical health status and has been linked to inflammatory markers. 
The objective of this work was to examine how self-rated physical health (SRPH) and mental health (SRMH) influence the immune system through the regulation of a stress-related gene expression profile known as the 'conserved transcriptional response to adversity' (CTRA), which involves the up-regulation of pro-inflammatory genes and down-regulation of genes involved in type I interferon (IFN) response and antibody synthesis. CTRA expression data were derived from genome-wide transcriptional data on purified monocytes in 1264 adult participants from the multi-ethnic study of atherosclerosis. SRPH and SRMH were assessed through the SF-12 questionnaire. Multiple linear regression models were used to determine the association between the composite score of the CTRA subsets and SRPH and SRMH. Higher scores of SRPH and SRMH were associated with an increased expression of the overall CTRA profile. The individual gene subsets analysis did not reveal an increased expression of pro-inflammatory genes in persons with lower scores of SRH. However, we observed that higher scores of SRPH positively modulate the immune response through the up-regulation of both type I interferon response and antibody synthesis-related genes, while better scores of SRMH were associated with a down-regulation of genes involved in antibody synthesis. The significant association between SRH and a gene expression profile related to type I IFN response and antibody synthesis suggests that SRH may be linked to the immunocompetence status. Impact statement In this work, we evaluated for the first time how self-rated mental (SRMH) and physical health (SRPH) influence the immune response at the molecular level in a large multi-ethnic cohort. We observed that both SRMH and SRPH are related to immunocompetence status. These findings indicated that the link between how we perceive our health and poorer health outcomes could be explained by alterations in the immune response by shifting the expression of genes related to the type I IFN response and antibody synthesis. Answer: Yes, poor self-rated health is associated with low-grade inflammation in 43,110 late adolescent men of the general population. A study investigating this association found that high levels of erythrocyte sedimentation rate (ESR), a marker of inflammation, were associated with higher odds for poor self-rated health (OR: 1.077 for each unit mm/h increase in ESR, 95% CI 1.049 to 1.105) (PUBMED:27113233).
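The odds ratio of 1.077 quoted in the answer above (PUBMED:27113233) is expressed per 1 mm/h of ESR. As an interpretation aid — not an additional analysis from the study, and assuming the usual log-linear form of the logistic model — a per-unit odds ratio scales to a k-unit contrast by raising it, and its confidence limits, to the k-th power:

# Scale the per-unit odds ratio for ESR (PUBMED:27113233) to a 10 mm/h contrast.
or_per_unit = 1.077
ci_per_unit = (1.049, 1.105)
k = 10  # contrast chosen here purely for illustration

or_k = or_per_unit ** k
ci_k = (ci_per_unit[0] ** k, ci_per_unit[1] ** k)
print(f"OR per {k} mm/h of ESR: {or_k:.2f} (95% CI {ci_k[0]:.2f}-{ci_k[1]:.2f})")

So a 10 mm/h higher ESR corresponds to roughly doubled odds of poor self-rated health (about 2.10, 95% CI about 1.61-2.71) under that model.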
Instruction: Does disadvantage start at home? Abstracts: abstract_id: PUBMED:33844990 Factors Associated with Timing of the Start-of-Care Nursing Visits in Home Health Care. Objectives: Home health care patients have critical needs requiring timely care following hospital discharge. Although Medicare requires timely start-of-care nursing visits, a significant portion of home health care patients wait longer than 2 days for the first visit. No previous studies investigated the pattern of start-of-care visits or factors associated with their timing. This study's purpose was to examine variation in timing of start-of-care visits and characterize patients with visits later than 2 days postdischarge. Design: Retrospective cohort study. Setting/participants: Patients admitted to a large, Northeastern US, urban home health care organization during 2019. The study included 48,497 home care episodes for 45,390 individual patients. Measurement: We calculated time to start of care from hospital discharge for 2 patient groups: those seen within 2 days vs those seen >2 days postdischarge. We examined patient factors, hospital discharge factors, and timing of start of care using multivariate logistic regression. Results: Of 48,497 episodes, 16,251 (33.5%) had a start-of-care nursing visit >2 days after discharge. Increased odds of this time frame were associated with being black or Hispanic and having solely Medicaid insurance. Odds were highest for patients discharged on Fridays, Saturdays, and Mondays. Factors associated with visits within 2 days included surgical wound presence, urinary catheter, pain, 5 or more medications, and intravenous or infusion therapies at home. Conclusions And Implications: Findings provide the first publication of clinical and demographic characteristics associated with home health care start-of-care timing and its variation. Further examination is needed, and adjustments to staff scheduling and improved information transfer are 2 suggested interventions to decrease variation. abstract_id: PUBMED:35227604 Home advantage and LBW decisions: The contribution of umpiring to the home-ground (dis)advantage in cricket. Objectives: This study aimed to determine whether elite cricket umpires' decisions contribute to home advantage via leg-before-wicket decisions. Design: Leg-before-wicket decisions (n = 4971) from actual elite level cricket matches in Australia between 2009 and 2016 were analysed using a binomial logistic regression to predict the umpires' leg-before-wicket decisions. Methods: Regressions were performed to determine whether the batter's team (home or away) influenced the likelihood that the umpire would give them out leg-before-wicket. Results: We found no evidence of home advantage in umpires' leg-before-wicket decisions. In fact, we found evidence that umpires in some instances disadvantage the home team. Conclusions: We suggest that the increased professionalism of sports officials, and the scrutiny they are placed under, may lead umpires to reduce or even overcompensate for any existing biases in their decision making. abstract_id: PUBMED:28783460 Engaging Parents in Early Head Start Home-Based Programs: How Do Home Visitors Do This? Parental engagement is considered elemental to successful outcomes for parents and their children in early childhood home visiting programs. Engagement is that piece of parental involvement that refers to the working relationship between the parent and the home visitor.
Multiple papers have called for research to pinpoint the ways in which home visitors work with parents to form these working relationships, and form partnerships to achieve positive outcomes. Analysis revealed that in individualizing their efforts to each family, home visitors follow semi-sequential steps in implementing engagement. This article presents a model of the process home visitors describe that resulted from analysis. Grounded theory techniques were used to analyze 29 interviews with Early Head Start (EHS) home visitors and 11 supervisors across four EHS programs in one region of the United States. The process of engagement as described emerges in three phases: (1) learning the parent's culture and style; (2) deepening the working partnership; and (3) balancing the ongoing work. Analysis further revealed specific strategies and goals that guide the work of home visitors in each of these three phases. This not only adds rich detail to the literature, but also provides a useful guide for programs and policy makers through identifying the areas where training and support will increase home visitor ability to engage parents. abstract_id: PUBMED:34967749 Exploring Reasons for Delayed Start-of-Care Nursing Visits in Home Health Care: Algorithm Development and Data Science Study. Background: Delayed start-of-care nursing visits in home health care (HHC) can result in negative outcomes, such as hospitalization. No previous studies have investigated why start-of-care HHC nursing visits are delayed, in part because most reasons for delayed visits are documented in free-text HHC nursing notes. Objective: The aims of this study were to (1) develop and test a natural language processing (NLP) algorithm that automatically identifies reasons for delayed visits in HHC free-text clinical notes and (2) describe reasons for delayed visits in a large patient sample. Methods: This study was conducted at the Visiting Nurse Service of New York (VNSNY). We examined data available at the VNSNY on all new episodes of care started in 2019 (N=48,497). An NLP algorithm was developed and tested to automatically identify and classify reasons for delayed visits. Results: The performance of the NLP algorithm was 0.8, 0.75, and 0.77 for precision, recall, and F-score, respectively. A total of one-third of HHC episodes (n=16,244) had delayed start-of-care HHC nursing visits. The most prevalent identified category of reasons for delayed start-of-care nursing visits was no answer at the door or phone (3728/8051, 46.3%), followed by patient/family request to postpone or refuse some HHC services (n=2858, 35.5%), and administrative or scheduling issues (n=1465, 18.2%). In 40% (n=16,244) of HHC episodes, 2 or more reasons were documented. Conclusions: To avoid critical delays in start-of-care nursing visits, HHC organizations might examine and improve ways to effectively address the reasons for delayed visits, using effective interventions, such as educating patients or caregivers on the importance of a timely nursing visit and improving patients' intake procedures. abstract_id: PUBMED:36253635 Promoting Child Development During the COVID-19 Pandemic: Parental Perceptions of Tele-Home Visits in Early Head Start Programs. Objective: The COVID-19 pandemic prompted families to receive Early Head Start (EHS) home-based services virtually. This qualitative study evaluated parental perceptions of EHS tele-home visits. 
Methods: EHS parents who had transitioned to tele-home visits using any video-chat platform were recruited to participate in a Spanish or English virtual focus group that assessed their perceptions of tele-home visits. Using an iterative, consensus-seeking inductive content analysis approach, themes and subthemes were identified. Results: Thirty-five mothers of children newborn to 3-years-old, where the majority were Latino and Spanish-speaking, participated in four focus groups. Several patterns pertaining to technology, child engagement, child learning and development, and parent-home visitor relationship emerged in the qualitative analysis. Mothers revealed varying degrees of digital proficiency, device preference, and technology challenges. Mothers reported variability in child engagement and concerns with missed socialization opportunities for children as a results of tele-home visits, but also reported increased self-efficacy in supporting child development, positive relationships with their home visitor, and overall satisfaction with services. Conclusion: Parents revealed tele-home visits have the potential to be a viable service delivery method for EHS home-based programs. While parents perceived increased engagement and an uncompromised parent-home visitor relationship, they revealed areas of needed support that would optimize the use of tele-home visits. abstract_id: PUBMED:26161886 Culture and the Home-Field Disadvantage. The home-field disadvantage refers to the disadvantage inherent in research that takes a particular cultural group as the starting point or standard for research, including cross-cultural research. We argue that home-field status is a serious handicap that often pushes researchers toward deficit thinking, however good the researchers' intentions may be. In this article, we aim to make this home-field bias more explicit and, in doing so, more avoidable. We discuss three often-overlooked disadvantages that result from this home-field status: the problem of marked versus unmarked culture, the problem of homogenous versus heterogeneous culture, and the problem of regression toward the mean. We also recommend four interventions researchers can apply to avoid the home-field disadvantage or, at the least, attenuate its deleterious effects. abstract_id: PUBMED:36507044 The unique and compensatory effects of home and classroom learning activities on Migrant and Seasonal Head Start children's Spanish and English emergent literacy skills. Children of migrant and seasonal farmworkers (MSFW) are among the most underprivileged, underserved groups in the United States. The current study examined how home and classroom language and literacy experiences uniquely and interactively contributed to MSFW children's emergent literacy skills in English and Spanish. Participants were 255 Spanish-English dual language learning children (Mage = 49 mon; 98.3% Latino/Hispanic) and their parents and 47 teachers, drawn from the Migrant and Seasonal Head Start (MSHS) Study. Parents reported how often the target children engaged in language and literacy activities (i.e., teaching letters, words, or numbers, book-reading, singing, and storytelling) with their family members. Teachers reported how often the target children engaged in classroom language and literacy activities (e.g., book-reading, learning letters, retelling stories, etc.). Children's emergent literacy skills in English and Spanish were assessed by standard tests. 
After controlling for demographic variables, home and classroom language and literacy activities uniquely predicted children's emergent literacy skills in Spanish, but not in English. Additionally, home and classroom activities compensated one another in supporting children's English and Spanish emergent literacy development. That is, language and literacy activities in one context showed a stronger effect for children who experienced less frequent activities in the other context. Together, these findings shed light on ways to support MSFW children's emergent literacy skills and reveal the importance of integrating and connecting home and school learning experiences. abstract_id: PUBMED:29315625 How Does the Neighborhood "Come through the Door?" Concentrated Disadvantage, Residential Instability, and the Home Environment for Preschoolers. Living in a disadvantaged neighborhood is associated with heightened risk for poor school readiness and health outcomes in early childhood, and the home environment is thought to be a primary mechanism by which neighborhood context impacts preschoolers. This study examined the effects of neighborhood concentrated disadvantage and neighborhood residential instability on the home physical environment and home learning environment for preschoolers in economically disadvantaged families (N = 187). Using structural equation modeling, mothers' perceived neighborhood disorder and depressive symptoms were examined as mechanisms by which neighborhood context "comes through the door." Mothers' neighborhood social embeddedness was also explored as a protective factor. Results showed that concentrated disadvantage was negatively associated with the quality of the home physical environment, and residential instability was negatively associated with the quality of the home learning environment. Concentrated disadvantage had an indirect effect on the home learning environment through mothers' perceived neighborhood disorder and depressive symptoms. The effects of concentrated disadvantage on the home environment were buffered by mothers' neighborhood social embeddedness. Study findings advance understanding of socioeconomic- and place-based disparities in developmental outcomes and identify potential targets for interventions aimed at lessening effects of neighborhood disadvantage on families with young children. abstract_id: PUBMED:3132289 Accidents in the home among children under 5: ethnic differences or social disadvantage? Accidents in the home to children under 5 in a multiracial population with a high level of social disadvantage were studied by interviewing at home the parents of 402 children attending the accident department of a west London hospital during one year. The parents' country of birth, whether they were employed, and their housing conditions were recorded using the definitions of the 1981 census. Four ethnic groups (British (183 children), Asian (127), Caribbean (61), and other (31)) were identified. Though attendance rates based on the populations of electoral wards at the census and standardised for distance from the hospital showed no significant differences among the ethnic groups, there was a strong gradient by social class and strong associations with unemployment of the mother (although not of the father), overcrowding, and tenure of housing. Social disadvantage seems to be more important than ethnicity as a determinant of accidents to children in the home. 
abstract_id: PUBMED:25937703 Neighborhood Economic Disadvantage and Children's Cognitive and Social-Emotional Development: Exploring Head Start Classroom Quality as a Mediating Mechanism. Past research has shown robust relationships between neighborhood socioeconomic disadvantage and children's school achievement and social-emotional outcomes, yet the mechanisms for explaining these relationships are poorly understood. The present study uses data from 1,904 Head Start participants enrolled in the Head Start Impact Study to examine the role that classroom structural and relational quality play in explaining the association between neighborhood poverty and children's developmental gains over the preschool year. Results suggest that neighborhood poverty is directly related to lower levels of classroom quality, and lower gains in early literacy and math scores. Indirect relationships were also found between neighborhood poverty and children's social-emotional outcomes (i.e., approaches to learning and behavior problems) via differences in the physical resources and negative student-teacher relationships within classrooms. These findings highlight the need for policy initiatives to consider community characteristics as potential predictors of disparities in classroom quality and children's cognitive and social-emotional development in Head Start. Answer: Disadvantage can indeed start at home, as evidenced by several studies that have explored the impact of home and neighborhood environments on various aspects of individual development and service delivery. For instance, research has shown that living in a disadvantaged neighborhood is associated with heightened risk for poor school readiness and health outcomes in early childhood, with the home environment being a primary mechanism by which neighborhood context impacts preschoolers (PUBMED:29315625). Additionally, accidents in the home among children under 5 have been linked to social disadvantage, with factors such as unemployment of the mother, overcrowding, and housing tenure being more determinant than ethnicity (PUBMED:3132289). In the context of educational programs, the home learning environment and home language and literacy activities have been found to uniquely and interactively contribute to children's emergent literacy skills, with home and classroom activities compensating for one another in supporting children's development (PUBMED:36507044). Moreover, neighborhood economic disadvantage has been shown to directly relate to lower levels of classroom quality in Head Start programs, affecting children's cognitive and social-emotional development (PUBMED:25937703). In healthcare, studies have identified that home health care patients, particularly those who are black or Hispanic and have Medicaid insurance, are more likely to experience delayed start-of-care nursing visits post-hospital discharge (PUBMED:33844990). Reasons for these delays include no answer at the door or phone, patient/family requests to postpone or refuse services, and administrative or scheduling issues (PUBMED:34967749). Furthermore, the concept of the "home-field disadvantage" in research highlights the potential biases that can arise when a particular cultural group is taken as the standard, which can lead to deficit thinking and overlook the complexities of cultural differences (PUBMED:26161886). 
Overall, these studies suggest that disadvantage can indeed start at home, influenced by a combination of social, economic, and cultural factors that affect health, education, and overall well-being.
Instruction: Is diagnosis relevant in the hospitalization of potentially dangerous children and adolescents? Abstracts: abstract_id: PUBMED:9785714 Is diagnosis relevant in the hospitalization of potentially dangerous children and adolescents? Objective: This study tests the assumption that psychiatric diagnosis facilitates clinical evaluations of need in emergency care before and after controlling for danger. Method: The data are from structured crisis assessments completed by emergency clinicians in four ethnically diverse locales (N = 653). Clinician-assigned diagnosis was categorized as adjustment, disruptive, mood, psychotic, and other, and a Danger scale score reflected danger to self or others. Results: Mood and psychotic disorders significantly increased hospital rates in multivariate analyses which controlled for demographic characteristics, site, and danger when relevant. The model with the best fit included both diagnosis and danger. Conclusions: Decisions should be linked to verifiable ratings of need and attention to danger, and its measurement should complement the current focus on diagnosis. abstract_id: PUBMED:33276754 Potentially avoidable hospitalization for asthma in children and adolescents by migrant status: results from the Italian Network for Longitudinal Metropolitan Studies. Background: Global migration toward Europe is increasing. Providing health assistance to migrants is challenging because numerous barriers limit their accessibility to health services. Migrants may be at a greater risk of developing asthma and receiving lower quality healthcare assistance than non-migrants. We aim to investigate whether immigrants as children and adolescents have higher rates of potentially avoidable hospitalization (PAH) for asthma compared to Italians. Methods: We performed a retrospective longitudinal study using six cohorts of 2-17-year-old residents in North and Central Italy from 01/01/2001 to 31/12/2014 (N = 1,256,826). We linked asthma hospital discharges to individuals using anonymized keys. We estimated cohort-specific age and calendar-year-adjusted asthma PAH rate ratios (HRRs) and 95% confidence intervals (95%CIs) among immigrants compared to Italians. We applied a two-stage random effect model to estimate asthma PAH meta-analytic rate ratios (MHRRs). We analyzed data by gender and geographical area of origin countries. Results: Three thousand three hundred four and 471 discharges for asthma PAH occurred among Italians and immigrants, respectively. Compared to Italians, the asthma PAH cohort-specific rate was higher for immigrant males in Bologna (HRR:2.42; 95%CI:1.53-3.81) and Roma (1.22; 1.02-1.45), and for females in Torino (1.56; 1.10-2.20) and Roma (1.82; 1.50-2.20). Asthma PAH MHRRs were higher only among immigrant females (MHRRs:1.48; 95%CI:1.18-1.87). MHRRs by area of origin were 63 to 113% higher among immigrants, except for Central-Eastern Europeans (0.80; 0.65-0.98). Conclusion: The asthma PAH meta-analytic rate was higher among female children and adolescent immigrants compared to Italians, with heterogeneity among cohorts showing higher cohort-specific PAH also among males, with some differences by origin country. Access to primary care for children and adolescent immigrants should be improved and immigrants should be considered at risk of severe asthma outcomes and consequently targeted by clinicians. 
abstract_id: PUBMED:31548838 Effects of eating breakfast on children and adolescents: A systematic review of potentially relevant outcomes in economic evaluations. Background: Breakfast is often described as the most important meal of the day. Several studies have focused on examining if breakfast habits have any short-term effects on school attendance, academic achievement, and general health in children and adolescents. Informed decisions of whether to promote eating breakfast or not require a more long-term perspective. Objective: The aim of this study was to conduct a systematic review of scientific publications studying the effects identified as potentially relevant for the economic evaluation of eating breakfast in children and adolescents. Design: A systematic literature review was conducted. Studies were identified by searching the electronic databases PubMed, CINAHL, Web of Science, and PsycINFO between January 2000 and October 2017. The inclusion criteria applied were published articles from peer-reviewed journals with full text in English, quantitative studies collecting primary data with school-aged children, and adolescents aged from 6 to 18 years as participants, performed entirely or partly in countries with advanced economies, except Japan and Taiwan. Results: Twenty-six studies fulfilled the inclusion criteria, and studies that were judged to be of at least moderate quality were included in the analysis. The results of the review of eating breakfast studies showed positive and conclusive effects on cognitive performance, academic achievement, quality of life, well-being and on morbidity risk factors. Conclusions: The overall assessment of the studies indicated positive effects of eating breakfast. How the identified effects influence societal costs and an individual's quality-adjusted life years require further research. abstract_id: PUBMED:35706910 Risk Factors for COVID-19 Hospitalization in School-Age Children. Introduction: With the recent emergence of the Omicron variant, there has been a rapid and alarming increase in the number of COVID-19 cases among pediatric populations. Yet few US pediatric cohort studies have characterized the clinical features of children with severe COVID-19. The objective of this study was to identify those chronic comorbidities that increase the risk of hospitalization for pediatric populations with severe COVID-19. Methods: A retrospective cohort study that utilized the Texas Inpatient Public Use Data file was conducted. The study included 1187 patients (ages 5 to 19) from 164 acute-care Texas hospitals with the primary or secondary ICD-10CM diagnosis code U07.1 (COVID-19, virus identified). The baseline comparison group included 38 838 pediatric patients who were hospitalized in 2020. Multivariable binary logistic regression, controlling for patient characteristics, sociodemographic factors, and health insurance, was used to estimate the adjusted risk of hospitalization for COVID-19. Results: Obesity and type 1 diabetes increased the risk of hospitalization with COVID-19 among both children (5-12 years) and adolescents (13-19 years). Adolescents with morbid obesity were 10 times more likely to have severe COVID-19 (p < 0.001). Regardless of age, single-parent households (%) for the patient's zip code was associated with an increased risk of hospitalization with COVID-19 (AOR = 1.02, CI: 1.01-1.03, p < 0.01).
Other risk factors included chronic kidney disease (p < 0.05), male gender (p < 0.001), Medicaid (p < 0.001), and charity care (p < 0.001). Conclusion: Pediatric providers and public health officials should consider the need to tailor clinical management and mitigation efforts for pediatric populations with identifiable risk factors for severe COVID-19. These findings can be used to improve risk communication with families of children with underlying medical conditions and to prioritize prevention measures, including vaccinations. abstract_id: PUBMED:38136048 Psychiatric Hospitalization for Psychopathological Disorders and Self-Injurious Behaviors in Italian Children and Adolescents during COVID-19. The evidence shows that the COVID-19 pandemic dramatically increased the number of urgent psychiatric consultations for children and adolescents in hospital emergency departments (EDs). However, what needs to be further investigated are the characteristics of psychiatric hospitalization in children and adolescents admitted to the Child and Adolescent Neuropsychiatry Unit wards in EDs. Specifically, this retrospective study aimed to examine changes in (i) the number of inpatients and (ii) the distribution of psychopathological disorders and self-injurious behaviors in our Child and Adolescent Neuropsychiatry Unit ward during the COVID-19 lockdown in Italy (March-June 2020; October 2020-January 2021) compared with the same months of previous years. We found a significantly lower number of inpatients during the first four quarantine months than the first four reference months and a higher number of inpatients during the second four quarantine months than the second four reference months. Additionally, we found an increased frequency of mood disorders, non-suicidal self-injurious behavior, and suicidal ideation during the COVID-19 lockdown compared to the reference periods. Our findings underline the need to develop psychological healthcare services for future emergency periods in order to identify and treat psychological distress in children and adolescents early, reducing the risk of psychiatric hospitalization. abstract_id: PUBMED:27215704 What timing of vaccination is potentially dangerous for children younger than 2 years? Vaccine-preventable diseases still occur although measured coverage rates at 2 y of age are high. The occurrence of these diseases may be explained in part by untimely, that is, late vaccination. Our objective was to identify potentially dangerous vaccination delays for each dose of each vaccine in children younger than 2 y. A 3-round Delphi process was conducted by e-mail. We recruited 37 French experts in vaccines for children: 16 from the Infovac-France group and 21 from the French study group for pediatric infectious diseases. Items were generated by a literature review for the 10 vaccine doses recommended before 2 y of age. Item reduction in round 1 and 2 and any consensus in round 3 used a 70% consensus cutoff. The mean participation rate was 79%. Delays that should not be exceeded were identified for all vaccine doses.
The 70% consensus was reached for 6 of the 10 vaccine doses: 15 d after the recommended date for the first 2 doses of the diphtheria-tetanus-acellular pertussis-inactivated polio vaccine/Haemophilus influenzae b vaccine and for the second dose of the pneumococcal conjugate vaccine, 1 month for the meningococcal C vaccine and for the first dose of the measles-mumps-rubella vaccine, and 11 y of age for completion of the hepatitis B vaccination. This Delphi process identified potentially dangerous vaccination delays for children to the age of 2 y. These can be used as new indicators in further studies of vaccine effectiveness and can help to improve the quality of vaccine protection in children. abstract_id: PUBMED:35204913 Etiology, Risk Factors, and Diagnosis of Back Pain in Children and Adolescents: Evidence- and Consensus-Based Interdisciplinary Recommendations. Using a structured approach and expert consensus, we developed an evidence-based guideline on the diagnosis of back pain and the treatment of non-specific back pain in children and adolescents. The first part comprises etiology, risk factors, and diagnosis. The second part, published in the same issue, includes treatment and prevention. A comprehensive and systematic literature search was conducted to identify relevant guidelines and studies. Based on the findings of this literature search, recommendations on risk factors and diagnosis were formulated and voted on by experts in a structured consensus-building process. Notable red flags for specific back pain and evidence-based risk factors for non-specific back pain in children and adolescents were identified. Only three evidence-based recommendations could be formulated for causes, red flags, and risk factors for back pain, while two recommendations are based on expert consensus. Regarding diagnostics, eight expert consensus recommendations and one evidence-based recommendation could be provided. Despite the importance of adequate diagnosis for the treatment of back pain in children and adolescents, results of this work confirm the deficit in research investment in this area. abstract_id: PUBMED:36753646 Hospitalizations and deaths of Brazilian children and adolescents with Severe Acute Respiratory Syndrome caused by COVID-19. Introduction: Since the onset of the pandemic, COVID-19 has affected the entire world population, however, data on child morbidity and mortality are scarce. This study aimed to analyze the profile of hospitalizations and factors associated with the deaths of Brazilian children and adolescents with Severe Acute Respiratory Syndrome caused by COVID-19. Methodology: This cross-sectional study uses public domain data from the Influenza Epidemiological Surveillance System in 2021. The prevalence rates, lethality, and duration of hospitalization of children and adolescents with Severe Acute Respiratory Syndrome caused by COVID-19 were analyzed. Logistic regression and adjusted Odds Ratio were used. A p < 0.05 was considered significant. Results: The prevalence of hospitalization among children and adolescents was 3.6%, and the case lethality rate was 4.0%. Higher lethality rates occurred among adolescents, natives, rural residents, those living in Brazil's northern and northeastern regions, and those who became critically ill and had comorbidities. Hospitalization time was longer for adolescents who became critically ill or had comorbidities.
The highest chance of death was associated with: children under one year of age and adolescents, natives, and residents from the North, Northeast, and Southeast regions, who became critically ill and had comorbidities. Conclusions: Despite lower hospitalization and death rates than adults, the prevalence, complications, and mortality from COVID-19 in the pediatric population are relevant. Knowledge of the profile of children and adolescents hospitalized due to COVID-19 and the factors associated with these deaths allows the guidance of response efforts directed to assist this vulnerable population. abstract_id: PUBMED:36284348 Development and validation of a tool for the assessment of benefit from treatment of allergic rhinitis in children and adolescents (PBI-AR-K). Background: Allergic rhinitis (AR) is frequent in children and adolescents and can severely affect their lives. This article describes the development and validation of a questionnaire to assess treatment needs and benefits in children and adolescents, the PBI-AR-K, in a sample of patients receiving grass pollen sublingual immunotherapy. Patients And Methods: The PBI-AR-K was developed based on an open survey including children and adolescents and expert consensus between methodologists, patients, and physicians. The PBI-AR-K assesses patient needs before the treatment and perceived benefit during or at the end of a treatment. A weighted benefit score can be calculated ranging from 0 to 4 (4 = highest possible benefit). The validation was conducted in children (5-12 years) and adolescents (13-17 years) receiving sublingual immunotherapy. Subscales were developed based on factor analysis. Psychometric properties of items and scales were assessed with descriptive statistics, internal consistency, and convergent validity. Results: The final PBI-AR-K consists of 19 items. For validation, data from 345 patients (mean age 11.1; 60.9% male; n = 223 children; n = 122 adolescents) was analysed. Factor analyses resulted in four subscales for children and three subscales for adolescents. The items with the highest importance ratings were about choice of leisure activities (mean value in children: 3.5) and about being free of AR symptoms (adolescents: 3.3). The weighted PBI-AR-K scores reflected considerable patient-reported benefit (2.08-2.82) in both children and adolescents. Internal consistency of all scales was good or acceptable. In the children's sample, the global scale and three of four subscales were quite consistently correlated with convergent variables, while the subscale 'treatment burden' was significantly correlated only with change in average impairments due to rhinitis symptoms. The adolescents' sample showed more inconsistent results with only change in rhinitis severity being significantly associated with all subscales. Conclusion: The newly developed PBI-AR-K is a reliable and valid questionnaire for use in children; for the use in adolescents, it should be further elaborated. abstract_id: PUBMED:32974759 Acute infection as cause of hospitalization of asylum-seeking children and adolescents in Stockholm, Sweden 2015-2016. We aimed to identify hospitalizations due to infectious diseases among asylum seekers and compare them to those of the resident population 1.6.2015-31.10.2016. Administrative numbers assigned to hospitalized non-resident children made them identifiable in the discharge register. The examined populations, expressed as person-years, were 334,573 residents and 7565 asylum seekers.
There were 2500 episodes of infectious disease in 2240 resident children and 139 episodes in 121 asylum seekers. Among prevalent infections contracted before or during migration, there were 33 cases of tuberculosis, four of malaria, and one of louse-borne relapsing fever, all of which occurred in 13-17-year-old unaccompanied minors. Among younger asylum seekers, there were no significant differences in the spectrum of infectious discharge diagnoses compared to residents, but across all incident infections, 0-6-year-old asylum seekers had 3.2-fold and 7-12-year-old a 4.7-fold greater risk of being admitted. Screening for multidrug-resistant bacteria showed that 45/160 (28%) of the asylum seekers were colonized, but clinical infections caused by these species were rare. Conclusion: There was a high rate of hospitalizations for acute infectious diseases in asylum-seeking children, but the spectrum and severity of infections were similar to that in resident children. What is known: • Mental and physical health problems are common in immigrant children and adolescents. What is new: • Hospitalizations due to acute infections in asylum-seeking children and adolescents are common. In the context of this study, the severity and spectrum of infectious diseases seemed to be the same in the two groups; the increased hospitalization rate in asylum seekers may be due to social factors and perceived need for more support. Answer: Yes, diagnosis is relevant in the hospitalization of potentially dangerous children and adolescents. A study that tested the assumption that psychiatric diagnosis facilitates clinical evaluations of need in emergency care found that mood and psychotic disorders significantly increased hospital rates in multivariate analyses, which controlled for demographic characteristics, site, and danger when relevant. The model with the best fit included both diagnosis and danger, suggesting that decisions should be linked to verifiable ratings of need and attention to danger, and its measurement should complement the current focus on diagnosis (PUBMED:9785714).
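The answer above rests on a model-comparison argument: hospitalization is modelled on diagnosis category and a danger score, and the model containing both terms fits better than a danger-only model. The sketch below illustrates that kind of comparison with a likelihood-ratio test. It is not the study's code or data: the records are synthetic, the variable names (diagnosis, danger, hospitalized) and the 0-10 danger scale are assumptions made for this example, and only the general modelling idea is taken from the abstract (PUBMED:9785714).

# Illustrative only: synthetic data standing in for the crisis-assessment records
# described in PUBMED:9785714 (diagnosis category, danger score, hospitalization).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 653  # sample size reported in the abstract; the values below are fabricated
diagnosis = rng.choice(["adjustment", "disruptive", "mood", "psychotic", "other"], size=n)
danger = rng.integers(0, 11, size=n)  # hypothetical 0-10 danger-to-self/others score
# Hypothetical data-generating rule: mood/psychotic diagnoses and danger raise admission odds
logit_p = -3 + 0.25 * danger + np.isin(diagnosis, ["mood", "psychotic"]) * 1.2
hospitalized = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"diagnosis": diagnosis, "danger": danger, "hospitalized": hospitalized})

# Danger-only model vs. danger + diagnosis model
reduced = smf.logit("hospitalized ~ danger", data=df).fit(disp=False)
full = smf.logit("hospitalized ~ danger + C(diagnosis)", data=df).fit(disp=False)

# Likelihood-ratio test: does adding diagnosis improve fit beyond danger alone?
lr_stat = 2 * (full.llf - reduced.llf)
df_diff = full.df_model - reduced.df_model
p_value = stats.chi2.sf(lr_stat, df_diff)
print(f"LR statistic = {lr_stat:.2f}, df = {df_diff:.0f}, p = {p_value:.4f}")
print(f"AIC: danger-only = {reduced.aic:.1f}, danger + diagnosis = {full.aic:.1f}")

With this synthetic data-generating rule the diagnosis terms should lower the AIC and give a small likelihood-ratio p-value, mirroring the qualitative finding that diagnosis adds information beyond danger alone; with real data the conclusion would of course depend on the observed records.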
Instruction: Prognostic significance of hemoglobin concentration in nasopharyngeal carcinoma: does treatment-induced anemia have negative effect? Abstracts: abstract_id: PUBMED:14598613 Prognostic significance of hemoglobin concentration in nasopharyngeal carcinoma: does treatment-induced anemia have negative effect? Purpose: To assess the value of hemoglobin concentration (HC) in predicting treatment outcomes in nasopharyngeal carcinoma (NPC) patients treated with chemotherapy (CT) and radiotherapy (RT). Patients And Methods: Ninety-eight NPC patients treated with three courses of platinum-based neoadjuvant CT (NCT) and conventional RT were grouped as having normal HC (≥ 12.0 g/dl in females and ≥ 13 g/dl in males), having mild anemia (MA) (13.0 g/dl > Hb ≥ 11 g/dl in males, 12.0 g/dl > Hb ≥ 11 g/dl in females) and having severe anemia (SA) (Hb < 11 g/dl). Massive decrease in HC (MDHC) was defined as ≥ 1.5 g/dl decline with NCT. Along with other known risk factors (sex, age, histopathology, T stage, N stage, bilateral neck involvement, cranial nerve involvement and total RT time), the prognostic value of SA and MDHC were evaluated by Cox-regression. Results: SA increased from 7 to 33% with NCT (p < 0.0001). Patients with SA had lower 5-year locoregional control (LRC) (49 vs. 73%, p = 0.03), disease-free survival (DFS) (42 vs. 68%, p = 0.01) and overall survival (OS) (43 vs. 69%, p = 0.01) rates than patients with normal HC or MA. The presence of MDHC worsened 5-year LRC (p = 0.002), DFS (p = 0.001) and OS (p = 0.02) rates. In multivariate analyses, MDHC and SA had pronounced effect on LRC and survival, respectively. Conclusion: MDHC and SA adversely affect treatment outcome in NPC patients treated with CT and RT. This may favour concomitant scheduling of RT and CT over neoadjuvant setting. Correction of HC before RT can improve the efficacy of RT and should be further evaluated. abstract_id: PUBMED:37154238 Low hemoglobin levels predict increased radiation-induced trismus rates in nasopharyngeal cancer. Purpose: To investigate the predictive significance of hemoglobin (Hb) values in the incidence of radiation-induced trismus (RIT) in locally advanced nasopharyngeal carcinoma (LA-NPC) patients who received concurrent chemoradiotherapy (C-CRT). Methods: Data of LA-NPC patients were examined before and after C-CRT and to confirm the presence of RIT, maximum mouth openings (MMO) were measured; RIT is defined as an MMO of ≤35 mm. All Hb values were derived from complete blood count tests obtained on the first day of C-CRT. The receiver operating characteristic (ROC) curve analysis was used to scrutinize a possible connection between pre-treatment Hb values and RIT status. Results: Two hundred and twenty three patients were included in the study and RIT was diagnosed in 46 (20.6%) patients. The Hb cutoff in ROC curve analysis that separated the patients into two groups was 12.05 g/dL [Area under the curve (AUC): 82.7%; sensitivity: 72.9%; and specificity: 71.3%]. RIT was significantly more prevalent in the Hb ≤ 12 g/dL group than in its counterpart (41.9% vs. 7.3%; p < 0.001). In multivariate analysis, Hb ≤ 12, anemia, pre-C-CRT MMO < 41.4 mm, and masticatory apparatus dose V58 Gy < 32% groups were found to be independently associated with significantly increased rates of RIT. Conclusion: Low pre-C-CRT Hb and anemia status are novel biological markers that independently predict higher RIT rates in LA-NPC undergoing C-CRT.
abstract_id: PUBMED:30864463 Baseline hemoglobin <11.0 g/dL has stronger prognostic value than anemia status in nasopharynx cancers treated with chemoradiotherapy. Background: To retrospectively investigate the influence of pretreatment anemia and hemoglobin levels on the survival of nasopharyngeal carcinoma patients treated with concurrent chemoradiotherapy (C-CRT). Methods: A total of 149 nasopharyngeal carcinoma patients who received C-CRT were included. All patients had received 70 Gy to the primary tumor plus the involved lymph nodes, and 59.4 Gy and 54 Gy to the intermediate- and low-risk neck regions concurrent with 1-3 cycles of cisplatin. Patients were dichotomized into non-anemic and anemic (hemoglobin <12 g/dL (women) or <13 g/dL (men)) groups according to their pre-treatment hemoglobin measures. Receiver operating characteristic (ROC) curve analysis was utilized for accessibility of a pre-treatment hemoglobin cut-off that impacts outcomes. Potential interactions between baseline anemia status and hemoglobin measures and overall survival, locoregional progression-free survival (LRPFS), and progression-free survival were assessed. Results: Anemia was evident in 36 patients (24.1%), which was related to significantly shorter overall survival (P=0.007), LRPFS (P<0.021), and progression-free survival (P=0.003) times; all three endpoints retained significance in multivariate analyses (P<0.05, for each). A baseline hemoglobin value of 11.0 g/dL exhibited significant association with outcomes in ROC curve analysis: hemoglobin <11.0 g/dL (N=26) was linked with shorter median overall survival (P<0.001), LRPFS (P=0.004), and progression-free survival (P<0.001) times, which also retained significance for all three endpoints in multivariate analyses and suggested a stronger prognostic worth for the hemoglobin <11.0 g/dL cut-off value than the anemia status. Conclusion: Pre-C-CRT hemoglobin <11.0 g/dL has a stronger prognostic worth than the anemia status with regard to LRPFS, progression-free survival, and overall survival for nasopharyngeal carcinoma patients. abstract_id: PUBMED:26313452 Is Hemoglobin Level in Patients with Nasopharyngeal Carcinoma Still a Significant Prognostic Factor in the Era of Intensity-Modulated Radiotherapy Technology? Background: Hemoglobin (Hb) levels are regarded as an important determinant of outcome in a number of cancers treated with radiotherapy. However, for patients treated with intensity modulated radiotherapy (IMRT), information regarding the prognostic value of hemoglobin level is scarce. Patients And Methods: A total of 650 patients with nasopharyngeal carcinoma (NPC), enrolled between May, 2005, and November, 2012, were included in this study. The prognostic significance of hemoglobin level (anemia or no-anemia) at three different time points was investigated, including before treatment, during treatment and at the last week of treatment. Univariate and multivariate analyses were conducted using the log-rank test and the Cox proportional hazards model, respectively. Results: The 5-year OS (overall survival) rate of patients who were anemia and no-anemia before treatment were 89.1%, and 80.7% (P = 0.01), respectively. The 5-year DMFS (distant metastasis-free survival) rate of patients who were anemia and no-anemia before treatment were 88.9%, and 78.2% (P = 0.01), respectively. The 5-year OS rate of patients who were anemia and no-anemia during treatment were 91.7% and 83.3% (P = 0.004).
According to multivariate analysis, the pre-treatment Hb level predicted a decreased DMFS (P = 0.007, HR = 2.555, 95% CI 1.294-5.046). Besides, the mid-treatment Hb level predicted a decreased OS (P = 0.013, HR = 2.333, 95% CI 1.199-4.541). Conclusions: Hemoglobin level is a useful prognostic factor in NPC patients receiving IMRT. It is important to control the level of hemoglobin both before and during chemoradiotherapy. abstract_id: PUBMED:29581758 A Prognostic Score for Nasopharyngeal Carcinoma with Bone Metastasis: Development and Validation from Multicenter. Background: To establish a prognostic score based on clinical routine factors to stratify nasopharyngeal carcinoma patients with bone metastasis into risk groups with different survival rates. Materials and Methods: Total 276 patients from multicenter were retrospectively analyzed. Kaplan-Meier method and Cox regression were used to confirm independent risk factors, which were checked for internal validity by bootstrapping method. The prognostic score, deriving from the corresponding regression coefficients in Cox model, classified patients into low and high risk groups. Finally, two independent cohorts were used for external validation. Results: In development cohort, six risk factors were identified: age >46 year-old (point=1), N>0 stage (point=2), anemia (point=2), bone metastasis free interval ≤12 months (point=1), without radiotherapy to primary sites (point=1), and without radiotherapy to first metastasis sites (point=1). The derived prognostic score divided patients into low (score, 0-4) and high (score, 5-8) risk groups, with highly significant differences of 5-year overall survival rates (high vs. low risk: 24.6% vs. 58.2%, HR 3.47, P<0.001). Two external validations presented congruent results. Conclusion: A feasible and applicative prognostic score was successfully established and validated to discriminate bone metastatic nasopharyngeal carcinoma into low/high risk groups, which will be useful for individual treatment. abstract_id: PUBMED:20426909 Continuous fall in hemoglobin level is a poor prognostic factor in patients with nasopharyngeal carcinoma treated with radiotherapy. Background And Objective: Anemia can not only reduce the quality of life of patients with cancer, but also affect their survival. This study was to investigate the prognostic value of hemoglobin (Hb) level in patients with nasopharyngeal carcinoma (NPC) treated with radiotherapy. Methods: Clinical data of 520 NPC patients received definitive radiotherapy between 2000 and 2002 at Sun Yat-sen University Cancer Center were analyzed. Patients were stratified into normal Hb level and anemia groups according to their Hb levels before, during, and after radiation. Anemia was defined according to World Health Organization criteria as Hb level < 130 g/L in men and < 120 g/L in women. Hb continuous decrease group and non decrease group were defined according to Hb changes in the patients during radiotherapy. Loco regional recurrence free survival (LRFS) and overall survival (OS) rates were estimated using the Kaplan Meier method. Multivariate analysis was performed using the Cox model to analyze the prognostic factors. Results: Before radiation, the 5 year LRFS rates were 60.9% in anemia group and 63.9% in normal Hb level group (P = 0.337); the 5 year OS rates were 65.2% and 71.0%, respectively (P = 0.299).
During radiation, the 5 year LRFS rates were 56.7% in anemia group and 67.9% in normal Hb level group (P = 0.013); the 5 year OS rates were 61.0% and 75.9%, respectively (P = 0.001). After radiation, the 5 year LRFS rates were 59.6% in anemia group and 64.9% in normal Hb level group (P = 0.169); the 5 year OS rates were 65.0% and 71.9%, respectively (P = 0.090). The 5 year LRFS and OS rates were significantly lower in Hb continuous decrease group than in Hb non decrease group (59.1% vs. 69.3%, P = 0.032; 66.2% vs. 76.4%, P=0.011). Multivariate analysis showed that the continuous decrease of Hb was an independent prognostic factor for OS. Conclusion: The change in Hb level during radiotherapy is an important prognostic factor affecting the OS of NPC patients. abstract_id: PUBMED:15241828 Prognostic impact of hemoglobin levels on treatment outcome in patients with nasopharyngeal carcinoma treated with sequential chemoradiotherapy or radiotherapy alone. Background: The goal of the current study was to investigate the impact of hemoglobin (Hb) levels on treatment outcome in a randomized Phase III trial of patients with nasopharyngeal carcinoma (NPC) treated with induction chemotherapy followed by radiotherapy or with radiotherapy alone. Methods: Between September 1989 and August 1993, 334 patients with advanced NPC were entered into a randomized trial comparing 3 cycles of induction chemotherapy (cisplatin and epirubicin) followed by radiotherapy with radiotherapy alone. Only evaluable patients who completed radiation were included in the analysis (n = 286). Patients were stratified into normal and low Hb groups according to baseline, preradiation, and midradiation Hb levels. Local recurrence-free, distant metastasis-free, and disease-specific survival rates were estimated using the Kaplan-Meier method. Multivariate analysis was performed using the Cox model. Results: In the chemotherapy arm, the mean baseline, preradiation, and midradiation Hb levels were 13.6, 11.0, and 11.8 g/dL, respectively. In the radiotherapy arm, the mean baseline/preradiation and midradiation Hb levels were 13.7 and 12.9 g/dL, respectively. A midradiation Hb level ≤ 11 g/dL was associated with significantly poorer 5-year local recurrence-free (60% vs. 80%; P = 0.0059) and disease-specific survival rates (51% vs. 68%; P = 0.001), with no difference in distant metastasis-free rates (69% vs. 67%; P = 0.83). No significant difference in treatment outcome according to baseline or preradiation Hb levels was noted. Multivariate analysis showed that a low midradiation Hb level, but not a low baseline or preradiation Hb level, was an independent predictor of local disease recurrence and malignancy-related death. Conclusions: The current study showed that midradiation Hb level was an important prognostic factor with respect to local control and survival in patients with NPC. The high incidence of anemia after chemotherapy has a negative impact on treatment outcome, and this condition may reduce the benefit of induction chemotherapy. Attempts to correct anemia during radiation and the impact of anemia on treatment outcome requires further study. abstract_id: PUBMED:429549 Production of erythrocytes that contain fetal hemoglobin in anemia. Transient in vivo changes.
Serial microscopic immunodiffusion assays of F cells, i.e., erythrocytes that contain fetal hemoglobin (HbF), in four individuals recovering from anemia demonstrate initial increases in the percentage of circulating reticulocytes that contain HbF (F reticulocytes) and subsequent increases in the percentage of mature erythrocytes that contain HbF (F erythrocytes). In one individual responding to therapy for iron-deficiency anemia, the average percentage of F reticulocytes increased from 4.8±1.1 to 16.0±2.8% (mean±SD), while the mean level of F erythrocytes increased from 3.5±0.7 to 7.2±0.6%. Two normal children with transient erythroblastopenia exhibited F reticulocyte percentages of 71.3±6.7 and 41.5±1.5%, respectively, when erythropoiesis resumed. With recovery these values fell to finally measured values of 33.7±4.7 and 12.6±1.1%, respectively. In an adolescent with sickle cell anemia, F-reticulocyte percentages fluctuated between 0.6±1.1 and 34.0±2.8% and paralleled the rise and fall of total reticulocytes associated with therapy for a nasopharyngeal carcinoma. Such findings suggest that first, the production of F cells and non-F cells are separately regulated. Second, F-cell production is preferentially stimulated during escape from erythropoietic suppression and selectively depressed at the start of suppression. Third, during escape from erythropoietic suppression, F-cell production in vivo resembles that reported for in vitro cultures of erythroid stem cells. Fourth, individuals with sickle cell anemia, like individuals without hemoglobinopathies, can change their relative level of F-cell production. abstract_id: PUBMED:26358251 Pretreatment anemia and survival in nasopharyngeal carcinoma. Due to the low incidence of pretreatment anemia in nasopharyngeal carcinoma (NPC), the true prognostic impact of pretreatment anemia may have been underestimated before. We retrospectively analyzed the association of pretreatment anemia with disease-specific survival (DSS), distant-metastasis-free survival (DMFS), and locoregional-relapse-free survival (LRFS) by Cox regression in a cohort of 5830 patients, stratifying by midtreatment anemia, smoking, body mass index (BMI), etc. Pretreatment anemia was significantly associated with adverse DSS (hazard ratio (HR) = 2.15, 95% confidence interval (CI) 1.62-2.85, P < 0.001) and DMFS (HR = 1.53, 95% CI 1.08-2.17, P = 0.018), compared to patients with normal hemoglobin, after adjusting for covariates. Moreover, the association with DSS remained unchanged regardless of smoking status and clinical stage, whereas it was limited in the subgroups of above 45 years, male sex, and BMI <25 kg/m². With restriction to midtreatment anemic patients, pretreatment anemia was still strongly correlated with inferior DSS and DMFS. This study, in the largest reported cohort, is the first to show the adverse prognostic impact of pretreatment anemia on DSS and DMFS in NPC. abstract_id: PUBMED:12855259 Design of a prognostic index score for metastatic nasopharyngeal carcinoma. The survival outcome of patients with systemic cancer differs significantly between individuals even within the same tumour type. We set out to illustrate this by analysing the factors determining survival in patients with metastatic disease from nasopharyngeal carcinoma (NPC) and to design a scoring system based on these prognostic factors. Patients referred between January 1994 and December 1999 were retrospectively analysed.
Factors analysed included patient (age group, gender, performance status (PS) at diagnosis of metastases), disease (number of metastatic sites, specific metastatic sites, disease-free interval (DFI), metastases at presentation, presence of locoregional recurrence), and laboratory factors (leucocyte count, haemoglobin level, albumin level). Univariate and multivariable analyses were performed using the Cox proportional hazards model. A numerical score was derived from the regression coefficients of each independent prognostic variable. The prognostic index score (PIS) of each patient was calculated by totalling up the scores of each independent variable. Independently significant, negative prognostic factors were liver metastasis, lung metastasis, anaemia, poor PS, distant metastasis at initial diagnosis, and a DFI of <6 months. Three prognostic groups based on the PIS were obtained: (i) good risk (PIS=0-6); (ii) intermediate risk (7-10); (iii) poor risk (≥11). The median survivals for these groups were 19.5, 10, and 5.8 months, respectively (log-rank test: P<0.0001). The variable prognosis of patients with disseminated NPC can be assessed by using easily available clinical information (patient, disease and laboratory factors). The PIS system will need to be validated on prospectively collected data of another cohort of patients. Answer: The prognostic significance of hemoglobin concentration (HC) in nasopharyngeal carcinoma (NPC) is supported by several studies, which suggest that lower HC levels, particularly treatment-induced anemia, can have a negative effect on treatment outcomes. One study found that severe anemia (SA) and massive decrease in HC (MDHC) during treatment with chemotherapy (CT) and radiotherapy (RT) adversely affected locoregional control (LRC), disease-free survival (DFS), and overall survival (OS) in NPC patients. The presence of MDHC worsened 5-year LRC, DFS, and OS rates, indicating that correction of HC before RT could potentially improve the efficacy of RT (PUBMED:14598613). Another study demonstrated that low pre-treatment hemoglobin levels and anemia status were independent biological markers predicting higher rates of radiation-induced trismus (RIT) in patients undergoing concurrent chemoradiotherapy (C-CRT) for locally advanced NPC (PUBMED:37154238). Furthermore, a baseline hemoglobin value of less than 11.0 g/dL was found to have a stronger prognostic value than anemia status, being significantly associated with shorter median overall survival, locoregional progression-free survival (LRPFS), and progression-free survival times in NPC patients treated with C-CRT (PUBMED:30864463). In the era of intensity-modulated radiotherapy (IMRT), hemoglobin level remains a useful prognostic factor in NPC patients. Controlling hemoglobin levels both before and during chemoradiotherapy is important, as pre-treatment and mid-treatment anemia were associated with decreased DMFS and OS (PUBMED:26313452). Additionally, a continuous fall in hemoglobin level during radiotherapy was identified as a poor prognostic factor, with significant impacts on loco-regional recurrence-free survival (LRFS) and overall survival (OS) rates (PUBMED:20426909). Overall, these studies indicate that both pretreatment anemia and treatment-induced anemia are associated with poorer survival outcomes in NPC, suggesting that maintaining adequate hemoglobin levels during treatment could be beneficial for patient prognosis.
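Two of the abstracts above (PUBMED:29581758 and PUBMED:12855259) build additive prognostic scores from Cox regression coefficients and then split patients into risk groups at fixed cut-offs. As a concrete illustration of how such a score is applied once the points are known, the sketch below encodes the point values and the low/high-risk cut-off reported for bone-metastatic nasopharyngeal carcinoma; the function name, argument names, and the example patient are invented for the illustration and are not taken from the source studies.

# Illustrative sketch of the point-based prognostic score reported for bone-metastatic
# nasopharyngeal carcinoma (PUBMED:29581758). Point values and the low/high-risk cut-off
# follow the abstract; the field names below are mine, not the authors'.
def bone_metastasis_prognostic_score(age, n_stage_positive, anemic,
                                     metastasis_free_interval_months,
                                     rt_to_primary, rt_to_first_metastasis):
    score = 0
    score += 1 if age > 46 else 0                               # age > 46 years: 1 point
    score += 2 if n_stage_positive else 0                       # N > 0 stage: 2 points
    score += 2 if anemic else 0                                 # anemia: 2 points
    score += 1 if metastasis_free_interval_months <= 12 else 0  # interval <= 12 months: 1 point
    score += 1 if not rt_to_primary else 0                      # no RT to primary site: 1 point
    score += 1 if not rt_to_first_metastasis else 0             # no RT to first metastatic site: 1 point
    risk_group = "low (score 0-4)" if score <= 4 else "high (score 5-8)"
    return score, risk_group

# Hypothetical patient: 52 years old, node-positive, anemic, relapse after 8 months,
# irradiated at the primary site but not at the first metastatic site.
score, group = bone_metastasis_prognostic_score(
    age=52, n_stage_positive=True, anemic=True,
    metastasis_free_interval_months=8, rt_to_primary=True, rt_to_first_metastasis=False)
print(score, group)  # 7, "high (score 5-8)"

The same pattern, summing the points and comparing the total against a published cut-off, applies to the metastatic-NPC prognostic index score and its good/intermediate/poor risk grouping.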
Instruction: Can quality of movement be measured? Abstracts: abstract_id: PUBMED:31645813 Relationship between depression and movement quality in normal young adults. [Purpose] Somatic symptoms, such as hyposomnia and anorexia, can affect depression and result in decreased movement quality; however, the relationship between movement quality and somatic symptoms is unclear. The present study investigated the characteristics of this relationship using observable posture and movements. [Participants and Methods] Twenty healthy young adults (mean age 20.9 ± 0.4 years) participated in the study. First, we administered the Self-rating Depression Scale questionnaire to the participants. Second, a qualified physical therapist assessed the movement quality of the volunteers using the Body Awareness Rating Scale-Movement Quality and Experience. We classified the participants into high- and low-score groups based on their Self-rating Depression Scale scores and then analyzed the Body Awareness Rating Scale-Movement Quality and Experience scores in both groups. [Results] There was a significant difference in the movement quality between the Self-rating Depression Scale high- and low-score groups. The Self-rating Depression Scale total score ranged from 20 to 80, with higher scores indicating a greater severity of depression. The low-score group had higher scores for the Body Awareness Rating Scale-Movement Quality and Experience than the high-score group in all items. [Conclusion] The present study showed that movement quality was related to symptoms of depression, thereby suggesting that early detection and treatment interventions for depression are possible by an assessment of movement quality. abstract_id: PUBMED:37511949 Pilot Study: The Relationship between Foot Posture and Movement Quality in Non-Professional Male Football Players. (1) Background: This study focusses on establishing the relationship between quality of movement (based on the functional movement screen, or FMS) and foot posture (based on the foot posture index, or FPI-6). We hypothesised that a poor FMS test score may be derived from the foot position in the space measured by FPI-6. (2) Methods: a quasi-experimental and cross-sectional study was designed to analyse foot posture in 30 healthy football players, using the foot posture index and the functional movement screen. (3) Results: No significant relationships were found between movement quality and foot posture. Poor movement quality is present in more than half of all foot positions, supination, pronation, and neutral. Good quality seems to be more associated with a neutral foot position (23.3%) and supinated (16.6%) than a pronated foot position (6.6%). (4) Conclusions: this study found no relationship between the two tests; therefore, we cannot demonstrate that foot posture is relevant in the quality of the movement of the football players studied. abstract_id: PUBMED:38013786 Characteristics of 24-hour movement behaviours and their associations with mental health in children and adolescents. Background: Time-use estimates are typically used to describe 24-hour movement behaviours. However, these behaviours can additionally be characterised by other easily measured metrics. These include sleep quality (e.g., sleep efficiency), 24-hour rest-activity rhythmicity (e.g., between-day rhythm variability), and directly measured acceleration metrics (e.g., intensity gradient). Associations between these characteristics and youth mental health are unclear. 
This study aimed to [1] compare 24-hour movement behaviour characteristics by sex and age groups, [2] determine which movement behaviour characteristics were most strongly associated with mental health outcomes, and [3] investigate the optimal time-use behaviour compositions for different mental health outcomes. Methods: Three-hundred-and-one children and adolescents (age 9-13 y; 60% girls) wore accelerometers for 24-hours/day over 7-days. Overall mental health, externalising, and internalising problems were self-reported using the Strengths and Difficulties Questionnaire. 24-hour movement behaviour characteristics were categorised as time-use estimates, sleep quality, 24-hour activity rhythmicity, and directly measured acceleration. Linear mixed models and compositional data analysis were used to analyse the data in alignment with the study aims. Results: Time-use estimates, directly measured accelerations, and 24-hour rest-activity rhythm metrics indicated that children were significantly more physically active (p = .01-<0.001) than adolescents. Children were also less sedentary (p < .01), slept longer (p = .02-0.01), and had lower sleep efficiency. Boys were significantly more active than girls (p < .001) who in turn accrued more time in sleep (p = .02). The timing of peak activity was significantly later among adolescents (p = .047). Overall mental health and externalising problems were significantly associated with sleep, sedentary time, sleep efficiency, amplitude, and inter-daily stability (p = .04-0.01). The optimal time-use compositions were specific to overall mental health and externalising problems and were characterised by more sleep, light and vigorous physical activity, and less sedentary time and moderate physical activity than the sample's mean time-use composition. Conclusions: Extracting and examining multiple movement behaviour characteristics from 24-hour accelerometer data can provide a more rounded picture of the interplay between different elements of movement behaviours and their relationships with mental health than single characteristics alone, such as time-use estimates. Applying multiple movement behaviour characteristics to the translation of research findings may enhance the impact of the data for research users. Supplementary Information: The online version contains supplementary material available at 10.1186/s44167-023-00021-9. abstract_id: PUBMED:31009266 A vocabulary describing health-terms of movement quality - a phenomenological study of movement communication. Purpose: The aim of the study was to develop a vocabulary targeting communication of health-terms of movement quality, establishing professional knowledge of a movement terminology useful within rehabilitation. Methods: A phenomenological study design was chosen, inviting movement experts working in rehabilitation to describe movement observations when a change into more functional, health related ways of moving appeared in the rehabilitation processes. 15 physiotherapy experts were recruited, five from the field of neurology, primary health care and psychiatry. The informants had between 12-38 years of clinical practice, treating patients of all ages with a wide spectrum of diagnoses. Data collection followed a qualitative study design, of individual, in-depth interviews, based on a semi-structured interview guide. The interviews were taped, transcribed and sent to the informants for validation. Data analysis followed recommendation of Giorgi, modified by Malterud.
Ethical considerations were followed. Results: Data revealed a vocabulary, clustered in five themes (Biomechanical, Physiological, Psycho-socio-cultural, Existential and Overarching perspective), 16 underlying categories and 122 descriptive health-terms of movement quality. Conclusion: The study demonstrated a multi-perspective movement vocabulary of 122 health characteristic terms, developed to facilitate movement communication within the broad field of rehabilitation. The result calls for further research concerning a movement vocabulary. Implications for Rehabilitation: The phenomenon of movement quality has a potential for promoting rehabilitation-specific skills. A vocabulary describing health-terms of movement quality is useful within the overall rehabilitation field, providing enhanced and specific health directed communication. A movement specific health-terminology will have impact on implications and facilitate a person-centered and goal directed rehabilitation. Rehabilitation professionals will have a multi-perspective, movement specific and structured terminology to communicate directly and concretely with patients, the multi-professional team, in society, and with politicians. abstract_id: PUBMED:31681501 RELIABILITY of a MOVEMENT QUALITY ASSESSMENT TOOL to GUIDE EXERCISE PRESCRIPTION (MOVEMENTSCREEN). Background/purpose: Movement quality is commonly assessed to identify movement limitations and guide exercise prescription. Rapid growth in the movement assessment landscape has led to the development and utilization of various movement quality assessments, many without reliability estimates. MovementSCREEN is a novel, tablet-based, video-recorded movement assessment tool, currently without published reliability information. Therefore, the purpose of this study was to determine the intra and inter-rater reliability of the MovementSCREEN, including the impact of rater experience, and provide estimates of measurement error and minimal detectable change. Study Design: Cross-sectional design; reliability study. Methods: Thirty healthy young adults (14M:16F, mean age 28.4 yrs, SD 9.1) were video recorded completing the nine MovementSCREEN assessment items on two occasions, two weeks apart. Each individual movement was assessed against objective scoring criteria (component items: yes/no) and using a 100-point sliding scale. To create an overall score for each movement, the scale score is weighted against the objective items to provide a score out of 100. At the completion of all nine individual movements, a mean composite score of movement quality is also established (0-100). The first recording was scored twice by two expert and two novice assessors to investigate inter- and intra-rater reliability. The second recording was scored by one expert assessor to investigate within-subject error. Inter- and intra-rater reliability was calculated using intraclass correlation coefficients (ICCs) and Kappa statistics. The standard error of measurement (SEM), and minimal detectable change (MDC95) for the overall score for each movement, and the composite score of movement quality, were calculated. Results: Intra-rater reliability for the component items ranged from κ = 0.619 - 1.000 (substantial to near perfect agreement) and 0.233 - 1.000 (slight to near perfect agreement) for expert and novice assessors, respectively. The ICCs for the overall movement quality scores for each individual movement ranged from 0.707 - 0.952 (fair to high) in expert and 0.502 - 0.958 (poor to high) in novice assessors.
Inter-rater agreement for the component items between expert assessors ranged from κ = 0.242 - 1.000 (slight to almost perfect agreement), while for novice assessors ranged from 0.103 - 1.000 (less than chance to almost perfect agreement). ICCs for the overall scores for each individual movement from expert and novice assessors ranged from 0.294 - 0.851 (poor to good) and 0.249 - 0.775 (poor to fair), respectively. The SEM for the composite score was 2 points, while the MDC95 was 6 points, with an ICC 0.901. Conclusions: The MovementSCREEN can assess movement quality with fair to high reliability on a test-retest basis when used by experienced assessors, although reliability scores decrease in novice assessors. Comparisons between assessors involve greater error. Therefore, the training of inexperienced assessors is recommended to improve reliability. Level Of Evidence: 2b. abstract_id: PUBMED:26776538 Comparison of Prescribed and Measured Dialysate Sodium: A Quality Improvement Project. Background: There is controversy regarding the optimal dialysate sodium concentration for hemodialysis patients. Dialysate sodium concentrations of 134 to 138 mEq/L may decrease interdialytic weight gain and improve hypertension control, whereas a higher dialysate sodium concentration may offer protection to patients with low serum sodium concentrations and hypotension. We conducted a quality improvement project to explore the hypothesis that prescribed and delivered dialysate sodium concentrations may differ significantly. Study Design: Cross-sectional quality improvement project. Setting & Participants: 333 hemodialysis treatments in 4 facilities operated by Dialysis Clinic, Inc. Quality Improvement Plan: Measure dialysate sodium to assess the relationships of prescribed and measured dialysate sodium concentrations. Outcomes: Magnitude of differences between prescribed and measured dialysate sodium concentrations. Measurements: Dialysate sodium measured pre- and late dialysis. Results: The least square mean of the difference between prescribed minus measured dialysate sodium concentration was -2.48 (95% CI, -2.87 to -2.10) mEq/L. Clinics with a greater number of different dialysate sodium prescriptions (clinic 1, n=8; clinic 2, n=7) and that mixed dialysate concentrates on site had greater differences between prescribed and measured dialysate sodium concentrations. Overall, 57% of measured dialysate sodium concentrations were within ±2 mEq/L of the prescribed dialysate sodium concentration. Differences were greater at higher prescribed dialysate sodium concentrations. Limitations: We only studied 4 facilities and dialysate delivery machines from 2 manufacturers. Because clinics using premixed dialysate used the same type of machine, we were unable to independently assess the impact of these factors. Pressures in dialysate delivery loops were not measured. Conclusions: There were significant differences between prescribed and measured dialysate sodium concentrations. This may have beneficial or deleterious effects on clinical outcomes, as well as confound results from studies assessing the relationships of dialysate sodium concentrations to outcomes. Additional studies are needed to identify factors that contribute to differences between prescribed and measured dialysate sodium concentrations. Quality assurance and performance improvement (QAPI) programs should include measurements of dialysate sodium. abstract_id: PUBMED:31096132 Reliability of upper limb movement quality metrics during everyday tasks. 
Background: Quantitative assessments of an individual's functional status commonly involve the use of movement quality metrics. Research Question: The purpose of this work was to quantify the reliability of movement quality metrics in healthy adults during a variety of unconstrained activities of daily living (ADLs). Methods: Nineteen participants performed six ADLs (lifting a laundry basket, applying deodorant, turning a doorknob, placing a pill in a pillbox, placing a pushpin in a bulletin board, and drinking water from a glass) during two separate sessions. The ADLs were divided into reaching and object manipulation phases. Movement quality for each phase was assessed using three measures of smoothness (log dimensionless jerk, spectral arc length, and number of submovements) and one measure of straightness (index of curvature). Within- and between-session reliability was quantified using intraclass correlation coefficients (ICCs) and minimum detectable changes in measured units and as a percentage of their mean value (MDC%). Results: Reliability was generally lower within-session than between-session and for object manipulation tasks compared to reaching tasks. The ICCs exceeded 0.75 for 5% of the within-session metrics and 73% of the between-session metrics. The average MDC% was 35% for the within-session metrics and 20% for the between-session metrics. Reliability was similar for most metrics when averaged across the tasks, but the number of submovements consistently indicated much lower reliability. Significance: Unconstrained ADLs can reliably be used to assess movement quality in functional settings that mimic real-world challenges. However, the specific movement quality metrics used in the assessment should be chosen carefully since some metrics perform dissimilarly when applied to the same data. In particular, it may be advisable to use the number of submovements in combination with other metrics, if it is to be used at all. abstract_id: PUBMED:38500546 Evaluation of hop test movement quality to enhance return to sport testing. A cross-sectional study. Introduction: Return to Sport tests with functional hop tests are often used to decide when a person is ready to return to sport after an anterior cruciate ligament (ACL) injury. Poor movement quality, such as knee valgus, hip adduction and hip internal rotation is considered a risk factor for ACL injury. However, it is unclear whether existing tests adequately cover the aspect of movement quality. This study aims to investigate whether there is a relationship between the calculated limb symmetry index (LSI) of hop tests as an indication of performance and the total score of the "Quality First" assessment (movement quality). The second aim is to examine the reliability of the newly developed "Quality First" assessment for evaluating movement quality in hop tests. Methods: The cross-sectional study recruited 34 patients with an ACL reconstruction. The vertical hop, single-leg hop for distance, and side hop tests were performed and recorded. The video recordings were assessed using the "Quality First" assessment. The Spearman correlation coefficient was calculated using the LSI and the "Quality First" total score. Intraclass correlation coefficients (ICC) and standard error of measurements (SEM) were used to calculate intra- and interrater reliability. In addition, the minimal detectable change (MDC) was determined. 
Results: The correlation test between the LSI and the "Quality First" total score showed no correlation for all three jumps (r = -0.1-0.02/p-value = 0.65-0.93). The interrater reliability of the "Quality First" assessment showed fair to good reliability (ICC2: 0.45-0.60), with SEM ranging from 1.46 to 1.73 and the MDC from 4.06 to 4.8. Intrarater reliability was good to excellent (ICC3: 0.73-0.85), with SEM values ranging from 0.89 to 1.09 and the MDC from 2.47 to 3.01. Conclusion: The quality of movement, measured with the "Quality First" assessment, indicated no correlation with the calculated LSI from jump performance, therefore movement quality should also be examined in Return to Sport tests. The "Quality First" assessment shows fair to good reliability when used by different raters. When used multiple times by the same rater, the assessment has good to excellent reliability. abstract_id: PUBMED:30470314 The impact of non-motor symptoms on the health-related quality of life in patients with functional movement disorders. Objective: Pain, fatigue, cognitive complaints and psychiatric comorbidities are common in patients with functional movement disorder and may significantly affect their quality of life. The aim of the study was to assess the impact of motor and non-motor symptoms on health-related quality of life in patients with functional movement disorder. Methods: Sixty-one patients with clinically established functional movement disorder and 61 matched healthy controls completed standardized questionnaires for depression, anxiety, cognitive complaints, fatigue, pain, sleepiness, apathy and health-related quality of life. Motor disorder severity was assessed using The Simplified Functional Movement Disorders Rating Scale. Personality traits were assessed using the 44-Item Big Five Inventory. Results: Compared to controls, patients reported significantly lower health-related quality of life and higher levels of all assessed non-motor symptoms except for apathy. No difference was found in personality traits. In both groups, health-related quality of life scores negatively correlated with depression, anxiety, pain, cognitive complaints, apathy, and neuroticism. No correlation was found between health-related quality of life and motor symptom severity in patients with functional movement disorder. Multiple regression analysis of the predictors of health-related quality of life showed significant impact of trait anxiety and cognitive complaints scores. Conclusions: Multiple non-motor symptoms but not motor symptom severity correlated with impaired health-related quality of life in patients with functional movement disorder. Impaired health-related quality of life was predicted by anxiety and cognitive complaints. Our results highlight the importance of assessing and treating both motor and non-motor symptoms in patients with functional movement disorder. abstract_id: PUBMED:33011646 Automated classification of movement quality using the Microsoft Kinect V2 sensor. Practitioners commonly perform movement quality assessment through qualitative assessment protocols, which can be time-intensive and prone to inter-rater measurement bias. The advent of portable and inexpensive marker-less motion capture systems can improve assessment through objective joint kinematic analysis. The current study aimed to evaluate various machine learning models that used kinematic features from Kinect position data to classify a performer's Movement Competency Screen (MCS) score. 
A Kinect V2 sensor collected position data from 31 physically active males as they performed bilateral squat, forward lunge, and single-leg squat; and the movement quality was rated according to the MCS criteria. Features were extracted and selected from domain knowledge-based kinematic variables as model input. Multiclass logistic regression (MLR) was then performed to translate joint kinematics into MCS score. Performance indicators were calculated after a 10-fold cross validation of each model developed from Kinect-based kinematic variables. The analyses revealed that the models' sensitivity, specificity, and accuracy ranged from 0.66 to 0.89, 0.58 to 0.86, and 0.74 to 0.85, respectively. In conclusion, the Kinect-based automated movement quality assessment is a suitable, novel, and practical approach to movement quality assessment. Answer: Yes, quality of movement can be measured. Various studies have developed and utilized different assessment tools and metrics to evaluate movement quality in different populations. For instance, the Body Awareness Rating Scale-Movement Quality and Experience was used to assess movement quality in relation to symptoms of depression in young adults, showing that movement quality was related to depression symptoms (PUBMED:31645813). Similarly, the functional movement screen (FMS) and the foot posture index (FPI-6) were used to investigate the relationship between movement quality and foot posture in non-professional male football players, although no significant relationship was found in this particular study (PUBMED:37511949). In children and adolescents, characteristics of 24-hour movement behaviors, including sleep quality and 24-hour rest-activity rhythmicity, were associated with mental health outcomes, suggesting that multiple movement behavior characteristics can provide insights into the interplay between movement behaviors and mental health (PUBMED:38013786). A vocabulary describing health-terms of movement quality was also developed to facilitate communication within rehabilitation, indicating that movement quality can be described and communicated using specific terms (PUBMED:31009266). The reliability of movement quality assessment tools, such as MovementSCREEN, has been studied, showing that movement quality can be assessed with fair to high reliability, especially when used by experienced assessors (PUBMED:31681501). Additionally, movement quality metrics during everyday tasks have been quantified for reliability in healthy adults, with various measures of smoothness and straightness used to assess movement quality (PUBMED:31096132). Furthermore, the "Quality First" assessment was developed to evaluate movement quality in hop tests for patients with an anterior cruciate ligament (ACL) reconstruction, showing that movement quality should be examined in Return to Sport tests and that the assessment has fair to good reliability (PUBMED:38500546). Lastly, non-motor symptoms such as pain and cognitive complaints were found to impact health-related quality of life in patients with functional movement disorders, highlighting the importance of assessing both motor and non-motor symptoms (PUBMED:30470314). Machine learning models have also been used to classify movement quality using kinematic features from Kinect position data, providing a novel and practical approach to movement quality assessment (PUBMED:33011646). 
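The Kinect study above describes turning domain-knowledge kinematic features into an MCS score with multiclass logistic regression evaluated by 10-fold cross-validation. A minimal sketch of such a pipeline, with entirely hypothetical feature and label arrays and scikit-learn assumed (this is not the authors' code):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row of kinematic features per recorded movement trial,
# and one MCS rating per trial as the class label.
rng = np.random.default_rng(0)
X = rng.random((93, 12))              # placeholder feature matrix
y = rng.integers(1, 4, size=93)       # placeholder MCS scores (1-3)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")   # 10-fold cross-validation
print(scores.mean())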
Overall, these studies demonstrate that quality of movement can indeed be measured using various tools and methods, and it is an important factor in health and rehabilitation contexts.
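Several of the studies summarized in this answer report agreement as an intraclass correlation coefficient. As a rough illustration, a two-way random-effects, absolute-agreement, single-measures ICC (Shrout-Fleiss ICC(2,1)) can be computed from an n-subjects-by-k-raters matrix as below; the formula is the standard one and is not taken from any of the cited papers.

import numpy as np

def icc_2_1(ratings):
    # ratings: array of shape (n_subjects, k_raters)
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)                                  # per-subject means
    col_means = x.mean(axis=0)                                  # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)        # between-subjects mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)        # between-raters mean square
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))              # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Made-up ratings from two raters on five subjects:
print(round(icc_2_1([[4, 5], [2, 2], [5, 4], [3, 3], [1, 2]]), 2))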
Instruction: Is dynamic contrast-enhanced MRI useful for assessing proximal fragment vascularity in scaphoid fracture delayed and non-union? Abstracts: abstract_id: PUBMED:23653220 Is dynamic contrast-enhanced MRI useful for assessing proximal fragment vascularity in scaphoid fracture delayed and non-union? Objective: To assess dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) as a measure of vascularity in scaphoid delayed-union or non-union. Materials And Methods: Thirty-five patients (34 male, one female; mean age, 27.4 ± 9.4 years; range, 16-51 years) with scaphoid delayed-union and non-union who underwent DCE MRI of the scaphoid between September 2002 and October 2012 were retrospectively reviewed. Proximal fragment vascularity was classified as good, fair, or poor on unenhanced MRI, contrast-enhanced MRI, and DCE MRI. For DCE MRI, enhancement slope, Eslope comparison of proximal and distal fragments was used to classify the proximal fragment as good, fair, or poor vascularity. Proximal fragment vascularity was similarly graded at surgery in all patients. Paired t test and McNemar test were used for data comparison. Kappa value was used to assess level of agreement between MRI findings and surgical findings. Results: Twenty-five (71 %) of 35 patients had good vascularity, four (11 %) had fair vascularity, and six (17 %) had poor vascularity of the proximal scaphoid fragment at surgery. DCE MRI parameters had the highest correlation with surgical findings (kappa = 0.57). Proximal scaphoid fragments with surgical poor vascularity had a significantly lower Emax and Eslope than those with good vascularity (p = 0.0043 and 0.027). The sensitivity, specificity, positive and negative predictive value and accuracy of DCE MRI in predicting impaired vascularity was 67, 86, 67, 86, and 80 %, respectively, which was better than that seen with unenhanced and post-contrast MRI. Flattened time intensity curves in both proximal and distal fragments were a feature of protracted non-union with a mean time interval of 101.6 ± 95.5 months between injury and MRI. Conclusions: DCE MRI has a higher diagnostic accuracy than either non-enhanced MRI or contrast enhanced MRI for assessing proximal fragment vascularity in scaphoid delayed-union and non-union. For proper interpretation of contrast-enhanced studies in scaphoid vascularity, one needs to incorporate the time frame between injury and MRI. abstract_id: PUBMED:15336746 Gadolinium enhanced MR assessment of proximal fragment vascularity in nonunions after scaphoid fracture: does it predict the outcome of reconstructive surgery? This study investigated whether the outcome of bone graft and internal fixation surgery for nonunion of scaphoid fractures could be predicted by gadolinium-enhanced MR assessments of proximal fragment vascularity. Sixteen established scaphoid fracture nonunions underwent gadolinium-enhanced MR scanning before surgical treatment with bone grafting and internal fixation. No relationship was found between MR enhancement and the outcome of surgery. Union was achieved in eight of the 12 nonunions with more than 50% enhancement, and three of the four with less than 50% enhancement, of the proximal pole. Furthermore, union was achieved in both of the nonunions which had less than 25% enhancement of the proximal pole. We conclude that enhanced MR assessments of the vascularity of the proximal fragment of a scaphoid fracture nonunion do not accurately predict the outcome of reconstructive surgery. 
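The sensitivity, specificity, predictive values and accuracy quoted for DCE MRI above all derive from a 2x2 cross-tabulation of the imaging call against the surgical reference standard. A generic sketch of those definitions follows; the counts are illustrative only (chosen so the ratios echo the reported percentages) and are not the study's actual table.

# tp: impaired vascularity on both DCE MRI and at surgery; tn: unimpaired on both;
# fp: called impaired on MRI only; fn: impaired at surgery but missed on MRI.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

print(diagnostic_metrics(tp=4, fp=2, fn=2, tn=12))   # ~0.67, 0.86, 0.67, 0.86, 0.80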
abstract_id: PUBMED:25145596 Usefulness of dynamic contrast-enhanced MRI in the evaluation of the viability of acute scaphoid fracture. Objective: To evaluate the usefulness of dynamic gadolinium-enhanced magnetic resonance imaging (MRI) for assessing the viability of the proximal pole of the scaphoid in patients with acute scaphoid fractures. Methods: Eighteen consecutive patients with acute scaphoid fracture who underwent dynamic gadolinium-enhanced MRI 7 days or less before surgery were prospectively included between August 2011 and December 2012. All patients underwent MR imaging with unenhanced images, enhanced images, and dynamic enhanced images. A radiologist first classified the MRI results as necrotic or viable based on T1- and T2-weighted images only, followed by a second blinded interpretation, this time including analysis of pre- and post-gadolinium administration images and a third blinded interpretation based on the time-intensity curve of the dynamic enhanced study. The standard of reference was the histologic assessment of a cylindrical specimen of the proximal pole obtained during surgery in all patients. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for unenhanced, enhanced, and dynamic gadolinium-enhanced MRI studies. Results: The sensitivity, specificity, PPV, and NPV were 67, 67, 50, and 80 % for unenhanced images, 83, 100, 100, and 92 for enhanced images, and 83, 92, 83, and 92 for dynamic contrast-enhanced images. Conclusions: Our data are consistent with previously reported data supporting contrast-enhanced MRI for assessment of viability, and showing that dynamic imaging with time-intensity curve analysis does not provide additional predictive value over standard delayed enhanced imaging for acute scaphoid fracture. abstract_id: PUBMED:34274497 Reconstruction of proximal pole scaphoid non-union with avascular necrosis using proximal hamate: A four-case series. Union of proximal pole scaphoid fracture is challenging because the vascular supply is limited. The avascular necrosis accompanying non-union and progressive fragmentation make surgical reconstruction difficult. The present study aimed to report short-term results of reconstruction with proximal hamate in proximal pole scaphoid non-union with avascular necrosis. Four patients (3 male, 1 female) were assessed for non-union following proximal pole scaphoid fracture. Mean age was 25.7 years (range, 22-35 years). Mean interval between injury and reconstruction was 7.5 years (range, 5-9 years). In 3 patients, no treatment was given following the fracture and in the other internal fixation was performed following cast immobilization. The non-united avascular proximal scaphoid was resected and reconstructed with proximal hamate autograft. Mean follow-up was 14.0 months (range, 9-22 months) after reconstruction. At the last control, cases were evaluated in terms of clinical and functional status, and radiologic union was determined on CT scan. Three patients reported subjective decrease in pain. Mean grip strength proportionally to the unaffected side was 24% (range, 8-51%), mean flexion-extension was 55.3% (range, 21-60%), and radial-ulnar deviation was 63.6% (range, 25-100%). Mean QuickDASH score was 32.3 (range, 13.6-52.2). Union was observed in all patients. Proximal hamate reconstruction provided union and pain relief, but only moderate objective functional results. 
It can be used as an alternative technique in proximal pole scaphoid non-union with avascular necrosis. abstract_id: PUBMED:34550858 A meta-analysis of union rate after proximal scaphoid fractures: terminology matters. Heterogeneity in the anatomical definition of 'proximal' affects the comparison of outcomes of these scaphoid fractures. This study aims to review published outcomes of all variants to determine both differences in terminology and union rate based upon definition. A literature search was conducted to identify articles that reported descriptions and union rate of all acute (<8 weeks of injury) proximal scaphoid fractures in adult patients (>16 years old). Proximal fractures were grouped as reported ('third', 'pole', 'fifth' or 'undefined'). The data were pooled using a fixed-effects method, and a meta-analysis was conducted to compare relative risk (RR) of non-union against non-proximal fractures. Qualitative analysis of 12 articles included three main definitions: 'proximal' (1 article), 'proximal third' (3 articles), and 'proximal pole' (8 articles). Only 6 articles adopted a specific anatomical or ratio description. In a pooled meta-analysis of union rates (15 articles), 'proximal third' and 'proximal pole' fractures demonstrated a relative risk (RR) of non-union of 2.3 and 3.4 in comparison to non-proximal fractures, respectively. Operative management yielded lower non-union rates than non-operative for all fracture types (6% vs. 18%). In conclusion, non-union risk varies depending on definition, with non-standardised classifications adding heterogeneity to reported outcomes. We recommend an approach utilizing fixed anatomical landmarks on plain radiographs (referencing scaphoid length and scapho-capitate joint) to standardise reporting of proximal fracture union in future studies. Abbreviations: CI: confidence intervals; CT: computer tomography; Df: degrees of freedom; DL: DerSimonian and Laird estimator; MRI: magnetic resonance imaging; NICE: national institute for health and care excellence; OTA: orthopaedic trauma association; PA: posterior-anterior; PRISMA: preferred reporting items for systematic reviews and meta analyses; RCT: randomised controlled trial; RR: relative risk; SNAC: scaphoid non-union advanced collapse; UK: United Kingdom. abstract_id: PUBMED:9509596 Gadolinium-enhanced dynamic MRI of the fractured carpal scaphoid: preliminary results. The aim of the present report was to evaluate the vascularity of fracture fragments of the fractured carpal scaphoid in the acute (< 4 weeks) and chronic (> 3 months) phases using a gadolinium-enhanced dynamic MRI sequence. Eight patients with acute scaphoid fractures, six patients with chronic scaphoid fractures, and three control patients without fractures were evaluated using a T1-weighted fast spoiled gradient recalled acquisition (fSPGR) sequence with gadolinium-DTPA enhancement (0.1 mmol/kg bodyweight). Signal intensity over time plots were obtained using region of interest measurements from both fracture fragments. Enhancement factors (EF) were then calculated from the plots. No enhancement of the scaphoid was seen in control subjects (EF: distal scaphoid pole 1.04 +/- 0.01, proximal pole 1.07 +/- 0.08). In acute fracture patients, enhancement of the distal pole was greater than that of the proximal in all cases but one in which the two poles enhanced in a similar fashion (EF: distal 1.99 +/- 0.77, proximal 1.43 +/- 0.99).
In chronic fracture patients the enhancement pattern was reversed, as the proximal pole enhanced to a greater degree than the distal with the exception of one case where both poles enhanced equally (EF: distal 1.74 +/- 0.52, proximal 2.64 +/- 0.50). Using a two-tailed non-parametric Mann-Whitney U-test, the difference in enhancement of the proximal poles between the acute and chronic groups was found to be highly significant (P < 0.003). Dynamic contrast-enhanced (fSPGR) MRI demonstrates significant differences in the enhancement patterns of the scaphoid when chronic and acute fractures are compared. abstract_id: PUBMED:33269213 Radiographic evaluation of vascularity in scaphoid nonunions: A review. Scaphoid fractures, particularly those that occur more proximally, are unreliable in achieving union due to the retrograde blood supply of the scaphoid bone. Vascular compromise is associated with the development of nonunions and avascular necrosis of the proximal pole. Due to the tenuous blood supply of the scaphoid, it is imperative that the vascularity be assessed when creating diagnostic and treatment strategies. Early detection of vascular compromise via imaging may signal impending nonunion and allow clinicians to perform interventions that aid in restoring perfusion to the scaphoid. Vascular compromise in the scaphoid presents a diagnostic challenge, in part due to the non-specific findings on plain radiographs and computed tomography. Magnetic resonance imaging techniques have dramatically improved our ability to assess the blood supply to the scaphoid and improve time to intervention. This review aims to summarize these advances and highlights the importance of imaging in assessing vascular compromise in scaphoid nonunion and in reperfusion following surgical intervention. abstract_id: PUBMED:35909374 Arthroscopic bone graft and fixation for proximal scaphoid nonunions. Aims: This study aims to report the outcomes in the treatment of unstable proximal third scaphoid nonunions with arthroscopic curettage, non-vascularized bone grafting, and percutaneous fixation. Methods: This was a retrospective analysis of 20 patients. All cases were delayed presentations (n = 15) or failed nonoperatively managed scaphoid fractures (n = 5). Surgery was performed at a mean duration of 27 months (7 to 120) following injury with arthroscopic debridement and arthroscopic iliac crest autograft. Fracture fixation was performed percutaneously with Kirschner (K)-wires in 12 wrists, a headless screw in six, and a combination of a headless screw and single K-wire in two. Clinical outcomes were assessed using grip strength, patient-reported outcome measures, and wrist range of motion (ROM) measurements. Results: Intraoperatively, established avascular necrosis of the proximal fragment was identified in ten scaphoids. All fractures united within 16 weeks, confirmed by CT. At a mean follow-up of 31 months (12 to 64), there were significant improvements in the Patient-Rated Wrist Evaluation, Mayo Wrist Score, abbreviated Disabilities of the Arm, Shoulder and Hand score, wrist ROM, grip strength, and the patients' subjective pain score. No peri- or postoperative complications were encountered. Conclusion: Our data indicate that arthroscopic bone grafting and fixation with cancellous autograft is a viable method in the treatment of proximal third scaphoid nonunions, regardless of the vascularity of the proximal fragment. Cite this article: Bone Joint J 2022;104-B(8):946-952.
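The dynamic studies above quantify fragment perfusion from region-of-interest signal-time curves, reporting enhancement factors (EF, enhanced versus baseline signal) and enhancement slopes. A minimal sketch of that kind of calculation on made-up curves (the precise definitions of EF, Emax and Eslope may differ between the cited papers):

import numpy as np

def enhancement_metrics(time_s, signal, baseline_points=3):
    # Crude EF and slope estimates from one ROI signal-time curve (illustrative definitions).
    t = np.asarray(time_s, dtype=float)
    s = np.asarray(signal, dtype=float)
    baseline = s[:baseline_points].mean()      # pre-contrast signal level
    ef = s.max() / baseline                    # enhancement factor: peak vs baseline
    slope = np.polyfit(t, s, 1)[0]             # overall enhancement slope (signal units per second)
    return ef, slope

# Hypothetical proximal and distal fragment curves sampled every 10 s:
t = [0, 10, 20, 30, 40, 50, 60]
proximal = [100, 101, 102, 108, 115, 120, 124]
distal = [100, 100, 101, 130, 155, 170, 178]
print(enhancement_metrics(t, proximal), enhancement_metrics(t, distal))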
abstract_id: PUBMED:25398469 Scaphoid fracture nonunion: correlation of radiographic imaging, proximal fragment histologic viability evaluation, and estimation of viability at surgery: diagnosis of scaphoid pseudarthrosis. Purpose: The purpose of this study was to correlate the pre-operative imaging, vascularity of the proximal pole, and histology of the proximal pole bone of established scaphoid fracture non-union. Methods: This was a prospective non-controlled experimental study. Patients were evaluated pre-operatively for necrosis of the proximal scaphoid fragment by radiography, computed tomography (CT) and magnetic resonance imaging (MRI). Vascular status of the proximal scaphoid was determined intra-operatively, demonstrating the presence or absence of punctate bone bleeding. Samples were harvested from the proximal scaphoid fragment and sent for pathological examination. We determined the association between the imaging and intra-operative examination and histological findings. Results: We evaluated 19 male patients diagnosed with scaphoid nonunion. CT evaluation showed no correlation to scaphoid proximal fragment necrosis. MRI showed marked low signal intensity on T1-weighted images that confirmed the histological diagnosis of necrosis in the proximal scaphoid fragment in all patients. Intra-operative assessment showed that 90% of bones had absence of intra-operative punctate bone bleeding, which was confirmed as necrosis by microscopic examination. Conclusions: In scaphoid nonunion MRI images with marked low signal intensity on T1-weighted images and the absence of intra-operative punctate bone bleeding are strongly indicative of osteonecrosis of the proximal fragment. abstract_id: PUBMED:26051777 Trans-Scaphoid Perilunate Dislocation: Union of an Extruded Scaphoid Proximal Pole Fragment. We present an unusual case of a closed perilunate dorsal-dislocation of the carpus, with an associated scaphoid fracture. In this extreme case, the proximal scaphoid pole was extruded volarly and proximally. After closed manipulation, the proximal pole of the scaphoid was further dislocated dorsally, a phenomenon not previously described in the literature. At open reduction this fragment was noted to have no soft tissue attachment but after reduction, distal radius bone graft and compression screw fixation the scaphoid went on to unite with a good functional result. This case highlights a rare but serious injury to the wrist with an unusual dislocation pattern not previously described. It demonstrates that early surgical intervention to fix such fractures with an avascular fragment can still achieve fracture union, despite the severity. Answer: Dynamic contrast-enhanced MRI (DCE MRI) has been found to be useful for assessing proximal fragment vascularity in scaphoid fracture delayed-union and non-union. A study with thirty-five patients showed that DCE MRI parameters had the highest correlation with surgical findings, with a kappa value of 0.57 indicating a moderate level of agreement between MRI findings and surgical findings. The sensitivity, specificity, positive and negative predictive value, and accuracy of DCE MRI in predicting impaired vascularity were 67%, 86%, 67%, 86%, and 80%, respectively, which was better than that seen with unenhanced and post-contrast MRI (PUBMED:23653220). However, another study investigating the outcome of bone graft and internal fixation surgery for nonunion of scaphoid fractures found no relationship between MR enhancement and the outcome of surgery.
This study concluded that enhanced MR assessments of the vascularity of the proximal fragment of a scaphoid fracture nonunion do not accurately predict the outcome of reconstructive surgery (PUBMED:15336746). Furthermore, a study evaluating the usefulness of dynamic gadolinium-enhanced MRI for assessing the viability of the proximal pole of the scaphoid in patients with acute scaphoid fractures found that dynamic imaging with time-intensity curve analysis does not provide additional predictive value over standard delayed enhanced imaging for acute scaphoid fracture (PUBMED:25145596). In summary, while DCE MRI has shown higher diagnostic accuracy than other MRI techniques for assessing proximal fragment vascularity in scaphoid delayed-union and non-union (PUBMED:23653220), its predictive value for the outcome of reconstructive surgery is not supported by all studies (PUBMED:15336746). Additionally, for acute scaphoid fractures, dynamic imaging may not offer significant advantages over standard delayed enhanced imaging (PUBMED:25145596).
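The kappa of 0.57 cited in this answer summarizes how far the MRI vascularity grading and the surgical grading agree beyond chance. A small sketch of Cohen's kappa for paired categorical calls follows; the grades below are made up and only illustrate the formula.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa for two paired lists of categorical ratings.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

mri = ["good", "good", "poor", "fair", "good", "poor", "good", "fair"]
surgery = ["good", "good", "poor", "good", "good", "fair", "good", "poor"]
print(round(cohens_kappa(mri, surgery), 2))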
Instruction: Deep brain stimulation for obsessive-compulsive disorder: is the side relevant? Abstracts: abstract_id: PUBMED:32816768 Neuroimaging Advances in Deep Brain Stimulation: Review of Indications, Anatomy, and Brain Connectomics. Deep brain stimulation is an established therapy for multiple brain disorders, with rapidly expanding potential indications. Neuroimaging has advanced the field of deep brain stimulation through improvements in delineation of anatomy, and, more recently, application of brain connectomics. Older lesion-derived, localizationist theories of these conditions have evolved to newer, network-based "circuitopathies," aided by the ability to directly assess these brain circuits in vivo through the use of advanced neuroimaging techniques, such as diffusion tractography and fMRI. In this review, we use a combination of ultra-high-field MR imaging and diffusion tractography to highlight relevant anatomy for the currently approved indications for deep brain stimulation in the United States: essential tremor, Parkinson disease, drug-resistant epilepsy, dystonia, and obsessive-compulsive disorder. We also review the literature regarding the use of fMRI and diffusion tractography in understanding the role of deep brain stimulation in these disorders, as well as their potential use in both surgical targeting and device programming. abstract_id: PUBMED:29406245 Progress and challenges in deep brain stimulation for obsessive-compulsive disorder. This narrative review summarizes the recent literature on deep brain stimulation for treatment resistant obsessive-compulsive disorder highlighting both progress and challenges of this novel treatment. Common targets of psychiatric deep brain stimulation involve both white matter trajectories (anterior limb of the internal capsule, inferior thalamic peduncle, and medial forebrain bundle) and grey matter subcortical nuclei (nucleus accumbens, nucleus subthalamicus, and bed nucleus of the stria terminalis) each of which have been reported with a relevant beneficial effect on obsessive-compulsive symptoms. The mechanisms of action are only partially understood but increasing evidence points towards network effects involving the prefrontal cortex, the striatum and possibly anxiety-related anatomical structures. Deep brain stimulation is a promising therapeutical technique for otherwise treatment refractory patients, but many major issues are unresolved and thorough investigations are needed. Relevant topics for future investigations include treatment predictors and therapeutical augmentation. An international registry of patients treated with deep brain stimulation could improve our understanding of adverse events and target specific effects. In order to step forward, researchers must face inconvenient questions and outperform the status quo of clinical research in this field. abstract_id: PUBMED:30098652 Deep Brain Stimulation: Clinical Applications. Deep brain stimulation has been used for decades in neurology to treat movement disorders. More recent work has focused on developing applications for deep brain stimulation in psychiatric illness. Initial studies have demonstrated positive results for treatment-refractory obsessive-compulsive disorder. Initial open-label studies of deep brain stimulation at targets for treatment-resistant depression have been encouraging. However, the only 2 published controlled trials that were conducted for potential FDA approval for treatment-resistant depression were both negative. 
Future directions include potential use of alternate clinical trial designs, using tractography for more refined deep brain stimulation electrode targeting, and closed-loop deep brain stimulation approaches. abstract_id: PUBMED:19660670 Deep brain stimulation in obsessive-compulsive disorder. The use of deep brain stimulation in psychiatric disorders has received great interest owing to the small risk of the operation, the reversible nature of the technique, and the possibility of optimizing treatment postoperatively. Currently, deep brain stimulation in psychiatry is investigated for obsessive-compulsive disorder, Gilles de la Tourette's syndrome, and major depression. This chapter reviews the application of deep brain stimulation in obsessive-compulsive disorder. Preliminary results suggest that deep brain stimulation in obsessive-compulsive disorder can effectuate a decrease of 40-60% in at least half of the patients. Although various side effects occur, most of these are transitory and linked to specific stimulation parameters which can be changed. Because only a few studies have been performed with a limited number of patients in accordance with varying research protocols, appliance of deep brain stimulation to obsessive-compulsive disorder is still at an experimental stage. The speed of the effect of deep brain stimulation causes fundamental assumptions on the pathophysiology of obsessive-compulsive disorder. abstract_id: PUBMED:31760410 Deep brain stimulation: new possibilities for the treatment of mental disorders. Deep brain stimulation (DBS) is a treatment method that is currently getting more and more attention from psychiatrists. It has proven to be efficacious and safe in the treatment of neurological disorders, mainly Parkinson's disease (PD), dystonia and essential tremor. DBS has very often contributed to successful treatment in cases that had proved resistant to all other methods of treatment. Nowadays treatment-resistant obsessive-compulsive disorder (OCD) is the main psychiatric indication for DBS. Many studies have focused on assessing the efficacy and safety of this method in different mental disorders, including depressive disorders, Alzheimer's disease, anorexia nervosa, Tourette syndrome, substance addiction or aggressive behaviors. Single cases of successful treatment in bipolar disorder, schizophrenia and post-traumatic stress disorder have also emerged in recent years. In this review the current state of knowledge on the applicability of DBS in psychiatry is presented, based on the available systematic reviews, clinical trials and case studies, as well as on neurophysiological and neuroimaging data. abstract_id: PUBMED:29676713 Deep brain stimulation for psychiatric disorders - Deep brain stimulation (DBS) corrects pathological activity of neuropsychiatric brain networks with high frequency current via implanted brain electrodes.- DBS is an effective and safe treatment for therapy-refractory obsessive-compulsive disorder and potentially also for therapy-refractory major depressive disorder.- Experimental psychiatric indications for DBS are Tourette syndrome, addiction, anorexia nervosa, post-traumatic stress disorder, autism and schizophrenia.- DBS influences brain networks that are relevant for a variety of psychiatric symptoms. Potentially, in the future this interventional technique may therefore be deployed more broadly. 
abstract_id: PUBMED:19024063 Deep brain stimulation in psychiatric disorders Deep brain stimulation is a technique in which electrodes are stereotactically implanted in specific regions of the brain to administer electric pulses. After implantation, the activity of the electrodes can be adjusted. The field of psychiatry shows great interest in deep brain stimulation because there is little risk of complications and because it is reversible and postoperatively adjustable to optimalize the treatment. Deep brain stimulation is being investigated as a potential treatment for obsessive compulsive disorder, Tourette's syndrome, and major depressive disorder. Deep brain stimulation for psychiatric conditions is still strictly experimental. It is expected that deep brain stimulation will be the treatment of choice for chronic therapy-refractory psychiatric disorders. The speed of responses following deep brain stimulation is urging psychiatrists to reconsider basic pathophysiological mechanisms of psychiatric disorders. abstract_id: PUBMED:27551867 Deep brain stimulation for psychiatric disorders Deep brain stimulation for psychiatric disorders Deep brain stimulation is an established treatment for movement disorders. It has been proven to be a safe method; only minor complications have been reported in larger studies. New indications for deep brain stimulation are under investigation; among them major depressive disorder and obsessive-compulsive disorder. Deep brain stimulation for severe and therapy-resistant major depressive disorder and obsessive compulsive-disorder shows promising results. However, the experience of deep brain stimulation in psychiatric disorders is limited. Several potential target areas for stimulation have been presented; which are the most effective is still an open question. Deep brain stimulation in psychiatric disorders is a highly experimental treatment and should only be performed by a multidisciplinary team with extensive experience with deep brain stimulation in other conditions. abstract_id: PUBMED:24964565 Deep brain stimulation in psychiatry. Introduction: Deep brain stimulation is a stereotactic neurosurgical method used in the treatment of Parkinson's disease and some other movement disorders. The application of deep brain stimulation in the treatment of certain psychiatric disorders has been intensively investigated taking into account the current knowledge of neurobiological basis of mood regulation, cognition, and behaviour. This paper has been aimed at presenting the available data on experience in the application of deep brain stimulation in the treatment of psychiatric disorders. It gives an overview of scientific and professional literature, bearing in mind all the contemporary approaches in the treatment of certain psychiatric disorders. Research results available so far in the treatment of treatment-resistant depression, obsessive-compulsive disorder, Gilles de la Tourette syndrome, addiction and Alzheimer's dementia, are affirmative concerning the efficacy of the method and low risk of adverse effects. Deep brain stimulation, as a relatively new neurosurgical method in the treatment of psychiatric disorders, is being intensively developed, and it is certainly going to be one of the treatments of choice, primarily of treatment-resistant disorders. abstract_id: PUBMED:31974655 The effect of fornix deep brain stimulation in brain diseases. 
Deep brain stimulation is used to alleviate symptoms of neurological and psychiatric disorders including Parkinson's disease, epilepsy, and obsessive-compulsive-disorder. Electrically stimulating limbic structures has been of great interest, and in particular, the region of the fornix. We conducted a systematic search for studies that reported clinical and preclinical outcomes of deep brain stimulation within the fornix up to July 2019. We identified 13 studies (7 clinical, 6 preclinical) that examined the effects of fornix stimulation in Alzheimer's disease (n = 9), traumatic brain injury (n = 2), Rett syndrome (n = 1), and temporal lobe epilepsy (n = 1). Overall, fornix stimulation can lead to decreased rates of cognitive decline (in humans), enhanced memory (in humans and animals), visuo-spatial memorization (in humans and animals), and improving verbal recollection (in humans). While the exact mechanisms of action are not completely understood, studies suggest fornix DBS to be involved with increased functional connectivity and neurotransmitter levels, as well as enhanced neuroplasticity. Answer: The abstracts provided do not explicitly discuss the relevance of the side (left or right hemisphere) in deep brain stimulation (DBS) for obsessive-compulsive disorder (OCD). However, they do highlight the importance of targeting specific brain regions and networks that are implicated in OCD. For instance, common targets for DBS in OCD include white matter trajectories such as the anterior limb of the internal capsule, inferior thalamic peduncle, and medial forebrain bundle, as well as grey matter subcortical nuclei like the nucleus accumbens, nucleus subthalamicus, and bed nucleus of the stria terminalis (PUBMED:29406245). These targets are chosen based on their involvement in the pathophysiology of OCD and their connections to prefrontal cortex and striatum, which are thought to be part of the network affected in OCD (PUBMED:29406245). The literature suggests that the mechanisms of action of DBS in OCD are related to network effects, which involve multiple brain regions and structures (PUBMED:29406245). Given that OCD is considered a network disorder, it is possible that the side of stimulation could be relevant if the pathological network is lateralized or if there are differences in the connectivity or function of the targeted regions between the hemispheres. However, without specific data on lateralization effects in the treatment of OCD with DBS, it is not possible to draw definitive conclusions about the relevance of the side of stimulation from the provided abstracts. In summary, while the side of DBS may be an important consideration in the treatment of OCD, the provided abstracts do not offer detailed information on this aspect. Further research would be needed to determine whether the side of stimulation has a significant impact on the outcomes of DBS in OCD patients.
Instruction: Clinical queries addressed in patients with systemic autoimmune diseases. Can cardiovascular magnetic resonance give the final solution? Abstracts: abstract_id: PUBMED:25412817 Clinical queries addressed in patients with systemic autoimmune diseases. Can cardiovascular magnetic resonance give the final solution? Objectives: To evaluate the potential of cardiovascular magnetic resonance (CMR) to answer queries addressed in systemic autoimmune diseases (SAD). Methods: Thirty-six patients aged 52±6 years (range 27-71) with SAD and suspected cardiac disease underwent CMR using a 1.5 T scanner, after routine evaluation, including clinical, ECG and echocardiographic examination. Steady-state free precession cines, STIR T2-W and late gadolinium enhanced (LGE) images were evaluated. Results: Abnormal findings were detected by: clinical evaluation in 14/36, ECG in 17/36, echocardiography in 11/36 and CMR in 30/36 SAD. Clinical, ECG and echocardiographic examination could not assess cardiac disease acuity and lesions' pathophysiology. In contrast, CMR identified cardiac lesions' etiology, acuity, need for catheterization and heart disease persistence, even if SAD was quiescent. Conclusion: Clinical, ECG and echocardiographic abnormalities may suggest, but not always interpret, cardiac involvement in SAD. CMR can help to identify both etiology and acuity of cardiac lesions and guide further diagnostic and/or therapeutic approach in these patients. abstract_id: PUBMED:32695640 Cardiovascular magnetic resonance imaging for inflammatory heart diseases. Inflammatory myocardial diseases represent a diverse group of conditions in which abnormal inflammation within the myocardium is the primary driver of cardiac dysfunction. Broad causes of myocarditis range from infection by cardiotropic viruses or other infectious agents, to systemic autoimmune disease, or to toxins. Myocarditis due to viral aetiologies is a relatively common cause of acute chest pain syndromes in younger and middle-aged patients and often has a benign prognosis, though this and other forms of myocarditis also cause serious sequelae, including heart failure, arrhythmia and death. Endomyocardial biopsy remains the gold standard tool for tissue diagnosis of myocarditis in living individuals, although new imaging technologies have a crucial and complementary role. This review outlines the current state-of-the-art and future experimental cardiovascular magnetic resonance (CMR) imaging approaches for the detection of inflammation and immune cell activity in the heart. Multiparametric CMR, particularly with novel quantitative T1- and T2-mapping, is a valuable and widely-available tool for the non-invasive assessment of inflammatory heart diseases. Novel CMR molecular contrast agents will enable a more targeted assessment of immune cell activity and may be useful in guiding the development of novel therapeutics for myocarditis. abstract_id: PUBMED:28667382 Magnetic Resonance Imaging and Clinical Features in Acute and Subacute Myelopathies. Differential diagnosis of acute and subacute transverse myelopathy includes inflammatory, infectious, vascular, metabolic and paraneoplastic etiologies. Information on the diagnostic approach to transverse myelopathy with regard to daily clinical practice is provided.
The differentiation between five lesion patterns on magnetic resonance imaging (MRI) in myelitis may be helpful: (1) longitudinal extensive transverse myelitis, (2) short segment ovoid or peripherally located, (3) "polio-like", (4) granulomatous and (5) segmental with rash. A correlation with these imaging features is supported if the clinical course and neurological symptoms are known. Although the mean interval from onset to nadir of symptoms in spinal cord infarction is 1 h, an overlap with a fulminant course of myelitis is possible, and impaired diffusion may also occur in acute inflammatory processes. As a result, laboratory testing, including aquaporin-4 antibodies and cerebrospinal fluid analysis, is crucial for the correct interpretation of imaging findings. Moreover, the discrimination of acute complete and acute partial transverse myelitis is advantageous in order to identify diverse entities, the latter often being a precursor to multiple sclerosis. Additional brain imaging is mandatory in suspected demyelinating, infectious, neoplastic and systemic autoimmune disease. A symmetrical lesion pattern restricted to individual tracts or dorsal columns indicates subacute combined degeneration of the spinal cord and, in addition to deficiency syndromes, a paraneoplastic etiology should be considered. abstract_id: PUBMED:31670481 Ultrasound versus magnetic resonance imaging in the evaluation of shoulder joint pathologies in a cohort of rheumatoid arthritis patients. Background: Rheumatoid arthritis (RA) is a systemic autoimmune disease that has a great impact on different joints and may result in their destruction and loss of function. Although the shoulder is affected in a large portion of patients with RA, it does not receive much attention during the follow up of RA. The precise diagnosis of shoulder pain in RA is a clinical challenge and benefits from a reliable imaging modality to detect its exact origin. Aim: To determine the diagnostic accuracy of ultrasound (US) in detecting shoulder joint pathologies in RA, considering magnetic resonance imaging (MRI) as the gold standard. Materials And Methods: This cross-sectional, observational study was carried out on 30 RA patients complaining of unilateral or bilateral shoulder pain. Patients were subjected to history taking, clinical shoulder examination, plain X-ray, US examination following a standardized protocol, and MRI. The results were correlated with each other. Results: In comparison with the MRI findings, US showed high accuracy in terms of sensitivity (Sn) and specificity (Sp) in supraspinatus tendinopathy (Sn 96.6%; Sp 93.3%), biceps tenosynovitis (Sn 87.5%; Sp 97.6%), subacromial-subdeltoid bursitis (Sn 72.7%; Sp 95.7%), humeral erosions (Sn 90.5%; Sp 97.3%), and acromioclavicular osteoarthritis (Sn 85.7%; Sp 95.7%). In terms of reliability, the agreement between US and MRI was almost perfect (κ = .9, P < .001). Conclusion: US may have a role as the initial imaging modality in RA patients with shoulder pain, as it is highly sensitive and specific in detecting different pathological abnormalities of the shoulder. abstract_id: PUBMED:25778740 Cardiovascular involvement in pediatric systemic autoimmune diseases: the emerging role of noninvasive cardiovascular imaging. Cardiac involvement in pediatric systemic autoimmune diseases has a wide spectrum of presentation ranging from asymptomatic to severe clinically overt involvement.
Coronary artery disease, pericardial, myocardial, valvular and rhythm disturbances are the most common causes of heart lesions in pediatric systemic autoimmune diseases and cannot be explained only by the traditional cardiovascular risk factors. Therefore, chronic inflammation has been considered as an additive causative factor of cardiac disease in these patients. Rheumatic fever, juvenile idiopathic arthritis, systemic lupus erythematosus, ankylosing spondylitis/spondyloarthritides, juvenile scleroderma, juvenile dermatomyositis/polymyositis, Kawasaki disease and other autoimmune vasculitides are the commonest pediatric systemic autoimmune diseases with heart involvement. Noninvasive cardiovascular imaging is an absolutely necessary adjunct to the clinical evaluation of these patients. Echocardiography is the cornerstone of this assessment, due to excellent acoustic window in children, lack of radiation, low cost and high availability. However, it cannot detect disease acuity and pathophysiologic background of cardiac lesions. Recently, the development of cardiovascular magnetic resonance imaging holds the promise for early detection of subclinical heart disease and detailed serial evaluation of myocardium (function, inflammation, stress perfusion-fibrosis) and coronary arteries (assessment of ectasia and aneurysms). abstract_id: PUBMED:27429684 Correlations of Clusters of Non-Convulsive Seizure and Magnetic Resonance Imaging in a Case With GAD65-Positive Autoimmune Limbic Encephalitis. With the increased availability of laboratory tests, glutamic acid decarboxylase (GAD) antibody-positive limbic encephalitis has become an emerging diagnosis. The myriad symptoms of limbic encephalitis make the diagnosis challenging. Symptoms range from seizures, memory loss, dementia, confusion, to psychosis. We present a case of a 21-year-old female with GAD65 antibody-positive limbic encephalitis. The case is unique because the clinical course suggests that non-convulsive seizures are the major cause of this patient's clinical manifestations. The following is the thesis: systemic autoimmune disease, associated with the GAD65 antibody, gives rise to seizures, in particular, non-convulsive seizures. Temporal lobes happen to be the most susceptible sites to develop seizures. The greater part of these seizures can be non-convulsive and hard to recognize without electroencephalogram (EEG) monitoring. The variable symptoms mirror the severity and locations of these seizures. The magnetic resonance imaging (MRI) signal abnormalities in the bilateral hippocampus, fornix, and mammillary body correlate with the density of these seizures in a similar manner, which suggests it is secondary to post-ictal edema. abstract_id: PUBMED:18086336 Inner-ear obliteration in ulcerative colitis patients with sensorineural hearing loss. Objective: Systemic autoimmune diseases, including ulcerative colitis, may involve the inner ear. Several ulcerative colitis cases presenting with sensorineural hearing loss have been reported. We report the T2-weighted, three-dimensional, inner-ear magnetic resonance imaging findings in the inner ears of two such patients. Methods: Case reports and a review of the literature concerning autoimmune disease and sensorineural hearing loss are presented. Results: We describe two cases of ulcerative colitis with sensorineural hearing loss in which three-dimensional magnetic resonance imaging revealed obliteration of the inner ear.
Those inner ears with obliteration had severe hearing loss, and responded poorly to steroid therapy. Conclusion: To our knowledge, there has been no previous published report of the T2-weighted, inner-ear magnetic resonance imaging findings of cases of ulcerative colitis with sensorineural hearing loss. This paper represents the first published report in the world literature of inner-ear obliteration in such patients. Three-dimensional magnetic resonance imaging is beneficial in elucidating the pathophysiology of the inner-ear involvement seen in ulcerative colitis. abstract_id: PUBMED:26179688 Relapsing Polychondritis Presenting with Meningoencephalitis and Dementia: Correlation with Neuroimaging and Clinical Features. Purpose: Relapsing polychondritis (RP) is a rare systemic autoimmune disease affecting cartilaginous and non-cartilaginous structures. Neurological involvement is rarer but results in profound disability. Early identification and treatment of underlying RP may promote neurological recovery. Case Report: We illustrated a 53-year-old man diagnosed with dementia. Neuroimaging and cerebrospinal fluid studies disclosed meningoencephalitis. "Prominent ear sign" was evident on diffusion-weight magnetic resonance imaging. After glucocortisone administration, the improvement of clinical manifestations was closely correlated subsequent neuroimaging findings. Conclusion: The importance of better understanding of this disease in terms of the prevention of further tissue damage in patients with RP cannot be overemphasized. abstract_id: PUBMED:36348486 A rare case report of primary Sjögren's syndrome with clinical characteristics similar to those of CLIPPERS. Background: Primary Sjögren's syndrome (pSS) is an autoimmune inflammatory disease characterized by dryness of the eyes, mouth and other mucous membranes. Patients with pSS can also present with extraglandular manifestations, such as pulmonary, kidney and nervous system involvement. Central nervous system (CNS) manifestations have rarely been described in pSS. Case Presentation: A 33-year-old man was admitted with a one-month history of dizziness, speech disturbance, and walking instability. His brain enhanced magnetic resonance imaging (MRI) showed symmetrical, enhanced "salt-and-pepper-like" speckled lesions in the brainstem, basal ganglia, and subcortical regions, and his diagnosis was considered possible chronic lymphocytic inflammation with pontine perivascular enhancement responsive to steroids (CLIPPERS). Further examination revealed that anti-SSA antibody was positive, and the Schirmer test and labial salivary gland histopathology were abnormal, which supported the diagnosis of pSS. Conclusion: pSS is a chronic systemic autoimmune disease that involves neurological complications. This case suggests that CNS lesions of pSS can present with clinical and MRI findings similar to those of CLIPPERS. abstract_id: PUBMED:36649452 Clinical Utilization of Multimodality Imaging for Myocarditis and Cardiac Sarcoidosis. Myocarditis is defined as inflammation of the myocardium according to clinical, histological, biochemical, immunohistochemical, or imaging findings. Inflammation can be categorized histologically by cell type or pattern, and many causes have been implicated, including infectious, most commonly viral, systemic autoimmune diseases, vaccine-associated processes, environmental factors, toxins, and hypersensitivity to drugs. 
Sarcoid myocarditis is increasingly recognized as an important cause of cardiomyopathy and has important diagnostic, prognostic, and therapeutic implications in patients with systemic sarcoidosis. The clinical presentation of myocarditis may include an asymptomatic, subacute, acute, fulminant, or chronic course and may have focal or diffuse involvement of the myocardium depending on the cause and time point of the disease. For most causes of myocarditis except sarcoidosis, myocardial biopsy is the gold standard but is limited due to risk, cost, availability, and variable sensitivity. Diagnostic criteria have been established for both myocarditis and cardiac sarcoidosis and include clinical and imaging findings particularly the use of cardiac magnetic resonance and positron emission tomography. Beyond diagnosis, imaging findings may also provide prognostic value. This case-based review focuses on the current state of multimodality imaging for the diagnosis and management of myocarditis and cardiac sarcoidosis, highlighting multimodality imaging approaches with practical clinical vignettes, with a discussion of knowledge gaps and future directions. Answer: Cardiovascular magnetic resonance (CMR) has shown significant potential in addressing clinical queries in patients with systemic autoimmune diseases (SADs). CMR is a valuable tool for the non-invasive assessment of inflammatory heart diseases, which are a diverse group of conditions where abnormal inflammation within the myocardium is the primary driver of cardiac dysfunction (PUBMED:32695640). It can identify both the etiology and acuity of cardiac lesions and guide further diagnostic or therapeutic approaches in patients with SADs, even when the disease is quiescent (PUBMED:25412817). CMR imaging approaches, including multiparametric CMR with novel quantitative T1- and T2-mapping, are crucial for detecting inflammation and immune cell activity in the heart. These imaging technologies are complementary to endomyocardial biopsy, which remains the gold standard for tissue diagnosis of myocarditis in living individuals (PUBMED:32695640). In pediatric systemic autoimmune diseases, noninvasive cardiovascular imaging is essential for the assessment of cardiac involvement, which can range from asymptomatic to severe clinically overt involvement. CMR holds promise for early detection of subclinical heart disease and detailed serial evaluation of myocardium and coronary arteries (PUBMED:25778740). In conclusion, CMR can provide a comprehensive assessment of cardiac involvement in SADs, offering insights into the etiology, acuity, and pathophysiology of cardiac lesions. It can help in identifying the need for catheterization and monitoring heart disease persistence, thereby potentially providing a final solution to the clinical queries addressed in patients with SADs (PUBMED:25412817).
Instruction: Is the short-form Child Perceptions Questionnaire meaningful and relevant to children with malocclusion in the UK? Abstracts: abstract_id: PUBMED:20439924 Is the short-form Child Perceptions Questionnaire meaningful and relevant to children with malocclusion in the UK? Objective: To explore the face and content validity of the 16-item short-form of CPQ11-14 (CPQ ISF-16), an oral health-related quality of life measure (OHRQoL) measure, with young people with malocclusion in the UK. Design: Qualitative study. Setting: Interviews about CPQ ISF-16 were conducted in participants' homes. Subjects And Methods: The sample included 10 participants aged 12-14 years attending an orthodontic clinic of a dental hospital in the UK. All participants were judged to be in definite need of orthodontic treatment. Interventions: The interviews covered the impact of malocclusion on young people's daily lives and participants were then asked to complete CPQ ISF-16 and describe their interpretation of the meaning of questions, any ambiguities of wording and comment on the layout of the questionnaire and response format. Results: Having a malocclusion 'bothered' young people to varying degrees. For those participants for whom having a malocclusion impacted on their lives, concerns were about self-consciousness with dental appearance. On analysis of the content validity some items were felt to be relevant to the impact of malocclusion, particularly questions from the emotional and social well-being subscales; however, other questions were felt not to be relevant. Concerns about face validity were expressed about several facets of the measure including the response format, the use of 'double' questions and interpretation of certain words. Conclusion: Concerns were expressed by young people with malocclusions about the face and content validity of CPQ ISF-16. Further consideration should be given to the need for a child-centred malocclusion-specific OHRQoL measure. abstract_id: PUBMED:30397369 Validity and Reliability of the Hindi Version of the Modified Child Perceptions Questionnaire 11 to 14. Introduction: The study was conceived to formulate a tool to evaluate child perceptions related to oral health in 11- to 14-year-olds tailor-made for Indian children in Hindi. The original child perceptions questionnaire (CPQ11-14) was translated into Hindi and it was tested for validity and reliability. Materials And Methods: The original CPQ11-14 was translated into Hindi and some questions were rephrased to suit the sociocultural situation in India. The domains of self-esteem and psychological well-being were added to the questionnaire to broaden the scope of parameters to thoroughly assess the impact on child perceptions toward oral health. The English and Hindi versions of the CPQ11-14 were administered during the first visit to test for validity and the children were recalled after 1 week and administered the Hindi questionnaire again after 1 week to check for reliability. Results: The results showed significant positive correlation between oral symptoms, decayed, missing and filled teeth (DMFT) functional limitation and malocclusion. The mean functional limitation score was found to be higher in subjects with malocclusion in both the English and Hindi questionnaires. The test-retest samples were evaluated using the paired t-test and showed no significant difference between the first and second administration which suggested good reliability. 
Conclusion: The translated and modified Hindi CPQ11-14 was found to be valid and highly reliable for use in India. The adaptation of the original questionnaire by modifying certain questions to suit the Indian condition was found to be culturally relevant. How to cite this article: Kumar P, Mandanna DCM, Londhe SM, Sharma M. Validity and Reliability of the Hindi Version of the Modified Child Perceptions Questionnaire 11 to 14. Int J Clin Pediatr Dent 2018;11(4):271-276. abstract_id: PUBMED:31094265 Agreement between children and parents in rating oral health-related quality of life using the Swedish versions of the short-form Child Perceptions Questionnaire 11-14 and Parental Perceptions Questionnaire. Objective: To explore the agreement between children and parents on children's oral health-related quality of life (OHRQoL) when using the Swedish short forms of CPQ11-14 and P-CPQ, and to evaluate the impact on agreement of oral health including malocclusion and background characteristics (dental fear, family situation, gender of informant). Material and methods: A total of 257 children and their accompanying parents were asked to fill in the Swedish versions of the short-form CPQ11-14 and P-CPQ separately in connection with a clinical examination. Results: The participants comprised 247 child-parent pairs: 116 (47%) boys, 131 (53%) girls, 166 (67%) mothers and 81 (33%) fathers. The agreement between the child and parental ratings of the children's OHRQoL was low, with an ICC of 0.22 (95% CI: 0.04-0.37) for the total scale. Conclusions: There was a low agreement between children's and parents' answers. For best care, it is advisable to consider perceptions of both children and parents because they can complement each other in estimating the child's OHRQoL. abstract_id: PUBMED:30611272 Modification and psychometric evaluation of the child perceptions questionnaire (CPQ11-14) in assessing oral health related quality of life among Lithuanian children. Background: Oral health related quality of life (OHRQoL) research among children and adolescents in Lithuania is just starting and no measures have been validated to date. Therefore, this study aimed to validate a Lithuanian version of the full (37 items) Child Perceptions Questionnaire (CPQ11-14) within a random sample of children aged 11 to 14. Methods: A cross-sectional survey among a randomly selected sample of schoolchildren (N = 307) aged 11 to 14 was conducted. An anonymous questionnaire included the full CPQ11-14 and items on global life satisfaction, oral health and oral life quality self-rating. The questionnaire was translated into Lithuanian using translation guidelines. In addition, an item on oral pain was modified to identify the pain location. Standard tests (Cronbach's α, construct validity and discriminant validity), supplemented with both exploratory and confirmatory factor analyses, were employed for psychometric evaluation of the instrument. The questionnaire was also tested by comparing students' and their parents' (N = 255) responses about oral symptoms and functional limitations. Results: The modified Lithuanian version of CPQ11-14 revealed good internal consistency reliability (Cronbach's alpha for the total scale was 0.88). The measure showed significant associations with perceived oral health status and oral well-being, as well as with global life satisfaction (p < 0.01).
Discriminant validity of the instrument was approved by comparison of children's groups defined by self-reported caries experience and malocclusion. Factor analysis revealed a complex structure with two or three factors in each of four domains of the CPQ11-14. Excellent or acceptable levels of indices of model fitting with the given data were obtained for oral symptoms, functional limitations and emotional well-being domains, but not for the social well-being domain. A significant association between child and parental responses was found (intraclass correlation coefficient was 0.56 and 0.43, correspondingly in domains of oral symptoms and functional limitations). Conclusion: The Lithuanian version of the CPQ11-14 (with a modified item that identifies location of oral pain) appears to be a valid instrument to be used in further studies for measuring OHRQoL among 11 to 14 year old children in Lithuania. abstract_id: PUBMED:19049524 Epidemiological evaluation of short-form versions of the Child Perception Questionnaire. The objective of the study was to compare the performance of four short-form versions of the Child Perceptions Questionnaire (CPQ(11-14)) with that of the long-form version in a random population sample of 12- and 13-yr-old children from New Zealand in order to determine which short-form version was the most valid. Children (n = 430, participation rate 74.1%) completed the 37-item CPQ(11-14). Two separate 8- and 16-item short-form versions were previously developed using (a) item impact and (b) regression methods. The four different short-form scales were compared with the full CPQ(11-14) on their construct validity. The children were examined for malocclusion (using the Dental Aesthetic Index) and for dental caries by a single examiner (L.F.P.). All short-form versions revealed substantial variability in overall oral health-related quality of life (OHRQoL). Cronbach's alpha ranged from 0.73 (Regression Short Form [RSF]-8) to 0.86 (RSF-16). For all short-form versions, mean scores were positively associated with self-rated oral health and overall wellbeing; associations with the latter were stronger. All short-form versions detected OHRQoL gradients, as hypothesized, across ascending categories of caries and malocclusion. These findings suggest that the short-form versions of the CPQ(11-14) all show acceptable properties, but that the 16-item versions perform better (and are essentially equivalent); however, the stronger theoretical underpinning of the item-impact-derived 16-item short-form version suggests that it shows the most promise. abstract_id: PUBMED:25001361 Effect of year one orthodontic treatment on the quality of life of adolescents, assessed by the short form of the Child Perceptions Questionnaire. Aim: To evaluate the effect of the first 12 months of orthodontic treatment on the quality of life of Brazilian adolescents and to examine the evaluative properties of the short form of the Child Perceptions Questionnaire (CPQ11-14). Study Design And Methods: This study involved a sample of 101 adolescents undergoing orthodontic treatment with a fixed appliance. Participants were asked to answer the Brazilian version of the short form of the CPQ11-14 before treatment (T1) and 12 months after the placement of the fixed appliance (T2). Statistical analysis was carried out using the Wilcoxon signed-rank test and the Bonferroni correction for the overall score and the short form of the CPQ11-14 subscales. 
Responsiveness of the measure and the minimal clinically important difference (MCID) were also evaluated. Results And Statistics: Out of the 101 individuals originally admitted to this study, three were excluded due to cessation of treatment and failure to return the questionnaire, providing a response rate of 97.0%. Statistically significant improvements were found in the overall score (P < 0.001) as well as in both emotional well-being (P < 0.001) and social well-being (P = 0.007) subscales. The reductions in score were associated with an effect size showing moderate clinically meaningful changes in the overall score and in the functional limitations, emotional well-being and social well-being subscales. The MCID was 3.35. Conclusion: The first 12 months of orthodontic treatment with a fixed appliance had a positive effect on the quality of life of western adolescents. abstract_id: PUBMED:16161878 An evaluation of the Child Perceptions Questionnaire in the UK. Objective: To assess the reliability and validity of the Child Perceptions Questionnaire (CPQ11-14), an oral health related quality of life measure for 11-14 year old children, for use in the UK. Basic Research Design: Cross-sectional questionnaire and clinical analytical study. Clinical Setting: Orthodontic and paediatric dentistry clinics at a dental hospital and one general dental practice. Participants: Eighty-nine children between 11 and 14 years of age attending for an examination. Main Outcome Measures: The children were invited to complete the CPQ11-14, global oral health and impact on life overall ratings. Clinical data on caries status, malocclusion and presence of dental opacities and gingivitis were collected. CPQ11-14 was summarised as the total score (sum of the item codes) and the number of impacts reported often or every day. Results: The Cronbach's alpha for the total scale was 0.87 and ranged from 0.59 to 0.83 for the subscales indicating acceptable internal consistency. The intraclass correlation coefficient on repeated application of the measure was 0.83 (95% CI = 0.76-0.90) suggesting almost perfect agreement. Summary measures of CPQ11-14 correlated with the global oral health rating indicating acceptable criterion validity. Impact on life overall was related to all summary measures of CPQ11-14. Number of impacts correlated with the total number of missing teeth and missing teeth due to caries. No other relationships between clinical and CPQ11-14 data were apparent. Conclusion: The CPQ11-14 shows acceptable reliability, criterion and construct validity in relation to life overall. Relationships with clinical data were more tenuous. If this measure is to be used to compare the impacts of oral diseases in similar settings in the UK a large sample will be required. abstract_id: PUBMED:32524347 Validity and reliability of the Arabic version of the child perceptions questionnaire for 8-10-year-old children. Purpose: To develop an Arabic version of the CPQ8-10 and test its validity and reliability for use among Arabic-speaking children. Methods: The 25-item professionally translated questionnaire included two global rating questions across four domains, which was assessed through a pilot study on 20 participants who were not included in the main study. Children (n = 175) aged 8-10 years were consecutively recruited: group I (n = 120) included pediatric dental patients, group II (n = 25) included children with orofacial clefts, and group III (n = 30) included orthodontic patients.
Construct (convergent and discriminant) validity, internal consistency, and test-retest reliability were assessed using Spearman's rank correlation coefficients, Cronbach's alpha coefficient, and intraclass correlation coefficient, respectively. All children were clinically examined; 66 children completed the questionnaire a second time. A cross-sectional study design was employed. Results: CPQ8-10 scores and global ratings were positively correlated. CPQ8-10 scores were highest in group II, followed by groups I and III, respectively. CPQ8-10 scores were significantly higher in children affected with caries or malocclusion compared to unaffected children. Cronbach's alpha was 0.95 and the intraclass correlation coefficient was 0.97. Conclusions: The Arabic CPQ8-10 was valid and reliable; therefore, it can be utilized with Arabic-speaking children in this age group. abstract_id: PUBMED:32653004 The development, validation, and psychometric properties of the Japanese version of the Child Oral Health Impact Profile-Short Form 19 (COHIP-SF 19) for school-age children. Background: This study was performed to develop and validate a Japanese version of Child Oral Health Impact Profile-Short Form (COHIP-SF) 19 and to assess its psychometric properties in Japanese school-age children. Methods: The original English COHIP-SF 19 was translated into Japanese (COHIP-SF 19 JP) using a standard forward and backward translation procedure. The psychometric properties of the COHIP-SF 19 JP were assessed in 379 public school students between 7 and 18 years of age in Fukuoka, Japan. Internal consistency (Cronbach's alpha) and test-retest reliability (intraclass correlation coefficient, ICC) were the metrics used for evaluation of this questionnaire. The discriminant validity was examined using the Wilcoxon rank sum test to identify significant differences in COHIP-SF 19 JP scores according to the results of dental examinations. The convergent validity was examined using the Spearman correlations to determine the relationships between COHIP-SF 19 JP scores and the self-perceived oral health ratings. Confirmatory factor analyses (CFA) were performed to verify the factor structure of the questionnaire. Results: The COHIP-SF 19 JP revealed good internal consistency (Cronbach's alpha, 0.77) and test-retest reliability (ICC, 0.81). Discriminant validity indicated that children with dental caries or malocclusion had significantly lower COHIP-SF 19 JP scores (P < 0.05); convergent validity indicated that the self-perceived oral health rating was significantly correlated with the COHIP-SF 19 JP total score and subscores (rs = 0.352-0.567, P < 0.0001), indicating that the questionnaire had sufficient construct validity. CFA suggested that the modified four-factor model had better model fit indices than the original three-factor model. Conclusion: The collected data showed that the COHIP-SF 19 JP possesses sufficient psychometric properties for use in Japanese school-age children. abstract_id: PUBMED:19445725 Psychometric properties of the Brazilian version of the Child Perceptions Questionnaire (CPQ11-14) - short forms. Background: The need to evaluate the impact of oral health has led to the development of instruments for measuring oral health-related quality of life (OHQoL). One such instrument is the Child Perceptions Questionnaire (CPQ(11-14)), developed specifically for 11-to-14-year-old children.
As this questionnaire was considered long (37 items), shorter forms were developed with 8 (Impact Short Form: 8--ISF:8) and 16 items (Impact Short Form: 16--ISF:16) to facilitate use in the clinical setting and population-based health surveys. The aim of the present study was to translate and cross-culturally adapt these CPQ(11-14) short forms for Brazilian Portuguese and evaluate the measurement properties of these versions for use on Brazilian children. Methods: Following translation and cross-cultural adaptation, the ISF:8 and ISF:16 were tested on 136 children from 11 to 14 years of age in the city of Belo Horizonte, Brazil. The instrument was administered by a trained researcher who also performed clinical examinations. The measurement properties (i.e. criterion validity, construct validity, internal consistency reliability, test-retest reliability) were determined. Discriminant validity was tested between groups, which were divided into children with no cavities and no malocclusion; children with cavities and without malocclusion; and children with malocclusion and without cavities. Results: The mean total score was 6.8 [standard deviation (SD) 4.2] for the ISF:8 and 11.9 (SD 7.6) for the ISF:16 (p < 0.001). Statistically significant associations were found between oral abnormalities and the subscales of the ISF:8 and ISF:16 (p < 0.05). Both test-retest stability and internal consistency, as measured by the intra-class correlation coefficient (ICC) (ISF:8 = 0.98 and ISF:16 = 0.97) and Cronbach's alpha (ISF:8 = 0.70 and ISF:16 = 0.84) proved to be adequate. Construct validity was confirmed from the correlation between the short form scores and oral health and overall well-being ratings. The score on the short forms of the CPQ(11-14) was able to discriminate between different oral conditions. Criterion validity was satisfactory (p < 0.05). Conclusion: The Brazilian versions of CPQ(11-14) ISF:8 and ISF:16 have satisfactory psychometric properties, similar to those of the original instrument. Answer: The short-form Child Perceptions Questionnaire (CPQ ISF-16) has been explored for its relevance and meaning to children with malocclusion in the UK. The study conducted in the UK found that while some items of the CPQ ISF-16 were relevant, particularly those from the emotional and social well-being subscales, other questions were not considered relevant by the young participants. The participants expressed concerns about the face and content validity of the questionnaire, including the response format, the use of 'double' questions, and the interpretation of certain words. This suggests that the CPQ ISF-16 may not fully capture the impact of malocclusion on the quality of life of children in the UK, and further consideration should be given to developing a child-centered malocclusion-specific oral health-related quality of life (OHRQoL) measure (PUBMED:20439924). In contrast, other studies have shown that short-form versions of the CPQ have been successfully adapted and validated in different cultural contexts, such as the Hindi version of the CPQ11-14 in India, which was found to be valid and reliable after modifying certain questions to suit the Indian sociocultural context (PUBMED:30397369). Similarly, the Brazilian version of the CPQ11-14 short forms (ISF:8 and ISF:16) demonstrated satisfactory psychometric properties and was able to discriminate between different oral conditions, including malocclusion (PUBMED:19445725).
However, it is important to note that the agreement between children and parents on children's OHRQoL using the Swedish short forms of CPQ11-14 and P-CPQ was low, indicating that perceptions of children and parents can differ and both should be considered in estimating a child's OHRQoL (PUBMED:31094265). In summary, while the CPQ ISF-16 has been questioned for its relevance to children with malocclusion in the UK, other short-form versions of the CPQ have been adapted and validated successfully in different cultural settings. This suggests that the relevance and meaningfulness of the CPQ to children with malocclusion may vary depending on cultural and contextual factors, and there may be a need for region-specific adaptations of the questionnaire.
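Several of the abstracts above summarize reliability with two statistics, Cronbach's alpha for internal consistency and the intraclass correlation coefficient for test-retest agreement. As a purely illustrative aid to reading those numbers, the short Python sketch below computes Cronbach's alpha from a respondents-by-items score matrix; the toy Likert-style data and variable names are assumptions made for illustration and are not taken from any of the cited studies.

import numpy as np

def cronbach_alpha(scores):
    # Cronbach's alpha for a (respondents x items) matrix of item scores:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale score)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Toy data: 5 respondents answering a 4-item short-form scale coded 0-4.
toy_scores = [[0, 1, 1, 0],
              [2, 2, 3, 2],
              [1, 1, 2, 1],
              [3, 4, 3, 3],
              [2, 3, 2, 2]]
print(round(cronbach_alpha(toy_scores), 2))

Values in the 0.7-0.9 range, like most of those quoted above, are conventionally read as acceptable to good internal consistency for group-level comparisons.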
Instruction: Household motor vehicle use and weight status among Colombian adults: are we driving our way towards obesity? Abstracts: abstract_id: PUBMED:19632267 Household motor vehicle use and weight status among Colombian adults: are we driving our way towards obesity? Objective: To determine the associations between household motor vehicle ownership and weight status among Colombian adults. Methods: Secondary analysis of data from the 2005 Demographic and Health Survey of Colombia. Height, weight and waist circumference were objectively measured in 49,079 adults, ages 18 to 64, who resided in urban settings. Abdominal obesity was defined as a waist circumference >80 cm in women and >90 cm in men. Results: Prevalence was 19.9% for motor vehicle ownership in household, 33.1% for BMI between 25 and 29.9 kg/m(2), 14.4% for BMI >30 kg/m(2), and 46% for abdominal obesity. Males reporting any household motor vehicle ownership were more likely to be overweight or obese, and to have abdominal obesity (p for gender x exposure variables interaction <0.001). Conclusions: Household motor vehicle ownership is associated with overweight, obesity, and abdominal obesity among Colombian men but not women. abstract_id: PUBMED:24929196 Driving towards obesity: a systematized literature review on the association between motor vehicle travel time and distance and weight status in adults. Background: Higher levels of sedentary behavior are associated with adverse health outcomes. Over-reliance on private motor vehicles for transportation is a potential contributor to the obesity epidemic. The objective of this study was to review evidence on the relationship between motor vehicle travel distance and time and weight status among adults. Methods: Keywords associated with driving and weight status were entered into four databases (PubMed, Medline, Transportation Research Information Database and Web of Science) and retrieved article titles and abstracts screened for relevance. Relevant articles were assessed for their eligibility for inclusion in the review (English-language articles; a sample ≥16 years of age; included a measure of time or distance traveling in a motor vehicle and weight status; and estimated the association between driving and weight status). Results: The database search yielded 2781 articles, from which 88 were deemed relevant and 10 studies met the inclusion criteria. Of the 10 studies included in the review, 8 found a statistically significant positive association between time and distance traveled in a motor vehicle and weight status. Conclusions: Multilevel interventions that make alternatives to driving private motor vehicles more convenient, such as walking and cycling, are needed to promote healthy weight in the adult population. abstract_id: PUBMED:32584997 Household motor vehicle ownership and obesity among Indian females and males: 2005-2016. Background: To determine associations between household motor vehicle ownership and obesity among Indian adults. Methods: Bivariate and multivariable analyses were conducted using the 2005-2006 and 2015-2016 Indian Demographic and Health surveys, with over 800 000 respondents. Results: Obesity prevalence (body mass index ≥25 kg/m²) rose in females (16.87% to 20.35%) and in males (12.55% to 18.74%).
In 2005, having both types of vehicles (motorcycle/motor scooter and car) significantly increased the odds of obesity in females (OR, 1.63; 95% CI 1.50 to 1.76) and males (OR, 2.49; 95% CI 2.24 to 2.77) as well as in 2015 (OR, 1.10; 95% CI 1.07 to 1.13 and OR, 1.56; 95% CI 1.45 to 1.68, respectively). The wealthiest were more likely to be obese in 2005 (OR, 14.95; 95% CI 16.06 to 17.12 for females; OR, 12.69; 95% CI 10.17 to 15.70 for males) and in 2015 (OR, 7.69; 95% CI 7.43 to 7.95 for females and OR, 6.40; 95% CI 5.40 to 7.01 for males). Higher education levels, being younger and rural residence were significant protective factors in 2005 and 2015. Conclusions: After adjusting for confounders, motor vehicle ownership was significantly associated with obesity at both time points, but the effect of vehicle ownership presents differently by gender. With obesity prevalence increasing in India, policies promoting active vs motorized transport could attenuate this problem. abstract_id: PUBMED:20725775 Tobacco use cessation and weight management among motor freight workers: results of the gear up for health study. Objectives: To present the results of a study of a worksite-based intervention to promote tobacco use cessation and improve weight management among motor freight workers. Methods: This study used a pre-test/post-test, non-randomized design to assess the effectiveness of a four-month intervention that addressed the social context of the work setting. We evaluated 7-day tobacco quit prevalence among baseline tobacco users, and successful weight management, defined as no weight gain in workers with BMI <25 at baseline and any weight loss among overweight and obese workers. Results: At baseline, 40% were current tobacco users, and 88% had a BMI of 25 or greater. Of 542 workers invited to participate, 227 agreed to participate and received at least the first telephone call (42%). Ten months post-baseline, baseline tobacco users who participated in the intervention were more likely to have quit using tobacco than non-participants: 23.8% vs. 9.1% (p = 0.02). There was no significant improvement in weight management. Conclusions: Incorporating work experiences and job conditions into messages of health behavior change resulted in significant tobacco use cessation among participating motor freight workers. abstract_id: PUBMED:34568243 Interrelationship Between Age, Gender, and Weight Status on Motor Coordination in Italian Children and Early Adolescents Aged 6-13 Years Old. Although numerous studies have reported a negative correlation between motor coordination (MC) and overweight/obesity in children and adolescents, the interrelationship between age, gender, and weight status is still debatable. Hence, the aim of this cross-sectional study was to examine the association between MC and weight status according to age and gender across childhood and early adolescence in a large sample of Italian elementary and middle school students. A total of 1,961 Italian school students (1,026 boys, 935 girls) were stratified into three consecutive age groups (6-7, 8-10, and 11-13 years) and four weight status categories (underweight, normal weight, overweight, and obese) according to Cole's body mass index (BMI) cut-off points for children. MC performance was assessed by measuring motor quotient (MQ) with the Körperkoordinationstest für Kinder (KTK).
Results showed significantly lower MQ levels in children with overweight (OW) and obesity (OB) in both sexes for all age groups than in peers with normal weight (NW), except in 6-7-year-old boys. Girls with OW and OB had similar MQ levels across all age groups, while younger boys with OW and OB showed higher MQ levels than older ones (p < 0.05). The 6-7-year-old boys showed better MQ levels than their girl peers in the NW, OW, and OB categories, while 8-10-year-old boys did so in the underweight (UW), NW, and OW categories, and 11-13-year-old boys only in NW (p < 0.05). No interaction effect was found between age, gender, and weight status on MQ levels. These outcomes showed the negative impact of higher weight status on MC performance according to age and gender, pointing out the importance of planning targeted motor programs that consider these variables to improve MC performance. abstract_id: PUBMED:30989749 Factors associated with nutritional status and motor development among young children. This study was conducted to investigate the nutritional status and gross motor development of children in a suburban area of northeast China and the factors influencing these issues. A total of 189 children aged 1-3 years and their mothers participated. Child and maternal length/height and weight were measured. Information was collected on basic characteristics, household monthly income, child's age at achievement of six gross motor milestones, and feeding practices by interviewing the mothers/caregivers. The prevalence of overweight (using World Health Organization growth standards) and delayed achievement of walking alone among the children was 27% and 12.7%, respectively. The independent predictors of increased odds for being overweight were middle household income and high dietary diversity score. Exclusive breastfeeding was associated with lower odds of children being overweight. Delayed solid feeding initiation was associated with increased odds of delay in walking alone for the children. Breastfeeding predicted reduced odds of this issue. These findings suggest that nutritional intervention and childcare education, such as breastfeeding promotion, improved diet quality, and the timely introduction of solid food, could improve growth and development among young children in suburban northeast China. abstract_id: PUBMED:33367276 Effectiveness of a Physical Education Program on the Motor and Pre-literacy Skills of Preschoolers From the Training-To-Health Project: A Focus on Weight Status. Many studies have reported a positive relationship between motor skills, cognitive functions, and school performance in school-age children; however, little is known about preschool children. The aim of the present study was to demonstrate the effectiveness of a physical education program (PEP) on locomotor, object control skills, and pre-literacy cognitive functions in a wide population of preschoolers and verify whether weight status could influence these abilities. In the context of the Training-to-Health Project, a sample of 1,029 preschoolers was recruited in kindergartens from the urban area of Palermo (Italy). Their gross motor and pre-literacy skills were tested before (PRE) and after (POST) 16 weeks (2 h/week) of a PEP, which included ludic-motor activities aimed at developing body awareness and fundamental motor and perceptual-sensory skills.
Analyses of variance (ANOVA) were performed to assess the skills before and after the intervention and to evaluate the effect of different categories of weight status on the examined variables. Regression analyses were conducted to confirm the hypothesized interrelationship between motor and pre-literacy skills in the considered sample. Both locomotor/object control and pre-literacy skills were significantly higher in children after the PEP (p < 0.05). We found 23% of overweight children and no significant difference between weight status classes in both PRE and POST PEP groups. In the POST group, higher locomotor and object control skills were mostly associated with better pre-literacy skills. This study shows that the PEP was effective in improving both motor and pre-literacy skills in preschoolers independently of age and gender, while weight status did not affect these skills, suggesting that this program can be administered to children across weight status categories. Therefore, the PEP could be a decisive education strategy to enhance motor and cognitive learning in preschool children and to achieve successful academic outcomes. abstract_id: PUBMED:22965211 Weight status and gross motor skill in kindergarten children. Purpose: Childhood obesity rates are increasing globally. Physical activity is one behavioral variable that influences weight status. Participation in physical activity requires basic gross motor proficiency in early childhood. The purpose of this study was to examine the relationship between gross motor skill level and weight status in a large national representative sample of kindergarten-aged children. Methods: Body mass index percentile ranking was calculated for 4650 children from the Early Childhood Longitudinal Study-Birth Cohort. Children were classified into underweight, healthy, overweight, or obese categories according to the Centers for Disease Control and Prevention criteria. The Early Screening Inventory Revised was used to evaluate gross motor skill level. Results: Children with obesity displayed lower gross motor skill levels compared with peers of healthy weight. Largest differences were seen in locomotor and balance skills. Conclusions: Clinicians should consider adjusting gross motor expectations for locomotor or stability tasks in young children with obesity. abstract_id: PUBMED:23113200 Quality of Life, Motor Ability, and Weight Status among School-aged Children of Tehran. Background: This study aimed to investigate the relationship between health-related quality of life (HRQOL), motor ability and weight status in children. Methods: Two hundred forty children aged 9-11 yr were selected via a multi-stage cluster sampling design from primary schools in Shahre Qods, Tehran, Iran in 2007. HRQOL was assessed by the pediatric quality of life inventory (PedsQL). Motor abilities were determined by a Basic Motor Ability Test (BMAT). Body mass index was calculated to determine weight status. Results: Psychosocial, physical, and total health-related qualities of life (all P < 0.05) were significantly lower for obese than for normal-weight participants. In contrast, the mean scores for each HRQOL domain did not differ significantly across motor ability categories. No significant interaction was apparent when examining HRQOL scores, BMAT variables and weight status. Conclusion: Regardless of motor ability levels, reducing body weight among children is a potential avenue for promoting improved HRQOL.
Overweight boys reported significantly worse school performance than overweight girls, suggesting the importance of considering such dimensions in programs aimed at further understanding obesity in children. abstract_id: PUBMED:32735359 The importance of body weight status on motor competence development: From preschool to middle childhood. We evaluated the association between weight status and motor competence from preschool age (3-5 years of age) until middle childhood (7-9 years of age). A longitudinal study was conducted with three- to five-year-old preschool children (n = 1155) enrolled in public and private preschools in Recife, Brazil. Children were followed twice (2010, 2012, and 2014) for four years. The Körperkoordinationstest für Kinder (KTK) assessed the children's motor competence (KTK Motor Quotient). Weight status (underweight, normal weight, overweight, and obesity) was classified according to the children's sex and age. Preschool children with normal weight exhibited higher motor competence at 5-7 years of age compared to preschool children with overweight (+3.73 MQ, P = .03) and obesity (+5.09 MQ, P < .01). Preschool children with normal weight presented higher motor competence at 7-9 years of age compared to their peers with overweight (+6.00 MQ, P = .03) and obesity (+5.88 MQ, P = .01). Children with normal weight at 5-7 years of age presented higher motor competence at 7-9 years of age compared to their peers with overweight (+3.33 MQ, P = .02) and obesity (+4.00 MQ, P = .02). Independent of the childhood phase and extension of the period evaluated (2- or 4-year period), children who had excessive weight (overweight or obesity) and changed their weight status to underweight or normal weight presented similar motor competence compared to children who continued underweight or normal weight. Weight status already at preschool age is an important predictor of the children's motor competence until middle childhood. Interventions improving the children's weight status, already at preschool age, might impact their motor competence development positively. Answer: The study identified in abstract PUBMED:19632267 investigated the association between household motor vehicle ownership and weight status among Colombian adults. The results indicated that males who reported any household motor vehicle ownership were more likely to be overweight or obese and to have abdominal obesity. However, this association was not observed among women. The study concluded that household motor vehicle ownership is associated with overweight, obesity, and abdominal obesity among Colombian men but not women. This suggests that increased reliance on motor vehicles for transportation could be contributing to higher rates of obesity, at least among men, in urban settings in Colombia. This finding is consistent with the broader literature on the subject, as indicated by a systematic literature review (PUBMED:24929196), which found that eight out of ten studies included in the review reported a statistically significant positive association between time and distance traveled in a motor vehicle and weight status. The review suggested that interventions promoting alternatives to driving, such as walking and cycling, are needed to promote healthy weight in the adult population. In summary, the evidence suggests that household motor vehicle use is associated with increased weight status among Colombian men, which could be indicative of a broader trend where reliance on motor vehicles contributes to obesity.
This underscores the importance of promoting active transportation as a strategy to combat obesity.
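The Colombian and Indian analyses above report their vehicle-obesity associations as odds ratios with 95% confidence intervals. As a minimal, hypothetical sketch of where such numbers come from, the Python lines below compute an unadjusted odds ratio and its Wald confidence interval from an invented 2x2 exposure-by-outcome table; the adjusted estimates quoted in the abstracts would instead come from multivariable logistic regression on the survey data.

import math

# Hypothetical counts (not survey data): rows = motor vehicle in household (yes/no), columns = obese (yes/no).
a, b = 300, 700   # vehicle owners: obese / not obese
c, d = 200, 900   # non-owners: obese / not obese

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")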
Instruction: Synchrony of hand-foot coupled movements: is it attained by mutual feedback entrainment or by independent linkage of each limb to a common rhythm generator? Abstracts: abstract_id: PUBMED:17067367 Synchrony of hand-foot coupled movements: is it attained by mutual feedback entrainment or by independent linkage of each limb to a common rhythm generator? Background: Synchrony of coupled oscillations of ipsilateral hand and foot may be achieved by controlling the interlimb phase difference through a crossed kinaesthetic feedback between the two limbs, or by an independent linkage of each limb cycle to a common clock signal. These alternative models may be experimentally challenged by comparing the behaviour of the two limbs when they oscillate following an external time giver, either alone or coupled together. Results: Ten subjects oscillated their right hand and foot both alone and coupled (iso- or antidirectionally), paced by a metronome. Wrist and ankle angular position and Electromyograms (EMG) from the respective flexor and extensor muscles were recorded. Three phase delays were measured: i) the clk-mov delay, between the clock (metronome beat) and the oscillation peak; ii) the neur (neural) delay, between the clock and the motoneurone excitatory input, as inferred from the EMG onset; and iii) the mech (mechanical) delay between the EMG onset and the corresponding point of the limb oscillation. During uncoupled oscillations (0.4 Hz to 3.0 Hz), the mech delay increased from -7 degrees to -111 degrees (hand) and from -4 degrees to -83 degrees (foot). In contrast, the clk-mov delay remained constant and close to zero in either limb since a progressive advance of the motoneurone activation on the pacing beat (neur advance) compensated for the increasing mech delay. Adding an inertial load to either extremity induced a frequency dependent increase of the limb mechanical delay that could not be completely compensated by the increase of the neural phase advance, resulting in a frequency dependent increment of clk-mov delay of the hampered limb. When limb oscillations were iso- or antidirectionally coupled, either in the loaded or unloaded condition, the three delays did not significantly change with respect to values measured when limbs were moved separately. Conclusion: The absence of any significant effect of limb coupling on the measured delays suggests that during hand-foot oscillations, both iso- and antidirectionally coupled, each limb is synchronised to the common rhythm generator by a "private" position control, with no need for a crossed feedback interaction between limbs. abstract_id: PUBMED:30900288 Entrainment and Modulation of Gesture-Speech Synchrony Under Delayed Auditory Feedback. Gesture-speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual-motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill's () classic study on gesture-speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). 
Consistent with, and extending, McNeill's original results, we obtain evidence that (a) gesture-speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) that gesture and speech variably entrain to the external auditory delay as indicated by a consistent shift in gesture-speech synchrony offsets (i.e., entrainment effect), and (c) that the coupling effect and the entrainment effect are co-dependent. We suggest, therefore, that gesture-speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions. abstract_id: PUBMED:37887509 Pilot Studies on Empathy and Closeness in Mutual Entrainment/Improvisation vs. Formalised Dance with Different Types of Rhythm (Regular, Irregular, and No Rhythm) and Coupling (Visual, Haptic, Full Coupling): Building a Case for the Origin of Dance in Mutual Entrainment Empathic Interactions in the Mother-Infant Dyad. This paper employs a novel research design to examine changes in empathy and closeness in partnered face-to-face dance, considering both different types of rhythm (regular, irregular, and no external rhythm, or 'mutual entrainment only') and different types of coupling (visual only, haptic only, and full visual and haptic coupling). Two studies were undertaken to pilot the design. In both studies, the Interpersonal Reactivity Index and Inclusion of Other in the Self were used to measure empathy and closeness, respectively. Study 1 employed 24 participants (12 pairs) distributed across two rhythm conditions, external regular rhythm, and no external rhythm, with full coupling in both conditions. Closeness increased similarly in both conditions. Empathic concern (EC) was significantly affected in the 'no rhythm' condition. Study 2 employed 54 participants assigned to form pairs and distributed across all combinations of rhythm and coupling types. Closeness decreased with irregular rhythm. EC increased in the 'no rhythm' conditions relative to regular rhythm. Fantasy (F) decreased with haptic coupling only (no visual coupling) while personal distress (PD) increased. In addition, the analyses suggest that perspective taking (PT) increases with irregular rhythm and in the condition without rhythm (mutual entrainment only). The discussion gauges the value of the designs and results for capturing changes in empathy and closeness with different rhythm and coupling types. Capturing such changes is important for research on the origins of dance in empathic mutual entrainment in the mother-infant dyad. abstract_id: PUBMED:26124735 Word-by-word entrainment of speech rhythm during joint story building. Movements and behavior synchronize during social interaction at many levels, often unintentionally. During smooth conversation, for example, participants adapt to each other's speech rates. Here we aimed to find out to what extent speakers adapt their turn-taking rhythms during a story-building game. Nine sex-matched dyads of adults (12 males, 6 females) created two 5-min stories by contributing to them alternatingly one word at a time. The participants were located in different rooms, with audio connection during one story and audiovisual during the other. They were free to select the topic of the story. Although the participants received no instructions regarding the timing of the story building, their word rhythms were highly entrained (R̄ = 0.70, p < 0.001) even though the rhythms as such were unstable (R̄ = 0.14 for pooled data).
Such high entrainment in the absence of steady word rhythm occurred in every individual story, independently of whether the subjects were connected via audio-only or audiovisual link. The observed entrainment was of similar strength as typical entrainment in finger-tapping tasks where participants are specifically instructed to synchronize their behavior. Thus, speech seems to spontaneously induce strong entrainment between the conversation partners, likely reflecting automatic alignment of their semantic and syntactic processes. abstract_id: PUBMED:10968217 Neural compensation for mechanical differences between hand and foot during coupled oscillations of the two segments. (1) Rhythmic flexion-extensions of the hand and foot on one side were performed by ten male and nine female subjects. Limbs were rotated in the same direction (in-phase) or in opposite directions (anti-phase). Oscillation frequency ranged from 0.6 to 3.2 Hz for in-phase and to 2.2 Hz for anti-phase movements. In both genders, movement synchrony was more strictly maintained during anti-phase than during in-phase coupling. (2) EMG recordings showed that, in males, movement synchrony was achieved by activating hand movers in advance of foot movers. This phase advance increased as the oscillation frequency increased. In females, instead, muscles of the two limbs were activated almost simultaneously over most of the frequency range. Since the different timing of muscle activation in the two genders suggests that their limbs have different mechanical characteristics, the frequency response of each limb was estimated in either gender. The frequency response between 0.6 and 3.2 Hz was evaluated in five males and five females by measuring the phase delay between the onset of the EMG activity and the onset of the related movement, both when the limbs were moved in isolation and when they were coupled. (3) In uncoupled conditions, the hand and foot curves were roughly parallel in females, the phase delay being about 45 degrees larger in the hand than in the foot. In males, the curves were also separated by 45 degrees at the lowest frequencies but they further diverged when the frequency was raised, because of a faster increase in the phase delay in the hand than in the foot. These results indicate that, when the extremities have to be coupled, a nervous compensation is necessary and that it must be different in the two genders. (4) Analysis of the phase-response when limbs were coupled showed that synchrony was approached by two mechanisms: (a) an earlier EMG activation of the hand movers, preferentially utilised by males during in-phase coupling; and (b) a change in the viscoelastic properties of one extremity, which reduces (or eliminates) the difference between their frequency responses as well as between the EMG onsets of hand and foot movers. This second mechanism was utilised by both genders during anti-phase coupling. abstract_id: PUBMED:11482840 Neural compensation for mechanical loading of the hand during coupled oscillations of the hand and foot. The role of kinaesthetic afferences in controlling coupling of voluntary oscillation of the hand and foot, both in-phase and anti-phase, was investigated by modifying the mechanical properties of one of the two segments (the hand) with applied inertial or elastic loads. Loads consisted of a lead disk, rotating coaxially with the wrist (total inertial momentum 15 g m2), or in two symmetrical rubber bands (elasticity, 4 g deg(-1)) connected 5 cm away from the wrist pivot. 
Experiments were performed on five male and five female subjects. Both the frequency responses of the hand and foot (i.e. the phase relations between the onset of muscular activation in limb extensors and the onset of the related movement) and the inter-limb phase relations (the phase differences between the hand and foot movement cycles and between the onsets of the electromyographic (EMG) activity in hand and foot extensors) were analysed. The hand frequency-response was fitted with a 2nd-order model, allowing us to describe the loaded and unloaded conditions through the changes in the model response. Inertial loading induced an immediate and steep decay in the frequency response, with a clear-cut reduction of the model resonance frequency, while elastic loading shifted the response to the right and upwards. Inter-limb phase relations were only partially affected by inertial loading of the hand. Despite the fact that the load strongly increased the difference between the frequency-responses of the hand and foot, when hand and foot were oscillated in-phase only about half of this difference remained as an increased phase-lag between hand and foot oscillations. The other half was offset by an advance of the contraction of the hand movers with respect to the foot movers. This compensation mechanism was more effective during anti-phase than during in-phase movements. Elastic loading improved inter-limb synchronisation, since it superimposed the hand frequency-response on that of the foot. In this condition, the requested synchronisation (in-phase or anti-phase) could be achieved by an almost simultaneous (or in strict phase opposition) contraction of the hand and foot movers. In conclusion, the main feedback reaction to the de-coupling effect of hand loading consisted in modifying the timing of activation of the muscles moving the extremities. An advance of hand movers on foot movers is already present in unloaded conditions to compensate for the difference in the natural mechanical properties of the two segments. This advance is enhanced when increasing the inertia of the hand system and attenuated when increasing its elasticity. abstract_id: PUBMED:38083597 Robustness of Optimal Circadian Rhythm Entrainment under Model Perturbation. The optimal circadian rhythm entrainment problem has been studied based on mathematical models, e.g. the Kronauer model. In this paper, we study the robustness of light-based circadian rhythm entrainment under model parameter perturbations as well as propose a feedback control law to improve the robustness of the entrainment strategy. Our study finds the model parameter whose perturbations affect the entrainment the most. We also find that feedback control reduces the sensitivity of the entrainment process to model parameter perturbations. Clinical relevance: Circadian misalignment has negative impacts on health, such as higher risks of cardiovascular disease and cancer. We present a numerical study of how well optimized circadian rhythm entrainment plans that are derived from a generic mathematical model work on personalized cases. abstract_id: PUBMED:33785502 Synchrony of complex signals in an acoustically communicating katydid. The ability to entrain to auditory stimuli has been a powerful method to investigate the comparative rhythm abilities of different animals. While synchrony to regular simple rhythms is well documented, synchrony to complex stimuli, with multiple components at unequal time intervals, is rarer.
Several katydid species with simple calls have been shown to achieve synchrony as part of their natural calling interactions in multi-individual choruses. Yet no study so far has demonstrated synchrony in any insect with a complex call. Using natural calling behaviour and playback experiments, we investigated acoustic synchrony and the mechanisms underlying it in the katydid species Mecopoda 'Two Part Caller'. This species has a complex call consisting of a long trill followed by two or more chirps. We found that individual males synchronized trills and, to a lesser extent, chirps. Further investigation of trill synchrony showed that the timing of trills is modified by external trills but not chirps. Chirp synchrony is modified by external chirps, but also by trills. We suggest a qualitative two-oscillator model underlying synchrony in this species and discuss the implications for the evolution of acoustic synchrony. abstract_id: PUBMED:37154032 Temperature compensation and entrainment in cyanobacteria circadian rhythm. Circadian rhythm is an endogenous rhythmic behavior of organisms used to adapt to the external environment. Although most biochemical reactions accelerate with increasing temperature, the period of circadian rhythms remains relatively stable across a range of temperature, a phenomenon known as temperature compensation. Meanwhile, circadian rhythms can be reset by environmental signals, such as daily periodic light or temperature, a phenomenon known as entrainment. Cyanobacteria are the simplest organisms to have circadian rhythms. The effect of light on cyanobacteria circadian rhythm has been widely studied with mathematical models. However, the effect of temperature on cyanobacteria circadian rhythm and the mechanisms of temperature compensation and entrainment are far from clear. In this paper, we apply a recent model to incorporate temperature dependence by Van't Hoff rule. With numerical simulation, we study the temperature compensation and entrainment in detail. The results show that the system can exhibit temperature compensation when the post-transcription process is insensitive to temperature. The temperature compensation is caused by the cancellation of the increase of amplitude and the acceleration of speed, resulting in the stable period, when the temperature rises. The system can also exhibit temperature entrainment in constant light in a very limited temperature range. When the periodic light is added simultaneously to simulate more realistic environment, the temperature range of entrainment is greatly improved. The results also suggest that long-day condition is conducive to entrainment. The findings of this paper provide a theoretical reference for biological research and help us understand the dynamical mechanisms of cyanobacteria circadian rhythm. abstract_id: PUBMED:37395457 Mother-child inter-brain synchrony during a mutual visual search task: A study of feedback valence and role. Parent and child have been shown to synchronize their behaviors and physiology during social interactions. This synchrony is an important marker of their relationship quality and subsequently the child's social and emotional development. Therefore, understanding the factors that influence parent-child synchrony is an important undertaking. Using EEG hyperscanning, this study investigated brain-to-brain synchrony in mother-child dyads when they took turns performing a visual search task and received positive or negative feedback. 
In addition to the effect of feedback valence, we studied how their assigned role, i.e., observing or performing the task, influenced synchrony. Results revealed that mother-child synchrony was higher during positive feedback relative to negative feedback in delta and gamma frequency bands. Furthermore, a main effect was found for role in the alpha band with higher synchrony when a child observed their mother performing the task compared to when the mother observed their child. These findings reveal that a positive social context could lead a mother and child to synchronize more on a neural level, which could subsequently improve the quality of their relationship. This study provides insight into mechanisms that underlie mother-child brain-to-brain synchrony, and establishes a framework by which the impact of emotion and task demand on a dyad's synchrony can be investigated. Answer: The synchrony of hand-foot coupled movements is attained by independent linkage of each limb to a common rhythm generator rather than by mutual feedback entrainment. This conclusion is supported by a study where subjects oscillated their right hand and foot both alone and coupled, paced by a metronome. The study found that during hand-foot oscillations, both iso- and antidirectionally coupled, the delays measured (clk-mov delay, neur delay, and mech delay) did not significantly change when limbs were moved together compared to when they were moved separately. This suggests that each limb is synchronized to the common rhythm generator by a "private" position control, without the need for crossed feedback interaction between limbs (PUBMED:17067367).
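Two quantities recur in the synchrony abstracts above: the phase delay between a pacing signal (or muscle activation) and the resulting movement, and the R̄ statistic used to quantify how tightly word onsets were entrained in the story-building study. Assuming R̄ denotes the mean resultant length of circular statistics (an inference from its reported 0-1 range, not stated in the abstract), the Python sketch below shows how a set of phase differences in degrees reduces to R̄ and a mean phase; the toy values are illustrative and not data from any cited study.

import cmath
import math

def circular_summary(phase_differences_deg):
    # Map each phase difference onto the unit circle and average the unit vectors.
    # The length of the mean vector (0 to 1) measures phase locking; its angle is the mean phase.
    unit_vectors = [cmath.exp(1j * math.radians(p)) for p in phase_differences_deg]
    mean_vector = sum(unit_vectors) / len(unit_vectors)
    mean_resultant_length = abs(mean_vector)
    mean_phase_deg = math.degrees(cmath.phase(mean_vector))
    return mean_resultant_length, mean_phase_deg

# Toy phase differences (degrees) between a metronome beat and the movement peak.
tightly_locked = [-5, 2, -8, 4, -1, 3]     # clustered near zero, R close to 1
weakly_locked = [10, 150, -120, 75, -40]   # scattered, R close to 0
print(circular_summary(tightly_locked))
print(circular_summary(weakly_locked))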
Instruction: Is coffee a colonic stimulant? Abstracts: abstract_id: PUBMED:9581985 Is coffee a colonic stimulant? Background: It is unclear if ingestion of coffee affects colonic function and if this effect is due to its caffeine content. We investigated the effects of coffee on colonic motor activity in healthy humans. Methods: We performed ambulatory colonic manometry by placing a six-sensor solid-state probe up to the mid-transverse colon in 12 healthy subjects. The following day, over a 10 h period, subjects received four stimuli: 240 ml of three drinks at 45 degrees C in random order: black Colombian coffee (150 mg caffeine), decaffeinated coffee or water, and a 1000 kcal meal. We analyzed the effects of each stimulant on colonic motor responses. Results: Caffeinated coffee, decaffeinated coffee and meal induced more activity in the colon with a greater area under the curve of pressure waves (P < 0.01) and a greater number of propagated contractions (P < 0.05) when compared with water. Caffeinated coffee, decaffeinated coffee and meal induced greater (P < 0.05) motor activity in the transverse/descending colon when compared with the rectosigmoid colon. The effects of decaffeinated coffee on colonic motility were not significantly different from those of water or caffeinated coffee and were lower (P < 0.05) than that of a meal. Conclusion: Caffeinated coffee stimulates colonic motor activity. Its magnitude is similar to a meal, 60% stronger than water and 23% stronger than decaffeinated coffee. abstract_id: PUBMED:23904092 Catabolism of coffee chlorogenic acids by human colonic microbiota. Several studies have indicated potential health benefits associated with coffee consumption. These benefits might be ascribed in part to the chlorogenic acids (CGAs), the main (poly)phenols in coffee. The impact of these dietary (poly)phenols on health depends on their bioavailability. As they pass along the gastrointestinal tract, CGAs are metabolized extensively and it is their metabolites rather than the parent compounds that predominate in the circulatory system. This article reports on a study in which after incubation of espresso coffee with human fecal samples, high-performance liquid chromatography-mass spectrometry (HPLC-MS) and gas chromatography-mass spectrometry (GC-MS) were used to monitor CGA breakdown and identify and quantify the catabolites produced by the colonic microflora. The CGAs were rapidly degraded by the colonic microflora and over the 6-h incubation period, 11 catabolites were identified and quantified. The appearance of the initial degradation products, caffeic and ferulic acids, was transient, with maximum quantities at 1 h. Dihydrocaffeic acid, dihydroferulic acid, and 3-(3'-hydroxyphenyl)propionic acid were the major end products, comprising 75-83% of the total catabolites, whereas the remaining 17-25% consisted of six minor catabolites. The rate and extent of the degradation showed a clear influence of the composition of the gut microbiota of individual volunteers. Pathways involved in colonic catabolism of CGAs are proposed and comparison with studies on the bioavailability of coffee CGAs ingested by humans helped distinguish between colonic catabolites and phase II metabolites of CGAs. abstract_id: PUBMED:35349898 Bioaccessibility and bioactivities of phenolic compounds from roasted coffee beans during in vitro digestion and colonic fermentation. Bioaccessibility and bioactivity of phenolic compounds in coffee beans relate to the roasting and digestion process.
This study aimed to estimate phenolic content, antioxidant potential, bioaccessibility, and changes in short chain fatty acids (SCFAs) production during in vitro digestion and colonic fermentation of commercial roasted (light, medium and dark) coffee beans. There was no significant difference found among all three different roasting levels. TPC and DPPH were enhanced 15 mg GAE/g and 60 mg TE/g during gastrointestinal digestion, respectively. For colonic fermentation, the highest TPC and FRAP of all coffee beans was found at 2 and 4 h, respectively. The gastric bioaccessibility of most of the phenolic compounds were relatively higher due to thermal phenolic degradation. Total SCFAs production was only up to 0.02 mM because of thermal polysaccharide decomposition. Light roasted beans exhibited relatively higher phenolic bioaccessibility, antioxidant activities and SCFAs production, which would be more beneficial to gut health. abstract_id: PUBMED:36574000 Effect of Caffeine on Colonic Manometry in Children. Objectives: Coffee and caffeinated products have been widely consumed for many centuries. Previous adult studies have suggested that both coffee and decaffeinated beverages induce colonic motility. However, no study has been conducted in pediatrics, and the role of caffeine alone in pediatric colonic motility needs to be explored. Methods: A prospective study of pediatric patients undergoing standard colonic motility testing that were able to consume caffeinated coffee, decaffeinated coffee, and caffeine tablet during colonic manometry. Patients who had a gastrocolonic reflex and high amplitude propagated contractions (HAPCs) in response to intraluminal administration of bisacodyl in the colon were included in the final analyses. Results: Thirty-eight patients were recruited, 22 of which were excluded, 11 due to abnormal studies (no HAPC seen in response to intraluminal response to bisacodyl), and 11 due to inability to consume all study agents or complete the study. Sixteen patients met criteria for final analyses. Intracolonic bisacodyl produced a larger area under the curve (AUC) compared to all other agents. Caffeinated coffee resulted in a higher AUC, motility index (MI), and time to HAPC compared with decaffeinated coffee (P < 0.05). There was no significant difference between caffeinated coffee and caffeine tablet, or caffeine tablet and decaffeinated coffee. Conclusions: Caffeine is indeed a colonic stimulant; however, other components of caffeinated and non-caffeinated beverages likely induce colonic response and require further evaluation for possible use as a colonic stimulant. abstract_id: PUBMED:36139892 Understanding the Gastrointestinal Behavior of the Coffee Pulp Phenolic Compounds under Simulated Conditions. Numerous residues, such as the coffee pulp, are generated throughout coffee processing. This by-product is a source of antioxidant phytochemicals, including phenolic compounds and caffeine. However, the antioxidant properties of the phenolic compounds from the coffee pulp are physiologically limited to their bioaccessibility, bioavailability, and biotransformation occurring during gastrointestinal digestion. Hence, this study explored the phenolic and caffeine profile in the coffee pulp flour (CPF) and extract (CPE), their intestinal bioaccessibility through in vitro digestion, and their potential bioavailability and colonic metabolism using in silico models.
The CPE exhibited a higher concentration of phenolic compounds than the CPF, mainly phenolic acids (protocatechuic, chlorogenic, and gallic acids), followed by flavonoids, particularly quercetin derivatives. Caffeine was found in higher concentrations than phenolic compounds. The antioxidant capacity was increased throughout the digestive process. The coffee pulp matrix influenced phytochemicals' behavior during gastrointestinal digestion. Whereas individual phenolic compounds generally decreased during digestion, caffeine remained stable. Then, phenolic acids and caffeine were highly bioaccessible, while flavonoids were mainly degraded. As a result, caffeine and protocatechuic acid were the main compounds absorbed in the intestine after digestion. Non-absorbed phenolic compounds might undergo colonic biotransformation yielding small and potentially more adsorbable phenolic metabolites. These results contribute to establishing the coffee pulp as an antioxidant food ingredient since it contains bioaccessible and potentially bioavailable phytochemicals with potential health-promoting properties. abstract_id: PUBMED:28893365 Coffee Consumption and Prevention of Cirrhosis: In Support of the Caffeine Hypothesis. Coffee is acknowledged as the most widely used drug worldwide. Coffee is also a foodstuff, so its use is often used to satisfy dietary urges. When used as a drug, coffee is normally consumed as a stimulant rather than to treat or prevent particular diseases. Recently, coffee consumption has been inversely related to progression of liver fibrosis to cirrhosis and even hepatocellular carcinoma. Experiments in cellular and animal models have provided biological plausibility for coffee as an antifibrotic agent in the liver. A recent article examined one of the key questions regarding the antifibrotic role of coffee-specifically what is the primary antifibrotic agent in coffee? This article briefly reviews the relevant issues with regard to coffee as an antifibrotic agent for patients with chronic liver disease. abstract_id: PUBMED:34560791 Coffee Leaves: An Upcoming Novel Food? Unlike those of coffee beans, the healthy properties of coffee leaves have been overlooked for a long time, even if they are consumed as a beverage by local communities of several African countries. Due to the presence of xanthines, diterpenes, xanthones, and several other polyphenol derivatives as main secondary metabolites, coffee leaves might be useful to prevent many daily disorders. At the same time, as for all bioactive molecules, careless use of coffee leaf infusions may be unsafe due to their adverse effects, such as the excessive stimulant effects on the central nervous system or their interactions with other concomitantly administered drugs. Moreover, the presence of some toxic diterpene derivatives requires careful analytical controls on manufactured products made with coffee leaves. Accordingly, knowledge about the properties of coffee leaves needs to be increased to know if they might be considered a good source for producing new supplements. The purpose of the present review is to highlight the biosynthesis, metabolism, and distribution of the 4 main classes of secondary metabolites present in coffee leaves, their main pharmacological and toxicological aspects, and their main roles in planta. Differences in coffee leaf chemical composition depending on the coffee species will also be carefully considered. abstract_id: PUBMED:7881818 Caffeine and coffee: effects on health and cardiovascular disease. 
Caffeine is a methylxanthine whose primary biological effect is the competitive antagonism of the adenosine receptor. Its presence in coffee, tea, soda beverages, chocolate and many prescription and over-the-counter drugs makes it a commonly consumed stimulant. Coffee and/or caffeine consumption has been linked to many human diseases in epidemiologic studies. Causal relationships have been difficult to substantiate. Initial investigations, showing an association between coffee and coronary heart disease, suffer from confounding variables and have been difficult to replicate. Recent studies, showing a significant effect over long follow-up periods and with high coffee intake, have again raised the question of a role for coffee and/or caffeine consumption in the pathogenesis of atherosclerotic heart disease. Contrary to common belief, the published literature provides little evidence that coffee and/or caffeine in typical dosages increases the risk of infarction, sudden death or arrhythmia. abstract_id: PUBMED:27374534 Spent coffee grounds, an innovative source of colonic fermentable compounds, inhibit inflammatory mediators in vitro. Spent coffee grounds (SCG), rich in dietary fiber can be fermented by colon microbiota producing short-chain fatty acids (SCFAs) with the ability to prevent inflammation. We investigated SCG anti-inflammatory effects by evaluating its composition, phenolic compounds, and fermentability by the human gut flora, SCFAs production, nitric oxide and cytokine expression of the human gut fermented-unabsorbed-SCG (hgf-NDSCG) fraction in LPS-stimulated RAW 264.7 macrophages. SCG had higher total fiber content compared with coffee beans. Roasting level/intensity reduced total phenolic contents of SCG that influenced its colonic fermentation. Medium roasted hgf-NDSCG produced elevated SCFAs (61:22:17, acetate, propionate and butyrate) after prolonged (24h) fermentation, suppressed NO production (55%) in macrophages primarily by modulating IL-10, CCL-17, CXCL9, IL-1β, and IL-5 cytokines. SCG exerts anti-inflammatory activity, mediated by SCFAs production from its dietary fiber, by reducing the release of inflammatory mediators, providing the basis for SCG use in the control/regulation of inflammatory disorders. The results support the use of SGC in the food industry as dietary fiber source with health benefits. abstract_id: PUBMED:9806164 Prevention of colonic aberrant crypt foci and modulation of large bowel microbial activity by dietary coffee fiber, inulin and pectin. The present experiments were aimed at developing novel dietary fibers to aid in reduction of colon cancer risk. We assessed the effects of coffee (non-fiber fraction), coffee fiber (arabino-galactose polymer) and inulin (oligo-fructose) in male F344 rats using formation of azoxymethane (AOM)-induced aberrant crypt foci (ACF) in the colon as the measure of preventive efficacy (or lack of such). At 5 weeks of age, groups of rats were fed the AIN-76A (control) and experimental diets that contained 1% coffee, 10% coffee fiber, 10% inulin, 10% pectin (positive control for fiber) or 200 p.p.m. piroxicam (a known ACF inhibitor). At 7 weeks of age, all animals were s.c injected with AOM (15 mg/kg body wt) once weekly for 2 weeks. All rats were killed 8 weeks after the last AOM injection and ACF were counted. The contents of the cecum were analyzed for bacterial beta-glucuronidase activity and short-chain fatty acids (SCFAs). 
Dietary administration of coffee fiber significantly suppressed AOM-induction of colonic ACF, in terms of total number, as well as crypt multiplicity and number of ACF/cm2 colon (P < 0.01-0.001). Inulin diet had no significant effect on total ACF, but had reduced the number of ACF/cm2 (P < 0.05). Whereas coffee had no effect on ACF formation, 10% pectin diet and 200 p.p.m. piroxicam significantly suppressed colonic ACF (P < 0.001) as had been expected. A significant reduction of cecal beta-glucuronidase activity was observed in the rats fed coffee, coffee fiber and pectin diets. Further, coffee fiber, inulin and pectin increased cecal SCFA levels 3- to 5-fold. These results suggest that coffee fiber can prevent colon cancer risk. Further studies are warranted to determine the full potential of this fiber in pre-clinical efficacy studies. Answer: Yes, coffee is a colonic stimulant. A study involving ambulatory colonic manometry in healthy subjects showed that caffeinated coffee stimulates colonic motor activity. Its effects were found to be similar to those of a meal, 60% stronger than water, and 23% stronger than decaffeinated coffee (PUBMED:9581985). Additionally, another study conducted on children undergoing colonic motility testing indicated that caffeinated coffee resulted in a higher area under the curve (AUC), motility index (MI), and time to high amplitude propagated contractions (HAPCs) compared with decaffeinated coffee, suggesting that caffeine is indeed a colonic stimulant (PUBMED:36574000).
Instruction: Can Solid-Organ-Transplanted Patients Perform a Cycling Marathon? Abstracts: abstract_id: PUBMED:27109968 Can Solid-Organ-Transplanted Patients Perform a Cycling Marathon? Trends in Kidney Function Parameters in Comparison With Healthy Subjects. Background: Few solid-organ-transplanted patients (TP) perform regular sport activity. Poor data are available on the safety of intense and prolonged physical exercise on this population. The aim of the study was to evaluate kidney function parameters in a group of TP in comparison with healthy volunteers (HV) involved in a long-distance road cycling race: length 130 km and total uphill gradient, 1871 m. Methods: Nineteen TP were recruited: 10 renal, 8 liver, and 1 heart and compared with 35 HV. Renal function parameters, namely, creatinine, estimated glomerular filtration rate (eGFR), urea, uric acid, urine specific gravity, microalbuminuria, and proteinuria were collected and their values were compared the day before the race (T1), immediately after crossing the finish line (T2), and 18 to 24 hours after the competition (T3). Results: No adverse events were recorded. At baseline, TP showed lower values of eGFR (69 ± 22 versus 87 ± 13 mL/min/1.73 m(2)), lower urine specific gravity (1015 ± 4 versus 1019 ± 6), and higher microalbuminuria (56 ± 74 versus 8 ± 15) and proteinuria values (166 ± 99 versus 74 ± 44) (in mg/L). At T2 in both groups, renal function parameters showed the same trends: decline of eGFR (54 ± 19 versus 69 ± 15 mL/min/1.73 m(2)) and rise in protein excretion. At T3, functional parameters returned to baseline, except for urine specific gravity values remaining stable in TP (1018 ± 6) and growing higher in HV (1028 ± 4). Conclusions: Selected and well-trained organ-transplanted patients can perform an intensive exercise, displaying temporary modifications on kidney function parameters comparable to healthy subjects, despite differences related to baseline clinical conditions and pharmacological therapies. abstract_id: PUBMED:36367586 Systematic review on oncologic outcomes on adjuvant endovesical treatment for non-muscle invasive bladder cancer in patients with solid organ transplant. Purpose: Urothelial carcinoma has a higher incidence in renal transplanted patients according to several registries (relative risk × 3), and the global prognosis is inferior to the general population. The potential impact of immunosuppressive therapy on the feasibility, efficacy, and complications of endovesical treatment, especially Bacillus Calmette-Guerin, has a low level of evidence. We performed a systematic review that aimed to assess the morbidity and oncological outcomes of adjuvant endovesical treatment in solid organ transplanted patients. Methods: Medline was searched up to December 2021 for all relevant publications reporting oncologic outcomes of endovesical treatment in solid organ transplanted patients with NMIBC. Data were synthesized in light of methodological and clinical heterogeneity. Results: Twenty-three retrospective studies enrolling 238 patients were included: 206 (96%) kidney transplants, 5 (2%) liver transplants, and 2 (1%) heart transplants. Concerning staging: 25% were pTa, 62% were pT1, and 22% were CIS. 140/238 (59%) patients did not receive adjuvant treatment, 50/238 (21%) received mitomycin C, 4/238 (2%) received epirubicin, and 46/238 (19%) received BCG. Disease-free survival reached 35% with TURBT only vs. 47% with endovesical treatment (Chi-square test p = 0.08 OR 1.2 [0.98-1.53]). 
The complication rate of endovesical treatment was 12% and was all minor (Clavien-Dindo I). Conclusion: In solid organ transplanted patients under immunosuppressive treatment, both endovesical chemotherapy and BCG are safe, but the level of evidence concerning efficacy in comparison with the general population is low. According to these results, adjuvant treatment should be proposed for NMIC in transplanted patients as in the general population. abstract_id: PUBMED:30362361 Rhinosinusitis in Solid Organ Transplant Recipients: Analysis of 4562 Transplanted Patients. Introduction: Survival rates of solid organ transplant recipients are steadily increasing. Chronic immunosuppression is a key to sustain the transplanted organ. Thus, these patients are at a higher risk for fulminant disease and severe complications of rhinosinusitis (RS). Surprisingly, this has been scarcely discussed in the literature. Objective: To analyze the characteristics and disease course of RS in solid organ transplant recipients. Materials And Methods: Retrospective study. Medical records of all solid organ transplant recipients with a diagnosis RS treated at a national transplant center between the years 2001 and 2016 were reviewed. Results: Of 4562 solid organ transplant recipients, a documented diagnosis of RS was identified only in 61 (1.3%) patients. Sixty-four patients presented with posttransplantation RS; of them, 54.5% had chronic RS (CRS) and the remaining 45.5% patients were diagnosed with acute RS. Microbial cultures grew almost exclusively bacterial pathogens. A documented invasive fungal infection was noted in only 2 patients. A total of 24 (40%) patients underwent endoscopic sinus surgery, the majority (22) for CRS. On subgroup analysis, surgical intervention was more frequent in lung transplant recipients ( P = .005). Neither specific disease nor surgical complications were found. Conclusions: Interestingly, acute fulminant infection or sinusitis complications in solid organ transplant patients were much lower than expected. CRS in this patient group was less frequent than expected as well. Whether chronic immunosuppression minimizes the likelihood for CRS deserves further investigation. A more surgically oriented approach in CRS patients may be favored early in the management course of medically refractory patients in light of patients excellent outcomes. abstract_id: PUBMED:29405102 Prevalence of Chagas Disease among Solid Organ-Transplanted Patients in a Nonendemic Country. Reactivation of Chagas disease in the chronic phase may occur after solid organ transplantation, which may result in high parasitemia and severe clinical manifestations such as myocarditis and meningoencephalitis. The aim of the present study is to describe the prevalence of Chagas disease among solid organ-transplanted patients in a tertiary hospital from a nonendemic country. A cross-sectional study was performed at Vall d'Hebron University Hospital (Barcelona, Spain) from April to September 2016. Chagas disease screening was performed through serological tests in adult patients coming from endemic areas that had received solid organ transplantation and were being controlled in our hospital during the study period. Overall, 42 patients were included, 20 (47.6%) were male and median age was 50.5 (23-73) years. Transplanted organs were as follows: 18 kidneys, 17 lungs, and 7 livers. Three patients had Chagas disease, corresponding to a prevalence among this group of solid organ-transplanted patients of 7.1%. 
All three patients were born in Bolivia, had been diagnosed with Chagas disease and received specific treatment before the organ transplantation. We highly recommend providing screening tests for Chagas disease in patients with or candidates for solid organ transplantation coming from endemic areas, early treatment with benznidazole, and close follow-up to prevent clinical reactivations. abstract_id: PUBMED:34835067 Robust and Persistent B- and T-Cell Responses after COVID-19 in Immunocompetent and Solid Organ Transplant Recipient Patients. The development and persistence of SARS-CoV-2-specific immune response in immunocompetent (IC) and immunocompromised patients is crucial for long-term protection. Immune response to SARS-CoV-2 infection was analysed in 57 IC and 15 solid organ transplanted (TX) patients. Antibody responses were determined by ELISA and neutralization assay. T-cell response was determined by stimulation with peptide pools of the Spike, Envelope, Membrane, and Nucleocapsid proteins with a 20-h Activation Induced Marker (AIM) and 7-day lymphoproliferative assays. Antibody response was detected at similar levels in IC and TX patients. Anti-Spike IgG, IgA and neutralizing antibodies persisted for at least one year, while anti-Nucleocapsid IgG declined earlier. Patients with pneumonia developed higher antibody levels than patients with mild symptoms. Similarly, both rapid and proliferative T-cell responses were detected within the first two months after infection at comparable levels in IC and TX patients, and were higher in patients with pneumonia. T-cell response persisted for at least one year in both IC and TX patients. Spike, Membrane, and Nucleocapsid proteins elicited the major CD4+ and CD8+ T-cell responses, whereas the T-cell response to Envelope protein was negligible. After SARS-CoV-2 infection, antibody and T-cell responses develop rapidly and persist over time in both immunocompetent and transplanted patients. abstract_id: PUBMED:32001992 Complications of total joint arthroplasty in solid organ transplant patients versus a large control group. Background: Solid organ transplant patients are theoretically at increased risk for complications after total joint replacement due to immunosuppressive medication regimens and multiple medical co-morbidities. There are a number of studies that report on outcomes of total joint arthroplasty (TJA) following solid organ transplant, however, the results are heterogeneous. This study evaluated the outcomes of TJA in solid organ transplant patients as compared to non-organ transplant controls at one academic medical center. Methods: This study was a single institution retrospective review of a consecutive series of patients who underwent joint replacement following solid organ transplant as compared to a control cohort over a 10-year period. Univariable and multivariable generalized linear mixed effects models were used to compare the odds of readmission, infection, mortality, and being discharged home between transplanted (cases) and non-transplanted (control) patients. Results: Transplant and non-transplant cohorts had similar BMI, although transplant patients were younger (61 versus 65 years) and had a higher incidence of Diabetes (55% vs. 16%). On multivariable analysis, there was no difference in the odds of re-admission or rate of infection, but there was an increased risk of death and admission to a rehab facility in the transplant cohort. 
Conclusion: Overall, this study demonstrates that solid organ transplant alone does not increase the risk of peri-operative complications in patients who underwent hip and knee replacement. However, it should be expected that these patients have a higher mortality rate and that many of them will need to be discharged to a post-acute care facility. abstract_id: PUBMED:30017342 Etiology of increased cancer incidence after solid organ transplantation. Over the past decades, there has been an encouraging increase in survival after solid organ transplantation. However, with longer life spans, more transplant recipients are at risk of dying with functioning grafts from illnesses such as cancer and cardiovascular conditions. Malignancy has emerged as an important cause of death in transplant recipients and is expected to become the leading cause of death in transplanted patients within the next decade. While it is known that solid organ transplant recipients have a three to five-fold increased risk of developing cancer compared with the general population, the mechanisms that lead to the observed excess risk in transplant recipients are less clear. This review explores the etiology of the increased cancer incidence in solid organ transplant including the effect of immunosuppressants on immunosurveillance and activation of oncogenic viruses, and carcinogenic effects of these medications; the role of chronic stimulation of the immune system on the development of cancer; and the impact of pre-existing cancer risk factors and factors related to end-stage organ disease on the cancer excess incidence in solid organ transplant recipients. abstract_id: PUBMED:36567068 Supporting Reproductive Care for Patients Requiring Solid Organ Transplant. After a solid organ transplant, individuals have the capability to maintain a successful pregnancy. However, many posttransplant pregnancies are unintended, thereby posing risks to health. There is a critical need to strengthen patient education regarding safe approaches to conception, general reproductive health, and childbearing capabilities. The complex needs of patients after transplant, including management of graft health and medication regimens, may distract from the usual reproductive care offered to other individuals of childbearing potential. Thorough education about immunosuppressant medications, contraceptive methods, and expected waiting periods before conceiving are essential. Nurses can work with each individual and direct the interprofessional health care team to provide effective, safe, comprehensive, and inclusive reproductive care to patients posttransplant. abstract_id: PUBMED:32562028 Head and neck squamous cell carcinoma in solid organ transplant recipients. Purpose: Characterize head and neck squamous cell carcinomas in solid transplant recipients and compare outcomes with non-transplanted population. Methods: We carried a retrospective cohort analysis in a tertiary care center in Madrid, Spain. The study reviews 26 cases of non-cutaneous HNSCC diagnosed in solid organ transplant recipients between 2000 and 2017. We select a cohort of 130 (1:5) non-transplanted patients among all non-transplanted patients diagnosed during the same period for comparison purposes, through hierarchical clustering analysis. Univariate, overall and specific survival analysis and multivariate Cox proportional hazards regression were used to evaluate our objectives. 
Results: The overall risk of non-cutaneous HNSCC in solid transplant recipients was 7.2 cases per 1000. Alcohol abuse (p = 0.021) and upfront surgery (p = 0.019) were more likely to occur in transplanted patients. Age > 60 was an independent predictor of worse outcomes both for overall (HR = 2.4, p < 0.001) and specific (HR = 2.1, p = 0.012) survival. Transplant was associated with significantly worse overall survival (HR = 2.1, p = 0.012), but no statistically significant differences were observed on specific survival (p = 0.392). Conclusions: Solid organ transplant recipients have a higher risk of suffering non-cutaneous HNSCC. The higher mortality rate of these patients does not appear to be directly related to suffering from head and neck cancer, although it seems to contribute to developing other fatal complications in these fragile patients. abstract_id: PUBMED:19624498 Prognosis of acute kidney injury requiring renal replacement therapy in solid organ transplanted patients. Solid organ transplanted patients represent a complex and multi-morbid population with potential acute illness. They are at high risk not only for chronic renal failure (CRF), but also for acute kidney injury (AKI) and little is known about the overall epidemiology or prognosis. We conducted a retrospective review of all solid organ transplant patients who required emergency renal replacement therapy (RRT) for AKI during a period of 7.5 years. We identified 53 episodes of AKI requiring RRT occurring in 51 transplanted patients, and 58.5% of them were freshly (<48 h) transplanted when admitted in ICU. The majority of episodes were a result of cardio-circulatory or septic events (84%), and a large proportion of the AKI episodes were a result of multifactorial causes (27%). Overall 90 days mortality was 49%, and no difference was detected between kidney and nonkidney transplants. On univariate analysis, the risk factors for death were smoking status [OR = 4.09 (CI 95%: 1.16-14.43); P = 0.028] and sepsis [OR = 4.90 (CI 95%: 1.39-17.31); P = 0.014]. Transplanted patients with AKI are younger, more prone to be diabetic and to have previous chronic renal failure compared with the general ICU population, possibly in part because of their immunosuppressive therapy. Nevertheless, they have the same prognosis. Answer: Yes, solid organ transplanted patients (TP) can perform a cycling marathon. A study evaluated kidney function parameters in a group of TP in comparison with healthy volunteers (HV) involved in a long-distance road cycling race. The race was 130 km long with a total uphill gradient of 1871 m. Nineteen TP, including 10 renal, 8 liver, and 1 heart transplant recipients, were compared with 35 HV. The study found that no adverse events were recorded and that at baseline, TP showed lower values of estimated glomerular filtration rate (eGFR), lower urine specific gravity, and higher microalbuminuria and proteinuria values. After the race, both groups displayed a decline in eGFR and a rise in protein excretion. However, 18 to 24 hours after the competition, functional parameters returned to baseline, except for urine specific gravity values which remained stable in TP and grew higher in HV. The study concluded that selected and well-trained organ-transplanted patients can perform an intensive exercise, displaying temporary modifications on kidney function parameters comparable to healthy subjects, despite differences related to baseline clinical conditions and pharmacological therapies (PUBMED:27109968).
Instruction: Variable cyclosporine exposure: a risk factor for chronic allograft nephropathy and graft loss? Abstracts: abstract_id: PUBMED:15251323 Variable cyclosporine exposure: a risk factor for chronic allograft nephropathy and graft loss? Background: Following the introduction of ciclosporine (CsA), the 2-year survival of transplanted kidneys improved from less than 60% to over 80%. Though the introduction of this drug resulted in a marked improvement in graft survival, its use was not without problems. Variable absorption and a narrow therapeutic index resulted in the need for measurements of CsA blood concentrations to tailor the drug dose to maximize therapeutic efficacy while minimizing toxicity. Methods: Data were available from the LOTESS study of 4948 transplant patients receiving Neoral with at least 5 years' follow-up. Potential risk factors associated with outcome in renal transplant recipients treated with CsA were explored: the primary outcome variable was graft loss. A stepwise binary logistic regression analysis was used to identify donor, recipient, and treatment variables related to outcome. Results: In the initial analysis, chronic rejection was the only significant predictor of graft loss. The relative risk (RR) of graft loss was 16.9 (95% CI = 13.9-20.4). Further analysis identified four independent risk factors for chronic rejection: cadaveric donor (RR, 1.50; 95% CI = 1.05-2.15), older donor (RR, 1.02; 95% CI = 1.01-1.02), younger recipient (RR, 1.02; 95% CI = 1.02-1.03), and variable predose CsA concentration (RR, 1.25; 95% CI = 1.06-1.48). Conclusion: With the UK kidney transplant waiting list at about 5000 patients and only 1658 transplants performed during 2002, it is important to maximize graft survival. For example, perhaps marginal donors (age > 55) can be matched to older recipients without increasing the risk of chronic allograft nephropathy and therefore graft loss. Variable predose CsA concentrations may arise from at least three different sources: adherence to treatment, drug formulation, and individual variation in absorption. Therefore, it is important to emphasize to patients that erratic compliance may increase their risk of graft loss. Second, although only one CsA formulation is marketed in the UK, when generic forms of CsA are introduced it will be important to demonstrate consistent delivery of CsA from these new formulations. Third, improved monitoring of CsA using a C2 rather than a predose blood concentration measurement may be used to reduce intra-individual variations in drug exposure. abstract_id: PUBMED:15237243 Late calcineurin inhibitor withdrawal as a strategy to prevent graft loss in patients with suboptimal kidney transplant function. Unlabelled: Chronic allograft nephropathy is a major cause of progressive renal failure in renal transplant recipients. Its etiology is multifactorial and may include both immunologic and nonimmunologic causes. In this observational cohort study we set out to see if calcineurin inhibitor withdrawal would reduce the likelihood of graft loss. Methods: One hundred and five renal transplant recipients with impaired kidney function (mean serum creatinine 3.0 +/- 0.1 mg/dl) and biopsy-proven chronic allograft nephropathy had the dose of their calcineurin inhibitors, cyclosporine (CSA), or tacrolimus (FK), reduced or discontinued with either the addition of, or continuation of mycophenolate mofetil and low-dose corticosteroids. This intervention occurred at a mean of 29.0 +/- 2.7 months after transplantation.
Follow-up after intervention was 54.3 +/- 4.1 months in the reduced CSA group (n = 64), 41.6 +/- 3.2 months in the reduced FK group (n = 28), and 75.5 +/- 6.7 months in the calcineurin inhibitor withdrawal group (n = 13). Results: There were 24 graft failures in the reduced CSA group, 9 graft failures in the reduced FK group, and 1 graft lost in the calcineurin inhibitor withdrawal group. The unadjusted relative risk for graft failure in the CSA and FK groups combined (confidence interval 1.05-31.6), was 4.07 using the calcineurin inhibitor withdrawal group as the reference, p = 0.05. A Cox proportional hazards model adjusting for baseline covariates including age, gender, race, type of transplant, delayed graft function, baseline blood pressure and random serum glucose and cholesterol demonstrated that only calcineurin inhibitor dose reduction but not withdrawal, older age, delayed graft function, higher serum creatinine at the time of intervention, and higher diastolic blood pressure and serum glucose, correlated with graft loss. Only 6 of the 105 patients developed Banff grade acute rejection. All responded to steroid therapy. We conclude that although this observational cohort study may have a selection bias, late calcineurin inhibitor withdrawal in patients with chronic allograft nephropathy and impaired kidney function appears safe and durable as a treatment strategy to reduce the likelihood of graft failure. abstract_id: PUBMED:18204239 Comparative study of graft nephrectomy in pre-cyclosporine and cyclosporine era. Objective: To assess the incidence and identify the indications for graft nephrectomy (GN) in the cyclosporine (CSA) era as compared to GN in the pre-CSA era. Materials And Methods: This is a retrospective study of 1,866 renal transplants done from 1971 to 1999. 675 were transplanted in the pre-CSA era (group 1) and 1,191 in the CSA era (group 2). The published series on experience with GN in the pre-CSA era was compared with that in the CSA era. GN done within 6 months of transplant was defined as early GN and those done after 6 months were included under late GN. The incidence, indication and the implications of GN were studied and compared with our experience in the pre-CSA era. Results were analyzed using the chi(2) test. Results: Of the 675 transplants in group 1, thirty-one had GN compared to 15 of 1,191 in group 2. There was a significant decrease in GN in the CSA era. Of the 31 in group 1, thirty had early GN as compared to 6 of 15 in group 2 (p = 0.003). On the contrary, late GN was significantly higher in group 2 (9/15) as compared to group 1 (1/31). Acute rejections and graft infections were the predominant causes of graft loss in group 1, while late graft loss due to symptomatic chronic rejection was the commonest cause in group 2. Morbidity was equal in both groups while mortality was significantly higher in group 1. Conclusion: CSA has significantly reduced the need for GN. By reducing hyper, acute, and irreversible acute rejection, the need for early GN has also been reduced significantly. Though there is an increased incidence of chronic allograft nephropathy, late GN is indicated only when there is refractory hematuria, intractable proteinuria and graft sepsis. With better immunosuppression, graft loss secondary to infection has decreased and mortality due to GN has been minimized. abstract_id: PUBMED:19917374 Role of peritubular capillaries and vascular endothelial growth factor in chronic allograft nephropathy. 
Objective: To investigate the role of peritubular capillary damage and vascular endothelial growth factor (VEGF) in chronic allograft injury and to evaluate their correlation with clinical factors. Patients And Methods: The study included 56 patients who underwent transplantation between 1987 and 2004 and experienced chronic graft dysfunction. CD34 (peritubular capillaries) and VEGF were evaluated at histologic analysis. Patients were classified into 3 groups: 47 with chronic allograft injury, 9 with pure cyclosporine toxicity, and 26 who served as the control group (time 0 biopsy). Results: Compared with the control group, CD34 total expression in chronic nephropathy was indirectly proportional to Banff stage (P < .05), and VEGF was increased in chronic allograft injury grade I or II or nephrotoxicity (P < .05). CD34 expression was correlated with age (P < .007) and number of acute rejection episodes (P = .005). A negative correlation was observed between expression of CD34 and of VEGF (P < .001). Low expression of CD34 was associated with risk of graft loss of 1.45 (95% confidence interval, 1.15-7.24; P = .04). Conclusion: Peritubular capillaries decreased progressively with development of chronic allograft injury. The VEGF demonstrated a bimodal behavior, increasing at the onset of nephropathy and decreasing in the final stages. Loss of peritubular capillaries was associated with worse graft survival and overexpression of VEGF. abstract_id: PUBMED:15760400 Vascular endothelial growth factor expression and cyclosporine toxicity in renal allograft rejection. The aim of this study was to evaluate the influence of vascular endothelial growth factor (VEGF) on renal function and on development of interstitial fibrosis (IF) in renal allografts. Tubular and interstitial expressions of VEGF and TNF-alpha, and density of macrophages in the interstitium were examined in 92 patients with nonrejected kidneys, acute rejection (AR), chronic allograft nephropathy (CAN), borderline changes (BC) and acute cyclosporin A (CsA) toxicity. Follow-up biopsy specimens from patients with AR and BC were evaluated for development of IF. A significant difference in tubular and interstitial VEGF expressions was found between patients with AR, BC, CAN and CsA toxicity (p < 0.001). Macrophage infiltration was positively correlated with VEGF and TNF-alpha expressions (p < 0.001). VEGF expression increased with increasing expression of TNF-alpha (p < 0.001). Renal function in first 6 months after initial biopsy was better in patients with marked tubular VEGF expression (p < 0.01); however, in follow-up, development of IF and graft loss was found earlier in these patients (p < 0.01 and p < 0.05, respectively). Increased renal VEGF expression has protective properties immediately following renal allograft but allows for increased risk of early IF, and therefore poor graft outcome in the long term. abstract_id: PUBMED:15280689 Proximal tubular dysfunction is associated with chronic allograft nephropathy and decreased long-term renal-graft survival. Background: Chronic allograft nephropathy is the major cause of graft loss after the first year of transplantation. Although many conditions are associated with its development, there is no method that can anticipate its risk in patients with good renal function.
Methods: We prospectively studied 92 renal-transplant recipients with good and stable allograft function and correlated the development of chronic allograft nephropathy and graft loss with their levels of urinary retinol binding protein (uRBP). Patients were divided in two groups regarding the level of their tubular protein: high, above 0.400 mg/L, and normal levels, 0.400 mg/L or less. Results: Forty-eight (52%) patients had high levels of uRBP. At the enrollment time, patients with high and normal uRBP had comparable serum creatinine and cyclosporine trough levels. During a 5-year follow-up period, chronic allograft nephropathy was detected in 23 (25%) patients, 19 (82.6%) of whom had high levels of uRBP. Five-year chronic allograft nephropathy-free and graft survivals were significantly worse in patients with higher levels of uRBP than in patients with normal levels (57.5% vs. 89.9% P=0.0004; 70.7% vs. 100%, respectively, P=0.0002). High levels of uRBP were the strongest factor associated with the development of chronic allograft nephropathy (RR=5.3, 95% confidence interval 1.45-19.58, P=0.012). Conclusions: Among renal-transplant patients with good and stable graft function, high levels of uRBP identify those having a high risk of developing chronic allograft nephropathy. abstract_id: PUBMED:12790768 TGF-beta1 expression and chronic allograft nephropathy in protocol kidney graft biopsy. Chronic allograft nephropathy (CAN) represents a frequent and irreversible cause of long-term renal graft loss. TGF-beta1 is a key profibrogenic cytokine associated with CAN pathogenesis. Because of clinical diagnostic inaccuracy, protocol biopsy has been suggested to be a beneficial method for early CAN detection. Protocol core biopsy was carried out in 67 consecutive cyclosporine-based immunosuppression-treated kidney transplant recipients with stable renal function 12 months after renal transplantation. Biopsy specimens were analyzed morphologically according to Banff-97' criteria and immunohistologically for TGF-beta1 staining. The data obtained were correlated with plasma TGF-beta1 levels and clinical data. CAN (grade I-III) was found in 51 patients (76 %). CAN grade I was found to be the most frequent one (44 %). A normal finding within the graft was made in only 12 patients (18 %). Clinically silent acute rejection Banff IA was present in 4 patients (6 %). In 8 patients (12 %) with CAN, borderline changes were present. We found a significant correlation between CAN grade and creatinine clearance, as measured by the Cockroft-Gault formula (p<0.01) as well as body mass index (p<0.01). There was a significant correlation between chronic vasculopathy (Banff cv) and creatinine clearance, and between the degree of TGF-beta1 staining and chronic vasculopathy (p<0.01). There were no relations between morphological findings and TGF-beta1 plasma levels, cyclosporine levels, plasma lipids, HLA-mismatches, panel reactive antibodies (PRA), proteinuria, and the donor's age. In conclusion, CAN is a frequent finding in protocol kidney graft biopsies 12 months after transplantation. TGF-beta1 tissue expression is linked with chronic vasculopathy. abstract_id: PUBMED:16899517 Untreated rejection in 6-month protocol biopsies is not associated with fibrosis in serial biopsies or with loss of graft function. Donor age, calcineurin inhibitor nephrotoxicity, and acute rejection are the most significant predictors of chronic allograft nephropathy.
Protocol biopsies, both in deceased- and living-donor renal grafts, have shown that cortical tubulointerstitial fibrosis correlates with graft survival and function. The impact of not treating subclinical acute rejection (SAR) is less clear. In this study, 126 de novo renal transplant recipients were randomly assigned to receive area-under-the-curve-controlled exposure of either a cyclosporine or a tacrolimus-based immunosuppressive regimen that included steroids, mycophenolate mofetil, and basiliximab induction. Protocol biopsies were taken before and 6 and 12 mo after transplantation. The prevalence of SAR was determined retrospectively. Fibrosis was evaluated by quantitative digital analysis of Sirius red staining in serial biopsies. Donor age correlated significantly with tubulointerstitial fibrosis in pretransplantation biopsies and inferior graft function at month 6 (rtau = -0.26; P = 0.033). Acute rejection incidence was 11.5%, and no clinical late rejection occurred. The prevalence of SAR at 6 mo was 30.8% but was not associated with differences in serial quantitative Sirius red staining at 6 or 12 mo, proteinuria, or progressive loss of GFR up to 2 yr. No differences were found in donor variables, histocompatibility, rejection history, or exposure of immunosuppressants. Controlled individualized calcineurin inhibitor exposure and subsequent tapering resulted in a low early acute rejection rate and prevented late acute rejection. Because, by design, we did not treat SAR, these results provide evidence that asymptomatic infiltrates in 6-mo surveillance biopsies may not be deleterious in the intermediate term. There is need for reliable biomarkers to prove that not all cell infiltrates are equivalent or that infiltrates may change with time. abstract_id: PUBMED:14529900 Gingival overgrowth in kidney transplant recipients treated with cyclosporine and its relationship with chronic graft nephropathy. Our previous study of a group of renal transplant recipients treated with CsA showed a significantly faster development of chronic graft failure among patients with gingival hyperplasia (GH) compared to unaffected patients. The aim of the present research was to establish the impact of CsA dose and blood levels on the incidence of chronic graft nephropathy and gingival overgrowth as well as to assess risk factors for chronic graft nephropathy. The study included 64 renal graft recipients (32 patients with GH and 32 without GH) transplanted between 1989 and 1994. There were no significant differences between the pretransplant demographic and clinical data of the patients with and without GH. Patients with GH received significantly higher total yearly dosages of CsA compared to those without GH (P < .03). Serum creatinine in the first year posttransplant in patients with GH was 1.9 mg/dL versus 1.6 mg/dL in those without GH. During 9 to 14 years follow-up, end-stage renal failure due to chronic nephropathy occurred in 18 patients (56%) with GH and eight patients (25%) without GH. Ten-year renal graft survival was 35% in GH patients and 69% in patients without GH. Ten-year patient survival was 69% in the GH group and 91% in the group without GH. CsA dosage was a risk factor for GH and for graft loss, which implies a role of CsA toxic effects on the pathological mechanisms of GH and of chronic allograft nephropathy. abstract_id: PUBMED:15686679 Early acute rejection does not affect chronic allograft nephropathy and death censored graft failure.
Background: Even with the development of modern immunosuppression, an acute rejection episode is a major complication after renal transplantation. Acute rejection episodes have been used as clinical indicators for chronic allograft nephropathy and graft loss. We investigated the timing and frequency of acute rejection episodes in relation to long-term graft survival and chronic allograft nephropathy. Methods: The Long Term Efficacy and Safety Surveillance study of transplant patients receiving cyclosporin (Neoral) included 1706 adult renal transplants (1995 to 2003) with a functioning graft for at least 1 year. The impact on death-censored long-term graft survival was evaluated for acute rejection episodes (single or multiple) within 3 months, at 3 to 6 months, at 6 to 12 months, or at over 1 year posttransplant. A stepwise binary logistic regression was employed to identify independent risk factors for the time to occurrence of an acute rejection episode. Results: An acute rejection episode occurring within 3 months posttransplantation had no effect on either death-censored long-term graft failure (P=.2157) or chronic allograft nephropathy (P=.9331). However, an acute rejection episode occurring at 1 year or later posttransplantation was significantly associated with death censored long-term graft failure (P <.0001) and chronic allograft nephropathy (P <.0001). The numbers of HLA-DR mismatches and younger recipient ages were independent risk factors for early acute rejection. Conclusion: Among patients whose graft survives at least 12 months, an early acute rejection episode within 3 months posttransplant was not associated with either death-censored long-term graft survival or chronic allograft nephropathy among adults treated with cyclosporin. However, an acute rejection episode occurring at 1 year or later posttransplantation showed a positive association with death-censored long-term graft survival or chronic allograft nephropathy. Lower numbers of HLA-DR mismatches sum to reduce the occurrence of acute rejection and the hospitalization time. Answer: Yes, variable cyclosporine exposure is a risk factor for chronic allograft nephropathy (CAN) and graft loss. The LOTESS study of 4948 transplant patients receiving Neoral with at least 5 years' follow-up identified variable predose cyclosporine concentration as an independent risk factor for chronic rejection, which is a significant predictor of graft loss (PUBMED:15251323). The relative risk of graft loss due to variable predose cyclosporine concentration was 1.25. Variable cyclosporine concentrations may result from adherence to treatment, drug formulation, and individual variation in absorption. It is emphasized that erratic compliance may increase the risk of graft loss, and improved monitoring of cyclosporine using a C2 rather than a predose blood concentration measurement may be used to reduce intra-individual variations in drug exposure (PUBMED:15251323). Furthermore, the study by PUBMED:14529900 found that gingival overgrowth, which can be a side effect of cyclosporine use, was associated with a higher total yearly dosage of cyclosporine and was a risk factor for graft loss, implying a role of cyclosporine toxic effects on the pathological mechanisms of chronic allograft nephropathy.
In addition, the study by PUBMED:19917374 highlighted the role of peritubular capillary damage in chronic allograft injury and found that low expression of CD34, a marker for peritubular capillaries, was associated with worse graft survival and overexpression of VEGF, which could be linked to cyclosporine toxicity. These findings suggest that managing cyclosporine levels to avoid variability and toxicity is crucial for reducing the risk of CAN and improving long-term graft survival.
Instruction: Impairment of the modulatory role of nitric oxide on the endothelin-1-elicited contraction of cerebral arteries: a pathogenetic factor in cerebral vasospasm after subarachnoid hemorrhage? Abstracts: abstract_id: PUBMED:9218313 Impairment of the modulatory role of nitric oxide on the endothelin-1-elicited contraction of cerebral arteries: a pathogenetic factor in cerebral vasospasm after subarachnoid hemorrhage? Objective: Nitric oxide (NO) and endothelin-1 (ET-1) are two endothelium-derived factors probably involved in the pathogenesis of cerebral vasospasm after subarachnoid hemorrhage (SAH). Our aim was twofold, i.e., to ascertain whether endothelial and nonendothelial NO modulates the contractile response of cerebral arteries to ET-1 and to analyze whether this relationship might be impaired after experimental SAH. Methods: Rings of middle cerebral artery from goats in the control group and from goats with SAH were set up for isometric tension recordings. SAH was induced 3 days before the experiments by infusion of 10 ml of autologous arterial blood through a catheter previously inserted into the subarachnoid space (basal cistern). In goats in the control group, the response to ET-1 was obtained as follows: 1) in control arteries (unrubbed and nonincubated arteries); 2) in rubbed arteries (arteries in which the endothelium was mechanically removed); 3) during incubation with NG-nitro-L-arginine (L-NOArg) alone or plus L- or D-arginine; and 4) in rubbed arteries plus incubation with L-NOArg. In goats with SAH, that response was obtained in control arteries, rubbed arteries, and during incubation with L-NOArg. Specimens of middle cerebral artery were processed for transmission electron microscopy study. Results: In goats in the control group, ET-1 elicited concentration-dependent contraction of the middle cerebral artery that was significantly potentiated after endothelium denudation or during incubation with L-NOArg. The latter effect was reversed by L-arginine but not by D-arginine. Combined endothelium denudation and incubation with L-NOArg produced a contractile response to ET-1 significantly higher than that induced by each treatment separately. Hyperreactivity to ET-1 was observed in goats with SAH. Endothelium denudation did not alter the enhanced response to ET-1, but it was further significantly increased after incubation with L-NOArg. Conclusion: These results demonstrate that an ET-1-NO interaction exists in control cerebral arteries in such a way that endothelial and nonendothelial NO partially counteract the contractile response to ET-1 and that although SAH did not modify the effect of nonendothelial NO, the absence of endothelial NO after SAH may contribute to the hyperreactivity of cerebral arteries to ET-1 and, thereby, to the development of cerebral vasospasm. abstract_id: PUBMED:27044361 Role of Cyclooxygenase-2 in Relation to Nitric Oxide and Endothelin-1 on Pathogenesis of Cerebral Vasospasm After Subarachnoid Hemorrhage in Rabbit. Endothelial dysfunctions that include decreased nitric oxide (NO) bioactivity and increased endothelin-1 (ET-1) bioactivity have been considered to be involved in the pathogenesis of cerebral vasospasm (CVS) after aneurysmal subarachnoid hemorrhage (SAH). Recent cardiovascular studies have revealed that cyclooxygenase-2 (COX-2) is involved in a disturbance in cross-talk between NO and ET-1. 
COX-2 expression was detected in the endothelial cells of a spastic artery after experimental SAH; however, the pathophysiological significance of COX-2 in relation to CVS remains unclear. The aim of this study was to investigate the role of COX-2 in relation to NO and ET-1 in the pathogenesis of CVS by using the COX-2 selective inhibitor, celecoxib. In the SAH group, SAH was simulated using the double-hemorrhage rabbit model. In the celecoxib group, SAH was simulated and celecoxib was administered. The basilar artery was extracted on day 5 and examined. The cross-section area of the basilar artery in the celecoxib group was significantly larger than in the SAH group. An increased expression of COX-2, ET-1, and ETA receptor (ETAR), and a decreased expression of endothelial NO synthase (eNOS) were seen in the SAH group. In the celecoxib group compared to the SAH group, expression of COX-2, ET-1, and ETAR were statistically significantly decreased, and eNOS expression was significantly increased. COX-2 might be involved in the pathogenesis of CVS due to up-regulation of ET-1 and ETAR and down-regulation of eNOS, and celecoxib may potentially serve as an agent in the prevention of CVS after SAH. abstract_id: PUBMED:16265315 Mechanisms of disease: roles of nitric oxide and endothelin-1 in delayed cerebral vasospasm produced by aneurysmal subarachnoid hemorrhage. Molecular biologic investigations in the past decades have begun to unravel the intracellular mechanisms involved in vasomotor regulation of cerebral blood vessels and their failure in delayed cerebral vasospasm produced by aneurysmal subarachnoid hemorrhage. Progress in deciphering macrovascular regulatory mechanisms and their failure in delayed cerebral vasospasm induced by aneurysmal subarachnoid hemorrhage have revealed that there are at least two important vasoactive substances-nitric oxide and endothelin-1-that play important roles in the clinical manifestations of subarachnoid-hemorrhage-induced cerebral vasospasm. Nitric oxide is a cell-membrane-permeable free radical gas that accounts for the phenomenon of vasodilatation by a variety of vasodilator agents. Endothelin-1, a 21 amino acid peptide, is one of the most potent constricting factors. Cerebral vasospasm is thought to represent a disturbance in the cerebral vasomotor equilibrium for which these two physiologically antagonistic compounds are at least partly responsible. Advances in our understanding of the molecular responses of the cerebral vasculature to subarachnoid hemorrhage should lead to more comprehensive management as knowledge becomes translated into development of effective pharmacologic agents to reverse or prevent cerebral vasospasm following aneurysmal subarachnoid hemorrhage. abstract_id: PUBMED:11872904 Protective vasomotor effects of in vivo recombinant endothelial nitric oxide synthase gene expression in a canine model of cerebral vasospasm. Background And Purpose: Post-subarachnoid hemorrhage (SAH) cerebral vasospasm is a potentially devastating condition whose pathogenesis involves impaired nitric oxide (NO) bioavailability. We aimed to determine whether recombinant endothelial NO synthase (eNOS) gene expression may protect vasomotor function and prevent vasospasm in a canine experimental SAH model. 
Methods: Recombinant adenoviral vectors (5x10(9) plaque-forming units/animal) encoding genes for eNOS (AdeNOS) and beta-galactosidase (AdLacZ) or vehicle were injected into the cerebrospinal fluid (CSF) of dogs on day -1 (ie, 24 hours before the first intra-CSF injection of blood on day 0). Cerebral angiography was performed at day 0 (baseline) and day 7 (immediately before death), and tissues were harvested for additional studies. Results: Western analysis and immunohistochemistry detected recombinant eNOS exclusively in cerebral arteries isolated from AdeNOS-transduced dogs, and in this group of animals CSF NO concentrations were significantly elevated by day 2. Analysis of day 7 versus day 0 cerebral angiograms for each group revealed significant spasm at the basilar artery midpoint in AdLacZ-transduced and nontransduced dogs but not in AdeNOS-transduced dogs. Isometric force recording of basilar arteries isolated from AdeNOS-transduced dogs showed significantly augmented relaxations to bradykinin and reduced contractions to endothelin-1. Conclusions: Our results suggest that expression of recombinant eNOS in the adventitia of cerebral arteries may contribute toward protection against post-SAH vasospasm. abstract_id: PUBMED:21189627 Changes of nitric oxide and endothelin-1 levels in rat brain tissue during cerebral vasospasm following subarachnoid hemorrhage and protective effect of nimodipine Aim: To investigate the changes of nitric oxide and endothelin-1 levels in brain tissue in rat with cerebral vasospasm (CVS) following subarachnoid hemorrhage (SAH), and the protective effect of nimodipine. Methods: Wistar rats were divided randomly into SAH group, nimodipine-treated group and sham-operated group. Diameters of basilar artery before and after operation were measured, and dynamic changes of regional cerebral blood flow (rCBF), brain nitric oxide and endothelin-1 levels were determined within 24 h. Pathological examination of hippocampus was also performed. Results: SAH resulted in significant and persistent decrease of rCBF. Diameter of basilar artery was reduced markedly. Neurons in hippocampus CA1 subfield were damaged severely. Both nitric oxide and endothelin-1 levels in brain tissue were increased significantly from 1 h to 24 h after SAH. However, the pathological changes of above parameters were not so severe in nimodipine-treated rats. Conclusion: Increase of nitric oxide and endothelin-1 levels in brain tissue is probably involved in the development of ischemic brain damage caused by CVS after SAH. Nimodipine exerts its protective effect on brain neurons by antagonizing CVS and the pathological changes of nitric oxide and endothelin-1 in brain tissue. abstract_id: PUBMED:7635177 The role of endothelin and nitric oxide in modulation of normal and spastic cerebral vascular tone in the dog. To investigate the roles of endothelin and nitric oxide (NO) in the regulation of cerebral vascular tone under basal conditions and in cerebral vasospasm following subarachnoid hemorrhage in dogs, we used BQ-123 (cyclo(-D-Trp-D-Asp-L-Pro-D-Val-L-Leu-) sodium salt), an endothelin ETA receptor antagonist, L-arginine, a substrate for the formation of NO, and NG-nitro-L-arginine methyl ester, an NO synthesis inhibitor, and measured the angiographic diameter of the basilar artery in vivo. In normal dogs, intracisternal (i.c.) injection of BQ-123 (0.6 mg/kg) produced a 29.4 +/- 6.11% (P &lt; 0.01) increase in the basal diameter 24 h after injection. 
NG-nitro-L-arginine methyl ester (0.6 mg/kg i.c.) produced a 19.3 +/- 2.93% (P < 0.05) decrease in the basal diameter 2 h after injection. This decrease was significantly attenuated by both BQ-123 (0.06-0.6 mg/kg i.c.) and L-arginine (6 mg/kg i.c.), but not by D-arginine. In the two-hemorrhage canine model, BQ-123 significantly inhibited the development of cerebral vasospasm (36.9 +/- 4.11% decrease on day 5 and 42.0 +/- 4.54% decrease on day 6 in controls vs 21.7 +/- 4.75% decrease (P < 0.05) on day 5 and 20.8 +/- 4.14% decrease (P < 0.05) on day 6 for 0.6 mg/kg i.c.). Furthermore, in this model, L-arginine (6 mg/kg i.c.) significantly attenuated the cerebral vasospasm on day 4 from a 30.9 +/- 5.78% decrease (before) to a 12.6 +/- 5.99% decrease (after). The immunoreactive endothelin-1 levels in the endothelial layer and the adventitia of the basilar artery were much higher on days 3 and 7 after the injection of autologous blood than on day 0 before blood injection. These results suggest that endogenous endothelin and NO both participate in regulating the basal tone of cerebral arteries, and, therefore, the development of cerebral vasospasm following subarachnoid hemorrhage may be at least partially attributed to an impairment of the balanced action of endothelin and NO. Furthermore, endothelin ETA antagonists or NO products may be useful in the treatment of cerebral vasospasm following subarachnoid hemorrhage. abstract_id: PUBMED:18342802 New insights into the causes and therapy of cerebral vasospasm following subarachnoid hemorrhage. Cerebral vasospasm lingers as the leading preventable cause of death and disability in patients who experience aneurysmal subarachnoid hemorrhage. Despite the potentially devastating consequences of cerebral vasospasm, the mechanisms behind it are incompletely understood. Nitric oxide, endothelin-1, bilirubin oxidation products and inflammation appear to figure prominently in its pathogenesis. Therapies directed at many of these mechanisms are currently under investigation and hold significant promise for an ultimate solution to this substantial problem. abstract_id: PUBMED:29377443 Salvinorin A ameliorates cerebral vasospasm through activation of endothelial nitric oxide synthase in a rat model of subarachnoid hemorrhage. Objective: This study aimed to demonstrate the potential of salvinorin A (SA) for cerebral vasospasm after subarachnoid hemorrhage (SAH) and investigate mechanisms of therapeutic effect using a rat SAH model. Methods: Salvinorin A was injected intraperitoneally, and the neurobehavioral changes were observed at 12 hours, 24 hours, 48 hours, and 72 hours after SAH. The basilar artery was observed by magnetic resonance imaging (MRI). The inner diameter and thickness of the basilar artery were measured. The morphological changes and the apoptosis in the CA1 area of the hippocampus were detected. Endothelin-1 (ET-1) and nitric oxide (NO) levels were detected by ELISA kit. The protein expression of endothelial NO synthase (eNOS) and aquaporin-4 (AQP-4) was determined by Western blot for potential mechanism exploration. Results: Salvinorin A administration could relieve neurological deficits, decrease the neuronal apoptosis, and alleviate the morphological changes in the CA1 area of the hippocampus. SA alleviated CVS by increasing diameter and decreasing thickness of the basilar artery, and such changes were accompanied by the decreased concentration of ET-1 and increased level of NO.
Meanwhile, SA increased the expression of eNOS and decreased the expression of AQP-4 protein in the basilar artery and hippocampus. Conclusions: Salvinorin A attenuated CVS and alleviated brain injury after SAH via increasing expression of eNOS and NO content, and decreasing ET-1 concentration and AQP-4 protein expression. abstract_id: PUBMED:19415172 Elevated level of endothelin-1 in cerebrospinal fluid and lack of nitric oxide in basilar arterial plasma associated with cerebral vasospasm after subarachnoid haemorrhage in rabbits. Background: The role of endothelin-1 (ET-1) and nitric oxide (NO) as two important mediators in the development of cerebral vasospasm (CVS) after subarachnoid haemorrhage (SAH) is controversial. The objective of this study was to determine whether local levels of ET-1 and NO in cerebral arterial plasma and/or in cerebrospinal fluid (CSF) are associated with the occurrence of CVS after SAH. Methods: CVS was induced using the one-haemorrhage rabbit model and confirmed by digital subtraction angiography of the rabbits' basilar artery on day 5. Prior to sacrifice, local CSF and basilar arterial plasma samples were obtained by a transclival approach to the basilar artery. Systemic arterial plasma samples were obtained. ET-1 levels were determined by immunometric technique (pg/ml +/- SEM) and total nitrate/nitrite level spectrophotometrically (micromol/l +/- SEM). Findings: Angiographic CVS was documented after SAH induction (n = 12, P &lt; 0.05). The ET-1 level in CSF was significantly elevated by 27.3% to 0.84 +/- 0.08 pg/ml in SAH animals (n = 7) in comparison to controls (0.66 +/- 0.04 pg/ml, n = 7, P &lt; 0.05). There was no significant difference in ET-1 levels in systemic and basilar arterial plasma samples of SAH animals compared to controls. A significant lack of local NO metabolites was documented in basilar arterial plasma after SAH (36.8 +/- 3.1 micromol/l, n = 6) compared to controls (61.8 +/- 6.2 micromol/l, n = 6, P &lt; 0.01). Conclusion: This study demonstrates that an elevated ET-1 level in CSF and local lack of NO in the basilar arterial plasma samples are associated with CVS after experimental SAH. abstract_id: PUBMED:26712324 Mechanisms of magnesium-induced vasodilation in cerebral penetrating arterioles. We investigated in cerebral penetrating arterioles the signaling mechanisms and dose-dependency of extracellular magnesium-induced vasodilation and also its vasodilatory effects in vessels preconstricted with agonists associated with delayed cerebral vasospasm following SAH. Male rat penetrating arterioles were cannulated. Their internal diameters were monitored. To investigate mechanisms of magnesium-induced vasodilation, inhibitors of endothelial function, potassium channels and endothelial impairment were tested. To simulate cerebral vasospasm we applied several spasmogenic agonists. Increased extracellular magnesium concentration produced concentration-dependent vasodilation, which was partially attenuated by non-specific calcium-sensitive potassium channel inhibitor tetraethylammonium, but not by other potassium channel inhibitors. Neither the nitric oxide synthase inhibitor L-NNA nor endothelial impairment induced by air embolism reduced the dilation. Although the magnesium-induced vasodilation was slightly attenuated by the spasmogen ET-1, neither application of PF2α nor TXA2 analog effect the vasodilation. Magnesium induced a concentration- and smooth muscle cell-dependent dilation in cerebral penetrating arterioles. 
Calcium-sensitive potassium channels of smooth muscle cells may play a key role in magnesium-induced vasodilation. Magnesium also dilated endothelium-impaired vessels as well as vessels preconstricted with spasmogenic agonists. These results provide a fundamental background for the clinical use of magnesium, especially in treatment against delayed cerebral ischemia or vasospasm following SAH. Answer: The impairment of the modulatory role of nitric oxide (NO) on the endothelin-1 (ET-1) elicited contraction of cerebral arteries is indeed considered a pathogenetic factor in cerebral vasospasm after subarachnoid hemorrhage (SAH). Studies have shown that NO and ET-1 are two endothelium-derived factors likely involved in the pathogenesis of cerebral vasospasm following SAH (PUBMED:9218313). NO is a vasodilator, while ET-1 is a potent vasoconstrictor, and their balance is crucial for maintaining vascular tone. After SAH, this balance is disturbed, leading to hyperreactivity of cerebral arteries to ET-1 and contributing to the development of cerebral vasospasm (PUBMED:9218313). The interaction between ET-1 and NO in control cerebral arteries is such that endothelial and nonendothelial NO partially counteracts the contractile response to ET-1. However, after SAH, the absence of endothelial NO may contribute to the hyperreactivity of cerebral arteries to ET-1 (PUBMED:9218313). Additionally, cyclooxygenase-2 (COX-2) has been implicated in the disturbance of the cross-talk between NO and ET-1, with COX-2 expression detected in the endothelial cells of spastic arteries after experimental SAH (PUBMED:27044361). Furthermore, molecular biologic investigations have revealed that NO and ET-1 play important roles in the clinical manifestations of SAH-induced cerebral vasospasm, with NO being a vasodilator and ET-1 being a potent constrictor (PUBMED:16265315). The therapeutic potential of targeting these pathways has been demonstrated in various studies. For instance, the administration of recombinant endothelial NO synthase (eNOS) gene expression in a canine model of SAH showed protective vasomotor effects, preventing vasospasm (PUBMED:11872904). Similarly, nimodipine, a calcium channel blocker, has been shown to exert a protective effect on brain neurons by antagonizing cerebral vasospasm and the pathological changes of NO and ET-1 in brain tissue (PUBMED:21189627).
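As a concrete illustration of how the isometric tension experiments cited above are usually quantified, the concentration-dependent contraction to ET-1 can be summarized by fitting a sigmoidal concentration-response (Hill) curve and comparing Emax and EC50 between conditions (for example, with and without L-NOArg or endothelium). The following is a minimal, illustrative Python sketch of such a fit; the concentration and tension arrays are hypothetical placeholders, not data from the cited studies.

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, emax, ec50, n):
    # Sigmoidal concentration-response (Hill) model.
    return emax * conc**n / (ec50**n + conc**n)

# Hypothetical ET-1 concentrations (mol/L) and isometric tensions (mN); placeholders only.
conc = np.array([1e-10, 3e-10, 1e-9, 3e-9, 1e-8, 3e-8, 1e-7])
tension_control = np.array([0.1, 0.4, 1.2, 2.8, 4.5, 5.6, 6.0])
tension_lnoarg = np.array([0.2, 0.7, 2.0, 4.1, 6.3, 7.5, 7.9])

for label, y in [("control", tension_control), ("L-NOArg", tension_lnoarg)]:
    params, _ = curve_fit(hill, conc, y, p0=[y.max(), 1e-9, 1.0], maxfev=10000)
    emax, ec50, n = params
    print(f"{label}: Emax={emax:.2f} mN, EC50={ec50:.2e} M, Hill slope={n:.2f}")

A leftward shift of EC50 or an increase in Emax after NO synthesis blockade or endothelium removal would correspond to the potentiation of the ET-1 response described in the abstracts.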
Instruction: Endocarditis-associated paravalvular abscesses. Do clinical parameters predict the presence of abscess? Abstracts: abstract_id: PUBMED:7705150 Endocarditis-associated paravalvular abscesses. Do clinical parameters predict the presence of abscess? Study Objective: To determine whether standard clinical and transthoracic echocardiographic criteria considered to be suggestive of the presence of endocarditis-associated paravalvular abscess are predictive of which patients would benefit from reliable but invasive transesophageal echocardiographic investigations for abscess. Design: Retrospective chart review. Setting: A 630-bed university hospital. Patients: Forty-eight patients with 51 episodes of definite endocarditis and 24 paravalvular abscesses. Measurements And Results: A comparison of abscess and nonabscess populations revealed that clinical parameters (patient demographics, valvular involvement, presence of a prosthesis, infection with a virulent organism, pericarditis, persistent fever, persistent bacteremia, congestive heart failure, history of intravenous drug use, embolization) and transthoracic echocardiographic parameters were insensitive predictors of the presence of abscess. The only statistically significant correlate was the presence of previously undetected atrioventricular or bundle branch block. Paravalvular abscesses were common in our population and were associated with increased mortality. Improved survival correlated with the absence of mitral valve involvement and the absence of moderate-to-severe congestive heart failure. Conclusions: Given the accuracy and safety of transesophageal echocardiography and the unreliability of clinical and transthoracic echocardiographic criteria, we recommend that transesophageal echocardiography be considered in all endocarditis patients with previously unrecognized conduction disturbances, aortic or prosthetic valve involvement, or both, or indications for valve replacement, or all of the foregoing. abstract_id: PUBMED:38204677 Surgery for paravalvular abscess in children. Objective: To investigate the outcomes of surgery in children with paravalvular abscess at our institution. Methods: A retrospective review of all patients who underwent surgery for paravalvular abscess was performed. Results: Between 1989 and 2020, 30 patients underwent surgery for paravalvular abscess, of whom 5 (16.7%) had an intracardiac fistula and 6 (20.0%) had a pseudoaneurysm. Aortic annulus abscesses were most common, occurring in 23 patients (76.7%). Aortic root replacement was performed in 17 patients (56.7%), root reconstruction was performed in 4 (13.3%), and reconstruction of the central fibrous body was required in 5 (16.7%). Postoperatively, 7 patients (23.3%) required extracorporeal membrane oxygenation (ECMO) support, and 1 patient (3.3%) required permanent pacemaker insertion. There were 6 early deaths, 5 of whom were on ECMO, and no late deaths, with a 15-year survival of 79.7% (95% confidence interval [CI], 60.2%-90.3%). Deaths were from sudden cardiac arrest resulting in brain death in 3 patients, inability to wean from ECMO due to severe cardiac dysfunction in 2 patients, and cerebral mycotic aneurysm and hemorrhage in 1 patient. Freedom from reoperation was 40.0% (95% CI, 17.0%-62.3%) at 15 years Reoperation due to recurrence was rare, occurring in only 2 patients (6.7%). Streptococcus pneumoniae (hazard ratio [HR], 9.2; 95% CI, 1.6-51.7) and preoperative shock (HR, 6.4; 95% CI, 1.3-32.0) were associated with mortality. 
Central fibrous body reconstruction was associated with reoperation (HR, 4.4; 95% CI, 1.2-16.1). Conclusions: Although paravalvular abscess in children is associated with high early mortality, hospital survivors have good long-term survival. Reoperation is frequent, but is rarely due to recurrence of endocarditis. abstract_id: PUBMED:36968888 Prosthetic Aortic Valve Endocarditis Creeping Into the Paravalvular Space. Prosthetic valve endocarditis is a devastating infection with a challenging diagnosis and management. Despite advances in its diagnostic modalities, medical, and surgical interventions, prosthetic valve endocarditis still carries high morbidity and mortality rates. Here, we report a case of prosthetic aortic valve endocarditis that progressed to involve the paravalvular space and the importance of multimodality cardiac imaging in the early detection of paravalvular complications. abstract_id: PUBMED:34497834 Catheter-based Closure of a Post-infective Aortic Paravalvular Pseudoaneurysm Fistula With Severe Regurgitation After Two Valve Replacement Surgeries: A Case Report. Backgsround: Infective endocarditis (IE) of prosthetic valves is a dire complication of cardiac valve replacement surgery and is associated with high rates of morbidity and mortality. Case Summary: A 72-year-old woman with multiple comorbidities underwent surgical replacement of the aortic valve with a mechanical prosthetic valve after recurrent IE. After 10 years, IE recurred and the mechanical valve was surgically replaced with a bioprosthetic valve. Ten years later, severe heart failure developed due to severe paravalvular leak (PVL) caused by an aortic annulus abscess complicated by a paravalvular pseudoaneurysm fistula (PPF). The patient was deemed at prohibitive surgical risk and a catheter-based PVL closure procedure was planned. However, the interventional procedure was delayed several months due to the Covid-19 pandemic with progressive heart failure worsening. Despite an acute satisfactory result of the PPF transcatheter closure and a significant clinical improvement, the patient died 10 months later due to multiorgan failure. It is likely that this was due, at least in part, to the long treatment delay caused by the unprecedented strain on the healthcare system. Discussion: In patients at high surgical risk, early diagnosis and prompt interventional treatment of severe PVL are crucial for improving expectancy and quality of life. However, the recent outbreak of COVID-19 caused deferral of elective and semi-elective structural heart disease procedures (SHD) as in our case. Thus, a proactive and vigilant stance on managing SHD should be a priority even in the context of the COVID-19 pandemic. abstract_id: PUBMED:16376774 Predictors of mortality in paravalvular abscess. Background: Paravalvular abscess formation is an ominous complication of infective endocarditis; however, prognostic variables in paravalvular abscess are poorly defined. Methods: We examined our experience in patients with paravalvular abscess between 1987 and 2004. Clinical, echocardiographic, microbiologic, and surgical data were examined. Results: There were 45 patients (17 females), age 57 +/- 17 years. Twenty-four patients had prosthetic valve endocarditis. Methicillin-sensitive Staphylococcus aureus and coagulase-negative S. aureus were the most common organisms accounting for 25 (56%) cases. Thirty-eight patients (84%) underwent surgery during initial admission. 
Surgical mortality was 7%, in-hospital mortality was 31%, and 1-year mortality was 38%. Between patients who died and patients who survived, there were no differences in age (61 +/- 20 years vs 55 +/- 15 years, P = .3), type of microorganism, presence of prosthetic heart valves (47% vs 57%), presence of moderate to severe or severe regurgitation of involved valve (47% vs 57%, P = .37), presence of associated valvular vegetation (93% vs 93%), area of abscess (5.6 +/- 2.9 cm2 vs 4.4 +/- 3.2 cm2, P = .39), left ventricular systolic function (56% +/- 13% vs 56% +/- 10%, P = .9), white cell count (13 +/- 4 vs 13 +/- 7, P = .9), or polymorphonuclear leukocytosis (86% +/- 6% vs 81% +/- 9%, P = .1). Patients who died were sicker on admission compared with those who survived (33% had stroke or altered mental status vs 7%, P = .03) and had worse renal function compared with those who survived (creatinine 4 +/- 4 mg/dL vs 1.6 +/- 1.9 mg/dL, P = .009). Conclusion: Neurologic impairment and renal impairment are significant determinants of 1-year survival in patients who present with paravalvular abscess. abstract_id: PUBMED:32990862 Prevalence and predictors of aortic root abscess among patients with left-sided infective endocarditis: a cross-sectional comparative study. Background: Aortic root abscess (ARA) is a major complication of infective endocarditis that is associated with increased morbidity and mortality. Limited data are present about patient characteristics and outcomes in this lethal disease. We aimed to study the clinical and echocardiographic characteristics of patients with ARA compared to patients with left-sided infective endocarditis without ARA. We included patients with a definite diagnosis of left-sided infective endocarditis according to modified Duke's criteria. The patients were classified into two groups according to the presence of ARA (ARA and NO-ARA groups). All the patients were studied regarding their demographic data, clinical characteristics, laboratory and imaging data, and complications. Results: We included 285 patients with left-sided infective endocarditis. The incidence of ARA was 21.4% (61 patients). Underlying heart disease, mechanical prosthesis, bicuspid aortic valve, and prior IE were significantly higher in ARA. The level of CRP was higher in ARA (p = 0.03). ARA group showed more aortic valve vegetations (73.8% vs. 37.1%, p &lt; 0.001), more aortic paravalvular leakage (26.7% vs. 4.5%, p &lt; 0.001), and less mitral valve vegetations (21.3% vs. 68.8%, p &lt; 0.001). Logistic regression analysis showed that the odds of ARA increased in the following conditions: aortic paravalvular leak (OR 3.9, 95% CI 1.2-13, p = 0.03), mechanical prosthesis (OR 3.6, 95% CI 1.5-8.7, p = 0.005), aortic valve vegetations (OR 3.0, 95% CI 1.2-8.0, p = 0.02), and undetected organism (OR 2.3, 95% CI 1.1-4.6, p = 0.02), while the odds of ARA decreased with mitral valve vegetations (OR 0.2, 95% CI 0.08-0.5, p = 0.001). We did not find a difference between both groups regarding the incidence of major complications, including in-hospital mortality. Conclusion: In our study, ARA occurred in one fifth of patients with left-sided IE. Patients with mechanical prosthesis, aortic paravalvular leakage, aortic vegetations, and undetected organisms had higher odds of ARA, while patients with mitral vegetations had lower odds of ARA. abstract_id: PUBMED:1852959 Active infective endocarditis complicated by paravalvular abscess. 
Review of 40 cases. We have performed a retrospective analysis of the clinical, echocardiographic, surgical and necropsy data in 40 patients affected by infective endocarditis complicated by paravalvular abscess. The abscess developed on a prosthetic valve in 13 cases, and on a native valve in 27. The aortic valve was the most affected (85%). The clinical course was considered to be acute in 52% of the patients. Fever persisted despite adequate antibiotic therapy in 57%. In 90%, overt heart failure was present. Staphylococci were the most frequent causative microorganisms (S. aureus, 22.5%; S. epidermidis, 20%). Bundle branch or atrioventricular block was detected in 18 patients (sensitivity, 45%; specificity, 88%). Bidimensional echocardiography, along with Doppler techniques when necessary, detected an abscess in 81% of patients (sensitivity, 80%; specificity, 84%). Thirty patients were operated on. Eight of them (26%) died, two intraoperatively. Ten patients were not operated on and nine died (90%). The difference was significant (p < 0.001). Only age and surgical treatment were related to early survival. Over the follow-up of the operated patients, five presented prosthetic dehiscence, but only two required a new operation 3 months after the first intervention, and none died. In conclusion, the detection of a paravalvular abscess complicating infective endocarditis is an indication for surgical treatment. Both surgical mortality and postoperative prosthetic dehiscence are acceptable. Bidimensional echocardiography is the most reliable tool for the diagnosis of this complication. abstract_id: PUBMED:30552648 Feasibility of transcatheter prosthetic paravalvular leakage closure: a single-center, exploratory safety evaluation study of transcatheter closure of prosthetic paravalvular regurgitation in Japan (STOP PARA study). Objective: The standard treatment of prosthetic paravalvular leakage (PVL) accompanied by heart failure or hemolytic anemia is repeat open surgery. Although favorable outcomes for transcatheter prosthetic PVL closure in patients with a high surgical risk for repeat open surgery have been reported, procedural feasibility has not been examined in Japan. Methods: From March 2015 to November 2015, transcatheter PVL closure in the mitral position was performed in four patients (age range 41-78, three females) with high surgical risk due to a history of repeated surgeries, chest radiation, and reconstruction of the mitral annulus after debridement of abscess owing to infective endocarditis. All procedures were performed via a transapical approach under general anesthesia. Results: Of the four patients, the indication for PVL closure was heart failure in two and hemolytic anemia in the other two. There were no major complications, mortalities, or prolonged intensive care unit or hospital stays. Technical success was achieved in two patients. Moderate paravalvular regurgitation persisted in one patient, although a regurgitation reduction of one grade was obtained. In one patient, occluder devices were not deployed because the wire could not cross the defect. Improvement in New York Heart Association functional class compared with that at baseline was observed in two patients. Conclusions: The safety and acute technical success rates of transcatheter mitral PVL closure via a transapical approach were confirmed in Japanese patients. In Japan, transcatheter PVL closure may be an alternative option for patients with PVL who have a high surgical risk.
abstract_id: PUBMED:30011349 A rare echocardiographic image of aortic prosthetic valve endocarditis complicated with paravalvular abscess, pseudoaneurysm and aorto-right atrial fistula. Early infectious endocarditis (IE) occurs in 3% of prosthesis in the first 12 months. Early IE is more aggressive than late prosthetic valve endocarditis. Mortality remains high, despite combined medical and surgical treatment. We present a case of early IE in aortic prosthetic valve complicated with paravalvular abscess, pseudoaneurysm and aorto- right atrial fistula. abstract_id: PUBMED:15220902 Mortality in patients with paravalvular abscess diagnosed by transesophageal echocardiography. Background: Paravalvular abscess is a complication of endocarditis that may lead to persistent infection, conduction abnormalities, fistula formation, worsening congestive heart failure, and death. Methods: Between 1991 and 2001, paravalvular abscess was identified on transesophageal echocardiography in 24 patients who subsequently underwent surgical treatment. Echocardiographic findings were reviewed for location of abscess, presence of a valvular prosthesis, valvular function, and presence of vegetations. Information gathered included the time interval between transesophageal echocardiography diagnosis and operation, inhospital mortality, and microbiologic data. Results: Of 24 patients, 9 died, for a mortality of 38%. Of the patients who died, the average survival after operation was 43 days, with a range of 1 to 238 days. Of the 14 patients with significant valvular or paravalvular regurgitation, 8 died (57%). However, of the 10 patients with mild or no regurgitation, only 1 died (10%). This difference was statistically significant (P =.02). Conclusion: Patients without preserved valve function (those with moderate or severe regurgitation) have a significantly worse outcome (57% mortality) than do those with normal valvular function (10% mortality). Answer: The presence of endocarditis-associated paravalvular abscesses is a serious complication that can be difficult to predict using standard clinical and transthoracic echocardiographic criteria. A retrospective chart review study found that these parameters were insensitive predictors of the presence of abscess, with the only statistically significant correlate being the presence of previously undetected atrioventricular or bundle branch block (PUBMED:7705150). Given the unreliability of clinical and transthoracic echocardiographic criteria, the study recommended considering transesophageal echocardiography in all endocarditis patients with previously unrecognized conduction disturbances, aortic or prosthetic valve involvement, or indications for valve replacement. Other studies have also explored predictors of mortality and complications in patients with paravalvular abscess. One study found that neurologic impairment and renal impairment were significant determinants of 1-year survival in patients presenting with paravalvular abscess (PUBMED:16376774). Another study identified that patients with mechanical prosthesis, aortic paravalvular leakage, aortic vegetations, and undetected organisms had higher odds of developing aortic root abscess, while patients with mitral vegetations had lower odds (PUBMED:32990862). 
Additionally, a study on mortality in patients with paravalvular abscess diagnosed by transesophageal echocardiography found that patients without preserved valve function (those with moderate or severe regurgitation) had a significantly worse outcome than those with normal valvular function (PUBMED:15220902). In summary, while certain clinical parameters may be associated with the presence of endocarditis-associated paravalvular abscesses, they are generally not reliable predictors. The use of transesophageal echocardiography is recommended for better assessment, especially in patients with certain risk factors such as conduction disturbances and valve involvement.
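Several of the abstracts above report diagnostic performance figures, for example sensitivity and specificity of new conduction block or of two-dimensional echocardiography for detecting a paravalvular abscess. As a reminder of how such figures relate to a 2x2 table, here is a small illustrative Python sketch; the counts are hypothetical and chosen only to reproduce a sensitivity near 45% and a specificity near 88%, and are not taken from any cited study.

# Hypothetical 2x2 counts for a diagnostic sign (e.g., new AV or bundle branch block)
# versus surgically or necropsy-confirmed paravalvular abscess; illustrative only.
tp, fn = 18, 22   # abscess present: sign positive / sign negative
fp, tn = 6, 44    # abscess absent:  sign positive / sign negative

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)              # positive predictive value
npv = tn / (tn + fn)              # negative predictive value
prevalence = (tp + fn) / (tp + fn + fp + tn)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"PPV={ppv:.2f}, NPV={npv:.2f}, prevalence={prevalence:.2f}")

Because predictive values depend on the prevalence of abscess in the studied population, a sign with modest sensitivity can still be a useful trigger for transesophageal echocardiography without being a reliable stand-alone predictor.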
Instruction: Is pulpitis painful? Abstracts: abstract_id: PUBMED:17804321 Kv1.4 subunit expression is decreased in neurons of painful human pulp. Kv1.4, a subunit of voltage-gated K(+) channels, plays a large role in regulating neuronal excitability. The level of Kv1.4 expression is unknown in human sensory neurons innervating healthy or painful tissue. Therefore, we examined Kv1.4 immunoreactivity in axons innervating both clinically diagnosed asymptomatic and painful symptomatic human tooth pulp. Antibodies directed against Kv1.4 and PGP9.5, a protein marker for axons, was used to determine the proportion of PGP9.5 immunopositive tissue that was also immunopositive for Kv1.4. We report that on pulpal axons innervating symptomatic teeth Kv1.4 immunoreactivity, a correlate of decreased Kv1.4 expression, is significantly decreased (p &lt; 0.0001), suggestive of a factor responsible for facilitating chronic dental pain and decreases in currents produced, such as I(A), in neurons innervating painful pulp. abstract_id: PUBMED:14520770 Capsaicin receptor VR1 and ATP purinoceptor P2X3 in painful and nonpainful human tooth pulp. Aims: To investigate the levels of the capsaicin or vanilloid receptor-1 (VR1) and the ATP-gated purinoceptor P2X3 in painful and nonpainful human tooth pulps. Methods: Immunohistochemistry with specific antibodies and image analysis was used to quantify VR1- and P2X3-positive nerve fibers in painful (n = 13) and nonpainful (n = 33) human tooth pulps, and VR1 immunoreactivity was compared with immunoreactivity for the structural neuronal marker peripherin. Results: Strong VR1-like immunoreactivity was documented for the first time in dental pulp neurons. Weaker P2X3-like immunoreactivity was also detected in fewer nerve fibers. The ratio of VR1 to peripherin immunoreactivity was not significantly different between nonpainful and painful tissues (mean +/- SE% area of VR1 to peripherin; nonpainful 53.4 +/- 4.7%, n = 33; pulpitis 35.1 +/- 7.1%, n = 13; P = .07). Conclusion: The presence of VR1 and P2X3 in fibers of human tooth pulp suggest that they may play a role in perception of dental pain, but further studies, including quantitation of their ligands, are necessary to elucidate any role they may play in pathophysiologic states. abstract_id: PUBMED:11853236 Quantitative analysis of substance P, neurokinin A and calcitonin gene-related peptide in pulp tissue from painful and healthy human teeth. Aim: The purpose of this study was to investigate the levels of substance P (SP), neurokinin A (NKA) and calcitonin gene-related peptide (CGRP) in painful and healthy human dental pulps. Methodology: Forty-six samples of pulp tissue were collected from extracted or endodontically treated painful teeth and 20 from clinically healthy teeth extracted for orthodontic reasons. All pulp samples were boiled in 0.5 m acetic acid for 10 min, centrifuged and the supernatant collected. SP, NKA and CGRP levels were measured using radioimmunoassay. Results: Substance P and CGRP were present in all samples and NKA was detected in 96% of the pulps. CGRP was present in much higher concentrations than SP and NKA in both painful and non-painful teeth. The painful teeth had significantly higher concentrations of SP (P = 0.02), NKA (P &lt; 0.001) and CGRP (P = 0.03) than non-painful teeth. The concentration of CGRP was significantly higher in the pulps of smokers compared with non-smokers (P = 0.02). 
Conclusions: Elevated levels of these neuropeptides in pulps from painful teeth indicate that they may play an important role in the process of pulpal inflammation and pain. Further investigation of the association between these neuropeptides and pulpal status may help to improve our understanding of pulpal inflammation and dental pain. abstract_id: PUBMED:12120702 Quantitative analysis of substance P, neurokinin A and calcitonin gene-related peptide in gingival crevicular fluid associated with painful human teeth. The aim of this study was to investigate the presence of substance P (SP), neurokinin A (NKA) and calcitonin gene-related peptide (CGRP) in the gingival crevicular fluid of teeth diagnosed with pain of pulpal origin compared with clinically healthy teeth, and to detect any changes in the levels of these neuropeptides in gingival crevicular fluid after removal of the pulp from the painful teeth. Gingival crevicular fluid was collected at baseline from one interproximal site at a painful and a non-painful contralateral tooth from 54 adult patients. Sampling was repeated after 1 wk in a subset of 21 subjects. Samples were analysed for SP, NKA, and CGRP using radioimmunoassay. The mean levels of SP and NKA were significantly higher in gingival crevicular fluid from painful teeth compared with non-painful teeth. The level of SP in the GCF of painful teeth fell significantly 1 wk after pulpectomy. In contralateral teeth, there were no significant differences in the levels of SP and NKA after 1 wk. It is concluded that SP and NKA are present in significantly greater amounts in the GCF of painful teeth compared with healthy teeth. abstract_id: PUBMED:17889683 TRPM8 Axonal expression is decreased in painful human teeth with irreversible pulpitis and cold hyperalgesia. Pulpitis pain might be triggered by a cold stimulus, yet the cellular mechanisms responsible for this phenomenon are largely unknown. One possible mechanism involves the direct activation of cold-responsive thermoreceptors. The purpose of this study was to evaluate the possible role of the TRPM8 thermoreceptor in cold-mediated noxious pulpal pain mechanisms by comparing expression patterns in pulpal nerves from healthy control molars to cold-sensitive painful molars with irreversible pulpitis. Samples were identically processed with the indirect immunofluorescence method, and images were obtained with confocal microscopy. The immunofluorescence intensity and area occupied by TRPM8 within N52/PGP9.5-identified nerve fibers were quantified. Results showed that relative to normal samples, TRPM8 nerve area expression was significantly less in the cold-sensitive painful samples (34.9% vs 8%, P &lt;0.03), but with no significant difference in immunofluorescence intensity between the 2 groups. These results suggest that TRPM8 is most likely not involved in cold-mediated noxious pulpal pain mechanisms. abstract_id: PUBMED:20529324 Sodium channel Na v 1.7 immunoreactivity in painful human dental pulp and burning mouth syndrome. Background: Voltage gated sodium channels Na v 1.7 are involved in nociceptor nerve action potentials and are known to affect pain sensitivity in clinical genetic disorders. Aims And Objectives: To study Na v 1.7 levels in dental pulpitis pain, an inflammatory condition, and burning mouth syndrome (BMS), considered a neuropathic orofacial pain disorder. Methods: Two groups of patients were recruited for this study. 
One group consisted of patients with dental pulpitis pain (n = 5) and controls (n = 12), and the other patients with BMS (n = 7) and controls (n = 10). BMS patients were diagnosed according to the International Association for the Study of Pain criteria; a pain history was collected, including the visual analogue scale (VAS). Immunohistochemistry with visual intensity and computer image analysis were used to evaluate levels of Na v 1.7 in dental pulp tissue samples from the dental pulpitis group, and tongue biopsies from the BMS group. Results: There was a significantly increased visual intensity score for Na v 1.7 in nerve fibres in the painful dental pulp specimens, compared to controls. Image analysis showed a trend for an increase of the Na v 1.7 immunoreactive % area in the painful pulp group, but this was not statistically significant. When expressed as a ratio of the neurofilament % area, there was a strong trend for an increase of Na v 1.7 in the painful pulp group. Na v 1.7 immunoreactive fibres were seen in abundance in the sub-mucosal layer of tongue biopsies, with no significant difference between BMS and controls. Conclusion: Na v 1.7 sodium channel may play a significant role in inflammatory dental pain. Clinical trials with selective Na v 1.7 channel blockers should prioritize dental pulp pain rather than BMS. abstract_id: PUBMED:32918280 Correlation between the clinically diagnosed inflammatory process and periapical index scores in severely painful endodontically involved teeth. Aim: To assess and correlate three distinct states of severely painful endodontically derived inflammation with their depiction on periapical radiographs using periapical index (PAI) scores. Methodology: During a period of 15 months, 368 consecutively enrolled patients with suspected endodontic emergency conditions were examined at the University of Zurich, Center of Dental Medicine. Cases with a severely painful (numeric rating scale, NRS-11 &gt; 6) endodontically involved tooth and a clear pulpal and apical diagnosis (n = 162) were selected (one tooth per patient). Teeth were divided into three groups according to the clinically diagnosed main location of the inflammatory process: level 1: pulp (positive response to cold test), level 2: periodontium (no response to cold without swelling) and level 3: periapical tissues (no response to cold with swelling). Periapical radiographs were obtained using a digital unit and analysed by two calibrated observers. For level 2, which had the highest PAI variance (n = 76), the PAI scores were further scrutinized regarding their dependence on tooth location and the duration of pain. Data were analysed using chi-squared and non-parametric tests, alpha = 0.05. Results: Overall, the PAI scores correlated well with the clinically diagnosed main location of periapical inflammation (Spearman's rho = 0.5131, P &lt; 0.001), with level 1 having the lowest scores by far (P &lt; 0.001) and level 2 having significantly lower scores compared to level 3 (P &lt; 0.05). However, a PAI score of 5 was found in merely 3 teeth within the entire cohort, and 49% of the teeth in the level 2 group had no radiolucency (PAI &lt; 3). Within level 2, the PAI scores were not dependent on tooth location but were substantially (P &lt; 0.001) higher for teeth which had hurt for more than one week, and for root filled teeth. 
Conclusions: For the analysed, severely painful endodontically involved teeth, the clinically diagnosed main location of inflammation was reflected by the periapical index. PAI scores were not significantly influenced by anatomical noise, yet in some cases underestimated the clinical situation. abstract_id: PUBMED:28920110 Painful neuropathy caused by compression of the inferior alveolar nerve by focal osteosclerotic lesion of the mandible: A case report. Osteosclerotic lesions are a common finding on dental radiographs. They are considered developmental variants of normal bone architecture and usually do not need any treatment. The purpose of this article is to present a rare case of an osteosclerotic lesion of the mandible causing trigeminal neuropathy by compression of the inferior alveolar nerve. The pain started with dental hypersensitivity of the mandibular right first molar. Later on, signs of irreversible pulpitis of the molar developed. Endodontic therapy and apicoectomy did not resolve the pain, which later intensified, and a painful neuropathy localized to the inferior alveolar nerve developed; therefore, surgical decompression was indicated. Treating a dental patient with neuralgic pain is always a challenge, especially if there is no obvious source or reason for this type of pain. A clear evaluation and treatment protocol are important to minimize the patient's morbidity and avoid unnecessary overtreatment. abstract_id: PUBMED:37876583 Gingival Crevicular Fluid Levels of Neurokinin A and Substance P in Patients with Symptomatic Irreversible Pulpitis: A Systematic Review and Meta-Analysis. Aims And Objectives: The levels of the neuropeptides neurokinin A (NKA) and substance P (SP) in the gingival crevicular fluid of patients with symptomatic irreversible pulpitis (SIP) were evaluated using a systematic review and meta-analysis. Materials And Methods: The PubMed, Scopus, EBSCOhost, ScienceDirect, ProQuest and Cochrane Library databases were thoroughly searched. The quality of the studies was evaluated using the Joanna Briggs criteria. Twenty-four studies were listed following a thorough search of abstracts and full texts and removal of duplicates. Only two of these papers were eligible for inclusion in the meta-analysis. Since the results were reported as means and standard deviations, the levels of neuropeptides in the test and control groups were compared in the meta-analysis. Results: When compared with healthy teeth, NKA was not significantly elevated in the GCF of individuals with SIP (P = 0.06; odds ratio = 1.34 [-0.05 to 2.74] at 95% confidence interval [CI]). Additionally, there was no evidence of an association between SP and SIP (P = 0.08; odds ratio = 0.84 [-0.10 to 1.77] at 95% CI). Conclusion: This systematic review demonstrated that GCF levels of NKA and SP are not significantly associated with SIP. However, the scarcity of studies in this area makes it evident that additional research is needed, particularly on NKA in relation to pulpal disorders. abstract_id: PUBMED:12406376 Is pulpitis painful? Aim: To determine whether inflamed dental pulps progress to necrosis without pain. Methodology: Records of 2,202 maxillary anterior teeth endodontically treated at the University of Michigan were collected. Records of teeth presenting with periapical radiolucencies but no response to vitality tests were examined further to determine, from the history, whether the patient had experienced pain or no pain from the involved tooth.
Results: Approximately 40% of the teeth included gave no history of spontaneous or prolonged pain to a thermal stimulus. No statistically significant differences in the incidence of 'painless pulpitis' were related to either gender or tooth type. Patients aged &gt;53 years experienced 'painless pulpitis' more often than patients &lt;33 years. Conclusions: Many teeth appear to progress to pulpal necrosis without the patient experiencing pain attributable to the pulp. Answer: The question of whether pulpitis is painful can be addressed by examining the evidence from the provided abstracts. According to the study reported in PUBMED:12406376, approximately 40% of the teeth that progressed to pulpal necrosis gave no history of spontaneous or prolonged pain to a thermal stimulus. This suggests that many teeth may progress to pulpal necrosis without the patient experiencing pain attributable to the pulp. However, this does not mean that pulpitis is always painless; rather, it indicates that there can be cases where pulpitis does not result in pain that is noticeable to the patient. Other studies provide evidence that pulpitis can indeed be associated with pain. For instance, PUBMED:17804321 reports that Kv1.4 subunit expression is decreased in neurons of painful human pulp, which is suggestive of a factor responsible for facilitating chronic dental pain. Similarly, PUBMED:14520770, PUBMED:11853236, and PUBMED:12120702 all discuss the presence of various receptors and neuropeptides, such as VR1, P2X3, substance P (SP), neurokinin A (NKA), and calcitonin gene-related peptide (CGRP), in painful human tooth pulp, indicating their potential role in the perception of dental pain and pulpal inflammation. Furthermore, PUBMED:17889683 and PUBMED:20529324 discuss the decreased expression of TRPM8 in painful teeth with irreversible pulpitis and the increased immunoreactivity of sodium channel Na v 1.7 in painful dental pulp, respectively, suggesting their involvement in the mechanisms of dental pain. In summary, while some cases of pulpitis may progress without causing pain that is noticeable to the patient, there is substantial evidence that pulpitis is often associated with pain due to the involvement of various receptors, neuropeptides, and ion channels that contribute to the sensation of pain in the dental pulp (PUBMED:17804321, PUBMED:14520770, PUBMED:11853236, PUBMED:12120702, PUBMED:17889683, PUBMED:20529324).
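Several of the pulp studies cited above quantify innervation by expressing the immunoreactive area of a candidate marker (for example VR1, TRPM8 or Na v 1.7) as a percentage of the area of a structural neuronal marker (peripherin, PGP9.5 or neurofilament) and then comparing painful and non-painful groups with a nonparametric test. A minimal, illustrative Python sketch of that workflow follows; the per-sample values are hypothetical placeholders, not data from the cited studies.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical % area ratios (marker area / structural-marker area * 100) per pulp sample.
nonpainful = np.array([48.0, 55.2, 60.1, 49.8, 57.3, 52.6])
painful = np.array([30.5, 41.2, 28.7, 36.9, 39.4])

def summarize(x):
    return f"mean={x.mean():.1f}%, SEM={x.std(ddof=1)/np.sqrt(len(x)):.1f}%"

print("non-painful:", summarize(nonpainful))
print("painful:    ", summarize(painful))

# Nonparametric two-sided comparison of the two groups.
stat, p = mannwhitneyu(nonpainful, painful, alternative="two-sided")
print(f"Mann-Whitney U={stat:.1f}, p={p:.3f}")

Normalizing to a structural marker in this way is what allows a decrease in, say, Kv1.4 or TRPM8 signal to be attributed to altered expression per axon rather than to a difference in overall innervation density.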
Instruction: Nadir Hematocrit on Bypass and Rates of Acute Kidney Injury: Does Sex Matter? Abstracts: abstract_id: PUBMED:31053430 Effect of sex on nadir hematocrit and rates of acute kidney injury in coronary artery bypass. Objective: Findings from a large multicenter experience showed that sex influenced the relationship between low nadir hematocrit and increased risk of acute kidney injury after cardiac surgery. We explored whether sex-related differences persisted among patients undergoing isolated coronary artery bypass grafting. Methods: We undertook a prospective, observational study of 17,363 patients without dialysis (13,137 male: 75.7%; 4226 female: 24.3%) undergoing isolated coronary artery bypass grafting between 2011 and 2016 across 41 institutions in the Perfusion Measures and Outcomes registry. Odds ratios between nadir hematocrit and stage 2 or 3 acute kidney injury were calculated, and the interaction of sex with nadir hematocrit was tested. The multivariable, generalized, linear mixed-effect model adjusted for preoperative and intraoperative factors and institution. Results: Median nadir hematocrit was 22% among women and 27% among men (P &lt; .001). Women were administered a greater median net prime volume indexed to body surface area (407 vs 363 mL/m2) and more red blood cell transfusions (55.5% vs 24.3%; both P &lt; .001). Acute kidney injury was higher among women (6.0% vs 4.3%, P &lt; .001). There was no effect of sex on the relationship between nadir hematocrit and acute kidney injury (P = .67). Low nadir hematocrit was inversely associated with acute kidney injury (adjusted odds ratios per 1-unit increase in nadir hematocrit 0.96; 95% confidence interval, 0.93-0.98); this effect was similar across sexes and independent of red blood cell transfusions. Conclusions: We found no sex-related differences in the effect of nadir hematocrit on acute kidney injury after isolated coronary artery bypass grafting. However, the strong inverse relationship between anemia and acute kidney injury across sexes suggests the importance of reducing exposure to low nadir hematocrit. abstract_id: PUBMED:26296273 Nadir Hematocrit on Bypass and Rates of Acute Kidney Injury: Does Sex Matter? Background: Reports have associated nadir hematocrit (Hct) on cardiopulmonary bypass with the occurrence of renal dysfunction. Recent literature has suggested that women, although more often exposed to lower nadir Hct, have a lower risk of postoperative renal dysfunction. We assessed whether this relationship held across a large multicenter registry. Methods: We undertook a prospective, observational study of 15,221 nondialysis-dependent patients (10,376 male, 68.2%; 4,845 female, 31.8%) undergoing cardiac surgery between 2010 and 2014 across 26 institutions in Michigan. We calculated crude and adjusted OR between nadir Hct during cardiopulmonary bypass and stage 2 or 3 acute kidney injury (AKI), and tested the interaction of sex and nadir Hct. The predicted probability of AKI was plotted separately for men and women. Results: Nadir Hct less than 21% occurred among 16.6% of patients, although less commonly among men (9.5%) than women (31.9%; p &lt; 0.001). Acute kidney injury occurred among 2.7% of patients, with small absolute differences between men and women (2.6% versus 3.0%, p = 0.20). There was a significant interaction between sex and nadir Hct (p = 0.009). 
The effect of nadir Hct on AKI was stronger among male patients (adjusted odds ratio per 1 unit decrease in nadir Hct 1.10, 95% confidence interval: 1.05 to 1.13) than female patients (adjusted odds ratio 1.01, 95% CI: 0.96, 1.06). Conclusions: Lower nadir Hct was associated with an increased risk of AKI, and the effect appears to be stronger among men than women. Understanding of the mechanism underlying this association remains uncertain, although these results suggest the need to limit exposure to lower nadir Hct, especially for male patients. abstract_id: PUBMED:23673070 Association of gender and lowest hematocrit on cardiopulmonary bypass with acute kidney injury and operative mortality in patients undergoing cardiac surgery. Background: Nadir hematocrit on cardiopulmonary bypass (CPB) is a known risk factor for worse outcomes after cardiac surgery. Whether women, because of lower nadir hematocrit on CPB, are more prone to worse outcomes than men after cardiac surgery remains unknown. Methods: We evaluated 13,734 patients (31.3% women) undergoing cardiac surgery (6/1/2001 to 06/30/2011) to study the association of hematocrit on CPB and gender with postoperative acute kidney injury (AKI) stage 2-3 (increase in creatinine at least twice the baseline), and operative mortality. Results: Women were older (68 ± 12 vs 65 ± 12 years, p &lt; 0.001), with more comorbidities. Baseline (37.4% ± 4.4% vs 39.8% ± 4.6%, p &lt; 0.001) and nadir (24.5% ± 3.5% vs 27.4% ± 3.6%, p &lt; 0.001) hematocrit were lower, whereas the hematocrit drop on CPB (baseline and nadir) was greater in women (12.9% ± 4.35 vs 12.4% ± 4.2%, p &lt; 0.001). Observed AKI stage 2-3 and mortality rates were significantly higher in women than in men (5.8% vs 4.9%, p = 0.025 and 4.3% vs 3.4%, p = 0.009, respectively). While nadir hematocrit was inversely related to AKI stage 2-3 and death in both genders, the subgroup of patients with severe hemodilution (nadir hematocrit ≤ 22%) demonstrated a nonsignificant higher rate of AKI stage 2-3 in men (9.2% vs 7.8%) and a significant higher mortality in men (11% vs 7.6%) compared with women. Conclusions: Both men and women undergoing cardiac surgery on CPB were prone to the deleterious effects of hemodilution on renal function and death. Yet, despite greater hemodilution, women had a lower relative risk of AKI and death than men at lower nadir hematocrit values on CPB suggesting better tolerance to hemodilution in women. abstract_id: PUBMED:36324246 Association Between Nadir Hematocrit and Severe Acute Kidney Injury After Off-Pump Coronary Artery Bypass Graft Surgery: A Retrospective Cohort Study Based on the MIMIC-IV Database. BACKGROUND We aimed to evaluate the association between postoperative nadir hematocrit (Hct) and severe acute kidney injury (AKI) in patients undergoing off-pump coronary artery bypass graft (OPCABG) surgery. MATERIAL AND METHODS Data of patients who received OPCABG were extracted from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database. A generalized additive model was applied to explore the relationship between nadir Hct and severe AKI. Patients were divided into 4 groups by quartiles of postoperative nadir Hct, with the lowest group (Hct &lt;25%) as reference. We conducted multivariate logistic regression models to calculate adjusted odds ratios (OR) and 95% CI and evaluate trend among the 4 groups. RESULTS In total, 1783 OPCABG patients were included. A nonlinear association between nadir Hct and severe AKI was identified. 
After adjusting for potential confounders, nadir Hct was negatively associated with risk of severe AKI when Hct was less than 31%; there was no statistical significance between highest Hct group (Hct ≥31%) and control group (Hct &lt;25%; P&gt;0.05). Tests for trend were significant (P&lt;0.05). Subgroup analyses showed each 1% increase in postoperative nadir Hct was associated with a 23% decrease in risk of severe AKI (OR, 0.77; P=0.002) in lower BMI group (&lt;30 kg/m²). CONCLUSIONS The association between postoperative nadir Hct and severe AKI in patients after OPCABG was nonlinear. Lower nadir Hct may be associated with increased risk of severe AKI when Hct values are less than 31%. However, no statistical significance was found between the highest Hct group and control group. abstract_id: PUBMED:15678051 Hemodilution during cardiopulmonary bypass is an independent risk factor for acute renal failure in adult cardiac surgery. Background: This observational study sought to determine whether the degree of hemodilution during cardiopulmonary bypass is independently related to perioperative acute renal failure necessitating dialysis support. Methods: Data were prospectively collected on consecutive patients undergoing cardiac operations with cardiopulmonary bypass from 1999 to 2003 at a tertiary care hospital. The independent relationship was assessed between the degree of hemodilution during cardiopulmonary bypass, as measured by nadir hematocrit concentration, and acute renal failure necessitating dialysis support. Multivariate logistic regression was used to control for variables known to be associated with perioperative renal failure and anemia. Results: Of the 9080 patients included in the analysis, 1.5% (n = 134) had acute renal failure necessitating dialysis support. There was an independent, nonlinear relationship between nadir hematocrit concentration during cardiopulmonary bypass and acute renal failure necessitating dialysis support. Moderate hemodilution (nadir hematocrit concentration, 21%-25%) was associated with the lowest risk of acute renal failure necessitating dialysis support; the risk increased as nadir hematocrit concentration deviated from this range in either direction (P = .005). Compared with moderate hemodilution, the adjusted odds ratio for acute renal failure necessitating dialysis support with severe hemodilution (nadir hematocrit concentration &lt;21%) was 2.34 (95% confidence interval, 1.47-3.71), and for mild hemodilution (nadir hematocrit concentration &gt;25%) it was 1.88 (95% confidence interval, 1.02-3.46). Conclusions: Given that there is an independent association between the degree of hemodilution during cardiopulmonary bypass and perioperative acute renal failure necessitating dialysis support, patient outcomes may be improved if the nadir hematocrit concentration during cardiopulmonary bypass is kept within the identified optimal range. Randomized clinical trials, however, are needed to determine whether this is a cause-effect relationship or simply an association. abstract_id: PUBMED:12963200 The association of lowest hematocrit during cardiopulmonary bypass with acute renal injury after coronary artery bypass surgery. Background: Acute renal injury is a common serious complication of cardiac surgery. Moderate hemodilution is thought to reduce the risk of kidney injury but the current practice of extreme hemodilution (target hematocrit 22% to 24%) during cardiopulmonary bypass (CPB) has been linked to adverse outcomes after cardiac surgery. 
Therefore we tested the hypothesis that lowest hematocrit during CPB is independently associated with acute renal injury after cardiac surgery. Methods: Demographic, perioperative, and laboratory data were gathered for 1,404 primary elective coronary bypass surgery patients. Preoperative and daily postoperative creatinine values were measured until hospital discharge per institutional protocol. Stepwise multivariable linear regression analysis was performed to determine whether lowest hematocrit during CPB was independently associated with peak fractional change in creatinine (defined as the difference between the preoperative and peak postoperative creatinine represented as a percentage of the preoperative value). A p value of less than 0.05 was considered significant. Results: Multivariable analyses including preoperative hematocrit and other perioperative variables revealed that lowest hematocrit during CPB demonstrated a significant interaction with body weight and was highly associated with peak fractional change in serum creatinine (parameter estimate [PE] = 4.5; p = 0.008) and also with highest postoperative creatinine value (PE = 0.06; p = 0.004). Although other renal risk factors were significant covariates in both models, TM50 (an index of hypotension during CPB) was notably absent. Conclusions: These results add to concerns that current CPB management guidelines accepting extreme hemodilution may contribute to postoperative acute renal and other organ injury after cardiac surgery. abstract_id: PUBMED:22111467 Lowest safe hematocrit level on cardiopulmonary bypass in patients undergoing coronary artery bypass grafting. Unlabelled: Coronary artery bypass grafting (CABG) is the most common cardiothoracic surgical procedure performed in the United States. The majority of patients undergoing CABG are placed on cardiopulmonary bypass (CPB) to support the circulation. CPB hemodilutes the patient imposing extremes in the hemostatic system, requiring careful assessment of pre-surgical hematologic values. Recent clinical data suggests that patients who receive blood transfusions while hospitalized for CABG have an increased morbidity and mortality. Women have a greater risk of transfusions than men with CABG and are thus at greater postoperative risk. The purpose of the present study was to determine the lowest safe hematocrit level achievable on CPB during CABG surgery where no transfusion and no post-operative complications were identified. Methods: Inpatient record review evaluation including socio-demographic data, hematocrit values (pre-pump and on pump), red blood cell transfusion administration and surgery, postoperative complications. Results: Collected data from 136 first-time, single CABG patients demonstrated 68% had no postoperative complications. Of this non-complicated group 60% were transfused while only 40% were non-transfused. The non-complicated, non-transfused group hematocrit values averaged 25.1% +/- 2.8 with a minimum of 19% (P = 0.003). 68% of the patients had no postoperative complication. Transient acute renal insufficiency was the most common complication observed. Conclusion: The lowest safe hematocrit level on CABG in non-complicated and non-transfused patients was 19% corresponding to an average of 25.1% +/- 2.8. A preoperative patient profile has been identified where age, weight, height, BSA, BMI, and pre-pump and on pump hematocrit values can aid medical staff about transfusion decision making.
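The studies above relate renal risk to banded nadir hematocrit (severe, moderate, and mild hemodilution) and report adjusted odds ratios against a reference band. The sketch below shows, purely for illustration, how such a banded logistic-regression analysis could be set up; the variable names, cut-points, covariates, and simulated data are assumptions and do not reproduce any of the cited models.

```python
# Illustrative sketch only: a banded nadir-hematocrit analysis in the spirit of
# the studies above. Column names, cut-points, covariates, and the simulated
# data are assumptions; nothing here reproduces the cited studies' models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "nadir_hct": rng.normal(25, 4, n),           # lowest hematocrit (%) on bypass
    "age": rng.normal(66, 10, n),
    "diabetes": rng.integers(0, 2, n),
    "baseline_creat": rng.normal(1.0, 0.25, n),  # mg/dL
})
# Simulate a U-shaped risk: lowest near moderate hemodilution, higher at extremes
risk = 0.08 + 0.004 * (df["nadir_hct"] - 23) ** 2
df["aki"] = rng.binomial(1, np.clip(risk, 0.01, 0.95))

# Band the exposure; moderate hemodilution (21-25%) is the reference category
df["hct_band"] = pd.cut(df["nadir_hct"], bins=[0, 21, 25, 100],
                        labels=["severe", "moderate", "mild"])

model = smf.logit(
    "aki ~ C(hct_band, Treatment(reference='moderate')) + age + diabetes + baseline_creat",
    data=df,
).fit(disp=False)

adjusted_or = np.exp(model.params)       # adjusted OR for each band vs moderate
ci = np.exp(model.conf_int())            # 95% CIs on the odds-ratio scale
print(pd.concat([adjusted_or.rename("OR"), ci], axis=1))
```

Swapping the banded term for a spline (for example, a patsy `bs(nadir_hct, df=4)` term) is one way to obtain the kind of smooth nonlinear relationship described in the MIMIC-IV analysis above.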
abstract_id: PUBMED:26650616 Ideal Hematocrit to Minimize Renal Injury on Cardiopulmonary Bypass. Objective: Renal dysfunction after cardiopulmonary bypass (CBP) ranges from subclinical injury to established renal failure requiring dialysis. The pathophysiology is multifactorial, and recently, hemodilution during CBP has been thought to be an important determinant of postoperative renal injury. In this study, we attempted to assess the independent effect of hemodilution on renal function. We also aimed to identify the optimal hematocrit where hemodilution-induced renal injury is minimal. Methods: A prospective observational study was conducted on 200 patients between February 2012 and July 2013. One hundred fifty patients were included in the study group, who were further subdivided on the basis of lowest hemodilution as mild hemodilution (&gt;25%), moderate hemodilution (21%-25%), and severe hemodilution (&lt;21%) categories. The primary outcome of the study was renal outcome measure, which was assessed by comparing the creatinine clearance across the groups. Results: The creatinine clearance decreased over a period in all three groups. When compared with mild or moderate hemodilution, the reduction in creatinine clearance was significantly higher in the group with severe hemodilution (P ≤ 0.0001). However, there was no significant difference in creatinine clearance reduction between the mild and moderate hemodilution groups (P = 0.813; 95% confidence interval, -8.41 to 10.68). Conclusions: Based on our observations, we would like to propose that a hematocrit of 21% should be considered the critical threshold. Hematocrit below this value of 21% during CBP is associated with the most significant deterioration in renal function. abstract_id: PUBMED:37575895 Cardiopulmonary bypass in neonates and infants: advantages of high flow high hematocrit bypass strategy-clinical practice review. Cardiopulmonary bypass is an integral and indispensable part of surgical repair of congenital heart defects. While the complications and morbidity secondary to the use of cardiopulmonary bypass has decreased considerably, there remains a significant incidence of clinically relevant renal and neurological injury. To provide more physiological delivery of oxygenated blood to the end-organs, our center has been successfully using a high-flow, high hematocrit cardiopulmonary bypass strategy since 2006. The essential components of this strategy include maintaining high flows (typically 200 mL/kg/min in neonates, 150-175 mL/kg/min in older infants weighing &lt;10 kg, and 2.6 L/min/m2 in older children) throughout the duration of cardiopulmonary bypass irrespective of patient temperature, as well as maintaining a hematocrit of at least 32% on cardiopulmonary bypass. The incidence of post-operative acute kidney injury (around 3%) and clinical acute neurological events (&lt;1%) with this strategy is considerably less when compared to other contemporary publications using the conventional cardiopulmonary bypass strategy. In this review, we discuss the rationale behind our approach and present evidence to support the high-flow, high-hematocrit strategy. We also discuss the practical aspects of our strategy and describe the adjuncts we use to derive additional benefits. 
These adjuncts include the use of a hybrid pH/alpha stat strategy during cooling/rewarming, aggressive use of conventional ultrafiltration during cardiopulmonary bypass, a terminal hematocrit of 40-45%, and avoidance of milrinone and albumin in the early peri-operative period. This results in a very low incidence of post-operative bleeding, facilitates chest closure in the operating room even in most neonates, helps in reducing the need for post-operative blood product transfusion and helps in achieving a favorable post-operative fluid balance early after surgery. abstract_id: PUBMED:33229798 Perioperative Nadir Hemoglobin Concentration and Outcome in Off-Pump Coronary Artery Bypass Surgery - A Retrospective Review. Background: Emerging evidence advocates the use of restrictive transfusion strategies at hemoglobin (Hb) levels of approximately 7-8 g/dL in cardiac surgeries using cardiopulmonary bypass. Yet, it is unclear whether the same thresholds can be applied to off-pump coronary bypass (OPCAB) that accompanies cardiac displacement and warm regional ischemia-reperfusion injury without the aid of a bypass machine. The aim of this study is to investigate the relationship between perioperative nadir Hb level and outcome following OPCAB. Methods and Results: Medical records of 1,360 patients were reviewed. Hb levels were serially assessed during and after surgery. The incidence of composite endpoints was 35%, which included myocardial infarction, stroke, acute kidney injury, sternal infection, reoperation, prolonged mechanical ventilation, and in-hospital mortality. The nadir Hb level was significantly lower in the morbidity group than in the non-morbidity group (8.1 [7.4-9.1] vs. 8.8 [7.9-9.8] g/dL, P<0.001). Multivariable logistic regression analysis revealed nadir Hb as an independent risk factor of adverse outcome (odds ratio: 0.878, 95% confidence intervals: 0.776-0.994, P=0.04), whereas preoperative anemia and perioperative transfusion were not. The critical value of Hb for predicting detrimental outcome was 8.05 g/dL. Conclusions: A significant association is found between perioperative nadir Hb and adverse outcome after OPCAB. Although preoperative anemia was not associated with poor prognosis per se, it was the only modifiable risk factor that was closely linked to nadir Hb. Answer: The relationship between nadir hematocrit on cardiopulmonary bypass and the rates of acute kidney injury (AKI) appears to be influenced by sex, but the findings across studies are not entirely consistent. One study found that sex did not affect the relationship between nadir hematocrit and AKI after isolated coronary artery bypass grafting (CABG). In this study, both men and women had a similar inverse association between low nadir hematocrit and increased risk of AKI, suggesting the importance of reducing exposure to low nadir hematocrit for all patients, regardless of sex (PUBMED:31053430). However, another study reported a significant interaction between sex and nadir hematocrit, with the effect of nadir hematocrit on AKI being stronger among male patients than female patients. This suggests that men may be more susceptible to the risks associated with lower nadir hematocrit levels (PUBMED:26296273). Further research indicated that despite women experiencing greater hemodilution, they had a lower relative risk of AKI and death than men at lower nadir hematocrit values on cardiopulmonary bypass (CPB), suggesting better tolerance to hemodilution in women (PUBMED:23673070).
In summary, while some studies suggest that sex may play a role in the relationship between nadir hematocrit on bypass and rates of AKI, with men potentially being at greater risk at lower hematocrit levels, other studies do not find a significant sex-related difference. The evidence indicates that both sexes could benefit from strategies aimed at minimizing exposure to low nadir hematocrit during CPB to reduce the risk of AKI.
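Several of the findings summarized above hinge on an interaction between sex and nadir hematocrit. As a rough illustration of how such sex-specific adjusted odds ratios are obtained, the following sketch fits a logistic model with an interaction term on simulated data; all variable names, covariates, and effect sizes are invented for the example and are not taken from the cited studies.

```python
# Minimal sketch, not the cited studies' code: sex-specific effects of nadir
# hematocrit on AKI via an interaction term. The variable names, simulated data,
# and effect sizes are assumptions made for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "nadir_hct": rng.normal(26, 4, n),
    "age": rng.normal(65, 11, n),
})
# Build in a stronger per-unit effect of low hematocrit for men
slope = np.where(df["male"] == 1, 0.10, 0.02)
logit = -3 + slope * (30 - df["nadir_hct"]) + 0.03 * (df["age"] - 65)
df["aki"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.logit("aki ~ nadir_hct * male + age", data=df).fit(disp=False)

# Adjusted OR per 1-unit DECREASE in nadir hematocrit, by sex
or_women = np.exp(-fit.params["nadir_hct"])
or_men = np.exp(-(fit.params["nadir_hct"] + fit.params["nadir_hct:male"]))
print(f"OR per 1% lower nadir Hct, women: {or_women:.2f}")
print(f"OR per 1% lower nadir Hct, men:   {or_men:.2f}")
print(f"interaction p-value: {fit.pvalues['nadir_hct:male']:.3g}")
```

A small interaction p-value in such a model is what justifies reporting separate odds ratios for men and women rather than a single pooled estimate.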
Instruction: Are racial/ethnic disparities in the prevalence of nocturia due to socioeconomic status? Abstracts: abstract_id: PUBMED:19233415 Are racial/ethnic disparities in the prevalence of nocturia due to socioeconomic status? Results of the BACH survey. Purpose: Racial/ethnic disparities in nocturia prevalence have been reported previously. We estimated nocturia prevalence rates by race/ethnicity and determined the contribution of socioeconomic status to potential differences by race/ethnicity. Materials And Methods: The Boston Area Community Health Survey used a multistage stratified design to recruit a random sample of 5,501 adults, including 2,301 men and 3,200 women, who were 30 to 79 years old. Nocturia was defined as voiding more than once per night in the last week or voiding more than once per night fairly often, usually or almost always in the last month. Self-reported race/ethnicity was defined as black, Hispanic and white. Socioeconomic status was defined as a combination of education and household income. Results: The overall prevalence of nocturia was 28.4% with a higher prevalence in black and Hispanic participants compared to white participants (38.6% and 30.7%, respectively, vs 23.2%), a trend that was consistent by gender. After adjusting for socioeconomic status the increased odds of nocturia in Hispanic men disappeared (adjusted OR 1.04, 95% CI 0.71, 1.52), while the OR in black men was attenuated but remained statistically significant (OR 1.57, 95% CI 1.12, 2.21). In women the association between race/ethnicity and nocturia was attenuated but remained statistically significant after adjusting for socioeconomic status. Conclusions: Socioeconomic status accounts for part of the racial/ethnic disparities in prevalence of nocturia. The effect of socioeconomic status was more pronounced in men and in Hispanic participants, while differences in nocturia prevalence remained significant in black men and women. abstract_id: PUBMED:22847394 The prevalence of lower urinary tract symptoms (LUTS) and overactive bladder (OAB) by racial/ethnic group and age: results from OAB-POLL. Aims: To estimate the prevalence of LUTS and OAB in a large, ethnically diverse US study. Methods: This cross-sectional, population-representative survey was conducted via the Internet in the US among 10,000 men and women aged 18-70 (2,000 African-Americans [AA], 2,000 Hispanics, 6,000 whites). The LUTS tool assessed how often participants experienced LUTS during the past 4 weeks on a five-point Likert scale. OAB was defined by the presence of urinary urgency ≥ "sometimes" or ≥ "often," and/or the presence of urgency urinary incontinence (UUI). Descriptive statistics were used to evaluate group differences. Logistic regression analyses were conducted to examine the impact of racial/ethnic group on OAB. Results: Response rate, 56.7%. Prevalent LUTS included terminal dribble and nocturia across gender, post-micturition leaking (men), and stress incontinence (women). Prevalence of OAB ≥ "sometimes" and ≥ "often" were 17% and 8% in men and 30% and 20% in women--with significantly higher rates among AA men and women. A similar trend was found for UUI among men (AA, 10%; Hispanic and whites, 6%), while AA and white women had higher prevalence of UUI (19%) as compared to Hispanic women (16%). In logistic regression analyses, AA and Hispanic men and women were significantly more likely than whites to have OAB despite having lower prevalence of self-reported comorbid conditions and risk factors. 
Conclusions: LUTS and OAB are highly prevalent in both men and women and increase with advancing age. Further, racial/ethnic group is a robust predictor of OAB in men and women. abstract_id: PUBMED:12031373 Prevalence of and racial/ethnic variation in lower urinary tract symptoms and noncancer prostate surgery in U.S. men. Objectives: To estimate the proportion of U.S. men affected by specific lower urinary tract symptoms (LUTS) and to assess whether the prevalence of LUTS varies by race/ethnicity. Methods: Included were 30+-year-old men who took part in the Third National Health and Nutrition Examination Survey. Men were asked whether they experienced nocturia, incomplete emptying, or hesitancy. Men 60+ years old were also asked whether they had a decreased urinary stream or had ever undergone noncancer prostate surgery. To obtain the estimated prevalences for the U.S. population, we applied sampling fraction weights. We calculated age-adjusted odds ratios (ORs) of 3+ symptoms or surgery by race/ethnicity using logistic regression analysis. Results: Only nocturia was common in 30 to 59-year-old men. Among men who had not had prostate surgery, 59.9% of men 60 to 69 years old and 75.1% of men 70+ years old had at least one symptom. All four symptoms were reported by 3.0% of men 60 to 69 years and 5.6% of men 70+ years old. Of the men 60 to 69 years old and men 70+ years old, 8.0% and 22.4%, respectively, reported having undergone surgery. In men 60+ years old, the age-adjusted OR for either having 3+ symptoms or surgery was 0.8 for non-Hispanic black men compared with non-Hispanic white men. The odds of having 3+ symptoms (OR = 1.6), but not surgery (OR = 1.1), appeared greater for Mexican-American men than for non-Hispanic white men. Conclusions: Specific LUTS are common in older U.S. men. Older black men were not more likely to have LUTS than were older white men. The apparent modestly higher prevalence of LUTS in older Mexican-American men requires additional study. abstract_id: PUBMED:23220718 Cohort profile: the Boston Area Community Health (BACH) survey. The Boston Area Community Health (BACH) Survey is a community-based, random sample, epidemiologic cohort of n = 5502 Boston (MA) residents. The baseline BACH Survey (2002-05) was designed to explore the mechanisms conferring increased health risks on minority populations with a particular focus on urologic signs/symptoms and type 2 diabetes. To this end, the cohort was designed to include adequate numbers of US racial/ethnic minorities (Black, Hispanic, White), both men and women, across a broad age of distribution. Follow-up surveys were conducted ∼5 (BACH II, 2008) and 7 (BACH III, 2010) years later, which allows for both within- and between-person comparisons over time. The BACH Survey's measures were designed to cover the following seven broad categories: socio-demographics, health care access/utilization, lifestyles, psychosocial factors, health status, physical measures and biochemical parameters. The breadth of measures has allowed BACH researchers to identify disparities and quantify contributions to social disparities in a number of health conditions including urologic conditions (e.g. nocturia, lower urinary tract symptoms, prostatitis), type 2 diabetes, obesity, bone mineral content and density, and physical function. BACH I data are available through the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) Central Repositories (www.niddkrepository.org). 
Further inquiries can be made through the New England Research Institutes Inc. website (www.neriscience.com/epidemiology). abstract_id: PUBMED:31056998 Healthcare-seeking with bothersome lower urinary tract symptoms among men in the Danish population: the impact of lifestyle and socioeconomic status. Objective: (1) To identify possible factors of importance for reporting lower urinary tract symptoms (LUTS) among men and (2) to examine possible associations between socioeconomic status (SES), lifestyle factors, and likelihood of men contacting a general practitioner (GP) regarding LUTS reported to be of concern or influencing daily activities (bothersome LUTS). Design: Nationwide population-based, cross-sectional survey. Data was collected in 2012. Setting: The general Danish population. Subjects: A total of 48,910 randomly selected men aged 20+. Main Outcome Measures: (1) Odds ratios for reporting LUTS by lifestyle and SES, and (2) Odds ratios for GP contact with bothersome LUTS by lifestyle and SES. Results: 23,240 men participated (49.8%). Nocturia was the most commonly experienced LUTS (49.8%). Incontinence was most often reported as bothersome (64.1%) and nocturia less often reported as bothersome (34.2%). Only about one third of the men reporting a bothersome LUTS contacted their GP. Odds for reporting LUTS significantly increased with increasing age, obesity, and lack of labor market affiliation. Increasing age and symptom burden significantly increased the odds for GP contact regarding bothersome LUTS. No overall associations were found between lifestyle, SES, and GP contact. Conclusion: Bothersome LUTS are common among Danish men. Concern and influence of LUTS on daily activities are important determinants of GP contact, yet only one in three bothersome LUTS are discussed with a GP. Advanced age and symptom burden were significantly associated with GP contact. Implications: Information on treatment options for LUTS might be desirable among Danish men regardless of SES and lifestyle. Key points: Urological symptoms are common among men in the Danish population and are often managed without contacting healthcare professionals. Increasing age and symptom burden significantly increase the likelihood of consulting a general practitioner regarding bothersome urological symptoms. Healthcare-seeking behavior with bothersome urological symptoms is not influenced by lifestyle or socioeconomic status among Danish men. Information about available, effective treatment options for urological symptoms might be desirable among men regardless of socioeconomic status and lifestyle. abstract_id: PUBMED:18796363 Prevalence of lower urinary tract symptoms in male aborigines and non-aborigines in eastern Taiwan. Background/purpose: Lower urinary tract symptoms (LUTS) are highly prevalent in men. The relationship of LUTS with socioeconomic level has not been investigated in Asian people. This study investigated the prevalence of LUTS in men and between aborigines and non-aborigines in eastern Taiwan, and their association with demographic, socioeconomic and behavioral characteristics. Methods: A survey of LUTS in adult male community residents of Hualien district was conducted in 2006. LUTS were assessed using questionnaires for the International Prostate Symptom Score (IPSS) and Quality of Life Index (QoL-I). The prevalence of each bothersome LUTS, moderate LUTS (IPSS>8) and impaired QoL (QoL-I>4) were compared between aborigines and non-aborigines.
The association between the prevalence of LUTS and socioeconomic status was also compared between the two groups. Results: There were 196 aborigines (32%) and 419 non-aborigines (68%) who completed the questionnaires. Nocturia was the most common LUTS in aboriginal and non-aboriginal men. The prevalence of moderate LUTS increased with age (p<0.001) in men overall. Significantly higher prevalence rates of frequency (11.7% vs. 7.2%, p=0.044) and nocturia (31.1% vs. 19.8%, p=0.002) were found in aborigines than non-aborigines. However, there was no significant difference in the prevalence of moderate LUTS (12.8% vs. 8.8%, p=0.220) or impaired QoL (24.3% vs. 19.9%, p=0.088) between aborigines and non-aborigines. The higher prevalence of frequency and nocturia in aborigines was associated with significantly lower educational and economic levels, which were also associated with a higher prevalence of alcohol drinking, betel quid chewing and cigarette smoking. Conclusion: Nocturia was the most common LUTS in community dwelling aboriginal and non-aboriginal adult males. A low socioeconomic level was associated with a higher prevalence rate of moderate LUTS and impaired QoL, especially in aborigines. abstract_id: PUBMED:20434031 Prevalence of lower urinary tract symptoms in indigenous and non-indigenous women in Eastern Taiwan. Background/purpose: To investigate the prevalence of lower urinary tract symptoms (LUTS), their impact on quality of life, and their association with socioeconomic and lifestyle factors among indigenous and non-indigenous women in Eastern Taiwan. Methods: A total of 376 indigenous women and 509 non-indigenous women aged over 18 years were interviewed concerning LUTS in the recent 6 months using International Prostate Symptom Score questionnaires. Results: Indigenous women had a higher prevalence of one or more LUTS than non-indigenous women (44.9% vs. 31.2%). Indigenous women had a significantly higher prevalence of urgency (7.7% vs. 4.3%, p = 0.024), straining to void (6.1% vs. 3.3%, p = 0.036), and nocturia (37.2% vs. 24.8%, p < 0.001) than non-indigenous women. There was no significant difference in the prevalence of impaired quality of life between indigenous and non-indigenous women (33.8% vs. 31.2%). Lower educational level, alcohol consumption, betel quid chewing, and cigarette smoking, but not race, had a significant effect on the higher prevalence of bothersome LUTS in indigenous women compared with non-indigenous women. Conclusion: Indigenous women with lower educational level and specific lifestyle risk factors have a higher prevalence of LUTS than non-indigenous women in Taiwan. abstract_id: PUBMED:21251675 Prevalence of nocturia in United States men: results from the National Health and Nutrition Examination Survey. Purpose: We estimated the prevalence of nocturia in the United States male population and identified associated factors. Materials And Methods: Data were analyzed for 5,297 men (20 years old or older) who participated in the 2005-2006 and 2007-2008 cycles of the NHANES, a cross-sectional survey of the United States noninstitutionalized population. Getting up 2 or more times at night to urinate was coded as nocturia. Potential factors included age, race/ethnicity, education, waist circumference, self-reported health status, chronic diseases, and prior diagnosis of benign prostatic enlargement and/or prostate cancer (men 40 years old or older).
Prevalence and prevalence odds ratios were estimated from a multivariable logistic regression analysis using appropriate sampling weights. Results: The prevalence of nocturia was 21% (weighted 95% CI 19.3-23.0). Nocturia increased in prevalence with age (p&lt;0.001) from 8.2% (CI 6.7-10.2) in men 20 to 34 years old up to 55.8% (CI 51.3-60.2) in men 75 years old or older. More nonHispanic black men had nocturia (30.2%, CI 26.7-34.1) than other racial/ethnic groups (20.1%, CI 18.1-22.1, p&lt;0.001). Significant factors included 10-year increase in age (POR 1.5, CI 1.5-1.6), nonHispanic black race/ethnicity (POR 2.0, CI 1.6-2.7), fair/poor self-rated health (POR 1.5, CI 1.2-1.9), major depression (POR 2.5, CI 1.6-3.9), hypertension (POR 1.4, CI 1.0-1.9) and arthritis (POR 1.3, CI 1.0-1.7). Among men 40 years old or older benign prostatic enlargement (POR 1.4, CI 1.1-1.8) and prostate cancer (POR 1.6, CI 1.0-2.4) were associated with nocturia. Conclusions: After adjusting for age and race norms nocturia was common among United States men. NonHispanic black men had greater odds of nocturia even when controlling for other factors. abstract_id: PUBMED:33497070 Evaluating the Prevalence of Pelvic Floor Disorders in Women in Nonmetropolitan Communities. Introduction: Prevalence rates of pelvic floor disorders in women in nonmetropolitan communities compared with metropolitan communities are unknown. We hypothesize that the rates are higher in women in nonmetropolitan communities. Methods: We accessed the health survey data from the NHANES (National Health and Nutrition Examination Survey) from 2009 to 2010 divided by geocodes into nonmetropolitan (&lt;50,000 inhabitants) and metropolitan communities (&gt;50,000). Responses were analyzed for the following conditions: urinary incontinence and nocturia, bowel urgency and anal incontinence, and symptomatic vaginal bulge. Age-adjusted prevalence rates were estimated using logistic regression. Results: The 2009-2010 NHANES survey was completed nationwide by 302 women in nonmetropolitan communities and 2201 women in metropolitan communities. Overall, prevalence rates of PFDs did not significantly differ between groups. Prevalence rates of urinary incontinence and nocturia at least weekly were similar between metropolitan and nonmetropolitan groups (16.2% vs 14.6%, P = 0.47), with stress incontinence being more common than urgency and other types of incontinence (40% vs 23% and 8%). Women in metropolitan communities reported more bowel urgency than women in nonmetropolitan communities (33.3% vs 26.8%, P = 0.02); however, prevalence rates between both groups are similar with regards to anal incontinence at least once a month or more (9.2% vs 9.0%, P = 0.76). Prolapse symptoms were also not significantly different between the groups (2.4% in both). There was an increase in prevalence in each of the pelvic floor disorder symptoms with age. Conclusions: Pelvic floor disorders are prevalent and increase with age in women in nonmetropolitan communities at rates similar to women in metropolitan communities. abstract_id: PUBMED:15951907 Investigation of voiding dysfunction in a population-based sample of children aged 3 to 9 years Objective: To describe voiding patterns and related dysfunctions in a population-based sample of children aged 3 to 9 years. Methods: A cross-sectional population-based survey including 580 children. A probabilistic sample of households in the urban area of Pelotas in southern Brazil was selected following a multiple-stage protocol. 
Voiding and fecal patterns were investigated using the dysfunction score created by Farhat et al. and modified by the addition of high urinary frequency (more than eight times a day). Boys with scores above eight and girls above five were clinically investigated, as well as a sub-sample of the remaining children. Results: Nocturia (60.4%), urinary urgency (49.7%) and holding maneuvers (42.1%) were the most frequently reported symptoms. The prevalence of enuresis was 20.1% in boys and 15.1% in girls. The prevalence of urinary dysfunction was 22.8%. Most symptoms were more frequently reported by girls and younger children. Among girls, low socioeconomic level was related to an increased prevalence of enuresis and straining to urinate, while among boys urgency was more common among the poor. Only 10.5% of the parents of the children with voiding dysfunction consulted a doctor because of their problems. Conclusions: The voiding symptoms studied presented high prevalence rates, and therefore should be investigated in clinical practice, with direct questions about each symptom, aiming to diagnose voiding dysfunction. Answer: The relationship between racial/ethnic disparities in the prevalence of nocturia and socioeconomic status (SES) has been explored in several studies. The Boston Area Community Health (BACH) Survey found that the overall prevalence of nocturia was higher in black and Hispanic participants compared to white participants. After adjusting for SES, the increased odds of nocturia in Hispanic men disappeared, while the odds in black men were attenuated but remained statistically significant. In women, the association between race/ethnicity and nocturia was attenuated but remained statistically significant after adjusting for SES (PUBMED:19233415). This suggests that SES accounts for part of the racial/ethnic disparities in the prevalence of nocturia, with the effect being more pronounced in men and in Hispanic participants. Another study, the OAB-POLL, found that racial/ethnic group is a robust predictor of overactive bladder (OAB) in men and women, with African-American men and women having significantly higher rates of OAB despite having lower prevalence of self-reported comorbid conditions and risk factors (PUBMED:22847394). This indicates that factors beyond SES may contribute to the disparities in nocturia and OAB prevalence. The Third National Health and Nutrition Examination Survey (NHANES) also looked into the prevalence of lower urinary tract symptoms (LUTS) and found that specific LUTS are common in older U.S. men, but older black men were not more likely to have LUTS than older white men (PUBMED:12031373). This contrasts with the findings from the BACH survey and suggests that the relationship between race/ethnicity, SES, and nocturia may be complex and not fully explained by SES alone. In summary, while socioeconomic status does contribute to racial/ethnic disparities in the prevalence of nocturia, it does not fully account for these differences. Other factors, potentially including biological differences, access to healthcare, cultural practices, and other social determinants of health, may also play a role in the observed disparities (PUBMED:19233415; PUBMED:22847394; PUBMED:12031373).
Instruction: Is sentinel lymph node mapping indicated for isolated local and in-transit recurrent melanoma? Abstracts: abstract_id: PUBMED:14578738 Is sentinel lymph node mapping indicated for isolated local and in-transit recurrent melanoma? Objective: To determine the feasibility of sentinel lymph node mapping in local and in-transit recurrent melanoma. Summary Background Data: The accuracy of intraoperative lymphatic mapping and sentinel lymphadenectomy (LM/SL) for identification of occult lymph node metastases is well established in primary melanoma. We hypothesized that LM/SL could be useful to detect regional node metastases in patients with isolated local and in-transit recurrent melanoma (RM). Methods: Review of our prospective melanoma database of 1600 LM/SL patients identified 30 patients who underwent LM/SL for RM. Patients with tumor-positive sentinel nodes (SNs) were considered for completion lymph node dissection. Results: Of the 30 patients, 17 were men and 13 were women; their median age was 57 years (range, 29-86 years). Primary lesions were more often on the extremities (40%) than the head and neck (33%) or the trunk (8%). At least 1 SN was identified in each lymph node basin that drained an RM. Of the 14 (47%) patients with tumor-positive SNs, 11 (78%) underwent complete lymph node dissection; 4 had tumor-positive non-SNs. The median disease-free survival after LM/SL was 16 months (range, 1-108 months) when an SN was positive and 36 months (range, 6-132 months) when SNs were negative. At a median follow-up of 20 months (range, 2-48 months), there were no dissected basin recurrences after a tumor-negative SNs. Conclusions: LM/SL can accurately identify SNs draining an RM, and the high rate of SN metastases and associated poor disease-free survival for patients with tumor-positive SN suggests that LM/SL should be routinely considered in the management of patients with isolated RM. abstract_id: PUBMED:18794198 In-transit lymph node metastases in breast cancer: a possible source of local recurrence after Sentinel Node procedure. Aims: In-transit lymph node metastases are a common phenomenon in melanoma patients and have been increasingly recognised since the introduction of the Sentinel Node (SN) procedure. To which extent this also occurs in patients with breast cancer has not been studied yet. The aim of this study was therefore to explore the occurrence of in-transit lymph node metastases in patients with breast cancer. Methods: Afferent lymph vessels to the SN identified by blue dye were removed from 17 patients with breast cancer during a regular SN procedure. Results: Three out of 17 patients showed a lymph node associated with the afferent lymph vessel. One of these lymph nodes showed a breast cancer macrometastasis, to be regarded as an in-transit metastasis. This metastasis would normally have been left in situ. Conclusions: In-transit lymph nodes associated with the afferent SN lymph vessels seem to occur in a significant proportion of patients with breast cancer. These lymph nodes may contain metastases, which are a potential source of local recurrence when left in situ. This finding generates the hypothesis that there may be an indication to remove these lymph vessels during the SN procedure. abstract_id: PUBMED:15278236 Sentinel lymph node in melanoma The procedure of sentinel node biopsy (SNB) has emerged as an important advance especially with respect to staging of malignant melanoma. 
Elective (prophylactic) lymph node dissection that had been practiced in primary melanoma with a suspected increased risk of (clinically occult) lymphatic metastasis has been replaced by SNB. Patients with proven metastatic involvement of the sentinel node (12-25%) can be specifically selected for regional lymph node dissection. Metastatic involvement of the sentinel node (SN) is a significant independent prognostic factor. The value of detecting metastasis by highly sensitive diagnostic tools such as RT-PCR is just as uncertain as is the therapeutic benefit of operative or conservative therapies in sentinel node-positive patients with respect to improving prognosis and is currently under study. abstract_id: PUBMED:16631558 Is there increased risk of local and in-transit recurrence following sentinel lymph node biopsy? Recent publications have suggested that sentinel lymph node biopsy (SLNB) and completion lymphadenectomy (CLND) increase the rate of local and in-transit disease up to 23% in sentinel node positive group of patients with cutaneous melanoma. This retrospective study combined the data from two national centres on local and in-transit disease in 972 melanoma patients who underwent SLNB procedure over 6.5 years period. In total, 77 patients (7.9%) developed loco-regional recurrence: 41 (4.2%) local recurrence only and 36 patients (3.7%) in-transit metastases during a mean follow-up of 42 months. Patients with positive sentinel lymph node were three times more likely to develop loco-regional metastases than those with no nodal disease (17 vs. 5.6%). Over one third of all recurrences developed following excision of thick (Breslow thickness over 4mm) primary tumours. In both centres age and Breslow thickness were found to be significantly higher in the recurrence group (p&lt;0.001 for both). This study revealed a strong association between increased risk of loco-regional metastases and aggressive tumour biology and adverse patients factors. No conclusive evidence was found to support an increased incidence in patients undergoing SLNB and CLND compared to that published for patients undergoing wide local excision alone. abstract_id: PUBMED:24922694 The role of sentinel lymph node biopsy in patients with local recurrence or in-transit metastasis of melanoma. From January 2003 to March 2010, a prospective study was undertaken at the National Cancer Research Institute of Genoa in 15 patients with melanoma who had local recurrence (LR) or a few (≤ 3) in-transit metastases and clinically-negative regional lymph nodes with the aim of defining: i) the feasibility of sentinel node re-staging (r-sN) of the regional nodal basin; ii) the prognostic value of sentinel node status, and iii) the potential benefit in terms of disease-free survival and overall survival in patients with an histologically-positive sentinel node undergoing therapeutic regional lymph node dissection. Preoperative lymphoscintigraphy was performed to identify the r-sN: the radiotracer was intra-dermally injected around the LR or in-transit metastasis. Moreover, 10 min prior to the operative procedure, 0.5 ml intradermal injection of Patent-Blue-V dye was given around each LR or in-transit metastasis site, so that r-sN identification was achieved by both visualization of the nodal blue dye staining and the information supplied by gamma-detection probe. At least one sentinel node was intra-operatively identified in each patient, and a tumor-positive r-sN was required in four out of fifteen patients. 
The interval between the diagnosis of primary melanoma and the onset of recurrence was longer, although not significantly, in patients with tumor-negative r-sN, as compared to tumor-positive r-sN (49 ± 47 months vs. 25 ± 19 months, p=0.342). There was a trend toward an improved 1-, 3-, and 5-year disease-free survival and overall survival in patients with tumor-negative r-sN as compared to tumor-positive r-sN. Hence, the r-sN proved to be a feasible and accurate staging procedure even in patients with a few localizations of LR or in-transit metastases (≤ 3). r-sN identified those with a more favorable prognosis, supporting an aggressive therapeutic approach in the natural history of their disease; moreover, an unnecessary regional lymph node dissection was safely avoided in 11 out of 15 (73.3%) patients because they had a tumor-negative r-sN. abstract_id: PUBMED:31482542 In-transit sentinel lymph nodes predicted by F-18 Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography. Although the in-transit lymph node is a well-known issue in malignant melanoma, it is not documented sufficiently in breast cancer. In this case report an in-transit lymph node demonstrated by both PET/CT and sentinel lymph node mapping and diagnosed by pathology is reported. abstract_id: PUBMED:12599031 Lymphatic mapping and sentinel lymph node biopsy in gastric cancer Introduction: Lymphatic mapping and the sentinel lymph node (SLN) concept has been validated in malignant melanoma and breast cancer. However, the application for other solid tumors is still controversial. One of the most promising approaches is selective lymph node staging in gastric cancer. The presented pilot study evaluated the feasibility of the radiocolloid technique in gastric cancer patients and its value in predicting a positive nodal status. Patients And Methods: Fifteen patients with gastric cancer (u T(1-3)) underwent endoscopic submucosal injection of 0.4 ml 60 MBq (99m)Tc-Nanocis around the tumor 17 (+/-3) h prior to surgery. After laparotomy the activity of all 16 (JGCA) lymph node stations was measured by a handheld probe. All patients underwent standard gastrectomy with systematic D2 lymphadenectomy. After resection the site was scanned for residual activity. All sentinel lymph nodes (SLN's) were removed ex vivo from the resected specimen and processed for intensified histopathologic assessment including serial sections and immunohistochemistry. Results: In 14 of 15 patients at least one or more SLN's were obtained (93%), the median number of SLN's was 3 (1-5). Of the 14 patients, 9 revealed lymph node metastases. In eight of the nine patients the sentinel node(s) correctly predicted metastatic lymph node invasion. In five cases the lymph node station with positive sentinel node(s) was the only positive node station resulting in a sensitivity of 8/9 (89%). In one case immunohistochemical staining revealed micrometastases leading to an upstaging in 1/6 of the initially nodal-negative patients. Conclusion: Lymphatic mapping and sentinel node biopsy using the radiocolloid technique is feasible in gastric cancer. Limited results indicate a correct prediction of the nodal status and the potential of upstaging. Further studies seem to be justified to evaluate the clinical impact of the method. abstract_id: PUBMED:18023573 The application of sentinel lymph node mapping in colon cancer. Lymph node status is the most important prognostic factor for colorectal carcinoma.
Complete lymph node dissection has historically been an integral part of the surgical treatment of these diseases. Sentinel lymph node mapping is a newer technology that allows selective removal of the first node draining a tumor. Sentinel node mapping is well accepted for the management of breast carcinoma and cutaneous melanoma, and has resulted in reduced morbidity without adversely affecting survival. Sentinel node mapping is currently being investigated for treatment of colorectal cancers. Recent studies show promise for incorporating the sentinel node mapping technique for treatment of several gastrointestinal malignancies. abstract_id: PUBMED:15256245 Selective lymphadenectomy in sentinel node-positive patients may increase the risk of local/in-transit recurrence in malignant melanoma. Aim: To determine whether sentinel lymph node biopsy (SLNB) for cutaneous malignant melanoma, particularly when followed by selective lymphadenectomy (SL) if involved nodes are found, alters the incidence of local/in-transit recurrence. Methods: A literature overview of SLNB with or without SL has been performed, concentrating on the reported site(s) of first recurrence, and with specific reference to the incidence of local/in-transit recurrence. This is compared to the incidence after wide local excision (WLE) alone. Results: The incidence of local/in-transit recurrence after WLE alone is 2.5-6.3% over a given range of tumour thickness, and is 9.0% after SLNB (with or without SL). In the latter group, the local/in-transit recurrence rate is 5.7% following SLNB alone in SN-negative patients, and is 20.9% after SLNB plus SL in SN-positive patients. Conclusions: The incidence of local/in-transit recurrence following selective lymphadenectomy in sentinel node-positive patients may be greater than four times the incidence expected. This possible iatrogenic risk should be confirmed or refuted by randomised controlled trial. Until then the SLNB procedure should be regarded as experimental and not performed outside validation trials. abstract_id: PUBMED:25769710 Sentinel lymph node mapping in melanoma in the twenty-first century. The incidence of melanoma is increasing faster than any other cancer. The status of the regional nodal basin remains the most important prognostic factor. Sentinel lymph node biopsy (SLNB) is recommended for staging in patients diagnosed with intermediate-thickness melanoma (1.01-4.0 mm). SLNB is considered somewhat controversial, especially when used to stage thin (1 mm), thick (&gt;4 mm), or desmoplastic melanoma. This article reviews the current literature regarding SLNB in thin, intermediate, thick, and desmoplastic melanoma. Data supporting the use of newer radiopharmaceuticals in sentinel lymph node mapping along with newer imaging modalities are also reviewed. Answer: Yes, sentinel lymph node mapping is indicated for isolated local and in-transit recurrent melanoma. The feasibility of sentinel lymph node mapping in local and in-transit recurrent melanoma has been demonstrated, and it can accurately identify sentinel nodes draining a recurrent melanoma. The high rate of sentinel node metastases and associated poor disease-free survival for patients with tumor-positive sentinel nodes suggests that sentinel lymph node mapping should be routinely considered in the management of patients with isolated recurrent melanoma (PUBMED:14578738). 
Additionally, sentinel lymph node biopsy has emerged as an important advance in the staging of malignant melanoma, allowing for the selection of patients with metastatic involvement of the sentinel node for regional lymph node dissection (PUBMED:15278236). Furthermore, a prospective study showed that sentinel node re-staging in patients with local recurrence or a few in-transit metastases and clinically-negative regional lymph nodes is feasible and accurate. It identified patients with a more favorable prognosis and supported an aggressive therapeutic approach in the natural history of their disease, while avoiding unnecessary regional lymph node dissection in patients with tumor-negative sentinel nodes (PUBMED:24922694). Therefore, the evidence supports the use of sentinel lymph node mapping in the management of patients with isolated local and in-transit recurrent melanoma.
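The disease-free survival contrasts cited above (for example, shorter median disease-free survival with a tumor-positive sentinel node) are the kind of comparison usually summarized with Kaplan-Meier curves and a log-rank test. A minimal sketch follows, assuming the lifelines package and using simulated follow-up times rather than the cited cohorts' actual data.

```python
# Illustration only: Kaplan-Meier disease-free survival by sentinel-node status.
# The lifelines package and the simulated follow-up times are assumptions; these
# are not data from the cited melanoma cohorts.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
n_pos, n_neg = 14, 16
# Months to recurrence or censoring; SN-positive patients relapse sooner here
t_pos = rng.exponential(scale=18, size=n_pos)
t_neg = rng.exponential(scale=40, size=n_neg)
e_pos = rng.binomial(1, 0.8, n_pos)   # 1 = recurrence observed, 0 = censored
e_neg = rng.binomial(1, 0.5, n_neg)

km_pos, km_neg = KaplanMeierFitter(), KaplanMeierFitter()
km_pos.fit(t_pos, event_observed=e_pos, label="SN-positive")
km_neg.fit(t_neg, event_observed=e_neg, label="SN-negative")
print("median DFS (months), SN-positive:", km_pos.median_survival_time_)
print("median DFS (months), SN-negative:", km_neg.median_survival_time_)

res = logrank_test(t_pos, t_neg, event_observed_A=e_pos, event_observed_B=e_neg)
print("log-rank p-value:", res.p_value)
```

With cohorts this small the log-rank test has limited power, which is consistent with the non-significant trends reported in the re-staging study above.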
Instruction: Carotid artery stenting in older patients: is age a risk factor for poor outcome? Abstracts: abstract_id: PUBMED:12431135 Carotid artery stenting in older patients: is age a risk factor for poor outcome? Purpose: To assess the impact of age on technical success and complications of carotid stenting in a prospective single-center cohort study. Methods: One hundred eleven consecutive patients (74 men; median age 70 years) with ≥70% symptomatic (n=33) or ≥90% asymptomatic (n=78) internal carotid artery (ICA) stenosis underwent carotid artery stent implantation. Primary technical success and periprocedural complications were compared in patients aged >75 years (n=28) to patients <75 years (n=83). Results: Patient groups below and above 75 years compared well with respect to baseline demographic and clinical data. Successful stenting was achieved in 108 (97%) patients. The combined neurological complication rate was 7% (n=8), with 1 (1%) major stroke, 1 (1%) minor stroke, and no 30-day mortality. Technical angiographic complications occurred in 8 (7%) patients. No significant differences between patients >75 years and those <75 years were observed for primary success rates (100% [28/28] versus 96% [80/83]; p=0.8), overall complications (14% [4/28] versus 16% [13/83]; p=1.0), neurological complications (7% [2/28] versus 7% [6/83]; p=1.0), or technical complications (7% [2/28] versus 4% [3/83]; p=0.6). Conclusions: Elective carotid stenting can be performed safely in older patients with several comorbidities. Patient age does not seem to be an independent risk factor for poor outcome after endovascular treatment of internal carotid artery stenosis. abstract_id: PUBMED:37657401 Complications and outcomes of carotid artery stenting in high-risk cases. Objectives: Carotid artery stenting is sometimes adapted for some at-risk cases; however, appropriate treatment timing with stroke onset is controversial. This study aims to identify factors that have an impact on complications and outcomes, especially in patients at high risk. Materials And Methods: We examined the characteristics of 152 consecutive patients treated by carotid artery stenting between January 2018 and March 2022 and retrospectively analyzed the risk factors for complications and poor outcomes (modified-Rankin-Scale deterioration), such as patient background, carotid artery stenting risks (access route tortuosity, severe calcification, vulnerable plaque, estimated glomerular filtration rate <30 mL/min/1.73 m2, etc.), characteristics of the stenosis, details of treatment, and treatment timing. Results: The average North American Symptomatic Carotid Endarterectomy Trial criteria score was 68.3% and the lesion length was 20.5±9.7 mm. Among patients, 107 (70.4%) had a carotid artery stenting risk. In high-risk carotid artery stenting cases, symptomatic complications occurred in 32 (30.0%), and the 90-day modified Rankin scale score deteriorated in 15 cases (14.0%). Multivariate analysis showed that cases with triple antithrombotic therapy (p=0.003), stenting within 7 days (p=0.0032), and after 28+ days (p=0.0035) of stroke onset were independently associated with complications. Conclusions: This study showed that among risk factors, triple antithrombotic therapy in particular was a risk factor for perioperative complications. Carotid artery stenting for patients with stroke after 28 days of onset affects the prognosis.
Therefore, although further study is warranted, waiting more than one month for treatment in patients requiring carotid artery stenting is a potential risk. abstract_id: PUBMED:36476091 Age and Five-Year Outcomes After Carotid Artery Stenting in Symptomatic Carotid Stenosis: A Retrospective Cohort Study. Background: Several clinical trials have reported that periprocedural risk of carotid artery stenting (CAS) increase with age. China is experiencing one of the most rapid transitions to an aging society, while the clinical outcomes of CAS in real-world China are still limited. The study aimed to compare the periprocedural and an extending 5-year event rates between younger and older patients treated by CAS to testify the safety of CAS in older patients in China. Methods: This is a single center, retrospective cohort study. Symptomatic patients who underwent CAS from Nov 2011 to June 2014 were retrospectively included in this study, The population was divided into two age groups: &lt;70 and ≥70. The main primary endpoint was stroke, myocardial infarction or death occurring at 30 days, or ipsilateral stroke over 5-year after stenting. Results: A total of 103 symptomatic patients (&lt;70: 68%; ≥70: 32%) with CAS included in the study. During the 30-day period, the rate of primary outcome was 1.0% with only one stroke in patients younger than 70 years old (P = 1.000). After five years, the rate of primary outcome was low (10.9%, 10/92) despite some of the patients had major stroke or underwent bilateral C1 stenting. There was no significant difference in the 5-year rates of the primary outcome between the young and old groups (12.3% vs 7.4%, P = .718). Kaplan-Meier estimates of the proportion of study participants with a primary endpoint showed that there was no significant age-dependent difference of the stroke and death outcome in symptomatic patients. Conclusions: Age (&lt;70 vs ≥ 70) had no influence on the risk of stroke or death in symptomatic patients either in the short or long term. CAS is an optimal treatment for older patients with cervical carotid artery stenosis, and CAS was safe for those with major stroke or bilateral C1 stenting. abstract_id: PUBMED:28919313 Periprocedural Outcomes of Carotid Artery Stenting in Elderly Patients. Background: Carotid artery stenting has emerged as an alternative to carotid endarterectomy especially in patients with high risk of carotid endarterectomy. Older age (≥80 years old) was recognized as one of the high risk factors of carotid endarterectomy. However, the association between older age and increased risk of adverse events for carotid artery stenting has been reported. The purpose of this study was to evaluate the association between age and periprocedural outcomes after carotid artery stenting. Methods: A total of 126 symptomatic and asymptomatic cases of carotid artery stenosis were treated with tailored carotid artery stenting. The type of stents and embolic protection devices were chosen according to clinical and morphologic characteristics of the patients. Procedural, imaging, and clinical outcomes were retrospectively assessed and compared between the elderly patients group (≥80 years old) and the non-elderly patients group (&lt;80 years old). Results: Clinical and morphologic characteristics except for dyslipidemia were not significantly different between the 2 groups. Periprocedural neurologic complications were not significantly different between the 2 groups (P = .095). 
Minor stroke occurred more frequently in the elderly patients group (P = .021). However, the frequency of major stroke was not significantly different between the 2 groups (P = 1). Presence of new ischemic lesions on postprocedural examination was not significantly different between the 2 groups (P = .84). Myocardial infarction and death did not occur in either group. Conclusions: Carotid artery stenting can be performed safely in elderly patients, comparable with non-elderly patients. abstract_id: PUBMED:38403718 Clinical Significance of Carotid Endarterectomy and Carotid Artery Stenting in Older Patients Over 80. As the average life expectancy increases, neurosurgeons are likely to encounter patients aged 80 years and above with carotid stenosis; however, whether old age affects clinical post-treatment outcomes of carotid endarterectomy (CEA) or carotid artery stenting (CAS) remains inconclusive. Thus, this study aimed to evaluate the outcomes following CEA or CAS in patients aged 80 years and above. This study included older over 80 years (n = 34) and younger patients (&lt;80 years; n = 222) who underwent CEA or CAS between 2012 and 2022. All of them were followed up for a mean of 55 months. All-cause mortality, the incidence of vascular events, ability to perform daily activities, and nursing home admission rates were assessed. During follow-up periods, 34 patients (13.3%) died due to coronary artery disease, malignancy, and pneumonia, and the incidence was significantly higher in the elderly group than in the younger group (P = 0.03; HR, 3.01; 95% CI, 1.53-5.56). The incidence of vascular events did not differ between the older group (29.5%) and the younger group (26.9%, P = 0.58); however, the incidence was significantly higher in patients with high-intensity plaques than in those without that (P = 0.008; HR, 2.83, 95%CI, 1.27-4.87). The decline in the ability to perform daily activities and increased nursing home admission rates were high in elderly patients (P &lt; 0.01). Although the mortality rate was higher in the elderly group, subsequent vascular events were comparable to that in the younger group. The results suggest that CEA and CAS are safe and useful treatments for carotid stenosis in older patients, especially to prevent ipsilateral ischemic stroke. abstract_id: PUBMED:24512895 Carotid artery stenting: Influence of experience and cerebrovascular risk factors on outcome. Aim: To evaluate technical success, complications and the influence of the learning curve on outcome in carotid artery stenting (CAS) performed in patients not suitable for surgery. Patients And Methods: One hundred and nine procedures of protected carotid stenting in 103 high risk patients were performed. All patients presented at least one factor that potentially increased the surgical risk of carotid endoarterectomy (CEA), according to SAPPHIRE criteria. Neurologic complications were quantified by the National Institutes of Health Stroke Scale (NIHSS) and were evaluated by median Rankin Scale (mRS). To evaluate the influence of experience of the operator to perform CAS, we retrospectively analyzed periprocedural and neurological complications of the first 50 procedures compared with that of the following 59 interventions. Results: Technical success rate was 98%. Neurological periprocedural complications were revealed in 4.5% of patients. In-hospital and 30-days neurological complications rate was 7.6 and 2.6% respectively. 
The periprocedural neurological complication rate was lower in the later procedures, reflecting the operators' growing confidence and experience. Conclusions: CAS may be performed as an alternative to CEA for the treatment of severe carotid obstructive disease in patients not suitable for surgery. The learning curve positively influences the complication rate. abstract_id: PUBMED:29171201 Carotid Artery Stenting. Carotid artery stenosis is relatively common and is a significant cause of ischemic stroke, but carotid revascularization can reduce the risk of ischemic stroke in patients with significant symptomatic stenosis. Carotid endarterectomy has been and remains the gold standard treatment to reduce the risk of carotid artery stenosis. Carotid artery stenting (CAS) (or carotid artery stent implantation) is another method of carotid revascularization, which has developed rapidly over the last 30 years. To date, the frequency of use of CAS is increasing, and clinical outcomes are improving with technical advancements. However, the value of CAS remains unclear in patients with significant carotid artery stenosis. This review article discusses the basic concepts and procedural techniques involved in CAS. abstract_id: PUBMED:37567132 Predictors of hemodynamic instability during and persistent after carotid artery stenting. Objectives: The risk factors for post-carotid artery stenting severe hemodynamic instability remain elusive. This study aimed to identify the predictors of severe hemodynamic instability during and persisting after carotid artery stenting. Materials And Methods: Consecutive patients who underwent carotid artery stenting for extracranial carotid artery stenosis at a single center between September 2018 and July 2021 were retrospectively assessed. The predictive factors of severe hemodynamic instability intraoperatively and postoperatively were analyzed. Results: Among the 139 patients included, 63 experienced severe hemodynamic instability, with 45 and 18 cases occurring intra- and postoperatively, respectively. Smoke exposure (odds ratio [OR], 2.38; p=0.039), carotid bifurcation stenosis (OR, 0.91; p=0.018), and large-diameter balloon (>4 mm) dilatation (OR, 11.95; p<0.001) were identified as independent risk factors for hemodynamic instability at any stage of carotid artery stenting. Intraoperatively, large-diameter balloon (>4 mm) dilatation was associated with an increased risk of hemodynamic instability occurrence (OR, 4.67; p=0.01), whereas general anesthesia (OR, 0.19; p=0.001) and a longer distance from the stenosis to the carotid bifurcation (OR, 0.89; p=0.01) were negatively associated with hemodynamic instability. Furthermore, smoking exposure (OR, 3.73; p=0.03), large diameter balloon dilatation (OR, 6.12; p=0.032), distance from stenosis to bifurcation (OR, 0.85; p=0.047) and long-stent (40 mm) implantation (OR, 0.84 [95% confidence interval, 0.74-0.95]; p=0.007) could independently predict persistent hemodynamic instability. Conclusion: Patients with a smoking history, lesions near the carotid bulb, or dilatation using a large-diameter balloon were most likely to suffer severe hemodynamic instability. General anesthesia can protect against severe hemodynamic instability only intraoperatively. Long-term stent implantation may reduce persistent hemodynamic instability. abstract_id: PUBMED:25496686 Anatomical risk factors for ischemic lesions associated with carotid artery stenting.
The purpose of this study was to investigate the anatomical risk factors for ischemic lesions detected by diffusion-weighted imaging (DWI) associated with carotid artery stenting (CAS). DWI was performed within four days after CAS in 50 stenotic lesions between January 2008 and September 2013. We retrospectively analyzed the correlation between the anatomical factors and ischemic lesions associated with CAS. Post-procedural DWI revealed new ischemic lesions after 24 (48%) of the 50 CAS procedures. All three patients with common carotid artery tortuosity, defined as the presence of severe angulation (less than 90 degrees) in the common carotid artery, developed new ischemic lesions. However, there were no significant differences between the patients with and without tortuosity, likely due to the small number of cases. Meanwhile, seven of eight patients with internal carotid artery tortuosity, defined as the presence of severe angulation (less than 90 degrees) in the cervical segment of the internal carotid artery, developed new ischemic lesions. A multivariate analysis showed internal carotid artery tortuosity (odds ratio: 11.84, 95% confidence interval: 1.193-117.4, P= 0.035) to be an independent risk factor for the development of ischemic lesions associated with CAS. Anatomical factors, particularly severe angulation of the internal carotid artery, have an impact on the risk of CAS. The indications for CAS should be carefully evaluated in patients with these factors. abstract_id: PUBMED:35743667 Patient-Tailored Stenting versus Endarterectomy for the Treatment of Asymptomatic Carotid Artery Stenosis. Carotid artery stenosis is a major cause of acute ischemic strokes in adults. Given the consequences and sequelae of an acute ischemic stroke, intervention while patients are still asymptomatic is a key opportunity for stroke prevention. Although carotid endarterectomy has been the gold standard of treatment for carotid stenosis for many years, recent advances in carotid stenting technology, practitioner experience, and dual antiplatelet therapy have expanded the use for treatments other than endarterectomy. Review of the current literature has demonstrated that endarterectomy and carotid artery stenting produce overall similar results for the treatment of asymptomatic carotid stenosis, but certain factors may help guide physicians and patients in choosing one treatment over the other. Age 70 years and older, renal disease, poor medication compliance, and unstable plaque features all portend better outcomes from endarterectomy, whereas age under 70 years, high cervical location of disease, cardiac disease, and reliable medication compliance favor stenting. The decision to pursue endarterectomy versus stenting is therefore complex, and although large studies have demonstrated similar outcomes, the approach to treatment of asymptomatic carotid stenosis must be optimized for each individual patient to achieve the best possible outcome. Answer: Based on the provided abstracts, age does not appear to be a significant risk factor for poor outcomes following carotid artery stenting (CAS) in older patients. 
Several studies have specifically investigated the impact of age on the success and complications of CAS: - A prospective single-center cohort study found that elective carotid stenting can be performed safely in older patients with comorbidities, and patient age did not seem to be an independent risk factor for poor outcomes after endovascular treatment of internal carotid artery stenosis (PUBMED:12431135). - A retrospective cohort study in China comparing younger and older patients treated by CAS found no significant difference in periprocedural and 5-year event rates between the two age groups, suggesting that age had no influence on the risk of stroke or death in symptomatic patients either in the short or long term (PUBMED:36476091). - Another study evaluating the association between age and periprocedural outcomes after CAS concluded that the procedure can be performed safely in elderly patients, with outcomes comparable to those of non-elderly patients (PUBMED:28919313). - A study including patients over 80 years old undergoing CEA or CAS found that although the mortality rate was higher in the elderly group, subsequent vascular events were comparable to those in the younger group, suggesting that CEA and CAS are safe and useful treatments for carotid stenosis in older patients (PUBMED:38403718). However, it is important to note that while age alone may not be a significant risk factor, other factors such as patient background, carotid artery stenting risks (e.g., access route tortuosity, severe calcification, vulnerable plaque), and treatment timing can influence complications and outcomes (PUBMED:37657401). Additionally, anatomical factors like severe angulation of the internal carotid artery have been identified as risk factors for ischemic lesions associated with CAS (PUBMED:25496686). In conclusion, the evidence from these studies suggests that CAS can be a safe and effective treatment for carotid stenosis in older patients, and age by itself should not be considered a contraindication for the procedure. However, individual patient factors and anatomical considerations should be taken into account when planning treatment.
Instruction: Hartmann procedure: is it still relevant today? Abstracts: abstract_id: PUBMED:28621183 Colon reconstruction after the Hartmann procedure. Laparoscopy or laparotomy? The Hartmann procedure was primarily introduced for the treatment of cancerous diseases of the sigmoid colon and rectum. In the last few years it was performed in cases with complicated inflammation of the left colon, colon cancers with bowel obstruction or perforation, or in the case of insufficiency of the anastomoses, or in urgent operations after traumatic bowel injuries, when creating an anastomosis carries a very high risk. Later on, the restoration of the continuity of the GI tract can be done traditionally (by laparotomy) or laparoscopically. We present our experiences with the above technique, and also compare our results with traditional open surgeries in our patients. abstract_id: PUBMED:27365576 HARTMANN'S PROCEDURE REVISITED. Between July 1991 and June 1999 out of 176 patients undergoing colonic surgery (elective and emergency), Hartmann's procedure was performed in 63. There were 39 males and 24 females. Mean age was 43 years (range 12 to 81). To assess the utility of Hartmann's procedure the documents of all these 63 patients were retrospectively analysed. In 47 bowel continuity was reestablished. The overall mortality in the first-stage operation (Hartmann's procedure) was 12.7% and mortality in the second-stage procedure (reestablishing bowel continuity) was nil. The complication rate was drastically lower in the second operation. The authors conclude that Hartmann's procedure is safe and effective while dealing with colorectal pathologies, as resection and primary anastomosis is fraught with danger. abstract_id: PUBMED:15943733 Hartmann procedure: is it still relevant today? Background: With more and more centres worldwide resorting to primary anastomosis for most left-sided colonic pathology, the place for a Hartmann procedure seems to be relegated to surgical history books. However, in our centre it is still being performed on a regular basis. As such, we decided to retrospectively look at our results for the procedure. Methods: All hospital records of patients undergoing the Hartmann procedure between January 1998 and December 2001 were retrospectively analysed looking at demographics, comorbidities and indications of the procedure. Results: There were 52 men and 33 women with a median age of 69 years (range 31-96 years). Sixty-six per cent of the patients had medical comorbidities at the time of the operation. The indications for performing the procedure were: 45 patients for cancer (31 patients for obstruction, 11 patients for perforation, two patients for fistulation to other organs and one for uncontrollable bleeding). 19 patients had the procedure for complicated diverticulitis while four patients had anastomotic leaks, which required conversion to the procedure. Other indications included trauma (four patients), ischemic bowel (six patients) and iatrogenic (one patient). Our median operating time was 160 min (range 50-415 min). Our reversal rate was 32%. Our mortality rate for the first stage was 16% and our morbidity, 51%. The morbidity for the reversal was 29%, with no mortalities. Conclusion: Though the idea of primary anastomosis with on-table lavage for left-sided anastomosis seems attractive, we think the Hartmann procedure is still useful for selected patients. abstract_id: PUBMED:21234148 Laparoscopic reversal of Hartmann procedure.
Background: The Hartmann procedure is a standard life-saving operation for acute left colonic complications. It is usually performed as a temporary procedure with the intent to reverse it later on. This reversal is associated with considerable morbidity and mortality by open method. The laparoscopic reestablishment of intestinal continuity after Hartmann procedure has shown better results in terms of decrease in morbidity and mortality. Materials And Methods: The laparoscopic technique was used consecutively in 12 patients for the reversal of Hartmann procedure in the last 3 years. The adhesiolysis and mobilization of the colon was done under laparoscopic guidance. The colostomy was mobilized and returned to abdominal cavity after tying the anvil in the proximal end. An end-to-end intracorporeal anastomosis was performed between the proximal colon and the rectum using the circular stapler. Results: Mean age of the patients was 40 years and the mean time of restoration of intestinal continuity was 130 days. Two patients were converted to open. The mean time of operation was 90 min. There were no postoperative complications and mortality. The mean hospital stay was 5 days. Conclusion: Laparoscopic reversal of Hartmann is technically safe and feasible. abstract_id: PUBMED:30572420 Efficacy and Safety of Laparoscopic Hartmann Colostomy Reversal. Purpose: Hartmann operation is widely recognized as a useful procedure, especially in emergencies involving the rectosigmoid colon. One of the surgeon's foremost concerns after Hartmann operation is future colostomy reversal, as colostomy reversal after a Hartmann procedure is associated with relatively high morbidity and mortality. Laparoscopic surgical techniques continue to prove useful for an ever-increasing variety of indications. We analyzed the outcomes of laparoscopic Hartmann colostomy reversals at our center. Methods: We retrospectively analyzed the hospital records of 170 patients who had undergone Hartmann operation between January 2010 and June 2017 at Wonkwang University Hospital. Among 68 Hartmann colostomy reversals, we evaluated and compared the outcomes of 3 groups of patients: 29 patients in the open colostomy reversal group (OG) who had undergone laparotomies for Hartmann reversals, 19 patients in the conversion group (CG) whose laparoscopic procedures had required conversion to a laparotomy, and 20 patients in the laparoscopy group (LG). Results: The overall reversal rate for Hartmann colostomies was 40.5% during this time period. The duration of hospital stay was significantly shorter among LG patients (10.15 ± 2.94 days) than among OG patients (16 ± 9.5 days). The overall complication rate among OG patients was higher than that among LG patients (adjusted odds ratio, 8.78; P = 0.01). The most common complication was postoperative ileus (19.1%). Conclusion: If no contraindications to laparoscopy exist, surgeons should favor a laparoscopic reversal of Hartmann operation over an open reversal. abstract_id: PUBMED:34504403 Robotic Reversal of Hartmann's Procedure. The Hartmann's procedure first described in 1920 is a gold standard for a variety of emergent procedures of the sigmoid colon. A standardized approach to a robotic reversal of a Hartmann's procedure is described to reestablish bowel continuity. abstract_id: PUBMED:22493768 Laparoscopic reversal of Hartmann's procedure. Reversal of Hartmann's procedure is a major surgical procedure associated with significant morbidity and mortality. 
Because of the difficulty of the procedure, laparoscopic reversal of Hartmann's procedure is not well established. We describe our experience with this laparoscopic procedure to assess its difficulty and safety. Five patients (4 men and 1 woman) underwent laparoscopic reversal of Hartmann's procedure (LRHP). The initial surgeries were performed to manage obstructive colorectal cancer for 4 patients, and rectovesical fistula for one patient. The procedure was laparoscopically completed for 4 patients. Conversion to open laparotomy was required for one patient, secondary to massive adhesion in lower abdomen. Transient ileostomies were made in 2 cases. Operative time ranged from 240 to 545 minutes. There was no operative mortality. LRHP can be performed safely by an experienced surgeon. However, it is still technically challenging and time consuming. abstract_id: PUBMED:37109443 Laparoscopic Hartmann Procedure-A Surgery That Still Saves Lives. Background: A Hartmann operation, which is the intervention by which the lower part of the sigmoid and the upper part of the rectum are resected with the closing of the rectal stump and end colostomy, has as its indications: advanced or complicated rectosigmoid neoplasm, moderate biological condition of the patient, peritoneal sepsis, intestinal occlusion and fragile colonic wall, especially in the context of inflammatory changes. The Hartmann procedure can save lives even at the cost of a stoma reversal failure. Methods: The cases operated with the Hartmann procedure by an open approach or laparoscopic approach in our clinic, between 1 January 2016 and 31 December 2020, were admitted in this study and their medical records were reviewed, also making a comparison between the two types of approach. Univariate statistical comparisons but also a multivariate analysis was performed. Results: We performed 985 operations for intestinal and colonic occlusion (7.15% of the total operations in the clinic), 531 (54%) were non-tumor occlusions and 454 (46%) were occlusive tumors (88 Hartmann operations). Of these, 7.3% were laparoscopically performed (7 laparoscopic Hartmann operations and 23 diagnostic laparoscopies). A total of 11 cases (18%) also had colonic perforation. We compared laparoscopic Hartmann with open Hartmann and observed the benefits of laparoscopy for postoperative morbidity and mortality. The presence of pulmonary and cardiac morbidities is associated with the occurrence of general postoperative morbidities, while peritonitis is statistically significantly associated with the occurrence of local complications that are absent after the laparoscopic approach. Conclusions: The Hartmann procedure is still nowadays an operation widely used in emergency situations. Laparoscopy may become standard for the Hartmann procedure and reversal of the Hartmann procedure, but the percentage of laparoscopy remains low due to advanced or complicated colorectal cancer, poor general condition both at the first and second intervention, and the difficulties of reversal of the Hartmann procedure. abstract_id: PUBMED:36316535 The suspended ileal loop for difficult Hartmann's reversal. Background: Hartmann's reversal can be complicated by the presence of dense adhesions in the upper part of the abdominal cavity, difficulties in freeing the splenic flexure with the risk of splenic tears, a lack of sufficient colonic length, the risk of ureteral lesion, or the risk of lesions of the vascular arcade. 
Methods: We propose a technique which consists of interposing an adapted segment of ileal loop between the end of the proximal colon and the rectum to restore intestinal continuity. Results: Two patients had Hartmann procedure, the first for a Hinchey stage 4 perforated diverticulitis and the second for a colorectal fistula due to ischemia of the proximal colonic segment. Hartmann's reversal was expected to be difficult, so a suspended ileal loop was used. The outcomes were uneventful, and functional results were satisfactory. Conclusions: A suspended ileal loop could be used as a salvage procedure in some cases of potentially difficult Hartmann's reversal. abstract_id: PUBMED:34540468 Outcomes of Laparoscopic vs. Open Reversal of Hartmann's Procedure: A Single Centre Experience. Introduction Hartmann's procedure is widely performed to fix colonic obstruction and perforation. It should ideally be followed by a reversal to restore bowel continuity. Reversal of Hartmann's procedure was traditionally performed using an open technique. However, in recent days, the use of a laparoscopic approach has become increasingly popular. In our retrospective observational study, we aim to investigate the outcomes of laparoscopic versus open reversal of Hartmann's procedure in a UK tertiary centre. Methods All patients who underwent reversal of their Hartmann's procedure between January 2017 and December 2019 were included in the study. Data including demographics, days between primary operation and reversal, laparoscopic or open reversal, length of hospital stay following reversal procedure, 30-day readmission, mortality, and complication rate were collected. Statistical analysis was performed using t-test and chi-squared test. Results Forty-nine patients underwent reversal of Hartmann's procedure from January 2017 to December 2019. The mean age of our cohort was 59.6 ± 13.2 years. There was no significant difference in baseline demographics of both groups, apart from the number of days between the primary operation and reversal procedure. There was also no statistical difference in length of stay, 30-day readmission, and mortality between laparoscopic and open reversal techniques. However, there was a higher incidence of wound complications in patients who underwent open reversal of Hartmann's procedure. Conclusion The reversal of Hartmann's procedure is a challenging operation. We found no significant difference between both open and laparoscopic approaches, but our study might be confounded by various factors including small sample size and selection bias. A larger, randomised study with greater statistical power is needed to confirm our findings. Answer: The Hartmann procedure, which involves the resection of a portion of the colon with the creation of a colostomy, remains relevant today, particularly in emergency situations involving the rectosigmoid colon. Despite advancements in surgical techniques and the increasing popularity of primary anastomosis for left-sided colonic pathology, the Hartmann procedure continues to be a life-saving operation in certain clinical scenarios (PUBMED:15943733, PUBMED:37109443). The procedure is often performed as a temporary measure with the intention of later reversal to restore bowel continuity. However, the reversal is associated with significant morbidity and mortality, especially when performed using the traditional open method (PUBMED:27365576, PUBMED:21234148). 
The laparoscopic approach for the reversal of the Hartmann procedure has shown better results in terms of decreased morbidity and mortality, making it a safer and more feasible option for many patients (PUBMED:21234148, PUBMED:30572420, PUBMED:22493768). The laparoscopic reversal technique involves adhesiolysis, mobilization of the colon, and intracorporeal anastomosis, which has been associated with shorter hospital stays and lower complication rates compared to the open approach (PUBMED:30572420, PUBMED:34540468). Additionally, robotic-assisted reversal of the Hartmann procedure has been described, suggesting an ongoing evolution in the surgical management of these patients (PUBMED:34504403). Despite the benefits of laparoscopic surgery, the Hartmann procedure, whether performed via an open or laparoscopic approach, remains a critical intervention for patients with advanced or complicated rectosigmoid neoplasm, moderate biological condition, peritoneal sepsis, intestinal occlusion, and fragile colonic wall, particularly in the context of inflammatory changes (PUBMED:37109443). In some difficult cases, alternative techniques such as the use of a suspended ileal loop for Hartmann's reversal have been proposed (PUBMED:36316535). In conclusion, the Hartmann procedure continues to be a relevant and important surgical option today, with laparoscopic and robotic techniques offering improved outcomes for the reversal phase. However, the choice of approach must be tailored to the individual patient's condition and the surgeon's expertise (PUBMED:28621183, PUBMED:22493768, PUBMED:37109443).
Instruction: Central corneal thickness in type II diabetes mellitus: is it related to the severity of diabetic retinopathy? Abstracts: abstract_id: PUBMED:7797383 The relation between corneal autofluorescence, endothelial cell count and severity of the diabetic retinopathy. We measured the corneal autofluorescence in groups with different levels of diabetic retinopathy severity (72 eyes of 46 patients) and in age-matched non-diabetic controls (34 eyes of 24 controls). We also estimated the corneal endothelium cell count and pachymetry with a contact specular microscope. For the controls, mean corneal autofluorescence was 8.8 ng equivalents fluorescein/ml (SD 0.3). Results showed increased autofluorescence of the cornea in diabetic patients (mean 17.9 ng equivalents fluorescein/ml, SD 4.2), related to the duration of diabetes (P < 0.05) and to the severity of diabetic retinopathy (P < 0.0001). Corneal endothelial cell count results showed no statistically significant relation to corneal autofluorescence (P < 0.6), indicating that the increased autofluorescence cannot be attributed to a change in corneal cell density. abstract_id: PUBMED:35781596 Evaluation of central corneal epithelial thickness with anterior segment OCT in patients with type 2 diabetes mellitus. Background: This study aimed to evaluate the central corneal thickness (CCT) and central corneal epithelial thickness (CCET) in patients with Type 2 diabetes mellitus (DM), and the effect of the duration of diabetes, the degree of diabetic retinopathy (DR), and HbA1c level. Methods: CCT and CCET values of 72 patients diagnosed with type 2 DM and 72 healthy individuals were measured by anterior segment optical coherence tomography. The eye tear function was evaluated with the Tear Break-up Time test (TBUT) and the Schirmer test. From the results of fundus examination, the diabetic patients were grouped as those without DR, non-proliferative DR, and proliferative DR. The disease duration and the HbA1c levels were recorded. Results: In the diabetic patients, the mean CCT was determined to be thicker (p = 0.025), the CCET was thinner (p = 0.003), and the TBUT and Schirmer values were lower (p <0.001 and p <0.001, respectively). The duration of diabetes and the HbA1c level were not found to have any statistically significant effect on these parameters (p >0.05). The presence of retinopathy had no significant effect on CCT, TBUT, and Schirmer values. The CCET was determined to be thinner in patients with retinopathy (p <0.001). Conclusions: As the corneal epithelial thickness is reduced in patients with advanced diabetic retinopathy, corneal epithelial pathologies can be seen more often. Therefore, early and effective treatment can be started by taking into consideration the complications which may develop in association with the corneal epithelium following surgical procedures, especially those applied to the cornea. abstract_id: PUBMED:28280298 Corneal endothelium and central corneal thickness changes in type 2 diabetes mellitus. Purpose: This study was conducted to compare the corneal endothelial cell density (ECD), morphological features, and central corneal thickness (CCT) in type 2 diabetes mellitus (DM) with age-matched, nondiabetic control subjects using the EM-3000 Specular Microscope. Study Design: This was a prospective, hospital-based, nonrandomized, case-control, observational, and quantitative study.
Subjects And Methods: The study included 57 patients (57 eyes) with type 2 DM and 45 control (nondiabetic) subjects (45 eyes). The corneal endothelial structure and CCT were examined in all eyes by noncontact specular microscopy using the EM-3000 Specular Microscope. The endothelial structure was studied for ECD, coefficient of variation of cell area (CV), and percentage of hexagonal cells. Results: The study included 36 eyes without diabetic retinopathy (DR), 14 eyes with nonproliferative DR, and 7 eyes with proliferative DR. There were 26 eyes with a duration of ≤10 years and 31 eyes with a duration of >10 years. Also, there were 24 eyes with HbA1c ≤7.5% and 33 eyes with HbA1c >7.5%. ECD was significantly lower in the diabetic cornea than in the control group (P=0.014). CV was higher in the diabetic cornea (P=0.008). The diabetic cornea group had a lower percentage of hexagonal cells than the control group, but the difference was not statistically significant (P=0.603). Also, the diabetic cornea was thicker than in the control group, but the difference was not statistically significant (P=0.301). Conclusion: This study documented that type 2 DM causes a significant reduction of ECD and increased CV (polymegathism). Also, the diabetic cornea has increased CCT and a lower percentage of hexagonal cells compared with normal subjects, but without statistical significance. abstract_id: PUBMED:26281334 Central corneal thickness in type II diabetes mellitus: is it related to the severity of diabetic retinopathy? Background/aim: To compare the central corneal thickness (CCT) of type II diabetes mellitus patients with age- and sex-matched healthy subjects and to determine the association between the severity of diabetic retinopathy and CCT. Materials And Methods: Type II diabetes mellitus patients without retinopathy, with nonproliferative retinopathy, and with proliferative retinopathy were organized as the three subgroups of the study group, and an age- and sex-matched control group was formed. All subjects underwent full ophthalmological examination and CCT measurement with ultrasonographic pachymetry. CCT values were compared between diabetic and healthy subjects and between the three diabetic subgroups. Correlation analysis was performed to determine any relationship between CCT and intraocular pressure. Results: The average CCT was significantly higher in diabetic patients than in the control group (P = 0.04). CCT in diabetic patients without retinopathy did not significantly differ from that of patients with retinopathy (P = 0.64). Similarly, there was no significant difference in CCT between nonproliferative and proliferative diabetic retinopathy patients (P = 0.47). In the whole study population, CCT was significantly correlated with intraocular pressure (P < 0.01). Conclusion: CCT is significantly increased in type II diabetes mellitus patients with respect to controls. Retinal disease severity does not seem to have an effect on corneal thickness. abstract_id: PUBMED:34117392 Association of severity of diabetic retinopathy with corneal endothelial and thickness changes in patients with diabetes mellitus. Objective: To analyse the central corneal thickness, endothelial cell density and morphology in patients with diabetes mellitus (DM). Methods: We analysed the corneal endothelium, i.e. central corneal thickness (CCT), endothelial cell density (ECD), coefficient of variation in cell size (CV), and hexagonality (Hex), with specular microscopy in patients with type 2 DM and compared them with age-matched controls.
The influence of diabetic retinopathy (DR) severity, duration of DM, and level of glycosylated haemoglobin (HbA1c) was also analysed. Results: The study cohort included 592 eyes of 592 diabetic patients and 596 eyes of 596 control subjects. A significant difference was found in CCT (522.1 ± 36.6 μm in DM, 514.9 ± 37.1 μm in controls; P = 0.001), ECD (2484.5 ± 299.5 cells/mm² in DM, 2555.9 ± 258.2 cells/mm² in controls; P = 0.017), CV (40.3 ± 6.1 in DM, 37.2 ± 6.1 in controls; P < 0.001) and Hex (39.9 ± 5.2 in DM, 44.6 ± 6.0 in controls; P < 0.001). A longer duration of DM (>10 years) and poor glycaemic control (HbA1c > 7.5%) were associated with similar results. A significantly reduced ECD (P < 0.001) and Hex (P = 0.001) and a higher CV (P = 0.007) and CCT (P = 0.01) were noted when assessed against various stages of DR. Multivariate analysis showed that increasing age was significantly associated with lower ECD (P < 0.001), Hex (P < 0.001), and CCT (P = 0.004), and a higher CV (P < 0.001). Conclusions: DM has deleterious effects on corneal endothelium and thickness. The presence of DR may further warrant a thorough corneal evaluation, especially when planning intraocular surgery. abstract_id: PUBMED:32764865 Evaluation of Corneal Structure and Endothelial Morphological Characteristics in Type 2 Diabetic and Non-Diabetic Patients. Purpose: To compare corneal structure and endothelial morphological features between patients with type 2 diabetes mellitus (DM) and non-diabetic patients; and determine if the DM duration, glycated hemoglobin (HbA1c) levels, and diabetic retinopathy (DR) stage affect corneal morphological properties. Patients And Methods: Sixty diabetic patients and 47 age- and sex-matched controls were enrolled in this cross-sectional study. The DM group was analyzed according to disease duration, HbA1c levels, and presence of retinopathy. Endothelial cell density (ECD) and morphology (average and coefficient of variation [CV] of cell size, percentage of hexagonal cells) were recorded using non-contact specular microscopy. Central corneal thickness (CCT) and corneal volume were measured by Scheimpflug tomography. Univariate and multivariate linear regression analyses were performed to evaluate the relationship of demographic, clinical, and ocular variables with CCT and ECD. Results: There were no statistically significant differences in endothelial cell density or morphology between DM and non-DM groups (p>0.05). Also, there was no statistical difference between groups for CCT or corneal volume (p>0.05). Multivariate linear regression analysis showed that older age (p=0.028) was significantly associated with lower ECD; CCT was found to be significantly greater in males (p<0.001) and positively associated with corneal volume (p<0.001). Conclusion: The present study did not find any statistically significant differences between the corneal structural and endothelial characteristics of diabetic and non-diabetic subjects; other demographic parameters, such as age and gender, seem to be more important determinants of corneal properties. abstract_id: PUBMED:32321301 Effects of diabetes duration and HgA1C level on corneal endothelial morphology.
Objective: The aim of this study was to investigate the effects of the HbA1c level and the duration of diabetes mellitus on corneal endothelial morphology and to compare healthy individuals with diabetes mellitus patients with non-proliferative diabetic retinopathy and proliferative diabetic retinopathy. Material And Methods: Ninety patients who presented to the Health Sciences University Ulucanlar Eye Training and Research Hospital between January 2016 and January 2017 were included in this prospective randomized study. In the study, 45 diabetes mellitus patients and 45 healthy individuals were evaluated. The diabetes patients were compared with the healthy control group in terms of HbA1c level, diabetes mellitus duration, corneal endothelial cell density, coefficient of variation, standard deviation, and hexagonality. Results: A statistically significant difference was found in the endothelial cell density, coefficient of variation, and standard deviation measurements between the diabetes mellitus patients and the control (healthy) group. However, there was no statistically significant difference in hexagonality (6A) or central corneal thickness measurements. In the patients with a diabetes mellitus diagnosis, HbA1c levels and diabetes duration correlated negatively with endothelial cell density values and positively with standard deviation values. There was a statistically significant difference between diabetes mellitus patients with non-proliferative diabetic retinopathy and those with proliferative diabetic retinopathy in the endothelial cell density and standard deviation values. However, there was no statistically significant difference in the coefficient of variation, hexagonality, or central corneal thickness values. Conclusion: The endothelial cell density in diabetes mellitus patients with retinopathy is lower than that in healthy individuals. There is a negative correlation between retinopathy severity and corneal endothelial cell density. Diabetes negatively affects not only vascular tissues but also the avascular cornea. abstract_id: PUBMED:32493325 The findings of corneal specular microscopy in patients with type-2 diabetes mellitus. Background: We aimed to compare the morphological characteristics of corneal endothelial cells in type 2 diabetic patients and age-matched healthy subjects by specular microscopy. We also aimed to determine the association of corneal morphological features with the general characteristics and laboratory data of diabetic patients, including disease duration, haemoglobin A1c (HbA1c) levels and urine albumin creatinine ratio. Methods: A total of 195 diabetic patients and 100 healthy controls were enrolled in the study. All participants underwent a complete ophthalmological examination. Corneal endothelial measurements were performed using noncontact specular microscopy. Laboratory data including serum fasting glucose, haemoglobin A1c levels, creatinine levels, and the urinary albumin-to-creatinine ratio were recorded. Diabetic patients were further subdivided into 3 groups according to the presence and stage of diabetic retinopathy. Specular microscopy findings and central corneal thickness of all patients were compared. Results: The ECD and hexagonal cell ratio were significantly lower, while the average cell size, CV%, and central corneal thickness were significantly higher in diabetic patients than in healthy controls (p = 0.001).
With the presence and advancement of diabetic retinopathy, the ECD and hexagonal cell ratio decreased, while the average cell size, CV%, and central corneal thickness increased. When correlation analysis was performed between corneal morphological features and laboratory data of diabetic patients, ECD showed a significant negative correlation with diabetes duration (p = 0.028), HbA1c levels, and the urinary albumin-creatinine ratio (p = 0.041), while average cell size and CV showed a positive correlation with these parameters. Conclusion: Keratopathy is an important complication of type 2 diabetes. With an increase in the stage of diabetic retinopathy, alterations in corneal findings also increased. In that respect, we can suggest that keratopathy should be evaluated more carefully in diabetic patients. abstract_id: PUBMED:16935596 Decreased density of corneal basal epithelium and subbasal corneal nerve bundle changes in patients with diabetic retinopathy. Purpose: To define alterations in the density of the corneal basal epithelium in relation to diabetic retinopathy severity and the alteration of corneal innervation using confocal microscopy. Design: Prospective case-control study. Methods: Forty-two type II diabetic patients stratified into no diabetic retinopathy (NDR), nonproliferative diabetic retinopathy (NPDR), and proliferative diabetic retinopathy (PDR) groups and 14 age-matched healthy control subjects were studied. Epithelial and stromal cell densities and subbasal nerves were investigated by confocal microscopy. Results: Corneal basal epithelial cell density (P = .0001), nerve fiber density (NFD) (P < .0001), nerve branch density (NBD) (P = .0003), and tortuosity coefficient (Tc) (P < .0001) were significantly different among the four groups. Basal epithelial density was significantly correlated with NFD (r = 0.43, P = .0009), NBD (r = 0.36, P = .007), and Tc (r = -0.58, P < .0001). Conclusions: Patients with diabetes show alterations in corneal innervation and basal epithelial cell density at different retinopathic stages. We demonstrate that reduced basal epithelial cell density is correlated with changes in innervation. abstract_id: PUBMED:27623389 Corneal thickness differences between type 2 diabetes and non-diabetes subjects during preoperative laser surgery examination. Aims: To evaluate the differences in corneal thickness between type 2 diabetes subjects with HbA1c under 7.0% and non-diabetes subjects during their preoperative laser surgery examinations. Methods: The mean of five consecutive corneal thickness measurements at the central and mid-peripheral cornea was obtained by means of noncontact scanning-slit corneal topography (Orbscan Topography System II; Orbscan, Inc., Salt Lake City, UT, USA) in 35 myopic non-insulin-dependent type 2 diabetes subjects (17 males and 18 females) and 48 healthy myopic controls (23 males and 25 females). Results: The corneal thickness values at the central and mid-peripheral cornea were significantly higher in the diabetic group (p<.001). The diabetic subjects presented the highest thickness value in the superior cornea (n=22; 62.9%) followed by the nasal (n=9; 25.7%) and the temporal (n=4; 11.4%) cornea, but never in the inferior cornea. The control subjects presented the highest thickness value in the superior cornea (n=19; 39.6%) followed by the nasal (n=18; 37.5%), the inferior (n=6; 12.5%), and the temporal (n=3; 6.3%) cornea.
The central corneal thickness (CCT) of the diabetes patients was not statistically correlated with their HbA1c (r2=.078; p=.104), body mass index (r2=.007; p=.633), and time from diagnosis of diabetes (r2=.025; p=.363), but it was correlated with their corneal endothelial cell density values (r2=.543; p<.001). Conclusions: Diabetes subjects with HbA1c under 7.0% who are candidates for laser refractive surgery present thicker corneas than their age-matched control subjects. In these patients, there is a correlation between their CCT values and their corneal endothelial cell density values, such that higher CCT values were associated with lower corneal endothelial cell density values. Answer: The relationship between central corneal thickness (CCT) in type II diabetes mellitus and the severity of diabetic retinopathy (DR) has been investigated in several studies, with varying results. Some studies have found a significant increase in CCT in diabetic patients compared to controls, but have not established a clear relationship between CCT and the severity of DR. For instance, one study found that CCT was significantly higher in diabetic patients than in the control group, but there was no significant difference in CCT between diabetic patients without retinopathy and those with nonproliferative or proliferative retinopathy (PUBMED:26281334). Similarly, another study found that the presence of retinopathy had no significant effect on CCT (PUBMED:35781596). Conversely, other studies have suggested that there may be a relationship between CCT and DR severity. One study reported that CCT was significantly greater in diabetic patients, and the corneal epithelial thickness was thinner in patients with retinopathy (PUBMED:35781596). Another study found that CCT was significantly increased in type II diabetes mellitus patients with respect to controls, but retinal disease severity did not seem to have an effect on corneal thickness (PUBMED:26281334). Additionally, a study documented that type 2 diabetes mellitus (DM) causes a significant reduction of endothelial cell density (ECD) and increased coefficient of variation (CV), and while the diabetic cornea has increased CCT and a lower percentage of hexagonal cells compared with normal subjects, these differences were not statistically significant (PUBMED:28280298). Furthermore, a study that included a larger cohort found a significant difference in CCT between diabetic patients and controls, and a significantly reduced ECD and hexagonality (Hex) and higher CV and CCT were noted when assessed against various stages of DR (PUBMED:34117392). This suggests that the presence of DR may further warrant a thorough corneal evaluation, especially when planning intraocular surgery. In summary, while there is evidence that CCT is generally increased in type II diabetes mellitus patients compared to non-diabetic controls, the relationship between CCT and the severity of diabetic retinopathy is not consistently demonstrated across studies. Some studies have found no significant effect of DR severity on CCT, while others have suggested that changes in CCT and other corneal parameters may be associated with the progression of DR.
Instruction: Does early enteral feeding of very low birth weight infants increase the risk of necrotizing enterocolitis? Abstracts: abstract_id: PUBMED:30699425 Early Total Enteral Feeding versus Conventional Enteral Feeding in Stable Very-Low-Birth-Weight Infants: A Randomised Controlled Trial. Objective: To evaluate the effect of early total enteral feeding (ETEF) when compared with conventional enteral feeding (CEF) in stable very-low-birth-weight (VLBW; 1,000-1,499 g) infants on the postnatal age (in days) at attaining full enteral feeds. Methods: In this unblinded randomised controlled trial, 180 infants were allocated to an ETEF (n = 91) or a CEF group (n = 89). Feeds were initiated as total enteral feeds in the ETEF group and as minimal enteral nutrition (20 mL/kg) in the CEF group. The rest of the day's requirement in the CEF group was provided as parenteral fluids. The primary outcome was postnatal age at attaining full enteral feeds. The secondary outcomes included episodes of feed intolerance, incidence of sepsis and necrotising enterocolitis (NEC), and duration of hospital stay. Results: The baseline variables including birth weight and gestational age were similar in the two groups. The infants of the ETEF group attained full enteral feeds earlier than those of the CEF group (6.5 ± 1.5 vs. 10.1 ± 4.1 days postnatal age; mean difference -3.6 [-4.5 to -2.7]; p < 0.001). Total episodes of feed intolerance and clinical sepsis were fewer, with a shorter duration of hospital stay, in the ETEF group (15.5 vs. 19.6 days) (p = 0.01). The incidence of NEC was similar in the two groups. Conclusion: ETEF in stable VLBW infants results in earlier attainment of full feeds and decreases the duration of hospital stay without any increased risk of feed intolerance or NEC. abstract_id: PUBMED:31248308 Early total enteral feeding in stable preterm infants: a systematic review and meta-analysis. Objective: To evaluate safety and feasibility of early total enteral feeding (ETEF) in stable preterm infants. Study Design: Systematic review and meta-analysis of randomized trials comparing incidence of necrotizing enterocolitis (NEC) and feeding intolerance between ETEF and slow rates of enteral feed advancement. Results: Four trials involving 393 very low birth weight (VLBW) infants with birth weight >1000-1200 g were included. Meta-analyses did not show a statistically significant difference in risks for NEC (RR 0.87, 95% CI 0.19-3.98) and feeding intolerance (RR 0.78, 95% CI 0.39-1.59). ETEF resulted in a lower risk of late-onset sepsis (RR: 0.43, 95% CI: 0.30-0.61). Length of hospital stay was reduced in ETEF (mean difference -1.31 days, 95% CI: -1.54 to -1.07). Conclusions: ETEF appears to be safe and feasible in stable VLBW infants with birth weight >1000-1200 g. A large randomized trial is needed to confirm benefits. abstract_id: PUBMED:34836137 Early Enteral Feeding Improves Tolerance of Parenteral Nutrition in Preterm Newborns. (1) Background: The tolerance of preterm newborns for the high nutritional intakes given by parenteral nutrition (PN) is still debated because of the risk of metabolic complications. Despite enteral nutrition (EN) being the preferred route of nutrition, exclusive enteral feeding is not always possible, as the gut of preterm newborns is immature and less tolerant of EN.
We aimed to study the impact of a minimal enteral feeding (MEF) on the possible early metabolic complications of PN in a cohort of preterm infants with a gestational age at birth of ≤29+6/7 weeks. (2) Methods: We divided the study sample into two cohorts: (1) Late-Feeding (cohort 1), newborns who received MEF starting from the 8th day of age, and (2) Early-Feeding (cohort 2), newborns who received MEF, consisting of the administration of at least 4-5 mL/kg/day by the enteral route, within the first 7 days of age. The primary outcome of the study was the rate of at least one metabolic complication, including hyperglycemia, hypertriglyceridemia, or metabolic acidosis. (3) Results: We enrolled 80 newborns (Late-Feeding cohort 51 vs. Early-Feeding cohort 29). The rate of all metabolic complications was significantly higher in the Late-Feeding cohort compared to the Early-Feeding cohort. Binary logistic regression analysis showed that late administration of MEF negatively influenced the rate of all metabolic complications. (4) Conclusions: Early minimal administration of EN is associated with less frequent PN-related metabolic side effects and a higher rate of survival in critically ill newborns. abstract_id: PUBMED:31443906 Early enteral feeding in preterm infants. Early enteral feeding is a potentially modifiable risk factor for necrotising enterocolitis (NEC) and late onset sepsis (LOS); however, enteral feeding practices for preterm infants are highly variable. High-quality evidence is increasingly available to guide early feeding in preterm infants. Meta-analyses of randomised trials indicate that early trophic feeding within 48 h after birth and introduction of progressive enteral feeding before 4 days of life at an advancement rate above 24 ml/kg/day can be achieved in clinically stable very preterm and very low birthweight (VLBW) infants, without higher mortality or incidence of NEC. This finding may not be generalisable to high risk infants such as those born small for gestational age (SGA) or following absent/reversed end diastolic flow velocity (AREDFV) detected antenatally on placental Doppler studies, due to the small number of such infants in existing trials. Trials targeting such high-risk preterm infants have demonstrated that progressive enteral feeding started in the first 4 days is safe and does not lead to higher NEC or mortality; however, there is a paucity of data to guide feeding advancement in such infants. There is little trial evidence to support bolus or continuous gavage feeding as being superior in clinically stable preterm infants. Trials that examine enteral feeding are commonly unblinded for technical and practical reasons, which increases the risk of bias in such trials, specifically when considering potentially subjective outcomes such as NEC and LOS; future clinical trials should focus on objective, primary outcome measures such as all-cause mortality, long-term growth and neurodevelopment. Alternatively, important short-term outcomes such as NEC could be used with blinded assessment. abstract_id: PUBMED:18043702 Does early enteral feeding of very low birth weight infants increase the risk of necrotizing enterocolitis? Background/objective: In this retrospective study, we intended to test whether early enteral feeding (EEF) of very low birth weight (VLBW) preterm babies increases the risk of necrotizing enterocolitis (NEC) or not.
Subjects And Methods: Overall, 297 VLBW preterm babies admitted to the neonatal intensive care unit (NICU) between April 2003 and April 2006 were included. The study consisted of two periods: the first period was between April 2003 and October 2004, when babies were not fed enterally until they were extubated (167 preterm VLBWs). The second period was between November 2004 and April 2006, when babies were fed even when they were intubated, starting preferably on the first day of life (130 preterm VLBWs). Criteria for withholding enteral feeding in both periods were hypotension necessitating vasopressor agent use, abdominal distention, abdominal tenderness and suspected or proven NEC. Possible risk factors for NEC were also recorded. Results: The overall incidence of NEC in VLBW preterm babies was 6.7% and did not differ between the two study periods: 7.2% with the late feeding regimen and 6.2% with the EEF regimen. On logistic regression analysis, the most important risk factors associated with NEC were sepsis (P<0.001) and blood culture positivity (P<0.001). The average daily weight gain was significantly higher in the early fed babies (P=0.011). Conclusions: The EEF of VLBW preterm babies does not increase the risk of NEC. Increased daily weight gain is an important reason to feed these babies earlier. abstract_id: PUBMED:24612934 Early enteral feeding in very low birth weight infants. Background/aim: Debate exists about when to initiate enteral feeding (EF) in very low birth weight (VLBW) preterm infants. This retrospective study compared the effectiveness of an education-based quality improvement project and the relationship of time of the first EF to necrotizing enterocolitis (NEC) or death incidence and parenteral nutrition (PN) days in VLBW infants. Study Design/subjects: VLBW infants born in 2 epochs were compared for hour of the first feed, PN days, NEC or death incidence, and feeding type. The 2 epochs were temporally divided by a quality improvement initiative to standardize initiation of EF in postnatal hours 6-24. Results: 603 VLBW infants were included. Median time of feed initiation decreased from 33 h (Epoch 1) to 14 h (Epoch 2) (p<0.0001). Median PN days were 14 vs. 12, respectively (p=0.07). The incidence of NEC or death was 13.4% vs. 9.5%, respectively (p=0.14). When controlling for birth weight, gestational age, race, gender, and time period, earlier feed initiation was associated with decreased NEC or death (p=0.003). Evaluation of the relationship of early EF (defined as within the first 24 h) in Epoch 2 alone showed that early EF was significantly associated with decreased NEC or death (6.3% vs. 15.1%) (RR, 95% CI=0.28, 0.13-0.58) and fewer PN days (p<0.0001). Conclusions: In a VLBW infant cohort, an education-based process improvement initiative decreased time of EF initiation to a median of 14 h with no associated increase in NEC or death. In fact, results suggest that earlier feeding is associated with decreased NEC or death. abstract_id: PUBMED:15210657 Early enteral feeding and nosocomial sepsis in very low birthweight infants. Background: The interrelations between early enteral feeding, necrotising enterocolitis (NEC), and nosocomial sepsis (NS) remain unclear. Objective: To evaluate the effect of age at the introduction of enteral feeding on the incidence of NS and NEC in very low birthweight (VLBW, <1500 g) infants. Methods: Data were collected on the pattern of enteral feeding and perinatal and neonatal morbidity on all VLBW infants born in one centre during 1995-2001.
Enteral feeding was compared between infants with and without NS and/or NEC. Results: The study sample included 385 infants. Of these, 163 (42%) developed NS and 35 (9%) developed NEC. Enteral feeding was started at a significantly earlier mean (SD) age in infants who did not develop nosocomial sepsis (2.8 (2.6) v 4.8 (3.7) days, p = 0.0001). Enteral feeding was introduced at the same age in babies who did or did not develop NEC (3.1 (2) v 3.7 (3) days, p = 0.28). Over the study period, the mean annual age at the start of enteral feeding fell consistently, and this correlated with the mean annual incidence of NS (r(2) = 0.891, p = 0.007). Multiple logistic regression analysis showed age at start of enteral feeding, respiratory distress syndrome, and birth weight to be the most significant predictors of risk of NS (p = 0.0005, p = 0.024, p = 0.011). Conclusions: Early enteral feeding was associated with a reduced risk of NS but no change in the risk of NEC in VLBW infants. These findings support the use of early enteral feeding in this high-risk population, but this needs to be confirmed in a large randomised controlled trial. abstract_id: PUBMED:33368149 Early full enteral feeding for preterm or low birth weight infants. Background: The introduction and advancement of enteral feeds for preterm or low birth weight infants is often delayed because of concerns that early full enteral feeding will not be well tolerated or may increase the risk of necrotising enterocolitis. Early full enteral feeding, however, might increase nutrient intake and growth rates; accelerate intestinal physiological, metabolic, and microbiomic postnatal transition; and reduce the risk of complications associated with intravascular devices for fluid administration. Objectives: To determine how early full enteral feeding, compared with delayed or progressive introduction of enteral feeds, affects growth and adverse events such as necrotising enterocolitis in preterm or low birth weight infants. Search Methods: We used the standard search strategy of Cochrane Neonatal to search Cochrane Central Register of Controlled Trials; MEDLINE Ovid, Embase Ovid, Maternity & Infant Care Database Ovid, the Cumulative Index to Nursing and Allied Health Literature, and clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials to October 2020. Selection Criteria: Randomised controlled trials that compared early full enteral feeding with delayed or progressive introduction of enteral feeds in preterm or low birth weight infants. Data Collection And Analysis: We used the standard methods of Cochrane Neonatal. Two review authors separately assessed trial eligibility, evaluated trial quality, extracted data, and synthesised effect estimates using risk ratios (RR), risk differences, and mean differences (MD) with 95% confidence intervals (CI). We used the GRADE approach to assess the certainty of evidence. Main Results: We included six trials. All were undertaken in the 2010s in neonatal care facilities in India. In total, 526 infants participated. Most were very preterm infants of birth weight between 1000 g and 1500 g. Trials were of good methodological quality, but a potential source of bias was that parents, clinicians, and investigators were not masked.
The trials compared early full feeding (60 mL/kg to 80 mL/kg on day one after birth) with minimal enteral feeding (typically 20 mL/kg on day one) supplemented with intravenous fluids. Feed volumes were advanced daily as tolerated by 20 mL/kg to 30 mL/kg body weight to a target steady-state volume of 150 mL/kg to 180 mL/kg/day. All participating infants were fed preferentially with maternal expressed breast milk, with two trials supplementing insufficient volumes with donor breast milk and four supplementing with preterm formula. Few data were available to assess growth parameters. One trial (64 participants) reported a slower rate of weight gain (median difference -3.0 g/kg/day), and another (180 participants) reported a faster rate of weight gain in the early full enteral feeding group (MD 1.2 g/kg/day). We did not meta-analyse these data (very low-certainty evidence). None of the trials reported the rate of head circumference growth. One trial reported that the mean z-score for weight at hospital discharge was higher in the early full enteral feeding group (MD 0.24, 95% CI 0.06 to 0.42; low-certainty evidence). Meta-analyses showed no evidence of an effect on necrotising enterocolitis (RR 0.98, 95% CI 0.38 to 2.54; 6 trials, 522 participants; I² = 51%; very low-certainty evidence). Authors' Conclusions: Trials provided insufficient data to determine with any certainty how early full enteral feeding, compared with delayed or progressive introduction of enteral feeds, affects growth in preterm or low birth weight infants. We are uncertain whether early full enteral feeding affects the risk of necrotising enterocolitis because of the risk of bias in the trials (due to lack of masking), inconsistency, and imprecision. abstract_id: PUBMED:28704816 Short versus Extended Duration of Trophic Feeding to Reduce Time to Achieve Full Enteral Feeding in Extremely Preterm Infants: An Observational Study. Background: Trophic feeding compared to no enteral feeding prevents atrophy of the gastrointestinal tract. However, the practice of extending the duration of trophic feeding often delays initiation of full enteral feeding in extremely preterm infants. We hypothesized that a short duration of trophic feeding (3 days or less) is associated with early initiation of full enteral feeding. Methods: A total of 192 extremely preterm infants (23-28 weeks' gestation) born between 2013 and 2015 were included. Infants were divided into 2 groups according to the duration of trophic feeding (short vs. extended). The primary outcome was time to achieve full enteral feeding and the safety outcome was necrotizing enterocolitis (NEC) and/or death. Results: A short duration of trophic feeding was associated with a reduction in time to achieve full enteral feeding after adjustment for birth weight, gestational age, race, sex, type of enteral nutrition, and day of initiation of trophic feeding (mean difference favoring a short duration of trophic feeding: -4.1 days; 95% CI: -2.3 to -5.8; p < 0.001). A short duration of trophic feeding was not associated with a higher risk of NEC and/or death after achieving full enteral feeding (AOR: 0.91; 95% CI: 0.30-2.77; p = 0.87). Conclusions: A short duration of trophic feeding is associated with early initiation of full enteral feeding. A short duration of trophic feeding is not associated with a higher risk of NEC, but our study was underpowered for this safety outcome. Randomized trials are needed to test this study hypothesis.
abstract_id: PUBMED:28431170 Early Total Enteral Feeding in Stable Very Low Birth Weight Infants: A Before and After Study. Background: Fear of necrotizing enterocolitis (NEC) has perpetuated delayed initiation and slow advancement of enteral feeding in very low birth weight (VLBW) infants with inherent risks of parenteral alimentation. The objective of this study was to assess effect of early total enteral feeding (ETEF) on day of achievement of full enteral feeds, feed intolerance, NEC and sepsis. Methods: In total, 208 stable VLBW neonates (28-34 weeks) admitted during 6 month periods of three consecutive years were enrolled. First phase (n = 73) constituted the 'before' phase with standard practice of initial intravenous fluid therapy and slow enteral feeding. The second prospective phase (n = 51) consisted of implementation of ETEF with infants receiving full enteral feeds as per day's fluid requirement since Day 1 of life. The third phase (n = 84) was chosen to assess the sustainability of change in practice. Results: Day of achievement of full feeds was significantly earlier in Phases 2 and 3 compared with Phase 1 (8.97 and 5.47 vs. 14.44 days, respectively, p = 0.0001). Incidence of feed intolerance was comparable between Phases 1 and 2 (22 vs. 14%, p = 0.28), with marked reduction in incidence of NEC (14 vs. 4%, p = 0.028). There was a significant decrease in sepsis, duration of parenteral fluid and antibiotic therapy as well as hospital stay with comparable mortality. Conclusion: In stable preterm VLBW infants, ETEF is safe and has the benefit of optimizing nutrition with decrease in sepsis, NEC and hospital stay. Answer: The available evidence from the provided abstracts suggests that early enteral feeding (EEF) of very low birth weight (VLBW) infants does not increase the risk of necrotizing enterocolitis (NEC). Several studies have investigated the relationship between EEF and NEC, with the following findings: 1. A randomised controlled trial found that early total enteral feeding (ETEF) in stable VLBW infants resulted in earlier attainment of full feeds without any increased risk of feed intolerance or NEC (PUBMED:30699425). 2. A systematic review and meta-analysis of randomized trials comparing ETEF with slow rates of enteral feed advancement did not show a statistical difference in risks for NEC (PUBMED:31248308). 3. A retrospective study concluded that EEF of VLBW preterm babies does not increase the risk of NEC and that increased daily weight gain is an important reason to feed these babies earlier (PUBMED:18043702). 4. Another retrospective study found that earlier enteral feeding initiation was associated with decreased NEC or death in VLBW infants (PUBMED:24612934). 5. An observational study on early enteral feeding in VLBW infants reported that early enteral feeding was associated with a reduced risk of nosocomial sepsis but no change in the risk of NEC (PUBMED:15210657). 6. A Cochrane review included six trials and found no evidence of an effect on NEC when comparing early full enteral feeding with delayed or progressive introduction of enteral feeds in preterm or low birth weight infants (PUBMED:33368149). 7. A before and after study assessing the effect of ETEF on stable VLBW neonates reported a marked reduction in the incidence of NEC with early total enteral feeding (PUBMED:28431170). 
In summary, the collective findings from these studies indicate that EEF does not increase the risk of NEC in VLBW infants and may have additional benefits such as reduced duration of hospital stay and improved growth parameters. However, it is important to note that the quality of evidence and the specific populations studied may vary, and individual clinical decisions should be made in the context of each infant's clinical status and risk factors.
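The pooled effect estimates cited in the answer above (for example the RR of 0.98 with 95% CI 0.38 to 2.54 for NEC) come from an inverse-variance meta-analysis. The Python sketch below shows how such a pooled risk ratio and confidence interval are computed; the three trials and their counts are purely hypothetical illustrations, not the data from the Cochrane review, and a simple fixed-effect model is assumed rather than the review's actual analysis.

    import math

    # Hypothetical (events, total) counts per arm; NOT the Cochrane trial data.
    trials = [
        (4, 90, 5, 90),   # trial 1: early-feeding arm, control arm
        (3, 60, 2, 62),   # trial 2
        (2, 45, 3, 44),   # trial 3
    ]

    weights, log_rrs = [], []
    for a, n1, c, n0 in trials:
        rr = (a / n1) / (c / n0)            # per-trial risk ratio
        var = 1/a - 1/n1 + 1/c - 1/n0       # variance of log(RR)
        log_rrs.append(math.log(rr))
        weights.append(1 / var)             # inverse-variance weight

    pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    low, high = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
    print(f"pooled RR {math.exp(pooled):.2f} (95% CI {low:.2f} to {high:.2f})")

A random-effects model, which is one natural choice given the reported I² of 51%, would add a between-trial variance term to each weight; the abstract does not state which model was used.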
Instruction: Is there a stroke belt in China and why? Abstracts: abstract_id: PUBMED:23674531 Is there a stroke belt in China and why? Background And Purpose: In the 1960s, a stroke belt with high stroke mortality was discovered in the southeast United States. In China, where stroke is the leading cause of death, we aimed to determine whether a focal region of high stroke incidence (stroke belt) exists and, if so, the possible causal and modifiable factors. Methods: We systematically reviewed all studies of stroke incidence in China between 1980 and 2010, and included those which met our criteria for a high-quality study. Criteria for a provincial region of high stroke incidence were ranking in the top one third of all provinces for stroke incidence and ranking of more than one third of prefectural regions within the province in the top two sevenths of all prefectural regions for stroke incidence. We also reviewed regional distribution of major vascular risk factors, socioeconomic status, and demographic profiles in China. Results: Nine eligible studies provided data on the incidence of stroke in 32 of 34 provincial regions of China (with Hong Kong and Macao as exceptions) and 52% of the 347 prefectural regions. Nine provincial regions (Heilongjiang, Tibet, Jilin, Liaoning, Xinjiang, Hebei, Inner Mongolia, Beijing, and Ningxia) met our criteria for a region of high stroke incidence and constitute a stroke belt in north and west China. The incidence of stroke in the stroke belt was 236.2 per 100 000 population compared with 109.7 in regions outside the belt (rate ratio, 2.16; 95% confidence interval, 2.10-2.22). The mean population prevalence of hypertension and overweight (body mass index, >25) was greater in the stroke belt than that in other regions (15.3% versus 10.3%, P<0.001; 21.1% versus 12.3%, P=0.013, respectively). The prevalence of hypertension and overweight also correlated significantly with regional stroke incidence (R=0.642, P<0.001; R=0.438, P=0.014, respectively, by Spearman rank correlation). Conclusions: A stroke belt of high stroke incidence exists in 9 provincial regions of north and west China. The stroke belt may be caused, at least in part, by a higher population prevalence of hypertension and excess body weight. Lowering blood pressure and body weight in the stroke belt may reduce the geographic disparity in stroke risk and incidence in China. abstract_id: PUBMED:33599053 A comparison of post-stroke hypertension medication use between US Stroke Belt and Non-Stroke Belt residents. Although hypertension is a contributing factor to higher stroke occurrence in the Stroke Belt, little is known about post-stroke hypertension medication use in Stroke Belt residents. Through the use of national Behavioral Risk Factor Surveillance System surveys from 2015, 2017, and 2019, we compared unadjusted and adjusted estimates of post-stroke hypertension medication use by Stroke Belt residence status. Similar levels of post-stroke hypertension medication use were observed between Stroke Belt residents (OR: 1.09, 95% CI: 0.89, 1.33) and non-Stroke Belt residents. After adjustment, Stroke Belt residents had 1.14 times the odds of post-stroke hypertension medication use (95% CI: 0.92, 1.41) compared to non-Stroke Belt residents. Findings from this study suggest that there is little difference in post-stroke hypertension medication use between Stroke Belt and non-Stroke Belt residents.
However, further work is needed to assess whether use of other non-medicinal methods of post-stroke hypertension control differs by Stroke Belt residence status. abstract_id: PUBMED:31611170 A Cross-Sectional Analysis of Differences in Physical Activity Levels between Stroke Belt and Non-Stroke Belt US Adults. Background: The Stroke Belt is a region of the United States with elevated stroke incidence and prevalence of stroke risk factors. Physical inactivity is an important stroke risk factor, but little is known about whether current physical activity levels differ between Stroke Belt and non-Stroke Belt states. In this nationally representative study, we determined whether unadjusted and adjusted physical activity levels differ between the Stroke Belt region and the rest of the United States. Methods: Using 2017 Behavioral Risk Factor Surveillance System data, we conducted bivariate analyses to obtain unadjusted physical activity levels in Stroke Belt and non-Stroke Belt states. Logistic regressions that controlled for sociodemographic and stroke risk factors were created to estimate adjusted associations between Stroke Belt residence and physical activity. Results: A higher percentage of Stroke Belt residents were inactive (Stroke Belt: 35.3%, non-Stroke Belt: 29.4%) and failed to meet physical activity guidelines (Stroke Belt: 53.7%, non-Stroke Belt: 47.8%) compared to non-Stroke Belt residents. Stroke Belt residence was significantly associated with lower odds of meeting physical activity guidelines in a model that adjusted for sociodemographic factors only (odds ratio [OR]: 0.85, 95% confidence interval [CI]: 0.78-0.91) and one that adjusted for both sociodemographic and stroke risk factors (OR: 0.87, 95% CI: 0.81-0.93). Conclusions: The considerably lower physical activity levels and likelihood of meeting physical activity guidelines in Stroke Belt residents compared to their non-Stroke Belt counterparts demonstrates a need for clinician attention and public health interventions to increase regular physical activity as part of a stroke reduction strategy in this region. abstract_id: PUBMED:34195346 Association and pathways of birth in the stroke belt on old age dementia and stroke Mortality. This paper uses data from the Diet and Health Study (DHS) to examine associations between being born in a "stroke belt" state and old age stroke and mortality outcomes. Adding to prior work that used administrative data, our paper explores educational and health mechanisms that are both stratified by geography and by mortality outcomes. Using logistic regression, we first replicate earlier findings of elevation in risk of dementia mortality (OR 1.13, CI [1.07, 1.20]) and stroke mortality (OR 1.17, CI [1.07, 1.29]) for white individuals born in a stroke belt state. These associations are largely unaffected by controls for educational attainment or by experiences with surviving a stroke and are somewhat attenuated by controls for self-rated health status in old age. The results suggest a need to consider additional life course mechanisms in order to understand the persistent effects of place of birth on old age mortality patterns. abstract_id: PUBMED:33039522 Understanding Human Neural Control of Short-term Gait Adaptation to the Split-belt Treadmill. Many studies of split-belt treadmill adaptation have been focused on the biomechanical changes that occur in response to walking on belts at different speeds. 
This process of adaptation, and the ensuing aftereffects that are present when participants return to typical treadmill walking, have inspired the potential use of the split-belt treadmill as a rehabilitation tool. However, a full systematic review of the current split-belt treadmill literature has not yet been published to consolidate what is known about how the human central nervous system (CNS) controls adaptation to this type of symmetry perturbation. A systematic literature search identified 55 studies that investigated the neural control of human gait adaptation to a split-belt treadmill. Studies of infants and manipulated sensory feedback in healthy adults suggest that the initial gait adjustments to split-belt walking are reliant on proprioceptive feedback to inform central pattern generators to modify lower limb muscle activation patterns appropriately. Simultaneously, this literature suggested that proprioceptive and visual feedback inform supraspinal centres for motor planning and motor output to adapt and store a new and efficient gait pattern to walk on belts at different speeds. However, evidence from participants with brain injury (post-stroke, cerebellar lesions) suggest that injury impedes, but does not completely take away, the ability to adjust and adapt aspects of the gait pattern to split-belts. The model proposed from this review provides the overarching roles of the CNS in the adaptation process, specifically in short-term, and potential sites of focus within the human CNS for future rehabilitation-based work. abstract_id: PUBMED:35773672 Real-time feedback control of split-belt ratio to induce targeted step length asymmetry. Introduction: Split-belt treadmill training has been used to assist with gait rehabilitation following stroke. This method modifies a patient's step length asymmetry by adjusting left and right tread speeds individually during training. However, current split-belt training approaches pay little attention to the individuality of patients by applying set tread speed ratios (e.g., 2:1 or 3:1). This generalization results in unpredictable step length adjustments between the legs. To customize the training, this study explores the capabilities of a live feedback system that modulates split-belt tread speeds based on real-time step length asymmetry. Materials And Methods: Fourteen healthy individuals participated in two 1.5-h gait training sessions scheduled 1 week apart. They were asked to walk on the Computer Assisted Rehabilitation Environment (CAREN) split-belt treadmill system with a boot on one foot to impose asymmetrical gait patterns. Each training session consisted of a 3-min baseline, 10-min baseline with boot, 10-min feedback with boot (6% asymmetry exaggeration in the first session and personalized in the second), 5-min post feedback with boot, and 3-min post feedback without boot. A proportional-integral (PI) controller was used to maintain a specified step-length asymmetry by changing the tread speed ratios during the 10-min feedback period. After the first session, a linear model between baseline asymmetry exaggeration and post-intervention asymmetry improvement was utilized to develop a relationship between target exaggeration and target post-intervention asymmetry. In the second session, this model predicted a necessary target asymmetry exaggeration to replace the original 6%. This prediction was intended to result in a highly symmetric post-intervention step length. 
Results And Discussion: Eleven out of 14 participants (78.6%) developed a successful relationship between asymmetry exaggeration and decreased asymmetry in the post-intervention period of the first session. Seven out of the 11 participants (63.6%) in this successful correlation group had second session post-intervention asymmetries of < 3.5%. Conclusions: The use of a PI controller to modulate split-belt tread speeds demonstrated itself to be a viable method for individualizing split-belt treadmill training. abstract_id: PUBMED:29747067 More symmetrical gait after split-belt treadmill walking does not modify dynamic and postural balance in individuals post-stroke. Spontaneous gait is often asymmetrical in individuals post-stroke, despite their ability to walk more symmetrically on demand. Given the sensorimotor deficits in the paretic limb, this asymmetrical gait may facilitate balance maintenance. We used a split-belt walking protocol to alter gait asymmetry and determine the effects on dynamic and postural balance. Twenty individuals post-stroke walked on a split-belt treadmill. In two separate periods, the effects of walking with the non-paretic leg, and then the paretic one, on the faster belt on spatio-temporal symmetry and balance were compared before and after these perturbation periods. Kinematic and kinetic data were collected using a motion analysis system and an instrumented treadmill to determine symmetry ratios of spatiotemporal parameters and dynamic and postural balance. Balance, quantified by the concepts of stabilizing and destabilizing forces, was compared before and after split-belt walking for subgroups of participants who improved and worsened their symmetry. The side on the slow belt during split-belt walking, but not the changes in asymmetry, affected balance. Difficulty in maintaining balance was higher during stance phase of the leg that was on the slow belt and lower on the contralateral side after split-belt walking, mostly because the center of pressure was closer (higher difficulty) or further (lower difficulty) from the limit of the base of support, respectively. Changes in spatiotemporal parameters may be sought without additional alteration of balance during gait post-stroke. abstract_id: PUBMED:31325672 Adaptive changes in foot placement for split-belt treadmill walking in individuals with stroke. Background: Adaptation to split-belt treadmill walking differs between individual stroke survivors. Many discussions only address spatiotemporal parameters that are related to movement, and the changes in interlimb spatiotemporal parameters as a consequence of adaptation are poorly understood. Objectives: To investigate symmetry of the center of pressure (CoP) position relative to the center of mass (CoM), and ascertain whether this can be used to identify differences in adaptation of interlimb spatiotemporal parameters among stroke survivors during split-belt walking. Methods: Twenty-two chronic post-stroke patients and nine elderly controls walked in tied- then split-belt (2:1 ratio of fast:slow) conditions. Spatiotemporal parameters were compared within groups to assess symmetry of the CoM-CoP angle at foot contact. Results: Asymmetry of the CoM-CoP angle was associated with asymmetry of spatiotemporal parameters. Re-establishment of symmetry of CoM-CoP angle was reflected in re-established symmetry of spatiotemporal parameters in post-stroke and control participants.
Conclusions: Post-stroke patients who re-establish symmetry of the COM-COP angle are able to adapt their walking for split-belt perturbation. This suggests that predictively symmetric foot placements on the fast and slow sides are necessary for adaptation in walking. Symmetrical foot placement is achieved by interlimb coordination and may contribute to dynamic stability. abstract_id: PUBMED:28086163 Changes in lower limb muscle activity after walking on a split-belt treadmill in individuals post-stroke. Background: There is growing evidence that stroke survivors can adapt and improve step length symmetry in the context of split-belt treadmill (SBT) walking. However, less knowledge exists about the strategies involved for such adaptations. This study analyzed lower limb muscle activity in individuals post-stroke related to SBT-induced changes in step length. Methods: Step length and surface EMG activity of six lower limb muscles were evaluated in individuals post-stroke (n=16) during (adaptation) and after (after-effects) walking at unequal belt speeds. Results: During adaptation, significant increases in EMG activity were mainly found in proximal muscles (p⩽0.023), whereas after-effects were observed particularly in the distal muscles. The plantarflexor EMG increased after walking on the slow belt (p⩽0.023) and the dorsiflexors predominantly after walking on the fast belt (p⩽0.017) for both, non-paretic and paretic-fast conditions. Correlation analysis revealed that after-effects in step length were mainly associated with changes in distal paretic muscle activity (0.522⩽r⩽0.663) but not with functional deficits. Based on our results, SBT walking could be relevant for training individuals post-stroke who present shorter paretic step length combined with dorsiflexor weakness, or individuals with shorter nonparetic step length and plantarflexor weakness. abstract_id: PUBMED:24243917 Spatial and temporal asymmetries in gait predict split-belt adaptation behavior in stroke. Background: Step asymmetries during gait in persons after stroke can occur in temporal or spatial domains. Prior studies have shown that split-belt locomotor adaptation can temporarily mitigate these asymmetries. Objective: We investigated whether baseline gait asymmetries affected how patients adapt and store new walking patterns. Methods: Subjects with stroke and age-matched controls were studied walking at a 2:1 speed ratio on the split-belt during adaptation and assessed for retention of the learned pattern (the after-effect) with both belts at the same speed. Results: Those with stroke adapted more slowly (P < .0001), though just as much as healthy older adults. During split-belt walking, the participants with stroke adapted toward their baseline asymmetry (eg, F = 14.02, P < .01 for step symmetry), regardless of whether the subsequent after-effects improved or worsened their baseline step asymmetries. No correlation was found between baseline spatial and temporal measures of asymmetry (P = .38). Last, the initial spatial and temporal asymmetries predicted after-effects independently of one another. The after-effects in the spatial domain (ie, center of oscillation difference) are only predicted by center of oscillation difference baseline (F = 15.3, P = .001), while all other parameters were nonsignificant (all Ps > .17). Temporal coordination (ie, phasing) after-effects showed a significant effect only from phasing baseline (F = 26.92, P < .001, all others P > .33).
Conclusion: This work demonstrates that stroke patients adapt toward their baseline temporal and spatial asymmetries of walking independently of one another. We define how a given split-belt training session would affect asymmetries in these domains, which must be considered when developing rehabilitation interventions for stroke patients. Answer: Yes, there is a stroke belt in China. A study aimed to determine whether a focal region of high stroke incidence exists in China, similar to the stroke belt observed in the southeast United States. The study systematically reviewed stroke incidence studies in China from 1980 to 2010 and included high-quality studies to identify regions with high stroke incidence. Nine provincial regions in north and west China—Heilongjiang, Tibet, Jilin, Liaoning, Xinjiang, Hebei, Inner Mongolia, Beijing, and Ningxia—met the criteria for a region of high stroke incidence, thus constituting a stroke belt. The incidence of stroke in this belt was significantly higher (236.2 per 100,000 population) compared to regions outside the belt (109.7 per 100,000 population). The study also found that the mean population prevalence of hypertension and overweight was greater in the stroke belt than in other regions. The prevalence of these factors correlated significantly with regional stroke incidence, suggesting that they may be causal and modifiable factors contributing to the high stroke incidence in the stroke belt of China. Therefore, lowering blood pressure and body weight in the stroke belt may help reduce the geographic disparity in stroke risk and incidence in China (PUBMED:23674531).
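As a quick arithmetic check on the headline figure in the answer above, the crude rate ratio can be recomputed directly from the two incidence rates quoted in PUBMED:23674531; the small difference from the published 2.16 presumably reflects rounding of the published rates, and the confidence interval cannot be reproduced here because the abstract does not give case counts or person-years.

    # Incidence per 100 000 population, as quoted in the abstract.
    inside_belt, outside_belt = 236.2, 109.7
    print(f"crude rate ratio ~ {inside_belt / outside_belt:.2f}")  # ~2.15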
Instruction: Do ultrasound parameters allow diagnosis of biliary sphincter of Oddi dysfunction? Abstracts: abstract_id: PUBMED:9281239 Do ultrasound parameters allow diagnosis of biliary sphincter of Oddi dysfunction? Unlabelled: A noninvasive test to prove sphincter of Oddi dysfunction is desired, because endoscopic manometry is technically demanding and not without risks. Methods: 40 consecutive patients (n = 20 patients with, and n = 20 patients without enzymatic cholestasis) with suspected SOD were investigated both by ultrasonography (US; 3.5 MHz) and by endoscopic manometry. SOD was suspected at US if the extrahepatic bile duct diameter was > or = 9 mm and a further increase (at least > 0.5 mm) was observed after intravenous ceruletide (0.3 micrograms/kg b.w.). SOD was verified manometrically by a sphincter of Oddi basal pressure > or = 40 mmHg. Endoscopic sphincterotomy was performed if SOD was diagnosed by manometry. Thereafter, all patients were enrolled in a prospective follow-up (median: one year). Results: At US SOD was suspected in eleven of 20 patients with cholestasis. SOD was confirmed manometrically in all of them but also in two further patients (13 of 20 patients with proven SOD). After EST twelve of 13 patients remained free from biliary symptoms. In the 20 patients without cholestasis SOD was suspected at US in five patients only. However, endoscopic manometry revealed SOD in eleven of 20 patients and proved sonographically presumed SOD in only three of five patients. After EST only three of eleven patients remained asymptomatic during follow-up (p < 0.05 vs. patients with cholestasis). Clinically important side effects were not observed after ceruletide administration, whereas postmanometry pancreatitis was observed in three of 40 patients. Conclusion: In patients with recurrent symptoms after cholecystectomy and enzymatic cholestasis SOD was reliably diagnosed by ultrasonography (sensitivity: 85%, specificity: 100%), and this finding may guide endoscopic sphincterotomy. abstract_id: PUBMED:36320796 Preoperative Diagnosis of Gallbladder Agenesis: A Case Report. Gallbladder agenesis (GA) is a rare congenital malformation with less than 500 reported cases. Often an incidental finding intraoperatively or in postmortem autopsies, the first reported case dates back to 1701. We discuss the case of a 21-year-old female presenting with classic biliary symptoms who, on imaging, failed to have a visualizable gallbladder. Initial ultrasound (US) reported a non-visualized gallbladder with gallbladder contraction as a possible explanation. More advanced imaging allowed for the preoperative diagnosis, thus preventing an unpleasant intraoperative surprise. As imaging techniques continue to advance, unnecessary operations can hopefully be avoided, albeit a difficult diagnosis to make. abstract_id: PUBMED:18722735 Biliary causes of postcholecystectomy syndrome. Background: The postcholecystectomy syndrome refers to the persistence of gastrointestinal symptoms after cholecystectomy; patients with this syndrome commonly seek treatment in the Emergency Department (ED). There are a multitude of biliary and non-biliary causes of postcholecystectomy syndrome. Objectives: To review the biliary causes of postcholecystectomy syndrome and present a suggested diagnostic workup.
Discussion: A number of biliary causes are discussed, including choledocholithiasis, bile duct injury and biliary leaks, cystic duct and gallbladder remnants, sphincter of Oddi dysfunction, and biliary ascariasis. The ED workup should focus on differentiating biliary from non-biliary causes of the patient's symptoms. Conclusion: A right upper quadrant abdominal ultrasound, liver transaminases, alkaline phosphatase, and serum bilirubin are the tests most useful in the ED for making this distinction and determining the most appropriate treatment and disposition of the patient. Computed tomography may be of use in the initial post-operative period. abstract_id: PUBMED:9773151 Endoscopic cholangiography in biliary tract diseases in HIV+ patients Unlabelled: Biliary tract involvement in the course of the infection by HIV was first described in 1983. Since then, various opportunistic infections have been responsible for different biliary lesions, coming to a new entity called "cholangiopathy associated to HIV". Our aim was to determine the use of ERCP in the management of HIV patients with cholestasis. From August 1994 to October 1997, 11 HIV patients (8 men, 3 women, mean age 35 y.) were submitted to ERCP because of jaundice (n = 8), upper right abdominal pain (n = 8), fever (n = 6), pruritus (n = 5) and elevated alkaline phosphatase (n=10). In 7 the diagnosis of AIDS had already been made. All had hepatobiliary ultrasound and endoscopic periampullar duodenal mucosa biopsy was taken in 7. According to Cello, 4 types of radiologic lesions were considered: 1) Papillary stenosis with dilated extrahepatic biliary tract. 2) Sclerosing cholangitis (focal intra or extrahepatic stenosis and dilatations). 3) Association of types 1 and 2. 4) Choledocal long stenosis in the absence of previous biliary surgery or chronic pancreatitis. Five patients (45%) had biliary abnormalities; in 3 related to HIV infection: sclerosing cholangitis (n = 2) and papillary stenosis (n = 1). Two had choledocal stones. Four had upper right abdominal pain and dilated bile ducts at ultrasound. Cryptosporidium was found in duodenal mucosa in one patient with sclerosing cholangitis and in the patient with papillary stenosis. Biliary stents were placed without sphincterotomy in 2, with relief of pain and improvement of cholestasis in only one. The choledocal stones were removed endoscopically in one patient and by surgery in the other. Conclusion: The ERCP is a useful method in the diagnosis and treatment of the biliary tract abnormalities associated to HIV. abstract_id: PUBMED:24949397 Pancreato-biliary Endoscopic Ultrasound in Opium Addicts Presenting with Abdominal Pain. Objective: Asymptomatic dilatation of bile duct and symptomatic sphincter of Oddi dysfunction have been reported in opium addicts. Except one case report, there is no report in the literature on endoscopic ultrasound (EUS) study of pancreato-biliary system in opium addicts. The aim of the present study was to report the EUS features of pancreato-biliary system in opium addicts presenting with abdominal pain. Patients And Methods: A total of 15 opium addicts presenting with upper abdominal pain and dilated common bile duct (CBD) and or pancreatic duct (PD) on abdominal ultrasound were included in this study. EUS findings of pancreato-biliary system were analyzed in these patients. Results: All the 15 patients were males (mean age 53.3 years) presented with upper abdominal pain. Mean duration of opium addiction was 20.1 years.
On EUS CBD was dilated in all the patients while PD was dilated in six patients. Gall bladder, liver and pancreatic parenchyma was normal in all these patients. Surface area of papilla of Vater (SPV) was increased in 12 patients. Conclusion: Opium addiction causes obstruction at ampulla and produces dilatation of bile duct and PD. Bile duct dilatation was seen in all the patients while PD dilatation was seen in few patients. Increase in SPV was a peculiar finding and appears to be as a result of direct effect of opium on ampulla. abstract_id: PUBMED:2052722 Cryptosporidiosis of the biliary tract in AIDS. Cryptosporidiosis of the biliary system was studied retrospectively in 13 patients with acquired immunodeficiency syndrome (AIDS). The diagnosis was made by means of histologic examination (n = 9) or imaging studies and the presence of intestinal cryptosporidiosis (n = 4). Imaging studies were done in 10 patients. Ultrasound (US) and computed tomography (CT) showed dilatation of the biliary ducts, some with wall thickening, thickening of the gallbladder wall and pericholecystic fluid. Cholangiograms showed attenuation and pruning of the intrahepatic bile ducts, some with beading and dilatation of the common bile duct. Three patients had papillary stenosis. Numerous Cryptosporidium organisms were found in three resected gallbladders and in the biliary ducts of seven patients in whom autopsy was performed, accompanied by an exuberant inflammatory response. Correlation of the radiologic and pathologic findings establishes a direct etiologic role of Cryptosporidium as one of the major infectious agents in cholangitis associated with AIDS. abstract_id: PUBMED:24679419 Symptomatic cholelithiasis and functional disorders of the biliary tract. Symptomatic cholelithiasis and functional disorders of the biliary tract present with similar signs and symptoms. The functional disorders of the biliary tract include functional gallbladder disorder, dyskinesia, and the sphincter of Oddi disorders. Although the diagnosis and treatment of symptomatic cholelithiasis are relatively straightforward, the diagnosis and treatment of functional disorders can be much more challenging. Many aspects of the diagnosis and treatment of functional disorders are in need of further study. This article discusses uncomplicated gallstone disease and the functional disorders of the biliary tract to emphasize and update the essential components of diagnosis and management. abstract_id: PUBMED:18992594 Biliary dyskinesia. Functional disorders of the biliary tract include gallbladder dyskinesia (GBD) and sphincter of Oddi dysfunction (SOD). The diagnosis of GBD is made if the gallbladder ejection fraction is less than 35% to 40% using cholecystokinin cholescintigraphy. Despite slightly inferior outcomes compared with calculous disease, patients who have GBD should be treated with cholecystectomy. SOD is most often noted in the postcholecystectomy patient and symptoms can be biliary or pancreatic in nature. The gold standard for diagnosis remains manometry, with basal biliary or pancreatic sphincter pressures measuring greater than 40 mm Hg. Patients who have increased pressures may benefit from endoscopic sphincterotomy. abstract_id: PUBMED:18625146 Biliary dyskinesia in the pediatric patient. The term biliary dyskinesia commonly describes a motility disorder of the biliary tract that is divided into two main categories: gallbladder dyskinesia (GBD) and sphincter of Oddi dysfunction (SOD). 
SOD is further subdivided into biliary SOD and pancreatic SOD. GBD causes typical biliary colic without gallstones, whereas SOD typically presents with recurrent pancreatitis or chronic abdominal pain, usually after cholecystectomy. GBD and SOD are uncommon in children. Based on adult experience, this review discusses the diagnosis and treatment of GBD and SOD in the pediatric population. abstract_id: PUBMED:35552193 Post-transplant biliary complications: advances in pathophysiology, diagnosis, and treatment. Liver transplantation (LT) is the only curative therapy in patients with end-stage liver disease. Long-term survival is excellent, yet LT recipients are at risk of significant complications. Biliary complications are an important source of morbidity after LT, with an estimated incidence of 5%-32%. Post-LT biliary complications include strictures (anastomotic and non-anastomotic), bile leaks, stones, and sphincter of Oddi dysfunction. Prompt recognition and management is critical as these complications are associated with mortality rates up to 20% and retransplantation rates up to 13%. This review aims to summarise our current understanding of risk factors, natural history, diagnostic testing, and treatment options for post-transplant biliary complications. Answer: Yes, ultrasound parameters can allow for the diagnosis of biliary sphincter of Oddi dysfunction (SOD). According to the study reported in PUBMED:9281239, ultrasonography (US) was used to suspect SOD in patients with recurrent symptoms after cholecystectomy and enzymatic cholestasis. The criteria for suspecting SOD on US included an extrahepatic bile duct diameter greater than or equal to 9 mm and a further increase of at least 0.5 mm after intravenous administration of ceruletide. These ultrasound findings were then compared with endoscopic manometry, which is the standard for diagnosing SOD by measuring a sphincter of Oddi basal pressure greater than or equal to 40 mmHg. The study found that in patients with enzymatic cholestasis, SOD was reliably diagnosed by ultrasonography with a sensitivity of 85% and a specificity of 100%. This suggests that ultrasound, when combined with specific parameters and provocative testing with ceruletide, can guide the diagnosis of SOD and subsequent therapeutic interventions such as endoscopic sphincterotomy.
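The 85% sensitivity and 100% specificity quoted for ultrasound in the answer above can be reconstructed from the counts given for the cholestasis subgroup of PUBMED:9281239 (20 patients, 13 with manometrically proven SOD, 11 suspected at US, all 11 confirmed). The short sketch below assumes, as the abstract implies, that none of the US-positive patients lacked SOD; it applies only to that subgroup, not to the patients without cholestasis.

    # 2x2 reconstruction for the cholestasis subgroup (assumed from the abstract).
    tp = 11            # US positive, SOD proven at manometry
    fn = 13 - 11       # SOD proven but missed by US
    fp = 0             # implied: every US-positive patient had SOD
    tn = 20 - 13       # no SOD and US negative
    sensitivity = tp / (tp + fn)   # 11/13 ~ 0.846 -> the reported 85%
    specificity = tn / (tn + fp)   # 7/7 = 1.00  -> the reported 100%
    print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")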
Instruction: Does the use of biofuels affect respiratory health among male Danish energy plant workers? Abstracts: abstract_id: PUBMED:21098831 Does the use of biofuels affect respiratory health among male Danish energy plant workers? Objectives: To study asthma, respiratory symptoms and lung function among energy plant employees working with woodchip, straw or conventional fuel. Methods: Respiratory symptoms in 138 woodchip workers, 94 straw workers and 107 control workers from 85 heating- or combined heating and power plants were collected by questionnaire. Spirometry, metacholine provocation tests and skin prick tests were performed on 310 workers. The work area concentrations of 'total dust' (n=181), airborne endotoxin (n=179), cultivable Aspergillus fumigatus (n=373) and cultivable fungi (n=406) were measured at each plant. Personal exposure was calculated from the time spent on different tasks and average work area exposures. Results: Median (range) average personal exposures in biofuel plants were 0.05 (0 to 0.33) mg/m³ for 'total' dust and 3.5 (0 to 294) endotoxin units/m³ for endotoxin. Fungi were cultivated from filters (straw plants) or slit samplers (woodchip plants); the average personal exposures were 5.230×10³ (118 to 1.85×10⁴) and 1.03×10³ (364 to 5.01×10³) colony-forming units/m³ respectively. Exposure levels were increased in biofuel plants compared with conventional plants. The prevalence of respiratory symptoms among conventional plant and biofuel plant workers was comparable, except for asthma symptoms among non-smokers, which were higher among straw workers compared with controls (9.4 vs 0%, p<0.05). A trend for increasing respiratory symptoms with increasing endotoxin exposure was seen with ORs between 3.1 (1.1 to 8.8) (work-related nose symptoms) and 8.1 (1.5 to 44.4) (asthma symptoms) for the most exposed group. Associations between fungal exposure and respiratory symptoms were less clear but suggested cultivable fungi to be associated with asthma symptoms and work-related respiratory symptoms. No associations were seen between lung function and the level of endotoxin or fungal exposure. Conclusions: Working with biofuel at an energy plant does not generally enhance the prevalence of respiratory symptoms. However, the exposure level to micro-organisms has an impact on the occurrence of respiratory symptoms among biofuel workers. abstract_id: PUBMED:23898361 Association between the use of biomass fuels on respiratory health of workers in food catering enterprises in Nairobi Kenya. Introduction: Indoor air pollution from biomass fuel use has been found to be responsible for more than 1.6 million annual deaths and 2.7% of the global burden of disease. This makes it the second biggest environmental contributor to ill health, behind unsafe water and sanitation. Methods: The main objective of this study was to investigate if there was any association between use of bio-fuels in food catering enterprises and respiratory health of the workers. A cross-sectional design was employed, and data collected using qualitative and quantitative techniques. Results: The study found significantly higher prevalence of respiratory health outcomes among respondents in enterprises using biomass fuels compared to those using processed fuels. Biomass fuels are thus a major public health threat to workers in this sub-sector, and urgent intervention is required. Conclusion: The study recommends a switch from biomass fuels to processed fuels to protect the health of the workers.
abstract_id: PUBMED:33754938 Flying below the Radar: Psychoactive Drug Use among Young Male Construction Workers in Sydney, Australia. Background: Globally, there is growing concern regarding workers' illicit drug use and its implications for health and workplace safety. Young workers in male-dominated industries, such as construction, may be more susceptible to illicit drug use, risky drinking and its associated harms.Purpose/objectives: To investigate drug use and perceptions of risk among male construction workers, drawing comparisons between workers under 25 years with older age groups.Methods: Workers in Sydney, Australia (N = 511) completed a survey measuring past year illicit drug and alcohol use, psychological distress and perceptions of drug-related risks to health and safety. Prevalence in the total sample was compared with national estimates, and differences between younger and older survey respondents were examined using logistic regression models.Results: Survey respondents' cocaine, meth/amphetamine and cannabis use was significantly higher than estimates of male employees nationally (OR = 6.60, 3.58, 1.61, respectively). Young workers ≤24 were more likely to frequently use illicit drugs, drink heavily, and report psychological distress than those aged 35+. Workers ≤24 were least likely to perceive that drug use posed high risks to health or safety when compared with 25-34 and 35+ age groups.Conclusions/importance: The findings highlight the high prevalence of illicit drug use amongst young construction workers, representing threats to workplace safety even if used outside work hours. Greater emphasis on potential adverse effects of alcohol and drug use and closer examination of contributory workplace factors are required. These findings have practical implications to inform occupational health and safety programs and interventions in high-risk workplaces. abstract_id: PUBMED:29058534 Comparative assessment of respiratory and other occupational health effects among elementary workers. Objective. This study was conducted to assess hazards faced by elementary workers. Methods. A questionnaire survey and a respiratory function test (spirometry) were carried out on 150 respondents. Results. Major hazards identified related to sharp objects, heavy weight lifting, thermally harsh conditions, working at height, whole body vibration, chemicals, pathogens, increased noise levels and confined space entry. Workers suffered from upper and lower respiratory disorder symptoms, digestive problems, optical and musculoskeletal issues, etc. Spirometric measurement showed obstructive lung disorders to be highest among construction workers (CW) (48%) followed by sanitation workers (SW) (32%) and solid waste pickers (SWP) (28%). Restrictive lung pattern was dominant among SW (56%) followed by SWP (46%) and CW (42%). The observed FEV1/FVC in diseased SWP, SW and CW ranged from 51 to 96%, from 52 to 98% and from 31 to 99% respectively while observed mean FEV1 was 2.15, 1.79 and 1.70 L, respectively. Conclusion. The study findings show that occupational exposure can significantly influence respiratory system impairment and contribute to other ailments among elementary workers. The study recommends use of appropriate protective equipment and regular medical examination for early recognition of any health risk so that timely interventions for effective management may be undertaken. abstract_id: PUBMED:30409086 Development of Competencies for Respiratory Protection for Health Care Workers. 
According to the Bureau of Labor Statistics, more than 18 million health care workers in the United States are currently employed in the health care field and at risk of infectious respiratory exposure. With the emergence of global infectious diseases such as Ebola and Severe Acute Respiratory Syndrome (SARS), there is a need for increased and more clinically competent use of respiratory protection among healthcare workers. In an effort to improve knowledge about the practice of respiratory protection against respiratory infectious agents, a NIOSH-funded project that included an educational program, observations of HCWs, and focus groups was conducted. This study aimed to develop, implement, and evaluate respiratory protection educational program for HCWs and hospital management. This study identified respiratory protection practice issues among HCWs and strategies for integration of respiratory protection standards and procedures into practice. Findings from these efforts were used to develop respiratory protection competencies. Any worker who has the potential for respiratory exposure to infectious agents should be protected and should be able to demonstrate these specific competencies to achieve a standard level of respiratory protection. abstract_id: PUBMED:33763402 Systematic Review of Potential Occupational Respiratory Hazards Exposure Among Sewage Workers. Background: Sewage workers have a higher risk of exposure to various potential occupational respiratory hazards found in sewage plants. Although previous studies discuss occupational respiratory hazard concentration impacting sewage workers' respiratory health, the results are scarce and mixed. Hence, there is a need to identify the potential respiratory hazards in sewage plants so as to clarify the short- and long-term respiratory health effects. Therefore, this systematic review (SR) aims to critically review previous studies investigating potential respiratory hazards found at sewage plants and their effects on sewage workers' respiratory health. Methods: An SR was conducted using PubMed, EBSCO Medline, Web of Science, Scopus, and Google Scholar on peer-reviewed studies published between January 1994 and October 2020 evaluating the impact of potential exposure to respiratory hazards and its effects on respiratory health among sewage workers. "Sewage treatment plant," "respiratory hazards," and "respiratory health effects" were the three main search terms chosen in this SR. The inclusion criteria were (1) studies on potential occupational respiratory hazard exposure among sewage workers, (2) manuscripts written in English, and (3) studies published in the peer-reviewed literature. The human observational studies' quality was assessed using the Effective Public Health Practice Project Quality Assessment Tool. Results: We identified 5,660 articles through an initial database search. Only 26 items met the inclusion criteria and were included in this review; 15 human observational studies and 11 environmental assessment studies were conducted in the sewage industries. Most of the human observational studies were rated as moderate quality, two studies were rated as weak quality, and one study with strong quality was identified. Hydrogen sulfide, bioaerosols, particulate matter 2.5 (PM 2.5), and volatile organic compounds (VOC) were found to be potential respiratory hazards. 
Most of the risks contributed to adverse outcomes on the sewage workers' respiratory health with some inconsistent findings on the relationship between respiratory hazard exposure and respiratory health effects. Conclusion: Our review finds that, although this area is of great importance, quality studies are still lacking. There is a need for additional studies to clarify the effects of respiratory hazard exposure on sewage workers and respiratory health, especially PM 2.5 and VOC. abstract_id: PUBMED:34616708 Dust Exposure and Respiratory Health Among Workers in Primary Coffee Processing Factories in Tanzania and Ethiopia. Introduction: In primary coffee factories the coffee beans are cleaned and sorted. Studies from the 1980s and 1990s indicated respiratory health effects among the workers, but these results may not represent the present status. Our aim was to review recent studies on dust exposure and respiratory health among coffee factory workers in Tanzania and Ethiopia, two major coffee producing countries in Africa. Methods: This study merged data from cross-sectional studies from 2010 to 2019 in 4 and 12 factories in Tanzania and Ethiopia, respectively. Personal samples of "total" dust and endotoxin were taken in the breathing zone. Chronic respiratory symptoms were assessed using the American Thoracic Society (ATS) questionnaire. Lung function was measured by a spirometer in accordance with ATS guidelines. Results: Dust exposure among male production workers was higher in Ethiopia (GM 12 mg/m3; range 1.1-81) than in Tanzania (2.5; 0.24-36). Exposure to endotoxins was high (3,500; 42-75,083) compared to the Dutch OEL of 90 EU/m3. The male workers had higher prevalence of respiratory symptoms than controls. The highest symptom prevalence and odds ratio were found for cough (48.4%; OR = 11.3), while for breathlessness and wheezing the odds ratios were 3.2 and 2.4, respectively. There was a significant difference between the male coffee workers and controls in the adjusted FEV1 (0.26 l/s) and FVC (0.21 l) and in the prevalence of airflow limitation (FEV1/FVC < 0.7) (6.3 vs. 0.9%). Among the male coffee workers, there was a significant association between cumulative dust exposure and the lung function variables FEV1 and FVC, respectively. Conclusions: The results suggest that coffee production workers are at risk of developing chronic respiratory symptoms and reduced lung function, and that the findings are related to high dust levels. Measures to reduce dust exposure should be targeted to factors identified as significant determinants of exposure. abstract_id: PUBMED:33594549 Health disorders among Egyptian municipal solid waste workers and assessment of their knowledge, attitude, and practice towards the hazardous exposure. Solid waste workers are exposed to a variety of injuries and diseases at work. To study health disorders among workers in the waste management field and to assess their knowledge, attitude, and practice (KAP) towards hazardous exposure and safety measures. A cross-sectional study was carried out on 275 waste workers and an equal number as a group of controls from workers' relatives or workers not involved in waste management.
Workers completed a self-administered or interviewer-based questionnaire containing data about (i) knowledge, e.g., management of wastes, types of hazards associated with solid waste management, and safety measures; (ii) attitude, e.g., opinions about mode of transmission of infection and safety measures; and (iii) practice, e.g., available safety measures to prevent exposure to hazards. Health education sessions were carried out with pre- and post-assessment of KAP. Both groups were assessed regarding the health status through history, general, and local examination (respiratory system and skin) in addition to spirometric measurements and complete blood count. A total of 275 exposed solid waste workers and 275 controls were included. A significantly higher prevalence of symptoms regarding respiratory, gastrointestinal, renal, musculoskeletal, and dermatological systems was detected among the exposed group than the controls (P<0.05). Significantly lower levels of RBCs, Hb, HCT, and MCH were reported among the exposed group than the controls (P<0.05). Among the exposed group vs. controls, FEV1 reported 99.75±13.35 vs. 104.83±10.47 (P<0.001), FEV1/FVC% reported 98.88±15.32 vs. 102.72±14.36 (P=0.003), FEF25-75% reported 99.64±17.06 vs. 103.07±19.38 (P=0.029), and PEF% reported 58.73±16.31 vs. 62.12±12.91 (P=0.007). Unsatisfactory knowledge was reported among 64% of solid waste workers, 69.1% had negative attitude, and 73.8% had unsafe practice. KAP was significantly higher post than pre-health education (P<0.001). Respiratory, gastrointestinal, and skin infections were common among solid waste workers. Assessment of KAP towards management of wastes, transmission of infection, exposure to sharps, and safety measures showed poor levels that improved after health education. To maintain good health among solid waste workers, direct supervision with periodic medical examination and focused continuous health education are required. abstract_id: PUBMED:29780689 Relationship between health-related quality of life and respiratory health status among coal-based sponge iron plant workers in Barjora, India. Background: Many coal-based sponge iron plant workers have poor health-related quality of life in general, and specifically a poor respiratory health status. However, the relationship between their health-related quality of life and respiratory health status is unknown. Aim: This study investigated the relationship between health-related quality of life, measured using the EuroQol-5D (EQ5D), and respiratory health status, measured using the St. George's Respiratory Questionnaire (SGRQ), among coal-based sponge iron plant workers in Barjora, India. Method: A cross-sectional study was conducted among coal-based sponge iron plant workers in Barjora, and complete data were available on 252 participants. Spearman's rank correlation coefficients were reported to show the strength of relationship between health-related quality of life and respiratory health status. Results And Conclusion: Significant correlations were found between all EQ5D dimensions/visual analogue scale (VAS) and all SGRQ scores except between EQ5D-VAS and SGRQ-activity. A range of correlations was found. They were moderate between EQ5D-anxiety/depression and SGRQ-symptom, EQ5D-VAS and SGRQ-symptom, and EQ5D-anxiety/depression and SGRQ-total, but weak between all the other factors. abstract_id: PUBMED:30181443 Respiratory Health Status of Workers in a Bottling Factory in Benin City, Nigeria.
Introduction: There is a paucity of data on the respiratory health status of workers in bottling factories in Benin City, Nigeria. Such data will help to drive future studies and influence policy development on occupational health and safety in the country. This study assesses the respiratory symptoms and spirometric indices of exposed workers and controls. Methods: Respiratory symptoms and spirometric parameters of 18 workers on routine mandatory annual lung screening were assessed using the modified MRC (Medical Research Council) questionnaire and spirometer respectively, according to the European Respiratory Society and American Thoracic Society (ERS/ATS) guidelines. Results: The mean age of workers was 35.1 ± 6.7 years. Workers and controls were similar in age, sex, BMI (Body Mass Index) and health status (p > 0.05). Respiratory symptoms were significantly higher among workers compared to controls. Overall, the result was statistically significant in the variables of wheeze in a smoky or dusty environment, presence of at least one respiratory symptom, better symptoms at weekends and better symptoms during holidays (p < 0.05). In particular, 6 (33.3%) exposed workers had wheeze in a smoky or dusty environment, 9 (50.0%) exposed workers reported at least one respiratory symptom compared with 2 (11.1%) controls, 5 (27.8%) had better symptoms at weekends, and 7 (38.9%) had better symptoms at holidays (p < 0.05). Generally, the reported frequency of respiratory symptoms among exposed workers were: cough (22.2%), sputum production (5.6%), breathlessness (11.1%) and wheeze (44.4%). Similarly, workers had significantly lower spirometric indices than controls, particularly in forced expiratory volume in 1 s (FEV₁), FEV₁/forced vital capacity (FVC) ratio and forced expiratory flow between 25% and 75% of FVC (FEF 25⁻75%) measurements. Conclusions: This study provides evidence of adverse respiratory health effects among bottling factory workers which requires further investigation. Answer: Yes, the use of biofuels does affect respiratory health among male Danish energy plant workers. According to a study (PUBMED:21098831), while working with biofuel at an energy plant does not generally enhance the prevalence of respiratory symptoms, the exposure level to micro-organisms has an impact on the occurrence of respiratory symptoms among biofuel workers. Specifically, the study found that the prevalence of respiratory symptoms among conventional plant and biofuel plant workers was comparable, except for asthma symptoms among non-smokers, which were higher among straw workers compared with controls (9.4 vs 0%, p<0.05). There was also a trend for increasing respiratory symptoms with increasing endotoxin exposure, with odds ratios between 3.1 (1.1 to 8.8) for work-related nose symptoms and 8.1 (1.5 to 44.4) for asthma symptoms for the most exposed group. Associations between fungal exposure and respiratory symptoms were less clear but suggested cultivable fungi to be associated with asthma symptoms and work-related respiratory symptoms. No associations were seen between lung function and the level of endotoxin or fungal exposure.
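The endotoxin-exposure effect in the answer above is expressed as odds ratios (for example OR 8.1, 95% CI 1.5 to 44.4, for asthma symptoms in the most exposed group). The abstract does not give the underlying counts, and the published estimates were likely model-based, so the sketch below only illustrates how an odds ratio and a Wald 95% CI are obtained from a hypothetical 2x2 table of exposed versus reference workers.

    import math

    # Hypothetical counts (with symptoms, without symptoms); not study data.
    a, b = 6, 34    # most exposed group
    c, d = 2, 92    # reference group
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    low = math.exp(math.log(odds_ratio) - 1.96 * se)
    high = math.exp(math.log(odds_ratio) + 1.96 * se)
    print(f"OR {odds_ratio:.1f} (95% CI {low:.1f} to {high:.1f})")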
Instruction: Can chest CT decrease the use of preoperative bronchoscopy in the evaluation of suspected bronchogenic carcinoma? Abstracts: abstract_id: PUBMED:9596301 Can chest CT decrease the use of preoperative bronchoscopy in the evaluation of suspected bronchogenic carcinoma? Background: Fiberoptic bronchoscopy (FOB) is frequently used to diagnose and stage bronchogenic carcinoma (BC). However, the value of FOB in diagnosis/staging BC presenting as a pulmonary nodule or mass (PNM) is controversial. Since chest CT is usually obtained in these patients, it may be used in patient selection for preoperative FOB. Objective: Evaluation of the role of chest CT in determining the predictive value of FOB in diagnosing/staging a PNM, by comparing the results of CT and bronchoscopy. Design: Retrospective review of chest CTs and medical records. Patients: Consecutive patients with BC between 1992 and 1994 who had diagnostic FOB and CT in our institution, but without radiographic evidence of (1) pulmonary atelectasis, (2) endobronchial tumor or narrowing of the central airways, and (3) the PNM abutting the central airways. Results: Sixty-four patients met the selection criteria. The size of the PNM ranged from 1.5 to 10 cm; the size was < or = 4 cm in 62 patients. FOB provided a diagnosis in 22 patients. Bronchoscopy detected endobronchial lesions in 11 patients (17%); 3 had lesions in more than one lobe. In three patients, the PNM was <3 cm. The radiographically undetected endobronchial tumor increased the tumor stage in only two patients. The "CT bronchus" sign had a positive and negative predictive value of 75% and 68%, respectively. Conclusions: (1) In this study, CT failed to detect endobronchial tumor in 11 of 64 patients (17%). Because of the implications of a new staging system, more studies are necessary before abandoning staging FOB. (2) The CT bronchus sign has a very high positive and negative predictive value in the use of diagnostic FOB and should be used to guide the method of biopsy of a PNM. abstract_id: PUBMED:33145099 Should computed tomography and bronchoscopy be routine examinations for chronic cough? Chronic cough is a common symptom of many diseases. Guidelines on cough from different countries recommend chest X-ray as the first-line examination. However, as compared to computerized tomography (CT), chest X-ray is insensitive in the diagnosis of many diseases. We speculate that CT is more helpful in excluding the causes of long-term cough, such as bronchial tumors and pulmonary interstitial diseases, which prevents patients from receiving unnecessary examinations and diagnostic treatments for chronic cough. In addition, a single chest CT is safe and affordable in some regions. Therefore, chest CT may be recommended as a first-line examination for patients with chronic cough. In addition, although bronchoscopy is employed as a further examination for chronic cough, it plays an important role in the diagnosis of chronic cough, especially in rare bronchial diseases such as amyloidosis and foreign body inhalation. Induced sputum cytology was not accepted as a necessary examination for chronic cough, but it became a first-line examination once physicians recognized that non-asthmatic eosinophilic bronchitis is a common cause of chronic cough.
Therefore, we speculate that the roles of chest CT and bronchoscopy in the identification of causes of chronic cough are increasingly important; in some regions, or for patients suspected of having uncommon causes of chronic cough, CT and bronchoscopy are recommended as first-line examinations. abstract_id: PUBMED:8234690 Hemoptysis: comparative study of the role of CT and fiberoptic bronchoscopy. Purpose: A prospective study was performed to compare results of computed tomography (CT) and fiberoptic bronchoscopy in diagnosis of cancer in patients with hemoptysis. Materials And Methods: Ninety-one patients (64 men, 27 women) with hemoptysis underwent thin-section CT and fiberoptic bronchoscopy. Results: CT scans demonstrated all 27 tumors seen at bronchoscopy and an additional seven, five of which were beyond bronchoscopic range. In patients with normal chest radiographs, bronchial carcinoma was detected in 5% at both bronchoscopy and CT. In patients with abnormal findings on radiographs, bronchoscopy allowed both location and histologic diagnosis in 78% of carcinomas but was unreliable in locating peripheral tumors demonstrated at CT. CT was insensitive in demonstrating early mucosal abnormalities, bronchitis, squamous metaplasia, and a benign papilloma, all detected at bronchoscopy. Conclusion: The initial examination should be bronchoscopy when there is high clinical suspicion of carcinoma and relevant radiographic abnormality, and CT when strong clinical suspicion of carcinoma is not substantiated at bronchoscopy in patients with normal findings on chest radiographs. abstract_id: PUBMED:11779976 Evaluation of tracheobronchial lesions with spiral CT: comparison between virtual endoscopy and bronchoscopy. Introduction: The aim of this study is to describe the scanning parameters for virtual bronchoscopy in the evaluation of the tracheobronchial tree and to compare the results of this examination with the endoscopic findings. Material And Methods: 27 patients with tracheobronchial neoplasms suspected at preliminary clinical and chest film findings or postoperative follow-up for malignant disease were evaluated with spiral CT of the chest and bronchoscopy. Virtual endoscopy was performed on the pulmonary volume involved by the lesion, using narrow axial images (thickness 2 mm, table index 3 mm, reconstruction index 1 mm) so as to obtain MPR, MIP and 3D reconstructions with the 3D Endo Vew program (Philips Medical System, Eindhoven, Holland). We compared these reconstructions and the findings of the normal spiral CT scanning with the corresponding endoscopic examinations. Results: We were able to study the lobar and segmental bronchi in all patients, and in 2 we also evaluated the subsegmental bronchi. 25 lesions in 23 patients were shown by virtual endoscopy (8 occlusions, 8 stenoses, 5 compressions, 3 cases of inflammation with endobronchial mucus, 1 bronchocele) and in 4 patients the examinations were negative. The bronchoscopy was negative in 4 patients and positive in 23 patients with 25 lesions, but we had agreement in 23/27 patients (85.1%). In 2 patients virtual endoscopy showed the lesions in a different bronchus compared to bronchoscopy. In one patient we interpreted the obstruction as neoplastic instead of mucus inside the bronchi, and in the last patient bronchoscopy was not performed due to his old age and the virtual endoscopy showed total obstruction of a segmental bronchus.
Discussion And Conclusions: The results show that virtual endoscopy can study the tracheobronchial tree as far as the segmental bronchi, and sometimes also the subsegmental bronchi and the bronchi below a closed obstruction. In addition, it can evaluate the extraluminal location of the lesions. For these reasons virtual endoscopy provides a road map for bronchoscopy as a guide for transbronchial biopsy and for endobronchial treatment planning. The limitation of this technique is its inability to evaluate the mucosal surface or to distinguish inflammation from neoplastic lesions by biopsy. It can be used, however, in postoperative follow-up both for cancer and transplant, when immediate biopsy is not necessary. abstract_id: PUBMED:10901958 Haemoptysis in patients with a normal chest radiograph: bronchoscopy-CT correlation. The exact role of fibre-optic bronchoscopy (FOB) and CT of the chest in the diagnosis of patients presenting with haemoptysis and a normal or non-localizing chest radiograph has not been clearly defined. A study was designed to evaluate 50 patients presenting with haemoptysis and a normal or non-localizing chest radiograph using FOB and high-resolution computed tomography (HRCT). A definitive diagnosis was established in 17 (34%) patients. The aetiologies included bronchiectasis (24%), bronchial adenoma (6%), tuberculosis (2%) and bronchitis (2%). The diagnosis was made by HRCT in 15 (30%) patients, while FOB was diagnostic in five (10%) patients. The diagnosis was made by HRCT and FOB in all patients with focal airway abnormalities. Therefore, HRCT effectively delineated abnormalities of both the central and peripheral airways. It is concluded that CT should be obtained prior to FOB in all patients presenting with haemoptysis and a normal or non-localizing chest radiograph. abstract_id: PUBMED:2280646 Is bronchoscopy a useful additional preoperative examination in esophageal carcinoma? Between 1980 and 1987, 95 patients with esophageal carcinoma had a bronchoscopy. In 67 patients (70.5%) no pathological findings could be detected. In 24 patients (25.3%) signs of impingement were present and in 4 patients (4.2%) tumor infiltration into the tracheobronchial tree was visible. The correlation between tumor length and bronchoscopic findings revealed a marked increase of direct and indirect tumor evidence in esophageal carcinomas of more than 5 cm in length. Pathologic bronchoscopic findings were detected twice as often in patients with carcinoma of the upper third of the esophagus in comparison with those of the middle third. In 5 patients (5.3%) bronchoscopy revealed an unknown additional bronchial carcinoma. 67 of the 95 patients underwent operation. In 59 patients the esophagus was resected. Among patients with normal bronchoscopic findings, the operability rate was 77.6%. In contrast, only half of the patients with pathological bronchoscopic findings were operable; the other patients received radiotherapy. We recommend, therefore, preoperative bronchoscopy as an important investigation for assessment of operability and for evaluation of synchronous neoplasms in patients with an esophageal carcinoma. abstract_id: PUBMED:17598417 Virtual bronchoscopy and 3D spiral CT reconstructions in the management of patient with bronchial cancer--our experience with Syngo 3D postprocessing software. Multislice helical CT generated virtual bronchoscopy (VB) represents one of the most recent developments in three-dimensional computer aided visualisation techniques.
VB allows non-invasive and relatively accurate 3D evaluation of the tracheobronchial tree. We performed virtual bronchoscopy and in-space 3D volume analysis on a CT data set acquired from a sixty-four-year-old male with bronchial cancer in order to demonstrate advantages and disadvantages of these methods in diagnostics and preoperative management of metastatic bronchial cancer. A Siemens Somatom Emotion 16 helical CT scanner was used for data acquisition. Data post-processing was done with the 3D Syngo 2006G software package from Siemens medical systems. CT scanning of the thorax was performed in a heavy smoker with an expansive T4N1M1 malignant process in the superior lobe of the right lung, accompanied by a large metastatic lesion attached to the right lateral chest wall. Metastatic lesions were also found in the vertebral column. In-space 3D analysis followed by virtual bronchoscopy revealed obstruction of the apical branch of the superior lobe segmental bronchus. External compression of the superior segmental and right main bronchus by the tumor was also found. We concluded that multi-slice CT combined with VB has become a possible non-invasive alternative to bronchoscopy, if tissue samples are not required. abstract_id: PUBMED:10924381 An evaluation of computed tomography as an aid to diagnosis in patients undergoing bronchoscopy for suspected bronchial carcinoma. Aim: To determine whether computed tomography (CT) can predict the likelihood of obtaining a positive tissue diagnosis at fibreoptic bronchoscopy (FOB), or demonstrate an alternative means of achieving a tissue diagnosis, in patients presenting with a high clinical suspicion of primary bronchogenic carcinoma and an abnormal chest radiograph (CXR). Materials And Methods: Sixty-two patients presenting with a high clinical suspicion of carcinoma and an abnormal CXR had chest and liver CT and FOB performed. All patients subsequently had histocytological confirmation of malignancy. Features recorded from the CTs included: the site and characteristics of a mass if present, and its relationship to adjacent airways; the presence of presumed metastatic disease; and a CT prediction of the likelihood of positive FOB was made. Results: Of the patients, 41/62 (66%) had inoperable stage IIIb/IV disease. Fibreoptic biopsy yielded positive tissue diagnoses in 38/62 (61%). Computed tomography features predicting a positive FOB in this group included: ill-definition of the mass (12/15, 80%); a mass <4 cm from the origin of the nearest lobar bronchus (36/53, 68%); an endobronchial component of mass (22/24, 92%); a segmental or larger airway leading to the mass (30/35, 86%). Overall, CT had positive and negative predictive values for positive FOB of 85% and 78% respectively. The accuracy of the overall CT prediction of positive FOB was better than the accuracy of any of the individual factors. Seventeen of 62 (27%) patients had presumed metastatic disease suitable for percutaneous biopsy. Conclusion: Computed tomography is useful in predicting the likelihood of achieving positive histocytology at FOB. The overall CT prediction is superior to any of the individual CT features taken alone. Bungay, H. K. (2000). Clinical Radiology 55, 554-560. abstract_id: PUBMED:3651311 Bronchoscopy in patients with haemoptysis and normal chest roentgenograms. We reviewed the records of 58 patients with haemoptysis and normal chest roentgenograms who underwent fibreoptic bronchoscopy. A diagnosis of malignancy was made in six patients at bronchoscopy.
Three patients had bronchogenic squamous cell carcinoma, one a carcinoid tumour and two laryngeal carcinoma. Sputum for cytology was negative for malignant cells in all six patients. Follow-up data were available for the other 52 patients for an average period of 55.7 +/- 29.6 (SD) months. Two patients had a subsequent diagnosis of bronchogenic carcinoma at 2 and 6 years after initial evaluation. Three patients died from conditions not related to pulmonary malignancy and the remaining patients followed a benign course. Our patients come from a predominantly male, elderly population of cigarette smokers. Among such patients, we conclude that bronchoscopy is indicated in the evaluation of those with haemoptysis and a normal chest roentgenogram. abstract_id: PUBMED:11959731 Multidetector CT virtual bronchoscopy to grade tracheobronchial stenosis. Objective: The purpose of this study was to compare the efficacy of noninvasive multidetector CT (virtual bronchoscopic images, axial CT slices, coronal reformatted images, and sagittal reformatted images) in depicting and allowing accurate grading of tracheobronchial stenosis with that of flexible bronchoscopy. Materials And Methods: Multidetector CT and flexible bronchoscopy were used to examine 200 bronchial sections obtained from 20 patients (15 patients with bronchial carcinoma and five without central airways disease). Multidetector CT was performed using the following parameters: collimation, 4 x 2 mm, pitch, 1.375; and reconstruction intervals, 2 mm. Postprocessing was performed using surface rendering and multiplanar reformatted images. CT images were independently interpreted by two radiologists. The tracheobronchial stenoses revealed on flexible bronchoscopy were graded by a pulmonologist. Results: Virtual bronchoscopic findings, axial CT scans, and multiplanar reformatted images were highly accurate (98% accuracy for virtual bronchoscopic images, 96% for axial slices and coronal reformatted images, and 96.5% for sagittal reformatted images) in revealing tracheobronchial stenosis. In allowing accurate grading of tracheobronchial stenosis, images from virtual bronchoscopy correlated closely (r = 0.91) with those of flexible bronchoscopy. Because use of virtual bronchoscopic images reduced the overestimation of stenosis, these images allowed better assessment of stenosis than did axial CT slices (r = 0.84) or multiplanar reformatted images (r = 0.84) alone. Conclusion: Multidetector CT virtual bronchoscopy is a reliable noninvasive method that allows accurate grading of tracheobronchial stenosis. However, it should be combined with the interpretation of axial CT images and multiplanar reformatted images for evaluation of surrounding structures and optimal spatial orientation. Answer: The use of chest CT in the evaluation of suspected bronchogenic carcinoma has been studied to determine if it can decrease the need for preoperative bronchoscopy. A retrospective review of patients with bronchogenic carcinoma who had both diagnostic fiberoptic bronchoscopy (FOB) and chest CT found that CT failed to detect endobronchial tumor in 17% of patients, suggesting that more studies are necessary before abandoning staging FOB. However, the "CT bronchus" sign had a high positive and negative predictive value in the use of diagnostic FOB and should be used to guide the method of biopsy of a pulmonary nodule or mass (PNM) (PUBMED:9596301). 
Another study compared the results of CT and fiberoptic bronchoscopy in patients with hemoptysis and found that CT scans demonstrated all tumors seen at bronchoscopy and additional ones beyond bronchoscopic range. However, CT was insensitive in demonstrating early mucosal abnormalities, bronchitis, squamous metaplasia, and a benign papilloma, all detected at bronchoscopy. The study concluded that bronchoscopy should be the initial examination when there is high clinical suspicion of carcinoma with relevant radiographic abnormality, and CT should be used when clinical suspicion of carcinoma is not substantiated at bronchoscopy in patients with normal chest radiographs (PUBMED:8234690). Furthermore, a study evaluating tracheobronchial lesions with spiral CT and virtual endoscopy found that virtual endoscopy can study the tracheobronchial tree as far as the segmental bronchi and sometimes also the subsegmental bronchi and the bronchi below a closed obstruction. It can provide a roadmap for bronchoscopy as a guide for transbronchial biopsy and for endobronchial treatment planning. However, it cannot evaluate the mucosal surface and distinguish flogosis from neoplastic lesions by biopsy (PUBMED:11779976). In conclusion, while chest CT and virtual bronchoscopy can provide valuable information and may guide the biopsy of PNM, they cannot fully replace the need for preoperative bronchoscopy, especially in cases where mucosal abnormalities or early-stage tumors are suspected. Preoperative bronchoscopy remains important for staging and obtaining tissue diagnosis in patients with suspected bronchogenic carcinoma.
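To make the predictive values quoted above concrete (for example, the 75% positive and 68% negative predictive value reported for the "CT bronchus" sign), the sketch below shows how such figures fall out of a 2x2 table of CT sign status against whether bronchoscopy yielded a diagnosis. The counts here are hypothetical placeholders chosen only to illustrate the arithmetic; they are not the actual data of PUBMED:9596301.

    # Hypothetical 2x2 table: CT bronchus sign (present/absent) versus
    # diagnostic fiberoptic bronchoscopy (yes/no). Replace with real counts.
    sign_pos_diagnostic = 15       # sign present, FOB diagnostic
    sign_pos_nondiagnostic = 5     # sign present, FOB non-diagnostic
    sign_neg_diagnostic = 7        # sign absent, FOB diagnostic
    sign_neg_nondiagnostic = 37    # sign absent, FOB non-diagnostic

    ppv = sign_pos_diagnostic / (sign_pos_diagnostic + sign_pos_nondiagnostic)
    npv = sign_neg_nondiagnostic / (sign_neg_diagnostic + sign_neg_nondiagnostic)
    sensitivity = sign_pos_diagnostic / (sign_pos_diagnostic + sign_neg_diagnostic)
    specificity = sign_neg_nondiagnostic / (sign_pos_nondiagnostic + sign_neg_nondiagnostic)

    print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")
    print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")

Unlike sensitivity and specificity, PPV and NPV shift with how often bronchoscopy is diagnostic in the population studied, which is one reason the predictive values reported for the CT bronchus sign should not be transferred uncritically to settings with a different case mix.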
Instruction: Can low-density lipoprotein be too low? Abstracts: abstract_id: PUBMED:24739131 Impact of short-term low-dose atorvastatin on low-density lipoprotein and high-density lipoprotein subfraction phenotype. Statins can significantly reduce low-density lipoprotein-cholesterol (LDL-C) and modestly raise or not alter high-density lipoprotein-cholesterol (HDL-C). However, their impact on high-density lipoprotein (HDL) and low-density lipoprotein (LDL) subfractions has been less examined. The aim of the present study was to investigate the short-term impact of low-dose atorvastatin on HDL and LDL subfractions in humans. In this randomized study, data from 52 subjects were analysed. Thirty-seven patients with atherosclerosis were randomized to treatment with atorvastatin 10 mg/day (n = 17) or 20 mg/day (n = 20) for 8 weeks, with 15 healthy subjects without therapy used as a control group. The lipid profile and lipoprotein subfractions were determined using the Lipoprint system at baseline and at 8 weeks. The data suggest that atorvastatin treatment (10 and 20 mg/day) for 8 weeks significantly decreases LDL-C levels and reduces the cholesterol concentration of all LDL subfractions, which is accompanied by an increase of the mean LDL particle size. Although 10 mg/day atorvastatin treatment for 8 weeks had no impact on the HDL subfraction, 20 mg/day atorvastatin for 8 weeks significantly increased the cholesterol concentration of large HDL particles and decreased the cholesterol concentration of small HDL particles without changing serum HDL-C levels in patients with atherosclerosis. Therefore, the results suggest that 20 mg/day atorvastatin treatment for 8 weeks may result in a favourable modification of the HDL subfraction phenotype in addition to its effects on the cholesterol concentration of all LDL subfractions and mean LDL particle size. abstract_id: PUBMED:9275656 Effects of oxidized low density lipoprotein and very low density lipoprotein on the expression of MCP-1 by monocytes. Monocyte chemoattractant protein-1 (MCP-1) is a potent chemotactic factor for monocytes. This study aimed at examination of the effects of oxidized low density lipoprotein (OX-LDL) and oxidized very low density lipoprotein (OX-VLDL) on the expression of MCP-1 mRNA and protein by monocytes. The guanidinium isothiocyanate method was used to extract total RNA from monocytes which were exposed to LDL, OX-LDL, VLDL and OX-VLDL for 24 hours respectively, and the monocyte-derived conditioned media were collected. MCP-1 mRNA expression in monocytes was examined by slot blot and Northern blot analysis using a gamma 32P-end-labelled 35-mer oligonucleotide probe of MCP-1. Meanwhile, MCP-1 protein in the conditioned media was examined by use of sandwich ELISA. It was found that monocytes from human peripheral blood can express MCP-1 mRNA and protein after 24 hours incubation at 37 degrees C, and the expression of MCP-1 mRNA and protein was markedly increased by a 24 hour exposure to OX-LDL and OX-VLDL, whereas the expression of MCP-1 mRNA and protein was only slightly increased when exposed to LDL and VLDL. It is concluded that monocytes can express MCP-1, which induces additional monocyte migration into the subendothelial space, and that OX-LDL and OX-VLDL enhance this function by inducing stronger MCP-1 expression in monocytes. abstract_id: PUBMED:10783327 Modulation of base excision repair by low density lipoprotein, oxidized low density lipoprotein and antioxidants in mouse monocytes.
In the present study, we found that oxidized low density lipoprotein, but not low density lipoprotein, down-regulated base excision repair activity in extracts of mouse monocyte cell line PU5-1.8. An enzyme required in this pathway, DNA polymerase beta, was also down-regulated. In contrast, treatment of monocytes with a combination of ascorbate and alpha-tocopherol up-regulated base excision repair activity and expression of DNA polymerase beta. Co-treatment of monocytes with antioxidants plus oxidized low density lipoprotein prevented down-regulation by oxidized low density lipoprotein. Oxidative DNA damage, as measured by 8-hydroxyguanine accumulation in genomic DNA, was found in cells treated with oxidized low density lipoprotein; 8-hydroxyguanine was not found in the cells treated with low density lipoprotein, antioxidants or oxidized low density lipoprotein plus antioxidants. These results establish a linkage between the DNA base excision repair pathway, oxidative DNA damage and oxidized low density lipoprotein treatment in mouse monocytes. Since oxidized low density lipoprotein is implicated in chronic disease conditions such as atherogenesis, these findings facilitate understanding of genetic toxicology mechanisms related to human health and disease. abstract_id: PUBMED:18468940 From low-density lipoprotein to platelet activation. There is a strong correlation between the level of plasma low-density lipoprotein (LDL) and death by cardiovascular disease (CVD). As a main carrier of cholesterol, a high low-density lipoprotein concentration stimulates atherogenesis by its capacity to become oxidized and to become endocytosed by macrophages in the vessel wall forming cholesterol-rich plaques that are sites for arterial occlusion. New evidence points at a second role of low-density lipoprotein in increasing cardiovascular disease-risk. Contact with low-density lipoprotein induces platelet hypersensitivity to agonists that initiate platelet functions thereby enhancing adhesion, aggregation and secretion of granule contents. The signalling pathways that mediate the priming of platelets by native and oxidized low-density lipoprotein have now been characterized. abstract_id: PUBMED:35130253 Human low-density lipoprotein receptor sequence analysis. Objective: To introduce the local low-density lipoprotein receptor sequence to the GenBank database and to compare the differences with other sequences. Methods: The study was conducted from January to March 2020 at the College of Pharmacy, Karbala University, Iraq. The process of genotyping was done by the amplification of 150bp area of low-density lipoprotein gene by a polymerase chain reaction of deoxyribonucleic acid extracted from the blood samples of the local subjects. Data sequencing was analysed using the basic local alignment search tool software. Results: The phylogenetic tree for low-density lipoprotein receptor showed its evolutionary relationship in human and other biological species that may have a common ancestor. The outcomes reflected the geographic association in the evolution process of low-density lipoprotein receptor. Conclusion: Similarity between the studied region and another sequencing in the database explained that the analogous sequences were of the same genetic origin as the low-density lipoprotein receptor and may not necessarily have the same function. abstract_id: PUBMED:22798705 Oxidized low-density lipoprotein as a biomarker of in vivo oxidative stress: from atherosclerosis to periodontitis. 
Oxidized low-density lipoprotein is known as an important factor in the development of atherosclerosis. The introduction of a sensitive procedure for the determination of oxidized low-density lipoprotein in human circulating plasma using a monoclonal antibody recognizing oxidized phosphatidylcholines has opened new fields of research based on in vivo oxidized low-density lipoprotein. The plasma oxidized low-density lipoprotein levels are significantly elevated in patients with acute myocardial infarction, cerebral infarction or chronic renal failure accompanied by hemodialysis. It was found that the plasma oxidized low-density lipoprotein level increased prior to aortic atherosclerotic lesion enlargement in apolipoprotein E-knockout mice. Recent studies have pointed out that oxidized low-density lipoprotein is transferrable between vessel wall tissue and the circulation, so it is a reasonable hypothesis that plasma oxidized low-density lipoprotein levels reflect the oxidative status at local sites of atherogenesis. Oxidized low-density lipoprotein measurement has been applied to human gingival crevicular fluids, which can be collected easily and safely, and relatively high levels of oxidized low-density lipoprotein were shown to be present. These findings, together with recent clinical follow-up studies, suggest that oxidized low-density lipoprotein is a predictive biomarker of a variety of diseases related to oxidative stress. This review summarizes the current understanding of in vivo oxidized low-density lipoprotein and its potential significance as a biomarker of disease. abstract_id: PUBMED:8319809 Heterogeneous lipoprotein (a) size isoforms differ by their interaction with the low density lipoprotein receptor and the low density lipoprotein receptor-related protein/alpha 2-macroglobulin receptor. Lipoprotein (a) (Lp(a)) is a complex of low density lipoprotein (LDL) with apolipoprotein (apo) (a). To examine the size distribution of Lp(a), plasma was separated by fast flow gel filtration and Lp(a):B complexes were determined in the eluate by enzyme immunoassays, in which detection was performed with monoclonal antibodies specific for apoB. Lp(a):B particles displayed apparent molecular masses (Mr) of 2 × 10^6 to at least 10 × 10^6. Lp(a) size isoforms differed by the expression of apoB epitopes and their interaction with cultured human skin fibroblasts. LDL was more effective in inhibiting binding, uptake, and degradation of low-Mr Lp(a) than of high-Mr Lp(a). In contrast, Glu-plasminogen, alpha 2-macroglobulin and tissue-type plasminogen activator were more effective in competing for the cellular degradation of high-Mr Lp(a) than of low-Mr Lp(a). Ligand blotting revealed that Lp(a) bound to the low density lipoprotein receptor, the low density lipoprotein receptor-related protein/alpha 2-macroglobulin receptor (LRP) and to two other endosomal membrane proteins. We propose that the LDL receptor preferentially internalizes low-Mr Lp(a), whereas LRP may have a role in the clearance of high-Mr Lp(a). abstract_id: PUBMED:22540098 Triglycerides, total cholesterol, high density lipoprotein cholesterol and low density lipoprotein cholesterol in rats exposed to premium motor spirit fumes. Background: Deliberate and regular exposure to premium motor spirit fumes is common and could be a risk factor for liver disease in those who are occupationally exposed.
A possible association between premium motor spirit fumes and plasma levels of triglyceride, total cholesterol, high density lipoprotein cholesterol and low density lipoprotein cholesterol using a rodent model could provide new insights into the pathology of diseases where cellular dysfunction is an established risk factor. Aim: The aim of this study was to evaluate the possible effect of premium motor spirit fumes on lipids and lipoproteins in workers occupationally exposed to premium motor spirit fumes using a rodent model. Materials And Methods: Twenty-five Wistar albino rats (of both sexes) were used for this study between the 4th of August and 7th of September, 2010. The rats were divided into five groups of five rats each. Group 1 rats were not exposed to premium motor spirit fumes (control group), group 2 rats were exposed for 1 hour daily, group 3 for 3 hours daily, group 4 for 5 hours daily and group 5 for 7 hours daily. The experiment lasted for a period of 4 weeks. Blood samples obtained from all the groups after 4 weeks of exposure were used for the estimation of plasma levels of triglyceride, total cholesterol, high density lipoprotein-cholesterol and low density lipoprotein-cholesterol. Results: Results showed a significant increase in means of plasma total cholesterol and low density lipoprotein levels (P<0.05). The mean triglyceride and total body weight were significantly lower (P<0.05) in the exposed group when compared with the unexposed. The plasma level of high density lipoprotein, the ratio of low density lipoprotein to high density lipoprotein and the ratio of total cholesterol to high density lipoprotein did not differ significantly in exposed subjects when compared with the control group. Conclusion: These results showed that frequent exposure to petrol fumes may be highly deleterious to the liver cells. abstract_id: PUBMED:10338120 Human platelets exclusively bind oxidized low density lipoprotein showing no specificity for acetylated low density lipoprotein. The widely studied macrophage scavenger receptor system is known to bind both acetylated low density lipoprotein and oxidized low density lipoprotein. Although only the latter ligand has been shown to occur in vivo, acetylated low density lipoprotein is often used to evaluate the contribution of scavenger receptors to different (patho)physiologic processes, assuming that all existing subtypes of scavenger receptors recognise both lipoproteins. In the present work, we identify human platelets as the first natural cell type to bind oxidized low density lipoprotein without showing specificity for acetylated low density lipoprotein. Consequently, platelets possess exclusive receptor(s) for oxidized low density lipoprotein distinct from the 'classical' scavenger receptor AI/AII. From the data presented in this work, we conclude that the class B scavenger receptor CD36 (GPIV) is responsible for this exclusive oxidized low density lipoprotein binding. abstract_id: PUBMED:9336016 Low-density lipoprotein and oxidised low-density lipoprotein: their role in the development of atherosclerosis. Oxidation of low-density lipoprotein (LDL) may be implicated in the development of atherosclerotic disease. Oxidised LDL is taken up more readily by monocyte-derived macrophages than LDL. Antibodies to oxidised LDL are found in atherosclerotic lesions. Increased risk of ischaemic heart disease is associated with a preponderance of small dense LDL particles, which are more susceptible to oxidation.
Proatherogenic alterations in cell biochemistry and signalling pathways occur in the presence of LDL and more markedly oxidised LDL. In vitro antioxidants inhibit changes in cell biochemistry, while in vivo, they have been shown to attenuate or reverse development of atherosclerosis. Answer: The provided abstracts do not directly address whether low-density lipoprotein (LDL) can be too low. However, they do discuss various aspects of LDL's role in health and disease, including its impact on atherosclerosis, its modification by statins, and its oxidative forms. For instance, statins are known to significantly reduce LDL-cholesterol (LDL-C) levels and modestly raise or not alter high-density lipoprotein-cholesterol (HDL-C) levels. Atorvastatin, specifically, can decrease LDL-C levels and alter LDL subfractions, potentially leading to a favorable modification of the HDL subfraction phenotype (PUBMED:24739131). Oxidized forms of LDL (OX-LDL) have been shown to have detrimental effects, such as enhancing the expression of monocyte chemoattractant protein-1 (MCP-1) by monocytes (PUBMED:9275656), down-regulating base excision repair activity in mouse monocytes (PUBMED:10783327), and being implicated in the development of atherosclerosis (PUBMED:9336016). OX-LDL is also recognized as a biomarker of in vivo oxidative stress and is associated with various diseases related to oxidative stress (PUBMED:22798705). While the abstracts do not discuss the potential risks of having too low LDL levels, it is known from other sources that very low levels of LDL-C could potentially lead to certain health issues, such as hormonal imbalances and neurological problems, since cholesterol is essential for the synthesis of steroid hormones and the maintenance of cell membrane integrity. However, this topic would require further research and literature that specifically addresses the consequences of LDL levels being too low.
Instruction: Total long-term within-laboratory precision of cortisol, ferritin, thyroxine, free thyroxine, and thyroid-stimulating hormone assays based on a College of American Pathologists fresh frozen serum study: do available methods meet medical needs for precision? Abstracts: abstract_id: PUBMED:15737024 Total long-term within-laboratory precision of cortisol, ferritin, thyroxine, free thyroxine, and thyroid-stimulating hormone assays based on a College of American Pathologists fresh frozen serum study: do available methods meet medical needs for precision? Context: It is important that the total long-term precision of laboratory methods meet the medical needs of the patients being served. Objectives: To determine the long-term within- and between-laboratory variation of cortisol, ferritin, thyroxine, free thyroxine, and thyroid-stimulating hormone measurements using commonly available methods and to determine if these variations are within accepted medical needs. Design: Two vials of pooled frozen serum were mailed 6 months apart to laboratories participating in 2 separate College of American Pathologists surveys. The data from those laboratories that analyzed an analyte in both surveys were used to determine for each method the total variance and the within- and between-laboratory components. Setting: The study included the A mailing of the 2003 College of American Pathologists Ligand Survey and the C mailing of the Chemistry Survey. Main Outcome Measures: For each analyte, total variance was partitioned into within- and between-laboratory components for each analytic method. The within-laboratory variations were then compared with imprecision criteria based on biological variation. Participants: The laboratories that reported results on the same analyte using the same method in both surveys. Results: For each analyte, the median of the long-term within-laboratory variances of each peer group was 78% to 95% of its total-survey variance, and the median long-term within-laboratory coefficients of variation varied from 5.1% to 7.6%. The number of methods that met within-laboratory imprecision goals based on biological criteria were 5 of 5 for cortisol; 5 of 7 for ferritin; 0 of 7 for thyroxine and free thyroxine; and 8 of 8 for thyroid-stimulating hormone. Conclusions: For all analytes tested, the total within-laboratory component of variance was the major source of variability in this study. In addition, there are several methods, especially for thyroxine and free thyroxine, that may not meet analytic goals in terms of their imprecision. abstract_id: PUBMED:15737023 Analytic bias of thyroid function tests: analysis of a College of American Pathologists fresh frozen serum pool by 3900 clinical laboratories. Context: In proficiency testing surveys, there are differences in the values reported by users of various analytic methods. Two contributors to this variation are calibrator bias and matrix effects of proficiency testing materials. Objectives: (1) To quantify the biases of the analytic methods used to measure thyroid-stimulating hormone, thyroxine, triiodothyronine, free thyroxine, and free triiodothyronine levels; (2) to determine if these biases are within allowable limits; and (3) to ascertain if proficiency testing materials correctly identify these biases. Design: A fresh frozen serum specimen was mailed as part of the 2003 College of American Pathologists Ligand and Chemistry surveys. 
The means and SDs for each analytic method were determined for this sample as well as for a proficiency testing sample from both surveys. In the fresh frozen serum sample, target values for thyroxine and triiodothyronine were determined by isotope dilution/liquid chromatography/tandem mass spectrometry. All other target values in the study were the median of the means obtained for the various analytic methods. Main Outcome Measures: Calibration biases were calculated by comparing the mean of each analytic method with the appropriate target values. These biases were evaluated against limits based on intra- and interindividual biological variation. Matrix effects of proficiency testing materials were assessed by comparing the rank of highest to lowest analytic method means (Spearman rank test) for each analyte. Participants: Approximately 3900 clinical laboratories were enrolled in the College of American Pathologists Chemistry and Ligand surveys. Results: The number of methods in the Ligand Survey that failed to meet the goals for bias was 7 of 17 for thyroid-stimulating hormone and 11 of 13 for free thyroxine. The failure rates were 12 of 16 methods for thyroxine, 8 of 11 for triiodothyronine, and 9 of 11 for free triiodothyronine. The means of the analytic method for the proficiency testing material correlated significantly (P < .05) only with the fresh frozen serum means for thyroxine and thyroid-stimulating hormone in the Chemistry Survey and free triiodothyronine in the Ligand Survey. Conclusions: A majority of the methods used in thyroid function testing have biases that limit their clinical utility. Traditional proficiency testing materials do not adequately reflect these biases. abstract_id: PUBMED:31526200 Effect of sampling time on estimates of thyroid-stimulating hormone, free thyroxine, and free triiodothyronine levels. Despite the need for precise measurement of thyroid hormone levels for disease detection and monitoring, the importance of sampling time for the accuracy of widely used assays is unclear. In this study, the effects of the time of day on thyroid-stimulating hormone (TSH), free triiodothyronine (FT3), and free thyroxine (FT4) levels were retrospectively evaluated using data obtained from the Laboratory Information System of Peking Union Medical College Hospital. TSH concentrations were significantly lower, and FT4 and FT3 concentrations were higher in males than in females (all p < .05). TSH decreased from 7 am to 1 pm and increased from 1 pm to 4 pm. FT3 concentrations differed among sampling time groups (p < .05). Compared to those at 7 am, the deviations in TSH, FT4, and FT3 were -24.53% to -7.41%, -0.84% to 0.84%, and -0.66% to 0.33%, respectively. TSH levels fluctuated during the day, but FT4 and FT3 did not exhibit a circadian rhythm. The time of sample collection must be standardized for TSH measurements. abstract_id: PUBMED:179126 Thyroxine, triiodothyronine, and thyrotropin in serum during long-term diphenylhydantoin therapy. Clinical laboratory tests for assessment of thyroid function were performed in 31 patients receiving long-term diphenylhydantoin (DPH) therapy. Reduced serum total thyroxine and free thyroxine index were found. The decrease in serum total triiodothyronine and free triiodothyronine index was smaller but statistically significant. DPH had no significant effect on the serum thyrotropin (TSH) concentration and maximal TSH response to TSH-releasing hormone.
The absence of reciprocal changes in TSH and serum thyroxine might be due to the near-normal triiodothyronine concentration. Such an explanation would also conform with previous findings that the pituitary contains specific receptors for triiodothyronine but not thyroxine. abstract_id: PUBMED:4132967 Evaluation of new kits for the assessment in vitro of thyroid function by determination of serum total thyroxine, free TBG capacity, and free thyroxine index using Sephadex G-25 and 125I-labelled triiodothyronine and thyroxine. A trial of new kits (Ames Co) was carried out on 134 patients, using columns of Sephadex G-25 for the determination of serum total thyroxine (Tetralute test) and for the indirect estimation of serum free thyroxine-binding globulin capacity (Trilute test). Both new methods were quicker and easier than the reference resin methods and of similar precision. The two measurements when combined to give a free thyroxine index (Trilute-Tetralute-FTI) increased further the diagnostic discrimination and usefulness of the tests. The method for the determination of serum thyroxine can be modified to give a direct estimate of serum free thyroxine, expressed as a free thyroxine index. This new single-column technique, called the 'single-column free thyroxine index', gave a good correlation with clinical thyroid status in a preliminary trial of 45 patients. abstract_id: PUBMED:34775473 Preterm to term infant postmenstrual age reference intervals for thyroid-stimulating hormone and free thyroxine. Background: Infants born preterm are affected by a hypothalamic-pituitary-thyroid axis that is immature and still developing as they progress closer to corrected term gestation. Multiple risk factors place preterm infants at risk for a hypothyroid state. However, there is variability in thyroid-stimulating hormone cutoff values and limited data on free thyroxine reference intervals to guide clinicians. Methods: 1584 thyroid-stimulating hormone and 1576 free thyroxine laboratory samples that were originally collected to screen hospitalized infants for delayed onset of hypothyroidism were retrospectively evaluated from a group of 1087 infants who ranged in postmenstrual age from 25 to 43 weeks gestation at the time of laboratory sample collection. Median thyroid hormone values and reference intervals were established using R and the mixtools package. Results: Thyroid-stimulating hormone reference intervals remained similar across gestational ages from 0.340-9.681 µIU/mL in 25-27 6/7-week infants to 1.090-7.627 µIU/mL in 40-43-week infants. For the same age groups, free thyroxine reference intervals increased from 0.42-0.91 ng/dL to 0.87-1.32 ng/dL. Conclusion: The reference intervals identified suggest that infants <31 weeks gestation have a higher thyroid-stimulating hormone and lower free thyroxine level at baseline than previously anticipated. Impact: The increasing free thyroxine values in preterm to term infants indicate a maturing hypothalamic-pituitary-thyroid axis. Clinicians need thyroid hormone reference intervals that also vary by postmenstrual age to aid the evaluation of sick preterm infants who are at risk of a delayed hypothyroidism diagnosis that can be missed on the initial newborn screen. This study provides one of the largest samples of thyroid-stimulating hormone and free thyroxine data to establish reference intervals in preterm infants.
Clinicians may utilize the identified postmenstrual age-based reference intervals to inform follow-up thyroid testing in preterm infants at several weeks postnatal age. abstract_id: PUBMED:23359141 The effects of simvastatin on the serum concentrations of thyroid stimulating hormone and free thyroxine in hypothyroid patients treated with levothyroxine. Background: Statins, such as simvastatin, are the drugs of choice for the treatment of hypercholesterolemia. On the other hand, hypercholesterolemia can occur in hypothyroid patients who receive levothyroxine. There are few clinical case reports regarding drug interactions between levothyroxine and lovastatin or simvastatin, indicating decreased levothyroxine effects. This study aimed at determining possible interaction between simvastatin and levothyroxine in hypothyroid patients by assessing serum levels of thyroid stimulating hormone (TSH) and free thyroxine (FT4), the two important laboratory indices for levothyroxine therapy. Methods: In a cross-sectional study, 41 eligible hypothyroid patients receiving levothyroxine (50-150 µg/d) were selected. Blood samples were taken before and after three months of simultaneous treatment with simvastatin (20 mg/d) and levothyroxine to determine the serum levels of TSH and FT4. Results: There was no significant difference between the serum levels of TSH (P=0.77) or FT4 (P=0.76) before and after three months of simultaneous treatment. Also, there was no aggravation or initiation of any sign or symptom of hypothyroidism in the patients during the study period. Conclusion: Considering that FT4 and TSH are the most reliable indicators for levothyroxine treatment, the findings of the present study suggest that there may not be any significant interaction between simvastatin and levothyroxine. abstract_id: PUBMED:15637904 Evaluation of a kit (Thyrolute) for the combined determination of serum total thyroxine and sequential free thyroxine index using Sephadex G-25 and 125I-thyroxine. A new kit (Thyrolute, Ames) for the combined determination of serum total thyroxine (T-4) and sequential free thyroxine index (F.T.I.) using Sephadex G-25 and 125I-thyroxine was evaluated in 136 patients and normal subjects. The T-4 determination was virtually identical to that used in the Ames Tetralute kit and had a similar accuracy and precision. The sequential F.T.I. was compared with a two-stage F.T.I. The two F.T.I.s showed highly significant correlations in the various groups of patients except euthyroid women with raised thyroxine-binding globulin (TBG) (pregnant or oral contraceptive). The overlap found for the sequential F.T.I. between euthyroid, hypothyroid, and thyrotoxic patients was slightly inferior (9%) to that found with the two-stage F.T.I. (6%), but its diagnostic success rate was higher than that of the serum T-4 determination alone. Serial observations of serum T-4 and sequential F.T.I. were also made on eight patients receiving carbimazole therapy for hyperthyroidism. The sequential F.T.I. showed complete parallelism with serum T-4 regardless of thyroid status, so that it was of no practical value in these patients. It was concluded that the sequential F.T.I. kit would be of most value in the smaller hospital laboratory lacking facilities for the radioimmunoassay of thyroid hormones and thyroid stimulating hormone.
Background Thyroid hormone levels are essential for diagnosing and monitoring thyroid diseases. However, their reference intervals (RIs) in elderly Chinese individuals remain unclear. We aimed to identify factors affecting thyroid-stimulating hormone (TSH), free triiodothyronine (FT3), and free thyroxine (FT4) levels using clinical "big data" to establish hormone level RIs for elderly Chinese individuals. Methods We examined 6781, 6772, and 6524 subjects aged ≥65 years who underwent FT3, FT4, and TSH tests, respectively, at the Peking Union Medical College Hospital between September 1, 2013, and August 31, 2016. Hormones were measured using an automated immunoassay analyzer (ADVIA Centaur XP). RIs were established using the Clinical Laboratory Standards Institute document C28-A3 guidelines. Results The median TSH was significantly higher in women than in men; the opposite was true for median FT3 and FT4 levels. No differences were observed in TSH or FT4 by age in either sex or overall; FT3 levels significantly decreased with age. Seasonal differences were observed in TSH and FT3 levels but not FT4 levels; the median TSH was the highest in winter and lowest in summer, whereas the median FT3 was the lowest in summer (albeit not significantly). RIs for TSH were 0.53-5.24 and 0.335-5.73 mIU/L for men and women, respectively; those for FT3 were 3.76-5.71, 3.60-5.42, and 3.36-5.27 pmol/L in 64- to 74-, 75- to 84-, and 85- to 96-year-old subjects, respectively. The RI for FT4 was 11.70-20.28 pmol/L. Conclusions RIs for TSH in elderly individuals were sex specific, whereas those for FT3 were age specific. abstract_id: PUBMED:8323787 Thyroxine prescription in the community: serum thyroid stimulating hormone level assays as an indicator of undertreatment or overtreatment. Examination of thyroxine usage in a study in the United States of America revealed that many patients were prescribed thyroxine for non-thyroid indications, such as obesity and fatigue. Many of those receiving thyroxine had high or low serum thyroid stimulating hormone levels, indicating prescription of incorrect doses or lack of patient compliance with therapy. Long term thyroxine therapy may have effects upon the risk of osteoporosis. The aims of this study were to investigate indications for thyroxine prescription in the United Kingdom and to examine the frequency of abnormal serum thyroid stimulating hormone concentrations in those prescribed thyroxine for hypothyroidism. This was in order to determine the relevance of measurement of thyroid stimulating hormone level in monitoring thyroxine therapy. Subjects receiving thyroxine were identified from the computerized prescribing records of four general practices in the West Midlands. Of 18,944 patients registered, 146 (0.8%) were being prescribed thyroxine; 134 of these had primary hypothyroidism and the remainder had other thyroid or pituitary diseases prior to treatment. Of the 97 patients with primary hypothyroidism who agreed to have their thyroid stimulating hormone level measured, abnormal serum levels were found in 48%, high levels in 27% and low levels in 21%. There was a significant relationship between prescribed thyroxine dose and median serum thyroid stimulating hormone level: high hormone levels were found in 47% of those prescribed less than 100 micrograms thyroxine per day, while low levels were found in 24% of those prescribed 100 micrograms or more. 
Thus, thyroxine prescription was common in the four practices sampled, although indications for its use were appropriate. (ABSTRACT TRUNCATED AT 250 WORDS) Answer: The study conducted to determine the long-term within- and between-laboratory variation of cortisol, ferritin, thyroxine, free thyroxine, and thyroid-stimulating hormone measurements found that the total within-laboratory component of variance was the major source of variability for all analytes tested. The median long-term within-laboratory coefficients of variation for each analyte varied from 5.1% to 7.6%. When comparing the within-laboratory variations with imprecision criteria based on biological variation, the number of methods that met within-laboratory imprecision goals was 5 of 5 for cortisol, 5 of 7 for ferritin, 0 of 7 for thyroxine and free thyroxine, and 8 of 8 for thyroid-stimulating hormone. This indicates that while the methods for measuring cortisol and thyroid-stimulating hormone generally met the analytic goals for precision, several methods for thyroxine and free thyroxine may not meet these goals in terms of their imprecision (PUBMED:15737024). Furthermore, a related study using the same fresh frozen serum pool revealed that a majority of the methods used in thyroid function testing have biases that limit their clinical utility, and traditional proficiency testing materials do not adequately reflect these biases (PUBMED:15737023). This suggests that while some methods meet the precision needs for certain assays, there are significant concerns regarding the accuracy and reliability of others, particularly in the context of thyroid function tests. Therefore, it can be concluded that not all available methods meet the medical needs for precision, especially for thyroxine and free thyroxine assays.
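The precision study summarized in this answer rests on a components-of-variance calculation: each laboratory measured the same pooled frozen serum in two survey mailings, and the paired results were used to split total variance into within- and between-laboratory parts, with the within-laboratory coefficient of variation (CV) then compared against biological-variation goals. The Python sketch below is a minimal illustration of that calculation using made-up paired results and the standard two-replicate variance-components formulas; it is not necessarily the exact estimator used in PUBMED:15737024.

    from statistics import mean, variance

    # Hypothetical paired results (mailing A, mailing C) for one analyte and one
    # method peer group; each pair is one laboratory measuring the same pooled
    # frozen serum on two occasions about six months apart.
    pairs = [(101.0, 96.0), (98.0, 104.0), (110.0, 102.0),
             (95.0, 99.0), (103.0, 97.0), (100.0, 108.0)]

    lab_means = [(a + b) / 2 for a, b in pairs]
    grand_mean = mean(lab_means)

    # Within-laboratory variance from duplicate differences: sum(d^2) / (2 * n_labs).
    within_var = sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))
    # Between-laboratory variance: variance of lab means minus the within-lab share.
    between_var = max(variance(lab_means) - within_var / 2, 0.0)
    total_var = within_var + between_var

    cv_within = 100 * within_var ** 0.5 / grand_mean
    within_share = 100 * within_var / total_var

    print(f"within-laboratory CV = {cv_within:.1f}%")
    print(f"within-laboratory share of total variance = {within_share:.0f}%")

In the CAP data the within-laboratory component computed in this way accounted for 78% to 95% of the total variance, which is why the answer above identifies within-laboratory imprecision as the dominant source of variability.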
Instruction: Does a clerkship in psychiatry contribute to changing medical students' attitudes towards psychiatry? Abstracts: abstract_id: PUBMED:34648168 Stigma in Psychiatry: Impact of a Virtual and Traditional Psychiatry Clerkship on Medical Student Attitudes. Objective: The objective of the study was to assess the change in medical students' attitudes towards psychiatry following a virtual clerkship experience compared to a traditional clerkship experience. Method: Ninety-seven medical students from the University of Ottawa were assessed pre- and post-clerkship on the ATP-30 (Attitudes Towards Psychiatry-30) measure. Cohorts of students were categorized as pre-COVID or during-COVID depending on when and how they experienced their clerkship (traditional or virtual). The total student response rate was approximately 48%. A quasi-experimental design was implemented, and non-parametric statistics were used to analyze the data. Results: Medical students' overall attitudes towards psychiatry improved from pre- to post-clerkship, with the type of clerkship experience (traditional or virtual) having no significant impact on the magnitude to which attitudes improved. Conclusion: Implementation of a virtual clerkship in psychiatry did not deteriorate medical student attitudes towards psychiatry as a specialty, with both the traditional and virtual clerkship program enhancing students' attitudes towards psychiatry favorably. abstract_id: PUBMED:24464416 Impact of the psychiatry clerkship on medical student attitudes towards psychiatry and to psychiatry as a career. Objective: The psychiatry clerkship forms part of the core curriculum of medical schools worldwide and provides psychiatric educators with an ideal opportunity to positively influence students. The aim of this paper is to systematically review literature on the impact of the psychiatry clerkship to determine the effect on attitudes towards psychiatry and to psychiatry as a career. Method: A systematic review was undertaken. The following key search words were used to search a number of electronic databases: medical student/s, attitude/s, psychiatry and clerkship. Studies published in the English language from 1990 to the present were included. Studies were included if they were based on a pre-/post-design, i.e. the same students must have participated in the study both before and after the clerkship. Results: Twenty-six studies from 19 countries were identified for the review. Sixteen studies reported an overall improvement in attitudes towards psychiatry post-clerkship, and ten found no change in attitudes. In terms of career choice, nine studies reported an increase in the number of students interested in psychiatry as a career post-clerkship, nine found no impact on career choice and, in eight studies, it was not assessed. A number of positive and negative factors regarding the clerkship were identified. Conclusion: Overall, the psychiatry clerkship has a positive impact on students' attitudes towards psychiatry, but does not improve interest in psychiatry as a career option. For those students particularly interested in psychiatry, the challenge is to maintain their enthusiasm post-clerkship. Charismatic teachers, mentorship and stigma reduction may be effective strategies. Future research needs to more clearly identify specific components of the clerkship that are viewed favorably by students. 
abstract_id: PUBMED:38223472 Impact of Psychiatry Clerkship Rotation in Attitudes Towards Mental Illness and Psychiatry as a Career Among Medical Students. Background: Stigma associated with mental illness (MI) permeates many professions, including healthcare. Recognizing and correcting bias is critical in delivering impartial and beneficial healthcare for all patients. Early educational interventions providing exposure to individuals with MI have been shown to be effective at reducing MI stigma. The primary aim of our study was to assess the impact of a psychiatry clerkship on attitudes to MI. A secondary aim was to determine if the psychiatry clerkship influenced medical students' perceptions of psychiatry as a career. Methods: A cohort of third-year medical students in Florida was invited to complete an online survey before and after participating in their first 4-week-long psychiatry clerkship during the 2021-2022 academic year. The voluntary, anonymous survey consisted of the Attitudes to Mental Illness Questionnaire (AMIQ) and a 3-item questionnaire on interest and knowledge in psychiatry. The Wilcoxon signed-rank test was used to determine statistical significance (P < .05) for pre- and post-clerkship values. Results: Among 39 invited students, 22 participated before (56.4%), and 23 participated after their psychiatry rotation (59.0%). Overall, there was a statistically significant increase in the perceived level of general interest in psychiatry (P = .027), psychiatry knowledge (P < .001), and career interest in psychiatry (P = .040). There was also a significant decrease in the stigmatized attitude score for depression and self-harm after their psychiatry rotation (P = .042). Finally, the participants initially showed the highest stigmatized attitude score for intravenous drug abuse among the 4 mental illnesses presented, which also included depression and suicidal ideation, alcohol use disorder, and schizophrenia. Conclusion: The findings suggest that a psychiatry clerkship provided a positive exposure to the field, enhanced medical students' overall interest in psychiatry, and positively impacted medical students' attitudes towards MI. abstract_id: PUBMED:18349336 Does a clerkship in psychiatry contribute to changing medical students' attitudes towards psychiatry? Objective: The psychiatric clerkship is perceived as an intervention point in medical students' attitude toward psychiatry and career choice after graduation. The authors aim to assess the impact of the psychiatric clerkship in students of Israeli and U.S. origin on their attitude toward psychiatry. Methods: A modified Nielsen's questionnaire was administered at the start and end of the clerkship in two groups of students. Results: There was no statistically significant difference in attitude scores between the start and endpoint of the clerkship in either the Israeli or the United States group. Item analysis did not reveal a significant impact of the clerkship. Conclusion: The psychiatry clerkship does not change students' view on the attitude assessed in this study, regardless of their origin (Israeli or U.S.). Further research is needed in order to find more clerkship-dependent contributors to positive or negative attitudes toward psychiatry. abstract_id: PUBMED:3600438 Students' attitudes towards psychiatry.
Using a reliable measure, a self-administered questionnaire (ATP), and adequate numbers of students, this study demonstrates the negative effects of general medical/surgical training and the positive effect of the psychiatry clerkship on students' reported attitudes towards psychiatry. A negative view of psychiatry at the beginning of the clerkship may make students reluctant to improve their interview skills but is otherwise unimportant in determining their reactions and performance in the clerkship. Medical students' interest in psychiatry as a career increases during the psychiatry clerkship but this merely offsets the decline in interest that occurs at other stages during the clinical training. Much of the attitudinal change that occurs during the psychiatry clerkship is maintained into the pre-registration year but the housemen recorded a significant negative change on the items concerning efficacy of psychiatric treatment and attitudes towards psychiatric patients. Additional training by psychiatrists during the pre-registration year might increase the motivation of young doctors to detect and treat psychiatric illness which they will encounter in all branches of medicine. abstract_id: PUBMED:3214825 Medical students' attitude towards psychiatry. The attitudes of medical students are important as they influence recruitment to psychiatry. In order to better understand the attitudes of medical students towards psychiatry and the impact of psychiatric clerkship, the authors administered a questionnaire eliciting information regarding demographic data, attitudes and career choices to third year medical students of the University of Ottawa, at the beginning and end of their four week psychiatric clerkship. The data suggests that the students found their clerkship experience valuable and rewarding and expressed an interest in knowing more about psychiatry. The students' perception of the various aspects of their clinical practice of psychiatry became more positive after the clerkship. The authors have identified the positive and negative experiences the students had during their clerkship and postulate that the change in attitude is directly due to their clerkship experience. abstract_id: PUBMED:25888984 Impact of a psychiatry clerkship on stigma, attitudes towards psychiatry, and psychiatry as a career choice. Background: Mental illnesses are a major public health problem around the world and the prevalence and burden of common mental disorders is growing. Psychiatry is an unpopular career choice for many medical students and this impacts negatively on the supply of psychiatrists to the workforce. The psychiatry clerkship can play an important role in influencing students' attitudes towards psychiatry, either positively or negatively. However, stigma towards mental illness detracts students from considering a career in psychiatry. The aim of this study was to assess the impact of an eight week psychiatry clerkship on i) student knowledge and interest in psychiatry; ii) psychiatry as a career choice; iii) attitudes towards psychiatry; and iv) perceptions of stigma towards mental illness. Method: Year 4 medical students at the University of Western Australia completed two questionnaires, the Balon Attitudes Towards Psychiatry and the Mental Illness Clinicians Attitudes (MICA), at the beginning and end of the psychiatry clerkship. Interest in, knowledge of, and consideration of psychiatry as a career were also assessed. 
Non-parametric tests were used to compare baseline and follow-up differences on the Balon and MICA. Unpaired t-tests compared mean differences for interest, knowledge and psychiatry as a career. Results: Attitudes towards psychiatry were positive at the beginning of the clerkship. Overall, there was a significant decrease in negative and stigmatising views towards mental illness post clerkship measured by the MICA, but the follow-up mean score remained close to the neutral value with views in some areas becoming more negative. There was no significant improvement in students' interest in psychiatry post clerkship, however, knowledge of psychiatry improved significantly. Numbers of students 'definitely considering' psychiatry as a career increased significantly from 7 (4.6%) students at baseline to 17 (10.5%) at follow-up. Conclusion: The clerkship made a modest impact on students' attitudes to psychiatry, stigma and consideration of psychiatry as a career. Integration of strategies to overcome stigma towards mental illness and the mental health profession into pre-clinical teaching may provide students with skills to prepare them for the clerkship. This may assist in improving attitudes towards psychiatry and encourage more students towards a psychiatry career. abstract_id: PUBMED:20678213 Impact of clerkship in the attitudes toward psychiatry among Portuguese medical students. Background: Given the shortage of human resources and the launching of a new Mental Health Plan, recruitment of psychiatrists is currently a major concern in Portugal, as well as in several other countries. Medical students' attitude toward psychiatry has been pointed as a predictor of recruitment. This study aims to evaluate the medical students' perception of psychiatry before and after a clerkship, and the impact on their intention to pursue psychiatry as a future specialty option. Methods: Two self-report questionnaires were administered to all 6th year students in a medical school in Lisbon, before and after a 4-weeks full-time psychiatric clerkship, in order to evaluate attitudes toward psychiatry and intention to follow psychiatry in the future. Statistical analysis included Wilcoxon and Chi-square tests. Results: 153 students (60.8% female) filled in both questionnaires (no dropouts). After the clerkship, there was a significant improvement regarding the overall merits of psychiatry, efficacy, role definition and functioning of psychiatrists, use of legal powers to hospitalize patients and specific medical school factors. There was also a significant increase of students decided or considering the possibility to take a residency in psychiatry.However, perceptions of low prestige and negative pressure from family and peers regarding a future choice of psychiatry remained unchanged in about one-third of the students. Conclusions: The results indicate clearly that the clerkship had a favorable overall impact on the student attitude towards psychiatry, as well as in the number of students considering a future career in psychiatry. Attitudes toward psychiatry seems a promising outcome indicator of the clerkship's quality, but further research is needed in order to assess its reliability as a sound predictor of recruitment. abstract_id: PUBMED:9009515 Changes in attitudes of medical students towards psychiatry: an evaluation of a clerkship in psychiatry. 
The attitudes to psychiatry of medical students before and after their clinical rotation in psychiatry were assessed during the course of the academic year 1989-1990 by a questionnaire administered before and after the clerkship. Analysis of the data indicated that although there was no major shift in attitudes, the clerkship did succeed in improving clinical skills such as the capability for a better understanding of patients and the improvement of doctor-patient relations, as well as giving the student a more realistic approach to the diagnosis, treatment and prognosis of mentally ill patients. abstract_id: PUBMED:23682250 Iranian medical students' perception of psychiatry: before and after a psychiatry clerkship. Objective: We aimed to compare the medical students' attitude towards psychiatry before and after psychiatry clerkship, and to examine the association of choosing psychiatry as a future career with some personal characteristics. Method: In a self-controlled, quasi-experimental study, all of the medical students entering the psychiatry clerkship in three major medical schools of Iran located in Tehran (Tehran, Shahid Beheshti, and Iran University of Medical Sciences) were asked to participate anonymously in the study on the first and the last 3-days of their psychiatry clerkship. From 346 invited 4th-5th year medical students, 225 (65%) completed anonymous self-report questionnaires before and after a 4-week psychiatry clerkship. Results: Positive response to choose psychiatry as a career was seen in 13.3% and 18.3% before and after psychiatry rotation, respectively. However, the difference was not statistically significant; about one-quarter of the students were turned on to psychiatry and 25% were discouraged during the clerkship. Individual pair wise comparisons revealed significant improvements only in two out of 13 measured aspects of psychiatry. Seventeen out of 38 (47.7%) students who identified psychiatry as the career of choice or strong possibility reported that one of their family members or close friends' mental illness had an impact on their choice. Those students who considered psychiatry as the strong possibility claimed that they are more interested in humanities (OR = 2.96; 95% CI: 1.17, 7.49), and playing a musical instrument (OR = 2.53; 95% CI: 1.15, 5.57). Conclusion: It may be concluded that exposure to psychiatry clerkship could influence medical students' opinion about psychiatry positively, or negatively. Personal characteristics and individual interests of students may play an important role in choosing psychiatry as their future career. Answer: Yes, a clerkship in psychiatry does contribute to changing medical students' attitudes towards psychiatry. Multiple studies have shown that medical students' overall attitudes towards psychiatry improve from pre- to post-clerkship experiences. For instance, a study involving the University of Ottawa medical students found that both traditional and virtual clerkship experiences favorably enhanced students' attitudes towards psychiatry (PUBMED:34648168). Similarly, a systematic review of 26 studies from 19 countries indicated that the majority of studies reported an overall improvement in attitudes towards psychiatry post-clerkship (PUBMED:24464416). Furthermore, a study with third-year medical students in Florida demonstrated a statistically significant increase in general interest in psychiatry, psychiatry knowledge, and career interest in psychiatry after completing a psychiatry clerkship. 
This study also noted a significant decrease in stigmatized attitudes towards mental illness following the clerkship (PUBMED:38223472). Another study, however, did not find a statistically significant difference in attitude scores before and after a clerkship among Israeli and U.S. students, suggesting that the impact of clerkships might vary across different student populations or educational settings (PUBMED:18349336). Despite these positive findings, some studies have reported that while attitudes towards psychiatry may improve, this does not necessarily translate into an increased interest in psychiatry as a career option (PUBMED:24464416). Nevertheless, some studies have observed an increase in the number of students interested in psychiatry as a career post-clerkship (PUBMED:25888984, PUBMED:20678213). In conclusion, while the impact of a psychiatry clerkship on medical students' attitudes towards psychiatry can be positive, the extent of this change and its influence on career choice may vary. The clerkship experience generally tends to improve attitudes and reduce stigma, but additional factors such as personal characteristics and pre-existing interests also play a role in shaping students' perceptions and career considerations (PUBMED:23682250).
Instruction: Extended pelvic lymphadenectomy and various radical prostatectomy techniques: is pelvic drainage necessary? Abstracts: abstract_id: PUBMED:26166965 Predictive factors for lymph node positivity in patients undergoing extended pelvic lymphadenectomy during robot assisted radical prostatectomy. Introduction: Pelvic lymphadenectomy during radical prostatectomy (RP) improves staging and may provide a therapeutic benefit. However, there is no clear consensus on the selection criteria for subjecting patients to this additional procedure. With a growing adoption of robot assisted radical prostatectomy (RARP) in India, it has become imperative to study the incidence and predictive factors for lymph node involvement in our patients. Materials And Methods: From February 2010 to February 2014, 452 RARP procedures were performed at our institution. A total of 100 consecutive patients from July 2011 to August 2012 were additionally subjected to a robotic extended pelvic lymphadenectomy (EPLND). Lymph node positivity rates and lymph node density were analyzed on the basis of preoperative prostate specific antigen (PSA), Gleason score, clinical stage, D'Amico risk category and magnetic resonance imaging (MRI) findings. Multivariate analysis was performed to ascertain factors associated with lymph node positivity in our cohort. Results: The mean age of the patients was 65.5 (47-77) years and the body mass index was 26.3 (16.3-38.7) kg/m(2). The mean console time for EPLND was 45 (32-68) min. A median of 17 (two to 40) lymph nodes were retrieved. Seventeen patients (17%) had positive lymph nodes (median of 1, range 1-6). Median lymph node density in these patients was 10%. When stratified by PSA, Gleason score, clinical stage, D'Amico risk category and features of locally advanced disease on MRI, a trend towards increasing incidence of lymph node positivity was observed, with an increase in adverse factors. However, on multivariate analysis, clinical stage > T2a was the only significant factor impacting lymph node positivity in our cohort. Conclusions: A significant proportion of men undergoing RARP in India have positive lymph nodes on EPLND. While other variables may also have a potential impact, a higher clinical stage predisposes to an increased incidence of lymph node metastases. abstract_id: PUBMED:23356829 Extended pelvic lymphadenectomy and various radical prostatectomy techniques: is pelvic drainage necessary? Unlabelled: WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: The occurrence of lymphoceles in patients after radical prostatectomy is well known (2-10%). It appears that patients undergoing open extraperitoneal radical prostatectomy develop more lymphoceles than patients undergoing robot-assisted radical prostatectomy with transperitoneal access. The present study investigates in a prospective randomized manner whether the time of drainage (1 vs 7 days) makes a difference or whether drainage is even necessary. The study data, collected in the same institution, are compared with the incidence of lymphocele in patients treated by robot-assisted radical prostatectomy. Objective: To investigate whether routine drainage is advisable after open extended pelvic lymph node dissection (ePLND) and retropubic radical prostatectomy (RRP) by measuring the incidence of lymphoceles and comparing these results with those of a series of robot-assisted radical prostatectomy (RARP) and ePLND. Patients And Methods: A total of 331 consecutive patients underwent ePLND and RRP or RARP.
The first 132 patients underwent open ePLND and RRP and received two pelvic drains; these patients were prospectively randomized into two groups: group 1 (n = 66), in which the drains were shortened on postoperative (PO) days 3 and 5 and removed on PO day 7, and group 2 (n = 66), in which the drains were removed on PO day 1. The next 199 patients were assigned to two consecutive groups not receiving drainage: group 3 (n = 73) undergoing open ePLND and RRP, followed by group 4 (n = 126) treated by transperitoneal robot-assisted ePLND and RARP. All patients had ultrasonographic controls 5 and 10 days and 3 and 12 months after surgery. Results: Lymphoceles were detected in 6.6% of all patients, 3.3% of whom were asymptomatic and 3.3% of whom were symptomatic. Symptomatic lymphoceles were detected in 0% of group 1, 8% of group 2, 7% of group 3 and 1% of group 4, with groups 2 and 3 differing significantly from group 4 (P < 0.05). In total, 5% of all patients undergoing open RRP (groups 1-3) had symptomatic lymphoceles vs 1% of patients undergoing RARP (group 4) (P = 0.06). Nodal-positive patients had significantly more symptomatic lymphoceles than nodal-negative patients (10% vs 2%) (P < 0.02). Conclusions: Symptomatic lymphoceles occur less frequently after open RRP and pelvic drainage over 7 days than after open RRP and pelvic drainage over 1 day or without drainage. Patients undergoing RARP without drainage had significantly fewer lymphoceles than patients receiving open RRP without drainage. abstract_id: PUBMED:29230586 Comparison of the diagnostic efficacy and perioperative outcomes of limited versus extended pelvic lymphadenectomy during robot-assisted radical prostatectomy: a multi-institutional retrospective study in Japan. Background: We conducted a retrospective study to compare the perioperative course and lymph node (LN) counts of patients undergoing limited pelvic lymphadenectomy (lPLND) or extended pelvic lymphadenectomy (ePLND) during robot-assisted radical prostatectomy in an initial Japanese series. Methods: The cohort included 1333 patients who underwent either lPLND (n = 902) or ePLND (n = 431) during robot-assisted radical prostatectomy at five institutions in Japan. All complications within 28 days of surgery were recorded, and clinical data were collected retrospectively. The outcomes and complications were compared relative to the extent of lymphadenectomy, and we conducted univariate and multivariate logistic regression analyses to assess the predictors of the major complications. Results: On multivariate analysis for evaluating the associations between major complications and perioperative characteristics, console time (p = 0.001) was significantly associated with major complications, although the extent of lymphadenectomy (p = 0.272) was not significantly associated with major complications. In the distribution of positive LNs removed in the extended pelvic lymphadenectomy cohort, 60.4% of patients had positive LNs only in the obturator/internal iliac region. However, 22.6% of the patients with positive LNs had no positive LNs in the obturator/internal iliac region, but only in the external/common iliac region. Conclusions: ePLND, which significantly increased the console time and blood loss but nearly quadrupled the lymph node yield, is considered a relatively safe and acceptable procedure. Moreover, the results of this study suggest that ePLND improves staging and removes a greater number of metastatic nodes.
abstract_id: PUBMED:31056859 Declining use of radical prostatectomy and pelvic lymphadenectomy despite more robotics: National population data over 15 years. Aim: To describe national surgical patterns of prostate cancer (PCa) care considering radical prostatectomy with or without pelvic lymphadenectomy and consideration of robotic-assisted techniques. Methods: Retrospective analysis of publicly accessible Medicare claims data was performed for the period 2001-2016 and included patients undergoing radical prostatectomy with or without pelvic lymphadenectomy relative to total and PCa-specific populations among men aged 45-84 years. Proportion of cases performed robotically was considered. Results: Total procedures performed increased from 2001, peaked in 2009 and subsequently decreased until 2016. Since 2009, the age-specific rate of surgery in men aged 75-84 increased by 2.3-fold, whereas the rates for men aged 55-64 and 45-54 reduced by 44% and 55%, respectively. Rates of concurrent pelvic lymphadenectomy fell until 2009 with subsequent stabilization (ratio 1.05-1.14) through to 2016. Significant regional practice patterns were observed, as was an increasing trend toward a robotic-assisted laparoscopic approach, comprising more than 80% of radical prostatectomies in 2016. Conclusion: Since the peak in 2009, radical prostatectomy is performed less in men <65 years and more in men ≥65 years. An increasing proportion of cases omit concurrent pelvic lymphadenectomy and are performed robotically. abstract_id: PUBMED:32634854 Impact of indocyanine green-guided extended pelvic lymph node dissection during robot-assisted radical prostatectomy. Objectives: To evaluate the effectiveness of indocyanine green-guided extended pelvic lymph node dissection during robot-assisted radical prostatectomy for intermediate- to high-risk prostate cancer. Materials And Methods: After institutional review board approval, between July 2017 and December 2018, we carried out 100 indocyanine green-guided extended pelvic lymph node dissections in patients with localized intermediate- and high-risk prostate cancer, using the Firefly (Novadaq Technologies, Mississauga, ON, Canada) and da Vinci Xi surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Indocyanine green was injected transrectally using ultrasound sonography before each surgery. Then, lymphatic drainage patterns and pathological findings were recorded. Results: Lymphatic drainage routes were successfully determined in 91 right-sided and 90 left-sided cases. Five main lymphatic pathways and sites were identified: (i) an internal route (57%); (ii) a lateral route (50%); (iii) a presacral route (20%); (iv) a paravesical artery site (20%); and (v) a pre-prostatic site (5%). Lymph node metastasis was positive in 15 patients, with 44 pathologically confirmed metastatic lymph nodes. Metastatic fluorescent lymph nodes were found in 15 out of 44 lymph nodes (34.1%). For sentinel lymph node identification, the 34% sensitivity and 64.8% specificity rates were detected in regard to identification of lymph node metastasis. Conclusions: Lymphatic drainage patterns from the prostate can be identified and classified using indocyanine green-guided extended pelvic lymph node dissections. Although the direct role of fluorescent lymph nodes in sentinel lymph node identification appears to be limited, the identification of lymphatic pathways could contribute to high-quality extended pelvic lymph node dissection during robot-assisted radical prostatectomy.
abstract_id: PUBMED:25547974 The role of extended pelvic lymphadenectomy with radical prostatectomy for high-risk prostate cancer. Introduction: The role of pelvic lymph node dissection (PLND) during radical prostatectomy (RP) for prostate cancer (PCa) is controversial. Despite extensive research in both patterns of lymphatic drainage and the clinical effect of lymph node involvement, the exact role of PLND in PCa is yet to be defined. Methods: A systematic search of the MEDLINE database was performed, and all relevant articles were reviewed in depth. Results: We included 84 relevant articles in our review and subdivided the information into the following categories: preoperative patient evaluation, procedure/extent of dissection, complications, and robotic surgery era. Most authors agree that the greatest benefit is seen in patients with high-risk PCa undergoing RP. Multiple imaging modalities have been evaluated for assistance in patient selection, but the use of preoperative nomograms appears to be the most helpful selection tool. The role of limited PLND vs. extended PLND (e-PLND) is yet to be defined, though many authors agree that e-PLND is preferred in the setting of high-risk PCa. Although PLND is associated with a higher incidence of complications, especially lymphocele formation, it is unclear whether e-PLND leads to more complications than limited PLND. The introduction of minimally invasive surgery may have had a negative effect on implementation of PLND in the appropriate patients undergoing RP. Conclusion: Despite a lack of prospective, randomized trials evaluating PLND in RP, there does appear to be a consistent benefit in patients with high-risk disease. abstract_id: PUBMED:29524857 An unusual localization of seven months delayed pelvic lymphocele following radical retropubic prostatectomy: Case report and literature review. Introduction: A pelvic lymphocele is a collection of lymphatic fluid that develops after extensive lymphadenectomies in surgeries such as urological malignancies or renal transplantation. Pelvic lymphoceles may cause complications such as fever, abdominal pain, leg swelling, genital swelling and flank pain. This report summarizes the management of a pelvic lymphocele after open radical retropubic prostatectomy with bilateral lymphadenectomy. Presentation Of Case: Herein, we present a case in which a pelvic lymphocele developed seven months after open radical retropubic prostatectomy and, through this case, we discuss lymphoceles following radical prostatectomy. The pelvic lymphocele occurred along the sciatic nerve from the sciatic foramen to the intergluteal muscles. The patient was treated with three drainage catheters. This localization is atypical and unusual for a lymphocele after radical retropubic prostatectomy. Discussion: Lymphocele formation that leads to major complications after radical prostatectomy is rare. Lymphocele formation is most commonly seen in the early postoperative period, but it should be considered in patients with fever, abdominal pain or leg swelling during the late postoperative period. Lymphocele formation was the most common cause of hospital readmission after radical prostatectomy. Conclusion: Lymphocele formation can be seen in atypical regions and can lead to unexpected complications after radical prostatectomy. Therefore, it should be kept in mind when complaints such as fever and lower extremity swelling occur in patients who have undergone extensive lymph node dissection.
Surgical treatment options are available, but percutaneous interventions can also be used. abstract_id: PUBMED:27313985 Internal Hernia Underneath an Elongated External Iliac Artery: A Complication After Extended Pelvic Lymphadenectomy and Robotic-assisted Laparoscopic Prostatectomy. Small bowel herniation underneath the iliac vessel after transperitoneal pelvic lymphadenectomy is a rare complication. This report describes the first case of bowel incarceration behind the external iliac artery after transperitoneal robotic-assisted radical prostatectomy with extended lymph node dissection in a patient with prostate cancer 1 year after surgery. After diagnosis on CT scan, an open resection of the ischemic bowel was performed. Because of thrombosis, the external iliac artery was opened, the clot was removed and the elongated artery was resected with end-to-end anastomosis. In case of a meandering iliac artery, a retroperitonealization after pelvic lymphadenectomy might be discussed. abstract_id: PUBMED:24485081 Comparison of the morbidity between limited and extended pelvic lymphadenectomy during laparoscopic radical prostatectomy Objective: To compare the morbidity of limited pelvic lymphadenectomy to extended lymphadenectomy in patients undergoing LRP (Laparoscopic Radical Prostatectomy) for clinically localized prostate cancer. Patients And Methods: We performed a prospective monocentric study focused on 303 consecutive patients having a pelvic lymphadenectomy during LRP from June 2000 to April 2010. One hundred and seventy six patients had a limited pelvic lymphadenectomy (June 2000-June 2006, group 1). One hundred and twenty seven patients had an extended pelvis lymphadenectomy (June 2006-April 2010, group 2) including two sub-groups according to the lateral limit of the procedure i.e. with (group 2a, 60 patients) or without dissection of the lateral side of the iliac artery (group 2b, 67 patients). Results: Preoperative data (age, BMI, cTNM, Gleason score and PSA) were comparable between the groups. The number of lymph nodes and the incidence of metastatic lymph nodes were lower in group 1 (6,7 lymph nodes and 5,7%) compared to group 2 (a+b) (15.6 lymph nodes and 18.9%) (P=0.001 and P=0.0004). However, there was no difference between groups 2a and 2b (15.4 and 16.7% vs 15,8 and 20.8% P=0.65 respectively). There were more complications in the extended lymphadenectomy group compared to the limited procedure (35.4% vs 14.2%, P=0.0001), in particular more lymphatic complications (27.5% vs 10.2% P=0.0001) and lymphoedema (LO) (15.7% vs 0.6% P=0.00001). However the lymphorhea (LR) and lymphocele (LC) rate was not different (P=0.11 and P=0.17). All complications were mainly of low Clavien's classification grade (1+2) whatever the group of lymphadenectomy. The hospital stay was not increased in group 2a or 2b in regard to group 1. The rate of LR and LC was higher in group 2a than in group 1 (P=0.02 and P=0.05) but not between group 2b and 1 (P=0.81 and P=0.47). Conclusion: Our study showed a higher rate of complications after extended pelvic lymphadenectomy but of low grade in most cases. Moreover the lateral dissection sparing the lateral side of the iliac artery reduced the risk of lymphatic complications without decreasing the number of lymph nodes removed and the rate of metastasis. 
abstract_id: PUBMED:36088200 Morbidity of extended pelvic lymphadenectomy during robot-assisted laparoscopic prostatectomy for localized prostate cancer Objectives: To assess the morbidity specific to extended pelvic lymphadenectomy during robot-assisted laparoscopic radical prostatectomy in an 8-year retrospective study. Material: We carried out a single-center, single-surgeon retrospective study on 342 consecutive patients who underwent a robot-assisted laparoscopic radical prostatectomy and extended pelvic lymphadenectomy, from July 2010 to March 2018. Postoperative complications were recorded up to 3 months after the operation. Results: Thirty (8.8%) patients had at least one complication related to lymphadenectomy including 1 vascular injury (0.3%), 7 injuries of the obturator nerve (2%), 5 venous thromboembolic complications (1.5%) including 4 pulmonary embolisms, 10 symptomatic lymphoceles (2.9%) and 8 lymphoedemas (2.3%). Of these complications, 13 were classified Clavien 1 (43.3%), 8 Clavien 2 (26.7%), 7 Clavien 3a (23.3%) and 2 Clavien 3b (6.7%). In univariate analysis a high age (P=0.04), high BMI (P<0.01) and pT stage (P=0.02) were significantly associated with complications, whereas in multivariate analysis only age (P=0.02) and BMI (P<0.01) were. In univariate analysis high BMI (P=0.04) and lymph node involvement (P=0.04) were associated with lymphatic complications. We did not find any other specific risk factor for the other complications. Conclusion: With 8.8% of overall complications related to lymphadenectomy and 5% of complications classified as Clavien grade 2 or higher, extended pelvic lymphadenectomy was not very morbid. Age and BMI were risk factors for overall complications. BMI and lymph node involvement were risk factors for lymphatic complications. Level Of Proof: 4. Answer: The necessity of pelvic drainage after extended pelvic lymphadenectomy (ePLND) during radical prostatectomy is a topic of debate. According to a study by PUBMED:23356829, the occurrence of lymphoceles after radical prostatectomy is well-known, and the study aimed to investigate whether the duration of drainage (1 day vs. 7 days) or the necessity of drainage itself makes a difference. The study found that symptomatic lymphoceles occurred less frequently after open retropubic radical prostatectomy (RRP) with pelvic drainage over 7 days than after open RRP with pelvic drainage over 1 day or without drainage. Patients undergoing robot-assisted radical prostatectomy (RARP) without drainage had significantly fewer lymphoceles than patients receiving open RRP without drainage. Nodal-positive patients had more symptomatic lymphoceles than nodal-negative patients. The study concluded that pelvic drainage over 7 days is beneficial after open RRP, but patients undergoing RARP without drainage had fewer lymphoceles. In contrast, PUBMED:29230586 reported that extended pelvic lymphadenectomy, which increases console time and blood loss, is considered a relatively safe and acceptable procedure that improves staging and removes a greater number of metastatic nodes. However, the study did not specifically address the necessity of pelvic drainage. PUBMED:36088200 assessed the morbidity specific to extended pelvic lymphadenectomy during robot-assisted laparoscopic radical prostatectomy and found that with 8.8% of overall complications related to lymphadenectomy and 5% of complications classified as Clavien grade 2 or higher, extended pelvic lymphadenectomy was not very morbid.
Age and BMI were risk factors for overall complications, while BMI and lymph node involvement were risk factors for lymphatic complications. Again, the study did not directly address the necessity of pelvic drainage. In summary, the necessity of pelvic drainage after ePLND during radical prostatectomy may depend on the surgical approach used (open RRP vs. RARP) and the patient's nodal status. While pelvic drainage over 7 days seems beneficial after open RRP, patients undergoing RARP without drainage appear to have fewer lymphoceles. However, further research may be needed to establish definitive guidelines for the use of pelvic drainage in these surgical settings.
Instruction: Is there a role for planned serial chest radiographs and abdominal ultrasound scans in the resuscitation room following trauma? Abstracts: abstract_id: PUBMED:17059711 Is there a role for planned serial chest radiographs and abdominal ultrasound scans in the resuscitation room following trauma? Introduction: Despite advances in trauma care, significant morbidity and mortality exist, which could be reduced if all injuries were immediately identified. Two treatable factors are hypoxia and hypovolaemia, which may occur secondary to haemorrhage into the chest and abdomen. Pneumothorax is also a frequent cause of preventable trauma death. Clinical examination is limited and we often rely on imaging. Anecdotally, it seemed some patients were investigated too quickly because their injuries had not evolved sufficiently to become detectable. In these patients, repeated assessments and imaging would, therefore, be necessary. Patients And Methods: This was a retrospective study looking at all patients over a 15-month period with significant chest and abdominal injuries. Patients with a chest or abdominal Abbreviated Injury Score (AIS) of 3 or above were identified. As a cross reference, those patients who required at least one chest drain, or a laparotomy within 24 h of admission were also identified. Case notes and films were reviewed with particular attention to the presence of initial 'normal' imaging. Results: A total of 1036 patients were eligible for entry into the trauma database; of these, 170 patients had chest and/or abdominal injuries coded as AIS 3 or more. We were able to identify 7 cases (4%) where initial clinical examination and imaging failed to detect either bleeding (pleural space or abdomen) or a pneumothorax. A further 5 cases were potential missed injuries, but the data were incomplete making confident inclusion in this group impossible. Conclusions: Occult injuries are reported to have an incidence of around 2-5%. Serial imaging in the resuscitation room may enable early identification of chest and abdominal injuries. However, only 12 cases were identified making interpretation of suitable candidates for repeat imaging difficult. The question is which group of patients would benefit from planned repeat imaging before leaving the resuscitation room. abstract_id: PUBMED:28120436 Flail Chest Following Failed Cardiopulmonary Resuscitation. Following the death of a woman with blunt force chest trauma, the question was asked how common was the finding at autopsy of a flail chest in decedents after failed cardiopulmonary resuscitation. It was suggested in court that this was an uncommon occurrence. To address this issue, autopsy cases in adults (>18 years) with rib fractures attributable to cardiopulmonary resuscitation were taken from the files of Forensic Science SA over a 7-year period from 2008 to 2014. Flail chest injuries were defined as those arising from fractures at two sites in at least three consecutive ribs. From 236 cases with rib fractures attributed to resuscitation, a total of 43 flail chest injuries were found in 35 cases (14.8%). The majority occurred in the 60-79-year-old age group. These data suggest that flail chest injuries are more common sequelae of cardiopulmonary resuscitation than has been previously appreciated in autopsy cases, particularly in the elderly. abstract_id: PUBMED:37575706 Beyond the Surface: Exploring Chest Trauma With Conventional Radiography and CT.
Background: Traumatic injuries to the chest are a frequent cause of mortality among young individuals. Imaging plays a crucial role in the management of thoracic trauma, providing essential details for accurate diagnosis and treatment. Objective: To assess the respective contributions of radiography and CT in cases of chest trauma. Settings And Design: We assessed 64 subjects, gathering findings from both CT scans and radiographic imaging. The results were organized into a table, considering various variables such as subcutaneous emphysema, rib fractures, clavicular fractures, sternal fractures, scapular fractures, vertebral fractures, pneumothorax, pneumomediastinum, hemothorax, lung contusions, diaphragmatic injuries, and lung herniations. We analyzed the incidence and mode of injury for each variable. Additionally, we compared the sensitivity and specificity of radiographs to CT scans. Results: The leading cause of chest trauma was road traffic accidents (RTAs) (67.2%). The most common age groups affected were 18-30 years (31.3%) and 30-40 years (25%). Rib fractures (73.4%), contusions (70.3%), and hemothorax (62.5%) were the most frequently observed findings. Comparing the detection rates of contusions, rib fractures, hemothorax/pleural effusions, pneumothorax/pneumomediastinum, radiographs exhibited lower sensitivity than CT scans (p-value < 0.05 for all comparisons). Conclusions: In the assessment of trauma patients, chest radiographs continue to serve as the primary screening method, while CT scans are the preferred imaging technique. CT scans are preferable to radiographs in subjects who are clinically stable, providing valuable information. However, for subjects who are unstable, CT scans become even more indispensable, as they offer critical insights into their condition. abstract_id: PUBMED:33707896 Diagnosis of Traumatic Pneumothorax: A Comparison between Lung Ultrasound and Supine Chest Radiographs. Background/objective: Traumatic pneumothorax is an ominous condition necessitating urgent appropriate action. It is typically detected on chest X-rays; however, these may not be able to detect the presence of a subtle pneumothorax, especially in supine position. Lung ultrasound is emerging as a promising modality for detecting pneumothorax in trauma patients. The aim of our study was to compare ultrasound with supine chest radiography for the detection of pneumothorax in trauma patients. Materials And Methods: This was a prospective, single-blinded study carried out on 212 adult thoracoabdominal trauma patients who underwent ultrasound FAST and supine (AP) chest radiography. During the FAST sonography, ultrasound thorax was done to rule out pneumothorax. Only those cases (118) were considered in which the presence or absence of pneumothorax could be confirmed on CT done subsequently or where pneumothorax was confirmed by air escape on chest tube placement, wherever indicated, and the results were compared with sonographic and chest X-ray findings. Observation/results: There were 48 true positives on CT/chest tube insertion. Among these, ultrasound was able to correctly detect pneumothorax in 43 patients, while supine chest X-rays correctly identified 33 cases. Sensitivity of ultrasound was 89.6% vs. 68.8% for supine chest radiography. Lung ultrasound also had a higher negative predictive value as compared to supine chest X-rays.
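To make the sensitivity figures reported just above (PUBMED:33707896) concrete, the following minimal Python sketch reproduces the arithmetic from the stated counts; it is an editorial illustration rather than part of the cited study, and the false-negative counts are simply inferred as the confirmed cases each modality missed.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    # Sensitivity = TP / (TP + FN), returned as a percentage.
    return 100.0 * true_positives / (true_positives + false_negatives)

confirmed = 48   # pneumothoraces confirmed by CT or chest tube placement
us_tp = 43       # detected by lung ultrasound
cxr_tp = 33      # detected by supine chest X-ray

print(f"Ultrasound sensitivity:  {sensitivity(us_tp, confirmed - us_tp):.1f}%")    # ~89.6%
print(f"Chest X-ray sensitivity: {sensitivity(cxr_tp, confirmed - cxr_tp):.1f}%")  # ~68.8%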
Conclusions: Lung ultrasound is more sensitive in detecting traumatic pneumothorax than supine chest X-rays, in addition to having numerous other inherent advantages over chest radiography. It should be incorporated in the emergency assessment of thoracic trauma patients to rule out pneumothorax. Clinical Significance: Lung sonography is more sensitive in detecting traumatic pneumothorax than supine chest X-rays. No added equipment is required, and the procedure can be carried out at the time of doing ultrasound FAST, thus saving precious time in trauma patients. How To Cite This Article: Bhoil R, Kumar R, Kaur J, Attri PK, Thakur R. Diagnosis of Traumatic Pneumothorax: A Comparison between Lung Ultrasound and Supine Chest Radiographs. Indian J Crit Care Med 2021;25(2):176-180. abstract_id: PUBMED:31881900 Open-chest cardiopulmonary resuscitation versus closed-chest cardiopulmonary resuscitation in patients with cardiac arrest: a systematic review and meta-analysis. Background: Cardiopulmonary resuscitation is the most urgent and critical step in the rescue of patients with cardiac arrest. However, only about 10% of patients with out-of-hospital cardiac arrest survive to discharge. Surprisingly, there is growing evidence that open-chest cardiopulmonary resuscitation is superior to closed-chest cardiopulmonary resuscitation. Meanwhile, the Western Trauma Association and the European Resuscitation Council encouraged thoracotomy in certain circumstances for trauma patients. But whether open-chest cardiopulmonary resuscitation is superior to closed-chest cardiopulmonary resuscitation remains undetermined. Therefore, the aim of this study was to summarize current studies on open-chest cardiopulmonary resuscitation in a systematic review, comparing it to closed-chest cardiopulmonary resuscitation, in a meta-analysis. Methods: In this systematic review and meta-analysis, we searched the PubMed, EmBase, Web of Science, and Cochrane Library databases from inception to May 2019 investigating the effect of open-chest cardiopulmonary resuscitation and closed-chest cardiopulmonary resuscitation in patients with cardiac arrest, without language restrictions. Statistical analysis was performed using Stata 12.0 software. The primary outcome was return of spontaneous circulation. The secondary outcome was survival to discharge. Results: Seven observational studies were eligible for inclusion in this meta-analysis involving 8548 patients. No comparative randomized clinical trial was reported in the literature. There was no significant difference in return of spontaneous circulation and survival to discharge between open-chest cardiopulmonary resuscitation and closed-chest cardiopulmonary resuscitation in cardiac arrest patients. The odds ratios (ORs) were 0.92 (95%CI 0.36-2.31, P > 0.05) and 0.54 (95%CI 0.17-1.78, P > 0.05) for return of spontaneous circulation and survival to discharge, respectively. Subgroup analysis of cardiac arrest patients with trauma showed that closed-chest cardiopulmonary resuscitation was associated with higher return of spontaneous circulation compared with open-chest cardiopulmonary resuscitation (OR = 0.59, 95%CI 0.37-0.94, P < 0.05). Subgroup analysis of non-trauma cardiac arrest patients showed that open-chest cardiopulmonary resuscitation was associated with higher ROSC compared with closed-chest cardiopulmonary resuscitation (OR = 3.12, 95%CI 1.23-7.91, P < 0.05).
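As a reading aid for the odds ratios quoted above (an editorial note, not a re-analysis of the cited meta-analysis): a 95% confidence interval that includes 1 corresponds to a two-sided P > 0.05, while one that excludes 1 corresponds to P < 0.05, because in the usual large-sample approximation the interval is constructed on the log scale as

\mathrm{CI}_{95\%} = \exp\left( \ln(\mathrm{OR}) \pm 1.96 \cdot \mathrm{SE}\!\left[\ln(\mathrm{OR})\right] \right).

Hence OR = 0.92 (0.36-2.31) and OR = 0.54 (0.17-1.78) are non-significant, whereas OR = 0.59 (0.37-0.94) and OR = 3.12 (1.23-7.91) reach significance at the 5% level.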
Conclusions: In conclusion, for patients with cardiac arrest, we should implement closed-chest cardiopulmonary resuscitation as soon as possible. However, for cardiac arrest patients with chest trauma who cannot perform closed-chest cardiopulmonary resuscitation, open-chest cardiopulmonary resuscitation should be implemented as soon as possible. abstract_id: PUBMED:37171252 Are Chest Radiographs or Ultrasound More Accurate in Predicting a Pneumothorax or Need for a Thoracostomy Tube in Trauma Patients? Background: Historically, chest radiographs (CXR) have been used to quickly diagnose pneumothorax (PTX) and hemothorax in trauma patients. Over the last 2 decades, chest ultrasound (CUS) as part of Extended Focused Assessment with Sonography in Trauma (eFAST) has also become accepted as a modality for the early diagnosis of PTX in trauma patients. Methods: We queried our institution's trauma databases for all trauma team activations from 2021 for patients with eFAST results. Demographics, injury variables, and the following were collected: initial eFAST CUS, CXR, computed tomography (CT) scan, and thoracostomy tube procedure notes. We then compared PTX detection rates on initial CXR and CUS to those on thoracic CT scans. Results: 580 patients were included in the analysis after excluding patients without a chest CT scan within 2 hours of arrival. Extended Focused Assessment with Sonography in Trauma was 68.4% sensitive and 87.5% specific for detecting a moderate-to-large PTX on chest CT, while CXR was 23.5% sensitive and 86.3% specific. Extended Focused Assessment with Sonography in Trauma was 69.8% sensitive for predicting the need for tube thoracostomy, while CXR was 40.0% sensitive. Discussion: At our institution, eFAST CUS was superior to CXR for diagnosing the presence of a PTX and predicting the need for a thoracostomy tube. However, neither test is accurate enough to diagnose a PTX nor predict if the patient will require a thoracostomy tube. Based on the specificity of both tests, a negative CXR or eFAST means there is a high probability that the patient does not have a PTX and will not need a chest tube. abstract_id: PUBMED:37593404 Deep convolutional neural network for rib fracture recognition on chest radiographs. Introduction: Rib fractures are a prevalent injury among trauma patients, and accurate and timely diagnosis is crucial to mitigate associated risks. Unfortunately, missed rib fractures are common, leading to heightened morbidity and mortality rates. While more sensitive imaging modalities exist, their practicality is limited due to cost and radiation exposure. Point of care ultrasound offers an alternative but has drawbacks in terms of procedural time and operator expertise. Therefore, this study aims to explore the potential of deep convolutional neural networks (DCNNs) in identifying rib fractures on chest radiographs. Methods: We assembled a comprehensive retrospective dataset of chest radiographs with formal image reports documenting rib fractures from a single medical center over the last five years. The DCNN models were trained using 2000 region-of-interest (ROI) slices for each category, which included fractured ribs, non-fractured ribs, and background regions. To optimize training of the deep learning models (DLMs), the images were segmented into pixel dimensions of 128 × 128. Results: The trained DCNN models demonstrated remarkable validation accuracies. 
Specifically, AlexNet achieved 92.6%, GoogLeNet achieved 92.2%, EfficientNetb3 achieved 92.3%, DenseNet201 achieved 92.4%, and MobileNetV2 achieved 91.2%. Discussion: By integrating DCNN models capable of rib fracture recognition into clinical decision support systems, the incidence of missed rib fracture diagnoses can be significantly reduced, resulting in tangible decreases in morbidity and mortality rates among trauma patients. This innovative approach holds the potential to revolutionize the diagnosis and treatment of chest trauma, ultimately leading to improved clinical outcomes for individuals affected by these injuries. The utilization of DCNNs in rib fracture detection on chest radiographs addresses the limitations of other imaging modalities, offering a promising and practical solution to improve patient care and management. abstract_id: PUBMED:36622382 Patient safety in differentiated (in-hospital) activation of the resuscitation room for severely injured patients Background And Objective: Providing trauma services demands high personnel resources and structural costs. The goal of this study was to show if the assignment of trauma patients to a defined A or B resuscitation room treatment as a modified management concept is safe and feasible. Material And Methods: Between May 2020 and January 2021, all resuscitation room trauma patients were included in this single center prospective observational study. All patients admitted to the resuscitation room underwent a classification according to the German S3 guidelines grade of recommendation GoR A and GoR B in polytrauma and the status of the ABCDE sequence to an A and B resuscitation room. The classification was performed by the senior consultant on call via telephone after consultation and discussion of clinical findings. Results: All 135 resuscitation room patients were included in the study of whom 42 trauma patients were assigned to the A resuscitation room (A-SR) and 93 were assigned to the B resuscitation room (B-SR). The comparison of the two groups showed that patients in the A‑SR group are more likely to be accompanied by a prehospital emergency physician (80.5%) than patients in the B‑SR group (55.5%). Patients in the B‑SR group showed a significantly higher Glasgow coma scale (GCS). Using the eFAST emergency ultrasound protocol, 2.4% of the A‑SR and 4.3% of the B‑SR patients had trauma-associated pathologies, 26% of the A‑SR and only 3.2% of the B‑SR patients had to be admitted to the ICU, 21.4% of the A‑SR and 1% of the B‑SR patients died within 30 days after trauma. The injury severity scores (ISS) of the A‑SR patients were significantly higher than in the B‑SR group (ISS 28.3 vs. 6.8). Conclusion: The study confirmed that the assignment by the emergency department consultant according to the German S3 guideline in combination with the ABCDE sequence into resuscitation room A or B treatment is feasible, does not compromise the quality of care and is efficient in the use of the personnel resources. abstract_id: PUBMED:24636360 Cardiopulmonary resuscitation (CPR)-related posterior rib fractures in neonates and infants following recommended changes in CPR techniques. Posterior rib fractures are highly indicative of non-accidental trauma (NAT) in infants. Since 2000, the "two-thumbs" technique for cardiopulmonary resuscitation (CPR) of newborns and infants has been recommended by the American Heart Association (AHA). This technique is similar to the grip on an infant's thorax while shaking. 
Is it possible that posterior rib fractures in newborns and infants could be caused by the "two-thumbs" technique? Using computerized databases from three German children's hospitals, we identified all infants less than 12 months old who underwent professional CPR within a 10-year period. We included all infants with anterior-posterior chest radiographs taken after CPR. Exclusion criteria were sternotomy, osteopenia, various other bone diseases and NAT. The radiographs were independently reviewed by the Chief of Pediatric Radiology (MB) and a Senior Pediatrician, Head of the local Child Protection Team (IF). Eighty infants with 546 chest radiographs were identified, and 50 of those infants underwent CPR immediately after birth. Data concerning the length of CPR was available for 41 infants. The mean length of CPR was 11min (range: 1-180min, median: 3min). On average, there were seven radiographs per infant. A total of 39 infants had a follow-up radiograph after at least 10 days. No rib fracture was visible on any chest X-ray. The results of this study suggest rib fracture after the use of the "two-thumbs" CPR technique is uncommon. Thus, there should be careful consideration of abuse when these fractures are identified, regardless of whether CPR was performed and what technique used. The discovery of rib fractures in an infant who has undergone CPR without underlying bone disease or major trauma warrants a full child protection investigation. abstract_id: PUBMED:32754746 Outcomes of chest wall fixation in cardiopulmonary resuscitation-induced flail chest. Chest wall injury is a common complication of cardiopulmonary resuscitation. Chest wall fixation of flail chest has been shown to improve outcomes in patients in whom trauma is the primary pathology. Its efficacy in the post-cardiopulmonary resuscitation setting where the primary event is cardiac arrest is yet to be determined. We report outcomes in a series of 4 patients who underwent rib fixation in the setting of cardiopulmonary resuscitation-induced flail chest. Answer: Yes, there is a role for planned serial chest radiographs and abdominal ultrasound scans in the resuscitation room following trauma. A retrospective study found that occult injuries, which may not be immediately detectable on initial imaging, have an incidence of around 2-5%. Serial imaging in the resuscitation room may enable early identification of chest and abdominal injuries, although the study noted that only 12 cases were identified, making it difficult to interpret which patients would benefit most from repeat imaging before leaving the resuscitation room (PUBMED:17059711). Additionally, the use of lung ultrasound has been shown to be more sensitive in detecting traumatic pneumothorax than supine chest radiographs, suggesting that it should be incorporated into the emergency assessment of thoracic trauma patients to rule out pneumothorax (PUBMED:33707896). Another study comparing chest radiographs and ultrasound for detecting pneumothorax in trauma patients found that ultrasound as part of the Extended Focused Assessment with Sonography in Trauma (eFAST) was superior to chest radiographs for diagnosing the presence of a pneumothorax and predicting the need for a thoracostomy tube (PUBMED:37171252). 
Furthermore, the use of deep convolutional neural networks (DCNNs) for rib fracture recognition on chest radiographs has shown high validation accuracies, suggesting that integrating DCNN models into clinical decision support systems could significantly reduce the incidence of missed rib fracture diagnoses (PUBMED:37593404). In summary, serial imaging, particularly with the use of ultrasound and advanced technologies like DCNNs, plays a crucial role in the early detection and management of chest and abdominal injuries in trauma patients within the resuscitation room.
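For orientation on the kind of model described in PUBMED:37593404, here is a minimal, self-contained PyTorch sketch of a three-class classifier (fractured rib, non-fractured rib, background) for 128 x 128 region-of-interest crops. It is an editorial illustration only: the layer sizes, optimizer settings, and training step are assumptions and do not reproduce the authors' implementations.

import torch
import torch.nn as nn

class RibROIClassifier(nn.Module):
    """Toy CNN for 128x128 single-channel ROI crops, 3 output classes."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = RibROIClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random stand-in data;
# real use would load labelled 128x128 ROI crops from chest radiographs instead.
images = torch.randn(8, 1, 128, 128)   # batch of 8 grayscale crops
labels = torch.randint(0, 3, (8,))     # 0 = fracture, 1 = no fracture, 2 = background
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"toy training-step loss: {loss.item():.3f}")

Note that the cited study fine-tuned established architectures (AlexNet, GoogLeNet, EfficientNet-B3, DenseNet-201, MobileNetV2) rather than a from-scratch network like this one; the sketch only shows the general shape of the task.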
Instruction: The impact of left ventricular assist device-induced left ventricular unloading on the myocardial renin-angiotensin-aldosterone system: therapeutic consequences? Abstracts: abstract_id: PUBMED:19223317 The impact of left ventricular assist device-induced left ventricular unloading on the myocardial renin-angiotensin-aldosterone system: therapeutic consequences? Aims: Angiotensin-converting enzyme inhibitors (ACE-Is) prevent the rise in myocardial angiotensin II that occurs after left ventricular assist device (LVAD) implantation, but do not fully normalize cardiac function. Here, we determined the effect of LVAD implantation, with or without ACE-Is, on cardiac renin, aldosterone, and norepinephrine, since these hormones, like angiotensin II, are likely determinants of myocardial recovery during LVAD support. Methods And Results: Biochemical measurements were made in paired LV myocardial samples obtained from 20 patients before and after LVAD support in patients with and without ACE-I therapy. Pre-LVAD renin levels were 100x normal and resulted in almost complete cardiac angiotensinogen depletion. In non-ACE-I users, LVAD support, by normalizing blood pressure, reversed this situation. Cardiac aldosterone decreased in parallel with cardiac renin, in agreement with the concept that cardiac aldosterone is blood-derived. Cardiac norepinephrine increased seven-fold, possibly due to the rise in angiotensin II. Angiotensin-converting enzyme inhibitor therapy prevented these changes: renin and aldosterone remained high, and no increase in norepinephrine occurred. Conclusion: Although LV unloading lowers renin and aldosterone, it allows cardiac angiotensin generation to increase and thus to activate the sympathetic nervous system. Angiotensin-converting enzyme inhibitors prevent the latter, but do not affect aldosterone. Thus, mineralocorticoid receptor antagonist therapy during LVAD support may play a role in further promoting recovery. abstract_id: PUBMED:34303643 Clinical findings associated with incomplete hemodynamic left ventricular unloading in patients with a left ventricular assist device. Introduction And Objectives: The effect of a centrifugal continuous-flow left ventricular assist device (cfLVAD) on hemodynamic left ventricular unloading (HLVU) and the clinical conditions that interfere with hemodynamic optimization are not well defined. Methods: We retrospectively evaluated the likelihood of incomplete HLVU, defined as high pulmonary capillary wedge pressure (hPCWP) > 15 mmHg, in 104 ambulatory cfLVAD patients when the current standard recommendations for cfLVAD rotor speed setting were applied. We also evaluated the ability of clinical, hemodynamic and echocardiographic variables to predict hPCWP in ambulatory cfLVAD patients. Results: Twenty-eight percent of the patients showed hPCWP. The variables associated with a higher risk of hPCWP were age, central venous pressure, absence of treatment with renin-angiotensin-aldosterone system inhibitors, and brain natriuretic peptide levels. Patients with optimal HLVU had a 15.2±14.7% decrease in postoperative indexed left ventricular end-diastolic diameter compared with 8.9±11.8% in the group with hPCWP (P=.041). Independent predictors of hPCWP included brain natriuretic peptide and age. Brain natriuretic peptide <300 pg/mL predicted freedom from hPCWP with a negative predictive value of 86% (P < .0001).
Conclusions: An optimal HLVU can be achieved in up to 72% of the ambulatory cfLVAD patients when the current standard recommendations for rotor speed setting are applied. Age, central venous pressure and therapy with renin-angiotensin-aldosterone system inhibitors had a substantial effect on achieving this goal. Brain natriuretic peptide levels and the magnitude of reverse left ventricular remodeling seem to be useful noninvasive tools to evaluate HLVU in patients with functioning cfLVAD. abstract_id: PUBMED:33084480 The role of renin-angiotensin system in patients with left ventricular assist devices. End-stage heart failure is a condition in which the up-regulation of the systemic and local renin-angiotensin-aldosterone system (RAAS) leads to end-organ damage and is largely irreversible despite optimal medication. Left ventricular assist devices (LVADs) can downregulate RAAS activation by unloading the left ventricle and increasing the cardiac output translating into a better end-organ perfusion improving survival. However, the absence of pulsatility brought about by continuous-flow devices may variably trigger RAAS activation depending on left ventricular (LV) intrinsic contractility, the design and speed of the pump device. Moreover, the concept of myocardial recovery is being tested in clinical trials and in this setting LVAD support combined with intense RAAS inhibition can promote recovery and ensure maintenance of LV function after explantation. Blood pressure control on LVAD recipients is key to avoiding complications such as gastrointestinal bleeding, pump thrombosis and stroke. Furthermore, emerging data highlight the role of RAAS antagonists as prevention of arteriovenous malformations that lead to gastrointestinal bleeds. Future studies should focus on the role of angiotensin receptor inhibitors in preventing myocardial fibrosis in patients with LVADs and examine in greater detail the target blood pressure for these patients. abstract_id: PUBMED:17969373 Valsartan in the treatment of heart failure or left ventricular dysfunction after myocardial infarction. The physiological role of the renin angiotensin aldosterone system (RAAS) is to maintain the integrity of the cardiovascular system. The effect of angiotensin II is mediated via the angiotensin type I receptor (AT1) resulting in vasoconstriction, sodium retention and myocyte growth changes. This causes myocardial remodeling which eventually leads to left ventricular hypertrophy, dilation and dysfunction. Inhibition of the RAAS with angiotensin converting enzyme (ACE) inhibitors after acute myocardial infarction has been shown to reduce cardiovascular morbidity and mortality. Angiotensin receptor blockers (ARBs) specifically inhibit the AT1 receptor. It has not been known until the performance of the VALIANT (valsartan in acute myocardial infarction trial) whether blockade of the angiotensin receptor with an ARB or combination of an ACE inhibitor and ARB leads to similar outcomes as an ACE inhibitor. The VALIANT trial demonstrated equal efficacy and non-inferiority of the ARB valsartan 160 mg bid compared with captopril 50 mg tds, when administered to high risk patients with left ventricular dysfunction or heart failure in the immediate post myocardial infarction period. The combination therapy showed no incremental benefit over ACE inhibition or an ARB alone and resulted in increased adverse effects. This review examines the role of valsartan in left ventricular dysfunction post myocardial infarction.
We also discuss pharmacokinetics, dosing, side effects, and usage in the elderly. abstract_id: PUBMED:19442324 Left ventricular hypertrophy and renin-angiotensin system blockade. The renin-angiotensin system (RAS), an important control system for blood pressure and intravascular volume, also causes left ventricular hypertrophy (LVH) and fibrosis. The main causal mechanism is the increase in blood pressure, which leads to increased left ventricular wall stress; however, aldosterone release from the adrenals and (more controversially) the direct action of angiotensin II on the cardiomyocytes also play a role. Large clinical trials evaluating the blockade of the RAS with angiotensin-converting enzyme inhibitors or angiotensin receptor blockers have demonstrated an ability to prevent progression and induce regression of left ventricular mass, thereby reducing the significant and independent cardiovascular risk conferred by LVH. Regression of left ventricular mass is also achieved by other medication classes, but the RAS blockers have an additional beneficial effect for the same blood pressure reduction, for which the mechanism is not entirely clear. Studies comparing the efficacy of angiotensin-converting enzyme inhibitors versus angiotensin receptor blockers to achieve LVH regression have not demonstrated any clear benefit of one class over the other. abstract_id: PUBMED:16849902 Mechanism of angiotensin II type 1 receptor blocker action in the regression of left ventricular hypertrophy. Left ventricular hypertrophy refers to a pathologic increase in left ventricular mass and is associated with an increased risk of subsequent cardiovascular morbidity and mortality from any cause. In the development of left ventricular hypertrophy there is growth of cardiomyocytes and accumulation of extracellular matrix and fibrosis. The actions are partly induced by angiotensin II, the principal effector of the renin-angiotensin-aldosterone system, binding to the AT1 receptor. Biochemical markers, some implicated in inflammatory changes, correlate with changes in left ventricular mass. The reduction in left ventricular mass brought about with angiotensin-converting enzyme inhibitor or angiotensin receptor blocker (ARB) therapy correlates with a reduction in these inflammatory changes, monitored by brain natriuretic peptide. Recent studies incorporating trials of ARBs have found ARBs to be more effective in reducing left ventricular mass than beta blockers and possibly more effective than calcium antagonists. Initial studies suggest that ARBs and angiotensin-converting enzyme inhibitors may have similar effects in terms of reducing left ventricular hypertrophy, and the combination of angiotensin-converting enzyme inhibitors and ARBs is thought to be synergistic due to a more complete inhibition of the renin-angiotensin-aldosterone system. In conclusion, these agents are efficacious in antihypertensive therapy and can play an important role in the prevention or regression of left ventricular hypertrophy due to hypertension. abstract_id: PUBMED:18641608 Imaging left ventricular remodeling: targeting the neurohumoral axis. Left ventricular remodeling is a key determinant of the clinical course and outcome of systolic heart failure. The myocardial renin-angiotensin system (RAS) has been closely linked to the major maladaptive cellular and molecular changes that accompany left ventricular remodeling. 
Direct inhibition of various components of the RAS, such as the angiotensin-converting enzyme, angiotensin II type 1 receptor, and aldosterone, has resulted in favorable clinical responses in heart failure. Many questions, however, remain unanswered regarding the timing of initiation, optimum doses, need for simultaneous use of RAS inhibitors, and proper monitoring of RAS blockade. Additionally, significant variation has been noted in individual responses to RAS blockade as a result of genetic differences. Answering these questions requires direct access to the myocardial component of RAS, which is largely independent of its systemic component. Molecular imaging using radiotracers with high affinities for myocardial angiotensin-converting enzyme and angiotensin II type 1 receptors can provide direct access to tissue RAS and thus provide a better understanding of the pathophysiology of left ventricular remodeling in individual patients. This article briefly reviews the potential for evaluating the tissue expression of angiotensin in heart failure by targeted RAS imaging. abstract_id: PUBMED:8583471 Salt and left ventricular hypertrophy: what are the links? Left ventricular hypertrophy is a frequent and prognostically unfavourable finding in patients with essential hypertension and has been found to be a predictor for the development of essential hypertension in normotensive subjects. Among various genetic, haemodynamic and humoral determinants, dietary salt intake has been demonstrated to influence left ventricular mass in hypertensive disease. Several cross-sectional studies have shown a close relation between dietary salt intake and parameters of left ventricular hypertrophy. Moreover, reduction of dietary sodium intake was associated with a decrease of left ventricular mass in a prospective study. The underlying mechanism of how salt intake modulates myocardial structure has not been explained yet. Three possible explanations are discussed: (1) sodium influences left ventricular mass via raised preload, (2) the sympathetic nervous system acts as a mediator, and (3) the renin-angiotensin-aldosterone system is the responsible link. Recent animal experiments and clinical studies suggest that the renin-angiotensin-aldosterone system may mediate both the cardiotrophic and the blood pressure raising effects of salt. However, not all individuals who have a similarly high susceptibility to blood pressure elevation develop left ventricular hypertrophy when exposed to high salt intake. We suggest that the underlying mechanism is a dysregulation of the renin-angiotensin-aldosterone system. Some individuals may have an impaired downregulation of angiotensin II synthesis when challenged with high salt intake. Accordingly, we found that relatively too high levels of angiotensin II in relation to urinary sodium excretion were associated with left ventricular hypertrophy in these individuals on high salt intake. abstract_id: PUBMED:9543596 Inadequate suppression of angiotensin II modulates left ventricular structure in humans. Background: In a previous study we found that high angiotensin II levels in relation to the corresponding urinary sodium excretion aggravate left ventricular hypertrophy in hypertensive patients. To analyze whether a dysregulation of the renin angiotensin aldosterone system determines left ventricular structure in young individuals, we examined whether the response of angiotensin II after increasing salt intake is related to left ventricular structure.
Methods: In 51 young, male Caucasians with normal or mildly elevated blood pressure, left ventricular structure, 24-hour ambulatory blood pressure and dietary sodium intake (as estimated by 24-hour sodium excretion) were determined in parallel with plasma renin activity, angiotensin II, and aldosterone concentrations. Angiotensin II concentration and 24-hour sodium excretion were measured twice: firstly on a normal Bavarian diet and secondly at high salt intake to determine the resulting suppression of the renin-angiotensin-aldosterone system. Results: Body mass index (r = 0.42, p < 0.001) and both systolic (r = 0.28, p < 0.05) and diastolic (r = 0.25, p < 0.05) 24-hour ambulatory blood pressure correlated with left ventricular mass. No direct relationship was found between left ventricular structure and baseline angiotensin II concentration. The lower the physiological decrease of angiotensin II after high oral salt intake, i.e. the higher the angiotensin II level after salt intake remained, the greater was left ventricular mass (r = 0.38; p < 0.006) even after taking 24-hour ambulatory blood pressure into account (partial correlation; r = 0.43, p < 0.005). Consistently, angiotensin II concentration at high salt intake correlated with left ventricular mass independently of ambulatory blood pressure (partial correlation: r = 0.29, p < 0.05). Subgroup analysis revealed that the increase in sodium excretion at high salt intake was related to the decrease in angiotensin II levels in normotensive (r = -0.43, p < 0.05), but not in hypertensive subjects (r = 0.16, n.s.). The changes in angiotensin II concentration at high salt intake were related to left ventricular mass in hypertensive (r = 0.43, p < 0.02), but not in normotensive individuals (r = 0.21, n.s.). Conclusion: Our finding that angiotensin II concentration at high salt intake correlated with left ventricular mass independently of ambulatory blood pressure suggests that inadequate suppression of angiotensin II after high salt intake contributes to left ventricular hypertrophy already in young hypertensive individuals independently of blood pressure. abstract_id: PUBMED:33084438 The renin-angiotensin-system and left ventricular mass in young adults: the African-PREDICT study. Purpose: Raised blood pressure, with the renin-angiotensin system (RAS) as a central regulatory component, is one of the most important contributors to early development of left ventricular hypertrophy. Factors such as increased age, sex, black ethnicity and a low socio-economic status also contribute to left ventricular remodelling. To better understand early contributors to left ventricular mass, we investigated the relationship between left ventricular mass index (LVMi) and the components of the RAS in young healthy adults while considering ethnicity, sex and socio-economic status. Materials And Methods: Black and white women and men (N = 1186) between the ages of 20-30 years were included. By using standard echocardiography, we determined LVMi. Ultra-pressure-liquid chromatography tandem-mass spectrometry (LC-MS/MS) was used to measure the RAS-fingerprint®. Results: Components of the RAS such as plasma renin activity (PRA-S), angiotensin I (Ang I), angiotensin II (Ang II) and aldosterone were suppressed in the black compared to the white group (all p < 0.001). No associations between LVMi and the RAS were evident in the total, black or white groups.
With additional grouping according to sex and socio-economic status, inverse associations between LVMi and PRA-S (β= -0.168; p = 0.017), Ang I (β= -0.155; p = 0.028) and Ang II (β= -0.172; p = 0.015) were found only in low socio-economic black women. Conclusion: Despite a suppressed RAS in the black compared to the white group, components of the RAS were not associated with LVMi in this young cohort. The low socio-economic black women of this study population may be vulnerable to future RAS-related increases in left ventricular mass. Answer: The impact of left ventricular assist device (LVAD) induced left ventricular unloading on the myocardial renin-angiotensin-aldosterone system (RAAS) has significant therapeutic consequences. LVAD implantation, which is used to support patients with end-stage heart failure, can lead to changes in the levels of hormones that are determinants of myocardial recovery during LVAD support. Specifically, LVAD support can normalize blood pressure and reduce cardiac renin and aldosterone levels, which are typically elevated in heart failure. However, this unloading allows for an increase in cardiac angiotensin generation, which can activate the sympathetic nervous system (PUBMED:19223317). The use of angiotensin-converting enzyme inhibitors (ACE-Is) in conjunction with LVAD support can prevent the rise in myocardial angiotensin II and the associated increase in cardiac norepinephrine, but they do not affect aldosterone levels. This suggests that therapy with mineralocorticoid receptor antagonists during LVAD support may be beneficial in promoting further recovery (PUBMED:19223317). Clinical findings have shown that incomplete hemodynamic left ventricular unloading (HLVU) can occur in patients with a continuous-flow LVAD (cfLVAD), and factors such as age, central venous pressure, and absence of treatment with RAAS inhibitors can affect the likelihood of achieving optimal HLVU. Brain natriuretic peptide levels and the magnitude of reverse left ventricular remodeling are useful noninvasive tools to evaluate HLVU in patients with functioning cfLVAD (PUBMED:34303643). Overall, the role of the RAAS in patients with LVADs is complex, and the system's modulation has implications for myocardial recovery, blood pressure control, and the prevention of complications such as arteriovenous malformations that lead to gastrointestinal bleeds. Future studies should focus on the role of RAAS antagonists in preventing myocardial fibrosis in patients with LVADs and examine the target blood pressure for these patients (PUBMED:33084480).
Instruction: Unsuccessful temporomandibular joint arthroscopy: is a second arthroscopy an acceptable alternative? Abstracts: abstract_id: PUBMED:18848106 Unsuccessful temporomandibular joint arthroscopy: is a second arthroscopy an acceptable alternative? Purpose: Arthroscopic surgery has been shown to be an effective treatment for patients with temporomandibular disorders, relieving patients' symptoms and restoring adequate mandibular function. For those patients with poor arthroscopic outcomes, various treatment modalities are possible, such as nonsurgical therapy, open surgery, or repeat arthroscopic surgery. The purpose of this study was to evaluate our results with rearthroscopy in patients with temporomandibular joint dysfunction. Patients And Methods: The clinical data and operative reports for 50 patients who underwent a second arthroscopic procedure from 1994 to 2004 were reviewed retrospectively. Outcome assessments were based on reductions in pain, measured using a visual analog scale, and improvements in maximal interincisal opening. The minimum follow-up period was 2 years. Results: Significant differences were evident between presurgical and postsurgical pain at months 1, 6, 12, and 24. The mean score of preoperative pain on the visual analogue scale was 61.65 mm, which was reduced to 36.28 mm at 2-year follow-up. With regard to mandibular function, all patients presented with restricted mouth opening, with a mean preoperative maximal interincisal opening of 26.73 mm. Postoperatively, the maximal interincisal opening showed a statistically significant improvement (P < .05), and at 2-year follow-up, we obtained a total improvement of 7 mm. Only 8 patients (16%), who had an unsuccessful result after a second arthroscopy, underwent further surgical intervention (open surgery). Conclusion: Arthroscopic surgery is a reliable and effective procedure for temporomandibular joint dysfunction that improves pain and mouth opening, with the advantages of being minimally invasive and repeatable. Repeat arthroscopic surgery, with a proven history of fewer complications, can be attempted before open arthrotomy. abstract_id: PUBMED:2700680 Arthroscopy of the temporomandibular joint. Arthroscopy is a technique that allows direct visual examination of the internal surfaces of the joint. In the past decade, arthroscopy has become a major diagnostic and therapeutic tool in the treatment of articular disorders of the knee. The purpose of this report is to present a comprehensive overview of temporomandibular joint (TMJ) arthroscopy in order to familiarize the practitioner with this adjunct to temporomandibular disorder diagnosis and treatment. abstract_id: PUBMED:2489215 Clinical application of arthroscopy on temporomandibular joint We present the anatomy, methodology, procedure for arthroscopy on temporomandibular joint. Arthroscopic view of normal and pathologic view on superior articular cavity of temporomandibular joint are presented. There were no serious complications except slight swelling on temporomandibular joint. abstract_id: PUBMED:30699484 The role of arthroscopy in the diagnosis and treatment of temporomandibular joint disorders Arthroscopy means "looking into a joint" and is derived from the Greek words Arthros, which means "joint", and Scopien, which means "to view". It is a minimally invasive surgical procedure aimed at diagnosis and treatment of intra-articular joint disorders.
Arthroscopy of the temporomandibular joint was first described in 1974, and since then has become an acceptable therapeutic modality for various joint pathologies. During a 6-year period (April 2010 to January 2016), two hundred and two patients have been treated by arthroscopy in our department. The majority of patients (124) were diagnosed as suffering from acute or chronic closed lock of the temporomandibular joint. Thirty-five patients suffered from mild internal derangement conditions, and 33 patients suffered from various forms of osteoarthritic conditions and degenerative joint disease. The success rate for the different groups ranged from 77% to 89%. The complication rate was low (3%) and all adverse events were temporary. The success of the procedure depends on careful patient selection (exclusion of muscular, neurologic, and psychogenic disorders), and strict adherence to the surgical protocol with meticulous attention to every detail. Arthroscopy of the temporomandibular joint is a safe and effective treatment for various derangement and degenerative joint disorders, achieving high rates of success with minimal morbidity and surgical risks. abstract_id: PUBMED:2694911 Arthroscopy of the temporomandibular joint-technique and indications. Temporomandibular joint arthroscopy has gained wide interest in reviews of recent literature. With this technique a proper understanding of the anatomical landmarks is important to ensure success. The technique for anesthesia, puncture and the arthroscopic examination are fully described. An inferior lateral approach to the joint compartment is advocated. A description of the equipment used for this procedure together with the functional elements of the instruments are emphasised. Indications for diagnostic arthroscopy together with the possible postoperative complications are presented. abstract_id: PUBMED:3083020 Arthroscopy of the rabbit temporomandibular joint. Arthroscopy of the rabbit temporomandibular joint was evaluated experimentally on 22 rabbits. A small arthroscope with so-called "Selfoc"-system was employed. The reaction to arthroscopy was analyzed clinically. Macroscopic dissection and histology were then employed to detect possible pathological changes in the joints at periodic intervals. The established criteria for evaluation of postoperative reactions were erythema, suppuration, weight changes and chewing capacity. The results of this study indicate that the rabbit temporomandibular joint may be arthroscoped with subsequent minor reversible changes. It also seems feasible to assume that temporomandibular joint arthroscopy in humans runs a low risk for postoperative complications. abstract_id: PUBMED:3272415 Arthroscopy of the temporomandibular joint Arthroscopy was introduced in 1975 as a method for the diagnosis and treatment of the most common conditions involving the temporo-mandibular joint, with ample advantages over more sophisticated diagnostic techniques currently developed. The object of this report is to describe this method of diagnosis and treatment--arthroscopic technique--and list findings from 15 patients suffering from assorted abnormalities of the temporo-mandibular joint. abstract_id: PUBMED:3165118 Arthroscopy of the human temporomandibular joint. Diagnostic arthroscopy of the human temporomandibular joint was investigated on both fresh human cadavers and presurgical arthrotomy patients. It was possible to visualize all of the major anatomic structures in the superior joint space.
Inferior joint space techniques were explored on cadavers but not perfected for clinical use. Closed arthroscopic surgical techniques were performed on the cadaver specimens using a variety of instruments such as probes, miniature biopsy forceps, and motorized soft tissue resection and bone abraiding devices. abstract_id: PUBMED:27267707 Temporomandibular joint arthroscopy technique using a single working cannula. The traditional arthroscopy technique includes the creation of three ports in order to enable visualization, operation, and arthrocentesis. The aim of this study was to assess an advanced temporomandibular joint (TMJ) arthroscopy technique that requires only a single cannula, through which a one-piece instrument containing a visualization canal, irrigation canal, and a working canal is inserted, as an alternative to the traditional double-puncture technique. This retrospective study assessed eight patients (13 TMJs) with pain and/or limited range of movement that was refractory to conservative therapy, who were treated between June 2015 and December 2015. The temporomandibular joint disorder (TMD) was diagnosed by physical examination and mouth opening measurements. The duration of surgery was recorded and compared to that documented for traditional arthroscopies performed by the same surgeon. Operative single-cannula arthroscopy (OSCA) was performed using a holmium YAG (Ho:YAG) 230μm fibre laser for ablation. The OSCA technique proved effective in improving mouth opening in all patients (mean increase 9.12±1.96mm) and in reducing pain (mean visual analogue scale decrease of 3.25±1.28). The operation time was approximately half that of the traditional technique. The OSCA technique is as efficient as the traditional technique, is simple to learn, and is simpler to execute. abstract_id: PUBMED:3091720 Temporomandibular joint arthroscopy by inferolateral approach. A clinically successful technique for temporomandibular joint (TMJ) arthroscopy by the inferolateral approach is reported. The outline of the superior and inferior articular cavities of the TMJ except for the lower anterior pouch could be visualised; arthroscopy could be performed on most patients under local anesthesia in the out-patient operating room. Clinical usefulness and security of the TMJ arthroscopy based upon anatomical understanding is described. Answer: Yes, a second arthroscopy, or rearthroscopy, is considered an acceptable alternative for patients with temporomandibular joint dysfunction who have had poor outcomes from their initial arthroscopic surgery. According to a study that reviewed the clinical data and operative reports for 50 patients who underwent a second arthroscopic procedure, significant improvements were observed in both pain reduction and mandibular function (PUBMED:18848106). The mean score of preoperative pain on the visual analog scale was significantly reduced at the 2-year follow-up, and there was a statistically significant improvement in maximal interincisal opening, with a total improvement of 7 mm. Only 16% of the patients who had an unsuccessful result after a second arthroscopy went on to undergo further surgical intervention, indicating that repeat arthroscopic surgery can be a reliable and effective procedure that improves pain and mouth opening. It is also noted for being minimally invasive and repeatable, with a history of fewer complications compared to open arthrotomy (PUBMED:18848106).
Instruction: Self-reported cognitive concerns and abilities: two sides of one coin? Abstracts: abstract_id: PUBMED:24700645 Self-reported cognitive concerns and abilities: two sides of one coin? Objective: Patient-reported cognitive function can be measured using negatively worded items (concerns) and positively worded (abilities) items. It is possible that reporting abilities is less subject to the influence of emotional states. This study evaluated the relationship between cognitive concerns and cognitive abilities. Methods: Cancer patients (N = 509; mean age = 61 years; 50% men; 86% White) completed concerns and abilities items developed by the National Institutes of Health Patient-Reported Outcomes Information System (PROMIS). Confirmatory factor analysis was used to evaluate the extent to which items were loaded on one single factor (unidimensionality). Multidimensionality was evaluated using bi-factor analysis (local factors: concerns and abilities). Slope parameters from multidimensional item response theory (IRT) and unidimensional IRT were compared to evaluate which factor solution fits best. Results: Acceptable fit indices were found in both one-factor confirmatory factor analysis (comparative fit index (CFI) = 0.96; root mean squared error of approximation (RMSEA) = 0.062) and bi-factor analysis (CFI = 0.98; RMSEA = 0.043). Thus, abilities and concerns could be considered as a single dimension. Yet, high loadings on the local factor in bi-factor analysis and slope discrepancies between unidimensional IRT and multidimensional IRT indicate that abilities should be considered as a separate factor from concerns. Conclusions: Concerns and abilities could be measured using one-unidimensional item bank. Results also support measuring each construct separately. We recommend a conservative approach by measuring and reporting concerns and abilities separately. We therefore recommend two separate but co-calibrated item banks in the PROMIS network: cognitive function item bank-concerns and cognitive function item bank-abilities. Both item banks showed good psychometric properties and are available for research and clinical purposes. abstract_id: PUBMED:37212461 Premorbid cognitive functioning influences differences between self-reported cognitive difficulties and cognitive assessment in multiple sclerosis. Cognitive difficulties are reported in up to 60% of people with MS (pwMS). There is often a discrepancy between self-reported cognitive difficulties and performance on cognitive assessments. Some of this discrepancy can be explained by depression and fatigue. Pre-MS cognitive abilities may be another important variable in explaining differences between self-reported and assessed cognitive abilities. PwMS with high estimated premorbid cognitive functioning (ePCF) may notice cognitive difficulties in daily life whilst performing within the average range on cognitive assessments. We hypothesised that, taking into account depression and fatigue, ePCF would predict (1) differences between self-reported and assessed cognitive abilities and (2) performance on cognitive assessments. We explored whether ePCF predicted (3) self-reported cognitive difficulties. Eighty-seven pwMS completed the Test of Premorbid Functioning (TOPF), the Brief International Cognitive Assessment for MS (BICAMS), self-report measures of cognitive difficulty (MS Neuropsychological Questionnaire; MSNQ), fatigue (MS Fatigue Impact Scale; MFIS) and depression (Hospital Anxiety and Depression Scale; HADS). 
Results revealed that, taking into account covariates, ePCF predicted (1) differences between self-reported and assessed cognitive abilities, p < .001 (model explained 29.35% of variance), and (2) performance on cognitive assessments, p < .001 (model explained 46.00% of variance), but not (3) self-reported cognitive difficulties, p = .545 (model explained 35.10% of variance). These results provide new and unique insights into predictors of the frequently observed discrepancy between self-reported and assessed cognitive abilities for pwMS. These findings have important implications for clinical practice, including the importance of exploring premorbid factors in self-reported experience of cognitive difficulties. abstract_id: PUBMED:20146127 Different sides of the same coin? Intercorrelations of cognitive biases in schizophrenia. Introduction: A number of cognitive biases have been associated with delusions in schizophrenia. It is yet unresolved whether these biases are independent or represent different sides of the same coin. Methods: A total of 56 patients with schizophrenia underwent a comprehensive cognitive battery encompassing paradigms tapping cognitive biases with special relevance to schizophrenia (e.g., jumping to conclusions, bias against disconfirmatory evidence), motivational factors (self-esteem and need for closure), and neuropsychological parameters. Psychopathology was assessed using the Positive and Negative Syndrome Scale (PANSS). Results: Core parameters of the cognitive bias instruments were submitted to a principal component analysis which yielded four independent components: jumping to conclusions, personalising attributional style, inflexibility, and low self-esteem. Conclusions: The study lends tentative support for the claim that candidate cognitive mechanisms for delusions only partially overlap, and thus encourage current approaches to target these biases independently via (meta)cognitive training. abstract_id: PUBMED:25540512 Self-reported Exhaustion is Associated with Small Life Space in Older Adults with Mild Cognitive Impairment. [Purpose] Older adults experience exhaustion-induced health problems, such as poor physical function and low physical activity levels. The associations between self-reported exhaustion and physical function and activity are not clear in older adults with mild cognitive impairment (MCI). The aim of this study was to investigate the relationships between self-reported exhaustion and physical function and activity in older adults with mild cognitive impairment. [Subjects] A total of 356 older adults with mild cognitive impairment (mean age = 71.6 ± 0.3 years, 50.8% women) were included in this study. [Methods] Self-reported exhaustion was identified by one item from the Study of Osteoporotic Fractures index. Gait speed, gait endurance, and life space were also assessed. [Results] Sixty-two participants reported having exhaustion, giving a 17.4% prevalence of self-reported exhaustion among these individuals. Logistic regression analysis showed that the Life-Space Assessment score was the only parameter significantly independently associated with exhaustion status (adjusted odds ratio 0.97, 95% confidence interval 0.95-0.99). [Conclusion] These results suggest that self-reported exhaustion is associated with life space. Future research is needed to identify ways for older people with MCI to improve their exhaustion status.
abstract_id: PUBMED:36952054 Relationship between self-reported sleep and cognitive function: a specification curve analysis. The relationship between self-reported sleep and cognitive function is complex; it is unclear whether self-reported sleep is a robust correlate of people's cognitive function. We address this gap by using a comprehensive large-scale dataset (N = 1054) coupled with a novel modeling approach, specification curve analysis (SCA), to test the association between self-reported sleep and cognitive function. The results of the SCA showed robust correlations between self-reported sleep and cognitive function, with poorer sleep associated with worse cognitive function. Furthermore, the correlations between sleep components and cognitive function were heterogeneous, with differences emerging across cognitive measures and domains. Specifically, daytime dysfunction was associated with the strongest effect on subjective cognitive function, whereas sleep duration and sleep efficiency had the strongest effect on objective cognitive function. Therefore, the relationship between self-reported sleep and cognition depends largely on what and how cognitive function is measured. Our findings guide measurement and domain selection for future research on the role of sleep in cognitive function. abstract_id: PUBMED:37791129 Better than expected: the gap between self-reported and objective measures of cognitive performance in remitted bipolar disorder. Background: Studies comparing objective and self-reported cognitive functioning as well as influencing factors in individuals with remitted bipolar disorder are scarce and contradictory. Methods: The aim of this study was to compare executive functioning and other objective and self-reported cognitive impairment between 26 individuals with remitted bipolar disorder (15 BD I) and 24 healthy controls using a cross-sectional design. Executive functions were measured by the TAP Go/No-go subtest as well as the Stroop Task. Self-rated functioning was assessed using the Attention Deficit Experience Questionnaire. In addition, possible predictors of self-reported and objective cognitive functioning were examined to perform regression analyses. Results: Individuals with remitted bipolar disorder did not differ significantly in executive functions or other objective cognitive domains from the healthy control group, but showed a significantly lower level of self-reported cognitive functioning and self-esteem. While self-esteem was the strongest predictor in healthy controls for self-reported cognitive functioning, severity of illness and subthreshold depressive mood were the most important predictors in individuals with remitted bipolar disorder. Conclusion: The results once again demonstrate the cognitive heterogeneity in bipolar disorder. In the treatment of cognitive deficits, factors such as subthreshold depressive symptomatology and self-esteem should be focused on in addition to cognitive training in remitted patients. abstract_id: PUBMED:38229039 Self- and interviewer-reported cognitive problems in relation to cognitive decline and dementia: results from two prospective studies. Background: Little is known regarding the association of interviewer-reported cognitive problems (ICP) with age-related cognitive decline. We aimed to investigate the independent associations of ICP and the combined associations of ICP and self-reported cognitive problems (SCP) with subsequent cognitive decline and dementia in two prospective cohort studies. 
Methods: We included 10,976 Chinese (age = 57.7 ± 8.7) and 40,499 European (age = 64.6 ± 9.4) adults without dementia from the China Health and Retirement Longitudinal Study (CHARLS) and the Survey of Health, Ageing, and Retirement in Europe (SHARE). Self-rated memory (5-point scale) and interviewer-rated frequencies of asking for clarification (6-point scale) were used to define SCP and ICP (dichotomized). Outcomes included objective cognitive test scores (z-score transformation) and incident dementia. Generalized estimating equation models were performed to evaluate mean differences in objective cognitive decline. Logistic and Cox regression models were used to estimate the relative risk of dementia. Results from two cohorts were pooled using the random-effects models. Results: ICP was associated with faster cognitive decline in CHARLS (β_CHARLS = -0.025 [-0.044, -0.006] z-score/year). ICP and SCP were also independently associated with higher risk of dementia in two cohorts (pooled relative risk for SCP = 1.73 [1.30, 2.29]; pooled relative risk for ICP = 1.40 [1.10, 1.79]). In the joint analysis, participants with coexistence of SCP and ICP had the fastest cognitive decline (β_CHARLS = -0.051 [-0.080, -0.021]; β_SHARE = -0.024 [-0.043, -0.004]; pooled β = -0.035 [-0.061, -0.009] z-score/year) and highest risk of dementia (OR_CHARLS = 1.77 [1.42, 2.20]; HR_SHARE = 2.94 [2.42, 3.59]; pooled relative risk = 2.29 [1.38, 3.77]). Conclusions: The study suggested that interviewer-reported cognitive problems may be early indicators of cognitive decline and dementia in middle-aged and older adults. A combination of self- and interviewer-reported cognitive problems showed the strongest associations with cognitive decline and dementia. abstract_id: PUBMED:27651016 The sense of self is supported by several independent cognitive processes in Alzheimer's disease and self-reported age tracks cognitive impairment. Do patients with Alzheimer's disease lose themselves? The impact of dementia on the sense of self has been extensively studied over the past 15 years. However, most studies investigate only one marker of the self - such as mirror self-recognition or the use of the pronoun "I" - and do not track how this marker evolves in the course of the illness in comparison to other markers. This situation has resulted in fragmented findings rather than converging evidence for a coherent model of the self in dementia. In our two studies, we use a questionnaire to investigate four markers of the self simultaneously (self-knowledge, mirror self-recognition, the bodily distinction between self and other, and self-reported age) in the same 60 patients spread across three stages of Alzheimer's disease. This method allows us to determine whether these markers are impaired independently over the progression of the illness. Our results suggest that the sense of self relies on a complex structure supported by several independent cognitive processes that are impacted differently by the progression of dementia. In particular, despite the early deterioration of self-knowledge, patients at advanced stages of the disease seem to maintain a sense of self, rooted into mirror self-recognition and the bodily self. Furthermore, self-reported age predicts the level of cognitive impairment.
We suggest that a better understanding of the stage at which each marker of the self breaks down can help clinicians support their patients better by targeting the preserved dimensions of their identity at any given point in the progression of their condition. abstract_id: PUBMED:29728959 Systematic review of self-reported cognitive function in cancer patients following chemotherapy treatment. Purpose: Cognitive symptoms are common in cancer patients, with up to 70% reporting cognitive symptoms following chemotherapy. These symptoms can have a major impact on how an individual functions in all aspects of their lives. This review evaluates self-reported cognitive function and its associations with neuropsychological tests and patient-reported outcomes in adult cancer patients who received chemotherapy treatment for a solid cancer. Methods: A search of multiple databases (Medline, Ovid at Nursing, PsycINFO, Allied and Complementary Medicine) from 1936 to 2017 was conducted, identifying 1563 unique articles, of which 101 met inclusion criteria. Results: Of the 101 included studies, 48 (47%) were cross-sectional and 38 (38%) longitudinal in design, with 12 (12%) randomised controlled trials. A minority (26%) incorporated a healthy control arm in the study design, whilst the majority (79%) were in women with breast cancer. There was diversity in the assessment of self-reported cognitive symptoms. A total of 43 of 44 studies that sought an association between self-reported cognitive function and patient-reported outcomes found a moderate to strong association. Overall, 31 studies showed a lack of association between self-reported cognitive symptoms and neuropsychological results, whilst 14 studies reported a significant association between the two, but the association was often restricted to limited cognitive domains. Conclusion: The review found widespread heterogeneity in the assessment of self-reported cognitive symptoms and consistently absent or weak association with neuropsychological test scores. Implications For Cancer Survivors: This research highlights the need for a standardised approach to measurement of self-reported cognitive symptoms in cancer patients. abstract_id: PUBMED:26836953 Self-Reported Cognitive Concerns in People With Lower Limb Loss. Objectives: To investigate differences between self-reported cognitive concerns in people with lower limb loss (LLL) and normative data derived from the U.S. general population, and secondarily to determine whether there were cognitive differences based on amputation etiology or age. Design: Survey. Setting: General community. Participants: A volunteer sample of persons with LLL (N=1086) resulting from trauma or dysvascular complications who regularly use a prosthetic limb. Interventions: Not applicable. Main Outcome Measure: The Quality of Life in Neurological Disorders Applied Cognition-General Concerns Short Form version 1.0 (Neuro-QoL ACGC), an 8-item self-report measure of general cognitive concerns. Results: People with LLL reported significantly more cognitive concerns than the Quality of Life in Neurological Disorders normative sample. Mean Neuro-QoL ACGC scores were significantly lower than normative values (P<.001) across subgroups defined by age (ie, <40, 40-49, 50-59, 60-69, and 70+ years) and subgroups defined by etiology (ie, traumatic and dysvascular LLL). However, there were no significant differences in cognitive concerns among age subgroups (P=.84) or between the etiology subgroups (P=.58).
Conclusions: When compared with the Quality of Life in Neurological Disorders normative sample, individuals with LLL report greater concerns with cognitive health. Cognitive concerns were not differentially affected by age or cause of amputation. The presence of cognitive concerns in people with LLL suggests a need to assess perceived cognitive function in order to tailor education and training in prosthetic use and care. Answer: The relationship between self-reported cognitive concerns and abilities is complex and multifaceted. The study by PUBMED:24700645 suggests that while cognitive concerns and abilities could be measured using a one-dimensional item bank, there is evidence to support measuring each construct separately. The study found that abilities should be considered as a separate factor from concerns, indicating that they may not be simply two sides of the same coin. In the context of multiple sclerosis, PUBMED:37212461 found that premorbid cognitive functioning influences the discrepancy between self-reported cognitive difficulties and performance on cognitive assessments. This suggests that individuals with higher estimated premorbid cognitive functioning may report cognitive difficulties despite performing within the average range on assessments, further complicating the relationship between self-reported concerns and objective cognitive abilities. The study by PUBMED:20146127 on schizophrenia found that cognitive biases associated with delusions are only partially overlapping, suggesting that these biases, which could be related to self-reported cognitive concerns, are not necessarily indicative of a single underlying cognitive issue. PUBMED:25540512 reported that self-reported exhaustion is associated with a smaller life space in older adults with mild cognitive impairment, indicating that self-perceptions of cognitive and physical fatigue can have real-world implications on activity and mobility. The relationship between self-reported sleep and cognitive function was explored in PUBMED:36952054, which found robust correlations between the two, with poorer sleep associated with worse cognitive function. This study highlights the importance of considering self-reported factors when assessing cognitive function. In remitted bipolar disorder, PUBMED:37791129 found that individuals did not differ significantly in objective cognitive functions from healthy controls but reported lower levels of cognitive functioning and self-esteem. This suggests that self-reported cognitive concerns may not always align with objective measures. PUBMED:38229039 showed that both self- and interviewer-reported cognitive problems are associated with cognitive decline and dementia, with the combination of both reporting methods showing the strongest associations. PUBMED:27651016 found that the sense of self in Alzheimer's disease is supported by several independent cognitive processes, and self-reported age tracks cognitive impairment, suggesting that self-perception can reflect cognitive status. Lastly, PUBMED:29728959 highlighted the lack of a standardized approach to measuring self-reported cognitive symptoms in cancer patients and the often weak association between self-reported cognitive symptoms and neuropsychological test scores. 
In summary, self-reported cognitive concerns and abilities are related but distinct constructs that should be measured separately (PUBMED:24700645). The relationship between self-reported cognitive concerns and objective cognitive abilities is influenced by various factors, including premorbid cognitive functioning (PUBMED:37212461), cognitive biases (PUBMED:20146127), physical and mental fatigue (PUBMED:25540512), sleep quality (PUBMED:36952054), mood and self-esteem (PUBMED:37791129), and the presence of cognitive decline or dementia (PUBMED:38229039). Furthermore, the sense of self in Alzheimer's disease is supported by several independent cognitive processes, and self-reported age can reflect cognitive impairment (PUBMED:27651016). The lack of a standardized approach to measuring self-reported cognitive symptoms and the weak association with neuropsychological test scores highlight the complexity of this relationship in cancer patients (PUBMED:29728959). Finally, individuals with lower limb loss report greater concerns with cognitive health, which are not differentially affected by age or cause of amputation, suggesting that self-perceived cognitive function should be assessed to tailor education and training (PUBMED:26836953). Therefore, while self-reported cognitive concerns and abilities are interrelated, they represent different aspects of cognitive functioning and are influenced by a variety of factors, suggesting they are not simply two sides of the same coin.
Instruction: Does the effect of acupuncture depend on needling sensation and manipulation? Abstracts: abstract_id: PUBMED:23642953 Does the effect of acupuncture depend on needling sensation and manipulation? Background: Acupuncture sensation and manipulation have been considered to be an important component of acupuncture in traditional Asian medicine. However, there has been limited research as to whether acupuncture sensation is associated with therapeutic benefit. This study investigated the relationship between acupuncture sensation and analgesic effect according to acupuncture manipulation. Method: Fifty-three healthy volunteers received three different forms of acupuncture in a single-blinded crossover design: superficial needling (0.3 cm), deep needling (2 cm) and needling with bi-directional rotation. The effects of acupuncture were evaluated by using the pressure pain threshold. Acupuncture sensation measurement was done in two ways. Results: Both total acupuncture sensation and increase of the pressure pain threshold were maximum in needling with rotation, followed by deep needling and superficial needling. Repeated-measure analysis of variance (ANOVA) analysis was carried out to assess whether there was a significant difference; both showed significant difference (p = 0.000, 0.003). A paired sample t-test was carried out, which revealed that needling with rotation showed significant difference from both superficial needling and deep needling. Further, the correlation between the total acupuncture sensation and changes in pressure pain threshold were calculated using Pearson correlation; there was a significant correlation (p = 0.002, p = 0.013). Conclusion: Acupuncture sensation and pressure pain threshold increase according to the depth and rotation of acupuncture. Especially, both display significant increase with needle rotation. Further, there is a significant correlation between acupuncture needling sensation and increase in pressure pain threshold. It seems that needle rotation and acupuncture sensation play an important role in verifying the effect of acupuncture. abstract_id: PUBMED:33204248 The Electric Shock during Acupuncture: A Normal Needling Sensation or a Warning Sign. The electric shock has been proposed as one of the new needling sensations in recent years. In acupuncture sensation scales, the electric shock is included by ASS and SNQS, but not SASS, MASS, and C-MMASS. Some scholars argue that the electric shock is a normal needling sensation, but some researchers do not agree with this view. This problem has not been resolved due to a lack of evidence from basic research. Literature and research point out that the electric shock is caused by inserting a needle into the nerve directly. A question of considerable scientific and practical interest is whether the electric shock should be a normal needling sensation. In this article, we review the historical documentation of the needling sensation and the process of formulating and improving acupuncture sensation scales to suggest that the electric shock may not be a normal needling sensation. Secondly, we collected and analyzed cases of nerve injury caused by acupuncture accompanied by the electric shock and why acupuncture caused the electric shock without nerve injury. It suggests that there may be a correlation between the electric shock and peripheral nerve injury, and acupuncture manipulation is an essential factor in adverse acupuncture events. 
Finally, we put forward that the electric shock during acupuncture is a warning sign that the peripheral nerve may be injured, rather than a normal needling sensation. In the future, we hope to have experimental studies on the mechanism of the electric shock or observational studies on the correlation between the electric shock and peripheral nerve injury to verify. abstract_id: PUBMED:1394963 Preliminary observation on the relation among needling sensation, propagated sensation along meridian (PSM), and acupuncture effect when acupuncture neiguan This paper reports 300 coronary heart disease patients with different needling sensations, mainly with distension, when acupuncturing Neiguan by identical acupuncture doctor with same manipulation. The higher rate of PSM appearance and better acupuncture effect were observed, in patients with compound sensation, such as sourness-distension and distension-numbness. All of the patients who felt pain did not appear PSM and the acupuncture effect was poor. The appearance rate of PSM and acupuncture effect of other kind needling sensations were between the two. The results showed the kind of needling sensation while acupuncture had close relation with the appearance of PSM and the acupuncture effect. abstract_id: PUBMED:29354961 Relationship between needling layers and needling sensations in acupuncture Based on the sancai (literally three talents referring to "earth", "human" and "heaven") and wuti (literally five organs referring to skin, muscle, vessel, tendon and bone, which correspond to five internal organs: lungs, spleen, heart, liver and kidney respectively) in Neijing (Internal Classic), as well as the tissue structure theory of modern anatomy, and in combination with the clinical practice of acupuncture by the author in the past over 30 years, it is proposed that four layers are divided in needling depth of acupuncture. The different needling sensations are induced at different layers. The first layer is the skin. When the needle is inserted into the skin, the main sensation is "pain". The second layer is the subcutaneous layer (superficial fascia). The needle tip at this layer generates a strong "swelling" sensation, which radiates along a certain direction. The third layer is the deep fascia and muscle layer. If the needle tip touches the deep fascia, there is a "heavy" feeling. If the needle tip goes into the muscle, an "acid" sensation will occur. The fourth layer is the periosteum and bone layer, which produces a strong sensation of "pain" when touched by the needle tip. In addition, each layer has blood vessels and nerve distribution. When the needle tip touches the blood vessels, a sense of "pain" happens. If the needle tip touches the nerve, there will be a sensation of numbness, which spreads along the direction of nerve conduction. On the contrary, through the patient's needling sensations, we can roughly know which layer and direction the needle tip goes. Better understanding of the relationship between the needling layers and needling sensations contributes to the further study of needling techniques, therapeutic effects of acupuncture and patient's sensations. It is significant for the regulations of needling sensations in patients. 
abstract_id: PUBMED:31056887 Progress of researches on mechanisms of needling and moxibustion sensations and their related sensation transmission There are some records about the needling and moxibustion sensations in Chinese ancient medical literature, in which, the concept of Deqi (qi arrival) and its relationship with the production of therapeutic effects were put forward. The generation of needling or moxibustion sensations is one of the symbols for qi reaching the affected site during the manipulation and is also related to the treatment outcomes. In the present paper, we reviewed current situations about the mechanisms of acupuncture and moxibustion stimulation induced sensations and their relation to the propagated sensation (PS) published in recent 10 years from the following four aspects: 1) the patterns of manifestation of needling sensations (including general sensations, and specific sensations, as well as the practitioner's hand felt heavy, tight, hard-going senses, etc.) and moxibustion sensations (local superficial and deep warmth); 2) modern anatomical and physiological basis (skin, loose and dense connective tissue, free nerve ending, various receptors); 3) neurohumoral mechanism of acupuncture stimulation induced PS phenomenon (mechanical receptors, afferent and efferent nerve fibers, neural center, various neurotransmitters/modulators, chemical substances, etc.); 4) neurohumoral mechanism of moxibustion stimulation induced channel transmission (transient receptor potential vanilloid receptors-heat sensitive receptors, heat-shock protein, immunocytes, etc.). Additionally, the authors also put forward some suggestions about future researches on acupuncture-moxibustion sensations. abstract_id: PUBMED:21442836 Enlightenment and thinking on deqi (arrival of needling sensation) by abdominal acupuncture Enlightened by clinical practice of new acupuncture techniques such as abdominal acupuncture, different recognitions on deqi (arrival of needling sensation), namely qi sensation under the tip of the needles and qi sensation at the affected region were analyzed in this article. The cause and effect relation of arrival of the qi sensation and effectiveness was unscrambled. Varying from the commonly accepted concept of qi sensation, a new concept of latent needling sensation which took the clinical effectiveness as the standard is proposed as a result to provide theoretical preparations for the clinical practice of painless acupuncture. abstract_id: PUBMED:22734396 Framework to quantify the needling sensation and effect of acupuncture The quantitative description method currently used to evaluate needling sensations is imperfect, which blocks further study of the quantity-effectiveness relation of acupuncture. According to the results and clinical experiences of the previous quantitative research of acupuncture, the author puts forward the quantitative description rules of needling sensation (M-QNS), and a questionnaire of the quantity-effectiveness relation of acupuncture (F-QNS-E) is designed. And it is concluded that both of the M-QNS and the F-QNS-E can be applied to the randomized controlled trials and clinical practice of acupuncture. abstract_id: PUBMED:32388822 Effects of Three Needling Manipulations of Zusanli (ST 36) on Deqi Sensations and Surface Myoelectricity in Healthy Participants.
Objective: To investigate the effects of different acupuncture manipulations on Deqi sensations and surface myoelectricity, and to explore the correlation between Deqi sensations and needling manipulations. Methods: Forty-five healthy participants received twirling, lifting-thrusting, and twirling plus lifting-thrusting manipulations at the right Zusanli (ST 36). The acupuncturist's and participants' Deqi sensations were collected using the MGH Acupuncture Sensation Scale (MASS). The intensity and occurrence rate of the soreness, dull pain, pressure, heaviness, fullness, numbness, sharp pain, warmth, coolness, and throbbing feelings of the participants, and the tightness, smoothness, and tangle feelings of the acupuncturist, were measured. The correlation between the acupuncturist's and participants' Deqi sensations was analyzed. Surface electromyograms (EMG) were recorded before, during and after needling in 30 participants. The integrated EMG (iEMG), mean power frequency (MPF) and median frequency (MF) were analyzed. Results: Fullness and soreness in the participants and tightness felt by the acupuncturist were the most frequently occurring sensations. A positive correlation between participants' fullness and the acupuncturist's tightness was observed during the three aforementioned needling manipulations (P<0.05, OR>1). Almost all the needling sensations measured in the present study could be induced by the three needling manipulations. However, the strength of Deqi sensations, according to the MASS index, followed the order lifting-thrusting > twirling plus lifting-thrusting > twirling. The iEMG values increased and the MPF and MF values decreased during needling compared with those before needling, especially during lifting-thrusting (P<0.01). Conclusions: The intensity and occurrence rate of the different Deqi sensations induced by the different needling manipulations were basically similar. Fullness and soreness were the most frequently induced Deqi sensations. The strongest Deqi sensation was induced by the lifting-thrusting manipulation. There is a positive correlation between participants' fullness and the acupuncturist's tightness during the three needling manipulations. The myoelectricity around the acupoint is related to Deqi responses. (Registration No. AMCTR-IOR-20000314). abstract_id: PUBMED:24298770 Exploration on eye needling manipulation Forty years of experience in the clinical application of eye acupuncture in our hospital are summarized. The manipulation of needle insertion, withdrawal and the puncture procedure is analyzed, the keys of the eye acupuncture techniques are explained, and the basic needling manipulations are determined. For needle insertion, Professor PENG stressed stability, accuracy and speed, with no lifting, thrusting, rotating, or opening/closing techniques involved. TIAN Wei-zhu emphasizes gentle insertion, avoidance of pain, a distinct needling sensation and needling sensation transmission. In terms of operation, skin stretching, shifting of the patient's attention, quick insertion of the needle and slow advancement of the needle body are required. The transverse needling method outside the orbit is recommended as the basic approach. abstract_id: PUBMED:29231591 Exploration of acupoint combination and needling techniques in the reinforcing and reducing manipulation at different acupoints The reinforcing and reducing manipulation at different acupoints is a kind of acupuncture manipulation that has satisfactory clinical therapeutic effects when combined with proper needling techniques.
The reinforcing needling method is used in the upper region and the reducing one in the lower region, and the distal acupoints are combined with the nearby acupoints. The local acupoints or adjacent acupoints of the affected area are regarded as the nearby acupoints, e.g. the acupoints in the upper region. The distant acupoints and the acupoints on the hand and foot are regarded as distal acupoints, e.g. the acupoints in the lower region. In the reinforcing manipulation, the needle is inserted shallowly along the running direction of the meridian. In the reducing manipulation, the needle is inserted deeply and against the running direction of the meridian. The yin-yang couple needling technique is used with the combination of the front-mu and back-shu points. In the first option, the reinforcing and reducing needling method with the rotating technique predominates at the front-mu points, while that with the lifting and thrusting technique predominates at the back-shu points. In the second option, when needling the back-shu points, the needling sensation is transmitted along the transverse segment as far as the chest and abdomen. These two kinds of integration of acupoint combination and needling techniques display a certain clinical significance in improving the therapeutic effects of acupuncture. Answer: Yes, the effect of acupuncture appears to depend on needling sensation and manipulation. Research has shown that acupuncture sensation and manipulation are considered important components of acupuncture in traditional Asian medicine, and there is evidence to suggest that these factors are associated with therapeutic benefit (PUBMED:23642953). For instance, a study involving healthy volunteers demonstrated that both acupuncture sensation and the increase in pressure pain threshold were greatest with needling that included rotation, followed by deep needling and superficial needling. There was also a significant correlation between the total acupuncture sensation and changes in the pressure pain threshold, indicating that needle rotation and acupuncture sensation play a crucial role in the effect of acupuncture (PUBMED:23642953). Additionally, different needling sensations are induced at different layers of tissue, with sensations such as pain, swelling, heaviness, and acid sensation occurring at various depths, which can inform the practitioner about the needle's position and potentially influence the therapeutic outcome (PUBMED:29354961). The relationship between needling sensations and the propagated sensation along meridians (PSM) has also been observed, with a higher rate of PSM appearance and better acupuncture effect in patients experiencing compound sensations such as sourness-distension and distension-numbness (PUBMED:1394963). Furthermore, different needling manipulations, such as twirling, lifting-thrusting, and a combination of both, can induce various Deqi sensations and affect surface myoelectricity, with a positive correlation observed between certain sensations experienced by participants and the acupuncturist (PUBMED:32388822). The reinforcing and reducing manipulation at different acupoints, combined with proper needling techniques, has also been reported to have satisfactory clinical effects (PUBMED:29231591). In summary, the available evidence suggests that the effect of acupuncture is indeed influenced by needling sensation and manipulation, with specific sensations and techniques potentially enhancing the therapeutic impact of the treatment.
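To make the kind of statistic cited in the answer above concrete, the following is a minimal sketch (in Python, with entirely hypothetical numbers; none of the values are taken from the cited studies) of how a correlation between a participant's total acupuncture-sensation score, such as a summed MASS rating, and the change in pressure pain threshold might be computed:

from scipy.stats import pearsonr

# Hypothetical per-participant values, for illustration only
mass_total = [3.1, 5.4, 2.0, 6.2, 4.8, 1.5, 5.9, 3.7]      # summed sensation ratings
ppt_change_kpa = [12, 25, 8, 30, 22, 5, 27, 15]             # post minus pre pressure pain threshold, kPa

r, p_value = pearsonr(mass_total, ppt_change_kpa)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

Whether a parametric (Pearson) or rank-based (Spearman) coefficient is appropriate depends on the distribution of the scores; the cited paper does not specify its exact procedure, so this is only an illustrative analysis.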
Instruction: Does cryotherapy affect refractive error? Abstracts: abstract_id: PUBMED:11158812 Does cryotherapy affect refractive error? Results from treated versus control eyes in the cryotherapy for retinopathy of prematurity trial. Purpose: To evaluate the effect of cryotherapy on refractive error status between ages 3 months and 10 years in children with birth weights of less than 1251 g in whom severe retinopathy of prematurity (ROP) developed in one or both eyes during the neonatal period. Design: Randomized clinical trial. Participants: Two hundred ninety-one children in whom severe ROP developed during the neonatal period. Intervention: Cryotherapy for ROP. Main Outcome Measures: Cycloplegic refraction. Methods: The children underwent repeated follow-up eye examinations, including cycloplegic retinoscopy, between 3 months and 10 years after term due date. Refractive error data from all eyes that were randomized to cryotherapy were compared with data from all eyes that were randomized to serve as controls. Refractive error data were also compared for a subset of children who had both a treated and a control eye that could be refracted. Results: At all ages, the proportion of treated eyes that were unable to be refracted because of retinal detachment, media opacity, or pupillary miosis was approximately half the proportion of the control eyes that were unable to be refracted. When data from all eyes that could be refracted were considered, the distribution of refractive errors between fewer than 8 diopters (D) of myopia and more than 8 D of hyperopia was similar for treated and control eyes at all ages. The proportion of eyes with 8 D or more of myopia was much higher in treated than in control eyes at all ages after 3 months. In the subset of children who had a treated eye and a control eye that could be refracted, distributions of refractive errors in treated versus control eyes were similar at most ages. Conclusions: In both treated and control eyes, there was an increase in the prevalence of high myopia between 3 and 12 months of age. Between 12 months and 10 years of age, there was little change in distribution of refractive error in treated or control eyes. The higher prevalence of myopia of 8 D or more in treated eyes, as compared with control eyes, may be the result of cryotherapy's preservation of retinal structure in eyes that, in the absence of cryotherapy, would have progressed to retinal detachment. abstract_id: PUBMED:27162730 Late clinical characteristics of infants with retinopathy of prematurity treated with cryotherapy. Aim: To describe the clinical characteristics and late results of patients with retinopathy of prematurity (ROP) treated with "6h cryotherapy". Methods: Out of 1252 infants screened for ROP, 52 patients who were treated with temporal 6h cryotherapy from 1997 to 2005 were recalled to our clinic. Among these, 23 patients were available, and 46 eyes of these 23 infants were evaluated for visual acuity, refractive error, ocular alignment, nystagmus, retinal findings (abnormal branching of retinal vessels, retinal thinning, lattice degenerations, tortuosity of vessels, straightening of temporal vessels, narrowing of the angle of the vessel at the juxtapapillary entrance, pigment changes, macular heterotopia), optic atrophy and optic disc cupping, axial length at birth and axial length at 1 year. Results: The median age at examination was 7 (5-18) years.
In 32.6% of patients, the visual acuity was ≤20/200, and the mean best corrected visual acuity was 20/35 as measured with a Snellen chart. Mean spherical refractive error was -1.76±2.69 D. The degree of myopia at the last examination was found to be correlated with the elongation of the eye in the first year of life. Exotropia was present in 17.4% (n=8) of infants and esotropia in 13% (n=6). The most common retinal abnormality was abnormal branching of retinal vessels (82.6%), followed by retinal thinning (52.2%). Conclusion: The late clinical outcomes of infants with ROP treated in our clinic with cryotherapy seem to be comparable with the results of laser treatment. abstract_id: PUBMED:8976729 Refractive outcome in eyes with retinopathy of prematurity treated with cryotherapy or diode laser: 3 year follow up. Aims: To compare the refractive error 1 to 3 years after cryotherapy or diode laser treatment for threshold retinopathy of prematurity. Methods: Twenty six infants treated with diode laser and 17 infants treated with cryotherapy underwent cycloplegic refraction during follow up. Results: After 3 years of follow up, 94.1% of patients had myopia (right eye if bilateral) following cryotherapy and 45.5% of patients had myopia following diode laser treatment. The difference between the two proportions was 48.7% (95% confidence interval 17.8 to 80.1, p = 0.004). In the cryotherapy group 55% of patients were highly myopic (> -6.00 dioptres) while in the laser group there were no high myopes. Conclusions: In the diode laser group there were significantly fewer myopes than in the cryotherapy group up to 3 years after the procedure. There was no trend towards increasing myopia in the laser treated group and the refraction in these eyes stabilised after 1 year. In the cryotherapy group there was a significant increase in the degree of myopia between year 1 and year 3 of follow up (p = 0.02). Diode laser treatment is thought to be as effective as cryotherapy, and has the added benefit of reducing myopia, in the treatment of ROP. abstract_id: PUBMED:9135401 Cryotherapy and laser treatment for acute retinopathy of prematurity: refractive outcomes, a longitudinal study. Background: Infants who require treatment for threshold retinopathy of prematurity are at increased risk of developing refractive errors. Following the introduction of laser treatment for threshold disease, the clinical impression was that the degree of myopia was reduced compared with cryotherapy. Methods: A longitudinal study was carried out of refractive error at 3 and 12 months in 19 patients undergoing cryotherapy and 15 patients undergoing laser treatment. Results: At 3 months the median spherical equivalent refractive error in the right eye was -3.25 dioptres after cryotherapy and +0.25 dioptres after laser therapy (similar results in the left eye). The median spherical equivalent refractive error in the right eye at 12 months was -5.25 dioptres following cryotherapy and -0.50 dioptres after laser (similar results in the left eye). There was a statistically significant difference in median spherical refractive error between the therapies at 3 months and 12 months (p < 0.05, Wilcoxon rank sum) in both eyes. Conclusion: Laser therapy is associated with lower degrees of myopia during the first year of life, which is clinically significant in terms of visual performance and development. abstract_id: PUBMED:9924326 Longer-term visual outcome of eyes with retinopathy of prematurity treated with cryotherapy or diode laser.
Aims: Visual outcome of 66 eyes in 37 patients who had undergone treatment with either cryotherapy or diode laser for threshold retinopathy of prematurity was assessed. Methods: 17 patients, representing 30 eyes treated with cryotherapy, were examined at between 56 and 98 months corrected age (median 68 months). 20 patients representing 36 eyes treated with diode laser, were examined at between 30 and 66 months corrected age (median 51 months). Structural outcome was categorised as: optimal--flat posterior pole; suboptimal--macular ectopia, optic nerve hypoplasia, retinal fold involving the macula, and retinal detachment involving the macula. Results: Optimal structural outcome was, in the absence of amblyopia, associated with optimal visual acuity (of 6/12 or better) in all cases, with most eyes achieving a visual acuity of 6/9 or 6/6. Suboptimal structural outcome was invariably associated with suboptimal visual acuity. Amblyopia was present in eight out of 20 cryotherapy treated eyes and in five out of 26 laser treated eyes with an optimal structural outcome. Refractive errors were significantly less in laser treated eyes as was the incidence of anisometropic amblyopia. Conclusion: Eyes treated with either cryotherapy or diode laser for threshold retinopathy of prematurity with optimal structural outcome are associated with development of optimal visual acuity--that is, 6/12 or better. Treatment with either cryotherapy or laser does not in itself reduce the visual potential of these eyes. abstract_id: PUBMED:9924328 Three year visual outcome for treated stage 3 retinopathy of prematurity: cryotherapy versus laser. Background/aims: In the management of retinopathy of prematurity (ROP), several studies have demonstrated laser photocoagulation to be as effective as cryotherapy in reducing the incidence of unfavourable structural outcome. However, few data are available on the functional outcome. The 3 year visual acuity outcome of infants treated with laser or cryotherapy in a neonatal unit is presented. Methods: The case notes of 34 infants (64 eyes) treated with cryotherapy, between 1989 and 1992, and 32 infants (59 eyes) treated with laser, between 1992 and 1995, were reviewed. Results: In the cryotherapy group 69% of eyes had a favourable structural outcome. Of these structurally successful eyes 62.5%, 35.0%, and 33.3% of eyes had visual acuities within normal limits at the 12 month, 24 month, and 36 month corrected age milestones respectively. In the laser group 93% of eyes had a favourable structural outcome. Of these structurally successful eyes 96.4%, 66.7%, and 59.5% of eyes had visual acuities within normal limits at the 12 month, 24 month, and 36 month corrected age milestones respectively. Conclusion: In the management of ROP, when laser photocoagulation induces a structurally successful result, the potential for normal visual acuity development at 3 years is high. Whether the poorer functional outcome of the eyes treated with cryotherapy is an artefact of the historical nature of the study or as a result of an adverse effect of the destructive transcleral application is unknown. abstract_id: PUBMED:9083952 Randomized comparison of diode laser photocoagulation versus cryotherapy for threshold retinopathy of prematurity: 3-year outcome. Purpose: To report the structural and functional outcomes at a minimum of 3 years postmenstrual corrected age following laser- or cryoablation of threshold retinopathy of prematurity (ROP). 
Methods: Nineteen patients were entered into a prospective, randomized treatment protocol, in which one eye received cryotherapy and the other received diode laser photocoagulation. Asymmetric eyes were randomly assigned. Two patients have died and five were not available for 3-year outcome exams. Results: Seven males and five females with a mean birthweight of 638 g and a mean gestational age of 24.9 weeks were examined. Two patients had asymmetrical disease and received laser photocoagulation. Two discordant structural outcomes were present among the 10 symmetrical cases. The laser-treated eyes had the favorable outcome; the cryotherapy-treated eyes had the unfavorable outcome. The geometric mean visual acuity after laser photocoagulation was 20/52; after cryotherapy, it was 20/91 (P = 0.046). The mean refractive error was -6.60 diopters (D) after laser photocoagulation and -7.62 D after cryotherapy. Seven patients (58%) have developed strabismus. Conclusions: Laser photocoagulation appears to have an outcome at least comparable to cryotherapy when the patients are examined 3 years following therapy. These data, including visual acuity and refractive error, suggest that laser photocoagulation may have a minimal advantage over cryotherapy in the treatment of ROP. abstract_id: PUBMED:11438057 Randomized comparison of diode laser photocoagulation versus cryotherapy for threshold retinopathy of prematurity: seven-year outcome. Purpose: To report the structural and functional outcomes at a minimum of 7 years postmenstrual age after randomized treatment of threshold retinopathy of prematurity with laser ablation or cryotherapy. Methods: Nineteen patients were entered into a prospective, randomized protocol, in which one eye received cryotherapy, while the other eye received diode laser photocoagulation. Asymmetric eyes were randomly assigned. Two patients have died, and seven were no longer available for 7-year outcome examinations, leaving 10 children for analysis. Results: Six males and four females with a mean birthweight of 631 g and a mean gestational age of 24.8 weeks were examined. Eight were symmetrical cases and treated in both eyes. Of these, there were six concordant and two discordant structural outcomes. The laser-treated eyes had the favorable outcome in each instance. The geometric mean visual acuity of the paired eyes after laser photocoagulation was 20/33, and after cryotherapy it was 20/133 (P =.03). The mean refractive error was -6.50 diopters after laser photocoagulation and -8.25 diopters after cryotherapy (P =.27), although one of the cryotherapy eyes could not be refracted because of phthisis. Conclusions: Laser photocoagulation appears to be associated with a structural and functional outcome at least as good as cryotherapy 7 years after therapy. Visual acuity and refractive error data suggest that laser photocoagulation may have an advantage over cryotherapy. abstract_id: PUBMED:10477226 Therapeutic outcomes of cryotherapy versus transpupillary diode laser photocoagulation for threshold retinopathy of prematurity. Introduction: We undertook this study to compare the clinical outcomes in patients treated for threshold retinopathy of prematurity (ROP) with transpupillary diode laser photocoagulation versus transscleral cryotherapy. Methods: A retrospective chart review was performed of patients treated for threshold ROP at our institution between 1988 and 1997. Cryotherapy was used to treat patients before 1992, and diode laser was used thereafter. 
One hundred fifteen eyes of 63 patients underwent cryotherapy, and 130 eyes of 70 patients underwent laser treatment. Because strong concordance exists between fellow eyes treated for ROP, statistical analysis was done for right eye outcomes only. Demographics, short-term complications, and treatment parameters were compared on the entire cohort of patients. Structural outcomes were compared between a group of 79 cryotreated eyes (39 OD, 40 OS) and a group of 113 laser-treated eyes (56 OD, 57 OS). Mean cycloplegic refraction was compared between a group of 38 cryotreated eyes (18 OD, 20 OS) and a group of 90 laser-treated eyes (46 OD, 44 OS). Visual acuity for preverbal and nonverbal children was estimated and converted to a Snellen visual acuity equivalent. Geometric mean visual acuity was then compared between a group of 44 cryotreated eyes (22 OD, 22 OS) and a group of 92 laser-treated eyes (47 OD, 45 OS). Patients were included in each outcome analysis if adequate documentation was present in the patient record. Results: Demographics and short-term complications were not significantly different between the 2 groups. In the statistical group, 22 of 39 right eyes (56.4%) in the cryotherapy group versus 49 of 56 right eyes (87.5%) in the laser group had resolution of ROP after treatment (P = .0008). Mean spherical equivalent cycloplegic refraction at 12 months of age and over the follow-up period was not significantly different between the cryotherapy and laser groups. Estimated geometric mean visual acuity in the cryotherapy group was 20/103 and in the laser group was 20/49 at 12 months of age (P= .0099). Conclusions: Diode laser photocoagulation was associated with a better long-term structural outcome and visual acuity compared with cryotherapy for the treatment of threshold ROP. Refractive error was not significantly different between the 2 groups over an equivalent follow-up period. abstract_id: PUBMED:11986101 A comparison of laser photocoagulation with cryotherapy for threshold retinopathy of prematurity at 10 years: part 2. Refractive outcome. Objective: To compare the refractive outcome of eyes treated with cryotherapy for threshold retinopathy of prematurity (ROP) with eyes treated with laser photocoagulation. Design: Extended follow-up of a randomized controlled clinical trial. Participants: One hundred eighteen eyes from 66 patients were randomly assigned to receive either cryotherapy or laser photocoagulation for threshold ROP. Twenty-five patients (44 eyes treated) were available for follow-up examination 10 years later. Intervention: Cycloplegic autorefraction or retinoscopy was performed. Immersion ultrasound biometry (A-scan) was also performed, and an autokeratometer was used for keratometry. If an A-scan could not be tolerated or the patient was not cooperative, a B-scan through-the-lid biometry was performed. Corneal thickness was measured using optical coherence tomography. MAIN AND SECONDARY OUTCOME MEASURES: Refractive error. In addition, anterior chamber depth, lens thickness, and axial length were measured. Central corneal thickness measurements and keratometric readings were also obtained. Result: Eyes treated with cryotherapy were significantly more myopic than those treated with laser photocoagulation. When comparing patients with bilateral treatment, the mean spherical equivalent (SE) of eyes treated with laser was -4.48 diopters (D) compared with a mean SE of -7.65 D for eyes treated with cryotherapy (n = 15 pairs of eyes, P = 0.019). 
Cryotherapy-treated eyes had a mean axial length of 21.7 mm versus 22.9 mm for laser-treated eyes (P = 0.024, n = 12 pairs of eyes). The anterior chamber depth and lens thickness averaged 2.86 mm and 4.33 mm, respectively, in the cryotherapy-treated eyes compared with 3.42 mm and 3.95 mm in the laser-treated eyes (P < 0.001, n = 12 pairs for both measurements). There were no statistical differences in anterior corneal curvature and central corneal thickness between the two treatment modalities. Crystalline lens power bore the strongest correlation to refractive outcomes in both laser-treated (r = 0.885, P < 0.001) and cryotherapy-treated eyes (r = 0.591, P = 0.026). Although keratometric readings were higher than normal values in these eyes, there was no correlation to the degree of myopia. Conclusions: Laser-treated eyes were significantly less myopic than cryotherapy-treated eyes. Lens power seemed to be the predominant factor contributing to the excess myopia. Answer: Yes, cryotherapy does affect refractive error. Studies have shown that eyes treated with cryotherapy for retinopathy of prematurity (ROP) tend to have a higher prevalence of myopia, especially high myopia, compared to control eyes or eyes treated with other methods such as diode laser photocoagulation. The PUBMED:11158812 study found that the proportion of eyes with 8 diopters (D) or more of myopia was much higher in treated eyes than in control eyes at all ages after 3 months. Similarly, the PUBMED:8976729 study reported that 94.1% of patients had myopia following cryotherapy, with 55% being highly myopic (> -6.00 diopters), whereas in the diode laser group, there were significantly fewer myopes and no high myopes. The PUBMED:9135401 study also found that laser therapy was associated with lower degrees of myopia during the first year of life compared to cryotherapy. The PUBMED:9924326 study indicated that refractive errors were significantly less in laser-treated eyes, and the PUBMED:9924328 study suggested that eyes treated with laser photocoagulation had a better functional (visual acuity) outcome than those treated with cryotherapy. Furthermore, the PUBMED:9083952 and PUBMED:11438057 studies reported that laser photocoagulation appears to have an outcome at least comparable to cryotherapy, with suggestions that laser may have a minimal advantage over cryotherapy in terms of visual acuity and refractive error. The PUBMED:10477226 study concluded that diode laser photocoagulation was associated with a better long-term structural outcome and visual acuity compared with cryotherapy, although refractive error was not significantly different between the two groups over an equivalent follow-up period. Lastly, the PUBMED:11986101 study found that eyes treated with cryotherapy were significantly more myopic than those treated with laser photocoagulation, with a mean spherical equivalent of -7.65 D for cryotherapy-treated eyes compared to -4.48 D for laser-treated eyes. This study also noted that lens power seemed to be the predominant factor contributing to the excess myopia in cryotherapy-treated eyes.
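Several of the abstracts above report a "geometric mean visual acuity" (for example, 20/52 versus 20/91). For clarity, a minimal sketch of the standard way such a figure can be obtained, by averaging acuities on a logarithmic (logMAR-style) scale and converting back to a Snellen fraction, is shown below. The input values are invented for illustration; they are not data from the cited trials, and the papers do not state that they used exactly this procedure or code.

import math

# Hypothetical Snellen denominators (20/40, 20/60, ...), illustrative only
snellen_denominators = [40, 60, 30, 80, 50]

# Decimal acuity = 20 / denominator; average the log10 values, then convert back
log_acuities = [math.log10(20 / d) for d in snellen_denominators]
mean_log = sum(log_acuities) / len(log_acuities)
geometric_mean_denominator = 20 / (10 ** mean_log)

print(f"Geometric mean acuity is roughly 20/{geometric_mean_denominator:.0f}")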
Instruction: Urinary tissue factor in glomerulonephritis: a potential marker of glomerular injury? Abstracts: abstract_id: PUBMED:9215153 Urinary tissue factor in glomerulonephritis: a potential marker of glomerular injury? Aim: To investigate the significance of urinary tissue factor (uTF) concentrations in patients with glomerulonephritis. Methods: Urine samples were collected from normal subjects (n = 57), patients with uncomplicated renal stones (n = 30), and patients with glomerulonephritis (n = 150). Samples were then centrifuged and the pellets solubilised in n-octyl-beta-glucopyranoside. uTF concentrations were determined using a one-stage kinetic chromogenic assay. Results: The uTF concentration was higher in patients with glomerulonephritis than in normal controls (p < 0.01) or in patients with renal stones (p < 0.05). uTF activity correlated with the protein creatinine index (PCI, r = 0.41, p < 0.001), and seven patients with glomerulonephritis and a PCI ≤ 0.1 g/mmol had raised uTF. Glomerulonephritis patients were subdivided into two groups depending on the PCI: < 0.2 g/mmol creatinine (mild to moderate proteinuria, group I) and ≥ 0.2 g/mmol creatinine (heavy proteinuria, group II). In group I, uTF concentrations were higher in patients with either immune complex (IC) glomerulonephritis (p < 0.01) or non-IC (p < 0.05) glomerulonephritis than in normal controls. In group II, the IC glomerulonephritis group had higher uTF concentrations than normal controls (p < 0.001) or patients with renal stones (p < 0.01); and non-IC glomerulonephritis patients had higher uTF than normal controls (p < 0.01). When the glomerulonephritis groups were divided into broad WHO subtypes, the significance level varied with the type of glomerulonephritis. Conclusions: uTF is increased in patients with glomerulonephritis, and its concentration may reflect the aetiopathogenesis of glomerulonephritis. abstract_id: PUBMED:16982632 Tissue factor and its inhibitor in human non-crescentic glomerulonephritis: immunostaining vs plasma and urinary levels. Background: Tissue factor (TF), the most potent trigger of coagulation and an emerging antiapoptotic, proliferative and angiogenic factor, and its principal inhibitor (tissue factor pathway inhibitor, TFPI) are known to be involved in crescentic glomerulonephritis (GN). We studied the relationship between plasma and urinary levels as well as renal biopsy immunostaining of TF and TFPI antigens with reference to some clinical parameters in human chronic non-crescentic GN. Methods: We examined plasma and urinary levels of TF and total TFPI (pre-biopsy, ELISA) and the intensity of TF, TFPI 1 and TFPI 2 staining (immunoperoxidase histochemistry) in kidney biopsy specimens from 30 chronic GN patients. Results: Plasma and urinary TF (uTF) were higher in patients than in 18 healthy individuals. In normal kidneys, TF and TFPI 1/2 antigens were undetectable in glomeruli, while a distinct staining of both TFPI variants was observed in tubules and interstitial microvessels. In diseased kidneys, TF was strongly expressed in glomeruli but was undetectable in tubules. In contrast, staining for TFPI 1/2 was observed in glomeruli and tubules. Neither plasma nor urinary levels of the markers correlated with the intensity of TF and TFPI 1/2 staining in biopsy specimens.
uTF was significantly associated with creatinine clearance (R = 0.489, P = 0.006) and urinary TFPI (R = 0.554, P = 0.014), and tended to be lower in proliferative vs non-proliferative GN [83 (0-617) vs 281 (10-805) pg/ml; P = 0.06]. Conclusion: The intrarenal TF/TFPI system is profoundly disturbed in chronic GN. Plasma and urinary concentrations of TF and TFPI probably do not reflect genuine activity of the disease, likely due to a confounding effect of kidney insufficiency. uTF measurement seems to be helpful in the initial identification of proliferative GN, yet further studies are required to validate its use as a marker of glomerular injury in chronic GN. abstract_id: PUBMED:15012709 Urinary nitric oxide levels are increased and correlated with plasma concentrations in patients with Behçet's disease: is it a new urinary activity marker? Nitric oxide (NO) is a free radical and serves many functions within the kidney. Excess NO causes glomerular injury. Behçet's disease (BD) is a systemic immunoinflammatory vasculitis affecting every organ in the body, including the kidneys (subclinical glomerulonephritis). We investigated the role of urinary total nitrite levels (the end product of NO) in BD and evaluated whether urinary concentrations were correlated with plasma levels or disease activity. Thirty-six consecutive Behçet's patients (19 men, 17 women; 35.9 years), divided into active (n = 16) and inactive (n = 20) groups, and 20 age- and sex-matched healthy control volunteers (12 men, eight women; 33.2 years) were studied. Urinary NO levels (μmol/mg urinary creatinine) were higher in BD patients (4.1 ± 0.3) than in control subjects (1.7 ± 0.2; P < 0.001). Serum NO levels in Behçet's patients and control subjects were 51.3 ± 9.8 and 21.7 ± 7.3 μmol/L, respectively (P < 0.001). Active patients had higher urinary NO excretion (4.9 ± 0.3) than inactive patients (3.3 ± 0.3; P < 0.01). Urinary NO levels were correlated with serum levels (r² = 0.69, P < 0.001). The higher urinary NO levels found in BD may be produced by the kidney as a result of inflammatory stimulation. As excess NO is toxic to the tissues, increased NO levels may play a role in mediating the subclinical glomerular injury of such patients. However, we could not determine the exact site(s) of NO synthesis by the kidney, such as the glomeruli, blood vessels and/or the tubular cells. Whatever the source, urinary NO levels may be used as a new activity marker in the diagnosis and follow-up of BD by serial measurements.
GAG composition was determined by cellulose acetate electrophoresis and expressed as relative percentages by densitometric scanning of Alcian Blue-stained strips. Results: We found total GAG levels significantly higher in the urine of patients with MGN in comparison with controls and patients with IgAGN. The electrophoretic pattern analysis demonstrated low sulfated chondroitin sulfate proteoglycan (LSC-PG) in all patients compared to 44% of controls (8/18), but also low sulfated chondroitin sulfate (LSC) in 18.4% of patients (7/38) and slow migrating LSC (SM-LSC) in 8% of patients (3/38), only in the MGN group. Moreover, in patients with MGN, the LSC-PG relative content was significantly higher when compared to that observed in controls. Finally, in IgAGN and MGN patients, a significant reduction in chondroitin sulfate (CS) relative content was observed. Conclusions: It seems likely that an increase in total GAG levels takes place when a reduction in renal function occurs, but the alteration of CS and heparan sulfate (HS) relative contents, and the presence of LSC-PG and free LSC also in the urine of IgAGN patients, allow us to suggest that the GAG distribution pattern becomes abnormal before an increase in total urine GAG excretion. In addition, the quali-quantitative determination of urinary GAG and GAG-protein complexes could constitute an additional non-invasive marker for appraising the metabolism of renal connective tissue in some renal diseases. abstract_id: PUBMED:11981066 Effect of cerivastatin on proteinuria and urinary podocytes in patients with chronic glomerulonephritis. Background: We previously reported urinary podocytes to be a marker of glomerular injury. The aim of the present study was to determine whether cerivastatin, a newly developed, potent synthetic statin, affects proteinuria and urinary podocyte excretion in patients with chronic glomerulonephritis (CGN). Methods: We randomly assigned 40 normotensive hypercholesterolemic patients with CGN to receive either cerivastatin 0.15 mg/day (n=20) or placebo (n=20). Subjects comprised 24 men and 16 women, with a mean age of 40.8±14.4 years; 27 had IgA nephropathy and 13 had non-IgA proliferative glomerulonephritis. Treatment was continued for 6 months. Plasma total cholesterol, HDL-cholesterol, LDL-cholesterol and triglycerides, urinary protein excretion and the number of podocytes were measured before treatment and at 3 and 6 months after treatment. Results: After 6 months, a significant reduction in total cholesterol (P<0.001), LDL-cholesterol (P<0.001) and triglycerides (P<0.05), and a significant increase in HDL-cholesterol (P<0.001), were observed in the group treated with cerivastatin. Urinary protein excretion decreased from 1.8±0.6 to 0.8±0.4 g/day (P<0.01) in this group, and urinary podocyte excretion decreased from 1.6±0.6 to 0.9±0.4 cells/ml (P<0.01). However, placebo showed little effect on these lipid levels, urinary protein excretion and urinary podocyte excretion. The differences between the cerivastatin group and the placebo group were significant (cholesterol, P<0.001; LDL-cholesterol, P<0.001; triglycerides, P<0.05; HDL-cholesterol, P<0.001; urinary protein, P<0.01; and urinary podocytes, P<0.01). Conclusion: Statins such as cerivastatin may be beneficial for restoration of injured podocytes in patients with CGN and hypercholesterolaemia.
The significance of the presence of podocytes in the urine was studied in various renal diseases in children. The podocytes were detected by immunofluorescence using monoclonal antibodies against podocalyxin, which is present on the surface of podocytes and serves as a glycocalyx. They were scored according to the numbers per partitioned area on cytospun urine sediments. Urine podocytes were absent in normal controls, in nonglomerular diseases such as urinary tract infection and nonglomerular hematuria, and in glomerular non-inflammatory diseases such as minimal change nephrotic syndrome and membranous nephropathy. Conversely, excretion of podocytes in the urine was detected in various glomerular inflammatory diseases. A significantly higher podocyte score was found in the acute state of glomerular diseases, defined as within 6 months after disease onset. Positive correlations were obtained between the presence of urinary podocytes and the histological features of active extracapillary changes and mesangial proliferation. Urinary podocytes were examined monthly for 12 months in 7 cases with IgA nephropathy and 2 cases with Henoch-Schönlein purpura nephritis, and a consistently higher urinary podocyte score was observed in the patients with histological progression. The scoring of urinary podocytes was found to be clinically useful as a diagnostic tool for distinguishing glomerular from nonglomerular and inflammatory from noninflammatory diseases, as a marker for estimating the severity of active glomerular injury, and as a predictor of disease progression. abstract_id: PUBMED:16224513 Significance of detecting urinary podocytes in patients with active glomerulonephritis Objective: To establish a reliable method for detecting urinary podocytes as a non-traumatic marker to evaluate glomerular injury in patients with glomerulonephritis. Methods: Sixty patients with renal diseases in our renal wards were diagnosed based on the pathological findings in their kidney biopsy tissues, which were examined by light microscopy, immunofluorescence and electron microscopy. Sediments of morning urine samples were collected and centrifuged onto glass slides before kidney biopsy. Thirty healthy volunteers were enrolled as controls. The podocytes were identified by immunofluorescence staining using a monoclonal antibody against human podocalyxin (PCX), which is present on the surface of podocytes. The patients were divided into an active inflammation group and a chronic injury group according to their glomerular lesions. Results: (1) The anti-human PCX antibody we used could specifically recognize the antigen expressed on podocytes in urine sediments examined by indirect immunofluorescence staining. (2) PCX-positive staining cells in the urine were observed in various types of glomerulonephritis and were absent in the healthy controls. (3) The rate of appearance of urinary podocytes was significantly higher in the active inflammation group than in the chronic injury group (72% vs 22.7%, P<0.05). (4) The glomerular injury index in the patients with PCX-positive staining cells in the urine was markedly higher than that in the patients with PCX-negative staining cells (154±60 vs 82±46, P<0.05). Conclusion: Urinary podocytes could be detected in urine sediments from patients with glomerulonephritis by using an anti-human PCX antibody, and this method may find further application as a marker to predict the activity of glomerular lesions.
abstract_id: PUBMED:10231456 Detection of urinary macrophages expressing the CD16 (Fc gamma RIII) molecule: a novel marker of acute inflammatory glomerular injury. Background: The CD16 antigen is the Fc gamma receptor III. CD14+CD16+ cells are proinflammatory monocytes/macrophages (Mo/M phi) that constitute a minor population in the peripheral blood of healthy individuals. Little is known about the expression of CD16 antigen on Mo/M phi in glomerulonephritis. Methods: Flow cytometric analyses were performed on urine and blood samples obtained from 209 patients with various renal diseases. Patients variously suffered from rapidly progressive crescentic glomerulonephritis (RPGN), membranoproliferative glomerulonephritis (MPGN), postinfectious acute glomerulonephritis (AGN), Henoch-Schönlein purpura nephritis (HSPN), IgA nephropathy (IgAN), membranous nephropathy (MN), minimal change nephrotic syndrome (MCNS), lupus nephritis (LN), acute interstitial nephritis, hereditary nephropathy, idiopathic renal hematuria (IRH), and renal stone. Results: The CD16+ M phi population of cells was present in the urine of hematuria-positive patients with proliferative glomerulonephritis, including AGN, IgAN, RPGN, MPGN, and LN with acute inflammatory lesions, such as endocapillary proliferation, tuft necrosis, and cellular crescents. In contrast, the urinary CD16+ M phi population was negligible in hematuria-positive patients with nonproliferative renal disease, including hereditary nephropathy, IRH, and renal stone and also in patients with proliferative glomerulonephritis lacking acute inflammatory lesions. Total urinary M phi of these patients were much less than those of patients having proliferative glomerulonephritis with acute inflammatory lesions. Transient expansion of the CD16+ M phi population in urine was observed during the acute exacerbation of urinary abnormalities, whereas the disappearance of CD16+ M phi closely preceded the amelioration of urinary abnormalities in patients with proliferative glomerulonephritis. In 38 of the 98 patients positive for CD16+ M phi population in urine, the CD16+ Mo population was negligible in peripheral blood. Immunohistochemically, CD16+ M phi were present in the glomeruli of active proliferative glomerulonephritis, whereas such cells were absent in inactive proliferative glomerulonephritis or nonproliferative glomerular diseases. Conclusion: CD16+ M phi may be effector cells involved in the acute inflammation common to all types of proliferative glomerulonephritis. Furthermore, the detection of CD16+ M phi in urine, as well as urinary M phi counts, may serve as a useful indicator of the active stage of proliferative glomerulonephritis. abstract_id: PUBMED:11092994 Effects of angiotensin-converting enzyme inhibitor, angiotensin II receptor antagonist and calcium antagonist on urinary podocytes in patients with IgA nephropathy. The urinary podocyte is postulated to be a marker for estimation of the severity of active glomerular injury and a predictor of disease progression in children with glomerulonephritis. Non-dihydropyridine calcium antagonist, including verapamil, reduce proteinuria to an extent similar to that of the angiotensin-converting enzyme inhibitor (ACEI), including trandolapril, but to a greater extent than other antihypertensives. Angiotensin (Ang) II receptor antagonists, including candesartan cilexetil, show potent and long-term preventive effects against the progression of renal injury. 
The aim of the present study is to assess whether verapamil, trandolapril and candesartan cilexetil affect proteinuria and urinary podocytes in patients with IgA nephropathy. Thirty-two normotensive patients aged 18-54 years with biopsy-proven IgA nephropathy, nonnephrotic proteinuria (1-3 g/day), and normal renal function (creatinine clearance >80 ml/min) were studied. Twenty patients with diffuse mesangial proliferative glomerulonephritis (non-IgA PGN) and 20 healthy controls were also included in this study. The number of urinary podocytes in patients with advanced IgA nephropathy (n = 16) was significantly higher than that in patients with the disease in the mild stage (n = 16) (p < 0.01) or in patients with non-IgA PGN (p < 0.01). Urinary podocytes were not detected in healthy controls. The 32 patients with IgA nephropathy were randomly divided into four treatment groups: those treated with verapamil (120 mg/day, n = 8); those treated with trandolapril (2 mg/day, n = 8); those treated with candesartan cilexetil (8 mg/day, n = 8), and those given a placebo (n = 8). Treatment continued for 3 months. Antiproteinuric response in the trandolapril group was similar to that in the candesartan cilexetil group (-38 vs. -40%). The action of trandolapril or candesartan cilexetil was greater than that of verapamil (p < 0.01). Reduction in the number of urinary podocytes from baseline was significantly greater in patients treated with trandolapril or candesartan cilexetil than in patients treated with verapamil (p < 0.01). However, there was no difference between patients treated with trandolapril and those treated with candesartan cilexetil. Proteinuria and urinary podocytes were unaffected in the placebo group. These data suggest that urinary podocytes may be a marker of disease activity in adult patients with IgA nephropathy and that trandolapril and candesartan cilexetil are more effective than verapamil in reducing the number of podocytes.
Podocyturia in the PAN and anti-Thy 1.1 nephritis model was preceded by entry of glomerular podocytes into the cell cycle, i.e., cyclin D1, cdc2, and/or proliferating cell nuclear antigen (PCNA) expression. Podocyturia is a widespread phenomenon in glomerular disease and not simply a reflection of proteinuria because it is limited to phases of ongoing glomerular injury. The data suggest that podocyturia may become a more sensitive means to assess the activity of glomerular damage than proteinuria. Answer: Urinary tissue factor (uTF) has been investigated as a potential marker of glomerular injury in patients with glomerulonephritis. A study found that uTF concentrations were significantly higher in patients with glomerulonephritis compared to normal controls and patients with uncomplicated renal stones (PUBMED:9215153). The uTF activity correlated with the protein creatinine index (PCI), suggesting that uTF may reflect the severity of glomerular injury. Additionally, when patients with glomerulonephritis were subdivided based on the PCI, those with either immune complex (IC) or non-IC glomerulonephritis had higher uTF concentrations than normal controls, with the significance level varying with the type of glomerulonephritis (PUBMED:9215153). Another study examined the relationship between plasma and urinary levels of tissue factor (TF) and its principal inhibitor (tissue factor pathway inhibitor, TFPI) in chronic non-crescentic glomerulonephritis. The study found that plasma and urinary TF were higher in patients than in healthy individuals, and uTF was significantly associated with creatinine clearance and urinary TFPI. However, neither plasma nor urinary levels of the markers correlated with the intensity of TF and TFPI staining in biopsy specimens. The study concluded that while the intrarenal TF/TFPI system is profoundly disturbed in chronic glomerulonephritis, plasma and urinary concentrations of TF and TFPI may not reflect the genuine activity of the disease due to the confounding effect of kidney insufficiency (PUBMED:16982632). These findings suggest that uTF could be a useful marker for glomerular injury in glomerulonephritis, although further studies are required to validate its use and to understand the relationship between urinary and plasma levels of TF and TFPI in the context of renal disease activity.
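As a side note on the statistics summarized in this answer (group comparisons of uTF between glomerulonephritis patients and controls, and the correlation between uTF and the protein-creatinine index), the following is a minimal Python sketch of how such nonparametric analyses are commonly run. All values are invented for illustration; they are not data from the cited studies, and the original papers do not state that these exact tests or this code were used.

from scipy.stats import mannwhitneyu, spearmanr

# Hypothetical urinary tissue factor (uTF) activity values, illustrative only
utf_controls = [10, 14, 9, 12, 11, 13]
utf_gn = [22, 35, 18, 40, 27, 31]
# Hypothetical protein-creatinine index (g/mmol) for the same GN patients
pci_gn = [0.08, 0.35, 0.05, 0.60, 0.22, 0.28]

# Compare uTF between groups and correlate uTF with PCI within the GN group
u_stat, p_group = mannwhitneyu(utf_gn, utf_controls, alternative="two-sided")
rho, p_corr = spearmanr(utf_gn, pci_gn)

print(f"GN vs controls: U = {u_stat:.1f}, p = {p_group:.3f}")
print(f"uTF vs PCI: Spearman rho = {rho:.2f}, p = {p_corr:.3f}")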
Instruction: Is anterior disc displacement without reduction associated with temporomandibular joint condylar height in juvenile patients younger than 20 years? Abstracts: abstract_id: PUBMED:25799357 Is anterior disc displacement without reduction associated with temporomandibular joint condylar height in juvenile patients younger than 20 years? Purpose: Because disc displacement might be accompanied by degenerative changes in the condyle, this study explored the association between condylar height and anterior disc displacement without reduction in juvenile patients. Materials And Methods: This was a retrospective cohort study that enrolled a sample of patients younger than 20 years with unilateral disc displacement without reduction. All patients had 2 magnetic resonance imaging records taken from January 2010 to June 2013 (interval, >6 months). Condylar height, disc length, and displacement distance were measured. The predictor variable was joint status (healthy vs affected side), and the outcome variables were changes in condylar height, displacement distance, and disc length over time. SAS 9.13 was used for analysis and the P value was set at .05. Results: One hundred twenty-four patients (101 female and 23 male; mean age, 16.4 yr; mean interval, 13.6 months) were included. During the interval, the condylar height of the healthy side increased 0.75 mm, whereas the condylar height of the affected side decreased 0.41 mm. The disc of the affected side was shortened and more anteriorly displaced. Conclusion: Anterior disc displacement without reduction could be accompanied by a decrease in condylar height in juvenile patients, and the disc might be shortened and more anteriorly displaced. abstract_id: PUBMED:28279054 Physical therapy for temporomandibular joint anterior disc displacement without reduction Temporomandibular joint (TMJ) anterior disc displacement without reduction (ADDwoR) is a common type of temporomandibular joint disorder. Most patients experience limited mouth opening and joint pain at the same time. Physical therapy focuses on the function of the joint rather than on the displaced disc. The treatment aims to resolve symptoms and restore joint function through the 3M techniques: modality, manual and movement therapy. For ADDwoR patients with limited mouth opening of less than 2 months' duration, manual therapy may reposition the disc, and subsequent splint and movement therapy can maintain the disc-condyle relationship. Even so, restoring the anatomical relationship is not the end point of physical therapy. Enhanced health education and multidisciplinary cooperation are important for successful management of ADDwoR patients. abstract_id: PUBMED:38475955 Analysis of clinical changes and magnetic resonance imaging features of 37 patients with temporomandibular joint disc condylar complex with anterior disc displacement without reduction. Objectives: This study aims to investigate clinical outcomes, imaging changes, and age differences with regard to the temporomandibular joint disc condylar complex with anterior disc displacement without reduction (ADDWoR). Methods: A total of 37 patients (45 lateral joints) with ADDWoR who were admitted to The First Affiliated Hospital of Zheng Zhou University from January 2016 to June 2023 were selected. The patients comprised 4 males and 33 females, with an average age of 23.5 years. The average course of the disease was 14.4 months.
Clinical and magnetic resonance imaging (MRI) data were collected at the initial diagnosis and at the end of follow-up, and the length and thickness of the articular disc, the disc-condyle angle, and the height of the condyle were measured. The statistical significance of the changes was assessed using the SPSS 25.0 software package. Results: At the end of follow-up, the disc displacement in three patients (three lateral joints) had healed. Approximately 48.4% of the patients felt that the limitation of mandibular movement was not alleviated; 58.3% of patients reported that pain during mouth opening was not reduced; 54.5% reported pain while chewing; 33.3% of the patients showed facial deviation, and only one showed remission. The mean disc-condyle angle increased from 61.63° to 67.81°. The average length of the articular disc shortened from 8.20 mm to 7.27 mm, and the height of the condyle significantly decreased from 23.17 mm to 22.76 mm (P<0.05). The absorption ratio of the condyle increased, and there were no significant differences in the changes of joint soft and hard tissues between the adolescent and adult groups (P>0.05). Conclusions: In different age groups of patients with ADDWoR, clinical symptoms cannot be completely relieved. The disc is anteriorly displaced and shortens, condylar height decreases, and secondary facial asymmetry and mandibular retraction occur. abstract_id: PUBMED:31649449 A multidisciplinary approach for the management of temporomandibular joint disc displacement without reduction. The temporomandibular joint (TMJ) is one of the primary components of the complex masticatory system. Any disturbance in the joint due to congenital or acquired causes such as trauma, pathology, or functional disturbance affects the masticatory system. This case report highlights the management of condylar hypoplasia, due to trauma from a forceps delivery, which led to TMJ dysfunction and chronic disc displacement without reduction. A combination of a stabilization appliance and arthrocentesis using platelet-rich plasma, along with patient education, has proved effective for the conservative management of patients with chronic disc displacement and TMJ dysfunction. abstract_id: PUBMED:32620369 The effect of arthroscopic disc repositioning on facial growth in juvenile patients with unilateral anterior disc displacement. Objective: To investigate the effect of arthroscopic disc repositioning on condylar development and mandibular asymmetry in juvenile patients with unilateral temporomandibular joint (TMJ) anterior disc displacement (ADD). Patients And Methods: A retrospective cohort study was conducted on unilateral juvenile anterior disc displacement (UJADD) patients 10-20 years of age treated in Shanghai Ninth People's Hospital from 2010 to 2017. The primary predictor was TMJ arthroscopic disc repositioning surgery, according to which patients were divided into a surgery group and a control group. Initial and re-examined (with at least a 6-month interval) magnetic resonance imaging (MRI) and posteroanterior cephalometric radiographs (PA) were collected. The condylar heights on both sides of the TMJ were measured by MRI and the difference between them was calculated. Menton (Me) deviation was measured based on the PA radiographs. Changes in condylar height difference and Me deviation were compared between the two groups. Results: A total of 108 UJADD patients were included in this study, comprising 55 patients in the surgery group and 53 patients in the control group.
There were no significant differences in age, sex and follow-up time between the two groups. The condylar height difference was (-0.61 ± 2.02) mm in the surgery group and (1.68 ± 1.73) mm in the control group (P < 0.0001), while the Me deviation was (-1.00 ± 1.15) mm in the surgery group and (1.81 ± 1.96) mm in the control group (P < 0.0001), indicating that TMJ arthroscopic disc repositioning surgery could significantly improve condylar height difference and Me deviation. Conclusions: UJADD should be treated as early as possible, especially in juvenile patients, in order to improve condylar development and avoid jaw deformities. abstract_id: PUBMED:28265156 Pilot study of the short-term effects of range-of-motion exercise for the temporomandibular joint in patients with temporomandibular joint disc displacement without reduction. [Purpose] This study investigated the effectiveness of a short-term exercise program combining range-of-motion exercise for the temporomandibular joint and self-traction therapy in patients with temporomandibular joint disc displacement without reduction. [Subjects and Methods] The study participants comprised 36 females with jaw trismus and moderate to severe functional pain. The range-of-motion exercise for the temporomandibular joint was performed at the first visit by the therapist, and the patients were instructed to perform self-traction therapy in the morning and during daily bathing until the next visit 2 weeks later. Maximum mouth opening distance and the visual analogue scale score were used to compare pain on motion and mastication as well as the impact of the program on daily activities at the first consultation and 2 weeks later. [Results] All symptoms were significantly improved after 2 weeks of treatment. [Conclusion] A program that combines exercise for the temporomandibular joint and self-traction therapy can improve range of motion at the joint in the short term and reduce pain and difficulty associated with daily activity in patients with temporomandibular joint disc displacement without reduction. The results of this study suggest that such a program can serve as an effective conservative treatment. abstract_id: PUBMED:35556200 Analysis of the relationship between condylar changes and anterior disc displacement with reduction: a preliminary study. Objectives: To predict temporomandibular joint (TMJ) anterior disc displacement with reduction (ADDWR) from condylar shape, position, and dimensions obtained from CBCT images. Methods: This cross-sectional study was performed on 17 patients suffering from temporomandibular disorders diagnosed by history taking according to the chart of the American Association of Orthodontists, clinical examination according to the Helkimo index, and MRI. CBCT and MRI examinations were performed within a one-week interval. Disc position, diagnosed by MRI, was used as the gold standard. TMJs with posterior disc displacement or anterior disc displacement without reduction were excluded. Qualitative and quantitative analyses were performed on CBCT images to find the correlation between condylar variables and ADDWR. A logistic regression model was created to estimate ADDWR from condylar dimensions (height, width and depth). Results: Condylar shape and condylar position in the glenoid fossa were significantly correlated with ADDWR (P < 0.05). Condylar width, height and depth were significantly smaller in condyles with ADDWR compared to condylar dimensions in normal disc position.
Logistic regression analysis could be used to predict the probability of anterior disc displacement with reduction from condylar dimensions. Conclusion: Condylar shape, position, and dimensions assessed by CBCT are significantly correlated with ADDWR of the TMJ. Substituting the values of condylar width, height, and depth into the equation estimates the probability of ADDWR. abstract_id: PUBMED:33071510 Does Incidence of Temporomandibular Disc Displacement With and Without Reduction Show Similarity According to MRI Results? Purpose: The aim of this study was to assess patients' MRI reports and to provide a retrospective analysis through detailed evaluation of temporomandibular disc position. Patients And Methods: MRI images of 259 patients from 2008 to 2017 were reviewed retrospectively. The existence of disc displacement with and without reduction, effusion in the joint space, arthritis, degenerative changes in the joints, limitation of joint movement, and anatomic characteristics were evaluated. Results: One hundred and seventeen (45.2%) of the patients had a healthy joint connection bilaterally. Anterior disc displacement was observed in 101 (38.9%) of the patients. The number of patients with disc displacement with reduction was 50 (19.3%), and the number without reduction was similar at 51 (19.6%). Conclusion: The incidence of anterior disc displacement with or without reduction was similar; in addition, the majority of cases of anterior disc displacement were unilateral. abstract_id: PUBMED:29305246 Incidence of anterior disc displacement without reduction of the temporomandibular joint in patients with dentofacial deformity. The aim of this study was to investigate the incidence of anterior disc displacement without reduction (ADDwoR) of the temporomandibular joint (TMJ) in patients with dentofacial deformity. Eighty-eight female patients (176 joints) with skeletal class III malocclusion and 33 female patients (66 joints) with skeletal class II malocclusion, with or without anterior open bite and asymmetry, were evaluated. Magnetic resonance imaging (MRI) of the TMJ was used to diagnose ADDwoR. A statistical analysis was performed to examine the relationship between ADDwoR and skeletal structure. ADDwoR was present in 37 of the 66 joints (56.1%) in class II compared to 34 of the 176 joints (19.3%) in class III (P<0.05). In class III, ADDwoR was significantly more common in joints with mandibular asymmetry (24/74; 32.4%) than in joints with open bite (9/62; 14.5%) and joints with open bite and without mandibular asymmetry (1/38; 2.6%). In class II, ADDwoR was significantly less common in joints with mandibular asymmetry and without open bite (1/8; 12.5%). ADDwoR was only observed on the deviated side in both class III and class II with mandibular asymmetry. The prevalence of ADDwoR differed according to the dentofacial morphology. abstract_id: PUBMED:21723023 Changes in disc position, disc length, and condylar height in the temporomandibular joint with anterior disc displacement: a longitudinal retrospective magnetic resonance imaging study. Purpose: To quantitatively assess the changes in disc position relative to the condyle, disc length, and condylar height, with magnetic resonance imaging in patients with anterior disc displacement of the temporomandibular joint who had received no treatment, to provide useful information regarding treatment planning. Patients And Methods: The study included 83 joints in 62 patients.
The joints were assessed quantitatively for the disc position relative to the condyle, disc length, and condylar height on magnetic resonance imaging scans of the temporomandibular joint at initial and follow-up visits. Student t tests were used to assess the statistical significance of the changes. Results: The mean age at first visit was 30.4 years. The mean follow-up interval was 10.9 months. Thirty-nine joints were shown to have disc displacement with reduction at the initial visit. Of these joints, 27 changed to having disc displacement without reduction after follow-up. The mean disc position relative to the condyle changed from 5.28 to 6.73 mm, whereas the mean disc length changed from 8.31 to 6.91 mm and the mean condylar height from 5.21 to 4.65 mm. Differences were all statistically significant. Conclusions: The disc would likely become more anteriorly displaced and shortened and the condylar height would decrease during its natural course. However, further stratified studies with longer follow-up are necessary. Answer: Yes, anterior disc displacement without reduction is associated with temporomandibular joint condylar height in juvenile patients younger than 20 years. A study found that in juvenile patients with unilateral disc displacement without reduction, the condylar height of the affected side decreased over time, while the condylar height of the healthy side increased. Additionally, the disc on the affected side was observed to be shortened and more anteriorly displaced (PUBMED:25799357). Another study reported that in juvenile patients with anterior disc displacement without reduction, the condylar height decreased, and the disc was anteriorly displaced and shortened, leading to secondary facial asymmetry and mandibular retraction (PUBMED:38475955). Furthermore, arthroscopic disc repositioning was found to significantly improve condylar height difference and mandibular asymmetry in juvenile patients with unilateral anterior disc displacement, suggesting that early treatment is important to improve condylar development and avoid jaw deformities (PUBMED:31649449). These findings indicate that anterior disc displacement without reduction can have a negative impact on condylar height and overall mandibular development in juvenile patients.
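A note on the prediction model mentioned in PUBMED:35556200: substituting condylar width, height, and depth into a fitted equation to obtain the probability of ADDWR is, in practice, evaluating a logistic regression. The sketch below illustrates that workflow only; the measurements, and therefore the fitted coefficients, are hypothetical examples rather than the study's actual data or equation.

```python
# Illustrative sketch only: fitting a logistic regression that estimates the
# probability of anterior disc displacement with reduction (ADDWR) from
# condylar dimensions, as described conceptually in PUBMED:35556200.
# The data below are hypothetical; the published study did not release
# its dataset or coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical measurements per joint: condylar height, width, depth (mm).
X = np.array([
    [6.1, 17.2, 7.9],   # normal disc position
    [6.4, 18.0, 8.3],   # normal disc position
    [5.2, 15.1, 6.8],   # ADDWR
    [4.9, 14.6, 6.5],   # ADDWR
    [6.0, 16.8, 7.7],   # normal disc position
    [5.0, 15.0, 6.9],   # ADDWR
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = ADDWR, 0 = normal disc position

model = LogisticRegression().fit(X, y)

# "Substituting the values of condylar width, height and depth into the equation"
# corresponds to evaluating the fitted model for a new joint:
new_joint = np.array([[5.3, 15.4, 7.0]])
print("Estimated probability of ADDWR:", model.predict_proba(new_joint)[0, 1])
```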
Instruction: The contribution of psychological factors to recovery after mild traumatic brain injury: is cluster analysis a useful approach? Abstracts: abstract_id: PUBMED:25409487 The contribution of psychological factors to recovery after mild traumatic brain injury: is cluster analysis a useful approach? Objectives: Outcomes after mild traumatic brain injury (MTBI) vary, with slow or incomplete recovery for a significant minority. This study examines whether groups of cases with shared psychological factors but with different injury outcomes could be identified using cluster analysis. Method: This is a prospective observational study following 147 adults presenting to a hospital-based emergency department or concussion services in Christchurch, New Zealand. This study examined associations between baseline demographic, clinical, and psychological variables (distress, injury beliefs and symptom burden) and outcome 6 months later. A two-step approach to cluster analysis was applied (Ward's method to identify clusters, K-means to refine results). Results: Three meaningful clusters emerged (high-adapters, medium-adapters, low-adapters). Baseline cluster-group membership was significantly associated with outcomes over time. High-adapters appeared recovered by 6 weeks and medium-adapters revealed improvements by 6 months. The low-adapters continued to endorse many symptoms, negative recovery expectations and distress, being significantly at risk for poor outcome more than 6 months after injury (OR (good outcome) = 0.12; CI = 0.03-0.53; p < 0.01). Conclusions: Cluster analysis supported the notion that groups could be identified early post-injury based on psychological factors, with group membership associated with differing outcomes over time. Implications for clinical care providers regarding therapy targets and cases that may benefit from different intensities of intervention are discussed. abstract_id: PUBMED:36763333 Management of Psychological Complications Following Mild Traumatic Brain Injury. Purpose Of Review: It has been clear for decades that psychological factors often contribute to mild traumatic brain injury (mTBI) outcome, but an emerging literature has begun to clarify which specific factors are important, when, for whom, and how they impact recovery. This review aims to summarize the contemporary evidence on psychological determinants of recovery from mTBI and its implications for clinical management. Recent Findings: Comorbid mental health disorders and specific illness beliefs and coping behaviors (e.g., fear avoidance) are associated with worse recovery from mTBI. Proactive assessment and intervention for psychological complications can improve clinical outcomes. Evidence-based treatments for primary mental health disorders are likely also effective for treating mental health disorders after mTBI, and can reduce overall post-concussion symptoms. Broad-spectrum cognitive-behavioral therapy may modestly improve post-concussion symptoms, but tailoring delivery to individual psychological risk factors and/or symptoms may improve its efficacy. Addressing psychological factors in treatments delivered primarily by non-psychologists is a promising and cost-effective approach for enhancing clinical management of mTBI. Recent literature emphasizes a bio-psycho-socio-ecological framework for understanding mTBI recovery and a precision rehabilitation approach to maximize recovery.
Integrating psychological principles into rehabilitation and tailoring interventions to specific risk factors may improve clinical management of mTBI. abstract_id: PUBMED:11953001 The investigation of factors underlying deficits in self-awareness and self-regulation. Primary Objective: To examine a theoretical model which suggests that a contribution of both psychological and neuropsychological factors underlies deficits in self-awareness and self-regulation. Research Design: Multivariate design including correlations and analysis of variance (ANOVA). Methods: Sixty-one subjects with acquired brain injury (ABI) were administered standardized measures of self-awareness and self-regulation. Psychological factors included measures of coping-related denial, personality-related denial and personality change. Neuropsychological factors included an estimate of IQ and two measures of executive functioning that assess capacity for volition and purposive behaviour. Main Outcomes And Results: The findings indicated that the relative contribution of neuropsychological factors to an outcome of deficits in self-awareness and self-regulation had a more direct effect than psychological factors. In general, measures of executive functioning had a direct relationship, while measures of coping-related and personality-related denial had an indirect relationship with measures of self-awareness and self-regulation. Conclusion: The findings highlighted the importance of measuring both neuropsychological and psychological factors and demonstrated that the relative contribution of these variables varies according to different levels of self-awareness and self-regulation. abstract_id: PUBMED:24328796 Psychosocial and psychological factors associated with post-traumatic stress disorder following traumatic brain injury in adult civilian populations: a systematic review. Primary Objective: Increasing support exists for the development of post-traumatic stress disorder (PTSD) after traumatic brain injury (TBI). Despite the psychological nature of PTSD, previous reviews have mainly focused on the injury-related and neurological characteristics of its presentation in TBI. Consequently, this review systematically examined the psychological and psychosocial variables associated with PTSD symptoms after TBI in civilian adults. Methods: Detailed searching retrieved 28 relevant articles which met the inclusion criteria. Each article underwent a thorough quality assessment procedure and data were extracted relevant to the review's aims. Results: Results highlighted several psychological and psychosocial variables associated with PTSD after TBI, including historical factors and those which become relevant after the traumatic event. Furthermore, the results indicated that some factors were not associated with PTSD after TBI, despite a relationship existing with PTSD in the general population. The findings of the quality assessment were utilized throughout the formation of results. Conclusions: The review highlights the importance of addressing psychological and psychosocial factors within the assessment and treatment of PTSD after TBI. The limitations of the research are highlighted and the clinical and research implications discussed. abstract_id: PUBMED:31981329 Psychological distress and its associated factors among informal caregivers of disabled young adults with traumatic brain injuries.
Objective: To measure the psychological distress among the informal caregivers of disabled young adults with traumatic brain injuries (TBI) and to identify its associated factors. Methods: An analytical cross-sectional study was conducted with 76 informal caregivers of young adults (19-50 years) with TBI, using admission records of one of the tertiary care hospitals in Karachi, Pakistan, from 2015 to 2016. Data were collected using a structured questionnaire comprising a general demographic questionnaire and the General Health Questionnaire-28. The effect of the predictors on the psychological distress was determined by applying multiple linear regression analysis. Results: The mean score of the psychological distress was found to be 23.5±14.28. The findings indicated that anxiety and insomnia, and social dysfunction were the most affected domains of psychological distress, whereas depression was the least affected. Factors including the management of finances, number of children, patient's gender, and patient's level of disability and lack of socialization were positively associated with the level of distress. As reported by participants, religious beliefs served as a coping mechanism for most of them. Conclusions: The study showed a high level of distress among informal caregivers of young adults with TBI. For recognizing the psychological effects of TBI in patients and their informal caregivers, it is important to initiate and ensure the provision of psychological support to the patients and their families. abstract_id: PUBMED:21913563 The significance of contribution degree of hurting factors in mental injury and the research progress In forensic psychiatric evaluation, experts frequently need to assess the contribution degree of hurting factors to the victim's mental injury. The contribution degree reflects the extent of harm caused by the injurer and is a quantitative index of the responsibility that the injurer should bear. It is also important evidence for the judgement. Presently, there is no accepted and practicable quantitative tool to reflect the objective contribution degree. This article reviews domestic and international research on the contribution degree of hurting factors in mental injury, including the concept, its connotations, related assessment methods, problems in assessment, and future research directions. abstract_id: PUBMED:27703960 Effect of Mild Traumatic Brain Injury and Demographic Factors on Psychological Outcome. Background: It is well-known that severe brain injury can make people susceptible to psychological symptoms. However, mild traumatic brain injury (MTBI) is still open for discussion. Objectives: This study aimed to compare psychological symptoms of MTBI patients with those without MTBI considering demographic auxiliary variables. Patients And Methods: This prospective cohort study was conducted on 50 MTBI patients and 50 healthy subjects aged 15-65 years. Psychological assessment was carried out six months post-injury using a series of self-report measures including the Brief Symptom Inventory (BSI) scale. Other information on the individuals in the two groups was recorded prospectively. Data were analyzed using the chi-square test, t-test, and multiple linear regression tests. Results: There was a significant difference between the MTBI patients and healthy subjects in all subscales and total score of BSI.
Our findings showed that obsession-compulsion and anxiety symptoms were significantly more common in the MTBI patients than in the healthy subjects. Also, multivariate regression analysis six months post-injury showed that head trauma and substance abuse can have an effect on psychological symptoms. Conclusions: Mild traumatic brain injury, even with a normal CT scan, and a history of substance abuse are closely related to psychological symptoms. Therefore, it is recommended that patients with brain trauma 6 months post-injury and subjects with a history of substance abuse be evaluated for psychological distress to support better rehabilitation. abstract_id: PUBMED:27186650 Social anxiety following traumatic brain injury: an exploration of associated factors. Social anxiety (SA) following traumatic brain injury (TBI) has the potential to affect an individual's general psychological well-being and social functioning; however, little research has explored factors associated with its development. The present study used hierarchical multiple regression to investigate the demographic, clinical and psychological factors associated with SA following TBI. A sample of 85 people who experienced TBI was recruited through social media websites and brain injury services across the North-West of England. The overall combined biopsychosocial model was significant, explaining 52-54.3% of the variance in SA (across five imputations of missing data). The addition of psychological variables (self-esteem, locus of control, self-efficacy) made a significant contribution to the overall model, accounting for an additional 12.2-13% of variance in SA above that explained by demographic and clinical variables. Perceived stigma was the only significant independent predictor of SA (B = .274, p = .005). The findings suggest that psychological variables are important in the development of SA following TBI and must be considered alongside clinical factors. Furthermore, the significant role of stigma highlights the need for intervention at both an individualised and societal level. abstract_id: PUBMED:32759012 Psychological flexibility: A psychological mechanism that contributes to persistent symptoms following mild traumatic brain injury? Persistent symptoms following a mild traumatic brain injury (mTBI) can have profound implications for all aspects of an individual's functioning. Psychological factors have a significant role in contributing to the development of persistent post-concussion symptoms and predicting outcomes. Biopsychosocial explanations have therefore been applied to prognostic models of mTBI. What is not evident in the literature to date is an understanding of the psychological mechanisms that may be important in mediating the various psychological factors in these models. The construct of psychological flexibility holds promise in this regard. Psychological flexibility is the ability to act in alignment with values in the presence of inner discomfort such as pain and distress. It is hypothesised that psychological flexibility has a significant role in the development and maintenance of persistent symptoms following mild traumatic brain injury.
The rationale that forms the basis of this hypothesis is as follows: a relationship exists between psychological flexibility and pre-injury psychological risk factors; psychological flexibility is vulnerable to the pathophysiology associated with mTBI; post-injury psychological and neuropsychological factors exacerbate and maintain psychological inflexibility; and psychological flexibility underlies psychological responses to mTBI that contribute to unfavourable outcomes. A discussion of the literature that pertains to each of these points is presented. Based on this hypothesis, we conclude that there is a rationale to empirically explore the role of psychological flexibility in mTBI and its relationship to outcomes. This may also lead to evaluation of specific interventions that target this psychological mechanism in mTBI, such as Acceptance and Commitment Therapy, and improve outcomes for this population. abstract_id: PUBMED:12852690 Psychological aspects of serious head injury in the competitive athlete. In addition to the physical aspects of athletic injury, there are psychological aspects to consider. These are important during all phases of injury management: the acute injury, the rehabilitative process, and the return-to-play progression. Emotional disturbances are often associated with mild traumatic brain injury (mTBI) both directly and indirectly. Evaluating the athlete with mTBI can be made more difficult by the need to distinguish the physical and the emotional effects of injury and to differentiate the emotional factors directly related to the brain injury from those that arise indirectly (e.g., reaction to being held out of play). This article discusses some of the psychological issues related to evaluating the head-injured athlete. Answer: The study identified in abstract PUBMED:25409487 investigated the contribution of psychological factors to recovery after mild traumatic brain injury (MTBI) using cluster analysis. The study followed 147 adults presenting to a hospital-based emergency department or concussion services in Christchurch, New Zealand, and examined associations between baseline demographic, clinical, and psychological variables (distress, injury beliefs, and symptom burden) and outcome 6 months later. The researchers applied a two-step approach to cluster analysis (Ward's method to identify clusters, K-means to refine results) and found three meaningful clusters (high-adapters, medium-adapters, low-adapters). Baseline cluster-group membership was significantly associated with outcomes over time, with high-adapters appearing recovered by 6 weeks, medium-adapters showing improvements by 6 months, and low-adapters continuing to endorse many symptoms, negative recovery expectations, and distress, being significantly at risk for poor outcome more than 6 months after injury. The study concluded that cluster analysis supported the notion that groups could be identified early post-injury based on psychological factors, with group membership associated with differing outcomes over time. This suggests that cluster analysis is indeed a useful approach for identifying psychological factors contributing to recovery after MTBI and has implications for clinical care providers regarding therapy targets and cases that may benefit from different intensities of intervention.
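As context for the answer above, the two-step clustering procedure described in PUBMED:25409487 (Ward's hierarchical method to identify clusters, followed by K-means to refine them) can be outlined in code. The sketch below is illustrative only: the simulated feature matrix, the standardization step, and the seeding of K-means with the Ward cluster centroids are assumptions for demonstration, not the study's data or analysis script.

```python
# Illustrative sketch of a two-step cluster analysis (Ward's method, then K-means),
# mirroring the approach described in PUBMED:25409487. Data are simulated; in the
# study the features were baseline psychological measures (distress, injury
# beliefs, symptom burden).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Simulated baseline scores for 147 participants (three psychological variables).
X = rng.normal(size=(147, 3))
X_std = StandardScaler().fit_transform(X)  # common scale for all variables (assumption)

# Step 1: Ward's hierarchical clustering to identify a candidate three-cluster solution.
Z = linkage(X_std, method="ward")
initial_labels = fcluster(Z, t=3, criterion="maxclust")

# Step 2: K-means, seeded with the Ward cluster centroids, to refine the solution.
centroids = np.vstack([X_std[initial_labels == k].mean(axis=0) for k in (1, 2, 3)])
refined = KMeans(n_clusters=3, init=centroids, n_init=1, random_state=0).fit(X_std)
print("Cluster sizes:", np.bincount(refined.labels_))
```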
Instruction: Does neuromuscular electrical stimulation enhance the effectiveness of an exercise programme in subjects with knee osteoarthritis? Abstracts: abstract_id: PUBMED:22952305 Does neuromuscular electrical stimulation enhance the effectiveness of an exercise programme in subjects with knee osteoarthritis? A randomized controlled trial. Objective: To determine whether neuromuscular electrical stimulation applied to the quadriceps femoris muscle will enhance the effectiveness of an exercise programme in patients with knee osteoarthritis. Design: A randomized trial with parallel intervention treatment groups. Setting: Outpatient physical therapy clinic. Subjects: Fifty participants (mean age (SD) 68.9 (7.7) years) with symptomatic idiopathic knee osteoarthritis and radiographic evidence (grade ≥ II, Kellgren's classification). Interventions: Participants were randomized into one of two groups receiving 12 biweekly treatments: an exercise-only group or an exercise combined with neuromuscular electrical stimulation group (biphasic pulses, at 75 Hz and 250 µs phase duration). Main Measures: Knee pain intensity; maximal voluntary isometric contraction and voluntary activation of the quadriceps femoris muscle; measures of functional performance. Results: A significant interaction effect (P = 0.01) indicated greater improvement in pain for the electrical stimulation group. The mean (SD) change in pain intensity was from 7.5 ± 2 to 5 ± 2.2 and from 7.4 ± 1.9 to 3.3 ± 2.4 in the exercise and electrical stimulation groups, respectively. A significant treatment effect was also noted for the voluntary activation of the quadriceps femoris, which increased by 22.2% in the electrical stimulation group and by 9.6% in the exercise group (P = 0.045). Significant improvements were observed in both groups in all remaining measures, with no differences between groups. Conclusions: Electrical stimulation treatment to the quadriceps femoris enhanced the effectiveness of an exercise programme in alleviating pain and improving voluntary activation in patients with knee osteoarthritis, but did not enhance its effect on muscle strength or functional performance. abstract_id: PUBMED:25083133 The effects of exercise and neuromuscular electrical stimulation in subjects with knee osteoarthritis: a 3-month follow-up study. Background: Strengthening exercises of the quadriceps femoris muscle (QFM) are beneficial for patients with knee osteoarthritis (OA). Studies reporting short-term effects of neuromuscular electrical stimulation (NMES) of the QFM in this population support the use of this modality as an adjunct treatment. The objectives of this follow-up study are to compare the effects of an exercise program with and without NMES of the QFM on pain, functional performance, and muscle strength immediately posttreatment and 12 weeks after completion of the intervention. Methods: Sixty-three participants with knee OA were randomly assigned into two groups receiving 12 biweekly treatments: an exercise-only program or an exercise program combined with NMES. Results: A significantly greater reduction in knee pain was observed immediately after treatment in the NMES group, which was maintained 12 weeks postintervention in both groups. Although at this stage NMES had no additive effect, both groups demonstrated an immediate increase in muscle strength and in functional abilities, with no differences between groups.
Although the improvements in gait velocity and in self-reported functional ability were maintained at the follow-up session, the noted improvements in muscle strength, the timed up and go test, and stair negotiation were not maintained. Conclusion: Supplementing an exercise program with NMES to the QFM increased pain modulation immediately after treatment in patients with knee OA. Maintenance of the positive posttreatment effects during a 12-week period was observed only for pain, self-reported functional ability, and walk velocity, with no difference between groups. Clinical Rehabilitation Effect: The effects of a comprehensive group exercise program with or without NMES are partially maintained 12 weeks after completion of the intervention. The addition of NMES is recommended primarily for its immediate effect on pain. Further studies are necessary to determine the effects of repeated bouts of exercise with and without NMES in this population. abstract_id: PUBMED:37674803 The effectiveness of peroneal nerve stimulation combined with neuromuscular electrical stimulation in the management of knee osteoarthritis: A randomized controlled single-blind study. Objectives: This study aimed to compare the effectiveness of neuromuscular electrical stimulation (NMES) combined with peroneal nerve stimulation (PNS) on muscle strength around the knee, proprioception, pain, functional status, and quality of life in patients with knee osteoarthritis (OA). Patients And Methods: The prospective, randomized, single-blinded, controlled trial included 63 patients with clinical and radiological diagnoses of knee OA between December 2019 and March 2020. The patients were divided into two groups: Group 1 (NMES+PNS, n=31) and Group 2 (NMES, n=32). The patients were followed up at two and six weeks. Main outcome measures were the Visual Analog Scale, Western Ontario and McMaster Universities Arthritis Index, Nottingham Health Profile, and the 100-m walking test; quadriceps muscle strength, hamstring muscle strength (HMS), and joint position sense were evaluated using a computer-controlled isokinetic dynamometer at 60°/sec, 90°/sec, and 120°/sec angular velocities. Proprioception was evaluated at 30° and 60° flexion angles using the same device. Results: Two patients from Group 1 and two patients from Group 2 were excluded from the study after they failed to show up for the six-week control. As a result, the study was completed with 59 patients (30 females, 29 males; 55.9±6.1 years; range, 40 to 65 years). There was a significant difference between the two groups in the 100-m walking test parameter at the six-week control in favor of Group 1 (p<0.05). There was a significant difference in favor of Group 1 in the parameters of proprioception (30° and 60°) and HMS (60° and 90°) in both the two-week evaluation and six-week controls (p<0.05). The HMS 120° parameter showed a significant difference in favor of Group 1 at the six-week control (p<0.05). Conclusion: In patients with knee OA, we believe that PNS combined with NMES may be more effective than NMES treatment alone in terms of proprioception, HMS, and functional status. abstract_id: PUBMED:25261425 Effects of neuromuscular electrical stimulation and low-level laser therapy on the muscle architecture and functional capacity in elderly patients with knee osteoarthritis: a randomized controlled trial.
Objectives: To determine the effects of low-level laser therapy in combination with neuromuscular electrical stimulation on the muscle architecture and functional capacity of elderly patients with knee osteoarthritis. Design: A randomized, evaluator-blinded clinical trial with sequential allocation of patients to three different treatment groups. Setting: Exercise Research Laboratory. Subjects: A total of 45 elderly females with knee osteoarthritis, grades 2-4, aged 66-75 years. Intervention: Participants were randomized into one of the following three intervention groups: electrical stimulation group (18-32 minutes of pulsed current, stimulation frequency of 80 Hz, pulse duration of 200 μs and stimulation intensity fixed near the maximal tolerated), laser group (low-level laser therapy dose of 4-6 J per point, six points at the knee joint) or combined group (electrical stimulation and low-level laser therapy). All groups underwent a four-week control period (without intervention) followed by an eight-week intervention period. Main Measures: The muscle thickness, pennation angle and fascicle length were assessed by ultrasonography, and the functional capacity was assessed using the 6-minute walk test and the Timed Up and Go Test. Results: After intervention, only the electrical stimulation and combined groups exhibited significant increases in the muscle thickness (27%-29%) and pennation angle (24%-34%) values. The three groups exhibited increased performance on the walk test (5%-9%). However, no significant differences in terms of functional improvements were observed between the groups. Conclusions: Neuromuscular electrical stimulation reduced the deleterious effects of osteoarthritis on the quadriceps structure. Low-level laser therapy did not potentiate the effects of electrical stimulation on the evaluated parameters. abstract_id: PUBMED:37926438 Effectiveness of neuromuscular electrical stimulation training combined with exercise on patient-reported outcomes measures in people with knee osteoarthritis: A systematic review and meta-analysis. Objective: This study examined the effectiveness of neuromuscular electrical stimulation (NMES) added to exercise or superimposed on voluntary contractions on patient-reported outcome measures (PROMs) in people with knee osteoarthritis (OA). Methods: This systematic review was described according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Randomized controlled trials (RCTs) were obtained from a systematic literature search in five electronic databases (PubMed, PEDro, LILACS, EMBASE, and SPORTDiscus) in April 2022. We described the effects of the intervention for each PROM (scores for Pain; Self-reported functional ability; Symptoms (hearing clicking, swelling, catching, restricted range of motion, and stiffness); Daily living function; Sports function; and Quality of life) and used a random-effects model to examine the impact of NMES plus exercise on pain compared with exercise in people with knee OA. Results: Six RCTs (n = 367) were included. In the qualitative synthesis, the systematic literature analysis showed improvement in pain after NMES plus exercise compared with exercise alone in three studies. The other three studies revealed no difference between groups in pain, although improvements after treatment were similar.
In the meta-analysis, NMES at a specific joint angle combined with exercise was not superior to exercise alone in pain management (standardized mean difference = -0.33, 95% CI = -1.05 to 0.39, p = 0.37). There was no additional effect of NMES added to exercise on self-reported functional ability, stiffness, and physical function compared with exercise alone. In only one study, symptoms, activities of daily living, sports function, and quality of life improved after whole-body electrostimulation combined with exercise. Conclusion: This review found insufficient evidence for the effectiveness of NMES combined with exercise in treating knee OA considering PROMs. While pain relief was observed in some studies, more high-quality clinical trials are needed to support the use of NMES added to exercise in clinical practice. Electrical stimulation in a whole-body configuration combined with exercise shows promise as an alternative treatment option. abstract_id: PUBMED:30514113 Short-term effects of neuromuscular electrical stimulation and ultrasound therapies on muscle architecture and functional capacity in knee osteoarthritis: a randomized study. Objective: To determine the effects of ultrasound therapy and neuromuscular electrical stimulation (NMES) application on the muscle architecture and functional capacity in patients with knee osteoarthritis. Design: A randomized study. Subjects: A total of 60 patients with knee osteoarthritis. Interventions: Participants were randomized into one of the following two intervention groups, treated five days a week for three weeks: the combination of NMES application, hot pack, and exercise therapy was applied to the NMES group. The combination of therapeutic ultrasound, hot pack and exercise therapy was applied to the ultrasound therapy group. Main Measures: Subjects were evaluated for pain and functional capacity with the use of the visual analog pain scale (VAS), Western Ontario and McMaster Universities Arthritis Index (WOMAC), and the 15-meter walking test. The muscle architecture (muscle thickness, pennation angle and fascicle length) was assessed from vastus lateralis and quadriceps femoris muscles bilaterally by ultrasonography. Results: Both groups presented significant improvements in all outcome measures before and after treatment (P < 0.01). There were significant improvements in VAS rest pain (P < 0.05), VAS activity pain (P < 0.05), WOMAC pain (P < 0.05), WOMAC stiffness score (P < 0.05), and WOMAC physical function (P < 0.05) for the ultrasound therapy group in comparison to the NMES group. The NMES group exhibited greater increases in muscle thickness and fascicle length values when compared to the ultrasound therapy group (P < 0.05). Conclusion: Ultrasound therapy appears to be an effective treatment in reducing pain and improving functional capacity. NMES application has a greater effect on muscle architecture. abstract_id: PUBMED:34262384 Adherence to Neuromuscular Electrical Stimulation Interventions for Muscle Impairment in Hip and Knee Osteoarthritis: A Systematic Review. Background: Neuromuscular electrical stimulation (NMES) provides a promising approach to counteract muscle impairment in hip and knee osteoarthritis, and to expedite recovery from joint replacement surgery. Nonetheless, application into clinical orthopaedic practice remains limited, partly due to concerns regarding patient tolerance.
Objectives: This systematic review aimed to quantify levels of adherence to NMES interventions for muscle impairment in hip and knee osteoarthritis and identify strategies to increase compliance. Data Sources: Randomised controlled trials (RCTs) were identified in a web-based literature review, completed in December 2020. The databases sourced included the Cochrane Library, CINAHL Complete, Medline Complete and PubMed. Eligibility Criteria: Studies were included if they were: (i) conducted in cohorts of adults with hip or knee osteoarthritis; (ii) used a protocol of electrical muscle stimulation prescribed to treat muscle impairment; and (iii) reported intervention adherence or attrition rate. Data were extracted on adherence rate, reasons for non-adherence and potential strategies to increase adherence. Risk of bias was assessed using the Physiotherapy Evidence Database (PEDro) scale. Results: The search yielded 120 articles, of which 15 studies were considered eligible and included in the analysis (n = 922). All NMES treatment was applied to the quadriceps, with 1 study targeting the quadriceps and calves. The mean PEDro score of the included studies was 6.80 out of a possible 10 (range 6-8). Mean adherence did not differ between groups receiving treatment with NMES (85% ± 12%) and control groups receiving voluntary exercise or education (84% ± 9%) (P = .97). Reasons for non-adherence or attrition included a dislike of the device, dizziness, pain and discomfort. Strategies to increase adherence included NMES education, a familiarisation period, supervision, setting thresholds based upon patient tolerance, monitoring pain levels during stimulation and using built-in adherence trackers. Conclusions: This systematic review indicates that adherence to NMES interventions for muscle impairment in hip and knee osteoarthritis in clinical trials does not differ from that of control groups receiving education or voluntary exercise, and hence should not be a barrier to application in clinical practice. abstract_id: PUBMED:33059084 Computed Tomography Image Analysis of Neuromuscular Electrical Stimulation in the Treatment of Knee Osteoarthritis with Different Radiologic Characteristics Based on Iterative Reconstruction Algorithm. Objective: To use computed tomography image analysis with an iterative reconstruction algorithm to understand the effect of neuromuscular electrical stimulation of the quadriceps on motor function rehabilitation after total knee arthroplasty. Methods: After total knee arthroplasty, 104 patients were randomly divided into 2 groups. The treatment group comprised 52 patients. The quadriceps femoris was stimulated by neuromuscular electrical stimulation, and patients were required to actively extend the knee along with the current stimulation. In the second group, which received cutaneous nerve electrical stimulation, 2 sets of electrodes were placed on knee pain points. All patients participated in conventional rehabilitation and were discharged from the hospital a mean of 14.89 ± 3.65 days after treatment. Visual analog scale, range of motion of the knee joint, and Knee Injury and Osteoarthritis Score were evaluated before discharge. The mean interval between follow-up evaluations after discharge was 9.5 months. Knee Society Score and Knee Injury and Osteoarthritis Score were assessed during follow-up.
Results: Visual analog scale scores of the treatment and control groups at discharge were 18.11 ± 9.66 and 16.13 ± 4.25, and active range of motion of the knee joint was 103.21° ± 15.44° and 99.21° ± 15.19°, respectively. Limitation of active knee extension in the treatment group was 1.93° ± 3.47°, which was significantly smaller than in the control group (6.26° ± 4.28°). Conclusions: Neuromuscular electrical stimulation of the quadriceps muscle early after total knee arthroplasty helps to improve the function of the knee extension mechanism and accelerate functional rehabilitation. Computed tomography has found that changes in tibial subchondral bone plate and subchondral cancellous bone in patients with knee osteoarthritis are mainly due to destruction of ultrastructural homeostasis. This change may be the cause of knee osteoarthritis. abstract_id: PUBMED:29162949 Neuromuscular Electrical Stimulation for Treatment of Muscle Impairment: Critical Review and Recommendations for Clinical Practice. Purpose: In response to requests from physiotherapists for guidance on optimal stimulation of muscle using neuromuscular electrical stimulation (NMES), a review, synthesis, and extraction of key data from the literature was undertaken by six Canadian physical therapy (PT) educators, clinicians, and researchers in the field of electrophysical agents. The objective was to identify commonly treated conditions for which there was a substantial body of literature from which to draw conclusions regarding the effectiveness of NMES. Included studies had to apply NMES with visible and tetanic muscle contractions. Method: Four electronic databases (CINAHL, Embase, PUBMED, and SCOPUS) were searched for relevant literature published from database inception until May 2015. Additional articles were identified from bibliographies of the systematic reviews and from personal collections. Results: The extracted data were synthesized using a consensus process among the authors to provide recommendations for optimal stimulation parameters and application techniques to address muscle impairments associated with the following conditions: stroke (upper or lower extremity; both acute and chronic), anterior cruciate ligament reconstruction, patellofemoral pain syndrome, knee osteoarthritis, and total knee arthroplasty as well as critical illness and advanced disease states. Summaries of key details from each study incorporated into the review were also developed. The final sections of the article outline the recommended terminology for describing practice using electrical currents and provide tips for safe and effective clinical practice using NMES. Conclusion: This article provides physiotherapists with a resource to enable evidence-informed, effective use of NMES for PT practice. abstract_id: PUBMED:22324059 Neuromuscular electrical stimulation in the treatment of knee osteoarthritis: a systematic review and meta-analysis. Objective: To assess the effectiveness of surface neuromuscular electrical stimulation in the treatment of knee osteoarthritis. Design: Systematic review and meta-analysis of randomized controlled and controlled clinical trials. Methods: Studies were identified from databases (MEDLINE, EMBASE, CINAHL, Sports Discus, PEDro and the Cochrane Library) searched to January 2011 using a battery of keywords. Two reviewers selected studies meeting inclusion criteria.
The methodological quality of the included studies was assessed using the Thomas Test and the strength of the evidence was then graded using the Agency for Health Care Policy and Research guidelines. Data were pooled and meta-analyses were performed. Results: Nine randomized controlled trials and one controlled clinical trial, studying a total of 409 participants (n = 395 for randomized controlled trials, and n = 14 for controlled trial) with a diagnosis of osteoarthritis were included. Inconsistent evidence (level D) was found that neuromuscular electrical stimulation has a significant impact on measures of pain, function and quadriceps femoris muscle strength in knee osteoarthritis. Conclusion: The role of neuromuscular electrical stimulation in the treatment of knee osteoarthritis is ambiguous. Therefore, future work is needed in this field to clearly establish the role of neuromuscular electrical stimulation in this population. Answer: Neuromuscular electrical stimulation (NMES) has been studied to determine its effectiveness in enhancing exercise programs for subjects with knee osteoarthritis. The evidence from various studies presents mixed results. One randomized controlled trial found that NMES applied to the quadriceps femoris muscle did enhance the effectiveness of an exercise program in alleviating pain and improving voluntary activation in patients with knee osteoarthritis. However, it did not enhance the effect on muscle strength or functional performance (PUBMED:22952305). A follow-up study reported that the addition of NMES to an exercise program increased pain modulation immediately after treatment, with maintenance of positive posttreatment effects during a 12-week period observed only for pain, self-reported functional ability, and walk velocity, with no difference between groups (PUBMED:25083133). Another study compared the effectiveness of NMES combined with peroneal nerve stimulation (PNS) and found that this combination may be more effective than NMES alone in terms of proprioception, hamstring muscle strength, and functional status in patients with knee osteoarthritis (PUBMED:37674803). In contrast, a study examining the effects of NMES and low-level laser therapy on muscle architecture and functional capacity in elderly patients with knee osteoarthritis found that NMES reduced the deleterious effects of osteoarthritis on the quadriceps structure, but low-level laser therapy did not potentiate the effects of electrical stimulation on the evaluated parameters (PUBMED:25261425). A systematic review and meta-analysis examining the effectiveness of NMES training combined with exercise on patient-reported outcomes measures in people with knee osteoarthritis found insufficient evidence for the effectiveness of NMES combined with exercise in treating knee osteoarthritis considering PROMs. While pain relief was observed in some studies, more high-quality clinical trials are needed to support the use of NMES added to the exercise in clinical practice (PUBMED:37926438). Another study found that ultrasound therapy was effective in reducing pain and improving functional capacity, while NMES application had more effects on muscle architecture (PUBMED:30514113). Adherence to NMES interventions for muscle impairment in hip and knee osteoarthritis was found to be comparable to control groups receiving education or voluntary exercise, suggesting that adherence should not be a barrier to application in clinical practice (PUBMED:34262384).
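For readers unfamiliar with how a pooled estimate such as the one in PUBMED:37926438 (standardized mean difference = -0.33, 95% CI -1.05 to 0.39) is obtained, the sketch below shows a DerSimonian-Laird random-effects pooling of standardized mean differences. The individual study effect sizes and variances are hypothetical stand-ins; the review's actual study-level data are not reproduced here.

```python
# Illustrative sketch of random-effects pooling of standardized mean differences (SMDs),
# the kind of analysis summarized in PUBMED:37926438. The per-study SMDs and
# variances below are hypothetical.
import math

smds      = [-0.80, -0.45, 0.10, 0.25, -0.60, 0.05]   # hypothetical study effect sizes
variances = [0.12, 0.09, 0.15, 0.10, 0.20, 0.08]      # hypothetical within-study variances

# Fixed-effect weights and Q statistic.
w = [1 / v for v in variances]
fixed_mean = sum(wi * d for wi, d in zip(w, smds)) / sum(w)
q = sum(wi * (d - fixed_mean) ** 2 for wi, d in zip(w, smds))

# DerSimonian-Laird estimate of the between-study variance (tau^2).
df = len(smds) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled SMD, and 95% confidence interval.
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * d for wi, d in zip(w_re, smds)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"Pooled SMD {pooled:.2f}, 95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}")
```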
Instruction: Does surgical volume influence short-term outcomes of laparoscopic hysterectomy? Abstracts: abstract_id: PUBMED:20430354 Does surgical volume influence short-term outcomes of laparoscopic hysterectomy? Objective: To evaluate whether surgical volume has an impact on short-term outcomes of laparoscopic hysterectomy. Study Design: This is a retrospective analysis of 1016 laparoscopic hysterectomies. Results: The surgeons were divided into 2 groups based on a cutoff of 30 cases. Patient characteristics, the rates of laparotomy (4.5% vs 6.7%), and serious complications (3.6% vs 5.5%) were similar between 9 "high" and the remaining 39 "low volume" gynecologists, respectively (P > .05). Mean operating time was longer in the "low volume" group. Compared with their first 29 hysterectomies, the "high volume" surgeons decreased their operating time significantly in their subsequent cases. The "high volume" surgeons improved their conversion rate (9.2% vs 2.4%; P < .0001) over time but not their serious complications. Conclusion: In laparoscopic hysterectomy, increasing the surgical volume can reduce the operating time and the risk for conversion to laparotomy but not the rate of serious complications. abstract_id: PUBMED:29540624 Surgical outcomes of total laparoscopic hysterectomy with 2-dimensional versus 3-dimensional laparoscopic surgical systems. Three-dimensional (3D) laparoscopic surgical systems have been developed to account for the lack of depth perception, a known disadvantage of conventional 2-dimensional (2D) laparoscopy. In this study, we retrospectively compared the outcomes of total laparoscopic hysterectomy (TLH) with 3D versus conventional 2D laparoscopy. From November 2014, when we began using a 3D laparoscopic system at our hospital, to December 2015, 47 TLH procedures were performed using a 3D laparoscopic system (3D-TLH). The outcomes of 3D-TLH were compared with the outcomes of TLH using the conventional 2D laparoscopic system (2D-TLH) performed just before the introduction of the 3D system. The 3D-TLH group had a statistically significantly shorter mean operative time than the 2D-TLH group (119±20 vs. 137±20 min), whereas the mean weight of the resected uterus and mean intraoperative blood loss were not statistically different. When we compared the outcomes for 20 cases in each group, using the same energy sealing device in a short period of time, only mean operative time was statistically different between the 3D-TLH and 2D-TLH groups (113±19 vs. 133±21 min). During the observation period, there was one occurrence of postoperative peritonitis in the 2D-TLH group and one occurrence of vaginal cuff dehiscence in each group, which was not statistically different. The surgeon and assistant surgeons did not report any symptoms attributable to the 3D imaging system such as dizziness, eyestrain, nausea, and headache. Therefore, we conclude that the 3D laparoscopic system could be used safely and efficiently for TLH. abstract_id: PUBMED:33289346 Short- and middle-term outcomes of vaginally assisted laparoscopic sacrocolpopexy. Objective: Vaginally assisted laparoscopic sacrocolpopexy (VALS), which is a combined surgical approach where a vaginal hysterectomy is initially performed, followed by transvaginal placement of synthetic mesh and laparoscopic suspension, can be an alternative to overcome the dissection and suturing limitations of laparoscopic sacrocolpopexy.
The aim of this study was to compare the operative times and middle-term anatomic outcomes of women with uterovaginal prolapse undergoing VALS with those of women undergoing abdominal sacrocolpopexy. Methods: This is a prospective cohort study that evaluates operation times, anesthesia times, estimated blood loss, middle-term outcomes, and perioperative and postoperative complications. We compared the results of 47 women who underwent VALS to those of 32 women who underwent abdominal sacrocolpopexy (AS). Results: The mean follow-up was 22.4 months for the AS group and 20.5 months for the VALS group. The VALS group (median 1 day) had a shorter hospitalization duration than the AS group (median 3 days). The mean operation time was significantly shorter in the VALS group (125.9 minutes) than in the AS group (151.9 minutes) (P = .03). There was no significant difference in perioperative and postoperative complication rates. Objective failure rates (8.5% in VALS, 15.6% in AS), subjective failure rates (6.4% in VALS, 9.4% in AS), recurrence (2.1% in VALS, 9.4% in AS) and mesh exposure rates (2.1% in VALS, 9.4% in AS) were similar in both procedures. Conclusions: VALS, with a shorter operative time and hospitalization than conventional AS, is a promising minimally invasive modification of sacrocolpopexy, especially for surgeons inexperienced in laparoscopic sacrocolpopexy. abstract_id: PUBMED:33747768 Does Uterine Manipulator Type Affect Surgical Outcomes of Laparoscopic Hysterectomy? Objectives: Many surgeons use a uterine manipulator (UM) during laparoscopic hysterectomy (LH). In this study, we aimed to compare the outcomes of LH operations performed using a partially reusable, articulated UM system (artUM) versus a disposable UM without articulation (dUM). Materials And Methods: A total of 99 patients underwent the LH operation. This study was carried out with 35 of those 99 Caucasian patients who met the inclusion criteria. Group 1 consisted of 7 LH operations using the articulated RUMI® II/KOH-Efficient™ (Cooper Surgical, Trumbull, CT, USA) system (artUM), while Group 2 consisted of 28 patients using the old-type V Care® (ConMed Endosurgery, Utica, New York, USA) dUM as UM. Results: Mean operation time was found to be 157.1 ± 42.0 min. The operation time was statistically longer in Group 1, which consisted of patients in whom the artUM was used (P = 0.006 and P < 0.05). No statistically significant difference was found between the two groups in terms of surgical results such as delta hemoglobin value (P = 0.483 and P > 0.05), length of hospital stay (P = 0.138 and P > 0.05), and postoperative maximum body temperature (P = 0.724 and P > 0.05). Conclusion: The UM type did not alter the surgical outcomes except the operating time in our study. According to our results, the surgical technique is a more significant variable than the instruments used in LH for a normal-sized uterus. Further prospective, large-scale studies comparing various UM systems are mandatory. abstract_id: PUBMED:34557515 The Effect of Laparoscopic Radical Hysterectomy Surgical Volume on Oncology Outcomes in Early-Stage Cervical Cancer. Purpose: To examine the association between surgical volume and surgical and oncological outcomes of women with stage IB1 cervical cancer who underwent laparoscopic radical hysterectomy (LRH). Methods: We retrospectively analyzed the oncological outcomes of 1,137 patients with stage IB1 cervical cancer receiving LRH from 2004 to 2016.
The surgical volume for each surgeon was defined as low [fewer than 50 surgeries, n = 392 (34.5%)], mid [51-100 surgeries, n = 315 (27.7%)], and high [100 surgeries or more, n = 430 (37.8%)]. Surgical volume-specific survival was examined with Kaplan-Meier analysis, multivariable analysis, and propensity score matching. Results: The operative times of the high-volume group (227.35 ± 7.796 min) were significantly shorter than those of the low- (272.77 ± 4.887 min, p < 0.001) and mid-volume (255.86 ± 4.981 min, p < 0.001) groups. Blood loss in the high-volume group (169.42 ± 8.714 ml) was significantly less than that in the low-volume group (219.24 ± 11.299 ml, p = 0.003). The 5-year disease-free survival (DFS) and overall survival (OS) in the low-volume, mid-volume, and high-volume groups were similar (DFS: 91.9, 86.7, and 89.2%, p = 0.102; OS: 96.4, 93.5, and 94.2%, p = 0.192). Multivariable analysis revealed surgical volume was not an independent risk factor for OS or DFS. The rate of intraoperative and postoperative complications was similar among the three groups (p = 0.210). Conclusions: Surgical volume of LRH may not be a prognostic factor for patients with stage IB1 cervical cancer. Surgery by a high-volume surgeon is associated with decreased operative time and blood loss. abstract_id: PUBMED:36254598 Surgical outcomes of laparoscopic trachelectomy following supracervical hysterectomy: a multicenter study. Objective: To evaluate the feasibility, safety, and surgical outcomes of laparoscopic trachelectomy after supracervical hysterectomy. Methods: This multicenter study was conducted at Tanta University, Benha University, and Aminah Laparoscopy Center (Benha, Egypt) from June 1, 2018 to October 31, 2021. Forty patients were recruited for this study and counseled on laparoscopic trachelectomy to treat their symptoms after supracervical hysterectomy. Furthermore, cervical biopsy was performed to detect and exclude any malignancy. Histopathological examination of cervical specimens was performed after surgery. Operative details and outcomes were recorded. Results: The median age of the patients was 42 years (range, 38-47). The median body mass index was 25 (range, 22-28). The median interval between hysterectomy and the clinical presentation was 4.40 years (range, 3.58-5.25). Most patients presented with abnormal vaginal discharge (40%) and bleeding (25%). Moreover, a cervical biopsy result revealed stump carcinoma in three cases (7.5%) that were excluded. The median operative time was 210 minutes (range, 170-220). The median blood loss was 270 mL (range, 220-320). Additionally, histopathological examinations revealed that chronic non-specific cervicitis was present in 54.05% of trachelectomy specimens. There were no significant differences between symptomatic and asymptomatic patients regarding operative outcomes, except adhesions, which were significantly more common in symptomatic patients (P=0.015). Minimal complications, both operative and postoperative, were related to the procedure. Conclusion: Although the operative time was long and adhesions were common during laparoscopic trachelectomy, the procedure was feasible and safe, with minimal complications. abstract_id: PUBMED:28842767 Surgical volume and conversion rate in laparoscopic hysterectomy: does volume matter? A multicenter retrospective cohort study. Background: A multicenter, retrospective, cohort study was conducted in the Netherlands.
The aim was to evaluate whether surgical volume of laparoscopic hysterectomies (LHs) performed by proven skilled gynecologists had an impact on the conversion rate from laparoscopy to laparotomy. Methods: In 14 hospitals, all LHs performed by 19 proven skilled gynecologists between 2007 and 2010 were included in the analysis. Surgical volume, conversion rate and type of conversion (reactive or strategic) were retrospectively assessed. To estimate the impact of surgical volume on the conversion rate, logistic regressions were performed. These regressions were adjusted for patient's age, Body Mass Index (BMI), ASA classification, previous abdominal surgery and the indication (malignant versus benign) for the LH. Results: During the study period, 19 proven skilled gynecologists performed a total of 1051 LHs. Forty percent of the gynecologists performed over 20 LHs per year (median 17.3, range 5.4-49.5). Conversion to laparotomy occurred in 5.0% of all LHs (53 of 1051); 38 (3.6%) were strategic and 15 (1.4%) were reactive conversions. Performing over 20 LHs per year was significantly associated with a lower overall conversion rate (ORadjusted 0.43, 95% CI 0.24-0.77), a lower strategic conversion rate (ORadjusted 0.32, 95% CI 0.16-0.65), but not with a lower reactive conversion rate (ORadjusted 0.96, 95% CI 0.33-2.79). Conclusion: A higher annual surgical volume of LHs by proven skilled gynecologists is inversely related to the conversion rate to laparotomy, and results in a lower strategic conversion rate. abstract_id: PUBMED:38262244 In the patient's shoes: The impact of hospital proximity and volume on stage I endometrial cancer care patterns and outcomes. Objective: To compare the impact of travel burden and hospital volume on care patterns and outcomes in stage I endometrial cancer. Methods: This retrospective cohort study identified patients from the National Cancer Database with stage I epithelial endometrial carcinoma who underwent hysterectomy between 2012 and 2020. Patients were categorized into: lowest quartiles of travel distance and hospital surgical volume for endometrial cancer (Local) and highest quartiles of distance and volume (Travel). Primary outcome was overall survival. Secondary outcomes were surgery route, lymph node (LN) assessment method, length of stay (LOS), 30-day readmission, and 30- and 90-day mortality. Results were stratified by tumor recurrence risk. Outcomes were compared using propensity-score matching. Propensity-adjusted survival was evaluated using Kaplan-Meier curves and compared using log-rank tests. Cox models estimated hazard ratios for death. Sensitivity analysis using modified Poisson regressions was performed. Results: Among 36,514 patients, 51.4% were Local and 48.6% Travel. The two cohorts differed significantly in demographics and clinicopathologic characteristics. Upon propensity-score matching (p < 0.05 for all), more Travel patients underwent minimally invasive surgery (88.1% vs 79.1%) with fewer conversions to laparotomy (2.0% vs 2.6%), more sentinel (20.5% vs 11.3%) and fewer traditional LN assessments (58.1% vs 61.7%) versus Local. Travel patients had longer intervals to surgery (≥30 days: 56.7% vs 50.1%) but shorter LOS (<2 days: 76.9% vs 59.8%), fewer readmissions (1.9% vs 2.7%), and comparable 30- and 90-day mortality. OS and HR for death remained comparable between the matched groups.
Conclusions: Compared to surgery in nearby low-volume hospitals, patients with stage I epithelial endometrial cancer who travelled longer distances to high-volume centers experienced more favorable short-term outcomes and care patterns with comparable long-term survival. abstract_id: PUBMED:30855684 Learning curve and surgical outcomes for laparoscopic surgery, including pelvic lymphadenectomy, for early stage endometrial cancer. Background: The purpose of this study was to determine the learning curve of laparoscopic surgery for the treatment of early endometrial cancer and report the surgical outcomes. Methods: Patients: Data were retrospectively extracted from the medical charts of patients who underwent laparoscopic surgery. Intervention: Laparoscopic surgery, including pelvic lymphadenectomy, was performed using the same technique and instruments for all patients. Outcomes: The learning curve for the surgical technique, characterized by the operative time, number of lymph nodes removed, and volume of intra-operative blood loss, was analyzed. Results: Over the period of observation, 82 patients were enrolled. For analysis, cases were classified into four groups, based on their surgery date. Between-group comparisons identified a significant decrease in operative time and volume of blood loss between Groups 1 and 4 (P < 0.05). However, the number of lymph nodes removed and the length of hospital stay were not influenced by learning. Conclusion: The operative time and the amount of blood loss significantly decreased with an increase in the surgeon's experience but with no effect of learning on the number of lymph nodes removed and length of hospital stay. abstract_id: PUBMED:26945605 Surgical outcomes for low-volume vs high-volume surgeons in gynecology surgery: a systematic review and meta-analysis. Objective: The aim of this study was to determine the impact of gynecological surgeon volumes on patient outcomes. Data Sources: Eligible studies were selected through an electronic literature search from database inception up until September 2015 and references in published studies. Search terms included surgical volume, surgeon volume, low-volume or high-volume, and gynecology or hysterectomy or sling or pelvic floor repair or continence procedure. Study Eligibility: The literature search was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We defined a low-volume surgeon (LVS) as one performing the procedure once a month or less, and studies were excluded if their definition of LVS was > ±33% of our definition. Primary outcomes were total complications, intraoperative complications, and postoperative complications. Study Appraisal And Synthesis Methods: All outcome data for individual studies were entered into systematic review software. When 2 or more studies evaluated a designated outcome, a meta-analysis of the entered data was undertaken as per the Cochrane database methodology. Data analysis was entered into a software product, which generated a summary of findings table that included structured and qualified grading (very low to high) of the quality for the evidence of the individual outcomes and provided a measure of effect. Results: Fourteen peer-reviewed studies with 741,760 patients were included in the systematic review.
For gynecology, the LVS group had an increased rate of total complications (odds ratio [OR], 1.3, 95% confidence interval [CI], 1.2-1.5), intraoperative complications (OR, 1.6, 95% CI, 1.2-2.1), and postoperative complications (OR, 1.4, 95% CI, 1.3-1.4). In gynecological oncology, the LVS group had higher mortality (OR, 1.9, 95% CI, 1.3-2.6). In the urogynecology group, a single study reported that the LVS group had a higher rate of any complication (risk ratio [RR], 1.4, 95% CI, 1.2-1.6). Another single study found that LVS had higher rates of reoperation for mesh complications after midurethral sling procedures (RR, 1.4, 95% CI, 1.2-1.5). The evidence is of moderate to very low quality. Conclusion: Gynecologists performing procedures approximately once a month or less were found to have higher rates of adverse outcomes in gynecology, gynecological oncology, and urogynecology, with higher mortality in gynecological oncology. Answer: Yes, surgical volume does influence short-term outcomes of laparoscopic hysterectomy. Studies have shown that higher surgical volume is associated with improved outcomes in various aspects of laparoscopic hysterectomy. Specifically, increasing surgical volume can reduce operating time and the risk for conversion to laparotomy, although it may not affect the rate of serious complications (PUBMED:20430354). Additionally, surgeons with higher annual volumes of laparoscopic hysterectomies have been found to have lower overall conversion rates to laparotomy, particularly lower strategic conversion rates, but not necessarily lower reactive conversion rates (PUBMED:28842767). Furthermore, high-volume surgeons performing laparoscopic radical hysterectomy for early-stage cervical cancer have shorter operative times and less blood loss compared to their low- and mid-volume counterparts, although the 5-year disease-free survival and overall survival rates were similar across all groups (PUBMED:34557515). Moreover, a systematic review and meta-analysis indicated that low-volume surgeons in gynecology surgery, including laparoscopic hysterectomy, have higher rates of total complications, intraoperative complications, and postoperative complications compared to high-volume surgeons (PUBMED:26945605). These findings suggest that surgical volume is an important factor in determining the short-term outcomes of laparoscopic hysterectomy, with higher volumes generally leading to more favorable outcomes.
Instruction: Does tracking of dietary behaviours differ by parental education in children during the transition into adolescence? Abstracts: abstract_id: PUBMED:22874120 Does tracking of dietary behaviours differ by parental education in children during the transition into adolescence? Objective: The present study investigates the changes and tracking of dietary behaviours in Norwegian 11-year-olds and examines the association between parental education and dietary tracking over a time period of 20 months. Design: Longitudinal data from the Norwegian HEalth In Adolescents (HEIA) cohort study followed up at three time points (2007-2009). Setting: Intakes of fruits, vegetables and snacks were assessed by frequency and intakes of sugar-sweetened soft drinks and squash were assessed by frequency and amount. Tracking of dietary behaviours was assessed by adolescents' relative position in rank over time and Cohen's kappa was used to measure tracking coefficients. Multinomial logistic regression analyses were performed to assess the association between parental education and the tracking of dietary behaviours. Subjects: In total, 885 adolescents from the HEIA cohort study participated by answering Internet-based questionnaires at three time points. Results: The results indicated that boys and girls maintained their relative position in rank of dietary intake over time, when grouped by baseline consumption. Fair to moderate tracking coefficients of dietary variables were observed. An inverse association was found between parental education and stability of soft drink and squash consumption during the 20 months. Conclusions: The observed tracking pattern indicates the importance of promoting healthy dietary behaviours at an even earlier age. Furthermore, interventions should focus particularly on adolescents from families with low parental education and their consumption of sugar-sweetened beverages. abstract_id: PUBMED:33181820 Dietary pattern trajectories across adolescence and early adulthood and their associations with childhood and parental factors. Background: Although adolescent dietary patterns tend to be of poor quality, it is unclear whether dietary patterns established in adolescence persist into adulthood. Objectives: We examined trajectories across adolescence and early adulthood for 2 major dietary patterns and their associations with childhood and parental factors. Methods: Using data from the Western Australian Pregnancy Cohort (Raine Study), intakes of 38 food groups were estimated at ages 14, 17, 20 and 22 y in 1414 participants using evaluated FFQs. Using factor analysis, 2 major dietary patterns (healthy and Western) were consistently identified across follow-ups. Sex-specific group-based modeling assessed the variation in individual dietary pattern z scores to identify group trajectories for each pattern between ages 14 and 22 y and to assess their associations with childhood and parental factors. Results: Two major trajectory groups were identified for each pattern. Between ages 14 and 22 y, a majority of the cohort (70% males, 73% females) formed a trajectory group with consistently low z scores for the healthy dietary pattern. The remainder had trajectories showing either declining (27% females) or reasonably consistent healthy dietary pattern z scores (30% males). For the Western dietary pattern, the majority formed trajectories with reasonably consistent average scores (79% males, 81% females) or low scores that declined over time. 
However, 21% of males had a trajectory of steady, marked increases in Western dietary pattern scores over time. A lower maternal education and higher BMI (in kg/m2) were positively associated with consistently lower scores of the healthy dietary pattern. Lower family income, family functioning score, maternal age, and being in a single-parent family were positively related to higher scores of the Western dietary pattern. Conclusions: Poor dietary patterns established in adolescence are likely to track into early adulthood, particularly in males. This study highlights the transition between adolescence and early adulthood as a critical period and the populations that could benefit from dietary interventions. abstract_id: PUBMED:32874637 Predictors of mathematical attainment trajectories across the primary-to-secondary education transition: parental factors and the home environment. A 'maths crisis' has been identified in the UK, with many adults and adolescents underachieving in maths and numeracy. This poor performance is likely to develop from deficits in maths already present in childhood. Potential predictors of maths attainment trajectories throughout childhood and adolescence relate to the home environment and aspects of parenting including parent-child relationships, parental mental health, school involvement, home teaching, parental education and gendered play at home. This study examined the aforementioned factors as predictors of children's maths attainment trajectories (age 7-16) across the challenging transition to secondary education. A secondary longitudinal analysis of the Avon Longitudinal Study of Parents and Children found support for parental education qualifications, a harmonious parent-child relationship and school involvement at age 11 as substantial predictors of maths attainment trajectories across the transition to secondary education. These findings highlight the importance of parental involvement for maths attainment throughout primary and secondary education. abstract_id: PUBMED:29198192 Clustering of energy balance-related behaviours and parental education in European preschool children: the ToyBox study. Energy balance-related behaviours (EBRB) are established in childhood and seem to persist through to adulthood. A lower parental educational level was associated with unhealthy behavioural patterns. The aim of the study is to identify clusters of EBRB and examine their association with preschool children's BMI and maternal, paternal and parental education. A subsample of the ToyBox study (n 5387) conducted in six European countries was used. Six behavioural clusters ('healthy diet and low activity', 'active', 'healthy lifestyle', 'high water and screen time; low fruits and vegetables (F&V) and physical activity (PA)', 'unhealthy lifestyle' and 'high F&V consumers') emerged. The healthiest group characterised by high water and F&V consumption and high PA z scores ('healthy lifestyle') was more prevalent among preschool children with at least one medium- or higher-educated parent and showed markedly healthier trends for all the included EBRB. In contrast, the 'unhealthy lifestyle' cluster (characterised by high soft drinks and screen time z scores, and low water, F&V and PA z scores) was more prevalent among children with lower parental, paternal and maternal education levels.
OR identified that children with lower maternal, paternal and parental education levels were less likely to be allocated in the 'healthy lifestyle' cluster and more likely to be allocated in the 'unhealthy lifestyle' cluster. The 'unhealthy lifestyle' cluster was more prevalent among children with parents in lower parental educational levels and children who were obese. Therefore, parental educational level is one of the key factors that should be considered when developing childhood obesity prevention interventions. abstract_id: PUBMED:29559011 Parental health behaviour predictors of childhood and adolescent dietary trajectories. Objective: To determine which parental health behaviours early in childhood most strongly predict whole-of-childhood dietary trajectories. Design: Population-based Longitudinal Study of Australian Children (LSAC, waves 1-6; 2004-2014). Exposures were parents' fruit/vegetable consumption, alcohol, smoking and physical activity at child age 0-1 years (B Cohort) or 4-5 years (K Cohort). Outcomes, from repeated biennial short diet diaries, were group-based trajectories of (i) dietary scores and empirically derived patterns of (ii) healthful and (iii) unhealthful foods consumed, spanning ages 2-3 to 10-11 years (B Cohort) and 4-5 to 14-15 years (K Cohort). We investigated associations of baseline parental health behaviours with child dietary trajectories using multinomial logistic regression. Setting: Australian homes. Subjects: Of children, 4443 (87·0 %) from the B Cohort and 4620 (92·7 %) from the K Cohort were included in all trajectories. Multivariable analyses included 2719 to 2905 children and both parents. Results: Children whose primary caregiver reported the lowest fruit/vegetable consumption had markedly higher odds of belonging to the least healthy score and pattern trajectories (K Cohort: OR=8·7, 95 % CI 5·0, 15·1 and OR=8·4, 95 % CI 4·8, 14·7, respectively); associations were weaker (K Cohort: OR=2·3, 95 % CI 1·0, 5·2) for the unhealthiest pattern trajectory. Secondary caregiver fruit/vegetable associations were smaller and inconsistent. Parental alcohol, smoking and physical activity were not predictive in multivariable analyses. Results were largely replicated for the B Cohort. Conclusions: Low primary caregiver fruit/vegetable consumption increased nearly ninefold the odds of children being in the lowest intake of healthy, but only weakly predicted unhealthy, food trajectories. Healthy and unhealthy food intake may have different determinants. abstract_id: PUBMED:37957457 Developmental Trajectories of Parental Self-Efficacy as Children Transition to Adolescence in Nine Countries: Latent Growth Curve Analyses. Little is known about the developmental trajectories of parental self-efficacy as children transition into adolescence. This study examined parental self-efficacy among mothers and fathers over 3 1/2 years representing this transition, and whether the level and developmental trajectory of parental self-efficacy varied by cultural group. Data were drawn from three waves of the Parenting Across Cultures (PAC) project, a large-scale longitudinal, cross-cultural study, and included 1178 mothers and 1041 fathers of children who averaged 9.72 years of age at T1 (51.2% girls). 
Parents were from nine countries (12 ethnic/cultural groups), which were categorized into those with a predominant collectivistic (i.e., China, Kenya, Philippines, Thailand, Colombia, and Jordan) or individualistic (i.e., Italy, Sweden, and USA) cultural orientation based on Hofstede's Individualism Index (Hofstede Insights, 2021). Latent growth curve analyses supported the hypothesis that parental self-efficacy would decline as children transition into adolescence only for parents from more individualistic countries; parental self-efficacy increased over the same years among parents from more collectivistic countries. Secondary exploratory analyses showed that some demographic characteristics predicted the level and trajectory of parental self-efficacy differently for parents in more individualistic and more collectivistic countries. Results suggest that declines in parental self-efficacy documented in previous research are culturally influenced. abstract_id: PUBMED:24816324 Perceptions of parental pressure to eat and eating behaviours in preadolescents: the mediating role of anxiety. Previous research suggests that parental controlling feeding practices are associated with children's overeating and undereating behaviours. However, there is limited research addressing the link between children's mental health symptoms (specifically anxiety and depression) and their reports of eating behaviours, despite knowledge that these psychopathologies often co-exist. The current study aimed to identify the relationships between preadolescents' perceptions of their parents' feeding practices with reports of their own anxiety, depression and eating behaviours. Three hundred and fifty-six children (mean age 8.75 years) completed questionnaires measuring their dietary restraint, emotional eating and external eating, as well as their perceptions of their parents' use of pressure to eat and restriction of food. Children also completed measures of general anxiety, social anxiety and depression symptomology. Results indicated that preadolescents' eating behaviours were associated with their perceptions of the controlling feeding practices their parents used with them. Preadolescents' dietary restraint, emotional eating and external eating behaviours were positively associated with their reports of general and social anxiety, and depression symptomology. In addition, perceptions of parental pressure to eat were positively related to preadolescents' anxiety and depression levels. Child anxiety (general and social) was found to mediate the relationship between perceptions of parental pressure to eat and preadolescents' eating behaviours (dietary restraint, emotional eating and external eating). The results suggest that greater anxiety in preadolescents may explain why children who perceive greater pressure to eat by their parents are more likely to exhibit maladaptive eating behaviours. abstract_id: PUBMED:28841442 Externalizing and internalizing behaviours in adolescence, and the importance of parental behavioural and psychological control practices. The aims of the present study were to explore the impact of parental characteristics (behavioural control and psychological control) on adolescents' expression of externalizing and internalizing behaviours. To address the aim of the study, participants completed quantitative measures; the study included 538 adolescents and their mothers and fathers. Overall, 513 mothers and 464 fathers participated in the study. 
Adolescents completed the Children's Report on Parent Behaviour Inventory (CRPBI), while parents completed the Child Behaviour Checklist - Parent Report (Short Form; CBCL). Results of the study showed that only maternal and paternal psychological control predicted externalizing and internalizing behaviours. The conclusions have practical applications, as they can provide novel approaches in parent training programmes. Furthermore, results are discussed in relation to the connection with earlier studies and the theoretical contribution. abstract_id: PUBMED:38401601 Longitudinal tracking of diet quality from childhood to adolescence: The Interplay of individual and sociodemographic factors. This study aimed to examine diet tracking from childhood to adolescence, using 4 time-points, and the influence of socioeconomic and individual characteristics in this transition. The sample included 6893 children from the Portuguese birth cohort Generation XXI with complete information on Food Frequency Questionnaire in at least one of the considered follow-ups. A Healthy Eating Index (HEI), previously developed to assess adherence to WHO's dietary recommendations, was applied at all ages (4, 7, 10 and 13y). The intraclass correlation coefficient (ICC) was used to analyse the tracking of diet quality. Linear mixed-effect models were performed to estimate the association of the child's socioeconomic and individual characteristics with the HEI score and respective trajectories over time. The overall diet quality decreased from childhood (22.2 ± 3.6 at 4y) to adolescence (18.2 ± 3.9 at 13y), with moderate tracking (ICC = 0.53), showing that children who start a healthy diet earlier might have a better diet quality as time goes by. Children of older mothers (β = 0.079, 95%CI = 0.061-0.097) and with higher education (β = 0.203, 95%CI = 0.178-0.229) and a higher household monthly income (β = 0.024,95%CI = 0.007-0.041) had a higher diet quality over time. Besides family characteristics, the child's sedentary activities (β = -0.009, 95%CI = -0.014--0.003) negatively influence diet quality throughout life. In contrast, being a girl (β = -0.094, 95%CI = -0.132--0.056) and having higher sleep duration (β = 0.039, 95%CI = 0.015-0.064) are associated with a higher diet quality over time. The presence of dietary tracking from childhood to adolescence implies that promoting healthy eating habits during the first years of life is crucial for a healthier diet quality during late childhood and early adolescence, focusing on maternal and individual child characteristics. abstract_id: PUBMED:35416142 Can food parenting practices explain the association between parental education and children's food intake? The Feel4Diabetes-study. Objective: This study aimed to investigate the mediating role of food parenting practices (FPP), including home availability of different types of foods and drinks, parental modelling of fruit intake, permissiveness and the use of food as a reward in the relationship between parental education and dietary intake in European children. Design: Single mediation analyses were conducted to explore whether FPP explain associations between parents' educational level and children's dietary intake measured by a parent-reported FFQ. Setting: Six European countries. Participants: Parent-child dyads (n 6705, 50·7 % girls, 88·8 % mothers) from the Feel4Diabetes-study. Results: Children aged 8·15 ± 0·96 years were included. 
Parental education was associated with children's higher intake of water, fruits and vegetables and lower intake of sugar-rich foods and savoury snacks. All FPP explained the associations between parental education and dietary intake to a greater or lesser extent. Specifically, home availability of soft drinks explained 59·3 % of the association between parental education and sugar-rich food intake. Home availability of fruits and vegetables was the strongest mediators in the association between parental education and fruit and vegetable consumption (77·3 % and 51·5 %, respectively). Regarding savoury snacks, home availability of salty snacks and soft drinks was the strongest mediators (27·6 % and 20·8 %, respectively). Conclusions: FPP mediate the associations between parental education and children's dietary intake. This study highlights the importance of addressing FPP in future interventions targeting low-educated populations. Answer: Yes, tracking of dietary behaviors does differ by parental education in children during the transition into adolescence. Studies have found that children's dietary behaviors tend to maintain their relative position over time when grouped by baseline consumption, with fair to moderate tracking coefficients observed for dietary variables. An inverse association was found between parental education and the stability of soft drink and squash consumption over a 20-month period, indicating that children from families with lower parental education were more likely to maintain high consumption of sugar-sweetened beverages (PUBMED:22874120). Additionally, lower maternal education and higher BMI were positively associated with consistently lower scores of healthy dietary patterns, while lower family income, family functioning score, maternal age, and being in a single-parent family were positively related to higher scores of the Western dietary pattern, which is typically less healthy (PUBMED:33181820). Furthermore, the 'unhealthy lifestyle' cluster, characterized by high soft drinks and screen time and low water, fruits and vegetables, and physical activity, was more prevalent among children with lower parental, paternal, and maternal education levels (PUBMED:29559011). These findings suggest that interventions should focus particularly on adolescents from families with low parental education to promote healthy dietary behaviors at an even earlier age (PUBMED:22874120). The transition between adolescence and early adulthood is a critical period, and the populations that could benefit from dietary interventions have been highlighted, especially considering that poor dietary patterns established in adolescence are likely to track into early adulthood (PUBMED:33181820).
Instruction: Is the invasion depth in millimeters valid to determine the prognosis of early invasive cervical adenocarcinoma? Abstracts: abstract_id: PUBMED:11426988 Is the invasion depth in millimeters valid to determine the prognosis of early invasive cervical adenocarcinoma? A case of recurrent FIGO stage IA1 cervical adenocarcinoma. Background: Recurrence of FIGO stage IA1 cervical adenocarcinoma is extremely rare. We herein report a patient with early invasive cervical adenocarcinoma who developed a recurrence in the vaginal stump. Case: A 52-year-old female complained of contact bleeding. Biopsy of the uterine cervix verified cervical adenocarcinoma, and the patient underwent Okabayashi hysterectomy with pelvic lymphadenectomy and bilateral adnectomy. Histopathologic examination of the uterus revealed an invasive cancer 3 mm in depth. Neither lymph node metastasis nor lymph-vascular space invasion was observed. However, the depth of her normal cervical gland area was 2 mm only, and the cancer invasion involved an area which was deeper than the normal cervical gland area. The vaginal stump recurrence developed 4 years after surgery. Conclusion: The depth of invasion with reference to that of normal cervical glands may become a possible prognostic factor for early invasive cervical adenocarcinoma. abstract_id: PUBMED:10419739 Tumor size, depth of invasion, and grading of the invasive tumor front are the main prognostic factors in early squamous cell cervical carcinoma. Objective: The objective of this study was to evaluate the prognostic significance of clinical and histopathologic factors, including a new grading system focusing on the invasive tumor front. Method: A retrospective analysis of 125 surgically treated patients with squamous cell cervical carcinoma FIGO stage IB was conducted. For each tumor, the degree of keratinization, nuclear polymorphism, pattern of invasion, and degree of lymphoid infiltration at the invasive tumor front were graded and given scores between 1 and 4. Results: Clinical tumor size, depth of invasion, and grading of the invasive front had prognostic significance in multivariate analysis, while lymph vascular space involvement, lymph node status, and grade of differentiation did not. Based on clinical tumor size, depth of invasion, and grading of the invasive tumor front, patients could be separated into three groups: One group with minimal risk of recurrence (5-year disease-free survival (DFS) of 100%) consisting of 24% of the patients, an intermediate group with a fairly low risk of recurrence (5-year DFS of about 92%), and a high risk group with a 5-year DFS of 45%. This latter group contained 26% of the patients with 78% of all relapses occurring in the total group of patients. The invasive tumor front grading was reliably reproducible, with inter- and intraobserver agreement of 79 to 87% and kappa values of 0.47 to 0.66. Conclusion: Clinical tumor size, depth of invasion, and grading of the invasive tumor front were the main predictors of prognosis in patients with stage IB squamous cell cancer of the cervix. abstract_id: PUBMED:16303042 Cytological study of early cervical adenocarcinoma: special reference to the depth of invasion. Objective: Early cervical adenocarcinoma (ECA) with a tumour depth of <3 mm has a good prognosis. To clarify the cytological features of ECAs with depth <3 mm, these were compared with those of ECA with 3-5 mm and invasive adenocarcinoma (IA) invading the cervical wall with more than 5 mm in depth.
Methods: The cervical cytological features of ECAs with depth <3 mm (14 cases) were compared with those of ECA with 3-5 mm (four cases) and IA (13 cases). Cytologically, the presence or absence of tumour diathesis, number of atypical cells, crowded cell groups, groups with glandular structures, feathering, groups with palisading borders, rosettes, clusters, cell shape and size, nuclear shape and size, nucleolar shape and size, chromatin distribution, border and character of cytoplasm, and single cell pattern were evaluated. Results: A tumour diathesis was seen in five of 14 ECA <3 mm in depth (36%), all four ECA with 3-5 mm (100%) and 11 of 13 IA with more than 5 mm (85%). Single cells, macronucleoli and coarsely granular chromatin pattern were less frequent in ECA of <3 mm than in ECA with 3-5 mm and IA. The number of atypical cells and glandular structures in ECA was significantly less than that in IA. Cell crowding, feathering, palisading and rosettes were common in both ECA and IA. Conclusion: The characteristic cytological features of ECA with depth <3 mm, having a good prognosis, were clean background, fewer single cells and macronucleoli, and less frequent coarsely granular chromatin pattern compared with those in ECA with 3-5 mm and IA. The number of atypical cells and glandular structures in ECA was significantly less than that in IA. Familiarity with the cytological features of ECA and its mimics is essential. abstract_id: PUBMED:30655871 Perineural invasion as a prognostic risk factor in patients with early cervical cancer. The aim of the present study was to investigate the influence of perineural invasion (PNI) on the prognosis of patients with early cervical cancer (stages IA2-IIA2). A retrospective analysis was conducted on 406 patients with early cervical cancer who underwent a radical hysterectomy and pelvic lymphadenectomy between January 2007 and December 2014 at the Affiliated Hospital of Jiangnan University (Wuxi, China). The clinicopathological data of the patients were obtained and follow-up assessments were performed. A statistical analysis of the association between PNI and each index was performed, and the effect of PNI and the clinicopathological parameters on the prognosis of the patients was evaluated. Among the 406 cases with early cervical cancer, 41 cases were lost, with a follow-up rate of 89.90%. Overall, 43 PNI-positive patients were observed, with an occurrence rate of 10.59%. PNI-positivity was associated with hypertension, lymph node metastasis, depth of cervical invasion, surgical margin and vascular invasion (P<0.05), but it was not associated with age, diabetes, clinical stage, histological type or tumor size (P>0.05). The overall survival (OS) and disease-free survival (DFS) times of PNI-positive patients were significantly lower compared with those of PNI-negative patients. A multivariate regression analysis revealed that age, tumor size, clinical stage and PNI were independent risk factors for OS and DFS times. PNI is a poor prognostic factor for patients with early cervical cancer. abstract_id: PUBMED:38042138 Saliva as a potential non-invasive liquid biopsy for early and easy diagnosis/prognosis of head and neck cancer. Head and neck squamous cell carcinomas (HNSCCs) are the most devastating diseases in India and southeast Asia. HNSCC is a preventable and curable disease if detected early.
Tobacco and alcohol consumption are the two major risk factors, but infection with high-risk HPVs is also associated with the development of predominantly oral and oropharyngeal carcinomas. Interestingly, unlike cervical cancer, HPV-induced HNSCCs show a good prognosis and better survival; in contrast, the majority of tobacco-associated HPV-ve HNSCCs are highly aggressive with poor clinical outcome. Biomarker analysis in circulatory body-fluids for early cancer diagnosis, prognosis and treatment monitoring is becoming important in clinical practice. Early diagnosis using non-invasive saliva for oral or other diseases plays an important role in successful treatment and better prognosis. Saliva mirrors the body's state of health as it comes into direct contact with oral lesions and needs no trained manpower to collect, making it a suitable bio-fluid of choice for screening. Saliva can be used to detect not only viruses, bacteria and other biomarkers but also a variety of molecular and genetic markers for the early detection, treatment and monitoring of cancer and other diseases. The performance of saliva-based diagnostics is reported to be highly (≥95 %) sensitive and specific, indicating the test's ability to correctly identify true positive or negative cases. This review focuses on the potential of saliva in the early detection of not only HPV or other pathogens but also the identification of highly reliable gene mutations, oral-microbiomes, metabolites, salivary cytokines, non-coding RNAs and exosomal miRNAs. It also discusses the importance of saliva as a reliable, cost-effective and easy alternative to invasive procedures. abstract_id: PUBMED:35860149 What are the determinants of parametrial invasion in patients with early stage cervical cancer: A cross sectional study. Introduction: There has been an increase in the number of patients presenting with early-stage cervical cancer (CC). Tumors with favorable pathological features might be candidates for less radical surgery. Methods: We retrospectively reviewed 700 patients with histologically confirmed CC between January 2011 and March 2020. Chi-square, Fisher's exact tests and multivariate logistic regression analysis were used to assess relations between parametrial involvement (PI) and clinico-pathological variables. Results: A total of 132 patients with stage IA to IIA were eligible to participate. Squamous cell carcinoma was reported in 100 (75.8%) patients, adenocarcinoma and other tumor pathologies were found in 24 (18.2%) and 8 (6.1%), respectively. Considering the 2018 FIGO stage, 11 (8.4%) patients had IA, 111 (83%) IB and 10 (7.6%) IIA. Nine patients (6.8%) had PI on the permanent pathologic report. Univariate analysis demonstrated that the following variables were statistically different between patients with and without PI: age ≥50, tumor size ≥ 3 cm, lower segment involvement, poorly differentiated pathology, deep stromal invasion, pelvic lymph node, lympho-vascular involvement and positive surgical margin (all p values < 0.05). Among these variables, only tumor size ≥3 cm (OR: 2.1, 95% CI: 1.11-4.16, p value: 0.02), deep stromal invasion (OR: 2.2, 95% CI: 1.9-7.43, p value: 0.02) and positive surgical margin (OR: 5.1, 95% CI: 3.97-11.15, p value: 0.008) were independent risk factors for PI in multivariate analysis. Conclusions: Early stage CC might be surgically approached in a more conservative manner if patients have a tumor size <3 cm and do not have deep stromal invasion in conization.
abstract_id: PUBMED:15196857 Depth of cervical stromal invasion as a prognostic factor after radical surgery for early stage cervical cancer. Introduction: In the United Kingdom, the Royal College of Pathologists have issued guidelines detailing how the depth of stromal invasion (DOI) in cervical cancer should be measured as a percentage of the overall cervical radius in millimeters. Several studies have found the assessment of the depth of stromal invasion by cancer of the cervix to be of prognostic value. These studies did not take into account patients whose diagnostic procedures required removal of much tumor (large loop excision of the transformation zone [LLETZ] and knife cone biopsies). Furthermore, the Royal College of Pathologists guidelines do not address this issue. Materials And Methods: Over the period of 6 years, 228 women had radical hysterectomy (RH) for stage Ib/IIa cervical cancer. The percentage of the depth of stromal invasion was measured according to the Royal College of Pathologist's guidelines in the UK. Patients who had large loop excision of the transformation zone and knife cone biopsies were excluded (91 patients). Results: A Cox regression analysis showed that when nodal involvement, depth of stromal invasion, endothelial lined space invasion (ELSI), and tumor type were fitted simultaneously, only nodal involvement remained as a marker of adverse outcome. Conclusions: We recommend that when the DOI is measured, account should be taken of the LLETZ/knife cone biopsy size. A randomized controlled trial, which concludes that DOI is an independent prognostic factor, is needed. Until this is properly evaluated, we feel that including DOI as an essential part of the standard pathological report is not warranted. abstract_id: PUBMED:35141141 Comparison of Minimally Invasive Versus Abdominal Radical Hysterectomy for Early-Stage Cervical Cancer: An Updated Meta-Analysis. Background: Although minimally invasive surgery (MIS) was commonly used to treat patients with early-stage cervical cancer, its efficacy remained controversial. Methods: We systematically searched PubMed, Web of Science, and Cochrane Library databases until March 2021 to compare the prognosis of early-stage cervical cancer patients who underwent MIS (laparoscopic or robot-assisted radical hysterectomy) or ARH. The primary outcomes included rates of 3- and 5-year disease-free survival (DFS) and overall survival (OS). The study protocol was registered in PROSPERO: CRD42021258116. Results: This meta-analysis included 48 studies involving 23346 patients (11220, MIS group; 12126, ARH group). The MIS group had a poorer medium-term (3-year) DFS (HR=1.08, 95% CI: 1.01-1.16, p=0.031) than the ARH group, without significant difference in medium-term OS as well as long-term (5-year) DFS and OS. Subgroup analysis of 3-year prognosis revealed that although patients in Western countries who underwent MIS had shorter DFS than those who underwent ARH (HR=1.10, p=0.024), no difference was observed in DFS among those in Asian countries. Moreover, MIS was linked to poorer 3-year DFS in patients with stage I cervical cancer (HR=1.07, p=0.020). Notably, subgroup analysis of 5-year prognosis revealed that patients with tumor size ≥2 cm undergoing MIS exhibited a shorter DFS than those who underwent ARH (HR=1.65, p=0.041). Conclusion: Patients with early-stage cervical cancer undergoing MIS may have a poorer prognosis than those undergoing ARH. 
Therefore, applying MIS in early-stage cervical cancer patients should be conducted with caution. Systematic Review Registration: The study protocol was registered in PROSPERO: CRD42021258116. abstract_id: PUBMED:30262404 Independent validation of the prognostic significance of invasion patterns in endocervical adenocarcinoma: Pattern A predicts excellent survival. Objective: Recently, the pattern of invasion in usual-type human papillomavirus-associated endocervical adenocarcinoma (AC) was put forward as a novel variable to select patients with favourable prognosis. Based on destructiveness of stromal invasion, three patterns were proposed: A - no destructive stromal invasion, B - focal destructive stromal invasion, and C - diffuse destructive stromal invasion. We aimed to independently validate the clinical significance of this classification system in 82 AC patients, and explored associations between invasion pattern and somatic mutations. Methods: All patients surgically treated for FIGO stage IB-IIA usual type AC (1990-2011, n = 82) were retrospectively reviewed and classified into pattern A, B or C. Additional immunohistochemical analyses were performed for p53, MSH6, and PMS2. Moreover, previously obtained data on somatic hotspot mutations in 13 relevant genes was integrated. Results: Of 82 AC, 22% showed pattern A, 37% pattern B, and 41% pattern C. Significant differences were observed between invasion patterns and tumour size, depth of invasion (DOI), lymph-vascular invasion (LVI), and lymph-node metastasis. Significantly fewer mutations were present in tumours with pattern A morphology (p = 0.036). All pattern A patients survived (p = 0.002) without recurrent disease (p = 0.005). In multivariate regression analysis including tumour size, DOI, LVI, and lymph node metastasis, invasion pattern was a strong independent predictor for recurrence-free and disease-specific survival (HR 3.75, 95%CI 1.16-12.11, and HR 5.08, 95%CI 1.23-20.98, respectively). Conclusions: We have independently validated the clinical significance of invasion patterns for usual type endocervical AC. Pattern A predicts excellent survival, and a clinical trial should prove safety of a more conservative treatment for these patients. abstract_id: PUBMED:29755594 Perineural invasion in early-stage cervical cancer and its relevance following surgery. Perineural invasion (PNI) is the neoplastic invasion of nerves by cancer cells, a process that has attracted attention as a novel prognostic factor for cancer. The present study aimed to investigate the prognostic value of PNI in patients with early-stage cervical cancer (International Federation of Gynecology and Obstetrics stage IA2-IIA). A total of 210 patients who underwent radical hysterectomy and pelvic lymphadenectomy between 2007 and 2012 were included in the current study, of whom 8.57% (18/210) exhibited PNI. Patients with PNI were more likely to exhibit adverse histopathological features, such as increased tumor size, depth of stromal invasion, parametrial invasion, lymphovascular space invasion and lymph node metastases (all P<0.05). Patients with PNI exhibited shorter disease-free and overall survival (P=0.002 and P=0.017, respectively). However, PNI was not identified as an independent risk factor for either recurrence or death by multivariate analysis. Furthermore, 88.9% (16/18) of patients with PNI received adjuvant therapy following surgery. PNI was significantly associated with well-established indicators for adjuvant therapy.
In conclusion, PNI was associated with multiple high-risk factors and its presence was indicative of a poor outcome in patients with early-stage cervical cancer, which may influence management decisions regarding adjuvant therapy. Answer: The invasion depth in millimeters is considered an important factor in determining the prognosis of early invasive cervical adenocarcinoma. However, its validity as a sole prognostic factor may be subject to certain limitations. A case report of a patient with FIGO stage IA1 cervical adenocarcinoma, who experienced a recurrence in the vaginal stump, suggests that the depth of invasion relative to the normal cervical gland area may be a potential prognostic factor. In this case, the invasive cancer was 3 mm in depth, which was deeper than the normal cervical gland area of 2 mm, indicating that the depth of invasion with reference to normal cervical glands could be significant for prognosis (PUBMED:11426988). In a study evaluating early squamous cell cervical carcinoma, clinical tumor size, depth of invasion, and grading of the invasive tumor front were identified as the main prognostic factors in multivariate analysis. This suggests that while depth of invasion is important, it should be considered alongside other factors such as tumor size and grading of the invasive front (PUBMED:10419739). A cytological study of early cervical adenocarcinoma (ECA) compared features of ECAs with depths <3 mm to those with depths of 3-5 mm and invasive adenocarcinoma (IA) with more than 5 mm in depth. The study found that ECAs with depth <3 mm had a good prognosis and exhibited distinct cytological features compared to deeper invasions (PUBMED:16303042). Another study found that perineural invasion (PNI) was a poor prognostic factor for patients with early cervical cancer, suggesting that factors beyond the depth of invasion in millimeters can significantly influence prognosis (PUBMED:30655871). In summary, while the depth of invasion in millimeters is a relevant factor in determining the prognosis of early invasive cervical adenocarcinoma, it should not be the only consideration. Other factors such as the relative depth compared to normal cervical glands, tumor size, grading of the invasive tumor front, and additional pathological features like perineural invasion also play critical roles in prognostication (PUBMED:11426988, PUBMED:10419739, PUBMED:16303042, PUBMED:30655871).
Instruction: Community-associated methicillin-resistant Staphylococcus aureus skin and soft tissue infections at a public hospital: do public housing and incarceration amplify transmission? Abstracts: abstract_id: PUBMED:17533205 Community-associated methicillin-resistant Staphylococcus aureus skin and soft tissue infections at a public hospital: do public housing and incarceration amplify transmission? Background: Community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA) infections have emerged among patients without health care-associated risk factors. Understanding the epidemiology of CA-MRSA is critical for developing control measures. Methods: At a 464-bed public hospital in Chicago and its more than 100 associated clinics, surveillance of soft tissue, abscess fluid, joint fluid, and bone cultures for S aureus was performed. We estimated rates of infection and geographic and other risks for CA-MRSA through laboratory-based surveillance and a case-control study. Results: The incidence of CA-MRSA skin and soft tissue infections increased from 24.0 cases per 100,000 people in 2000 to 164.2 cases per 100,000 people in 2005 (relative risk, 6.84 [2005 vs 2000]). Risk factors were incarceration (odds ratio [OR], 1.92; 95% confidence interval [CI], 1.00-3.67), African American race/ethnicity (OR, 1.91; 95% CI, 1.28-2.87), and residence at a group of geographically proximate public housing complexes (OR, 2.50; 95% CI, 1.25-4.98); older age was inversely related (OR, 0.89; 95% CI, 0.82-0.96 [for each decade increase]). Of 73 strains tested, 79% were pulsed-field gel electrophoresis type USA300. Conclusions: Clonal CA-MRSA infection has emerged among Chicago's urban poor. It has occurred in addition to, not in place of, methicillin-susceptible S aureus infection. Epidemiological analysis suggests that control measures could focus initially on core groups that have contributed disproportionately to risk, although CA-MRSA becomes endemic as it disseminates within communities. abstract_id: PUBMED:16676492 Community-associated methicillin-resistant Staphylococcus aureus: review of an emerging public health concern. Methicillin-resistant Staphylococcus aureus (MRSA) traditionally has been recognized as a virulent pathogen affiliated with health care institutions. However, community-associated strains of MRSA (CA-MRSA) have emerged over the past several years in young, healthy patients without significant health care contact. These isolates carry a distinct molecular makeup and lack the multidrug resistance pattern harbored by health care strains. CA-MRSA predominantly induces skin and soft tissue infections, though the presence of unique virulence factors may cause potentially lethal necrotizing pneumonia and other invasive infections. In response to this growing public health concern, clinicians must learn to identify risk factors for CA-MRSA, treat infections with judicious use of antimicrobial agents, and facilitate prevention strategies to limit transmission. abstract_id: PUBMED:18257337 Community-associated methicillin-resistant Staphylococcus aureus--molecular evolution, characteristics and significance Methicillin-resistant Staphylococcus aureus (MRSA) presents a significant problem for hospitals worldwide for decades now. Appearance of MRSA strains in the community is becoming a serious public health problem. Community-associated MRSA (CA MRSA) differs from hospital-acquired MRSA (HA MRSA) genotypically and phenotypically. 
CA MRSA are susceptible to almost all non-beta-lactam antibiotics, and they cause severe skin and soft tissue infections and life-threatening necrotizing pneumonia in previously healthy, younger people. The staphylococcal chromosome cassette mec (SCCmec), which contains the mecA gene responsible for resistance to beta-lactam antibiotics, is of type IV or V in CA MRSA and is significantly smaller than SCCmec types I, II and III, typically present in HA MRSA. The presence of Panton-Valentine leucocidin (PVL) is typical for a significant proportion of CA MRSA. Standard laboratory procedures are used for detection of CA MRSA, as well as rapid molecular methods in high-risk populations. Rapid methods are essential for prevention of CA-MRSA spreading in hospitals. abstract_id: PUBMED:19259363 A case of primary infective endocarditis caused by community-associated methicillin-resistant Staphylococcus aureus in a healthy individual and colonization in the family. Primary community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA) endocarditis has rarely been reported in healthy individuals without risk factors, such as skin and soft tissue infections, and intravenous drug abuse. We describe a case of infective endocarditis by CA-MRSA (ST72-PVL negative-SCCmec IVA) in previously healthy individuals with no underlying medical condition and CA-MRSA colonization in the family. abstract_id: PUBMED:19135915 Staphylococcus aureus: a community pathogen. Staphylococcus aureus is a common human pathogen. S aureus infections most commonly clinically manifest as skin infections. There has been much interest in S aureus infections in the community over the past decade because of the rise of community-associated methicillin-resistant S aureus (CA-MRSA) infections, which have emerged globally over a relatively short period of time. In contrast to health care-associated methicillin resistant S aureus (HA-MRSA), circulating strains of CA-MRSA have characteristic pathogenesis, strain characteristics, epidemiology, and clinical manifestations that are distinct from HA-MRSA. In fact, CA-MRSA probably behaves more like community-associated methicillin-sensitive S aureus (MSSA). This article reviews current knowledge of the epidemiology and clinical manifestations of community-associated S aureus and CA-MRSA infections. abstract_id: PUBMED:20678828 Community-acquired methicillin-resistant Staphylococcus aureus infections in children. Introduction: Community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) infections were first reported in the 1990s. Young, healthy individuals are frequently affected. The incidence of CA-MRSA in Spain is increasing. Methods: All children seen between August 2006 and January 2009 with CA-MRSA infections were included. The S. aureus isolates were studied by conventional techniques, their antibiotic susceptibility by agar disk diffusion, the presence of the mecA gene by multiplex polymerase chain reaction (PCR) and the gene encoding the Panton-Valentine leukocidin (PVL) by conventional PCR. CA-MRSA colonization was studied both in patients and their family members. Results: CA-MRSA was isolated in 15 samples from 12 patients, aged between 6 days and 14 years. Half of them were not native. Eight patients required hospital admission. The most common clinical presentation was skin and soft tissue infection (92%). Secondary CA-MRSA bacteraemia was present in two patients.
All strains were PVL producers, and two were resistant to macrolides in addition to methicillin resistance; one of these was also resistant to lincosamides. An intra-familial transmission was identified. The clinical outcome was favourable in all patients. Conclusion: CA-MRSA infections are emerging in Spain. Empirical treatment of skin and soft tissue infections should not be changed, since their incidence is still low. The drainage of CA-MRSA suppurative infections plays an important role in their treatment. Clindamycin or trimethoprim-sulfamethoxazole should be used for mild or moderate skin and soft tissue infections. Controlling the spread of these strains presents a challenge in the community today. abstract_id: PUBMED:19747505 Skin infection in children colonized with community-associated methicillin-resistant Staphylococcus aureus. Objectives: The relationship between community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA) nasal colonization and subsequent infection in children is unknown. We sought to define risk factors for skin and soft tissue infection (SSTI) in community children. Methods: A prior study measured S. aureus nasal colonization prevalence for 1300 community children. To detect subsequent SSTI in these children or a household member, surveys were administered 6 and 12 months following enrollment. Results: SSTIs were reported by 56/708 (8.1%) respondents during the initial 6-month interval. SSTI developed in 6/26 (23%) initially colonized with MRSA, 16/194 (8%) with methicillin-sensitive S. aureus colonization, and 34/474 (7%) not colonized with S. aureus (MRSA vs. not MRSA, univariate analysis, p = 0.014). In multivariable analysis, factors associated with SSTI included history of SSTI in the child during the year preceding enrollment (p < 0.01) and SSTI in household contacts during the follow-up interval (p < 0.01); MRSA nasal colonization approached statistical significance (p = 0.08). Conclusions: In the current era of community MRSA transmission, SSTI is a disease of households, with recurrences in index cases and occurrences among household contacts. Children with MRSA colonization may be at risk for subsequent SSTI. Further study of MRSA transmission dynamics in households and preventive strategies should receive high priority. abstract_id: PUBMED:19100163 Community-acquired methicillin-resistant Staphylococcus aureus. Recently, methicillin-resistant Staphylococcus aureus (MRSA) has emerged as a cause of community-acquired (CA) infections among patients without established risk factors for MRSA. CA-MRSA strains mainly cause mild skin and soft tissue infections in otherwise healthy children and young adults, but can also cause severe necrotizing fasciitis and pneumonia. In contrast to nosocomial MRSA, CA-MRSA are, in general, susceptible to multiple antimicrobials and present a different genotype. Most CA-MRSA strains share the staphylococcal chromosomal cassette (SCCmec) type IV and produce Panton-Valentine leukocidin (PVL), a cytotoxin that causes leukocyte destruction and tissue necrosis. At present, the predominant clone is the USA300 clone, which is widely disseminated in the United States, Europe and Australia. In Spain, the predominant clone is related to the USA300 clone. The main mechanism of transmission is close person-to-person contact, although household pets and farm animals have also been implicated. In patients with purulent skin and soft tissue infections, the mainstay of treatment is incision and drainage.
Antimicrobials are indicated in patients not responding to appropriate drainage. Clindamycin, trimethoprim-sulfamethoxazole or tetracyclines can be administered, while the use of fluoroquinolones should be avoided due to the rapid emergence of resistance. For severe infections, vancomycin should be used. Other alternatives are linezolid or daptomycin (only if there is no pulmonary involvement). Adequate hygiene practices are the most efficient measure to prevent spread. abstract_id: PUBMED:20192731 Community-associated methicillin-resistant Staphylococcus aureus and HIV: intersecting epidemics. Background: Single-site studies have suggested a link between human immunodeficiency virus (HIV) and community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA). Methods: Population-level incidence of HIV-infected patients with CA-MRSA versus community-associated methicillin-susceptible S. aureus (CA-MSSA) infection was assessed in the Cook County Health and Hospitals System (CCHHS), a multi-hospital and ambulatory care center. Rates in zip codes, including those with a high density of individuals with prior incarceration (ie, high-risk zip codes), were calculated. We did a nested case-control analysis of hospitalized HIV-infected patients with S. aureus skin and soft-tissue infections (SSTIs). Results: In CCHHS, the incidence of CA-MRSA SSTIs was 6-fold higher among HIV-infected patients than it was among HIV-negative patients (996 per 100,000 HIV-infected patients vs 157 per 100,000 other patients; P < .001). The incidence of CA-MRSA SSTIs among HIV-infected patients significantly increased from 2000-2003 (period 1) to 2004-2007 (period 2) (from 411 to 1474 cases per 100,000 HIV-infected patients; relative risk [RR], 3.6; P < .001), with cases in period 1 clustering in an area 6.3 km in diameter (P = .035) that overlapped high-risk zip codes. By period 2, CA-MRSA SSTIs among HIV-infected patients were spread throughout Cook County. USA300 was identified as the predominant strain by pulsed-field gel electrophoresis (accounting for 86% of isolates). Among hospitalized HIV-infected patients, the incidence of CA-MRSA increased significantly from period 1 to period 2 (from 190 to 779 cases per 100,000 HIV-infected patients; RR, 4.1; P < .001). Risks for CA-MRSA by multivariate analysis were residence in alternative housing (eg, shelters), residence in high-risk zip codes, younger age, and infection in period 2. Conclusions: HIV-infected patients are at markedly increased risk for CA-MRSA infection. This risk may be amplified by overlapping community networks of high-risk patients that may be targets for prevention efforts. abstract_id: PUBMED:22178559 Community-acquired methicillin-resistant Staphylococcus aureus: Review. Community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) have emerged worldwide. These CA-MRSA are different from classical hospital-acquired MRSA. They share common characteristics: they affect mainly young subjects, without past medical history. The majority of strains produce the Panton-Valentine leukocidin. They are mainly responsible for suppurative skin infections and rarely for invasive infections such as necrotizing pneumonia. The situation in the US is alarming, with the USA300 clone as the main circulating clone, whereas in Europe, the diffusion of CA-MRSA strains remains limited. It is important to take advantage of the experience acquired from the US to limit the potential spread of such CA-MRSA strains.
Answer: The emergence of community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA) infections among patients without health care-associated risk factors has been a growing concern. A study conducted at a public hospital in Chicago and its associated clinics found that the incidence of CA-MRSA skin and soft tissue infections increased significantly from 2000 to 2005. The study identified several risk factors for CA-MRSA, including incarceration (odds ratio [OR], 1.92), African American race/ethnicity (OR, 1.91), and residence at a group of geographically proximate public housing complexes (OR, 2.50). Older age was inversely related to the risk of CA-MRSA (OR, 0.89 for each decade increase). The majority of the strains tested (79%) were pulsed-field gel electrophoresis type USA300, suggesting a clonal spread among Chicago's urban poor. The study concluded that control measures could initially focus on core groups that have contributed disproportionately to the risk, although CA-MRSA becomes endemic as it disseminates within communities (PUBMED:17533205). This evidence suggests that public housing and incarceration may indeed amplify the transmission of CA-MRSA, as they are associated with higher odds of infection. The concentration of cases in certain geographic areas and among specific populations indicates that these settings could be critical points for targeted interventions to control the spread of CA-MRSA.
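As a quick arithmetic check on the figures quoted in the answer above, the short Python sketch below recomputes the 2005-versus-2000 relative risk from the two incidence rates reported in PUBMED:17533205 (a minimal illustration; the variable names are ours, not the study's):

incidence_2000 = 24.0    # CA-MRSA skin and soft tissue infections per 100,000 people in 2000
incidence_2005 = 164.2   # per 100,000 people in 2005

# The relative risk is the ratio of the two incidence rates
relative_risk = incidence_2005 / incidence_2000
print(round(relative_risk, 2))  # prints 6.84, matching the reported relative risk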
Instruction: Do scores on a tachistoscope test correlate with baseball batting averages? Abstracts: abstract_id: PUBMED:21471813 Do scores on a tachistoscope test correlate with baseball batting averages? Background: Millions of dollars are spent each year by individuals seeking to improve their athletic performance. One area of visual training is the use of the tachistoscope, which measures inspection time or visual recognition time. Although the potential of the tachistoscope as a training tool has received some research attention, its use as a means of measurement or predictor of athletic ability in sports has not been explored. The purpose of this pilot study is to assess the potential of the tachistoscope as a measurement instrument by determining if a baseball player's ability to identify a tachistoscopically presented picture of a pitch is correlated with hitting performance as measured by batting average. Methods: Using sport-specific slides, 20 subjects-all non-pitching members of the Pacific University Baseball Team-were administered a tachistoscopic test. The test consisted of identifying the type of pitch illustrated in 30 randomly ordered slides depicting a pitcher throwing four different baseball pitches. Each slide was presented for 0.2 sec. The results of the test were compared with the athlete's previous season's batting average. Results: A positive correlation was found between an athlete's ability to correctly identify a picture of a pitch presented tachistoscopically and batting average (r=0.648; P&lt;0.01). These results suggest that a superior ability to recognize pitches presented via tachistoscope may correlate with a higher skill level in batting. Conclusions: Tachistoscopic test scores correlated positively with batting averages. The tachistoscope may be an acceptable tool to help in assessing batting performance. Additional testing with players from different sports, different levels of ability, and different tachistoscopic times should be performed to determine if the tachistoscope is a valid measure of athletic ability. Implications may also be drawn in other areas such as military and police work. abstract_id: PUBMED:36775314 Analysis of trunk rotation during baseball batting with lumbar disc degeneration. Lower back pain (LBP) is common among baseball players, and the occurrence of lumbar intervertebral disc degeneration is high. The dynamic load on the lumbar spine due to the postures and movements characteristic of baseball is suspected of aggravating LBP caused by degeneration, but the difference in batting action between players with and without degeneration is not known. The purpose of this study was to investigate the difference in batting motion in the presence and absence of lumbar disc degeneration (LDD). The subjects were 18 male baseball players belonging to the University League Division I: seven with disc degeneration and 11 without. The motion task analyzed tee batting. The items examined were the angles of rotation of shoulder, pelvis, hip, and twisting motion; rotation angular velocity; time to maximum angular velocity; and muscle activity potentials of the bilateral latissimus dorsi, erector spinae, multifidus, external oblique, internal oblique, rectus abdominis, and gluteus medius muscles; at each stage of batting action. 
There were significant differences between the shoulder and pelvis in rotation angle, time to maximum angular velocity, and muscle activity in the presence and absence of LDD, and in the time to maximum angular velocity between the shoulder and pelvis. We infer that these differences are characteristic of batting motion due to LDD. abstract_id: PUBMED:34788743 The Effects of Concussions on Batting Performance in Major League Baseball Players: A Critically Appraised Topic. Clinical Scenario: The impact of concussion in baseball athletes is far reaching although typically less studied than collision sports. The neuromotor sequelae of these injuries can have lasting effects on the high level of coordination needed in baseball skills. In professional athletes, the long-term effects of concussion combined with the high demands for performance can potentially shorten athletes' careers. Clinical Question: Do Major League Baseball players who have had a concussion compared with those with no history of concussions have decreased batting performance after they return to full participation? Summary of Key Findings: Seven articles meeting criteria for level 2 evidence were identified in this critically appraised topic. Four of 7 studies found deficits in batting performance following concussion. However, the 3 studies that did not identify differences lacked a comparative control group. Clinical Bottom Line: Evidence supports a relationship between concussion and decreased batting performance. This suggests there is a demand for screening and rehabilitation strategies aimed at improving sport-specific neuromotor and coordination skills in these individuals. Strength of Recommendation: Collectively, the body of evidence included to answer the clinical question aligns with the strength of recommendation of B. abstract_id: PUBMED:30092394 A comparison of kinetics in the lower limbs between baseball tee and pitched ball batting. In this study, the kinetic characteristics of lower limbs during batting were investigated by comparing batting off a tee with batting a pitched ball. Participants were 10 male collegiate baseball players who performed tee batting (TB) and batting using a pitching machine (MB; approximate ball speed: 33.3 m/s). Three-dimensional coordinate data were acquired using a motion capture system, and ground reaction forces were measured using three force platforms. Lower limb joint torques were obtained by inverse dynamics calculations. The results indicated that the angular velocity of the lower trunk was larger in TB than in MB for rotation. The swing time from stride foot contact with the ground to ball impact was significantly longer in MB than in TB. The angular impulses of bilateral hip adduction, pivot hip external rotation, and stride hip and knee extension torques were significantly larger in MB, suggesting that batters exert these joint torques earlier for pitched balls to handle time constraints by changing the rotation of the lower trunk in response to the unknown ball location and speed in MB. These findings will help to fill a gap in the literature and provide coaching insights for improving batting motion. abstract_id: PUBMED:35214454 Impact Position Estimation for Baseball Batting with a Force-Irrelevant Vibration Feature. In this work we propose a novel method for impact position estimation during baseball batting, which is independent of impact intensity, i.e., force-irrelevant. 
In our experiments, we mount a piezoelectric vibration sensor on the knob of a wooden bat to record: (1) 3600 vibration signals (waveforms) from ball-bat impacts in the static experiment-30 impacts from each of 40 positions (distributed 1-40 cm from the end of the barrel) and 3 intensities (drop heights at 75, 100, and 125 cm, resp.), and (2) 45 vibration signals from actual battings by three baseball players in the dynamic experiment. The results show that the peak amplitude of the signal in the time domain, and the peaks of the first, second, and third eigenfrequencies (EFs) of the bat all increase with the impact intensity. However, the ratios of peaks at these three EFs (1st/2nd, 2nd/3rd, and 1st/3rd) hardly change with the impact intensity, and the observation is consistent for both the static and dynamic experiments across all impact positions. In conclusion, we have observed that the ratios of peaks at the first three EFs are a force-irrelevant feature, which can be used to estimate the impact position in baseball batting. abstract_id: PUBMED:33345055 Temporally Coupled Coordination of Eye and Body Movements in Baseball Batting for a Wide Range of Ball Speeds. We investigated the visuomotor strategies of baseball batting, in particular, the relationship between eye and body (head and hip) movements during batting for a wide range of ball speeds. Nine college baseball players participated in the experiment and hit balls projected by a pitching machine operating at four different ball speeds (80, 100, 120, 140 km/h). Eye movements were measured with a wearable eye tracker, and body movements were measured with an optical motion capture system. In the early period of the ball's flight, batters foveated the ball with overshooting head movements in the direction of the ball's flight while compensating for the overshooting head movements with eye movements for the two slower ball speeds (80 and 100 km/h) and only head rotations for the two faster ball speeds (120 and 140 km/h). After that, batters made a predictive saccade and a quick head rotation to the future ball position before the angular velocity of the ball drastically increased. We also found that regardless of the ball speed, the onsets of the predictive saccade and the quick head movement were temporally aligned with the bat-ball contact and rotation of the hip (swing motion), but were not correlated with the elapsed time from the ball's release or the ball's location. These results indicate that the gaze movements in baseball batting are not solely driven by external visual information (ball position or velocity) but are determined in relation to other body movements. abstract_id: PUBMED:33172339 Optimisation of the upper body motion for production of the bat-head speed in baseball batting. The purposes of this study were to 1) develop a simulation model of baseball batting utilising the standard motion, and 2) explore optimal motions of the upper body to increase the bat-head speed. Twenty-three male collegiate baseball players performed tee batting set at waist height. A ten-segment angle-driven simulation model consisting of a bat and upper body was driven using with the coordinate data of the standard motion. Performance optimisation was conducted to find joint angle time histories of the upper body that increase the maximum bat-head speed. 
In the evaluation of the simulation model, the root mean square error between the measured and simulation model was 0.19 m/s and 0.98° for the time histories of the bat-head speed and bat orientation angle. Performance optimisation was able to achieve a targeted increase in bat-head speed (35.6 m/s to 40.0 m/s) through greater barrel-side shoulder abduction, knob-side elbow flexion, and torso right lateral flexion around ball impact resulted in the bat accelerating in the hitting direction. It is concluded that the proposed simulation approach can be applied as a tool for further simulation analysis in various complex sporting motions. abstract_id: PUBMED:32223653 Timing of Return to Batting Milestones After Ulnar Collateral Ligament Reconstruction in Professional Baseball Players. Background: Ulnar collateral ligament reconstruction (UCLR) is a common procedure in professional baseball position players. Timing of return to hitting after UCLR is unknown. Purpose/hypothesis: The purpose was to determine the time to return to batting milestones after UCLR as well as the effect of UCLR upon batting performance in professional baseball players. The hypothesis was that position players would return to batting in an in-season game before fielding in an in-season game, and hitting performance would remain unchanged after UCLR. Study Design: Cohort study; Level of evidence, 3. Methods: All professional position players who underwent UCLR between 2010 and 2018 were included. Time to batting milestones after UCLR was analyzed. Batting performance before and after UCLR was compared and analyzed. Results: Overall, 141 UCLRs (96% performed on the dominant arm) in 137 position players were included (86% Minor League players). Four players underwent revision, all within 1 year of the primary UCLR. With regard to position, catchers and shortstops were overrepresented. With regard to batting side, 57% batted from the right and 12% batted as switch-hitters. Of the surgeries, 76% were on the trail/back arm. While 91% of players returned to some form of throwing, there was a progressive gradual decline as the rehabilitation process progressed, as only 77% were able to return to hitting in a real game and 75% were able to return to fielding in a real game. The first dry swing occurred at 150 ± 49 days after surgery, the first batting practice occurred at 195 ± 58 days after surgery, the first hitting in a real game occurred at 323 ± 92 days after surgery, and the first fielding in a real game occurred at 343 ± 98 days after surgery. However, players generally saw a decrease in their utilization, with fewer at bats (P &lt; .001) translating into fewer hits (P &lt; .001) and runs (P &lt; .001). Conclusion: Professional position players begin swinging at 150 days (approximately 5 months) after UCLR, while they do not hit in batting practice until 195 days (approximately 6.5 months) and do not hit in a real game until 323 days (approximately 10.7 months) after UCLR. Players see a decrease in hitting utilization after UCLR. On average, players hit in a real game 20 days before fielding in a real game. abstract_id: PUBMED:33998388 Changes in lumbar kinematics and trunk muscle electromyographic activity during baseball batting under psychological pressure. Psychological pressure during sports competition disturbs the ideal physical movement and causes injury. Baseball batting frequently causes trunk injuries. 
This study aimed to examine the influence of psychological pressure on the lumbar kinematics and trunk muscle activity during the baseball batting. Fourteen collegiate baseball players participated in this study. The participants performed bat swings under three different psychological conditions (non-pressure, pressure, and emphasized pressure). The lumbar kinematics and trunk muscle activity were measured during each bat swing. One- and two-way analyses of variance were performed to compare the lumbar kinematics and trunk muscle activity among different psychological pressure conditions. The lumbar flexion angle throughout the bat swing in the swing phase, from the moment of ground contact of the lead foot to the moment of ball contact, was significantly larger under the pressure and emphasized pressure conditions than under the non-pressure condition (P&lt;0.05). The bilateral lumbar erector spinae (LES) activities in the swing and follow-through phases were significantly higher under the emphasized pressure condition than under the non-pressure condition (P&lt;0.05). These results indicate that the baseball batting under psychological pressure influenced the lumbar kinematics and bilateral LES activities and may be related to the development of low back pain. abstract_id: PUBMED:34328452 Review: Approaches to Visual-motor Control in Baseball Batting. Significance: This review evaluates model-based online approaches to visual-motor control in baseball batting in light of research studies on this topic.Throughout the history of research on baseball batting, there has been a presumptive winner in terms of how visual-motor control occurs for this complex skill. Because of the severe time constraints and the short duration of the action, it has been assumed that a baseball swing must involve model-based control. That is, hitting involves using an open-loop, pre-programmed movement that is parameterized based on situational probabilities, advance cues from the pitcher's delivery, and information early in the ball flight, which are used by an internal model to predict the future location of the ball. In this review, the author contrasts two variants of this type of model-based control (a 3D world model and spatial memory of trajectories) with an online control approach in which the batter uses only currently available perceptual information to control the swing prospectively via information-movement coupling. It is shown that the assumed necessity of predictive control in baseball batting is based on too narrow of a conception of online control. Although work still needs to be done to elucidate the specifics of the control, it is argued here that online control provides an account of visual-motor control in baseball batting that is both more parsimonious and more consistent with published research findings. Answer: Yes, scores on a tachistoscope test correlate with baseball batting averages. A study found a positive correlation between an athlete's ability to correctly identify a picture of a pitch presented tachistoscopically and batting average (r=0.648; P<0.01). This suggests that a superior ability to recognize pitches presented via tachistoscope may correlate with a higher skill level in batting (PUBMED:21471813).
Instruction: Short segment columnar-lined oesophagus: an underestimated cancer risk? Abstracts: abstract_id: PUBMED:10582754 Short Barrett: prevalence and risk factors. Background: The incidence of adenocarcinoma at the gastro-oesophageal junction is on the increase. These carcinomas are usually diagnosed too late and thus have a poor prognosis. Only early diagnosis can improve the situation. Classical Barrett oesophagus (length, &gt;3 cm) is a known precancerous condition. There is also specialized columnar epithelium (SCE) in the grossly unremarkable gastro-oesophageal transitional zone (short Barrett). Methods: To determine the frequency of SCE, 370 patients were investigated by gastroscopy (OGD) consecutively between September 1995 and February 1996. Results: Classical Barrett oesophagus was found to have an incidence of 4.6%. In contrast, microscopic evidence of SCE was observed in 13.6% of the cases. Patients with short Barrett presented with reflux symptoms (odds ratio (OR), 4.7), irregular zona serrata ('tongues') in the cardia (OR, 2.8), and reflux oesophagitis significantly more frequently. Patients with reflux symptoms and concomitant 'tongues', however, had an OR of 13.16. Careful history-taking, together with a subtle histologic work-up of the gastro-oesophageal transitional zone can improve the rate of detecting patients with short Barrett. Conclusion: Patients with reflux symptoms and irregular zona serrata should be selectively biopsied at the gastro-oesophageal junction, even when the latter presents a grossly normal appearance, with the aim of detecting patients at risk of developing a Barrett carcinoma. abstract_id: PUBMED:19401529 Constitutive short telomere length of chromosome 17p and 12q but not 11q and 2p is associated with an increased risk for esophageal cancer. Shortened telomere length may cause chromosomal instability in Barrett's esophagus and thus promote tumorigenesis. However, whether short telomere length in all chromosomes or just some of them is associated with increased esophageal cancer (EC) risk is largely unknown. To address this question, we examined the overall and chromosome-specific telomere lengths of 17p, 12q, 2p, and 11q and assessed their associations with EC risk. In a case-control study with 94 EC cases and 94 matched controls, the overall telomere length and the chromosome-specific telomere lengths of 17p, 12q, 2p, and 11q in peripheral blood lymphocytes were determined by a real-time PCR and a modified single telomere length analysis assay, respectively. Multivariate logistic regression analysis was used to assess the association between telomere length and EC risk. Compared with controls, EC patients had significantly shorter overall telomere lengths (P = 0.004) and chromosome-specific telomere lengths of 17p (P = 0.003) and 12q (P = 0.006) but not of 11q (P = 0.632) and 2p (P = 0.972). Furthermore, the multivariate logistic regression analysis showed that the short overall telomere length and chromosome-specific telomere lengths of 17p and 12q were associated with a dose-dependent increase in EC risk. Our study provides the first epidemiologic evidence that short telomere length of 17p and 12q plays an important role in esophageal carcinogenesis, suggesting that short telomere length of specific chromosomes is associated with the etiology of different cancer types. abstract_id: PUBMED:18049166 Short segment columnar-lined oesophagus: an underestimated cancer risk? 
A large cohort study of the relationship between Barrett's columnar-lined oesophagus segment length and adenocarcinoma risk. Objective: Longer columnar-lined oesophagus (CLO) segments have been associated with higher cancer risk, but few studies have demonstrated a significant difference in neoplastic risk stratified by CLO segment length. This study establishes adenocarcinoma risk in CLO by segment length. Methods: This is a multicentre retrospective observational study. Medical records of 1000 patients registered from six centres were examined and data extracted on demographic factors, endoscopic features and histopathology of oesophageal biopsies. Adenocarcinoma incidence was evaluated for patients stratified by their diagnostic segment length. Results: Seven hundred and eighty-one patients had biopsy-proven CLO and a segment length recorded. Four hundred and ninety patients had at least 1 year of follow-up, providing 2620 patient-years of follow-up for incidence analysis. The overall annual adenocarcinoma incidence was 0.62%/year (95% confidence interval: 0.36-1.01). The annual incidence in the segment length groups was 0.59% (0.19-1.37) in short segments (≤3 cm), 0.099% (0.025-0.55) in >3 to ≤6 cm, 0.98% (0.27-2.52) in >6 to ≤9 cm and 2.0% (0.73-4.35) in >9 cm; P = 0.004. Conclusion: This study demonstrates that the neoplastic risk of CLO varies according to segment length, and that overall, the risk of adenocarcinoma development is similar in short-segment and long-segment (>3 cm) CLO. The highest adenocarcinoma risk was found in the longest CLO segments and the lowest risk in segments >3 to ≤6 cm. abstract_id: PUBMED:16771934 Short segment Barrett's esophagus: ignorance may be bliss. Data suggest the incidence of Barrett's esophagus is increasing, but the vast majority are of the short segment (SS) variety. Whether SSBE has an increased cancer risk is poorly defined, whereas overdiagnosis is common and it is associated with increased insurance costs and perception of cancer risks. Maybe in this disease, ignorance is bliss. abstract_id: PUBMED:17900269 Meta-analysis: Cancer risk in Barrett's oesophagus. Background: Risk of cancer in Barrett's oesophagus is reported to vary between studies and also between the countries where the studies were conducted, as per several systematic reviews. Cancer incidence has implications on surveillance strategies. Aim: To perform a meta-analysis to determine the incidence of oesophageal cancer in Barrett's oesophagus. Methods: Articles retrieved by MEDLINE search (English language, 1966-2004). Studies had to necessarily include verified Barrett's oesophagus surveillance patients, documented follow-up and cancer identified as the outcome measure. A random effects model of meta-analysis was chosen and results were expressed as mean (95% CI). Results: Forty-one articles were selected for conventional Barrett's oesophagus (length >3 cm); eight included short segment Barrett's oesophagus (one additional article including only short segment Barrett's oesophagus). Cancer incidence was 7/1000 (6-9) person-years duration of follow-up (pyd), with no detectable geographical variation [UK 7/1000 (4-12) pyd, USA 7/1000 (5-9) pyd and Europe 8/1000 (5-12) pyd]. Cancer incidence in the UK was 10/1000 (7-14), when two large studies were excluded. Cancer incidence in SSBO was 6/1000 (3-12) pyd.
When short segment Barrett's oesophagus was compared to conventional Barrett's oesophagus, there was a trend towards reduced cancer risk [OR 0.55 (95% CI: 0.19-1.6), P = 0.25]. Conclusion: We found no geographical variations in Barrett's oesophagus cancer risk, but observed a trend towards reduced cancer risk in short segment Barrett's oesophagus. There is a time trend of decreasing cancer incidence. abstract_id: PUBMED:18049157 Cancer risk in Barrett's oesophagus. Barrett's oesophagus results from the replacement of the normal squamous lining of the oesophagus by a columnar epithelium. It is the sole known premalignant condition for oesophageal adenocarcinoma. The annual cancer incidence of 1% in Barrett's oesophagus, calculated from published series, has recently been considered an overestimation owing to publication bias, and a 0.5% risk was proposed. The prerequisite of the presence of intestinal metaplasia for the diagnosis of Barrett's oesophagus, although widely accepted, is questioned by some authors. How adenocarcinoma incidence is influenced by requiring or not requiring intestinal metaplasia for Barrett's oesophagus diagnosis is unknown. Most of the published studies included only (or preferentially) patients with long segments. Data on adenocarcinoma incidence in short segments (<3 cm) are very scarce, but it is believed to be lower than in long segments. The magnitude of cancer risk influences the cost effectiveness of surveillance of Barrett's oesophagus. Frequently, therapeutic intervention is performed when high-grade dysplasia is diagnosed, preventing progression to adenocarcinoma. This could lead to an underestimation of cancer risk in Barrett's surveillance studies. abstract_id: PUBMED:10766679 Effect of segment length on risk for neoplastic progression in patients with Barrett esophagus. Background: The increased risk for esophageal adenocarcinoma associated with long-segment (> or =3 cm) Barrett esophagus is well recognized. Recent studies suggest that short-segment (<3 cm) Barrett esophagus is substantially more common; however, the risk for neoplastic progression in patients with this disorder is largely unknown. Objective: To examine the relation between segment length and risk for aneuploidy and esophageal adenocarcinoma in patients with Barrett esophagus. Design: Prospective cohort study. Setting: University medical center in Seattle, Washington. Patients: 309 patients with Barrett esophagus. Measurements: Patients were monitored for progression to aneuploidy and adenocarcinoma by repeated endoscopy with biopsy for an average of 3.8 years. Cox proportional hazards analysis was used to calculate adjusted relative risks and 95% CIs. Results: After adjustment for histologic diagnosis at study entry, segment length was not related to risk for cancer in the full cohort (P > 0.2 for trend). When patients with high-grade dysplasia at baseline were excluded, however, a nonsignificant trend was observed; based on a linear model, a 5-cm difference in segment length was associated with a 1.7-fold (95% CI, 0.8-fold to 3.8-fold) increase in cancer risk. Among all eligible patients, a 5-cm difference in segment length was associated with a small increase in the risk for aneuploidy (relative risk, 1.4 [CI, 1.0 to 2.1]; P = 0.06 for trend). A similar trend was observed among patients without high-grade dysplasia at baseline.
Conclusions: The risk for esophageal adenocarcinoma in patients with short-segment Barrett esophagus was not substantially lower than that in patients with longer segments. Although our results suggest a small increase in risk for neoplastic progression with increasing segment length, additional follow-up is needed to determine whether the patterns of risk occurred by chance or represent true differences. Until more data are available, the frequency of endoscopic surveillance should be selected without regard to segment length. abstract_id: PUBMED:15645398 Epidemiologic risk factors for Barrett's esophagus and associated adenocarcinoma. The incidence of esophageal adenocarcinoma (AC) has increased dramatically in the Western world over the past 20 years and the majority of these cancers arise on the background of the preinvasive lesion Barrett's esophagus. The epidemiologic factors that contribute to an individual's susceptibility for Barrett's esophagus and associated cancer are likely to be multifactorial. However, the short time frame over which the incidence of adenocarcinoma has increased, and the increase across populations, provides a strong argument for environmental factors as etiologic agents, perhaps interacting with genetically determined characteristics that define personal susceptibility. In this review we discuss the epidemiologic evidence for the proposed demographic and environmental risk factors for the development of both Barrett's esophagus and AC. The current evidence suggests that significant risk factors include male sex, Caucasian race, and the presence of duodenogastroesophageal reflux disease. The susceptibility for reflux disease may in turn be influenced by factors such as obesity, the use of drugs that lower the lower-esophageal sphincter tone, and a protective effect of Helicobacter pylori colonization. There appears to be a weak association between smoking and AC. The role of dietary factors has not been studied adequately and deserves further attention. An understanding of the factors that predispose to the development and progression of Barrett's esophagus is crucial to the implementation of effective screening and prevention programs. abstract_id: PUBMED:24589668 Symptoms and endoscopic features at barrett's esophagus diagnosis: implications for neoplastic progression risk. Objectives: Risk stratification of Barrett's esophagus (BE) patients based on clinical and endoscopic features may help to optimize surveillance practice for esophageal adenocarcinoma (EAC) development. The aim of this study was to investigate patient symptoms and endoscopic features at index endoscopy and risk of neoplastic progression in a large population-based cohort of BE patients. Methods: A retrospective review of hospital records relating to incident BE diagnosis was conducted in a subset of patients with specialized intestinal metaplasia from the Northern Ireland BE register. Patients were matched to the Northern Ireland Cancer Registry to identify progressors to EAC or esophageal high-grade dysplasia (HGD). Cox proportional hazards models were applied to evaluate the association between endoscopic features, symptoms, and neoplastic progression risk. Results: During 27,997 person-years of follow-up, 128 of 3,148 BE patients progressed to develop HGD/EAC. Ulceration within the Barrett's segment, but not elsewhere in the esophagus, was associated with an increased risk of progression (hazard ratio (HR) 1.72; 95% confidence interval (CI): 1.08-2.76). 
Long-segment BE carried a significant sevenfold increased risk of progression compared with short-segment BE; none of the latter group developed EAC during the study period. Conversely, the absence of reflux symptoms was associated with an increased risk of cancer progression (HR 1.61; 95% CI: 1.05-2.46). Conclusions: BE patients presenting with a long-segment BE or Barrett's ulcer have an increased risk of progressing to HGD/EAC and should be considered for more intense surveillance. The absence of reflux symptoms at BE diagnosis is not associated with a reduced risk of malignant progression, and may carry an increased risk of progression. abstract_id: PUBMED:19161747 Radiofrequency ablation of Barrett's esophagus: short-term results. Background: The presence of Barrett's esophagus (BE) increases the risk of esophageal cancer. Total regression of BE is uncommon with medication or laparoscopic fundoplication, and endoscopic techniques to obliterate BE have varied results. This study evaluated the early results of a balloon-based catheter radiofrequency ablation (RFA) system in patients with medically refractory reflux symptoms and biopsy-proven BE. Methods: The medical records of 27 consecutive patients who underwent RFA for BE from March 2005 through January 2007 were reviewed. Esophagogastroduodenoscopy was performed before ablation to document presence of BE and no cancer and at 8 weeks after the RFA to assess the presence of residual BE. Results: Mean patient age was 53.6 +/- 12.5 years; 16 (59%) were men. The average length of the Barrett segment treated was 4.6 +/- 4.7 cm. Two patients (7.4%) had low-grade dysplasia. No patient had high-grade dysplasia and cancer. There was no periprocedural morbidity or at follow-up, no postprocedure dysphagia or stricture. In all patients, the BE was completely replaced with normal squamous epithelium. Symptoms regressed in 16 patients (60%) with RFA and proton pump inhibitor therapy. Eleven required an antireflux procedure for persistent symptoms. Conclusions: Short-term results show that RFA for BE is safe and achieves 100% replacement of intestinal metaplasia. RFA of BE combined with fundoplication may be offered to patients with BE and medically refractory reflux symptoms. Long-term endoscopic surveillance is needed to determine if the risk of cancer is reduced with this bimodality therapy. Answer: The question of whether short segment columnar-lined oesophagus (SSCLO) represents an underestimated cancer risk is addressed in several studies. The prevalence of specialized columnar epithelium (SCE) in the gastro-oesophageal transitional zone, also known as short Barrett, was found to be 13.6% in a study of 370 patients, which is higher than the incidence of classical Barrett oesophagus (4.6%) (PUBMED:10582754). Patients with short Barrett presented with reflux symptoms, irregular zona serrata, and reflux oesophagitis more frequently, suggesting that these patients may be at risk of developing Barrett carcinoma and should be selectively biopsied even when the gastro-oesophageal junction appears grossly normal. A large cohort study demonstrated that the neoplastic risk of columnar-lined oesophagus (CLO) varies according to segment length, with the overall annual adenocarcinoma incidence being 0.62%/year. The study found that the risk of adenocarcinoma development was similar in short-segment and long-segment CLO, with the highest risk in the longest CLO segments (PUBMED:18049166). 
However, another study suggested that the risk for esophageal adenocarcinoma in patients with short-segment Barrett esophagus was not substantially lower than that in patients with longer segments (PUBMED:10766679). A meta-analysis found that the incidence of oesophageal cancer in Barrett's oesophagus was 7/1000 person-years duration of follow-up, with a trend towards reduced cancer risk in short segment Barrett's oesophagus compared to conventional Barrett's oesophagus (PUBMED:17900269). This trend towards reduced risk was also noted in another study, which suggested that the annual cancer incidence of 1% in Barrett's oesophagus might be an overestimation, and a 0.5% risk was proposed, especially when considering short segments (PUBMED:18049157). In summary, while there is evidence to suggest that short segment columnar-lined oesophagus carries a risk for neoplastic progression, the risk may be lower compared to longer segments. However, the presence of risk factors such as reflux symptoms and irregular zona serrata, as well as the potential for underestimation of risk due to surveillance practices, indicate that SSCLO may indeed represent an underestimated cancer risk. Therefore, careful monitoring and biopsy of patients with SSCLO are recommended to detect and manage potential progression to adenocarcinoma.
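For readers who want to see how the annual incidence figures in these cohort studies are derived, the short Python sketch below reproduces the person-years calculation behind the 0.62%/year rate in PUBMED:18049166 (an illustration only; the case count of 16 is back-calculated from the published rate and the 2620 patient-years of follow-up, and is not stated in the abstract):

person_years = 2620      # follow-up reported in PUBMED:18049166
assumed_cases = 16       # illustrative assumption, back-calculated rather than quoted

# Annual incidence = incident adenocarcinomas / person-years of follow-up
annual_incidence = assumed_cases / person_years
print(f"{annual_incidence:.2%} per year")  # ~0.61% per year, close to the reported 0.62%/year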
Instruction: Is insulin an independent risk factor for hypertension? Abstracts: abstract_id: PUBMED:11220285 Insulin resistance: an independent risk factor for cardiovascular disease? Population-based studies have shown that patients with type 2 diabetes have a higher cardiovascular morbidity and mortality and an increased all-cause mortality compared with non-diabetic subjects. In non-diabetic subjects, hyperinsulinaemia not only has been associated with an increased cardiovascular risk, but is also related with a number of other cardiovascular risk factors--hyperglycaemia, hypertension, dyslipidaemia, and a central body fat distribution--that constitute the insulin resistance syndrome and are also characteristic of type 2 diabetes. After 23 years of follow-up in the Paris Prospective Study, both fasting and 2-h insulin concentrations were related with all-cause mortality in a U-shaped fashion, with low and high concentrations being associated with an increased risk, independently of other risk factors; in contrast, while high insulin levels carried an increased risk for cardiovascular and coronary heart disease mortality, this was not independent of other risk factors. abstract_id: PUBMED:15607067 Hyperleptinemia as a risk factor in obesity-related hypertension Background And Objective: Obesity is a risk factor for high blood pressure (HBP). However, the mechanism has not been dilucidated yet. High concentrations of leptin (LEP) contributing to an increased blood pressure in obese patients via increase in the activity of the sympathetic nervous system. We explored the physiopathologic role of hyperleptinemia in HBP with regard to obesity. Subjects And Method: Prospective and comparative study. Between February 2001 and December 2002, we studied 499 subjects, 255 with HBP and 244 without hypertension. The relationship between weight, LEP, insulin, and serum lipids with blood pressure was assessed. Results: Patients with HBP versus patients without HBP had a higher body mass index (30.4 +/- 3.9 to 25.5 +/- 1.7 kg/m2) (p &lt; 0.01), higher serum concentrations of: creatinine (0.99 +/- 0.16 to 0.88 +/- 0.16 mg/dl), total cholesterol (219.5 +/- 50.4 to 190.6 +/- 40.0 mg/dl), cLDL (126.9 +/- 52.2 to 108.3 +/- 53.3 mg/dl), triglycerides (221.1 +/- 123.8 to 164.5 +/- 86.5 mg/dl), LEP (14.9 +/- 8.4 to 6.7 +/- 3.5 ng/ml) and insulin (24.2 +/- 6.5 to 16.8 +/- 4.7 mU/ml) (p &lt; 0.01). In the univariate and multivariate analysis, obesity, dyslipidemia, hyperleptinemia and hyperinsulinemia were independent risk factors for HBP (p &lt; 0.01). Conclusions: We suggest that hyperleptinemia has a direct role in the physiopathologic mechanism of obesity-associated HBP, and it could be considered as an independent risk factor for HBP and cardiovascular disease. abstract_id: PUBMED:20389129 Neonatal macrosomia is an independent risk factor for adult metabolic syndrome. Background: Weight in infancy correlates with risk of type 2 diabetes, hypertension, and obesity in adulthood. Clinical observations have been confounded by obesity-prone genotypes and obesity-linked lifestyles. Objectives: To define the effects of isolated neonatal macrosomia in isogenic animals, we compared macrosomic and control C57Bl6 mice co-fostered by healthy dams receiving standard laboratory feed. Methods: Naturally occurring neonatal macrosomia was identified by a gender-specific weanling weight above the 90th percentile for the colony. 
Macrosomic and control mice were phenotyped in adulthood by exercise wheel, tail cuff and intraperitoneal insulin or glucose challenge. Results: Compared to control males, adult males with a history of neonatal macrosomia had significantly increased body weight, reduced voluntary activity, insulin resistance, fasting hyperinsulinemia, and impaired glucose tolerance. In contrast, adult females with neonatal macrosomia had no significant alteration in body weight or endocrine phenotypes, but did have higher blood pressures and lower heart rates than control females. After these baseline studies, all mice were switched to a hypercaloric, high fat diet (5 kcal/g, 45% of energy as fat). Twenty weeks later, male mice had impaired glucose tolerance and insulin resistance, independent of their weanling weight classification. While on high fat feeds, macrosomic males maintained a significantly higher body weight than control males. Conclusions: We conclude that (1) in our murine model, neonatal macrosomia is an independent risk factor of adult metabolic syndrome, and (2) neonatal macrosomia accentuates the sexually dimorphic predisposition of C57Bl6 male mice towards glucose intolerance and C57Bl6 female mice towards hypertension. abstract_id: PUBMED:32509704 Effects of risk factor numbers on the development of the metabolic syndrome. This study was performed to identify the factors affecting the development of metabolic syndrome by comparing the numbers of risk factors of the syndrome and by identifying the factors influencing the development of metabolic syndrome. Two hundred forty-eight health screening examinees were included in the study (101 males, 147 females). The diagnostic rates of metabolic syndrome risk factors showed that 35.1% of the subjects had abdominal obesity, 32.7% had high blood pressure, 66.1% had high insulin/blood sugar, 43.1% had high triglyceride lipidemia, and 7.3% had low high-density lipoprotein lipidemia. No significant difference in the incidence of metabolic syndrome was found between genders. The most common number of risk factors was 1 in males (31.7%) and 2 in females (30.6%). Significant differences were found in age and body mass index (BMI) between the normal group with no risk factors and the metabolic syndrome group. There was a significant difference in BMI between the groups with 1 risk factor and with 3 risk factors. BMI was significantly higher (5.282 times) compared to their counterparts (P < 0.001). A significant difference in BMI was found between the group with 2 risk factors and the syndrome group with more than 3 risk factors, and the incidence was higher (4.094 times) in the overweight group than in their counterparts (P < 0.001). abstract_id: PUBMED:35023952 Metabolic Syndrome is an Independent Risk Factor for Fuhrman Grade and TNM Stage of Renal Clear Cell Carcinoma. Background: More and more evidence shows that metabolic syndrome (MS) is closely related to clear cell renal cell carcinoma (ccRCC), but the impact of MS on the Fuhrman grade and TNM stage of ccRCC is rarely reported. Purpose: To explore the relationship between MS and its components and the Fuhrman grade and TNM stage of ccRCC. Objective: The clinical data of 247 patients with ccRCC diagnosed in our hospital from January 2016 to November 2020 were retrospectively collected and analyzed. Based on the diagnostic criteria of MS, the patients were divided into MS and non-MS groups. Logistic regression analysis was used to analyze the independent risk factors of ccRCC.
Results: The incidence of MS was 32.79% (81/247). There was no significant difference in age, gender, smoking and drinking between the MS group and the non-MS group (P > 0.05). In the MS group, BMI ≥25 kg/m2, hypertension, diabetes, hyperlipidemia, tumor diameter, poorly differentiated renal cancer, high-stage renal cancer, triglyceride, fasting blood glucose, glycated hemoglobin, fasting insulin and homeostasis model assessment index were significantly higher than those in the non-MS group (P < 0.001), while high-density lipoprotein cholesterol (p < 0.005), the islet beta cell secretory index (P < 0.001), well-differentiated renal cell carcinoma (P = 0.009), and low-stage renal cell carcinoma (P = 0.019) were significantly lower than in the non-MS group. Logistic regression analysis showed that hypertension (P = 0.005), diabetes (P = 0.012), and hyperlipidemia (P = 0.021) are independent risk factors for the Fuhrman grade of ccRCC, while diabetes (P = 0.002) and hyperlipidemia (P = 0.007) are independent risk factors for the TNM staging of ccRCC. Conclusion: The patients with ccRCC and MS had higher Fuhrman grade and TNM stage. MS is an independent risk factor for Fuhrman grade and TNM stage of ccRCC. abstract_id: PUBMED:22221334 Albuminuria is an independent risk factor of erectile dysfunction in men with type 2 diabetes. Introduction: Erectile dysfunction (ED) is a frequent comorbidity in men with diabetes and is frequently overlooked in routine clinical evaluation. Albuminuria, a marker of endothelial dysfunction, may be linked to ED. Aim: The study evaluated the association of albuminuria with risk factors of ED in men with type 2 diabetes. Methods: The diagnosis of ED was based on a self-administered questionnaire containing the Sexual Health Inventory for Men. Urinary albumin excretion rate was determined by the urine albumin-to-creatinine ratio (UACR) in spot urine. Main Outcome Measures: The contributions of clinical variables and diabetes-associated complications to the risk of ED were evaluated. Results: Of 666 patients who received the questionnaire, 455 patients completed it. Among them, 82.0%, 28.1%, and 35.8% reported having ED, severe ED, and albuminuria, respectively. The UACR level was significantly higher in the ED (0.20 ± 0.83) and severe ED (0.34 ± 1.18) groups compared with the non-ED group (0.07 ± 0.33). The presence of albuminuria adjusted for age and duration of diabetes was significantly associated with ED (OR = 2.76), and macroalbuminuria had a stronger impact (OR = 4.49) than microalbuminuria (OR = 2.48). The other associated risk factors included hypertension, a higher level of systolic blood pressure, and lower levels of serum hemoglobin and estimated glomerular filtration rate. The presence of retinopathy, neuropathy, insulin therapy, use of a calcium channel blocker, and a higher level of HbA1c further correlated with severe ED. Men with severe ED had a higher prevalence of subnormal testosterone than the non-ED patients. The high-sensitivity C-reactive protein level and the presence of metabolic syndrome were not risk factors. The 211 nonrespondents to the questionnaire had similar or worse risk profiles compared with the ED patients. Conclusion: Albuminuria is an important independent risk factor of ED in men with diabetes after adjustment for age and diabetes mellitus duration. Identification and control of albuminuria and other associated risk factors might play a role in the prevention or reversal of ED.
It has been reported that insulin resistance is associated with essential hypertension and that an aggregation of risk factors (hypertension, dyslipidemia, and glucose intolerance) together with insulin resistance leads to the more frequent appearance of coronary artery disease. We examined the relation between early asymptomatic atherosclerosis and these risk factors in 72 nondiabetic subjects with essential hypertension (41 men, 31 women) aged 50 to 59 years. Intima-media thickness and plaque formation of the carotid artery were assessed by B-mode ultrasonography, and insulin sensitivity was measured by the steady-state plasma glucose method. Lipoprotein profile was analyzed by ultracentrifugation. The intima-media thickness of the common carotid artery significantly correlated with systolic pressure; mean blood pressure; steady-state plasma glucose, indicating insulin resistance; fasting insulin; area under the curve of plasma insulin and glucose; body mass index; apolipoprotein B; apolipoprotein B in low-density lipoprotein; lower ratio of cholesterol to apolipoprotein B of low-density lipoprotein; and decreased high-density lipoprotein cholesterol. By multiple regression analysis, steady-state plasma glucose was the strongest risk, followed by lower high-density lipoprotein and systolic pressure. These three factors accounted for 54.9% of all the risk for increased intima-media thickness of the common carotid artery. In conclusion, insulin resistance was the strongest risk factor for carotid intima-media thickness, followed by lower high-density lipoprotein cholesterol and hypertension. An effort to maintain normal insulin sensitivity is essential for the prevention of early atheromatous lesions in essential hypertension. abstract_id: PUBMED:17017483 Is hyperuricemia a cardiovascular risk factor? The association of elevated serum uric acid (hyperuricemia, gout) with the presence of classical coronary risk factors and coronary artery disease (CAD) or myocardial infarction (MI) has been analysed in many epidemiological studies. Numerous studies have revealed that hypertension, a high body mass index (BMI), lipid disorders (especially a raised triglyceride (TG) level and a low high-density lipoprotein cholesterol (HDL-C) level), and increased creatinine or insulin levels cause hyperuricemia. No association has been observed between hyperuricemia and type 2 diabetes, or between uricemia and glycemia. In some studies, however, the relationship between cholesterol and uric acid levels has not been confirmed. Hyperuricemia has been observed in patients with untreated hypertension. Gout has often occurred together with disorders typical of the metabolic syndrome X. A significant correlation of the serum uric acid level with the presence of CAD and the severity of coronary atherosclerosis confirmed by coronary angiography has been observed in women. Hyperuricemia also has an indirect influence on the progression of CAD through restriction of physical activity, which leads to a sedentary mode of life and to obesity. Obesity is a known risk factor for diabetes, lipid disorders and hypertension. To recapitulate, it is a matter of controversy whether uric acid is an independent cardiovascular risk factor or rather only represents a reinforcement of typical risk factors. abstract_id: PUBMED:30350881 Association of vascular endothelial growth factor expression and polymorphisms with the risk of gestational diabetes mellitus.
Objective: To study the associations of vascular endothelial growth factor (VEGF) expression and its gene polymorphisms with the risk of gestational diabetes mellitus (GDM). Methods: A total of 239 GDM patients (GDM group) and 275 healthy pregnant women (Control group) were included in this study. VEGF genotypes (including rs2146323, rs2010963, rs3025039, rs3025010, and rs833069) were analyzed by TaqMan assay. ELISA was used to determine the serum VEGF levels. The software SHEsis was used to analyze haplotypes. Results: Carriers of the rs2146323 AA and CA+AA genotypes and the A allele, as well as the rs3025039 CT, TT, and CT+TT genotypes and the T allele, showed an increased risk of GDM (all P < 0.05), but the distributions of genotypes and alleles at rs2010963, rs3025010, and rs833069 were not significantly different between GDM patients and controls (all P > 0.05). Notably, the frequencies of the rs2010963-rs833069-rs2146323-rs3025010 haplotypes CAAC, CAAT, CACC, CACT, GACT, and GGCT differed significantly between GDM patients and controls (all P < 0.05). Patients with the rs3025039 CT+TT genotype had higher VEGF levels than those with the CC genotype (all P < 0.05). In addition, age, family history of diabetes, previous GDM, hypertension, pre-pregnancy body mass index, fasting plasma glucose, fasting insulin, homeostasis model assessment (HOMA)-IR, rs2146323 CA+AA, rs3025039 CT+TT, and VEGF expression level were independent risk factors, while HOMA-β was an independent protective factor for GDM (all P < 0.05). Conclusion: The VEGF rs2146323 and rs3025039 polymorphisms and VEGF expression were significantly correlated with the risk of GDM, providing clinical value for GDM assessment and diagnosis. abstract_id: PUBMED:28414064 Obesity in coronary heart disease: An unaddressed behavioral risk factor. Obesity is an independent risk factor for the development and progression of coronary heart disease (CHD). Over 80% of patients with CHD are overweight or obese. While obesity is often considered a relatively "minor" CHD risk factor, weight loss is a broadly effective risk-factor intervention. Weight loss can profoundly influence a number of "major" risk factors, including hypertension, dyslipidemia and insulin resistance/type 2 diabetes mellitus. Despite its prominence as a risk factor, most cardiac rehabilitation (CR) programs do not have a specific, targeted intervention to assist patients with weight loss. Consequently, the weight loss that occurs during CR is quite small and unlikely to appreciably alter risk factors. Relying on CR-associated exercise as a sole intervention is an ineffective strategy to promote weight loss. There is evidence, however, that behavioral weight loss (BWL) interventions can be effectively employed in the CR setting. In contrast to programs that do not offer a targeted intervention, studies show that participants in CR-related BWL programs lose significantly more weight. The additional weight loss from the BWL intervention is associated with greater improvements in insulin sensitivity and other components of the metabolic syndrome such as hypertension and lipid abnormalities. As a means of maximizing CHD risk factor reduction, CR programs need to incorporate BWL programs as standard programming for overweight/obese patients. Answer: Based on the provided abstracts, insulin does appear to be associated with hypertension, but whether it is an independent risk factor is not uniformly established across all studies.
In PUBMED:11220285, the study found that hyperinsulinaemia is associated with an increased cardiovascular risk and is related to a number of other cardiovascular risk factors, including hypertension. However, the study notes that while high insulin levels carried an increased risk for cardiovascular and coronary heart disease mortality, this was not independent of other risk factors. PUBMED:15607067 suggests that hyperinsulinemia is an independent risk factor for high blood pressure (HBP), as the study found obesity, dyslipidemia, hyperleptinemia, and hyperinsulinemia to be independent risk factors for HBP in a univariate and multivariate analysis. PUBMED:8843883 also supports the idea that insulin resistance, which is typically associated with higher levels of insulin, is an independent risk factor for carotid wall thickening, a surrogate marker for cardiovascular disease which includes hypertension. However, it is important to note that insulin resistance and hyperinsulinemia are often part of a cluster of metabolic abnormalities that include hypertension, and it can be challenging to completely disentangle these factors to establish insulin as an independent risk factor. The presence of other conditions, such as obesity and dyslipidemia, can also contribute to the development of hypertension, and these conditions are often associated with elevated insulin levels (PUBMED:35023952, PUBMED:32509704). In summary, while there is evidence to suggest that insulin and insulin resistance are associated with hypertension, the extent to which insulin acts as an independent risk factor for hypertension may vary and is influenced by the presence of other cardiovascular risk factors.
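A side note on the insulin-resistance indices cited above: the metabolic-syndrome and GDM abstracts quantify insulin resistance with the homeostasis model assessment (HOMA) but do not spell out the calculation. The following are the standard HOMA approximations from the general literature, not formulas given in these abstracts:

\[
\text{HOMA-IR} = \frac{\text{fasting insulin } (\mu\text{U/mL}) \times \text{fasting glucose } (\text{mmol/L})}{22.5},
\qquad
\text{HOMA-}\beta = \frac{20 \times \text{fasting insulin } (\mu\text{U/mL})}{\text{fasting glucose } (\text{mmol/L}) - 3.5}
\]

Higher HOMA-IR values indicate greater insulin resistance, which is why elevated fasting insulin and the HOMA index appear side by side among the reported risk factors.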
Instruction: MRI in the early stage of spinal cord injury: does it have clinical relevance? Abstracts: abstract_id: PUBMED:27598689 Diffusion MRI of the spinal cord: from structural studies to pathology. Diffusion MRI is extensively used to study brain microarchitecture and pathologies, and water diffusion appears highly anisotropic in the white matter (WM) of the spinal cord (SC). Despite these facts, the use of diffusion MRI to study the SC, which has increased in recent years, is much less common than that in the brain. In the present review, after a brief outline of early studies of diffusion MRI (DWI) and diffusion tensor MRI (DTI) of the SC, we provide a short survey on DTI and on diffusion MRI methods beyond the tensor that have been used to study SC microstructure and pathologies. After introducing the porous view of WM and describing the q-space approach and q-space diffusion MRI (QSI), we describe other methodologies that can be applied to study the SC. Selected applications of the use of DTI, QSI, and other more advanced diffusion MRI methods to study SC microstructure and pathologies are presented, with some emphasis on the use of less conventional diffusion methodologies. Because of length constraints, we concentrate on structural studies and on a few selected pathologies. Examples of the use of diffusion MRI to study dysmyelination, demyelination as in experimental autoimmune encephalomyelitis and multiple sclerosis, amyotrophic lateral sclerosis, and traumatic SC injury are presented. We conclude with a brief summary and a discussion of challenges and future directions for diffusion MRI of the SC. Copyright © 2016 John Wiley &amp; Sons, Ltd. abstract_id: PUBMED:17436185 MRI in the early stage of spinal cord injury: does it have clinical relevance? An experimental study Purpose: An experimental technique for producing a ventral spinal trauma which imitates a slipped intervertebral disc or a fractured vertebra was evaluated using magnetic resonance imaging and histology. The results were evaluated with respect to their clinical importance. Materials And Methods: A total of 69 Wistar rats were used for this study. An inflatable angioplasty balloon device was placed dorsally or ventrally to the spinal cord in order to produce a spinal trauma. 30 rats were used to compare neurological pathologies between ventral and dorsal trauma. 30 animals underwent graded ventral spinal cord compression. Magnetic resonance imaging was performed and the results were compared to histology. Results: Using this technique, the balloon device can be placed ventrally to the spinal cord. The compression time had a direct effect on changes on magnetic resonance images and edema in histology, but a longer compression time did not produce increased bleeding. The T2-weighted MRI scans showed hyperintense changes immediately after spinal compression. Therefore, they are the appropriate way for diagnosing acute spinal injuries. Although the T1-weighted MRI scans did not change after spinal compression, they are important for diagnosing epidural hematomas. Conclusion: With this technique common ventral spinal trauma can be readily recreated. Magnetic resonance imaging allows early detection of edematous changes in experimental spinal cord injuries in the rat. abstract_id: PUBMED:31123335 The relevance of MRI for predicting neurological recovery following cervical traumatic spinal cord injury. Study Design: Retrospective cohort study of 82 patients with cervical traumatic spinal cord injury (TSCI). 
Objectives: To determine the relevance of preoperative MRI for predicting neurological recovery following cervical TSCI. Setting: Level I trauma center specialized in TSCI. Methods: The following three MRI parameters were assessed: presence of an intramedullary hemorrhage, intramedullary lesion length and maximal spinal cord compression (MSCC). Analyses were performed to assess the relationship between MRI parameters and three neurological outcomes: ASIA motor score (AMS), improvement by at least one ASIA impairment scale (AIS) grade (conversion of AIS grade), and reaching AIS grade D or E. Results: Predicting AMS based on initial AIS grade and intramedullary hemorrhage resulted in a validation R-squared of 0.662, and of 0.636 when using only the initial AIS grade. Predicting conversion of AIS grade based on initial AIS grade, intramedullary hemorrhage and lesion length resulted in a validation c-index of 0.704, and of 0.727 when using only the initial AIS grade. Predicting the likelihood of a follow-up AIS grade D or E based on initial AIS grade and intramedullary hemorrhage resulted in a validation c-index of 0.903, and of 0.873 when using only the initial AIS grade. Conclusions: Intramedullary hemorrhage and lesion length assessed from preoperative MRI were predictors of the neurological recovery following cervical TSCI. However, the clinical benefit of these MRI parameters to predict the neurological recovery remains limited when the initial AIS grade is available, confirming that the initial neurological status remains the most important predictor of the neurological outcome. abstract_id: PUBMED:25798092 Correlation of in vivo and ex vivo (1)H-MRI with histology in two severities of mouse spinal cord injury. Spinal cord injury (SCI) is a debilitating neuropathology with no effective treatment. Magnetic resonance imaging (MRI) technology is the only method used to assess the impact of an injury on the structure and function of the human spinal cord. Moreover, in pre-clinical SCI research, MRI is a non-invasive method with great translational potential since it provides relevant longitudinal assessment of anatomical and structural alterations induced by an injury. It is only recently that MRI techniques have been effectively used for the follow-up of SCI in rodents. However, the vast majority of these studies have been carried out on rats, and when conducted in mice, the contusion injury model was predominantly chosen. Due to the remarkable potential of transgenic mice for studying the pathophysiology of SCI, we examined the use of both in vivo and ex vivo (1)H-MRI (9.4 T) in two severities of mouse SCI (hemisection and over-hemisection) and documented their correlation with histological assessments. We demonstrated that a clear distinction between the two injury severities is possible using in vivo and ex vivo (1)H-MRI and that ex vivo MR images closely correlate with histology. Moreover, tissue modifications at a remote location from the lesion epicenter were identified by conventional ex vivo MRI analysis. Therefore, in vivo MRI has the potential to accurately identify in mice the progression of tissue alterations induced by SCI and is usefully complemented by ex vivo MRI examination. This combination of in vivo and ex vivo MRI follow-up associated with histopathological assessment provides a valuable approach for further studies intended to evaluate therapeutic strategies on SCI.
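For readers unfamiliar with the validation c-index values quoted in the cervical TSCI abstract above, the concordance index (Harrell's C) is the standard discrimination measure for such prognostic models; the generic definition below is offered as a reference point and is not stated in the abstract itself:

\[
C = \Pr\left(\hat{r}_i > \hat{r}_j \;\middle|\; \text{subject } i \text{ has a worse observed outcome than subject } j\right)
\]

where \(\hat{r}\) is the model's predicted risk for a randomly drawn pair of subjects with different outcomes; C = 0.5 corresponds to chance-level discrimination and C = 1.0 to perfect ranking, so the reported values of roughly 0.70 to 0.90 indicate moderate to strong discrimination.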
abstract_id: PUBMED:28769787 A Combination of Ex vivo Diffusion MRI and Multiphoton to Study Microglia/Monocytes Alterations after Spinal Cord Injury. Central nervous system (CNS) injury has been observed to lead to microglia activation and monocytes infiltration at the lesion site. Ex vivo diffusion magnetic resonance imaging (diffusion MRI or DWI) allows detailed examination of CNS tissues, and recent advances in clearing procedures allow detailed imaging of fluorescent-labeled cells at high resolution. No study has yet combined ex vivo diffusion MRI and clearing procedures to establish a possible link between microglia/monocytes response and diffusion coefficient in the context of spinal cord injury (SCI). We carried out ex vivo MRI of the spinal cord at different time-points after spinal cord transection followed by tetrahydrofuran based clearing and examined the density and morphology of microglia/monocytes using two-photon microscopy. Quantitative analysis revealed an early marked increase in microglial/monocytes density that is associated with an increase in the extension of the lesion measured using diffusion MRI. Morphological examination of microglia/monocytes somata at the lesion site revealed a significant increase in their surface area and volume as early as 72 hours post-injury. Time-course analysis showed differential microglial/monocytes response rostral and caudal to the lesion site. Microglia/monocytes showed a decrease in reactivity over time caudal to the lesion site, but an increase was observed rostrally. Direct comparison of microglia/monocytes morphology, obtained through multiphoton, and the longitudinal apparent diffusion coefficient (ADC), measured with diffusion MRI, highlighted that axonal integrity does not correlate with the density of microglia/monocytes or their somata morphology. We emphasize that differential microglial/monocytes reactivity rostral and caudal to the lesion site may thus coincide, at least partially, with reported temporal differences in debris clearance. Our study demonstrates that the combination of ex vivo diffusion MRI and two-photon microscopy may be used to follow structural tissue alteration. Lesion extension coincides with microglia/monocytes density; however, a direct relationship between ADC and microglia/monocytes density and morphology was not observed. We highlighted a differential rostro-caudal microglia/monocytes reactivity that may correspond to a temporal difference in debris clearance and axonal integrity. Thus, potential therapeutic strategies targeting microglia/monocytes after SCI may need to be adjusted not only with the time after injury but also relative to the location to the lesion site. abstract_id: PUBMED:32564231 Can MRI findings predict the outcome of cervical spinal cord Injury? a systematic review. Introduction: MRI is the established gold standard for imaging acute spinal cord injury (SCI). Our aim was to identify the prognostic value, in terms of neurological outcome, of extradural and intradural features detected on MRI performed acutely following traumatic cervical SCI. Materials And Methods: Several databases were systematically searched to identify potentially eligible articles until December 2019. Using a standard PRISMA template, 2606 articles were initially identified. Results: A final 6 full-text articles met the inclusion criteria and were analyzed. 
An extradural factor, namely the maximal spinal cord compression, was associated with poor neurological outcome and was statistically significant (P = 0.02 and P = 0.001 in 2 out of 3 studies). The intradural factors of length of the cord edema (P = 0.001, P = 0.006, and P < 0.001 in 3 studies), intramedullary hemorrhage (P = 0.002, P < 0.001, P < 0.001, and P = 0.002 in 4 studies), and the length of intramedullary hemorrhage (P = 0.028, P = 0.022 in 2 studies) also significantly correlated with poor neurological recovery at follow-up. Conclusion: While early MRI is established as the gold standard imaging of acute spinal trauma, it also provides prognostic value for neurological recovery. From our systematic review, there is a strong association of the extradural finding of maximal spinal cord compression and the intradural MRI findings of length of cord edema, intramedullary hemorrhage, and length of intramedullary hemorrhage with neurological recovery in traumatic cervical spinal cord injuries. Level Of Evidence: III. abstract_id: PUBMED:33098256 Quantification of Myelinated Nerve Fraction and Degeneration in Spinal Cord Neuropil by SHIFT MRI. Background: Neurodegeneration is a complex cellular process linked to prompt changes in myelin integrity and gradual neuron loss. Current imaging techniques offer estimations of myelin volumes in lesions/remyelinated areas but are limited in detecting subtle injury. Purpose: To investigate whether measurements detected by a signal hierarchically isolated as a function of time-to-echo (SHIFT) MRI technique can determine changes in myelin integrity and fiber axolemma. Study Type: Prospective animal model. Animal Model: Surgically demyelinated spinal cord (SC) injury model in rodents (n = 6). Field Strength/sequence: Gradient-echo spin-echo at 3T. Assessment: Multicompartment T2 relaxations were computed by SHIFT MRI in 75-micron-resolution images of the SC injury penumbra region 2 weeks post-trauma. G-ratio and axolemma delamination were assessed by transmission electron microscopy (TEM) in intact and injured samples. SC myelinated nerve fraction was computed by SHIFT MRI prospectively and assessed histologically. Statistical Tests: Relations between SHIFT-isolated T2 components and TEM measurements were studied using linear regression and t-tests. Pearson's correlation and significance were computed to determine SHIFT's sensitivity for detecting the myelinated fiber ratio in gray matter. Regularized least-squares-based ranking analysis was employed to determine SHIFT MRI's ability to discern intact and injured myelinated nerves. Results: Biexponential signals isolated by SHIFT MRI for intact vs. lesion penumbra exhibited changes in T2, shifting from intermediate components (25 ± 2 msec) to long (43 ± 11 msec) in white matter, and similarly in gray matter regions-of-interest (31 ± 2 to 46 ± 16 msec). These changes correlated highly with TEM g-ratio and axon delamination measurements (P < 0.05). Changes in short T2 components were observed but were not statistically significant (8.5 ± 0.5 to 7 ± 3 msec, P = 0.445, and 4.0 ± 0.9 to 7 ± 3 msec, P = 0.075, respectively). SHIFT MRI's ability to detect myelinated fibers within gray matter was confirmed (P < 0.001). Data Conclusion: Changes detected by SHIFT MRI are associated with abnormal intermembrane spaces formed upon mild injury, directly correlated with early loss of neural integrity. Level of Evidence 1 Technical Efficacy Stage 2.
abstract_id: PUBMED:38251658 Diffusion Weighted MRI of Spinal Cord Injuries after Instrumented Fusion Stabilization. Diffusion-weighted magnetic resonance imaging (DW-MRI) is a promising technique for assessing spinal cord injury that has historically been challenged by the presence of metallic stabilization hardware. This study leverages recent advances in metal-artifact resistant multi-spectral DW-MRI to enable diffusion quantification throughout the spinal cord even after fusion stabilization. Twelve cervical spinal cord injury subjects treated with fusion stabilization and 49 asymptomatic controls underwent multi-spectral DW-MRI evaluation. Apparent diffusion coefficient (ADC) values were calculated in axial cord sections. Statistical modeling assessed ADC differences across cohorts and within distinct cord regions of the SCI subjects (at, above or below the injured level). Computed models accounted for subject demographics and injury characteristics. ADC was found to be elevated at injured levels compared to non-injured levels (z = 3.2, p = 0.001), with ADC at injured levels decreasing over time since injury (z = -9.2, p < 0.001). Below the injury level, ADC was reduced relative to controls (z = -4.4, p < 0.001), with greater reductions after more severe injuries that correlated with lower extremity motor scores (z = 2.56, p = 0.012). By enabling diffusion analysis near fusion hardware, the multi-spectral DW-MRI technique allowed intuitive quantification of cord diffusion changes after spinal cord injury both at and away from injured levels. This demonstrates the approach's potential for assessing post-surgical spinal cord integrity throughout stabilized regions. abstract_id: PUBMED:31001195 Three-Dimensional In vivo Magnetic Resonance Imaging (MRI) of Mouse Facial Nerve Regeneration. MRI (magnetic resonance imaging) is an indispensable tool in the diagnosis of central nervous system (CNS) disorders such as spinal cord injury and multiple sclerosis (MS). In contrast, diagnosis of peripheral nerve injuries largely depends on clinical and electrophysiological parameters. Thus, MRI is currently not regularly used, in part owing to small nerve calibers and isointensity with surrounding tissue such as muscles. In this study we performed translational MRI research in mice to establish a novel MRI protocol visualizing intact and injured peripheral nerves in a non-invasive manner without contrast agents. With this protocol we were able to image even very small nerves and nerve branches such as the mouse facial nerve (diameter 100-300 μm) at highest spatial resolution. Analysis was performed in the same animal in a longitudinal study spanning 3 weeks after injury. Nerve injury caused a hyperintense signal in T2-weighted images, and an increase in the size of the proximal and distal nerve stumps was observed. Further hyperintense signal was observed in a bulb-like structure in the lesion site, which correlated histologically with the production of fibrotic tissue and immune cell infiltration. The longitudinal MR representation of the facial nerve lesions correlated well with physiological recovery of nerve function as quantified by whisker movement. In summary, we provide a novel protocol in rodents allowing for non-invasive, non-contrast-enhanced, high-resolution MR imaging of small peripheral nerves longitudinally over several weeks. This protocol might further help to establish MRI as an important diagnostic and post-surgery follow-up tool to monitor peripheral nerve injuries in humans.
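The ADC values reported in the DW-MRI abstract above come from the usual mono-exponential diffusion model; the relation below is the standard one from the diffusion-MRI literature and is not spelled out in the abstract. With S_0 the signal without diffusion weighting and b the diffusion-weighting factor:

\[
S(b) = S_0 \, e^{-b \cdot \mathrm{ADC}}
\quad\Longrightarrow\quad
\mathrm{ADC} = \frac{1}{b}\,\ln\frac{S_0}{S(b)}
\]

so an elevated ADC at injured levels reflects faster signal decay with b, consistent with less restricted water diffusion in damaged cord tissue.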
abstract_id: PUBMED:37926719 Evaluating tissue injury in cervical spondylotic myelopathy with spinal cord MRI: a systematic review. Purpose: Cervical Spondylotic Myelopathy (CSM) is a degenerative condition that leads to loss of cervical spinal cord (CSC) integrity. Various spinal cord Magnetic Resonance Imaging (MRI) methods can identify and characterize the extent of this damage. This systematic review aimed to evaluate the diagnostic, biomarker, and predictive utilities of different spinal cord MRI methods in clinical research studies of CSM. The aim was to provide a comprehensive understanding of the progress in this direction for future studies and effective diagnosis and management of CSM. Methods: A comprehensive literature search was conducted on PubMed and EMBASE from 2010 to 2022 according to PRISMA guidelines. Studies with non-human subjects, less than 3T magnetic field strength, non-clinical design, or not quantitatively focusing on the structural integrity of the CSC were excluded. The extracted data from each study included demographics, disease severity, MRI machine characteristics, quantitative metrics, and key findings in terms of the diagnostic, biomarker, and predictive utilities of each MRI method. The risk of bias was assessed using the guide from AHRQ. The quality of evidence was assessed separately for each type of utility for different MRI methods using GRADE. Results: Forty-seven studies met the inclusion criteria, utilizing diffusion tensor imaging (DTI) (n = 39), magnetization transfer (MT) (n = 6), MR spectroscopy (n = 3), and myelin water imaging (n = 1), as well as a combination of MRI methods (n = 12). The metric fractional anisotropy (FA) showed the highest potential in all facets of utility, followed by mean diffusivity. Other promising metrics included the MT ratio and intracellular volume fraction, especially in multimodal studies. However, the level of evidence for these promising metrics was low due to the small number of studies. Some studies, mainly DTI, also reported the usefulness of spinal cord MRI in mild CSM. Conclusions: Spinal cord MRI methods can potentially facilitate the diagnosis and management of CSM by quantitatively interrogating the structural integrity of the CSC. DTI is the most promising MRI method, and other techniques have also shown promise, especially in multimodal configurations. However, this field is in its early stages, and more studies are needed to establish the usefulness of spinal cord MRI in CSM. Answer: Yes, MRI in the early stage of spinal cord injury does have clinical relevance. Magnetic resonance imaging (MRI) is a critical tool for the early detection of edematous changes in experimental spinal cord injuries in rats, as demonstrated by a study where T2-weighted MRI scans showed hyperintense changes immediately after spinal compression, indicating their appropriateness for diagnosing acute spinal injuries (PUBMED:17436185). Additionally, preoperative MRI has been found to be relevant in predicting neurological recovery following cervical traumatic spinal cord injury (TSCI), with intramedullary hemorrhage and lesion length being predictors of neurological recovery. However, the clinical benefit of these MRI parameters is limited when the initial neurological status is known, confirming that the initial neurological status remains the most important predictor of the neurological outcome (PUBMED:31123335).
Furthermore, MRI findings have been shown to predict the outcome of cervical spinal cord injury, with extradural and intradural features detected on MRI performed acutely following traumatic cervical SCI being strongly associated with neurological recovery. Specifically, maximal spinal cord compression, length of cord edema, intramedullary hemorrhage, and length of intramedullary hemorrhage have all been correlated with neurological recovery in traumatic cervical spinal cord injuries (PUBMED:32564231). In summary, MRI is not only established as the gold standard imaging of acute spinal trauma but also provides prognostic value on the neurological recovery, making it clinically relevant in the early stages of spinal cord injury.
Instruction: Is methyl prednisolone useful in acute transverse myelitis? Abstracts: abstract_id: PUBMED:32292451 Role of Methyl Prednisolone in Longitudinal Extensive Transverse Myelitis (LETM) in Children. Objective: The role of methyl prednisolone in longitudinal extensive transverse myelitis in children has not been fully explored in developing countries like Pakistan. This is therefore the first study aimed at evaluating the efficacy of methyl prednisolone in longitudinal extensive transverse myelitis in children. Methods: This was a quasi-experimental, hospital-based, descriptive prospective study. Data were collected from 34 children admitted to the Paediatric Neurology department through the outpatient/emergency department of the Children's Hospital and the Institute of Child Health, Lahore, over a period of one year from January 2018 to December 2018. Children fulfilling the inclusion criteria were observed before and after receiving methyl prednisolone 30 mg/kg/dose (maximum dose one gram irrespective of body weight) once daily for five days as an intravenous infusion. Results: Complete recovery was seen in 41.2% while 58.8% showed partial recovery. The correlation of response to treatment (recovery) with gender, area of spinal cord involvement, muscle power and autonomic dysfunction was significant at the five percent level according to the chi-square test. Conclusion: Early consideration and administration of methyl prednisolone in longitudinally extensive transverse myelitis in children can be beneficial and can help to reduce the morbidity. abstract_id: PUBMED:11571658 Is methyl prednisolone useful in acute transverse myelitis? Study Design: Hospital based observational study. Objectives: To evaluate the role of methyl prednisolone (MPS) in the management of acute transverse myelitis (ATM). Methods: Twenty-one patients with ATM were included in a prospective hospital based study during 1992-1997. All the patients underwent neurological examination, spinal MRI, somatosensory and motor evoked potentials of both upper and lower limbs and concentric needle EMG study. Twelve consecutive patients managed during 1992-1994 did not receive MPS therapy, and nine consecutive patients managed during 1995-1997 received MPS therapy at a dose of 500 mg i.v. for 5 days. The clinical and neurophysiological studies were repeated 3 months later. The outcome was defined on the basis of the Barthel index (BI) score at the end of 3 months as good (BI ≥ 12) or poor (BI < 12). Results: The age of the MPS group was 25.5 years (range 12-42) and three were female. The age of the non-MPS group was 33.5 years (range 16-70) and two were female. In the MPS group, 33% had a poor outcome compared to 67% in the non-MPS group. In the MPS group, the mean admission BI score was 7.3, which improved to 14.6 after MPS therapy. In the non-MPS group, the admission BI score was 3.2, which improved to 9.6 at 3 month follow-up. In patients with complete paraplegia, evidence of denervation on EMG and unrecordable central motor conduction time to the lower limb and tibial SEP were associated with poor outcome irrespective of MPS treatment. Global test statistics did not suggest a beneficial role of MPS therapy in the outcome of ATM. Conclusion: Our results do not suggest a beneficial role of methyl prednisolone on the 3 month outcome of ATM. abstract_id: PUBMED:10490859 Clinical and evoked potential changes in acute transverse myelitis following methyl prednisolone. Study Design: Observational study with sequential follow-up.
Objective: To study the role of somatosensory evoked potential (SEP) and motor evoked potential (MEP) in monitoring the effect of methyl prednisolone (MPS) therapy in acute transverse myelitis (ATM). Setting: Tertiary care referral teaching hospital at Lucknow, India. Methods: The present study included nine patients with ATM, aged 12 to 42 years, three of whom were female. They were subjected to clinical examination, median and tibial SEP, and central motor conduction time (CMCT) to the upper and lower limbs. The clinical and evoked potential studies were repeated after 7 and 90 days of intravenous methyl prednisolone treatment given for 5 days. The outcome was defined on the basis of a 3 month Barthel Index (BI) score as poor (BI < 12) or good (BI ≥ 12). Results: All the patients had varying degrees of leg weakness ranging between grade 0 and 4 on the Medical Research Council scale. Upper limbs were weak in four patients. Pinprick and joint position sensations in the lower limb were impaired in all patients. Central motor conduction time to the upper limb was abnormal in two patients and to the lower limbs in eight patients. Median SEPs were normal in all patients and tibial SEPs in two patients. On the seventh day follow-up, muscle power had improved in six patients, which correlated with CMCT to tibialis anterior in five patients. Joint position sense improved in two patients on the seventh day but there was no further improvement at 3 months. Tibial SEP, however, improved in four patients on the seventh day and in six patients at 3 months. Following methyl prednisolone therapy, both sensory and motor functions improved, but the improvement was more pronounced and more frequent at 3 months compared to the seventh day. At the 3 month follow-up, six patients had good recovery and three had poor recovery. Conclusion: Evoked potential studies provide additional objective means for monitoring the effect of therapy in ATM. abstract_id: PUBMED:26611483 Longitudinally extensive transverse myelitis with anti-NMDA receptor antibodies during a systemic lupus erythematosus flare-up. Transverse myelitis (TM) with systemic lupus erythematosus (SLE) has been linked to the presence of autoantibodies (eg, antiaquaporin 4 (AQP4) and anticardiolipin (aCL)) and SLE-induced secondary vasculitis, but the aetiology remains incompletely understood. A 48-year-old Japanese man with a 6-year history of poorly controlled SLE had stopped glucocorticoid therapy 1 year before admission. Three days before admission, he developed flaccid paraplegia. Spinal MRI showed a longitudinally hyperintense T2 grey matter lesion from the level of Th4 to the conus medullaris, which was considered longitudinally extensive TM (LETM). We administered steroid pulse therapy (methyl-prednisolone 1000 mg/day) for 3 days and prednisolone 50 mg/day. The patient's flaccid paralysis gradually improved. We concluded that the patient's TM was caused by an SLE flare-up, even though we could not completely rule out antiphospholipid syndrome. SLE myelitis is relatively rare and many aetiologies are possible for TM in SLE. abstract_id: PUBMED:17942291 Devic's neuromyelitis optica in children: a case report and review of the literature Unlabelled: Devic neuromyelitis optica (NMO) or Devic's syndrome is an uncommon clinical syndrome associating unilateral or bilateral optic neuritis and transverse myelitis. Although usually reported in adults, childhood cases constitute a distinctive clinical entity.
Case Report: We report a case of NMO occurring in a 9-year-old girl, admitted for paraplegia, sphincter disturbances in the form of acute urinary retention, and a sudden decline in visual acuity. Magnetic resonance imaging (MRI) revealed abnormalities of the spinal cord signal, with hypointensity on T1-weighted images and hyperintensity on T2-weighted images along the spinal cord. However, the cerebral region was normal. Visual evoked potentials were consistent with retrobulbar optic neuropathy. Our patient received corticosteroids (methyl prednisolone) for 5 days followed by oral prednisone. At week three, an immunosuppressant (azathioprine) was added. Clinical outcome was favourable, with resolution of the sphincter disturbances, recovery of visual acuity and progressive resolution of the motor deficits. Conclusion: Pediatric Devic's NMO is rare. It is a distinct clinical entity with an excellent visual and neurological prognosis. Review of the literature shows that recurrence is rare in children and seems to be without long-term sequelae with corticosteroid and immunosuppressant therapy. abstract_id: PUBMED:38268621 An unusual case of transverse myelitis in dengue fever: A case report from Nepal. Key Clinical Message: Dengue fever can have various neurological complications, but involvement of the spinal cord is unusual. This is a case in which the patient had transverse myelitis as a complication of dengue fever. Abstract: Dengue fever can have various neurological complications, but involvement of the spinal cord is unusual. We report the case of a 49-year-old female, a known case of dengue fever, who presented with urinary retention, inability to stand or walk, and a tingling sensation in both lower limbs. Her vibration and joint position sensation were reduced below the T2 level, along with altered reflexes, but MRI could not explain the examination findings. She was diagnosed clinically with transverse myelitis (TM) in the background of dengue fever. She showed drastic improvement with steroid treatment. Because TM is a rare complication in a patient with dengue fever, and given the paucity of similar case reports in Nepal, this case report is of value for the scientific community. abstract_id: PUBMED:34754669 An Uncommon Cause of Acute Transverse Myelitis Following Acinetobacter Baumannii-Associated UTI, Which Responded to Intravenous Pulse Methylprednisolone Alone. Acute transverse myelitis (ATM) is a non-compressive localized inflammation involving one or more levels of the spinal cord, due to various etiologies, characterized by motor weakness, sensory impairments, and autonomic dysfunction. It can be idiopathic (primary) or secondary to infection, autoimmune disorder or connective tissue disorder, and uncommonly follows vaccination, a possibility that came to the limelight during the ongoing massive vaccine drive against coronavirus disease 2019 (COVID-19). We report the case of a 21-year-old male who presented with gradually progressive weakness of both lower limbs following a urinary tract infection (UTI), with a history of similar illness in the family, which improved with high-dose methylprednisolone and antibiotic therapy followed by physical rehabilitation. A diagnosis of long-segment ATM, possibly following UTI, was suggested after ruling out other secondary causes and was confirmed by magnetic resonance imaging (MRI) of the spinal cord.
Asymmetric symptoms and signs, small lesions involving fewer than two vertebral segments, a peripheral lesion, the presence of Lhermitte's sign and a relapsing-remitting course distinguish ATM from the more debilitating disorder multiple sclerosis (MS) in patients with a family history. Infections such as UTI can precipitate ATM, and UTI may develop along with neurogenic lower urinary tract dysfunction (NLUTD) even after recovery from the motor and sensory impairment. Patients with acute transverse myelitis need regular follow-up, particularly those with a subacute presentation and a positive family history, to rule out relapse and the development of multiple sclerosis. Common etiologies like UTI may precipitate uncommon disorders like ATM. abstract_id: PUBMED:23196573 Neuromyelitis optica Neuromyelitis optica (NMO), an inflammatory demyelinating disease of the central nervous system, is characterized by relapsing optic neuritis and transverse myelitis. The detection of pathogenic anti-aquaporin-4 (AQP4) antibody distinguishes NMO from other demyelinating disorders, such as multiple sclerosis. Although early administration of high-dose intravenous methyl-prednisolone may improve symptoms that occur during relapses, the application of plasmapheresis is often required for sufficient recovery from each attack. Relapses are managed with early steroid treatment, typically 1 g intravenous methyl-prednisolone for 3 to 5 days followed by oral prednisone. Relapses that do not respond to intravenous steroids may benefit from plasmapheresis. Treatments used to date for relapse prevention in NMO have generally been immunosuppressive drugs rather than the immunomodulatory agents commonly used for multiple sclerosis (MS). Although low-dose oral prednisolone is reported to be effective for prevention in Japanese NMO, other countries recommend switching from prednisolone to non-steroidal immunosuppressive agents within six months to avoid the side effects of steroids. abstract_id: PUBMED:16977827 Acute transverse myelitis in children. A case of Acute Transverse Myelitis (ATM) is presented. ATM is a rare disease in childhood. The diagnosis stems from the clinical presentation, cerebrospinal fluid findings, the appearance of the spinal cord on imaging and ruling out differential diagnoses like Guillain-Barre syndrome (GBS) and poliomyelitis. The proposed treatment is intravenous (IV) methyl prednisolone, with variable chances of recovery. A controlled multicenter study is suggested to assess the epidemiology, etiology and prognosis of ATM. abstract_id: PUBMED:19176174 Neuromyelitis optica with atypical cerebral lesions demonstrated by magnetic resonance imaging in a 9-year-old girl Neuromyelitis optica (NMO) is a rare inflammatory disorder characterised by optic neuritis and transverse myelitis. We report a severe pediatric case presenting with impaired vision, tetraparesis, bladder retention and lower extremity pain. Magnetic resonance imaging demonstrated longitudinally extensive lesions of the spinal cord and atypical lesions in both cerebral hemispheres. Cerebrospinal fluid showed pleocytosis. Symptoms responded well to intravenous immunoglobulin G and high-dose methyl prednisolone therapy. Answer: The usefulness of methyl prednisolone in acute transverse myelitis (ATM) appears to be variable according to the provided abstracts. Some studies suggest a beneficial role, while others do not.
A study on the role of methyl prednisolone in longitudinally extensive transverse myelitis (LETM) in children reported that early administration of the drug can be beneficial, with 41.2% of the children achieving complete recovery and 58.8% showing partial recovery (PUBMED:32292451). A pediatric case report of Devic's neuromyelitis optica (NMO), a condition related to transverse myelitis, likewise showed a favorable outcome with the use of methyl prednisolone, with improvements in visual acuity, motor deficits, and sphincter function (PUBMED:17942291). However, a study evaluating the role of methyl prednisolone in the management of ATM did not find a beneficial role for the drug on the 3-month outcome of ATM, with 33% of patients in the methyl prednisolone group having a poor outcome compared to 67% in the non-treated group (PUBMED:11571658). Another observational study with sequential follow-up found that while both sensory and motor functions improved following methyl prednisolone therapy, the improvement was more pronounced at 3 months compared to the seventh day, suggesting some benefit (PUBMED:10490859). In cases of transverse myelitis associated with other conditions, such as systemic lupus erythematosus (SLE) and dengue fever, patients showed improvement with steroid therapy, including methyl prednisolone (PUBMED:26611483, PUBMED:38268621). Additionally, a case of ATM following a urinary tract infection responded well to intravenous pulse methylprednisolone (PUBMED:34754669).
Instruction: Is MRI helpful in assessing the distance of the tumour from the internal os in patients with cervical cancer below FIGO Stage IB2? Abstracts: abstract_id: PUBMED:27012496 Is MRI helpful in assessing the distance of the tumour from the internal os in patients with cervical cancer below FIGO Stage IB2? Aim: To determine the ability of magnetic resonance imaging (MRI) in detecting tumour-free margins from the internal os (IO). Materials And Methods: A database search yielded 79 women with early-stage cervical cancer who underwent radical hysterectomy and preoperative MRI. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of MRI in assessment of ≤5 and &gt;5 mm IO involvement were calculated with histopathological surgical specimen findings considered to be the reference standard. A main and subset analysis was performed. The subset analysis included only those patients who would have been considered for radical trachelectomy. Results: For predicting a distance between the tumour and the IO of ≤5 mm, MRI had a sensitivity of 73%, a specificity of 98.3%, a PPV of 95%, a NPV of 88.1%, and an accuracy of 89.8% for the main analysis, and sensitivity of 81.8%, a specificity of 93.2% a PPV of 69.2% a NPV of 96.5% and an accuracy of 91.4% for the subset analysis. Conclusion: MRI has high specificity, NPV, and accuracy in detecting tumour from the IO, making MRI suitable for treatment planning in patients desiring trachelectomy to preserve fertility. abstract_id: PUBMED:29429800 Definite chemoradiotherapy is a competent treatment option in FIGO stage IB2 cervical cancer compared with radical surgery +/- neoadjuvant chemotherapy. Background: To compare the treatment outcomes of different treatment modalities for International Federation of Gynecology and Obstetrics (FIGO) stage IB2 cervical cancer. Methods: From January 2002 to July 2016, 91 patients with FIGO stage IB2 squamous cell carcinoma, adenocarcinoma or adenosquamous carcinoma of the cervix were enrolled. All of them received one of the following treatment modalities, including intensity-modulated radiotherapy (IMRT) with concurrent platinum-based chemotherapy (CCRT group, n = 27), radical surgery with or without adjuvant treatment (RH group, n = 25), or neoadjuvant chemotherapy followed by radical surgery with or without adjuvant treatment (NACT group, n = 39). Overall survival (OS), disease free survival (DFS), loco-regional failure-free survival (LRFFS) and distant metastasis-free survival (DMFS) were compared among the three different groups. Results: The median follow up durations were 63.3 months for the CCRT group, 83.5 months for the NACT group, and 89.8 months for the RH group, respectively. The 5-year OS, DFS, LRFFS and DMFS for CCRT group vs. NACT group vs. RH group were 80.1% vs. 94.1% vs. 93.8% (p = 0.197), 79.5% vs. 79.3% vs. 91.0% (p = 0.401), 88.1% vs. 81.8% vs. 95.8% (p = 0.253), and 83.3% vs. 88.8% vs. 95.2% (p = 0.422). No significant prognostic factor was found in OS. Age &gt; 48 was significant in predicting poor DFS and DMFS. The non-squamous cell carcinoma was a significant predictor of poor DFS, LRFFS and DMFS. Conclusion: CCRT is a feasible therapeutic option with acceptable acute and chronic treatment-related toxicities for patients who cannot tolerate radical surgery or neoadjuvant chemotherapy. 
abstract_id: PUBMED:34877225 Stage Migration in Cervical Cancer Using the FIGO 2018 Staging System: A Retrospective Survival Analysis Using a Single-Institution Patient Cohort. Introduction The International Federation of Gynecology and Obstetrics (FIGO) changed the staging system for cervical cancer in 2018 and formally allowed cross-sectional imaging for staging purposes. Stage IB is now divided into three substages based on tumor size (IB1 &lt; 2 cm, IB2 2-4 cm and IB3 &gt; 4 cm). The presence of lymph nodes in the pelvis or para-aortic region will upstage the patient to stage IIIC. The purpose of this study was to evaluate the extent of stage migration using the FIGO 2018 staging system for cervical cancer and validate the new staging system by assessing the survival outcomes. Methods An Institutional Review Board-approved and Health Insurance Portability and Accountability Act-compliant retrospective analysis was performed on 158 patients from the cervical cancer database at the University of Mississippi Medical Center, USA. Patients had been treated between January 2010 and December 2018, and they were all staged according to the FIGO 2009 staging system previously. We collected data regarding tumor size, lymph node presence, and extent of metastatic disease in the pretreatment CT, positron emission tomography (PET), or MRI scans and restaged the patients using the FIGO 2018 system. The extent of stage migration was evaluated using the new staging system. We analyzed the three-year overall survival (OS) using both FIGO 2009 and 2018 staging systems for validation purposes. Kaplan-Meier analyses were performed using SPSS version 24. Results Fifty-nine percent of the patients were upstaged when they were restaged using the FIGO 2018 staging system. In the current 2018 staging system, Stage IB3 accounted for 4%, and Stage IIIC accounted for 48% of the patient cohort, while other stages accounted for the rest. The median overall survival of the entire cohort was 20.5 months. There was a change in the survival curves using FIGO 2018 stages compared to those of FIGO 2009. There was a numerical improvement in three-year OS in stages IB and III among the two staging systems; however, it was not statistically significant. Interestingly, the three-year overall survival of Stage IIIC patients was better when compared to Stages III A&amp; B combined (61% vs. 25%, p=0.017). Conclusion The increased availability of cross-sectional imaging across the world has led to recent changes in the FIGO staging system for cervical cancer, which allowed imaging in staging. We identified a significant stage migration in our patient cohort with the FIGO 2018 staging system, but no difference in the three-year overall survival was observed. Local tumor extent may be a worse prognostic indicator than nodal metastasis among stage III patients. abstract_id: PUBMED:35282114 Development and validation of a prognostic nomogram for 2018 FIGO stages IB1, IB2, and IIA1 cervical cancer: a large multicenter study. Background: Nomograms are predictive tools widely used for estimating cancer prognosis. We aimed to develop/validate a nomogram to predict the postsurgical 5-year overall survival (OS) and disease-free survival (DFS) probability for patients with stages IB1, IB2, and IIA1 cervical cancer [2018 International Federation of Gynecology and Obstetrics (FIGO 2018)]. 
Methods: We retrospectively enrolled cervical cancer patients with stages IB1, IB2, and IIA1 disease at 47 hospitals from the Clinical Diagnosis and Treatment for Cervical Cancer in China database. All patients were assigned to either the development or validation cohort (75% of patients used for model construction and 25% used for validation). OS and DFS were defined as the clinical endpoints. Clinicopathological variables were analyzed based on the Cox proportional hazards regression model. A nomogram was established and validated internally (with bootstrapping) and externally, and its performance was assessed according to the concordance index (C-index), receiver-operating characteristic curve, and calibration plot. Results: In total, 4,065 patients were enrolled and assigned to the development cohort (n=3,074) or validation cohort (n=991). The OS nomogram was constructed based on age, FIGO stage, stromal invasion, and lymphovascular space invasion (LVSI). The DFS nomogram was constructed based on the FIGO stage, histological type, stromal invasion, and LVSI. Both nomograms showed greater discrimination than the FIGO 2018 staging system in the development cohort [OS nomogram vs. FIGO 2018: C-index = 0.69 vs. 0.61, area under the curve (AUC): 69.8 vs. 60.3; DFS nomogram vs. FIGO 2018: C-index = 0.64 vs. 0.57, AUC: 62.6 vs. 56.9], and the same results were observed in the validation cohort. Calibration plots demonstrated good agreement between the predicted and actual probabilities of 5-year OS/DFS in the development and validation cohorts. We stratified the patients into 3 subgroups with differences in OS/DFS. Each risk subgroup presented a distinct prognosis. Conclusions: We successfully developed a robust and powerful model for predicting 5-year OS/DFS in stages IB1, IB2, and IIA1 cervical cancer (FIGO 2018) for the first time. Internal and external validation showed that the model had great prediction performance and was superior to the currently utilized FIGO staging system. abstract_id: PUBMED:34106355 Clinico-pathological characteristics of patients with stage IB1-IB2 (FIGO 2018) uterine cervical cancer: a nationwide study in Japan. Objective: The aim of this retrospective analysis was to reveal the clinicopathological findings and clinical outcomes of patients with stage IB1/IB2 (FIGO 2018) uterine cervical cancer. Methods: Based on the database of the Japanese Gynecologic Oncology Group, 2194 patients with stage IB1/IB2 (FIGO 2018) who underwent radical hysterectomy between 1/1/2004 and 12/31/2008 were identified as eligible for this retrospective study. Results: Patients with squamous cell carcinoma had significantly more frequent lympho-vascular space invasion than those with non-squamous cell carcinoma in both stage IB1 and IB2 (stage IB1: 29.1% vs. 17.1%, p < 0.0001; stage IB2: 50.5% vs. 39.7%, p = 0.0009). Among 1262 patients with stage IB1, 61.2% (772/1262) were in the low-risk group and 29.4% (371/1262) were in the intermediate-risk group (single risk: 23.3%, double risks: 6.1%). Of 932 patients with stage IB2, 32.1% (299/932) were in the low-risk group and 59.1% (551/932) were in the intermediate-risk group (single intermediate-risk: 31.0%, double intermediate-risk: 28.1%). The disease-free survival and overall survival rates of stage IB1 patients were significantly better than those of stage IB2 patients (5-year DFS: 94.7% vs. 88.6%, p < 0.001; 5-year OS: 98.5% vs. 95.1%, p < 0.001).
Stage IB1 patients with double intermediate-risk showed significantly worse survival than those with single intermediate-risk (5-year DFS: 96.1% vs. 84.6%, p < 0.001; 5-year OS: 98.9% vs. 93.0%, p = 0.029). Multivariate analysis revealed that double intermediate-risk was the independent prognostic factor in stage IB1, whereas non-squamous cell carcinoma and intermediate-risk were independent prognostic factors in stage IB2. Conclusion: Non-squamous cell carcinoma and intermediate-risk decreased survival in patients with stage IB2, whereas double intermediate-risk had a negative impact on survival in stage IB1. abstract_id: PUBMED:33573078 Diagnostic Value of 18F-FDG PET/MRI for Revised 2018 FIGO Staging in Patients with Cervical Cancer. Purpose: To evaluate the diagnostic potential of PET/MRI with 18F-fluorodeoxyglucose (18F-FDG) in cervical cancer based on the revised 2018 International Federation of Gynecology and Obstetrics (FIGO) staging system. Materials And Methods: Seventy-two patients with biopsy-proven primary cervical cancer underwent pretreatment 18F-FDG PET/MRI, CT, and pelvic MRI. The diagnostic performance of 18F-FDG PET/MRI and MRI for assessing the extent of the primary tumor, and of 18F-FDG PET/MRI and CT for assessing nodal and distant metastases, was evaluated by two experienced readers. Histopathological and follow-up imaging results were used as the gold standard. The McNemar test was employed for statistical analysis. Results: Accuracy for the invasion of the vagina, parametrium, side wall, and adjacent organs was 97.2%, 93.1%, 97.2%, and 100% for 18F-FDG PET/MRI, and 97.2%, 91.7%, 97.2%, and 100% for pelvic MRI, respectively (p > 0.05). Patient-based accuracy for metastasis to pelvic lymph nodes, paraaortic lymph nodes, and distant organs was 95.8%, 98.6%, and 100% for 18F-FDG PET/MRI, and 83.3%, 95.8%, and 97.2% for CT, respectively; the difference for pelvic lymph node metastasis was statistically significant (p < 0.01). Lesion-based sensitivity, specificity, and accuracy for lymph nodes were 83.3%, 95.9%, and 94.8% for 18F-FDG PET/MRI, and 29.2%, 98.9% and 93.1% for CT, respectively; the difference in sensitivity was statistically significant (p < 0.001). After excluding patients diagnosed by conization, accuracy for revised 2018 FIGO staging was significantly better for 18F-FDG PET/MRI (82.1%) than for CT and MRI (60.7%) (p < 0.01). Conclusions: 18F-FDG PET/MRI offers higher diagnostic value for revised 2018 FIGO staging, suggesting that 18F-FDG PET/MRI might provide an optimal diagnostic strategy for preoperative staging. abstract_id: PUBMED:37788511 Brachytherapy and surgery versus surgery alone for IB2 (FIGO 2018) cervical cancers: A FRANCOGYN study. Objective: To evaluate management by brachytherapy first followed by radical hysterectomy (Wertheim type), compared with radical hysterectomy alone (Wertheim type), for the treatment of IB2 cervical cancer. Methods: Data from women with histologically proven FIGO stage IB2 cervical cancer treated between April 1996 and December 2016 were retrospectively abstracted from twelve French institutions with prospectively maintained databases. Results: Of the 211 patients with FIGO stage IB2 cervical cancer without lymph node involvement included, 136 had surgical treatment only and 75 had pelvic lymph node staging and brachytherapy followed by surgery. The surgery-only group had significantly more adjuvant treatment (29 vs. 3; p = 0.0002). A complete response was identified in 61 patients (81%) in the brachytherapy group. Postoperative complications were comparable (63.2% vs.
72%, p = 0.19) and consisted mainly of urinary (36 vs. 27) and digestive (31 vs. 22) complications and lymphoceles (4 vs. 1). Brachytherapy had no benefit in terms of progression-free survival (p = 0.14) or overall survival (p = 0.59). However, for tumors between 20 and 30 mm, preoperative brachytherapy improved recurrence-free survival (p = 0.0095) but not overall survival (p = 0.41). This difference was not observed for larger tumors in terms of either recurrence-free survival (p = 0.55) or overall survival (p = 0.95). Conclusion: Our study found that preoperative brachytherapy had no benefit for stage IB2 cervical cancers in terms of recurrence-free survival or overall survival. For tumor sizes between 2 and 3 cm, brachytherapy improves progression-free survival mainly by reducing pelvic recurrences, without improving overall survival. abstract_id: PUBMED:31354353 Prognostic value of the 2018 FIGO staging system for cervical cancer patients with surgical risk factors. Objective: To determine the predictive value of the 2018 International Federation of Gynecology and Obstetrics (FIGO) staging system for cervical cancer patients with surgical risk factors. Methods: Data of 662 cervical cancer patients (stages IB and IIA) with surgical risk factors treated at Zhejiang Cancer Hospital between 2008 and 2011 were retrospectively reviewed. Univariate log-rank tests and multivariate Cox regression models were adopted to evaluate the relationship between 2018 FIGO stage and survival. Results: On re-staging of patients, 17.3%, 44.5%, 25.4%, and 37.1% of the patients with FIGO 2009 stage IB1, IB2, IIA1, and IIA2, respectively, were upgraded to FIGO 2018 stage IIIC1P, and 2.1%, 3.0%, 3.1%, and 2.1% of patients, respectively, were upgraded to stage IIIC2P. The 5-year overall survival (OS) rates of patients with FIGO 2018 stage IB1, IB2, IB3, IIA1, IIA2, IIIC1P, and IIIC2P were 95.3%, 95.1%, 90.4%, 92.4%, 86.4%, 81.9%, and 56.3%, respectively. The 5-year progression-free survival (PFS) rates were 94.0%, 91.0%, 88.5%, 91.4%, 86.4%, 79.5%, and 43.8%, respectively. The 5-year OS rates of patients with 1-2 positive pelvic lymph nodes (PLNs) and those with >2 positive PLNs were 86.0% and 73.7%, respectively, and the 5-year PFS rates were 84.2% and 70.2%, respectively. OS and PFS of patients with 1-2 positive PLNs in stage IIIC1P were similar to those of patients in stage IIA2 without lymph node metastasis, but significantly better than those of patients with >2 positive PLNs. Multivariate analysis showed FIGO 2018 stage to be an independent prognostic factor for OS and PFS. Conclusion: The 2018 FIGO staging system for cervical cancer appears to be useful for predicting the prognosis of patients with risk factors after radical surgery. Survival of stage IIA1 patients is better than that of stage IB3 patients. Stage IIIC1 is not homogeneous; survival in stage IIIC1P depends on the number of positive PLNs. abstract_id: PUBMED:23234021 Treatment of cervical carcinoma IB2 stage Objective: To establish the overall and disease-free survival (OS and DFS) in patients with FIGO stage IB2 disease treated with primary surgery or operated on after preoperative RT. Material And Methods: Between 2003 and 2011, 98 patients (FIGO stage IB2) were operated on. 83 patients underwent primary surgery (radical hysterectomy class III with pelvic lymph node dissection; group 1) and 11 were operated on after preoperative RT of 30 Gy (group 2). All operated patients received adjuvant RT (52 Gy).
Four patients were operated on after NCT (neoadjuvant chemotherapy). All patients were followed for between 2 and 96 months (median period of 45 months). Results: The actuarial OS and DFS for all patients were estimated as 73.5% and 75.6%, respectively. The OS and DFS for group 1 were estimated as 73.5% and 74.7%, respectively. The OS and DFS for group 2 were estimated as 72.8% and 82%, respectively. Conclusion: Stage IB2 cervical cancer is related to a worse prognosis than FIGO stage IB1 because of the higher incidence of distant metastases. Preoperative RT does not change the oncological outcomes. There is no statistical difference between group 1 and group 2 in terms of survival. It is important to perform paraaortic lymph node dissection and concurrent radiochemotherapy in these patients. abstract_id: PUBMED:37475053 A prognostic model using FIGO 2018 staging and MRI-derived tumor volume to predict long-term outcomes in patients with uterine cervical squamous cell carcinoma who received definitive radiotherapy. Background: Uterine cervical carcinoma is a severe health threat worldwide, especially in China. The International Federation of Gynecology and Obstetrics (FIGO) has revised the staging system, emphasizing the strength of magnetic resonance imaging (MRI). We aimed to investigate long-term prognostic factors for FIGO 2018 stage II-IIIC2r uterine cervical squamous cell carcinoma following definitive radiotherapy and establish a prognostic model using MRI-derived tumor volume. Methods: Patients were restaged according to the FIGO 2018 staging system and randomly grouped into training and validation cohorts (7:3 ratio). Optimal cutoff values of squamous cell carcinoma antigen (SCC-Ag) and tumor volume derived from MRI were generated for the training cohort. A nomogram was constructed based on overall survival (OS) predictors, which were selected using univariate and multivariate analyses. The performance of the nomogram was validated and compared with the FIGO 2018 staging system. Risk stratification cutoff points were generated, and survival curves of low-risk and high-risk groups were compared. Results: We enrolled 396 patients (training set, 277; validation set, 119). The SCC-Ag and MRI-derived tumor volume cutoff values were 11.5 ng/mL and 28.85 cm³, respectively. A nomogram was established based on significant prognostic factors, including SCC-Ag, poor differentiation, tumor volume, chemotherapy, and FIGO 2018 stage. Decision curve analysis indicated that the net benefits of our model were higher. The high-risk group had significantly shorter OS than the low-risk group in both the training (p < 0.0001) and validation sets (p = 0.00055). Conclusions: Our nomogram predicted long-term outcomes of patients with FIGO 2018 stage II-IIIC2r uterine cervical squamous cell carcinoma. This tool can assist gynecologic oncologists and patients in treatment planning and prognosis. Answer: Yes, MRI is helpful in assessing the distance of the tumor from the internal os in patients with cervical cancer below FIGO Stage IB2. A study found that MRI had high specificity, negative predictive value (NPV), and accuracy in detecting tumor-free margins from the internal os, making it suitable for treatment planning in patients desiring trachelectomy to preserve fertility (PUBMED:27012496).
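As an illustration only (not drawn from any of the cited studies), the diagnostic-performance figures quoted above, such as the PET/MRI versus CT comparison in PUBMED:33573078, come down to standard 2x2-table arithmetic plus a McNemar test for paired readings. The Python sketch below shows those calculations on invented counts; the function names and all numbers are hypothetical.

```python
from math import erfc, sqrt

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

def mcnemar_p(b, c):
    """Continuity-corrected McNemar test for two tests read on the same patients.
    b = positive on test A only, c = positive on test B only (discordant pairs)."""
    statistic = (abs(b - c) - 1) ** 2 / (b + c)
    return erfc(sqrt(statistic / 2))  # upper tail of chi-square with 1 df

# Invented counts for one test against the reference standard ...
print(diagnostic_metrics(tp=20, fp=3, fn=4, tn=45))
# ... and invented discordant-pair counts for comparing two tests
print(mcnemar_p(b=12, c=3))
```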
Instruction: The Ulnar Nerve After Surgical Transposition: Can Sonography Define the Reason of Persisting Neuropathy? Abstracts: abstract_id: PUBMED:26090730 The Ulnar Nerve After Surgical Transposition: Can Sonography Define the Reason of Persisting Neuropathy? Purpose: Ulnar nerve neuropathy is mainly caused by compression at the level of the cubital tunnel. Two main approaches are currently known for the surgical treatment of this condition: decompression of the nerve in its usual position or transposition to the ulnar flexor side. This study was performed to define the usefulness of ultrasound in patients with persisting symptoms after ulnar nerve transposition. Materials And Methods: We present the data of 8 subjects with persisting symptoms after nerve transposition due to compressive neuropathy. The cross-section areas (CSA) and texture changes were recorded. Each ulnar nerve was divided into 6 segments - 3 segments at the proximal pass and 3 segments at the distal pass through the subcutaneous fascia. Results: Texture changes were recorded in a mean of 4.6 (76.7%) ± 1.2 segments and outer nerve sheath blurring in a mean of 4.1 (68.3%) ± 1.1 segments. Caliber changes were found in the course of the nerve based on the 6 segments: a mean CSA of 7.45 mm² ± 2.24 was found proximal to the upper fascial passage (PUF), a mean CSA of 11.96 mm² ± 3.61 at the upper fascial passage (UF), a mean CSA of 11.49 mm² ± 8.16 distal to the upper fascial passage (DUF), a mean CSA of 10.84 mm² ± 4.73 proximal to the lower fascial passage (PLF), a mean CSA of 12.12 mm² ± 5 at the lower fascial passage (LF), and a mean CSA of 7.89 mm² ± 3.42 distal to the lower fascial passage (DLF). All transposed nerves presented relevant kinks at the UF, and 6 nerves presented relevant kinks at the LF. Conclusion: In cases of secondary ulnar neuropathy after nerve transposition, ultrasound can reliably assess the actual "situation" of the nerve and thus at least ease the decision for secondary surgery. abstract_id: PUBMED:28890032 Ulnar nerve entrapment at the elbow. A surgical series and a systematic review of the literature. Chronic compression of the ulnar nerve at the elbow is the second most common entrapment neuropathy. Various surgical options have been described. Timing of surgery is also debated. In this study we report the long-term results of a consecutive surgical series of anterior subcutaneous transpositions and review the pertinent literature. Sixty consecutive patients underwent anterior subcutaneous transposition at our institution to treat ulnar nerve compression at the elbow. The McGowan scale was used in the neurological exam before surgery. The Bishop rating system was used to assess outcome. Seventy-eight percent of patients scored good-excellent. None of the patients worsened. No complications and no recurrences were reported. Young age and good pre-operative neurological status (McGowan grade 1) were predictive of favorable outcome at both univariate and multivariate analysis. No differences in outcome were observed between patients with intermediate (McGowan grade 2) and severe (McGowan grade 3) neuropathy. Thirty-four studies assessing outcome of different surgical techniques were reviewed. Anterior subcutaneous transposition had the lowest recurrence rate with an excellent effectiveness and safety profile. The favorable predictive role for outcome of preoperative neurological status was confirmed.
The good long-term clinical results of the present series and the results of the literature analysis confirm the value of anterior subcutaneous transposition of the ulnar nerve at the elbow. This technique is particularly effective in the most severe compressions, where outcomes are comparable with those in intermediate neuropathy cases. Moreover, our results suggest an aggressive attitude towards ulnar nerve compression at the elbow, particularly in younger patients. abstract_id: PUBMED:23100298 Ulnar nerve strain at the elbow in patients with cubital tunnel syndrome: effect of simple decompression. Simple decompression of the ulnar nerve at the elbow has not been shown to reduce nerve strain in cadavers. In this study, ulnar nerve strain at the elbow was measured intraoperatively in 11 patients with cubital tunnel syndrome, before and after simple decompression. Statistical analysis was performed using a paired Student's t-test. Mean ulnar nerve strain before and after simple decompression was 30.5% (range 9% to 69%) and 5.5% (range -2% to 11%), respectively; this difference was statistically significant (p < 0.01) with a statistical power of 96%. Simple decompression reduced ulnar nerve strain in all patients, by an average of 24.5%. Our results suggest that the pathophysiology of cubital tunnel syndrome may be multifactorial, being neither a simple compression neuropathy nor a simple traction neuropathy, and simple decompression may be a favourable surgical procedure for cubital tunnel syndrome in terms of decompression and reduction of strain in the ulnar nerve. abstract_id: PUBMED:32410148 Management of the Ulnar Nerve in Throwing Athletes. Purpose Of Review: During overhead throwing, there are tremendous forces placed on the medial elbow that can result in injury to the ulnar nerve. The purpose of this article is to review the anatomy, biomechanics, and appropriate clinical care of an overhead throwing athlete suffering from injury to their ulnar nerve. The authors' preferred clinical work-up, non-operative care, and operative care are included in this review. Recent Findings: More recent findings support the use of subcutaneous transposition instead of submuscular transposition, and performing a transposition only in the setting of specific pre-operative findings. Subcutaneous transpositions have led to decreased rates of post-operative symptoms and complications. Understanding the complexity of the medial elbow is essential to making an accurate diagnosis of ulnar neuritis/neuropathy in the overhead throwing athlete. In general, non-operative management has been shown to be successful; however, in refractory or recurrent cases of ulnar neuritis, ulnar nerve decompression with or without transposition is a surgical procedure that has been shown to provide these athletes with excellent outcomes. Continued refinement of surgical indications and surgical technique will allow a high percentage of athletes to return to competitive sports following surgery for ulnar neuropathy. abstract_id: PUBMED:38356457 Ultrasonographic evaluation of ulnar nerve morphology in patients with ulnar nerve instability. Introduction/aims: Ulnar nerve instability (UNI) in the retroepicondylar groove is described as nerve subluxation or dislocation. In this study, considering that instability may cause chronic ulnar nerve damage by increasing the friction risk, we aimed to examine the effects of UNI on nerve morphology ultrasonographically.
Methods: Asymptomatic patients with clinical suspicion of UNI were referred for further clinical and ultrasonographic examination. Based on ulnar nerve mobility on ultrasound, the patients were first divided into two groups: stable and unstable. The unstable group was further divided into two subgroups: subluxation and dislocation. The cross-sectional area (CSA) of the nerve was measured in three regions relative to the medial epicondyle (ME). Results: In the ultrasonographic evaluation, UNI was identified in 59.1% (52) of the 88 elbows. UNI was bilateral in 50% (22) of the 44 patients. Mean CSA was not significantly different between groups. A statistically significant difference in ulnar nerve mobility was found between the groups with CSA of <10 versus ≥10 mm² (p = 0.027). Nerve instability was found in 85.7% of elbows with an ulnar nerve CSA value of ≥10 mm² at the ME level. Discussion: The probability of developing neuropathy in patients with UNI may be higher than in those with normal nerve mobility. Further prospective studies are required to elucidate whether asymptomatic individuals with UNI and increased CSA may be at risk for developing symptomatic ulnar neuropathy at the elbow. abstract_id: PUBMED:34150581 An Update on Treatment Modalities for Ulnar Nerve Entrapment: A Literature Review. Context: Ulnar nerve entrapment is a relatively common entrapment syndrome, second only in prevalence to carpal tunnel syndrome. The potential anatomic locations for entrapment include the brachial plexus, cubital tunnel, and Guyon's canal. Ulnar nerve entrapment is more prevalent in pregnancy, diabetes, rheumatoid arthritis, and patients with occupations involving periods of prolonged elbow flexion and/or wrist dorsiflexion. Cyclists are particularly at risk of Guyon's canal neuropathy. Patients typically present with sensory deficits of the palmar aspect of the fourth and fifth digits, followed by motor symptoms, including decreased pinch strength and difficulty fastening shirt buttons or opening bottles. Evidence Acquisition: Literature searches were performed using the relevant MeSH terms in Mendeley version 1.19.4. Search fields were varied until further searches revealed no new articles. All articles were screened by title and abstract. The decision to include an article was based on its relevance, and the final list of articles was approved by three of the authors. This included reading the entirety of the article. Any question regarding the inclusion of an article was discussed by all authors until an agreement was reached. Results: X-ray and CT play a role in diagnosis when a bony injury is thought to be related to the pathogenesis (i.e., fracture of the hook of the hamate). MRI plays a role where soft tissue is thought to be related to the pathogenesis (i.e., tumor or swelling). Electromyography and nerve conduction also play a role in diagnosis. Medical management, in conjunction with physical therapy, shows limited promise. However, minimally invasive techniques, including peripheral percutaneous electrode placement and ultrasound-guided electrode placement, have all been recently studied and show great promise. When these techniques fail, clinicians should resort to decompression, which can be done endoscopically or through an open incision. Endoscopic ulnar decompression shows great promise as a surgical option with minimal incisions. Conclusions: Clinical diagnosis of ulnar nerve entrapment can often be delayed and requires clinical suspicion as well as a thorough neurological exam.
Early recognition and diagnosis are important for early institution of treatment. A wide array of diagnostic imaging can be useful in ruling out bony, soft tissue, or vascular etiologies. However, clinicians should resort to electrodiagnostic testing when a definitive diagnosis is needed. Many new minimally invasive techniques are in the literature and show great promise; however, further large-scale trials are needed to validate these techniques. Surgical options remain the gold standard when adequate symptom relief is not achieved through minimally invasive means. abstract_id: PUBMED:37954629 Supercharged End-to-Side Nerve Transfer for Ulnar Neuropathy: Redefining Nomenclature and Recommendations for Standardisation of Surgical Technique Description. Supercharged end-to-side (SETS) nerve transfers have been described as a treatment option for ulnar neuropathy; however, there is inconsistency in the nomenclature used to describe the microsurgical technique. The purpose of this article is to systematically review the available literature on the SETS nerve transfer technique and to provide an overview of the technical variations to facilitate standardisation of surgical method. A literature review was performed through the PubMed, MEDLINE, and Ovid databases according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Studies that reported the surgical technique of anterior interosseous nerve (AIN) to ulnar nerve SETS transfer were included. Studies were excluded when they did not reference SETS/reverse end-to-side (RETS) nerve transfers, referenced nerve transfers other than AIN to the motor fascicle bundle of the ulnar nerve (MUN), were animal studies, or did not report technique. Of the 168 studies found, 14 articles were included. In five articles, distal visualisation of the MUN in Guyon's canal was specifically cited. In the four studies that commented on donor preparation, sharp neurectomy proximal to the AIN branching point was undertaken. Recipient preparation was commented on in seven of the included studies. Two studies referred to an epineurial window only while five specifically recommended a perineurial window. Coaptation site was specified in four studies and all studies used sutures for coaptation, with four articles stipulating that 9-0 nylon was used. Additionally, fibrin glue was used in conjunction with suture technique in four studies. Consistency in the nomenclature used to describe the SETS microsurgical technique is needed before case series measuring outcome can be reliably interpreted. This review allowed for the development of suggestions for standardisation of nomenclature and minimal reporting requirements when describing the SETS technique. Standardisation of technique will allow for reproducibility and facilitate future evaluations of outcome in prospective randomised controlled trials. abstract_id: PUBMED:29683338 Ulnar Nerve Complications After Ulnar Collateral Ligament Reconstruction of the Elbow: A Systematic Review. Background: While ulnar collateral ligament reconstruction (UCLR) of the elbow is an increasingly commonly performed procedure with excellent results reported in the published literature, less attention has been paid specifically to the characterization of postoperative ulnar nerve complications, and it is unclear what operative strategies may influence the likelihood of these complications.
Purpose: The purpose of this study is to examine the prevalence and type of ulnar nerve complications after UCLR of the elbow based on the entirety of previously published outcomes in the English literature. In addition, this study examined how the rate of ulnar nerve complications varied as a function of surgical exposures, graft fixation techniques, and ulnar nerve management strategies. Study Design: Systematic review and meta-analysis. Methods: A systematic review of the literature was completed using the MEDLINE, PubMed, and Ovid databases. UCLR case series that contained complications data were included. Ulnar neuropathy was defined as any symptoms or objective sensory and/or motor deficit(s) after surgery, including resolved transient symptoms. Meta-analysis of the pooled data was completed. Results: Seventeen articles (n = 1518 cases) met the inclusion criteria, all retrospective cohort studies. The mean prevalence of postoperative ulnar neuropathy was 12.0% overall after any UCLR procedure at a mean follow-up of 3.3 years, and 0.8% of cases required reoperation to address ulnar neuropathy. There were no cases of intraoperative ulnar nerve injury reported. The surgical approach associated with the highest rate of neuropathy was detachment of flexor pronator mass (FPM) (21.9%) versus muscle retraction (15.9%) and muscle splitting (3.9%). The fixation technique associated with the highest rate of neuropathy was the modified Jobe (16.9%) versus DANE TJ (9.1%), figure-of-8 (9.0%), interference screw (5.0%), docking technique (3.3%), hybrid suture anchor-bone tunnel (2.9%), and modified docking (2.5%). Concomitant ulnar nerve transposition was associated with a higher neuropathy rate (16.1%) compared with no handling of the ulnar nerve (3.9%). Among cases with concomitant transposition performed, submuscular transposition resulted in a higher rate of reoperation for ulnar neuropathy (12.7%) compared with subcutaneous transposition (0.0%). Conclusion: Despite a perception that UCLR has minimal morbidity, a review of all published literature revealed that 12.0% of UCLR surgeries result in postoperative ulnar nerve complications. UCLR techniques associated with the highest rates of neuropathy were detachment of the FPM, modified Jobe fixation, and concomitant ulnar nerve transposition, although it remains unclear whether there is a causal relationship between these factors and subsequent development of postoperative ulnar neuropathy due to limitations in the current body of published literature. abstract_id: PUBMED:18827246 Neurolysis of the ulnar nerve for neuropathy following total elbow replacement. Ulnar neuropathy presents as a complication in 5% to 10% of total elbow replacements, but subsequent ulnar neurolysis is rarely performed. Little information is available on the surgical management of persistent ulnar neuropathy after elbow replacement. We describe our experience with the surgical management of this problem. Of 1607 total elbow replacements performed at our institution between January 1969 and December 2004, eight patients (0.5%) had a further operation for persistent or progressive ulnar neuropathy. At a mean follow-up of 9.2 years (3.1 to 21.7) six were clinically improved and satisfied with their outcome, although, only four had complete recovery. When transposition was performed on a previously untransposed nerve the rate of recovery was 75%, but this was reduced to 25% if the nerve had been transposed at the time of the replacement. 
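Purely as an illustration (none of these numbers come from the review above), the "mean prevalence of postoperative ulnar neuropathy" reported in a meta-analysis such as PUBMED:29683338 can, in its simplest form, be obtained by pooling events over cases across the included series. The Python sketch below uses invented series; real meta-analyses usually transform the proportions (for example, logit or Freeman-Tukey) and weight studies by inverse variance rather than simply summing.

```python
def pooled_prevalence(studies):
    """Crude pooled prevalence: total events divided by total cases.
    `studies` is a list of (events, n) tuples, one per case series."""
    events = sum(e for e, _ in studies)
    total = sum(n for _, n in studies)
    return events / total

# Hypothetical case series: (post-operative neuropathy events, cases)
series = [(12, 90), (5, 60), (30, 250), (8, 120)]
print(f"Pooled prevalence: {pooled_prevalence(series):.1%}")
```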
abstract_id: PUBMED:10829185 Compressive neuropathies of the ulnar nerve at the elbow and wrist. Compressive neuropathy of the ulnar nerve in the upper limb is a common problem that frequently results in severe disabilities. At the elbow, Lundborg concluded that the nerve was "asking for trouble" because of its anatomic course through confined spaces and posterior to the axis of elbow flexion. Normally, the ulnar nerve is subjected to stretch and compression forces that are moderated by its ability to glide in its anatomic path around the elbow. When normal excursion is restricted, irritation ensues. This results in a cycle of perineural scarring, further loss of excursion, and progressive nerve damage. Initial treatment for the acute and subacute neuropathy at the elbow is nonsurgical. Rest and avoiding pressure on the nerve may suffice, but if symptoms persist, splint immobilization of the elbow and wrist is warranted. For chronic neuropathy associated with muscle weakness, or neuropathy that does not respond to conservative measures, surgery is usually necessary. A variety of surgical procedures have been described in the medical literature, and deciding on the most effective procedure can be difficult considering the excellent results claimed by proponents for each. Unfortunately, there is a paucity of information based on prospective randomized clinical studies comparing the different surgical methods. Dellon attempted to provide some guidelines by reviewing the data in 50 articles dealing with nonsurgical and surgical treatment of ulnar neuropathies at the elbow. In order to provide uniform data, he re-interpreted the data in these articles using his own system for staging nerve compression. He reported that treatment was most successful for mild neuropathies, a conclusion few would challenge. Excellent results were also achieved in 50% of patients with mild neuropathies that were treated nonsurgically and in more than 90% treated by surgery, regardless of the procedure. For moderate neuropathies, nonsurgical treatment was generally unsuccessful, as were decompressions in situ. Medial epicondylectomies were effective in only 50% of cases and they had the highest recurrence rate. Regarding ulnar nerve transpositions, each method has its proponents, usually based on the training and experience of the surgeon. Subcutaneous transposition is the least complicated. It is an effective procedure, particularly in the elderly and in patients who have a thick layer of adipose tissue in their arms. It is the procedure of choice for repositioning the nerve during surgical reductions of acute fractures, arthroplasties of the elbow, and secondary neurorrhaphies. Intramuscular and submuscular transpositions are more complicated procedures. Although proponents of intramuscular transposition report favorable results, the procedure can result in severe postoperative perineural scarring. Submuscular transposition has a high degree of success and is generally accepted to be the preferred procedure when prior surgery has been unsuccessful. I also prefer it as the primary procedure for most chronic neuropathies that require surgery. Compressive neuropathies of the ulnar nerve in the canal of Guyon are less common, but they can also result in significant disabilities. Compression can occur in 1 of 3 zones. 
Zone 1 is in the most proximal portion of the canal, where the nerve is a single structure consisting of motor and sensory fascicles, and zones 2 and 3 are distal where the ulnar nerve has divided into motor and sensory branches. The clinical picture correlates with the zone in which compression occurs. Answer: Yes, sonography can define the reason for persisting neuropathy after ulnar nerve transposition. A study was performed to assess the usefulness of ultrasound in patients with persisting symptoms after ulnar nerve transposition due to compressive neuropathy. The study involved 8 subjects and recorded cross-section areas (CSA) and texture changes of the ulnar nerve, dividing it into 6 segments. The results showed texture changes in 76.7% of the segments and outer nerve sheath blurring in 68.3% of the segments. Caliber changes were found in the course of the nerve, with relevant kinks at the upper fascial passage (UF) in all transposed nerves and at the lower fascial passage (LF) in 6 nerves. The study concluded that ultrasound can reliably assess the actual "situation" of the nerve and thus at least ease the decision for secondary surgery in cases of secondary ulnar neuropathy after nerve transposition (PUBMED:26090730).
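As a purely hypothetical illustration of how the segment-wise CSA data described above might be summarised, the sketch below computes a mean and standard deviation per segment and flags means at or above 10 mm², one cutoff mentioned in the instability study (PUBMED:38356457). The segment labels follow the abstract, but every measurement value here is invented.

```python
from statistics import mean, stdev

def summarise_segments(csa_by_segment, cutoff_mm2=10.0):
    """Mean, SD and an 'enlarged' flag per nerve segment (CSA in mm^2)."""
    summary = {}
    for segment, values in csa_by_segment.items():
        m = mean(values)
        summary[segment] = (round(m, 2), round(stdev(values), 2), m >= cutoff_mm2)
    return summary

# Invented measurements for three of the six segments (PUF, UF, DUF)
measurements = {
    "PUF": [6.9, 7.8, 7.2],
    "UF": [11.0, 12.5, 12.3],
    "DUF": [10.1, 12.0, 11.4],
}
for segment, (m, sd, enlarged) in summarise_segments(measurements).items():
    print(segment, m, sd, "enlarged" if enlarged else "within cutoff")
```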
Instruction: Do medical students know enough about smoking to help their future patients? Abstracts: abstract_id: PUBMED:18820533 Do medical students know enough about smoking to help their future patients? Assessment of New York City fourth-year medical students' knowledge of tobacco cessation and treatment for nicotine addiction. Purpose: Practicing physicians underutilize U.S. Department of Health and Human Services evidence-based approaches to nicotine addiction and treatment. Few studies have assessed medical student knowledge in this area. This study examined New York City fourth-year medical students' knowledge of tobacco cessation and treatment of nicotine addiction. Method: The authors conducted a Web-based survey, comprising 27 closed- and open-ended questions, of six of seven New York City medical schools in the spring of 2004. They drew questions from international, national, and local surveys on tobacco and health, U.S. Department of Health and Human Services tobacco treatment guidelines, and prior studies. Primary outcome measures were knowledge of the epidemiology of smoking, benefits of cessation and treatment of nicotine addiction, clinical cessation practices, and students' use of tobacco and intentions to stop smoking. Results: Of 943 fourth-year medical students, 469 (50%) completed an online survey. Students had good knowledge of the epidemiology of smoking, including its prevalence and health effects, with most responding correctly to relevant questions (mean correct response 79%; SD = 9.4). Students demonstrated a fair understanding of the benefits of cessation (mean correct response, 67%; SD = 19.2) and treatment of nicotine addiction (mean correct response, 61%; SD = 13.2). Three hundred students (64%) rated their own preparation to assist patients to quit as less than adequate. Conclusions: Fourth-year medical students at the participating schools in New York City understood the harms of smoking but needed more information on the benefits of stopping smoking and treatment of nicotine addiction. abstract_id: PUBMED:27493373 Cigarette smoking among medical students in The National Ribat University, Sudan. The problem of smoking among medical students is common worldwide, but the pattern and extent of the problem varies from place to place. Data from Sudanese medical students are limited. The aim of the study was to determine the extent of the problem of smoking among medical students, its routes, and how it can be reduced. All students in the first and fifth year in the Faculty of Medicine, The National Ribat University were asked to fill in a questionnaire regarding their knowledge and practice of smoking and when they started smoking. The questionnaire inquired about the role of their peers and the staff in helping them stop smoking. Two hundred and forty (96%) of the first-year students and 174 (94%) of the fifth-year students responded by filling in the questionnaires. Around 10% of all students smoke. Although non-smokers knew much about the problems of smoking, many of the smokers did not. The main influence on students to start smoking was from parents, siblings, and friends. Eighty percent of the smokers are willing to give up smoking and have tried many times. The study showed that little effort was made by the university staff to help students stop smoking. Most students started smoking in secondary school. There is a need for family, community, and institutional campaigns to contain the problem of smoking.
abstract_id: PUBMED:35330863 Prevalence and Associated Factors of Cigarette Smoking Among Medical Students in Wroclaw, Poland. Objective: The study aimed to evaluate the nicotine addiction syndrome among students of the Medical University of Wroclaw. Materials And Methods: An anonymous self-administered online questionnaire regarding tobacco and e-cigarette smoking was sent to 5616 students. A total of 407 students from Wrocław Medical University who completed the questionnaire were enrolled. Results: One-fourth of the surveyed medical students smoke or have smoked cigarettes in the past. About 75% of students report that there is no information about the harmfulness of smoking and its consequences in the course of teaching in medical studies. The majority of students, 74.5% of women and 62.8% of men, agree that nicotine dependence syndrome should become part of the curriculum of medical universities. Conclusion: The proportion of smokers among medical students is very high, and their knowledge of smoking is insufficient. It seems necessary to implement urgent education for medical students about the consequences of smoking and the treatment of nicotine dependence syndrome. Future doctors' knowledge about the harmfulness of smoking will, in the future, help reduce the number of patients suffering from smoking-related diseases and may help reduce the number of smokers among students. It seems necessary to intensify antismoking campaigns in Poland and worldwide. abstract_id: PUBMED:27512507 Smoking among Lebanese medical students: Prevalence and attitudes. Introduction: The tobacco epidemic is a major public health threat facing the world. Tobacco dependence is recognized as the greatest preventable cause of disease and death. Medical students are in a key position to influence future tobacco cessation programs. Objectives: The primary objective of this study is to evaluate the prevalence of smoking among medical students across Lebanon and their smoking attitudes. It also investigates their attitude toward smoking, showing where they really stand on this major public health issue. This study helps better target anti-smoking campaigns among both physicians and patients. Methods: This cross-sectional study was conducted by sending a questionnaire to currently enrolled medical students at all seven medical schools in Lebanon. The 32-item questionnaire comprised three sections assessing sociodemographic characteristics, smoking habits, and attitudes toward smoking among Lebanese medical students. The questionnaire was launched online on LimeSurvey to retain anonymity. The data were then transferred to the Statistical Package for the Social Sciences for analysis. Data were expressed as percentages for discrete variables and as mean ± standard deviation for continuous variables. Results: One hundred sixty-three complete responses remained of the 182 obtained responses. Forty-two of the total 163 students identified themselves as either daily or occasional smokers, yielding a prevalence of 25.8%. Smokers were less likely to ask patients about their smoking habit and to counsel them about smoking cessation. Almost one-third of smokers felt that they had no obligations toward society. Conclusion: Approximately 1 in 4 Lebanese medical students is a smoker. Students who smoke are less likely to ask patients about their smoking habits and to counsel them on smoking cessation. This is a major drawback in the fight against tobacco.
This calls for better education of our future doctors on smoking cessation to decrease the smoking burden on our Lebanese society and worldwide. abstract_id: PUBMED:21388944 Medical students' smoking habits and attitudes about cessation Unlabelled: The medical school years are very important in shaping the attitudes of future doctors. It is proven that doctors who smoke do not advise their patients to stop smoking. We have to know the students' smoking habits and attitudes about smoking cessation to make them interested in the fight against tobacco. Aims: To investigate medical students' smoking habits and attitudes about cessation. Methods: We applied the Hungarian translation of the Global Health Professionals Student Survey. Medical students from the first and fifth year filled in the survey anonymously during the seminars. Statistical analysis was performed with SPSS. Results: In both years, 245 students filled in the questionnaire. In the first year, 30.8% of the students were defined as smokers; in the fifth year, 38.9%. During the academic years, the number of daily smokers and the number of cigarettes smoked increase. Students require training about smoking cessation; however, they would entrust it to a specialist. Conclusions: Based on our data, we need a teaching block in the curricula about smoking and smoking cessation. abstract_id: PUBMED:17244976 An international review of tobacco smoking among medical students. We conducted a systematic international review of tobacco smoking habits among medical students. Particular attention was paid to countries where smoking rates have been historically well-documented in local journals, but were less often included in larger international review articles. The methodology involved a search of relevant medical subject headings, after which the reference lists of journal papers were also examined to find additional publications. A total of 66 manuscripts met the inclusion criteria. The most common countries previously studied included India, the United States, Australia, Japan, Pakistan, Turkey and the United Kingdom. Overall, our review suggests that the prevalence of smoking among medical students varies widely amongst different countries and also between male and female students within the same areas. Consistently low smoking rates were found in Australia and the United States, while generally high rates were reported in Spain and Turkey. Given their important future role as exemplars, more effective measures to help reduce tobacco smoking among medical students are clearly needed worldwide. abstract_id: PUBMED:16062029 Smoking habits, attitudes and smoking cessation among sixth-year medical students of Kaunas University of Medicine Unlabelled: The aim of the study was to assess trends of smoking and the relation between the smoking behavior of undergraduate medical students and their attitudes towards smoking and treatment of tobacco dependence. Methods: Data were collected from sixth-year medical students of Kaunas University of Medicine during ten consecutive years. A total of 1518 students completed the anonymous questionnaire; the response rate was 83.5%. Results: The daily smoking rate was 19.4% in males and 4.2% in females; 15.9% and 10.4%, respectively, were irregular smokers, while 41.6% of males and 64% of females had never smoked. 70% of smokers and 83% of nonsmokers agreed that tobacco smoking is one of the greatest health hazards of modern times; 97.1% of all students considered passive smoking harmful.
More nonsmokers (70%) than smokers (51%) were aware of the addictive nature of tobacco, but there was little difference in their opinions about support for smokers trying to quit: 83.2% of smokers and 87.1% of nonsmokers stated that medical doctors should help smokers to give up smoking, and 68% of them thought that doctors did not have enough knowledge of how to help. Fewer smokers (55%) than nonsmokers (87%) believed that smoking should be banned in hospitals. Conclusions: The prevalence of smoking was significantly lower among medical students than in the general population. There were no significant changes in smoking prevalence among male medical students during the period 1995-2004. At the same time, smoking prevalence among female medical students more than doubled. Attitudes of smokers and nonsmokers were significantly different on most tobacco-related items, but undergraduate students' knowledge of the tobacco problem was not sufficient. Teaching about tobacco and related issues remains essential in the undergraduate medical course. abstract_id: PUBMED:14555574 A comparison of smoking habits among medical and nursing students. Objective: The approach and credibility of future physicians and nurses as treatment providers for smoking- and tobacco-related diseases may be influenced by their smoking habits. We compared smoking habits among medical and nursing students, and examined whether these habits changed during the course of education for each cohort. Method: Over 1,100 medical and nursing students from a university were surveyed in the year 2000 using a questionnaire that included the Fagerstrom test for nicotine dependence (FTND). Results: A total of 397 medical students and 126 nursing students completed the survey. Significantly fewer medical students (3.3%) smoked compared to nursing students (13.5%). Also, significantly more nursing students were former smokers (17.8%) than medical students (9.8%). The severity of nicotine dependence, as indicated by the total FTND score as well as scores on five of the six items on the FTND, was significantly lower among medical students compared to nursing students. Smoking or quit rates did not differ across class years in both groups; however, unlike nursing students, time since quitting significantly differed across class years for medical students. Although smoking habits appear to change little during the course of education for both medical and nursing students, many smokers may have quit just prior to entering medical school but not nursing school. Conclusions: The findings confirm the continuing decline in smoking among medical students in the United States; however, increased efforts to promote tobacco education and intervention among nursing students seem necessary. Nevertheless, both groups appear to have the potential to be credible advisors to patients and the public regarding smoking cessation. abstract_id: PUBMED:27935924 Smoking Cessation and Attitudes, Belief, Observation, and Education of Medical Students, in Turkey. Introduction: Tobacco use is an important public health problem around the world. The aim of this study is to assess the attitudes, beliefs, and observations of students regarding smoking cessation and medical education. Methods: This study is part of a multi-country study called the "Global Health Profession Student Survey". The study population consisted of third-year medical students in Turkey. The sample consisted of a total of 1834 medical students from 12 randomly selected medical schools.
Results: Of the students, 1209 (92.1%) thought that health professionals should get specific training on cessation techniques, and that health professionals should serve as "role models" for their patients and the public. The number of students who answered "Health professionals should routinely advise their patients who smoke to quit smoking" was 1211 (93.3%). Of the students, 1204 (60.8%) responded that health professionals who use other tobacco products were less likely to advise patients to stop smoking. The percentage of students who had received formal training on smoking cessation approaches was 48.2% (1196). Of the students, 91.5% (1203) had heard of nicotine replacement therapies in tobacco cessation programs. More than half of smokers had tried to quit smoking in the previous year, and the majority of them did not seek professional help or advice. Conclusions: The majority of students are aware of health professionals' role in smoking cessation. Most of the students are willing to take specific formal training on tobacco. Students' behaviours and attitudes differed by gender and smoking status. Improved coverage of tobacco cessation issues in medical curricula will be beneficial. abstract_id: PUBMED:27480707 Dental vs. Medical Students' Comfort with Smoking Cessation Counseling: Implications for Dental Education. The aim of this study was to determine if dental and medical students have similar feelings of professional responsibility, comfort, and confidence with counseling patients about smoking cessation during their clinical years. All third- and fourth-year osteopathic medical (N=580) and dental students (N=144) at Western University of Health Sciences were invited to participate in a survey in April-July 2014, either electronically or in person, regarding their perceived professional responsibility, comfort, and confidence in counseling smokers about quitting and major constraints against counseling smokers about quitting. Respondents' demographic characteristics, smoking history, and history of living with a smoker were also assessed. Response rates were 21% (124/580) for medical and 82% (118/144) for dental students. Most of the responding medical (99.2%) and dental (94.9%) students reported feeling it was their professional responsibility to counsel patients about smoking cessation. Medical student respondents were significantly more comfortable and confident counseling patients about smoking cessation than dental student respondents (p < 0.001). Students in the third year were just as comfortable and confident counseling patients about smoking cessation as students in the fourth year (p > 0.10). There were no differences by age, but students who were former smokers were significantly more comfortable and confident counseling about smoking cessation than were nonsmokers (p = 0.001). While almost all of the responding students reported feeling responsible for counseling patients about smoking cessation, the medical students and former smokers were more comfortable and confident performing this counseling. These results suggest the need for additional training in counseling techniques for dental students and nonsmokers. Future studies should assess the impact of medical and dental students' smoking cessation counseling. Answer: The knowledge of medical students regarding smoking cessation and treatment for nicotine addiction varies, and there is evidence that many do not feel adequately prepared to assist future patients in this area.
A study conducted in New York City found that fourth-year medical students had good knowledge of the epidemiology of smoking but needed more information on the benefits of stopping smoking and treatment of nicotine addiction, with 64% rating their own preparation as less than adequate (PUBMED:18820533). Similarly, a study in Poland reported that about 75% of medical students indicated there was no information about the harmfulness of smoking and its consequences in their medical studies, and a majority agreed that nicotine dependence syndrome should become part of the curriculum (PUBMED:35330863). In Lebanon, approximately one in four medical students was a smoker, and those who smoked were less likely to ask patients about their smoking habits and to counsel them on smoking cessation (PUBMED:27512507). A study in Hungary also highlighted the need for a teaching block in medical curricula about smoking and smoking cessation, as students required training but would entrust it to a specialist (PUBMED:21388944). An international review suggested that the prevalence of smoking among medical students varies widely among different countries, and more effective measures to help reduce tobacco smoking among medical students are needed worldwide (PUBMED:17244976). In Lithuania, while the prevalence of smoking was lower among medical students than in the general population, there was an increase in smoking prevalence among female medical students, and knowledge of the tobacco problem was not sufficient (PUBMED:16062029). In Turkey, a majority of medical students were aware of health professionals' role in smoking cessation and were willing to take specific formal training on tobacco, although their behaviors and attitudes varied by gender and smoking status (PUBMED:27935924). Overall, these studies indicate that while medical students may have a basic understanding of the harms of smoking, many do not feel fully prepared to counsel patients on cessation and may benefit from more comprehensive education and training in this area.
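As an illustrative aside (not part of the cited abstracts), prevalence contrasts like the medical-versus-nursing comparison in PUBMED:14555574 (3.3% vs. 13.5% current smokers) are typically tested with a two-proportion z-test. The Python sketch below uses counts back-calculated approximately from those percentages and sample sizes; the exact counts are assumptions, not figures reported by the study.

```python
from math import erfc, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
    return p1, p2, z, p_value

# Approximate counts: ~3.3% of 397 medical vs ~13.5% of 126 nursing students
print(two_proportion_z_test(13, 397, 17, 126))
```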
Instruction: Population-based versus hospital-based controls: are they comparable? Abstracts: abstract_id: PUBMED:37350938 Cancer survival: left truncation and comparison of results from hospital-based cancer registry and population-based cancer registry. Background: Cancer survival is an important indicator for evaluating cancer prognosis and cancer care outcomes. The incidence dates used in calculating survival differ between population-based registries and hospital-based registries. Studies examining the effects of the left truncation of incidence dates and delayed reporting on survival estimates are scarce in real-world applications. Methods: Cancer cases hospitalized at Nantong Tumor Hospital during the years 2002-2017 were traced with their records registered in the Qidong Cancer Registry. Survival was calculated using the life table method, taking the first-visit dates recorded in the hospital-based cancer registry (HBR) as the diagnosis date (OSH), the dates registered in the population-based cancer registry (PBR) as the incidence date (OSP), and corrected dates after the delayed report dates were calibrated (OSC). Results: Among 2,636 cases, 1,307 had incidence dates registered in the PBR prior to the diagnosis dates of the first hospitalization registered in the HBR, while 667 cases had incidence dates registered in the PBR that were later than the diagnosis dates registered in the HBR. The 5-year OSH, OSP, and OSC were 36.1%, 37.4%, and 39.0%, respectively. The "lost" proportion of 5-year survival due to the left truncation for HBR data was estimated to be between 3.5% and 7.4%, and the "delayed-report" proportion of 5-year survival for PBR data was found to be 4.1%. Conclusion: Left truncation of survival in HBR cases was demonstrated. The pseudo-left truncation in PBR should be reduced by controlling delayed reporting and maximizing completeness. Our study provides practical references and suggestions for evaluating the survival of cancer patients with HBR and PBR. abstract_id: PUBMED:32589361 Small cell lung cancer (SCLC) incidence and trends vary by gender, geography, age, and subcategory based on population and hospital cancer registries in Hebei, China (2008-2017). Background: Lung cancer is the leading cause of morbidity and mortality worldwide. Small cell lung cancer (SCLC) has been determined to be the most lethal lung malignancy. Few studies have previously analyzed the epidemiological characteristics of SCLC in China. This study analyzed the epidemiological characteristics of SCLC, aiming to provide a reference for the prevention of SCLC in Hebei Province. Methods: The epidemiological characteristics of SCLC were analyzed using lung cancer data based on population and hospital cancer registries in Hebei Province between 2008 and 2017. Results: The proportion of both population- and hospital-based SCLC cases displayed a significant increasing trend. Moreover, the proportion of males was higher than that of females in both population- and hospital-based cases. The proportion of hospital-based SCLC cases in counties was higher than that in cities, whereas there were no significant regional differences between cities and counties in the population-based cases. The proportion of both population- and hospital-based SCLC cases decreased consistently with increasing age. There was a difference between the population- and hospital-based distributions of subcategories of SCLC.
Conclusions: Significant increases in the proportion of both population- and hospital-based SCLC cases over recent years, particularly in males and in patients aged over 55 years, were observed. Research on the pathogenesis of SCLC in these patients and its prevention is urgently required. abstract_id: PUBMED:32447129 Hospital-based or home-based administration of oncology drugs? A micro-costing study comparing healthcare and societal costs of hospital-based and home-based subcutaneous administration of trastuzumab. Objective: To investigate resource use and time investments of healthcare professionals, patients, and their families, and to compare the healthcare and societal costs of a single hospital-based and a single home-based subcutaneous administration of trastuzumab in The Netherlands. Method: We conducted a bottom-up micro-costing study. Patients diagnosed with HER2+ early or metastatic breast cancer were recruited in four Dutch hospitals. For healthcare costs, data were collected on drug use, consumables, use of healthcare facilities, time of healthcare professionals, and travelling distance of the nurse. For societal costs, data were collected on patient and family costs (including travelling expenses and time of informal caregivers) and productivity losses of paid and unpaid work. Results: Societal costs of a single administration of SC trastuzumab were €1753 within the home-based and €1724 within the hospital-based setting. Drug costs of trastuzumab were identical in both settings (€1651). Healthcare costs were higher for home-based administration (€91 versus €47), mainly because of more time of healthcare professionals (110 versus 38 minutes). Costs for patient and family were, however, lower for home-based administration due to travelling expenses (€7 versus €0) and time of informal caregivers (€14 versus €4). Costs for productivity losses were similar for both settings. Conclusions: Home-based subcutaneous administration of trastuzumab is more time consuming for healthcare professionals and therefore more costly than hospital-based administration. The total budget impact can be large considering that a large number of patients receive a large number of cycles of oncology treatments. If home-based administration is the way forward, novel approaches are crucial for ensuring efficiency of home-based care. abstract_id: PUBMED:30572832 Health-services utilisation amongst older persons during the last year of life: a population-based study. Background: Accurate population-based data regarding hospital-based care utilisation by older persons during their last year of life are important in health services planning. We investigated patterns of acute hospital-based service use at the end of life amongst older decedents in New South Wales (NSW), Australia. Methods: Data from all persons aged ≥70 years who died in the state of NSW, Australia, in 2007 were included. Several measures of hospital-based service utilisation during the last year of life were assessed from retrospectively linked data comprising data for all registered deaths, cause of death, hospital care during the last year of life (NSW Admitted Patient Data Collection [APDC] and Emergency Department [ED] Data Collection [EDDC]), and the NSW Cancer Registry. Results: Amongst 34,556 decedents aged ≥70 years, 82% (n = 28,366) had ≥1 hospitalisation during the last year of life (median 2), and 21% had >3 hospitalisations. Twenty-five percent (n = 5485) of decedents attended ED during the last week of life.
Overall, 21% had a hospitalisation of >30 days in the last year of life, and 7% spent >3 months in hospital; 79% had ≥1 ED attendance, and 17% had >3. Nine percent (n = 3239) spent time in an intensive care unit. Fifty-three percent (n = 18,437) died in an inpatient setting. Hospital records had referenced palliative care for a fifth (7169) of decedents. Adjusting for age group, sex, place of residence, area-level socioeconomic status, and cause of death, having >3 hospitalisations during the last year of life was more likely for persons dying from cancer (35% versus 16% for non-cancer deaths, adjusted odds ratio [aOR] 2.33), 'younger' old decedents (29% for age 70-79 and 20% for age 80-89 versus 11% for 90+, aOR 2.42 and 1.77 respectively) and males (25% versus 17% for females, aOR 1.38). Patterns observed for other hospital-based service use were similar. Conclusions: This population-based study reveals high use of hospital care among older persons during their last year of life, although this decreased with increasing older age, providing important data to inform health services planning for this population, and highlighting aspects requiring further study. abstract_id: PUBMED:25188655 Home-based versus ambulatory hospital-based training in COPD There is little experience of the effect of home-based training (rD) in patients with chronic obstructive pulmonary disease (COPD). Our aim was to compare the effect of rD on exercise tolerance, dyspnea and quality of life versus hospital outpatient training (rH). Two random groups of 25 patients were evaluated. Both trained during 8 weeks (24 sessions), undergoing various tests before and after, including spirometry, questionnaires on dyspnea (MRC, Mahler and Borg) and on quality of life (SF-36 and St. George's), submaximal exercise tests (6-minute walk, resistance shuttle, and cycle-ergometer endurance time limit [Tlim]), and maximal exercise tests (shuttle test [ST] and cardiopulmonary test). The rH group performed aerobic and strength exercises for the lower (MI) and upper (MS) limbs. The rD group performed walks at 70% of the speed reached in the ST and strength exercises for MI and MS. The basal condition was similar in both groups. The Tlim increased by 125% (p = 0.0001) in the rH group and 63% (p = 0.0011) in the rD group, with no significant difference between groups. Both groups also improved the distance covered in the resistance shuttle test (77%, p = 0.0421 in rH and 79%, p = 0.0197 in rD) and in the 6-minute test (12% in rD, p = 0.0135). St George's scoring was reduced only in the rH group (p = 0.0034); 32% abandoned the program in rD vs. 20% in rH (p = 0.4521). The effectiveness of rD training was equal to that of rH for COPD patients, although rD patients were more likely to abandon the program. abstract_id: PUBMED:37649862 Are patients with eosinophilic esophagitis treated at an academic hospital comparable to a patient from a population-based cohort? Not in Denmark. Background And Aim: Little is known about differences in eosinophilic esophagitis (EoE) patients in the general population compared with patients treated at academic hospitals. This might affect the generalizability of study results. The aims of the study were to compare clinical features and complications of EoE between patients from a population-based cohort (DanEoE) and patients from an academic hospital cohort in Copenhagen (EoE-Cph). Methods: The DanEoE cohort is a population- and register-based cohort including all 236 adult EoE patients diagnosed in the North Denmark Region in 2007-2017, previously described in detail.
The new EoE-Cph cohort consists of 245 consecutively referred adult patients to a dedicated EoE center in an academic hospital in the Danish capital in 2013-2020. Data were collected from medical registries and medical files. Results: Patients in the academic cohort were 12 (SD 16) years younger at symptom debut (P = 0.001). At the time of diagnosis they were 5.4 (SD 15) years younger (P < 0.001). Whereas gastro-esophageal reflux disease (GORD) was present in one-third of the population-based cohort, it was observed in only 14% of the EoE-Cph group (P < 0.05). Food bolus obstruction before diagnosis was 24% less common in the EoE-Cph patients (P < 0.001). Conclusion: Results indicated that EoE patients referred to a Danish EoE center are a selected subgroup with disease debut at a younger age, less comorbid GORD, and rarely food bolus obstruction before diagnosis. This suggests that study results from academic hospitals might not be generalizable to the average EoE patient in a population. abstract_id: PUBMED:19080941 Population-based versus hospital-based controls: are they comparable? Objective: To assess whether there are differences between hospital and population controls. Methods: Two case-control studies were conducted on lung cancer risk factors in the Santiago de Compostela Public Health District. Whereas one used randomly chosen census-based population controls, the other used consecutive hospital controls who went to the reference hospital for non-smoking-related trivial interventions. The differences were analyzed using logistic regression. The dependent variable was type of control (hospital or population). Results: Hospital controls had tobacco habits similar to those of population controls, but consumed more alcohol. For those consuming more than 50 ml daily, the risk of being a hospital control was 4.83 (95% CI: 2.55-9.14). Conclusions: There may be some differences between hospital and population-based controls, which must be taken into account in the design of case-control studies. It is necessary to ascertain whether such differences are reproduced at other geographic locations and whether they can affect estimation of exposure-disease associations. abstract_id: PUBMED:23612210 Evidence-based versus evidence-biased medicine Due to a burden of multiple chronic diseases, older patients are the greatest consumers of healthcare resources. However, randomized clinical trials (RCT) have excluded most older patients for many reasons, ranging from comorbidities to disability or cognitive impairment. Systematic reviews demonstrate that such stringent exclusion criteria, frequently poorly justified, are still adopted by ongoing RCT, even for conditions highly prevalent in the elderly, such as chronic heart failure. Following this approach, even after enrollment of substantial numbers of older patients, RCT have usually tested the efficacy of drugs in "ideal" patients, with fewer associated conditions, no functional limitation, and optimal adherence. Therefore, the generalizability of RCT to older populations encountered in clinical practice is limited: a situation generating a sort of evidence-biased as opposed to evidence-based medicine. Well-conducted observational studies can be a valid alternative, and some of these showed that in older, frail patients the net clinical benefits from different treatments are not only comparable, but in some cases even larger than those observed in younger, robust individuals.
Italian and European regulatory agencies are acknowledging that, in the face of the current demographic transition, we are in need for generating robust evidence of treatment effectiveness in the older, frail population. abstract_id: PUBMED:10659111 Optimizing hospital-based home care for dying cancer patients: a population-based study. In many reports the percentage of home deaths in cancer is based on selected populations. In this population-based study all cancer patients who died within 12 months within a specified area were studied (n = 108). This area is covered by hospital-based home care (HBHC) on a 24-h basis, which doctors available by day and at night. Forty people (37%) out of the total cancer population died in their own homes. Another 11% would theoretically have been ideal candidates for home care at the end of life. Thus, a home death rate of about 50% of the cancer patients is a realistic figure, and much higher than the usual 5-15% reported, provided that an effective HBHC is offered. abstract_id: PUBMED:37684696 The burden of hospital admissions for skeletal dysplasias in Sri Lanka: a population-based study. Background: Skeletal dysplasias are a diverse group of rare disorders in the chondro-osseous tissue that can have a significant impact on patient's functionality. The worldwide prevalence of skeletal dysplasias at birth is approximately 1:5000 births. To date, disease burden and trends of skeletal dysplasias in the Sri Lankan population have not been described in any epidemiological study. Our aim was to evaluate the burden and the current trends in hospital admissions for skeletal dysplasias in the Sri Lankan population. A retrospective evaluation of hospital admissions for skeletal dysplasia during 2017-2020 was performed using population-based data from the eIMMR database which covers government hospitals in the entire country. The trends in hospital admissions for skeletal dysplasias by calendar year, age, and types of skeletal dysplasia were described using appropriate summary statistics. Results: Respective crude admission rates of skeletal dysplasias in the years 2017, 2018, 2019 and 2020 were 5.2, 8.1, 8.0, and 6.5 per million population. A female predominance (1.4:1) was noted during the studied period. Of all reported cases the majority (n = 268; 44.2%) were children less than 4 years. Each year, 0-4 years age group represented 40-47% of the total hospital admissions. More than half of the cases were reported from Colombo (28.1%) and Kandy (25.4%) districts combined. 60% of cases were diagnosed as osteogenesis imperfecta (OI). Rising trends were observed in the hospital admissions for osteogenesis imperfecta, achondroplasia and osteopetrosis, while other skeletal dysplasia types collectively showed a relatively stable trend. Conclusion: This preliminary study revealed a female predominance of skeletal dysplasias and a relatively high admission rate of osteogenesis imperfecta in the Sri Lankan population. A distinct trend was not visible in the studied years probably due to the impact on hospital services due to COVID- Pandemic. Future research on the healthcare burden on families affected by skeletal dysplasia is required to better understand the overall cost of care and identify therapies that reduce admission rates. This study highlights the value of analysing population-based data on rare diseases to improve healthcare in low-resource countries. 
Answer: The question of whether population-based controls are comparable to hospital-based controls is addressed in the abstract with PUBMED ID 19080941. Two case-control studies on lung cancer risk factors were conducted in the Santiago de Compostela Public Health District, comparing randomly chosen census-based population controls with consecutive hospital controls who attended the reference hospital for non-smoking-related trivial interventions. Hospital controls had a tobacco habit similar to that of population controls but consumed more alcohol: in a logistic regression with type of control as the dependent variable, consuming more than 50 ml of alcohol daily was associated with an odds ratio of 4.83 (95% CI 2.55-9.14) for being a hospital rather than a population control. The authors concluded that there may be some differences between hospital and population-based controls, which must be taken into account in the design of case-control studies, and that it remains to be ascertained whether such differences are reproduced in other geographic locations and whether they can affect estimates of exposure-disease associations. In short, the two types of controls are broadly similar but not interchangeable, and researchers should consider the potential biases that hospital-based controls may introduce when interpreting study results.
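To make the analysis described above concrete, the following minimal sketch shows how a logistic regression with type of control as the dependent variable could be set up. It is illustrative only: the data are simulated and the variable names (hospital_control, alcohol_over_50ml, current_smoker, age) are hypothetical rather than taken from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated control series; the real study used census-based and hospital-recruited controls
rng = np.random.default_rng(42)
n = 600
controls = pd.DataFrame({
    "alcohol_over_50ml": rng.binomial(1, 0.25, n),   # >50 ml alcohol per day (indicator)
    "current_smoker": rng.binomial(1, 0.30, n),
    "age": rng.normal(63, 9, n),
})

# Simulated label: heavier drinkers made slightly more likely to be hospital controls
lp = -0.4 + 1.4 * controls["alcohol_over_50ml"] + 0.1 * controls["current_smoker"]
controls["hospital_control"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

# Dependent variable = type of control (1 = hospital, 0 = population)
fit = smf.logit("hospital_control ~ alcohol_over_50ml + current_smoker + age", data=controls).fit()

# Odds ratios with 95% confidence intervals (the study reported OR 4.83, 95% CI 2.55-9.14
# for heavy alcohol use; simulated data will of course give different numbers)
odds_ratios = np.exp(fit.params).rename("OR")
ci = np.exp(fit.conf_int()).rename(columns={0: "lower", 1: "upper"})
print(pd.concat([odds_ratios, ci], axis=1))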
Instruction: Endosonographic and histopathological staging of extrahepatic bile duct cancer: time to leave the present TNM-classification? Abstracts: abstract_id: PUBMED:15743357 Endosonographic and histopathological staging of extrahepatic bile duct cancer: time to leave the present TNM-classification? Objectives: The discrepancy between high rates of sensitivity, specificity, and accuracy for intraductal ultrasonography (IDUS) in extrahepatic bile duct carcinoma and the failure to depict different wall layers as defined by the TNM classification have not yet been elucidated sufficiently. Methods: In a prospective study, endosonographic images were correlated with histomorphology including immunohistochemistry. Using IDUS, we examined fresh resection specimens of patients who had undergone pancreato-duodenectomy. For histological analysis, the formalin-fixed and paraffin-embedded specimens were stained by hematoxylin-eosin, elastica-van-Gieson, and immunohistochemically by smooth muscle-actin. To confirm our hypothesis, further cases from the archives were analyzed histopathologically and immunohistochemically. Results: The various wall layers of the extrahepatic bile duct as described by the International Union Against Cancer are neither histomorphologically nor immunohistochemically consistently demonstrable. Especially, a clear differentiation between tumor invasion beyond the wall of the bile duct (T2) and invasion of the pancreas (T3) by histopathological means is often not possible. Endosonographic images using high-resolution miniprobes similarly confirm the difficulty in imaging various layers in the bile duct wall. Conclusions: Most adaptations made by the sixth edition of the TNM classification accommodate to the endosonographic and most of the histopathological findings as demonstrated in our study. In contrast to the new edition, however, our findings suggest to combine T2- and T3-staged tumors into one single class leading to clarification, and improved reproducibility of histopathological staging. abstract_id: PUBMED:20737151 TNM 2010. What's new? In the seventh edition of the TNM Classification of Malignant Tumours there are several entirely new classifications: upper aerodigestive mucosal melanoma, gastrointestinal stromal tumour, gastrointestinal carcinoid (neuroendocrine tumour), intrahepatic cholangiocarcinoma, Merkel cell carcinoma, uterine sarcomas, and adrenal cortical carcinoma. Significant modifications concern carcinomas of the oesophagus, oesophagogastric junction, stomach, appendix, biliary tract, lung, skin, prostate and ophthalmic tumours, which will be not addressed in this article. For several tumour entities only minor changes were introduced which might be of importance in daily practice. The new classifications and changes will be commented on without going into details. abstract_id: PUBMED:19212186 Extended liver resection for intrahepatic cholangiocarcinoma: A comparison of the prognostic accuracy of the fifth and sixth editions of the TNM classification. Objective: The present study was conducted to analyze the outcome after liver resection for intrahepatic cholangiocarcinoma (IHC) and to compare the prognostic accuracy of the fifth and sixth editions of the TNM classification of malignant tumors. Summary Background Data: A comparison of the prognostic accuracy of the fifth and sixth editions of the TNM classification of malignant tumors is missing for IHC as yet. 
The present report is, to our knowledge, the largest series on surgical resection of IHC in the world literature and the first comparison of long-term outcome according to the fifth and sixth edition of the TNM classification of malignant tumors. Methods: From 1988 to 2007, 195 liver resections for IHC were performed in our institution. Staging was performed according to the liver chapters of the fifth and sixth edition of the TNM classification of malignant tumors. Results: In a multivariate analysis of prognostic variables, R0-resection, UICC-stage I/II according to the sixth edition, highly or moderately differentiated IHC, and lymph node negative IHC were identified as favorable prognostic variables. UICC-stage IIIc of the sixth edition, which was almost identical to the group of lymph node positive IHC was identified as unfavorable predictor of postoperative prognosis. Formally, curative resections (R0-resections) were achieved in 138 patients (71%). One- and 5-year survival rates after R0-resections were 72.4% and 30.4%, respectively. Conclusions: Extended resections for IHC resulted in a favorable rate of R0-resection, which is the most important prognostic variable. Staging of IHC according to sixth edition of the TNM classification is superior in comparison with the fifth edition as indicated by the results of the multivariate analysis. abstract_id: PUBMED:21879422 Staging of peripheral-type intrahepatic cholangiocarcinoma: appraisal of the new TNM classification and its modifications. Background: In the seventh edition, the TNM classification for intrahepatic cholangiocarcinoma (IHC) is separated from that for hepatocellular carcinoma. Because it is the first independent classification for IHC, the validity of the new classification needs to be evaluated. Methods: The medical records of 93 patients with peripheral-type IHC who underwent treatment (61 resected and 32 unresectable tumors) were retrospectively reviewed focusing on the new TNM classification for IHC. Results: The 5-year survival rate and the median survival time for the 61 patients with resected tumors were 33.9% and 2.05 years, respectively. The survival of the patients with periductal invasion was similar to that of the patients without. The survival of the patients with metastasis to the gastrohepatic lymph nodes (G-LN) was dismal without any 2-year survivors and similar to that of the patients with unresectable tumors (P = 0.247). On multivariate analysis, histologic differentiation (P = 0.034), multiple tumors (P = 0.014), lymph node metastasis (P < 0.001), and distant metastasis other than lymph node metastasis (P = 0.007) were identified as independent prognostic factors. According to the above results, we modified the new TNM classification as follows: IHC with multiple tumors was classified as pT4 disease; periductal invasion was excluded from determinant of the T-categories; and metastasis to G-LN was treated as distant metastasis. The survival curves based on our modifications were better stratified than those based on the new TNM classification. Conclusions: The TNM classification for IHC included some clinical inconsistencies; therefore, we proposed some modifications of the new TNM classification for IHC and demonstrated that these modifications offer better stratification of the survival data. Further revisions are necessary for its improvement. abstract_id: PUBMED:2013380 Endosonographic TNM staging of extrahepatic bile duct cancer: comparison with pathological staging.
Endosonography was performed preoperatively in 33 patients with common bile duct carcinoma and in 43 patients with carcinoma of the common hepatic duct and its bifurcation. The results were correlated with the histology of resected specimens according to the new (1987) TNM (tumor, node, metastasis) classification. Endosonography was accurate in the evaluation of the depth of tumor infiltration. Overall accuracy for common bile duct carcinoma and common hepatic duct carcinoma was 82.8% and 85%, respectively. Endosonography was helpful in diagnosing regional lymph node metastases but not accurate in diagnosing nonmetastatic lymph nodes. With common bile duct carcinoma, the incidence of lymph node metastasis increased with progressive depth of tumor infiltration. No such correlation was found in common hepatic duct carcinomas. In the staging of distant metastasis, this technique was limited by the low-penetration depth of ultrasonography. Thus, additional transcutaneous ultrasonography or computed tomography was necessary for complete staging. The routine use of the biopsy channel for endosonographically guided aspiration puncture will further enhance the diagnostic value of endosonography in the future. abstract_id: PUBMED:2414617 Perioperative staging and the Münster TNM classification in ampullary and pancreatic cancer From 1974 to 1982 607 patients with diseases of the pancreas were treated at the Surgical University Clinic of Münster/West Germany. 256 of those suffered from pancreatic and 42 from ampullary cancer. All patients were grouped according to a TNM-System for pancreatic cancer developed for that particular purpose. Of the curative resections performed (n = 73) with a resection rate of 78.6% for ampullary and 15.6% for pancreatic cancer, a distinct preponderance of early stages (T1N0M0/T2N0M0 greater than 60%) was noticed for the first, of late stages for the latter (T1N0M0/T2N0M0 = 25%). The frequencies of distant metastases for all operations also differed significantly, being 9.5% respectively 43.4% at the time of operation (p less than 0.05). At the low mean survival time for palliative operations an mean survival time of 40.3 months for the Whipple operation was evaluated, of 28.0 months for total pancreatectomy. At the stage T1N0M0/T2N0M0 the mean survival time was greater than 63 months. abstract_id: PUBMED:37128850 Retrospective analysis of 8th edition American Joint Cancer Classification: Distal cholangiocarcinoma. Backgrounds/aims: This is a retrospective analysis of whether the 8th edition American Joint Committee on Cancer (AJCC) was a significant improvement over the 7th AJCC distal extrahepatic cholangiocarcinoma classification. Methods: In total, 111 patients who underwent curative resection of mid-distal bile duct cancer from 2002 to 2019 were included. Cases were re-classified into 7th and 8th AJCC as well as clinicopathological univariate and multivariate, and Kaplan-Meier survival curve and log rank were calculated using R software. Results: In patient characteristics, pancreaticoduodenectomy/pylorus preserving pancreaticoduodenectomy had better survival than segmental resection. Only lymphovascular invasion was found to be significant (hazard ratio 2.01, p = 0.039) among all clinicopathological variables. The 8th edition AJCC Kaplan Meier survival curve showed an inability to properly segregate stage I and IIA, while there was a large difference in survival probability between IIA and IIB. 
Conclusions: The 8th distal AJCC classification did resolve the anatomical issue with the T stage, as T1 and T3 showed improvement over the 7th AJCC, and the N stage division of the N1 and N2 category was found to be justified, with poorer survival in N2 than N1. Meanwhile, in TMN staging, the 8th AJCC was able differentiate between early stage (I and IIA) and late stage (IIB and III) to better explain the patient prognosis. abstract_id: PUBMED:29859128 Comparison of the sixth and the seventh editions of the UICC classification for intrahepatic cholangiocarcinoma. Background: The current seventh edition of the TNM classification for intrahepatic cholangiocarcinoma (ICC) includes tumor number, vascular invasion, lymph node involvement but no longer the tumor size as compared to the sixth edition. The impact of the seventh edition on stage-based prognostic prediction for patients with ICC was evaluated. Methods: Between 03/2001 and 02/2013, 98 patients with the diagnosis of an ICC were surgically treated at our center. Median survival times were calculated for these patients after separate classification by both sixth and seventh editions. Results: Median overall survival was increased in patients classified to the lower tumor stages I and II using the seventh as compared to the sixth edition: stage I (54.9 vs. 47.3 months), stage II (19.9 vs. 18.9 months), stage III (17.2 vs. 19.9 months), and stage IV (23.2 vs. 15.3 months), respectively. The seventh edition definition of the T category resulted in an increased median survival regarding the T1 (50.4 vs. 47.3 months) as well as the T2 category (19.9 vs. 15.6 months) and revealed a reduced median survival of patients within the T3 (21.6 vs. 24.8 months) as well as the T4 category (19.9 vs. 27.0 months). Conclusions: The UICC seventh edition TNM classification for ICC improves separation of patients with intermediate stage tumors as compared to the sixth edition. The prognostic value of the UICC staging system has been improved by the seventh edition. Trial registration The data for this study have been retrospectively registered and the study has been approved by the ethic committee of the medical faculty of the University Hospital of Essen, Germany (license number 15-6353-BO). abstract_id: PUBMED:2183284 Ampullopancreatic carcinoma: preoperative TNM classification with endosonography. Endosonography (ES) was used for the preoperative TNM (1987) staging of tumors in 43 patients with pancreatic cancer and 24 patients with ampullary carcinomas. These results were correlated with the histologic findings of resected specimens. Early-stage tumors could be distinguished from advanced stages of cancer with ES. Detailed images of ductular and parenchymal abnormalities allowed distinction between pancreatic and ampullary carcinomas based on anatomic location. The overall accuracy of ES in the assessment of tumor classification in pancreatic and ampullary carcinoma was 92% and 88%, respectively. In diagnosing regional lymph nodes in pancreatic and ampullary tumors the accuracy of ES was 74% and 54%, respectively. For diagnosing metastatic lymph nodes in pancreatic and ampullary carcinoma the accuracy of ES was 91% and 80%, respectively. The prevalence of lymph node metastases in T1 pancreatic cancers and T1 ampullary carcinomas was 40% and 0%, respectively. Discrimination between inflammation and metastases was difficult with ES. ES was not accurate in assessing distant metastases because of the limited penetration depth of ultrasound. 
abstract_id: PUBMED:26607711 Prognostic Accuracy of the Seventh Edition of the TNM Classification Compared with the Fifth and Sixth Edition for Distal Cholangiocarcinoma. Background: The TNM classification for distal cholangiocarcinoma was first introduced in the 7th edition, which was published in 2009; however, prognostic accuracy compared with the 5th and 6th editions has not yet been evaluated and requires validation. Methods: A prospective histological database of patients with distal bile duct cancer was analyzed, and histological parameters and stage of the distal cholangiocarcinoma were assessed according to the 5th, 6th, and 7th editions of the TNM classification. Results: Between 1994 and 2012, a total of 516 patients underwent pancreatic head resection, of whom 59 patients (11.4 %) experienced histologically confirmed distal cholangiocarcinoma. The median overall survival time was 22.2 months (13.1-31.4). Tumor recurrence occurred in 23 patients after a median disease-free survival time of 14.1 months. The 7th edition showed a monotonicity of all gradients, with a stepwise increase of mortality related to a stepwise increase of tumor stage (log-rank test; p < 0.05) demonstrating best discrimination of all tested editions [area under the receiver operating characteristic curve (AUC) 0.82; 95 % CI 0.70-0.95; p = 0.012]. The discrimination rate was low for the 5th (AUC 0.67; 95 % CI 0.42-0.91; p = 0.18) and 6th editions (AUC 0.70; 95 % CI 0.47-0.93; p = 0.11), while the log-rank test did not reach statistical significance. On multivariate analysis, lymph node involvement and positive resection margins were positive and independent predictors of inferior survival (p < 0.05). Conclusions: The 7th edition of the TNM classification was favorable in terms of predicting outcome, and generated a monotonicity of all grades. Strikingly, the 7th edition, but not the 5th and 6th editions, was of prognostic significance to predict outcome. Answer: The TNM classification system is a widely used method for staging various types of cancer, including extrahepatic bile duct cancer. However, there have been discussions and studies questioning the adequacy of the current TNM classification for this type of cancer, particularly in light of endosonographic and histopathological findings. A study by the authors of PUBMED:15743357 suggests that the various wall layers of the extrahepatic bile duct, as described by the International Union Against Cancer, are not consistently demonstrable either histomorphologically or immunohistochemically. They found that differentiating between tumor invasion beyond the bile duct wall (T2) and invasion of the pancreas (T3) is often not possible using histopathological means. Their endosonographic images using high-resolution miniprobes confirmed the difficulty in imaging various layers in the bile duct wall. They concluded that most adaptations made by the sixth edition of the TNM classification accommodate the endosonographic and most of the histopathological findings. However, they suggest combining T2- and T3-staged tumors into one single class to improve the clarity and reproducibility of histopathological staging. Other studies have also evaluated the TNM classification's prognostic accuracy and suggested modifications. For instance, PUBMED:21879422 proposed modifications to the new TNM classification for intrahepatic cholangiocarcinoma (IHC) to offer better stratification of survival data.
They found that the TNM classification included some clinical inconsistencies and suggested that further revisions are necessary for its improvement. Similarly, PUBMED:26607711 evaluated the prognostic accuracy of the seventh edition of the TNM classification compared with the fifth and sixth editions for distal cholangiocarcinoma, and found that the seventh edition showed the best discrimination of all tested editions and was favorable in terms of predicting outcomes. In light of these findings, a growing body of evidence suggests that the present TNM classification for extrahepatic bile duct cancer may need to be revised, or even replaced, to better reflect the realities of endosonographic and histopathological staging. The studies indicate that the current classification may not adequately represent the complexity of tumor spread and invasion, which could affect treatment decisions and prognostic evaluations.
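As a rough illustration of how competing editions of a staging system can be compared on the same cohort, the sketch below applies a log-rank test and a concordance index to simulated survival data under two hypothetical stage assignments. It is not the analysis performed in any of the cited studies; all values, column names, and the reshuffling rule are invented for illustration.

import numpy as np
import pandas as pd
from lifelines.statistics import multivariate_logrank_test
from lifelines.utils import concordance_index

rng = np.random.default_rng(7)
n = 200
cohort = pd.DataFrame({"stage_old": rng.integers(1, 5, n)})   # hypothetical stages 1-4

# Assume a newer edition reshuffles some patients into adjacent stages
shift = rng.choice([-1, 0, 1], size=n, p=[0.15, 0.70, 0.15])
cohort["stage_new"] = np.clip(cohort["stage_old"] + shift, 1, 4)

# Simulated follow-up: higher stage, shorter time to event
cohort["time_months"] = rng.exponential(60 / cohort["stage_new"])
cohort["event"] = rng.binomial(1, 0.8, n)

for col in ["stage_old", "stage_new"]:
    lr = multivariate_logrank_test(cohort["time_months"], cohort[col], cohort["event"])
    # concordance_index expects higher scores = longer survival, hence the minus sign
    c_idx = concordance_index(cohort["time_months"], -cohort[col], cohort["event"])
    print(col, "log-rank p =", round(lr.p_value, 4), "concordance =", round(c_idx, 3))

The edition whose stages track survival more closely shows a higher concordance index and better-separated survival groups, which is the intuition behind the discrimination (AUC) comparisons reported above.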
Instruction: Does obesity predict bipolarity in major depressive patients? Abstracts: abstract_id: PUBMED:24210626 Does obesity predict bipolarity in major depressive patients? Background: Differential association of obesity in bipolar and unipolar Major Depressive Episode (MDE) has not been systematically studied. We explore the relationships between obesity and history of manic and hypomanic symptoms in a large national clinical sample of MDE patients. Method: The sample comprised 571 consecutive patients with a DSM-IV diagnosis of MDE enrolled in a 7 months period. The study involved 30 psychiatric facilities for outpatients, distributed throughout Italy. Diagnosis was formulated by psychiatrists with extensive clinical experience in the diagnosis and treatment of mood disorders. In all patients height (meters) and weight (kilograms) were systematically measured at the moment of the clinical evaluation. The severity of depressive and anxious symptomatology was self-evaluated by the means of Zung's questionnaires for depression and anxiety. For the evaluation of lifetime manic or hypomanic features, Hypomania Check List-32 was also administered. Obese and Non-Obese subgroups were identified on the basis of a >30 BMI cut off point. Results: BMI >30 was observed in 86 (15.1%) of our MDE patients. The Obese and Non-Obese subgroups did not report differences as regards to age and gender distribution. Obese patients reported a lower number of years of education in comparison with Non-Obese patients. As regards to marital status, Obese patients were more frequently married in comparison with the Non-Obese patients. Obese patients were more frequently belonging to the bipolar group than Non-Obese patients. Obese subjects also reported more frequently than Non-Obese an HCL total score >14. The effect of educational level, marital status and bipolar-unipolar distinction on the probability of Obese group membership was analyzed by stepwise logistic regression. Bipolar subtype resulted to be the strongest predictor of Obesity. Limitations: Pharmacological treatments and co-morbidity with other psychiatric disorders are not explored and accounted for in our analyses. Conclusions: Obesity in our national sample of patients with MDE is associated with bipolar subtype and (hypo)manic symptoms. These findings suggest the possibility that the presence of obesity in patients with MDE might be related to bipolarity. A common impulsive-addictive diathesis is proposed as mediating mechanism. Further longitudinal studies in clinical and non-clinical populations are necessary to better define the burden and the role of the association between obesity and bipolarity. abstract_id: PUBMED:28691250 Obesity in patients with major depression is related to bipolarity and mixed features: evidence from the BRIDGE-II-Mix study. Objectives: The Bipolar Disorders: Improving Diagnosis, Guidance and Education (BRIDGE)-II-Mix study aimed to estimate the frequency of mixed states in patients with a major depressive episode (MDE) according to different definitions. The present post-hoc analysis evaluated the association between obesity and the presence of mixed features and bipolarity. Methods: A total of 2811 MDE subjects were enrolled in a multicenter cross-sectional study. In 2744 patients, the body mass index (BMI) was evaluated. Psychiatric symptoms, and sociodemographic and clinical variables were collected, comparing the characteristics of MDE patients with (MDE-OB) and without (MDE-NOB) obesity.
Results: Obesity (BMI ≥30) was registered in 493 patients (18%). In the MDE-OB group, 90 patients (20%) fulfilled the DSM-IV-TR criteria for bipolar disease (BD), 225 patients (50%) fulfilled the bipolarity specifier criteria, 59 patients (13%) fulfilled DSM-5 criteria for MDEs with mixed features, and 226 patients (50%) fulfilled Research-Based Diagnostic Criteria for an MDE. Older age, history of (hypo)manic switches during antidepressant treatment, the occurrence of three or more MDEs, atypical depressive features, antipsychotic treatment, female gender, depressive mixed state according to DSM-5 criteria, comorbid eating disorders, and anxiety disorders were significantly associated with the MDE-OB group. Among (hypo)manic symptoms during the current MDE, psychomotor agitation, distractibility, increased energy, and risky behaviors were the variables most frequently associated with MDE-OB group. Conclusions: In our sample, the presence of obesity in patients with an MDE seemed to be associated with higher rates of bipolar spectrum disorders. These findings suggest that obesity in patients with an MDE could be considered as a possible marker of bipolarity. abstract_id: PUBMED:37920541 Prevalence and associated clinical factors for overweight and obesity in young first-episode and drug-naïve Chinese patients with major depressive disorder. Background: Obesity and overweight are common in young patients with major depressive disorder (MDD). However, the prevalence and associated clinical factors of obesity/overweight in young first-episode and drug-naïve (FEDN) MDD patients are rarely reported in China. Methods: A cross-sectional study of 917 young patients (aged 18-35 years) with FEDN MDD was performed. Demographic and clinical data were collected. Depression, anxiety, and psychotic symptoms were assessed using the Hamilton Depression Scale (HAMD), the Hamilton Anxiety Scale (HAMA), and the Positive and Negative Syndrome Scale (PANSS) positive subscale, respectively. Results: Among the young MDD patients, the prevalence of obesity and overweight was 4.14 and 52.89%, respectively. Compared to normal-weight patients, overweight patients were older, had a greater age of onset, and had higher TSH and TG levels. Male MDD patients had a higher risk of obesity than female patients. Compared to obese patients, normal-weight and overweight patients had significantly lower HAMD scores, TC levels, and rates of TSH abnormalities. Logistic regression analysis showed that age, age of onset, and sex were independently associated with obesity, and TSH was independently associated with both obesity and overweight, in young MDD patients. Conclusion: Our findings suggest a high prevalence of overweight and obesity in young FEDN MDD patients. Several demographic and clinical variables are independently associated with overweight/obesity in these young MDD patients. abstract_id: PUBMED:32594934 Role of obesity in systemic low-grade inflammation and cognitive function in patients with bipolar I disorder or major depressive disorder. Background: Studies have suggested the detrimental effects of obesity and systemic inflammation on the cognitive function of patients with bipolar or major depressive disorder. However, the complex associations between affective disorder, obesity, systemic inflammation, and cognitive dysfunction remain unclear. 
Methods: Overall, 110 patients with affective disorder (59 with bipolar I disorder and 51 with major depressive disorder) who scored ≥61 on the Global Assessment of Functioning and 51 age- and sex-matched controls were enrolled. Body mass index ≥25 kg/m2 was defined as obesity or overweight. Levels of proinflammatory cytokines-including interleukin-6, tumor necrosis factor (TNF)-α, and C-reactive protein (CRP)-were measured, and cognitive function was assessed using various methods, including the Wisconsin Card Sorting Test (WCST) and go/no-go task. Results: Patients with bipolar I disorder or major depressive disorder were more likely to be obese or overweight, had higher CRP and TNF-α levels, and had greater executive dysfunction in the WCST than the controls. TNF-α level (P < .05) but not affective disorder diagnosis or obesity/overweight was significantly associated with cognitive function deficits, although obesity/overweight and diagnosis were significantly associated with increased TNF-α level. Conclusions: Our findings may indicate that proinflammatory cytokines, but not obesity or overweight, have crucial effects on cognitive function in patients with bipolar I disorder or major depressive disorder, although proinflammatory cytokines and obesity or overweight were found to be strongly associated. The complex relationships between affective disorder diagnosis, proinflammatory cytokine levels, obesity or overweight, and cognitive function require further investigation. abstract_id: PUBMED:30188336 Characterizing, Assessing, and Treating Cognitive Dysfunction in Major Depressive Disorder. Learning Objectives: After participating in this activity, learners should be better able to: • Characterize cognitive dysfunction in patients with major depressive disorder. • Evaluate approaches to treating cognitive dysfunction in patients with major depressive disorder. Abstract: Cognitive dysfunction is a core psychopathological domain in major depressive disorder (MDD) and is no longer considered to be a pseudo-specific phenomenon. Cognitive dysfunction in MDD is a principal determinant of patient-reported outcomes, which, hitherto, have been insufficiently targeted with existing multimodal treatments for MDD. The neural structures and substructures subserving cognitive function in MDD overlap with, yet are discrete from, those subserving emotion processing and affect regulation. Several modifiable factors influence the presence and extent of cognitive dysfunction in MDD, including clinical features (e.g., episode frequency and illness duration), comorbidity (e.g., obesity and diabetes), and iatrogenic artefact. Screening and measurement tools that comport with the clinical ecosystem are available to detect and measure cognitive function in MDD. Notwithstanding the availability of select antidepressants capable of exerting procognitive effects, most have not been sufficiently studied or rigorously evaluated. Promising pharmacological avenues, as well as psychosocial, behavioral, chronotherapeutic, and complementary alternative approaches, are currently being investigated. abstract_id: PUBMED:33243554 The association of body mass index (BMI) with treatment outcomes in patients with major depressive disorder. Background: In patients with major depressive disorder (MDD), poor antidepressant treatment response might be associated with an excessive body mass index (BMI). However, the impact of underweight on treatment response is unclear.
Moreover, it has not been studied whether a continuous or categorical BMI measure should be used to predict of treatment response. Methods: Post-hoc analysis of data collected in a clinical trial including adults with MDD (n=202) reporting outcomes of antidepressant medication, i.e. paroxetine, mirtazapine or paroxetine+mirtazapine. Measures included baseline BMI (underweight=BMI <18.5, normal weight=BMI 18.5-23.9, overweight=BMI≥24) and symptom severity (17-item-Hamilton Depression scale; HAMD-17) assessed at weeks 0, 2, 3, 4, 6 and 8. Univariate analyses were used to explore the effect of baseline BMI on HAMD-17 reduction, response (defined as ≥50% HAMD-17 reduction) and remission (defined as HAMD-17 ≤7) at endpoint. Pearson correlation were used to explore the relationship between body weight, BMI as continuous measure and HAMD-17 reduction. Logistic regression was used to determine the predictors for remission. Multiple linear regression was used to establish the correlation of BMI with change of HAMD-17. Results: 111 (55.0%) patients were normal weight, 20 (9.9%) were underweight, 71 (35.1%) were overweight. Underweight patients showed the best improvement to antidepressant treatment. Non-remitters had greater body weight and BMI than remitters (P<0.05). The reduction of HAMD-17 was correlated with baseline body weight (r=-0.16, P=0.032) and BMI (r=-0.19, P=0.012). Logistic regression found patients with BMI<24 to be 2 times (OR=1.958, 95%CI: 1.015, 3.774) remitters (P=0.045) than overweight patients. The multiple linear regression showed that the change of HAMD-17 total score decreased with increasing BMI (β=-0.32, P = 0.016). Conclusion: We confirmed that BMI can predict treatment outcomes in MDD. For the first time we found that underweight patients benefit most from antidepressant treatment. The findings may be useful to physicians in their decision regarding the choice of antidepressants according to BMI. abstract_id: PUBMED:27701935 Surrogate markers of visceral fat and response to anti-depressive treatment in patients with major depressive disorder: a prospective exploratory analysis. Background: Body mass index (BMI) and body weight have been shown to be associated to treatment outcome in patients with major depressive disorder, but this relationship is not clear. Aims: The aim of this study was to prospectively investigate whether visceral fat, as measured by hip-to-waist ratio and waist circumference, affects treatment outcome in patients with major depressive disorder in patients attending a hospital psychiatric care unit in Denmark. Methods: The study was conducted as an observational prospective study including 33 patients with major depressive disorder. Assessments were made at enrolment and after 8 weeks. Primary variables were hip-to-waist ratio and waist circumference. Outcome were remission or response of depressive symptoms measured with the Hamilton Depression Rating Scale (HAM-D17) interviews and HAM-D6 self-rating questionnaires. Results: No differences were found in outcome between groups of patients with high vs low visceral fat in this population. Conclusions: The lack of association was evident for all surrogate markers of visceral fat, and suggests that visceral fat has no impact on outcomes of depressive symptoms.
However, study limitations might have contributed to this lack of association, especially sample size and considerable variations on multiple parameters including treatment received during the 8 weeks of follow-up. abstract_id: PUBMED:32924904 Cardiometabolic disease risk among siblings of patients with major depressive disorder. Studies have suggested an association between metabolic and cerebrocardiovascular diseases and major depressive disorder (MDD). However, the risk of metabolic and cerebrocardiovascular diseases in the unaffected siblings of patients with MDD remains uncertain. Using the Taiwan National Health Insurance Research Database, 22,438 unaffected siblings of patients with MDD and 89,752 age-/sex-matched controls were selected and followed up from 1996 to the end of 2011. Individuals who developed metabolic and cerebrocardiovascular diseases during the follow-up period were identified. Compared with the controls, the unaffected siblings of patients with MDD had a higher prevalence of metabolic diseases, such as hypertension (5.0% vs. 4.5%, p = 0.007), dyslipidemia (5.6% vs. 4.8%, p < 0.001), and obesity (1.7% vs. 1.5%, p = 0.028), and cerebrocardiovascular diseases, such as ischemic stroke (0.6% vs. 0.4%, p < 0.005) and ischemic heart disease (2.1% vs. 1.7%, p < 0.001). Logistic regression analyses revealed that the unaffected siblings of patients with MDD were more likely to develop hypertension, dyslipidemia, ischemic stroke, and ischemic heart diseases during the follow-up period than the controls. Our study revealed a familial coaggregation between MDD and metabolic and cerebrocardiovascular diseases. Additional studies are required to investigate the shared pathophysiology of MDD and metabolic and cerebrocardiovascular diseases. abstract_id: PUBMED:34857255 Erectile Dysfunction is a Modifiable Risk Factor for Major Depressive Disorder: Analysis of a Federated Research Network. Background: Erectile dysfunction is one of many conditions associated with depression, but few studies exist to establish the risk of major depressive disorder (MDD) in the large population of men with erectile dysfunction, and it is unclear whether erectile dysfunction (ED) treatment is associated with decreased rates of MDD. Aim: We determined the risk of major depressive disorder in men with erectile dysfunction and evaluated whether treatment of ED with phosphodiesterase-5 inhibitor or penile prosthesis is associated with a lower risk of developing major depressive disorder. Methods: We reviewed a large, retrospective, cohort that utilized electronic health record data collected by the TriNetX Research Network, a global federated database that provides healthcare data for analysis. We performed multiple comparisons: men with ED against men without ED; men with ED treated with phosphodiesterase-5 inhibitors against untreated ED patients, and of men with ED who received penile prosthesis against those who did not. We assessed major depressive disorder (ICD-10-CM F32-F33) as a primary outcome and used propensity score matching to control for ethnicity, race, type 2 diabetes mellitus (E11), essential hypertension (I10), acute myocardial infarction (I21), chronic ischemic heart disease (I25), cerebral infarction (I63), overweight and obesity (E66), personal history of nicotine (Z87.891), hypogonadism (E29.1), and alcohol related disorders (F10).
Outcomes: We assessed new diagnosis of major depressive disorder (F32-F33) within a 3-year time window following index event of ED diagnosis, visit to healthcare organization, or ED treatment with phosphodiesterase-5 inhibitor or penile prosthesis as the primary outcome. Results: ED was associated with major depressive disorder both before and after (OR 2.00, 95% CI 1.94-2.06) controlling for confounding variables through propensity score matching. Men who received ED therapies had lower rates of depression compared to those who did not, whether they were treated with phosphodiesterase-5 inhibitor (0.80, 0.77-0.83) or penile prosthesis (0.73, 0.60-0.89). Strengths And Limitations: Strengths include a large sample size and robust statistical techniques. Limitations include lack of detailed information regarding clinical severity and socioeconomic factors. Clinical Implications: Our findings indicate that clinicians should consider evaluating depressive symptoms among men with erectile dysfunction and counsel them regarding the risk of developing major depressive disorder. Conclusions: Erectile dysfunction is associated with major depressive disorder, but treatment is associated with decreased rates of MDD. S Nackeeran, A Havanur, J Ory, et al. Erectile Dysfunction is a Modifiable Risk Factor for Major Depressive Disorder: Analysis of a Federated Research Network. J Sex Med 2021;18:2005-2011. abstract_id: PUBMED:12711405 Relationship between body-mass index and depressive symptoms in patients with major depression. Background: Appetite and weight changes are commonly occurring symptoms of depressive illness. The occurrence of these symptoms may not only be related to depressive mood but may also be related to body weight. Aim: To examine the relationship between symptoms of depression and body weight. Methods: Symptoms of depression were assessed by the Montgomery-Asberg depression rating scale (MADRS) in 1694 patients seeking medical help and fulfilling DSM-IV criteria for a major depressive episode. The level of anxiety was evaluated by Covi's anxiety scale. Body weight was expressed as body-mass index (BMI, kg/m(2)) and treated both categorically and continuously. Results: The total MADRS score was not statistically different across the four BMI categories (underweight: 32.3 +/- 0.6, normal weight: 30.9 +/- 0.2, grade 1: 30.6 +/- 0.3, and 2 overweight: 30.6 +/- 0.6, P = 0.053 (NS)). In women with BMI ≤18.5 kg/m(2) MADRS was significantly higher than that in other BMI categories (underweight: 32.4 +/- 0.6, normal weight: 30.6 +/- 0.2, grade 1: 30.6 +/- 0.4, and 2 overweight: 30.6 +/- 0.6: P = 0.036). Increasing BMI was related to a linear decrease in symptoms "Reduced appetite" (P < 0.0001) and "Pessimistic thoughts" (P < 0.003). The presence of melancholic or atypical features was not associated with lower or higher BMI, respectively. Conclusions: In patients with major depression higher body weight is likely to be associated with less reduction in appetite and less pessimistic thoughts. Answer: Yes, obesity does appear to predict bipolarity in major depressive patients. Several studies have found an association between obesity and bipolarity in patients with major depressive episodes (MDE). One study found that obesity in a national sample of MDE patients was associated with the bipolar subtype and (hypo)manic symptoms, suggesting that the presence of obesity in MDE patients might be related to bipolarity.
The study proposed a common impulsive-addictive diathesis as a mediating mechanism (PUBMED:24210626). Another study, the BRIDGE-II-Mix study, found that obesity in MDE patients was associated with higher rates of bipolar spectrum disorders, suggesting that obesity could be considered a marker of bipolarity (PUBMED:28691250). Furthermore, a study on young first-episode and drug-naïve Chinese patients with major depressive disorder (MDD) found a high prevalence of overweight and obesity, with several demographic and clinical variables independently associated with overweight/obesity in these patients (PUBMED:37920541). Another study indicated that while obesity and systemic inflammation are associated with cognitive dysfunction in patients with bipolar I disorder or MDD, proinflammatory cytokines, rather than obesity itself, may have crucial effects on cognitive function, although obesity and proinflammatory cytokines were found to be strongly associated (PUBMED:32594934). In summary, the evidence suggests that obesity is associated with bipolarity in patients with major depressive episodes, and it could potentially serve as a marker for bipolar spectrum disorders in these individuals.
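For readers who want to see how an association of this kind is typically quantified, the sketch below fits a logistic regression of obesity on bipolar subtype and two covariates, echoing the stepwise logistic regression described in PUBMED:24210626. The data, variable names, and coefficients are simulated for illustration and do not reproduce the study's results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 571  # same order of magnitude as the PUBMED:24210626 sample; records are simulated
patients = pd.DataFrame({
    "bipolar_subtype": rng.binomial(1, 0.30, n),
    "married": rng.binomial(1, 0.50, n),
    "years_education": rng.normal(11, 3, n),
})

# Simulated obesity indicator loosely tied to bipolar subtype, purely for illustration
lp = (-2.2 + 0.9 * patients["bipolar_subtype"] + 0.3 * patients["married"]
      - 0.05 * patients["years_education"])
patients["obese"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

# Probability of membership in the obese group modeled from clinical covariates
fit = smf.logit("obese ~ bipolar_subtype + married + years_education", data=patients).fit()
print(np.exp(fit.params))      # odds ratios; bipolar_subtype plays the role of the strongest predictor here
print(np.exp(fit.conf_int()))  # 95% confidence intervals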
Instruction: USA endometrial cancer projections to 2030: should we be concerned? Abstracts: abstract_id: PUBMED:25531045 USA endometrial cancer projections to 2030: should we be concerned? Aim: As the incidence of endometrial cancer (EC) increased considerably since 2007, this study aimed to project the burden of EC to the year 2030. Methods: Multivariate linear regression was used to project EC incidence by modeling trends in EC incidence from 1990 to 2013, while accounting for temporal changes in obesity, hysterectomy and smoking. Results: The best-fitting model predicting EC rates included a time effect plus effects for hysterectomy (12-year lag), severe obesity (3-year lag) and smoking (9-year lag). The best-fitting model projected an increase to 42.13 EC cases per 100,000 by the year 2030, a 55% increase over 2010 EC rates. Conclusion: The projected increase of EC over next 16 years indicates the need for close monitoring of EC trends. abstract_id: PUBMED:27753051 Endometrial Cancer Trends by Race and Histology in the USA: Projecting the Number of New Cases from 2015 to 2040. Objectives: The aim of this study is to explore incidence and incidence-based mortality trends for endometrial cancer in the USA and project future incident cases, accounting for differences by race and histological subtype. Methods: Data on age-adjusted and age-specific incidence and mortality rates of endometrial cancer were obtained from the Surveillance, Epidemiology, and End Results 18 registries. Trends in rates were analyzed using Joinpoint regression, and average annual percent change (AAPC) in recent years (2006-2011) was computed for histological subtypes by race. Age, histological, and race-specific rates were applied to US Census Bureau population census estimates to project new cases from 2015 to 2040, accounting for observed AAPC trends, which were progressively attenuated for the future years. Results: The annual number of cases is projected to increase substantially from 2015 to 2040 across all racial groups. Considerable variation in incidence and mortality trends was observed both between and within racial groups when considering histology. Conclusions: As the US population undergoes demographic changes, incidence of endometrial cancer is projected to rise. The increase will occur in all racial groups, but larger increases will be seen in aggressive histology subtypes that disproportionately affect black women. abstract_id: PUBMED:30920776 Orphan disease status of cancer cachexia in the USA and in the European Union: a systematic review. Background: Cachexia has significant impact on the patients' quality of life and prognosis. It is frequently observed in patients with cancer, especially in advanced stages, but prevalence data for the overall population are lacking. Good quality estimates of cancer cachexia in general and for each of the major cancer types would be highly relevant for potential treatment development efforts in this field. Both the USA and European Union (EU) have implemented special clinical development rules for such rare disorders what are called 'orphan diseases'. The cut-off level for a disease to be considered an orphan disease in the USA is 200 000 people (0.06% of the population) and EU is 5 per 10 000 people (0.05% of the population). Methods: For this systematic review, we searched at PubMed (from inception to 31 January 2018) to identify clinical studies that assessed the prevalence of cachexia in cancer patients at risk. 
Studies reporting the prevalence of either cancer cachexia or wasting disease in the top-10 cancer types and 4 other selected cancer types known to be particularly commonly complicated by cachexia were included in this analysis (i.e. prostate cancer, breast cancer, colorectal cancer, melanoma, endometrial cancer, thyroid cancer, urinary bladder cancer, non-hodgkin lymphoma, lung cancer, kidney and renal pelvis cancer, head and neck cancer, gastric cancer, liver cancer, and pancreatic cancer). We calculated the current burden of cancer cachexia, disease by disease, in the USA and in the EU and compared them to the current guidelines for the definition of orphan disease status. Results: We estimate that in 2014 in the USA, a total of 527 100 patients (16.5 subjects per 10 000 people of the total population), and in 2013 in the EU, a total of 800 300 patients (15.8 subjects per 10 000 people of the total population) suffered from cancer cachexia (of any kind). In the 14 separately analysed cancer types, the prevalence of cancer cachexia in the USA ranged between 11 300 (0.4/10 000, gastric cancer) and 92 000 patients (2.9/10 000, lung cancer) and in the EU between 14 300 (0.3/10 000, melanoma of the skin) and 150 100 (3.0/10 000, colorectal cancer). Conclusions: The absolute number of patients affected by cancer cachexia in each cancer group is lower than the defined thresholds for orphan diseases in the USA and EU. Cancer cachexia in each subgroup separately should be considered an orphan disease. abstract_id: PUBMED:32847996 Race-driven survival differential in women diagnosed with endometrial cancers in the USA. Objective: African American women are increasingly being diagnosed with advanced and type II histology endometrial cancers. Outcomes have been observed to be worse in African American women, but whether or not race itself is a factor is unclear. We sought to evaluate the rates of diagnosis and outcomes on a stage-by-stage basis with respect to race using a large national cancer registry database. Methods: The National Cancer Data Base was searched for patients with surgically staged non-metastatic endometrial cancer between 2004 and 2015. Women were excluded if surgical stage/histology was unknown, there was no follow-up, or no information on subsequent treatment. Pairwise comparison was used to determine temporal trends and Cox hazards tests with Bonferroni correction were used to determine overall survival. Results: A total of 286 920 women were diagnosed with endometrial cancer and met the criteria for analysis. Median follow-up was 51 months (IQR 25.7-85.3). In multivariable models, in women with stage I disease, African American women had a higher risk of death than Caucasian women (HR 1.262, 95% CI 1.191 to 1.338, p<0.001) and Asian/Pacific Islander women had a lower risk of death than Caucasian women (HR 0.742, 95% CI 0.689 to 0.801, p<0.001). This held for African American women with stage II type I and type II disease (HR 1.26, 95% CI 1.109 to 1.444, p<0.001 and HR 1.235, 95% CI 1.098 to 1.388, p<0.001) but not for Asian/Pacific Islander women. African American women with stage IIIA-B disease also had a higher risk of death for type I and type II disease versus Caucasian women (HR 1.221, 95% CI 1.045 to 1.422, p=0.010 and HR 1.295, 95% CI 1.155 to 1.452, p<0.001).
Asian/Pacific Islander women had a lower risk of death than Caucasian women with type I disease (HR 0.783, 95% CI 0.638 to 0.960, p=0.019) and type II disease (HR 0.790, 95% CI 0.624 to 0.999, p=0.05). African American women with stage IIIC1-2 had a higher risk of death with type I disease (HR 1.343, 95% CI 1.207 to 1.494, p<0.001) and type II disease (HR 1.141, 95% CI 1.055 to 1.233, p=0.001) whereas there was no significant difference between Caucasian women and Asian/Pacific Islander women. Conclusion: Race appears to play an independent role in survival from endometrial cancer in the USA, with African American women having worse survival on a stage-for-stage basis compared with Caucasian women. abstract_id: PUBMED:38453180 Delivery of hereditary cancer genetics services to patients newly diagnosed with ovarian and endometrial cancers at three gynecologic oncology clinics in the USA, Brazil, and Mexico. Objective: Three gynecologic oncology clinics located in the USA, Brazil, and Mexico collaborated to evaluate their delivery of hereditary cancer genetics services. This descriptive retrospective review study aimed to establish baseline rates and timeliness of guideline-recommended genetics service delivery to patients with ovarian, fallopian tube, primary peritoneal (ovarian), and endometrial cancers at each clinic. Methods: Patients who were newly diagnosed with ovarian and endometrial cancers between September 1, 2018 and December 31, 2020 were identified from the medical records of the clinics. Genetics service delivery metrics included the rates of mismatch repair deficiency tumor testing for patients with endometrial cancer (microsatellite instability/immunohistochemistry, MSI/IHC), referral to genetics services for patients with ovarian cancer, completed genetics consultations, and germline genetic testing for patients with ovarian and endometrial cancers. Timeliness was calculated as the average number of days between diagnosis and the relevant delivery metric. Descriptive statistics were used to analyze data. Results: In total, 1195 patients (596 with ovarian cancer, 599 with endometrial cancer) were included in the analysis, and rates of genetics service delivery varied by clinic. For patients with ovarian cancer, referral rates ranged by clinic from 32.6% to 89.5%; 30.4-65.1% of patients completed genetics consultation and 32.6-68.7% completed genetic testing. The timeliness to genetic testing for patients with ovarian cancer ranged by clinic from 107 to 595 days. A smaller proportion of patients with endometrial cancer completed MSI/IHC testing (10.0-69.2%), with the average time to MSI/IHC ranging from 15 to 282 days. Rates of genetics consultation among patients with endometrial cancer ranged by clinic from 10.8% to 26.0% and 12.5-16.6% completed genetic testing. Conclusions: All clinics successfully established baseline rates and timeliness of delivering hereditary cancer genetics services to patients with ovarian and endometrial cancers. Lower rates of delivering genetics services to patients with endometrial cancer warrant additional research and quality improvement efforts.
The most important criterion for the differential diagnosis from the endometrial stromal sarcoma (ESS) is a well-defined noninfiltrative expansile border. However, the definition of the ESN also includes a tumor with the presence of focal irregularities or fingerlike projections of the margin into the adjacent myometrium, none of which exceeds 2 to 3 mm. In some cases, however, it is difficult to differentiate marginal irregularities of ESN from "true invasion" of ESS. We described a case of extrauterine ESS that was associated with small intramyometrial stromal lesions with limited infiltration. The intramyometrial lesion could be definitionally categorized as ESNs. However, peritumoral fibroblastic band and inflammatory stromal reactions, irregular fingerlike projections, and multiple concurrent extrauterine ESS strongly suggested that these were small primary focus of ESS mimicking ESN. We propose that the patient with endometrial stromal tumor with limited infiltration should be more carefully followed than the usual ESN for possible metastasis and that a hysterectomy with meticulous histological examination of the specimen be performed before a diagnosis of primary extrauterine ESS is made, even in a case showing a grossly or radiologically normal uterus. abstract_id: PUBMED:29418038 Imaging in gynecological disease (13): clinical and ultrasound characteristics of endometrioid ovarian cancer. Objective: To describe the clinical and ultrasound characteristics of ovarian pure endometrioid carcinomas. Methods: This was a retrospective multicenter study of patients with a histological diagnosis of pure endometrioid carcinoma. We identified 161 patients from the International Ovarian Tumor Analysis (IOTA) database who had undergone preoperative ultrasound examination by an experienced ultrasound examiner between 1999 and 2016, and another 78 patients from the databases of the departments of gynecological oncology in the participating centers. All tumors were described using IOTA terminology. In addition, one author reviewed all available ultrasound images and described them using pattern recognition. Results: Median age of the 239 patients was 55 years (range, 19-88 years). On ultrasound examination, two (0.8%) endometrioid carcinomas were described as unilocular cysts, three (1.3%) as multilocular cysts, 37 (15.5%) as unilocular-solid cysts, 115 (48.1%) as multilocular-solid cysts and 82 (34.3%) as solid masses. Median largest tumor diameter was 102.5 mm (range, 20-300 mm) and median largest diameter of the largest solid component was 63 mm (range, 9-300 mm). Papillary projections were present in 70 (29.3%) masses. Most cancers (188 (78.7%)) were unilateral. In 49 (20.5%) cases, the cancer was judged by the pathologist to develop from endometriosis. These cancers, compared with those without evidence of tumor developing from endometriosis, more often manifested papillary projections on ultrasound (46.9% (23/49) vs 24.7% (47/190)), were less often bilateral (8.2% (4/49) vs 24.7% (47/190)) and less often associated with ascites (6.1% (3/49) vs 28.4% (54/190)) and fluid in the pouch of Douglas (24.5% (12/49) vs 48.9% (93/190)). Retrospective analysis of available ultrasound images using pattern recognition revealed that many tumors without evidence of tumor developing from endometriosis (36.3% (41/113)) had a large central solid component entrapped within locules, giving the tumor a cockade-like appearance. 
Conclusions: Endometrioid cancers are usually large, unilateral, multilocular-solid or solid tumors. The ultrasound characteristics of endometrioid carcinomas developing from endometriosis differ from those without evidence of tumor developing from endometriosis, the former being more often unilateral cysts with papillary projections and no ascites. Copyright © 2018 ISUOG. Published by John Wiley & Sons Ltd. abstract_id: PUBMED:25315381 Surgical and systemic management of endometrial cancer: an international survey. Purpose: To ascertain the spectrum of clinical management of endometrial carcinoma (EC), the largest international survey was conducted to evaluate and identify differences worldwide. Methods: After validation of a 15-item questionnaire regarding surgical and adjuvant treatment of EC in Germany, an English-adapted questionnaire was put online and posted to all the major gynecological cancer societies worldwide for further distribution commencing in 2010 and continued for 26 months. Results: A total of 618 institutions around the world participated: Central Europe (CE), Southern Europe (SE), Northern Europe (NE), Asia and USA/Canada/UK. Both a therapeutic and a staging value were attributed to systematic pelvic and paraaortic lymph node dissection (LND) in CE (74.6%) and in Asia (67.2%), as opposed to USA/UK, where LND was mainly for staging purposes (53.5%; p < 0.001). LND was performed up to the renal veins in CE in 86.8%, in Asia in 80.8%, in USA/UK in 51.2% and in SE in 45.1% (p < 0.001) of cases. In advanced disease, centers in Asia most often used adjuvant chemotherapy alone (93.6%), as opposed to centers in SE, CE and UK/USA, which employed combination chemo-radiotherapy in 90.9% (p < 0.001) of cases. Paclitaxel/carboplatin was used most often, followed by doxorubicin/cisplatin (75 vs. 23.3%; p < 0.001). In total, 94% of all participants supported the concept of treating EC patients within appropriate clinical trials. Conclusions: There is a broad range in both the surgical and adjuvant treatment of EC across different countries. Large-scale multicenter prospective trials are warranted to establish consistent, evidence-based guidelines to optimize treatment worldwide. abstract_id: PUBMED:32529410 Sacituzumab Govitecan: First Approval. Sacituzumab govitecan (sacituzumab govitecan-hziy; Trodelvy™) is a Trop-2-directed antibody conjugated to a topoisomerase I inhibitor (SN-38) that is being developed by Immunomedics for the treatment of solid tumours, including breast cancer. In April 2020, sacituzumab govitecan received accelerated approval in the USA for the treatment of adult patients with metastatic triple-negative breast cancer (mTNBC) who have received at least two prior therapies for metastatic disease. Sacituzumab govitecan is undergoing phase III development for breast cancer in the USA and EU, and phase II development for urothelial cancer. It is also being explored for brain metastases, glioblastoma, endometrial cancer and prostate cancer. This article summarizes the milestones in the development of sacituzumab govitecan leading to this first approval for mTNBC. Answer: Yes, there should be concern regarding the projections of endometrial cancer (EC) in the USA by 2030. Studies have projected a significant increase in the incidence of EC.
One study using multivariate linear regression to model trends in EC incidence from 1990 to 2013, while accounting for changes in obesity, hysterectomy, and smoking, predicted a 55% increase in EC rates by 2030 compared to 2010 rates (PUBMED:25531045). Another study exploring incidence and mortality trends by race and histology in the USA projected a substantial increase in the annual number of EC cases from 2015 to 2040 across all racial groups, with larger increases expected in aggressive histology subtypes that disproportionately affect black women (PUBMED:27753051). These projections indicate a growing burden of EC and highlight the need for close monitoring of EC trends and potentially increased efforts in prevention, early detection, and treatment to manage this anticipated rise in cases.
Instruction: Do prescribing formularies help GPs prescribe from a narrower range of drugs? Abstracts: abstract_id: PUBMED:9463982 Do prescribing formularies help GPs prescribe from a narrower range of drugs? A controlled trial of the introduction of prescribing formularies for NSAIDs. Background: Previous studies have suggested that prescribing formularies may promote rational prescribing. The range of drugs prescribed may be one aspect of rational prescribing. Aim: To determine whether the introduction of prescribing formularies helps general practitioners (GPs) to prescribe from a narrower range of non-steroidal anti-inflammatory drugs (NSAIDs). Method: General practices in Lincolnshire were offered help in developing prescribing formularies. Ten practices decided to develop a formulary for NSAIDs. Level 3 PACT data were used to determine whether changes in prescribing had occurred with the introduction of the formulary. Matched controls were used to determine whether similar changes had occurred in other practices. Results: Between April and June 1992, and during the same period in 1993, practices that introduced a formulary for NSAIDs reduced the mean number of different drugs used (14.3 versus 13.1, P = 0.04) and increased the percentage of NSAID-defined daily doses coming from the three most commonly used drugs (70.1% versus 74.8%, P = 0.02). Similar changes were not seen in control practices. Conclusion: Following the development of a formulary for NSAIDs, practices prescribed from a narrower range of drugs and focused a greater proportion of their prescribing on their three most commonly used drugs. abstract_id: PUBMED:11549213 Personal formularies. An index of prescribing quality? Objectives: (1) To determine the extent to which Australian general practitioners (GPs) restrict the numbers of agents they prescribe within a drug class ('personal formularies'); (2) To assess concordance of these drug choices with standards based on established guidelines or recognised good prescribing practices; (3) To assess the potential of these measures as indicators of the quality of prescribing. Methods: Australian Health Insurance Commission (HIC) prescription data (1994 1997) for around 15,400 GPs providing 1500 or more Medicare services per year were analysed. Measures of an individual GP's use of a personal formulary (determined by number of agents) and concordance with prescribing criteria based on specified drugs for five classes of commonly prescribed drugs were derived. Results: Non-steroidal anti-inflammatory drugs (NSAIDs): GP concordance was higher with a non-specified personal formulary (any five NSAIDs) than with a list of specified drugs (five NSAIDs of 'low' or 'medium' risk of gastrointestinal toxicity), and concordance with both increased over time. In 1997, around 70% of GPs used five or fewer NSAIDs for 90% of their prescribing; 47% of GPs had 90% of prescribing from five selected agents. Angiotensin converting enzyme inhibitors/angiotensin-II receptor antagonists: The introduction of new agents appeared to increase the size of the GPs' personal formularies, and concordance with defined standards decreased over time. Antibacterial agents: Concordance with a specified drug standard (nine drugs listed in the Australian Antibiotic Guidelines) increased substantially over time but was largely due to increased prescribing of two heavily promoted drugs. Beta-blocking agents: Over time, GPs restricted most prescribing to two agents, atenolol and metoprolol. 
Calcium channel blockers: GPs did not appear to restrict prescribing of these drugs; most GPs prescribed all five agents available. Conclusions: Australian GPs use 'personal formularies'. Formulary size varies with the drug class, can change over time as new agents become available, and its contents can be influenced by promotional activities. Prescribing standards based on numbers of drugs used may not always reflect rational prescribing choices. Criteria based on specified drugs provide more rigorous prescribing standards, but may give a misleading picture of prescribing quality in the absence of information on patients and the indications for treatment. Personal formulary measures are potentially useful prescribing indicators but need to be carefully defined and interpreted. GPs should be encouraged to identify their personal formularies and review the drugs included in them. abstract_id: PUBMED:16187132 General practitioners choose within a narrow range of drugs when initiating new treatments: a cohort study of cardiovascular drug formularies. Objective: The aims of this study were (1) to develop and evaluate a new method for investigating personal drug formularies in general practice and (2) to test the hypothesis that there is a difference between personal drug formularies for incident and ongoing drug use. Methods: In 2002, we studied prescribing patterns of beta-blockers, calcium channel blockers, angiotensin converting enzyme (ACE) inhibitors, angiotensin-II antagonists and statins in the County of Funen, Denmark (180 general practices, 472,000 inhabitants). For each practice, we distinguished between an incident drug formulary consisting of prescriptions for new users and an ongoing drug formulary including prescriptions for patients already in treatment. Prescription data were retrieved from the Odense University Pharmacoepidemiologic Database (OPED). Four different formulary measures were evaluated and used for comparing incident and ongoing drug use. Results: General practitioners' (GPs') incident drug formularies comprised significantly fewer drugs than their ongoing drug formularies for all drug groups except angiotensin-II antagonists. The difference in the total number of drugs used was between 1.8 and 3.3. We found differences between 0.5 and 1.6 analogues in the DU 90% (number of analogues accounting for 90% of the prescribed volume measured in defined daily doses) segment and the formulary selectivity index between 0.05 and 0.12. The preference for the most prescribed analogue was 9-18% higher among incident patients. The formulary selectivity index was highly correlated with the other formulary measures and quantified both range and skewed distribution of drug choice. Conclusion: Analysing GPs' prescriptions to incident patients is a simple and inexpensive method for studying their own current personal drug formularies. GPs choose within a narrow range of analogues for incident patients. abstract_id: PUBMED:22384831 An evaluation of team and individual formularies to support independent prescribing in mental health care. Mental health nurses have been entitled to train to become independent prescribers since May 2006; although the number of trained nurse prescribers seems to have increased steadily, a significant number of them have not actually undertaken prescribing roles on completion of their training or have limited their practice to supplementary prescribing. 
In order to support existing independent prescribers and to assist those nurses who have trained but are yet to prescribe independently, a mental health trust has piloted the use of individual and team formularies. An evaluation of this project indicated that formularies were well received by existing independent prescribers and were seen as a helpful instrument to support newly qualified and supplementary prescribers in their bid to become independent. Formularies can clarify and formalize each prescriber's area of competence, thus setting clear boundaries, reducing uncertainty and enhancing prescribers' confidence and willingness to take on this new role. Formularies may also be used to enhance communication with service users and further develop shared decision making. Effective procedures need to be in place to ensure formularies are up to date and reflect local practice. abstract_id: PUBMED:7743534 Drug formularies--good or evil? A view from the EEC. The use of standard treatment schemes is believed to provide easier, safer, and more rational prescribing of drugs. Provision of neutral information on relative costs and merits of individual therapies may induce physicians to prescribe more rationally and cost effectively. A drug formulary must be developed by using clinical evidence of efficacy, taking into account all available preparations, choosing the most cost effective formulation of the drug, and as far as possible complying with formularies implemented by other health institutions. Planning process and preparation time are fundamental to formulary development work. A restrictive drug policy in the hospital environment will save time in the pharmacy, as fewer drugs are handled, and will improve prescribing as the staff have fewer drugs to familiarize themselves with. The potential benefits from increased generic prescribing, lower stock levels, and closer working relationships between members of the formulary team should all be considered. The benefit of this interdisciplinary approach to drug therapy will be rapidly recognized. abstract_id: PUBMED:37574106 To prescribe or not to prescribe? A review of the Prescribing Competencies Framework for dentistry. Objectives: Dentists in Australia are the second largest prescriber group, and are generally not formally taught how to prescribe. The objective of this review is to describe the Prescribing Competencies Framework and its relevance to dentistry. Data: The four-model stage of prescribing by Coombes and colleagues, and the seven competencies within the Prescribing Competencies Framework devised by the Australian National Prescribing Service MedicineWise, are discussed and applied to dentistry. Sources And Study Selection: Each of the seven competencies are analysed and detailed in the context of clinical dental practice. Competencies 1-5 describe the skillset and tasks required by dentists to safely prescribe, whereas Competencies 6 and 7 describe the clinical environment and recommended resources to support dentists to prescribe safely and effectively. Conclusions: The Prescribing Competencies Framework provides an overview of safe and effective prescribing. Prescribing is a process, and a separate skillset to clinical dentistry. The process involves information gathering, clinical assessment, effective communication and review of the patient. 
Access to timely and appropriate resources and relevant electronic sources of health information for clinicians is important to provide the support required for better-informed prescribing decisions. The framework describes a patient-centered prescribing process, and ultimately prescribing should be a shared decision between the dentist and the patient. Clinical Significance: Safe and effective prescribing is an integral part of dentistry and dentists are the second largest prescriber group. However, dentists display high rates of inappropriate and unnecessary prescribing, and to minimise errors, the Prescribing Competencies Framework has been established. This article details how the Framework applies to clinical dental practice. abstract_id: PUBMED:17579228 Identifying widely covered drugs and drug coverage variation among Medicare Part D formularies. Context: Clinicians can find it difficult to know which drugs are covered for their Medicare patients because formularies vary widely among Medicare Part D plans and many states have 50 or more such plans. Objective: To determine whether Part D formularies in California (the state with the most Medicare beneficiaries) and Hawaii have at least 1 drug within each of 8 treatment classes for hypertension, hyperlipidemia, and depression that can be identified for clinicians as "widely covered" by the vast majority of Part D plans. Design And Setting: Use of the medicare.gov Web site (March 1-April 15, 2006) to examine 72 California and 43 Hawaii Part D formularies' coverage of 8 treatment classes (angiotensin-converting enzyme inhibitors, angiotensin II receptor blockers, beta-blockers, calcium channel blockers, loop diuretics, selective serotonin reuptake inhibitors, statins, and thiazide diuretics), with evaluation of how often drugs were widely covered (defined as inclusion in ≥90% of formularies at co-payments of ≤$35 without prior authorization). Main Outcome Measure: Identification of treatment classes with at least 1 widely covered drug. Results: For California, coverage for the 75 drugs examined ranged from 7% to 100%. Despite this variation, 7 of 8 classes (excluding angiotensin II receptor blockers) had at least 1 widely covered drug. Of the 34 widely covered drugs (45%), all but 2 were generic. Restricting widely covered to include 95% or more of formularies at co-payments of $15 or less still resulted in 7 of 8 classes with at least 1 widely covered drug. Overall, 73% of generic drugs and 6% of brand-name drugs were widely covered. Findings were similar for Hawaii. Conclusions: Formularies varied substantially; however, all but 1 treatment class examined had 1 or more widely covered drugs at low co-payments. Knowing which drugs are widely covered would assist clinicians in prescribing, since not all generic drugs were widely covered. Clinicians should know that few brand-name drugs are widely covered and check coverage before prescribing. abstract_id: PUBMED:36127665 Profiles of GPs with high and low self-reported physician empathy-personal, professional, and antibiotic prescribing characteristics. Background: General Practitioners' (GPs) professional empathy has been hypothesized to have substantial impact on their healthcare delivery and medication prescribing patterns. This study compares profiles of personal, professional, and antibiotic prescribing characteristics of GPs with high and low empathy. Methods: We apply an extreme group approach to a unique combined set of survey and drug register data.
The survey included questions about demographic, professional, and antibiotic prescribing characteristics, as well as the Jefferson Scale of Empathy for Health Professionals (JSE-HP) to assess self-reported physician empathy. It was sent to a stratified sample of 1,196 GPs comprising 30% of the Danish GP population of whom 464 (38.8%) GPs responded. GPs in the top and bottom decile of empathy levels were identified. All intra- and inter-profile descriptive statistics and differences were bootstrapped to estimate the variability and related confidence intervals. RESULTS: 61% of GPs in the top decile of the empathy score were female. GPs in this decile reported the following person-centered factors as more important for their job satisfaction than the bottom decile: The Patient-physician relationship, interaction with colleagues, and intellectual stimulation. High-empathy scoring GPs prescribed significantly less penicillin than the low-empathy GPs. This was true for most penicillin subcategories. There were no significant differences in age, practice setting (urban vs. rural), practice type (partnership vs. single-handed), overall job satisfaction, or GP's value of prestige and economic profit for their job satisfaction. The intra profile variation index and confidence intervals show less prescribing uncertainty among GPs with high empathy. Conclusions: This study reveals that high empathy GPs may have different personal, professional, and antibiotic prescribing characteristics than low empathy GPs and have less variable empathy levels as a group. Furthermore, person-centered high empathy GPs on average seem to prescribe less penicillins than low empathy GPs. abstract_id: PUBMED:17711593 Determinants of the range of drugs prescribed in general practice: a cross-sectional analysis. Background: Current health policies assume that prescribing is more efficient and rational when general practitioners (GPs) work with a formulary or restricted drugs lists and thus with a limited range of drugs. Therefore we studied determinants of the range of drugs prescribed by general practitioners, distinguishing general GP-characteristics, characteristics of the practice setting, characteristics of the patient population and information sources used by GPs. Methods: Secondary analysis was carried out on data from the Second Dutch Survey in General Practice. Data were available for 138 GPs working in 93 practices. ATC-coded prescription data from electronic medical records, census data and data from GP/practice questionnaires were analyzed with multilevel techniques. Results: The average GP writes prescriptions for 233 different drugs, i.e. 30% of the available drugs on the market within one year. There is considerable variation between ATC main groups and subgroups and between GPs. GPs with larger patient lists, GPs with higher prescribing volumes and GPs who frequently receive representatives from the pharmaceutical industry have a broader range when controlled for other variables. Conclusion: The range of drugs prescribed is a useful instrument for analysing GPs' prescribing behaviour. It shows both variation between GPs and between therapeutic groups. Statistically significant relationships found were in line with the hypotheses formulated, like the one concerning the influence of the industry. Further research should be done into the relationship between the range and quality of prescribing and the reasons why some GPs prescribe a greater number of different drugs than others. 
abstract_id: PUBMED:34264328 Personal Formularies of Primary Care Physicians Across 4 Health Care Systems. Importance: More conservative prescribing has the potential to reduce adverse drug events and patient harm and cost; however, no method exists defining the extent to which individual clinicians prescribe conservatively. One potential domain is prescribing a more limited number of drugs. Personal formularies-defined as the number and mix of unique, newly initiated drugs prescribed by a physician-may enable comparisons among clinicians, practices, and institutions. Objectives: To develop a method of defining primary care physicians' personal formularies and examine how they differ among primary care physicians at 4 institutions; evaluate associations between personal formularies and patient, physician, and practice site characteristics; and empirically derive and examine the variability of the top 200 core drugs prescribed at the 4 sites. Design, Setting, And Participants: This retrospective cohort study was conducted at 4 US health care systems among 4655 internal and family medicine physicians and 4 930 707 patients who had at least 1 visit to these physicians between January 1, 2017, and December 31, 2018. Exposures: Personal formulary size was defined as the number of unique, newly initiated drugs. Main Outcomes And Measures: Personal formulary size and drugs used, physician and patient characteristics, core drugs, and analysis of selected drug classes. Results: The study population included 4655 primary care physicians (2274 women [48.9%]; mean [SD] age, 48.5 [4.4] years) and 4 930 707 patients (16.5% women; mean [SD] age, 51.9 [8.3] years). There were 41 378 903 outpatient prescriptions written, of which 9 496 766 (23.0%) were new starts. Institution median personal formulary size ranged from 150 (interquartile range, 82.0-212.0) to 296 (interquartile range, 230.0-347.0) drugs. In multivariable modeling, personal formulary size was significantly associated with panel size (total number of unique patients with face-to-face encounters during the study period; 1.2 medications per 100 patients), physician's total number of encounters (5.7 drugs per 10% increase), and physician's sex (-6.2 drugs per 100 patients for female physicians). There were 1527 unique, newly prescribed drugs across the 4 sites. Fewer than half the drugs (626 [41.0%]) were used at every site. Physicians' prescribing of drugs from a pooled core list varied from 0% to 100% of their prescriptions. Conclusions And Relevance: Personal formularies, measured at the level of individual physicians and institutions, reveal variability in size and mix of drugs. Similarly, defining a list of commonly prescribed core drugs in primary care revealed interphysician and interinstitutional differences. Personal formularies and core medication lists enable comparisons and may identify outliers and opportunities for safer and more appropriate prescribing. Answer: Yes, prescribing formularies help general practitioners (GPs) prescribe from a narrower range of drugs. A controlled trial of the introduction of prescribing formularies for non-steroidal anti-inflammatory drugs (NSAIDs) showed that practices that introduced a formulary for NSAIDs reduced the mean number of different drugs used and increased the percentage of NSAID-defined daily doses coming from the three most commonly used drugs (PUBMED:9463982). 
Additionally, Australian GPs were found to use 'personal formularies', and the size of these formularies varied with the drug class, which can change over time as new agents become available. The study suggested that prescribing standards based on numbers of drugs used may not always reflect rational prescribing choices (PUBMED:11549213). Furthermore, a cohort study of cardiovascular drug formularies found that GPs' incident drug formularies comprised significantly fewer drugs than their ongoing drug formularies for all drug groups except angiotensin-II antagonists, indicating that GPs choose within a narrow range of analogues for incident patients (PUBMED:16187132). Another study evaluating team and individual formularies to support independent prescribing in mental health care indicated that formularies were well received by existing independent prescribers and were seen as helpful to support newly qualified and supplementary prescribers in becoming independent (PUBMED:22384831). In general, the range of drugs prescribed is a useful instrument for analyzing GPs' prescribing behavior, and it shows both variation between GPs and between therapeutic groups. The range of drugs prescribed by GPs is influenced by various factors, including the size of the patient list, prescribing volumes, and the frequency of interactions with the pharmaceutical industry (PUBMED:17711593). A study across four health care systems also highlighted the variability in personal formulary size among primary care physicians and suggested that personal formularies and core medication lists enable comparisons and may identify outliers and opportunities for safer and more appropriate prescribing (PUBMED:34264328).
Instruction: Can an adequate energy intake be able to reverse the negative nitrogen balance in mechanically ventilated critically ill patients? Abstracts: abstract_id: PUBMED:19682853 Can an adequate energy intake be able to reverse the negative nitrogen balance in mechanically ventilated critically ill patients? Purpose: Adequate energy provision and prevention of nitrogen losses in critically ill patients are essential for treatment and recovery. The aims of this study were to evaluate energy expenditure (EE) and nitrogen balance (NB) of critically ill patients, to classify adequacy of energy intake (EI), and to verify the capacity of an adequate EI to reverse the negative NB. Methods: Seventeen patients from an intensive care unit were evaluated within a 24-hour period. Indirect calorimetry was performed to calculate the patients' EE, and the Kjeldahl method was used for urinary nitrogen analysis. The total EI and protein intake were calculated from the standard parenteral and enteral nutrition infused. Underfeeding was characterized as EI 90% or less and overfeeding as 110% or greater of EE. The adequacy of the EI (EI/EE × 100) and the NB were estimated and associated with each other by Spearman coefficient. Results: The mean EE was 1515 ± 268 kcal/d, and most of the patients (11/14) presented a negative NB (-8.2 ± 4.7 g/d). A high rate (53%) of inadequate energy intake was found, and a positive correlation between EI/EE and NB was observed (r = 0.670; P = .007). Conclusion: The results show a high rate of inadequate EI and negative NB, and equilibrium between EI and EE may improve NB. Indirect calorimetry can be used to adjust the energy requirements in critically ill patients. abstract_id: PUBMED:9660262 Comparison of measured and predicted energy expenditure in mechanically ventilated children. Objective: To determine the energy requirements in mechanically ventilated pediatric patients using indirect calorimetry and to compare the results with the predicted metabolic rate. Design: In 50 mechanically ventilated children with a moderate severity of illness, energy expenditure was measured by indirect calorimetry. Daily caloric intake was recorded for all patients. Total urinary nitrogen excretion was determined in 31 patients. Results: Although there was a close correlation between the measured total energy expenditure (mTEE) and the predicted basal metabolic rate (pBMR) (r = 0.93, p < 0.001), Bland-Altman analysis showed lack of agreement between individual mTEE and pBMR values. The ratio of caloric intake/mTEE was significantly higher in the patients with a positive nitrogen balance (1.4 ± 0.07) compared with those with a negative nitrogen balance (0.8 ± 0.1; p < 0.001). Conclusions: Standard prediction equations are not appropriate to calculate the energy needs of critically ill, mechanically ventilated children. Individual measurements of energy expenditure and respiratory quotient by means of indirect calorimetry in combination with nitrogen balance are necessary for matching adequate nutritional support. abstract_id: PUBMED:10378198 Energy expenditure and substrate utilization in mechanically ventilated children. The objective of the study was to determine the value of indirect calorimetry and nitrogen balance (N balance) in order to evaluate the current feeding protocols of mechanically ventilated children. The study was designed as a cross-sectional prospective study.
In 36 mechanically ventilated children, energy expenditure was measured by indirect calorimetry, and total urinary nitrogen excretion (TUN) was determined. Substrate utilization and respiratory quotient (RQ) were calculated from the measured values of oxygen consumption (VO2), carbon dioxide production (VCO2), and TUN. The RQ was compared with the RQ of the macronutrients administered (RQmacr) according to the modified criteria of Lusk. The total measured energy expenditure (TMEE) showed a wide variation (range 155-272 kJ/kg/d). The N balance was positive in 20 and negative in 16 patients. The ratio of caloric intake/TMEE was significantly higher in patients with a positive N balance (1.50 ± 0.06) as compared with those with a negative N balance (0.8 ± 0.1, P < 0.001). There was a significant relationship between the difference RQ-RQmacr and the ratio of caloric intake/TMEE (r = 0.72, P < 0.001). Carbohydrate and fat utilization were not significantly different in patients with a positive or negative N balance. Protein utilization was significantly higher in those patients with a negative N balance. We concluded that measurement of TMEE with indirect calorimetry results in accurate determination of energy needs in critically ill mechanically ventilated children. Feeding according to or in excess of the TMEE is correlated with a positive N balance. A combination of the RQ and the RQmacr can be helpful in differentiating under- or overfeeding. abstract_id: PUBMED:31207439 Energy balance in obese, mechanically ventilated intensive care unit patients. Objectives: The aims of this study were, first, to compare the predicted (calculated) energy requirements based on standard equations with target energy requirement based on indirect calorimetry (IC) in critically ill, obese mechanically ventilated patients; and second, to compare actual energy intake to target energy requirements. Methods: We conducted a prospective cohort study of mechanically ventilated critically ill patients with body mass index ≥30.0 kg/m2 for whom enteral feeding was planned. Clinical and demographic data were prospectively collected. Resting energy expenditure was measured by open-circuit IC. American Society for Parenteral and Enteral Nutrition (ASPEN)/Society of Critical Care Medicine (SCCM) 2016 equations were used to determine predicted (calculated) energy requirements. Target energy requirements were set at 65% to 70% of measured resting energy expenditure as recommended by ASPEN/SCCM. Nitrogen balance was determined via simultaneous measurement of 24-h urinary nitrogen concentration and protein intake. Results: Twenty-five patients (mean age: 64.5 ± 11.8 y, mean body mass index: 35.2 ± 3.6 kg/m2) underwent IC. The mean predicted energy requirement was 1227 kcal/d compared with mean measured target energy requirement of 1691 kcal/d. Predicted (calculated) energy requirements derived from ASPEN/SCCM equations were less than the target energy requirements in most cases. Actual energy intake from enteral nutrition met 57% of target energy requirements. Protein intake met 25% of target protein requirement and the mean nitrogen balance was -2.3 ± 5.1 g/d. Conclusions: Predictive equations underestimated target energy needs in this population. Further, we found that feeding to goal was often delayed resulting in failure to meet both protein and energy intake goals.
 abstract_id: PUBMED:8334179 Effect of increasing protein ingestion on the nitrogen balance of mechanically ventilated critically ill patients receiving total parenteral nutrition. The amount of protein recommended to minimise N loss in critically ill patients receiving total parenteral nutrition (TPN) varies in the literature. Therefore, we studied the effect of increased protein intake on the N balance, administering TPN with either 1.2 g protein/kg/day (low N diet) or 1.8 g protein/kg/day (high N diet). Fifteen mechanically ventilated critically ill patients were studied in a surgical intensive care unit. After at least two days of standard TPN, patients were randomly assigned to either the low or the high N diet. Ten patients were studied on the low N diet and 11 on the high N diet; 6 patients were studied on both diets. Nonprotein energy was supplied according to estimated energy requirements. For five consecutive days, the N balance was measured daily. Total urinary nitrogen (TUN) was analysed using the Kjeldahl method. There was no difference in N balance between the groups. On the low N diet, N balance was -0.113 ± 0.088 and on the high N diet -0.113 ± 0.109 g N/kg/day. In patients studied twice, N balance was -0.087 ± 0.054 and -0.050 ± 0.060 g N/kg/day respectively. Results of a previous pilot study showed that in 20 similar patients the N balance became 80% less negative (from -5.7 ± 5.1 to -1.1 ± 8.2 g N/day) when protein intake was increased from 0.9 to 1.5 g/kg/day. Since these results are consistent with other studies, we conclude that the optimal range of protein supply in this type of critically ill patient is approximately 1.1-1.5 g protein/kg/day. abstract_id: PUBMED:12974978 Estimation of energy requirements for mechanically ventilated, critically ill patients using nutritional status. Background: There is very little information on what is considered an adequate energy intake for mechanically ventilated, critically ill patients. The purpose of the present study was to determine this energy requirement by making use of patients' nutritional status. Methods: The study was conducted in a multidisciplinary intensive care unit of Taichung Veterans General Hospital, Taiwan. Patients were hemodynamically stable and not comatose, and required at least 7 days of mechanical ventilation. Fifty-four patients successfully completed this study. The resting energy expenditure was measured using indirect calorimetry. The total energy requirement was considered 120% of the measured energy expenditure. The daily nutrient intake was recorded. Nutritional status was assessed using single and multiple parameters, nitrogen balance, and medical records, and was performed within 24 hours of admission and after 7 days in the intensive care unit. Results: Fifteen patients were being underfed (<90% of total energy requirement), 20 patients were in the appropriate feeding (AF) group (within ± 10% of total energy requirement), and 19 patients received overfeeding (>110% of total energy requirement). Patients in the underfeeding group received only 68.3% of their energy requirement, while the overfeeding group patients received up to 136.5% of their required calories.
Only patients in the AF group had a positive nitrogen balance (0.04 ± 5.1) on day 7. AF group patients had a significantly higher Nutritional Risk Index value at day 7 than at day 1. Conclusion: AF patients had more improvement in nutritional status than patients in the other feeding groups. Providing at least 120% of the resting energy expenditure seemed adequate to meet the caloric energy needs of hemodynamically stable, mechanically ventilated, critically ill patients. abstract_id: PUBMED:30854273 Modified Nutrition Risk in Critically Ill (mNUTRIC) Score to Assess Nutritional Risk in Mechanically Ventilated Patients: A Prospective Observational Study from the Pakistani Population. Purpose: Typical nutritional assessment criteria and screening tools are ineffective in mechanically ventilated patients who are often unable to report their food intake history. The Nutrition Risk in Critically Ill (NUTRIC) score is effective for screening mechanically ventilated patients. This prospective observational study was conducted to identify nutritional risk in mechanically ventilated patients using a modified NUTRIC (mNUTRIC) score (without using interleukin-6 values). Methods: All adult patients admitted to the intensive care unit (ICU) for more than 48 hours were included in the study. Data were collected on the variables required to calculate mNUTRIC scores. Patients with mNUTRIC scores ≥5 were considered at high nutritional risk. The assessment data included total ICU length of stay, ventilator-free days, and mortality rates. Results and conclusion: A total of 75 patients fit the inclusion criteria of the study, including 40 males and 35 females. The mean age was 55.8 years. Forty-five percent of mechanically ventilated patients had mNUTRIC scores ≥5. Mechanically ventilated patients with mNUTRIC scores ≥5 had longer lengths of stay in the ICU (mean ± SD = 11.5 ± 5 days) as compared with 3.5 ± 4 days in patients with mNUTRIC scores ≤4. Moreover, a higher mortality rate (26%) was observed in patients with mNUTRIC scores ≥5. The mNUTRIC score predicted mortality with an area under the receiver operating characteristic curve of 0.637 (confidence interval, 0.399 to 0.875). Forty-five percent of mechanically ventilated patients admitted to the ICU were at nutritional risk, and their mNUTRIC scores were directly related to higher lengths of stay and mortality. abstract_id: PUBMED:32864462 Opioid use is Associated with ICU Delirium in Mechanically Ventilated Children. Introduction: Pediatric delirium is a significant problem when encountered in an intensive care unit (ICU). The pathophysiology of pediatric delirium is complex and the etiology is typically multifactorial. Even though various risk factors associated with pediatric delirium in a pediatric ICU have been identified, there is still a paucity of literature associated with the condition, especially in extremely critically ill children, sedated and mechanically ventilated. Aim Of The Study: To identify factors associated with delirium in mechanically ventilated children in an ICU. Material And Methods: This is a single-center study conducted at a tertiary care pediatric ICU. Patients admitted to the pediatric ICU requiring sedation and mechanical ventilation for >48 hours were included. The Cornell Assessment of Pediatric Delirium scale was used to screen patients with delirium. Baseline demographic and clinical factors as well as daily and cumulative doses of medications were compared between patients with and without delirium. Firth's penalized maximum likelihood logistic regression was used on an a priori set of variables to examine the association of potential factors with delirium.
Two regression models were created to assess the effect of daily medication doses (Model 1) as well as cumulative medication doses (Model 2) of opioids and benzodiazepines. Results: 95 patient visits met the inclusion criteria. 19 patients (20%) were diagnosed with delirium. Older patients (>12 years) had higher odds of developing delirium. Every 1 mg/kg/day increase in daily doses of opioids was associated with an increased risk of delirium (OR=1.977, p=0.017). Likewise, a 1 mg/kg increase in the cumulative opioid dose was associated with higher odds of developing delirium (OR=1.035, p=0.022). Duration of mechanical ventilation was associated with the development of delirium in Model 1 (p=0.007). Conclusions: Age, daily and cumulative opioid dosage, and the duration of mechanical ventilation are associated with the development of delirium in mechanically ventilated children. abstract_id: PUBMED:25912187 How much protein and energy are needed to equilibrate nitrogen and energy balances in ventilated critically ill children? Background & Aims: Protein and energy requirements in critically ill children are currently based on insufficient data. Moreover, longitudinal measurements of both total urinary nitrogen (TUN) and resting energy expenditure (REE) are lacking. The aim of this study was to investigate how much protein and energy are needed to equilibrate nitrogen and energy balances in ventilated critically ill children on the basis of daily measurements of TUN, REE and protein and energy intakes. Comparisons were made with the guidelines of the American Society for Parenteral and Enteral Nutrition and the Dietary Reference Intakes. Methods: Children with an expected duration of mechanical ventilation ≥72 h were prospectively recruited. TUN was measured by chemiluminescence, and REE was measured by indirect calorimetry. Generalised linear models for longitudinal data were used to study the relation between protein intake and nitrogen balance and to calculate the minimum intake of protein needed to achieve nitrogen equilibrium. A similar approach was used for energy. Results were compared to the recommended values.
Nitrogen intake proved to be the major determinant of a positive, or less negative, nitrogen balance, only secondarily followed by total energy intake corrected to predicted basal energy expenditure, according to multiple regression analysis. The amount of non-protein calories and the non-protein calorie to nitrogen ratio appeared to have little significance on nitrogen balance, when corrected for the two former variables. Answer: Yes, an adequate energy intake can potentially reverse the negative nitrogen balance in mechanically ventilated critically ill patients. Studies have shown that there is a positive correlation between energy intake (EI) and nitrogen balance (NB) in such patients. For instance, a study found that most of the critically ill patients evaluated presented a negative nitrogen balance, and a high rate of inadequate energy intake was observed. However, there was a positive correlation between the adequacy of energy intake (EI EE(-1)) and nitrogen balance, suggesting that achieving equilibrium between energy intake and energy expenditure (EE) may improve nitrogen balance (PUBMED:19682853). Another study in mechanically ventilated children indicated that the ratio of caloric intake to measured total energy expenditure (mTEE) was significantly higher in patients with a positive nitrogen balance compared to those with a negative nitrogen balance, further supporting the importance of matching energy intake to energy expenditure to achieve a positive nitrogen balance (PUBMED:9660262). Moreover, a study that measured energy expenditure and substrate utilization in mechanically ventilated children found that feeding according to or in excess of the total measured energy expenditure (TMEE) is correlated with a positive nitrogen balance (PUBMED:10378198). These findings suggest that careful monitoring and adjustment of energy intake to meet the energy expenditure of critically ill, mechanically ventilated patients are crucial for reversing negative nitrogen balance and improving patient outcomes. Indirect calorimetry is often used to accurately determine energy needs and guide nutritional support in this patient population.
Instruction: Incidence of hip pain in a prospective cohort of asymptomatic volunteers: is the cam deformity a risk factor for hip pain? Abstracts: abstract_id: PUBMED:24481825 Incidence of hip pain in a prospective cohort of asymptomatic volunteers: is the cam deformity a risk factor for hip pain? Background: Although cam-type femoroacetabular impingement is commonly associated with labral chondral damage and hip pain, a large proportion of asymptomatic individuals will have this deformity. Purpose: To determine the incidence of hip pain in a prospective cohort of volunteers who had undergone magnetic resonance imaging (MRI) of their hips. Study Design: Case control study; Level of evidence, 3. Methods: A total of 200 asymptomatic volunteers who underwent an MRI of both hips were followed for a mean time of 4.4 years (range, 4.01-4.95 years). Thirty were lost to follow-up, leaving 170 individuals (77 males, 93 females) with a mean age of 29.5 years (range, 25.7-54.5 years). All patients were blinded to the results of their MRI. All completed a follow-up questionnaire inquiring about the presence of hip pain or a history of hip pain lasting longer than 6 weeks since the original MRI. Each patient was asked to draw where the pain was on a body diagram. Results: Eleven patients (5 males, 6 females; 6.5% of sample; mean age, 29.9 years; range, 25.7-45.6 years) reported hip pain, of which 3 (1 male, 2 females) had bilateral pain for a total of 14 hips. Seven of the 14 painful hips had a cam-type deformity at the time of the initial MRI versus 37 of the 318 nonpainful hips (P = .0002). This gave a relative risk of 4.3 (95% confidence interval [CI], 2.3-7.8) of developing hip pain if cam deformity was present. Those 14 painful hips had a significantly greater alpha angle at the radial 1:30 clock position than did those who did not develop pain with a cam deformity: 61.5° (range, 57.3°-65.7°) versus 57.9° (range, 56.9°-59.1°), respectively (P = .05). A significantly greater proportion of patients (12%) with limited internal rotation ≤20° (versus 2.7% with internal rotation &gt;20°) went on to develop hip pain (P = .009; relative risk = 3.1 [95% CI, 1.6-6.0]). Conclusion: The presence of a cam deformity represents a significant risk factor for the development of hip pain. An elevated alpha angle at the 1:30 clock position and decreased internal rotation are associated with an increased risk of developing hip pain. However, not all patients with a cam deformity develop hip pain, and further research is needed to better define those at greater risk of developing degenerative symptoms. abstract_id: PUBMED:24275223 General health status and incidence of first-onset temporomandibular disorder: the OPPERA prospective cohort study. Unlabelled: Temporomandibular disorder (TMD) overlaps with other health conditions, but no study has examined which of these conditions increase the risk of developing first-onset TMD. The authors prospectively evaluated the relationship between health status at enrollment and subsequent incidence of TMD in 2,722 men and women. Participants aged 18 to 44 years had no history of TMD and were clinically free of TMD when enrolled in 2006 to 2008 at 4 U.S. study sites in the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) prospective cohort study. First-onset examiner-classified TMD developed in 260 people over a median 2.8 years of follow-up. 
Cox regression estimated the association between health conditions and TMD incidence while accounting for potential confounders. Incidence of first-onset TMD was 50% higher for people with low back pain (adjusted hazard ratio [AHR] = 1.50, 95% confidence limits [CLs]: 1.08, 2.10) and 75% higher for people with genital pain symptoms (AHR = 1.75, 95% CLs = 1.04, 2.93) than people without a history of these pain disorders. Digit ratio, a marker of intrauterine exposure to sex hormones, was significantly associated with TMD incidence. Other independent predictors of first-onset TMD were sleep disturbance and cigarette smoking. These findings reveal multiple influences of health status on incidence of first-onset TMD. Perspective: This article examines health conditions that commonly overlap with TMD to determine which ones predict first-onset TMD. A history of low back pain and genital pain conditions at baseline were important predictors. Novel findings were that disrupted sleep and conditions in utero may increase incidence of first-onset TMD. abstract_id: PUBMED:34038950 Incidence, Prevalence, and Severity of and Risk Factors for Hip and Groin Problems in Swedish Male Ice Hockey Players: A 1-Season Prospective Cohort Study. Context: The epidemiologic focus on time loss may underestimate the true magnitude of hip and groin problems in male ice hockey players. Objective: To describe the prevalence, incidence, and severity of hip and groin problems (time loss and non-time loss) in Swedish ice hockey players over the course of a season and explore potential preseason risk factors for these conditions. Design: Prospective 1-season cohort study. Setting: Professional and semiprofessional Swedish ice hockey players. Patients Or Other Participants: A total of 12 professional and semiprofessional male ice hockey teams were invited to participate. Of those, 9 teams agreed, and 163 players were included in the analyses. Main Outcome Measure(s): Hip and groin problems in the previous season (time loss, non-time loss), isometric adduction and abduction strength, and 5-second squeeze test results were recorded before the season and served as independent variables in the risk factor analysis. Main outcome measures were cumulative incidence of hip and groin problems, average prevalence, and odds ratios (ORs) for groin problems in season. Results: Cumulative incidence was 45.4% (95% CI = 37.6%, 53.4%) for all problems and 19% (95% CI = 13.3%, 25.9%) for substantial problems. Average prevalence was 14.1% (95% CI = 10.8%, 17.5%) for all and 5.7% (95% CI = 4.3%, 7.2%) for substantial problems. Among reported problems, 69.2% had a gradual onset, and only 17% led to time loss. Players with non-time-loss problems in the previous season had higher odds for new problems (all: OR = 3.3 [95% CI = 1.7, 6.3]; substantial: OR = 3.6 [95% CI = 1.8, 8.4]). Preseason strength was not significantly associated with the odds for subsequent problems. Conclusion: Hip and groin problems are common in ice hockey players and may lead to substantial impairments in performance. Only 1 in 5 problems led to time loss, and 7 in 10 had a gradual onset. Non-time-loss problems in the previous season were a significant risk factor for new problems, whereas decreased preseason hip-adduction and -abduction strength was not. abstract_id: PUBMED:24275220 Study protocol, sample characteristics, and loss to follow-up: the OPPERA prospective cohort study. 
Unlabelled: When studying incidence of pain conditions such as temporomandibular disorder (TMD), repeated monitoring is needed in prospective cohort studies. However, monitoring methods usually have limitations and, over a period of years, some loss to follow-up is inevitable. The OPPERA prospective cohort study of first-onset TMD screened for symptoms using quarterly questionnaires and examined symptomatic participants to definitively ascertain TMD incidence. During the median 2.8-year observation period, 16% of the 3,263 enrollees completed no follow-up questionnaires, others provided incomplete follow-up, and examinations were not conducted for one third of symptomatic episodes. Although screening methods and examinations were found to have excellent reliability and validity, they were not perfect. Loss to follow-up varied according to some putative TMD risk factors, although multiple imputation to correct the problem suggested that bias was minimal. A second method of multiple imputation that evaluated bias associated with omitted and dubious examinations revealed a slight underestimate of incidence and some small biases in hazard ratios used to quantify effects of risk factors. Although "bottom line" statistical conclusions were not affected, multiply-imputed estimates should be considered when evaluating the large number of risk factors under investigation in the OPPERA study. Perspective: These findings support the validity of the OPPERA prospective cohort study for the purpose of investigating the etiology of first-onset TMD, providing the foundation for other papers investigating risk factors hypothesized in the OPPERA project. abstract_id: PUBMED:25081962 Risk factors for the development of deformity in patients with spinal infection. Object: Bacterial spinal infections are infrequent but may lead to significant morbidity and death. Apart from neurological complications, infections may also lead to bone destruction resulting in deformity of the spine. However, the incidence of spinal deformities and risk factors are not well characterized in the literature. Methods: A retrospective cohort study was conducted using electronic medical records at a single institution. All patients were over 18 years of age and had a clinically and radiologically documented spinal infection that was treated surgically during the period 2006-2013. Infections were classified according to anatomical location including disc, bone, and/or epidural space. Deformities included kyphosis and/or subluxation. The authors analyzed risk factors for developing at least 1 deformity between the time of infection and operation using the Fisher exact test and chi-square test. Change in visual analog scale (VAS) scores preoperatively versus postoperatively was also analyzed using the paired t-test. Results: The study included 48 patients. The most common types of spinal infections were osteomyelitis and discitis (31%); osteomyelitis, discitis, and spinal epidural abscess (SEA; 27%); SEA only (15%); and osteomyelitis only (13%). Overall, 21 (44%) of 48 patients developed a spinal deformity. Anatomical location of infection (bone and/or disc and/or epidural space) was significantly associated with development of deformity (p < 0.001). In particular, patients with SEA had lower odds of deformity compared with patients without SEA (odds ratio 0.2, 95% confidence interval 0.05-0.9; p < 0.001). No other factor was significantly associated with deformity.
Pain measured by VAS score tended to improve by a mean of 1.7 ± 2.7 points (p < 0.001) when comparing preoperative to postoperative scores. Conclusions: In this cohort of patients, 44% developed at least 1 deformity, predominantly kyphosis. The only variable significantly associated with deformity was infection location. Patients with SEA alone demonstrated lower odds of developing a deformity compared with patients without SEA. Other analyzed variables, including age, body mass index, time from initial diagnosis to surgery, and comorbidities, were not found to be associated with development of deformity. Surgical intervention resulted in pain improvement. abstract_id: PUBMED:31610063 A prospective observational cohort study on the incidence of postoperative sore throat in the pediatric population. Background: Postoperative sore throat is common after general anesthesia. The incidence in pediatric anesthesia is variable, and the etiology unclear. Establishing risk factors would enable prevention and could improve quality of care. Aims: We performed a prospective single-center cohort study aiming to establish the incidence of postoperative sore throat in children undergoing GA with an endotracheal tube or laryngeal mask airway. Secondary aims were to identify independent risk factors for sore throat and the incidence of other postoperative complications including stridor, laryngospasm, nausea and vomiting, and delayed oral intake. Methods: Between November 2017 and April 2018, perioperative data were collected from children aged 5-16 years undergoing general, plastic, urology, renal, and orthopedic surgery. Patients completed a postoperative questionnaire within 24 hours of surgery. Results: We screened 334 children for inclusion at a tertiary pediatric hospital in the United Kingdom. One hundred and ninety-seven patients were included in the final analysis. The frequency of postoperative sore throat was 36.5%. Stridor occurred in 1.5%, laryngospasm 1.0%, postoperative nausea 59.4%, vomiting in recovery 4.6%, and delayed oral intake due to postoperative sore throat 30%. Nausea, vomiting, thirst, and pain were associated with a sore throat. Univariate analysis showed that anesthesia longer than 2 hours and use of an endotracheal tube were statistically associated with a higher risk of sore throat. Over 50% of children with an endotracheal tube cuff pressure <20 cmH2O had a postoperative sore throat. Conclusion: Postoperative sore throat is common in children. Endotracheal intubation is associated with a greater incidence than laryngeal mask airway use. A high rate of postoperative sore throat in children with endotracheal tube cuff pressures within the recommended range suggests multifactorial etiology. To confirm the validity of the identified risk factors, we would recommend a larger prospective multi-center study. abstract_id: PUBMED:19450971 An observational prospective cohort study of incidence and characteristics of failed spinal anaesthesia for caesarean section. Background: A prospective cohort study was performed in 800 parturients undergoing elective caesarean section under spinal anaesthesia from May 2005 to April 2006 in a large maternity hospital in Singapore, in order to determine the incidence of and risk factors for total and partial failure of spinal anaesthesia. Methods: A routine single-shot spinal technique using intrathecal 0.5% heavy bupivacaine 2.0 mL (10 mg) and morphine 100 microg was administered with a 27-gauge Whitacre spinal needle via a 20-gauge introducer.
Demographic, surgical and anaesthetic data were collected to determine risk factors for failure of spinal anaesthesia. Results: The incidence of total failure requiring conversion to general anaesthesia was 0.5% (4 cases); three of these had an inadequate block (loss of sensation to cold below T6) and one had no sensory block. Thirty-three parturients (4.1%) required intravenous fentanyl and seven (0.9%) required Entonox for intraoperative analgesic supplementation. Postpartum sterilization (P < 0.001) was an independent risk factor for partial failure requiring intravenous fentanyl and Entonox. Conclusion: Spinal anaesthesia using bupivacaine 10 mg with morphine 100 microg produces reliable anaesthesia for elective caesarean section. Postpartum sterilization involves exteriorisation of the uterus with additional surgical manipulation and hence may necessitate analgesic supplementation. The initial use of a combined spinal-epidural technique, the addition of intrathecal fentanyl or clonidine, or an increased dose of local anaesthetic may be considered to decrease the incidence of intraoperative pain.
abstract_id: PUBMED:37184910 The incidence, impact, and risk factors for moderate to severe persistent pain after breast cancer surgery: a prospective cohort study. Background: Few Australasian studies have evaluated persistent pain after breast cancer surgery. Objective: To evaluate the incidence, impact, and risk factors of moderate to severe persistent pain after breast cancer surgery in a New Zealand cohort. Design: Prospective cohort study. Methods: Consented patients were reviewed at 3 timepoints (preoperative, 2 weeks and 6 months postoperative). Pain incidence and interference, psychological distress and upper limb disability were assessed perioperatively. Clinical, demographic, psychological, cancer treatment-related variables, quantitative sensory testing, and patient genotype (COMT, OPRM1, GCH1, ESR1, and KCNJ6) were assessed as risk factors using multiple logistic regression. Results: Of the 173 patients recruited, 140 completed the 6-month follow-up. Overall, 15.0% (n = 21, 95% CI: 9.5%-22.0%) of patients reported moderate to severe persistent pain after breast cancer surgery, with 42.9% (n = 9, 95% CI: 21.9%-66.0%) reporting likely neuropathic pain. Pain interference, upper limb dysfunction and psychological distress were significantly higher in patients with moderate to severe pain (P < .004). Moderate to severe preoperative pain (OR = 3.60, 95% CI: 1.13-11.44, P = .03), COMT rs6269 GA genotype (OR = 5.03, 95% CI: 1.49-17.04, P = .009) and psychological distress at postoperative day 14 (OR = 1.08, 95% CI: 1.02-1.16, P = .02) were identified as risk factors. Total intravenous anesthesia (OR = 0.31, 95% CI: 0.10-0.99, P = .048) was identified as protective. Conclusion: The incidence of moderate to severe persistent pain after breast cancer surgery is high, with associated pain interference, physical disability, and psychological distress. Important modifiable risk factors were identified that could help reduce this condition.
abstract_id: PUBMED:24275219 Summary of findings from the OPPERA prospective cohort study of incidence of first-onset temporomandibular disorder: implications and future directions. Unlabelled: Papers in this issue investigate when and how putative risk factors influence development of first-onset, painful temporomandibular disorder (TMD).
The results represent first findings from the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) prospective cohort study that monitored 2,737 men and women aged 18 to 44 years recruited at 4 U.S. study sites. During a median 2.8-year follow-up period, 260 participants developed TMD. The average incidence rate of 4% per annum was influenced by a broad range of phenotypic risk factors including sociodemographic characteristics, health status, clinical orofacial factors, psychological functioning, pain sensitivity, and cardiac autonomic responses. A novel method of multivariable analysis used random forest models to simultaneously evaluate contributions of all 202 phenotypic variables. Variables from the health status domain made the greatest contribution to TMD incidence, followed closely by psychological and clinical orofacial domains. However, only a few measures of pain sensitivity and autonomic function contributed to TMD incidence, and their effects were modest. Meanwhile, age and study site were independent predictors of TMD incidence, even after controlling for other phenotypes. Separate analysis of 358 genes that regulate pain found several novel genetic associations with intermediate phenotypes that, themselves, are risk factors for TMD, suggesting new avenues to investigate biological pathways contributing to TMD. Perspective: Collectively, the papers in this issue demonstrate that TMD is a complex disorder with multiple causes consistent with a biopsychosocial model of illness. It is a misnomer and no longer appropriate to regard TMD solely as a localized orofacial pain condition.
abstract_id: PUBMED:25720294 A prospective study examining the incidence of asymptomatic and symptomatic lymphoceles following lymphadenectomy in patients with gynecological cancer. Objective: To identify the incidence of asymptomatic and symptomatic (i.e., causing pain, hydronephrosis, venous thrombosis, acute lymphedema of the lower limb, or urinary urgency) lymphoceles, as well as risk factors for their development, through a prospective study of patients undergoing sole pelvic or combined pelvic and paraaortic lymphadenectomy for gynecological cancer. Methods: Patients with endometrial, ovarian or cervical cancer scheduled for sole pelvic or combined pelvic and paraaortic lymphadenectomy as a primary surgical treatment or salvage surgery for recurrence were enrolled at a single institution from February 2006 to November 2010 and prospectively followed up with ultrasound. Results: Of 800 patients who underwent sole pelvic or combined pelvic and paraaortic lymphadenectomy for gynecological cancer, the overall incidence of lymphoceles was 20.2%, with symptomatic lymphoceles occurring in 5.8% of all patients. Lymphoceles are predominantly located on the left pelvic side wall. Lymphadenectomy in ovarian cancer, a higher number of lymph nodes obtained (>27), and radical hysterectomy in cervical cancer were found to be independent risk factors for the development of symptomatic lymphoceles. Conclusions: The overall incidence of lymphocele development after lymphadenectomy for gynecological cancer remains high. However, the majority of lymphoceles are only incidental findings without clinical impact. A symptomatic lymphocele is an uncommon event, occurring in only 5.8% of patients. Symptomatic lymphoceles tend to develop earlier than asymptomatic ones.
Although such risk factors are hard to avoid, patients known to be at an increased risk of developing symptomatic lymphoceles can be counseled appropriately and followed up for specific symptoms relating to lymphocele development.
Answer: The study described in abstract PUBMED:24481825 aimed to determine the incidence of hip pain in a prospective cohort of asymptomatic volunteers who had undergone MRI of their hips. The study followed 200 asymptomatic volunteers for a mean time of 4.4 years, with 170 individuals completing the follow-up. The results indicated that 11 patients (6.5% of the sample) reported hip pain, with 7 of the 14 painful hips having a cam-type deformity at the time of the initial MRI. This was compared to 37 of the 318 nonpainful hips, which suggests a significant association between cam deformity and the development of hip pain (P = .0002). The study found a relative risk of 4.3 (95% CI, 2.3-7.8) of developing hip pain if a cam deformity was present. Additionally, those with painful hips had a significantly greater alpha angle at the radial 1:30 clock position than those who did not develop pain with a cam deformity: 61.5° versus 57.9°, respectively (P = .05). Furthermore, a significantly greater proportion of patients with limited internal rotation ≤20° went on to develop hip pain compared to those with internal rotation >20° (P = .009; relative risk = 3.1 [95% CI, 1.6-6.0]). The conclusion of the study was that the presence of a cam deformity represents a significant risk factor for the development of hip pain, and an elevated alpha angle at the 1:30 clock position and decreased internal rotation are associated with an increased risk of developing hip pain. However, not all patients with a cam deformity develop hip pain, indicating that further research is needed to better define those at greater risk of developing degenerative symptoms.
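To make the relative risk figures quoted in this answer easier to interpret, the short sketch below shows how a crude relative risk and its approximate 95% confidence interval are conventionally computed from a 2x2 exposure-outcome table. The counts, the relative_risk helper, and the printed output are hypothetical assumptions introduced only for illustration; they are not the data or analysis from PUBMED:24481825, which reported model-based estimates.

import math

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total, z=1.96):
    """Crude relative risk with an approximate 95% CI (log method) from a 2x2 table.

    Illustrative sketch only: the counts passed in below are hypothetical and do
    not reproduce the hip-pain cohort cited in the answer above.
    """
    risk_exposed = exposed_events / exposed_total        # e.g. risk of pain with a cam deformity
    risk_unexposed = unexposed_events / unexposed_total  # e.g. risk of pain without a cam deformity
    rr = risk_exposed / risk_unexposed
    # Large-sample standard error of log(RR)
    se_log_rr = math.sqrt(
        1 / exposed_events - 1 / exposed_total
        + 1 / unexposed_events - 1 / unexposed_total
    )
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lower, upper)

# Hypothetical counts: 10 of 50 exposed hips vs 8 of 200 unexposed hips develop pain.
rr, (lo, hi) = relative_risk(10, 50, 8, 200)
print(f"RR = {rr:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")  # RR = 5.0, 95% CI (2.1, 12.0)

Published cohort studies usually report such estimates from fitted regression models rather than a crude 2x2 calculation, so this sketch is only meant to clarify what a value such as 4.3 (95% CI, 2.3-7.8) expresses.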
Instruction: The impact of stroke: are people with aphasia different to those without? Abstracts:
abstract_id: PUBMED:22028531 Global aphasia without hemiparesis: A case series. Background: Global aphasia without hemiparesis (GAWH) is a rare stroke syndrome characterized by the unusual dissociation of motor and language functions. Issues regarding its etio-pathogenesis, lesion sites, and recovery patterns are being extensively debated in the contemporary neuroscience literature. Materials And Methods: Four patients admitted to our hospital between 2005 and 2009 with GAWH caused by ischemic stroke were studied retrospectively, with emphasis on the number and site of lesions, etiology, and recovery patterns. Results: The clinical findings from our subjects showed that GAWH could result from either single or multiple lesions, including subcortical lesions. The recovery was rapid, although not complete. One case evolved into Wernicke's aphasia, as seen in earlier studies. Two subjects showed evolution to transcortical sensory aphasia and one to Broca's aphasia, which is distinct from previous proposals. Two cases showed a lack of clinico-anatomic correlation during recovery. Conclusions: GAWH could result from both embolic and large vessel strokes and from single or multiple lesions. The recovery pattern may be variable and may show a lack of clinico-anatomical correlation, indicating anomalous cerebral functional reorganization and questioning the conventional teaching of language representation in the brain.
abstract_id: PUBMED:25657725 Lesion localization of global aphasia without hemiparesis by overlapping of the brain magnetic resonance images. Global aphasia without hemiparesis is a striking stroke syndrome involving language impairment without the contralateral hemiparesis typically seen in patients with global aphasia following large left perisylvian lesions. The objective of this study was to elucidate the specific areas involved in global aphasia without hemiparesis by retrospectively studying the brain magnetic resonance images of six patients, defining the related stroke lesions, and then overlapping the images to visualize the most overlapped area. Talairach coordinates for the most overlapped areas were converted to corresponding anatomical regions. Lesions where the images of more than three patients overlapped were considered significant. The overlapped stroke lesions of the six patients revealed that the significantly involved anatomical regions were as follows: frontal lobe, sub-gyral, sub-lobar, extra-nuclear, corpus callosum, and inferior frontal gyrus, while the caudate, claustrum, middle frontal gyrus, limbic lobe, temporal lobe, superior temporal gyrus, uncus, anterior cingulate, parahippocampal gyrus, amygdala, and subcallosal gyrus were less significantly involved. This study is the first to demonstrate the heterogeneous anatomical involvement in global aphasia without hemiparesis by overlapping of the brain magnetic resonance images.
abstract_id: PUBMED:20712416 The impact of stroke: are people with aphasia different to those without? Purpose: Stroke rehabilitation programmes aim to improve functional outcomes and quality of life. This study explored long-term outcomes in a cohort of people with stroke admitted to two acute stroke units. Comparisons were drawn between people with aphasia (PWA) and people without aphasia.
Methods: People admitted to hospital with a first stroke were assessed at 2 weeks, 3 months and 6 months post-stroke. Measures included the Barthel Index for Activities of Daily Living (ADL), the Frenchay Aphasia Screening Test, the General Health Questionnaire-12 for emotional well-being and the Stroke and Aphasia Quality of Life Scale-39g. Extended ADL and social support were also measured at 3 and 6 months, with the Frenchay Activities Index and the Social Support Survey, respectively. Results: Of 126 eligible participants, 96 (76%) took part and 87 (69%) were able to self-report. Self-report data are reported here. Although outcomes improved significantly across time, at 6 months people continued to experience substantial functional limitations (16% aphasic; 32% dependent on basic ADL); participation limitations (79% ≤30 on the FAI); high psychological distress (45%); and compromised quality of life (54% ≤4 on the SAQOL-39g). Levels of social support remained relatively stable. Though at 3 months post-stroke PWA were significantly more likely to experience high psychological distress (93% versus 50% for those without), across time there were no significant differences between PWA and those without aphasia in psychological distress, ADL, or social support. There were, however, significant differences in extended ADL (F(1,68) = 7.80, p < 0.01) and quality of life (F(1,69) = 6.30, p < 0.05). Conclusion: PWA participated in fewer activities and reported worse quality of life after stroke than people without aphasia, even when their physical abilities, well-being and social support were comparable. Implications for clinical practice and future research are discussed.
abstract_id: PUBMED:33706719 Shifting of global aphasia to Wernicke's aphasia in a patient with intact motor function: a case report. Background: Global aphasia without hemiparesis (GAWH) is a rare stroke syndrome characterized by the dissociation of motor and language functions. Here, we present a case of GAWH with the patient later regaining speech fluency. Case Presentation: A 73-year-old man was admitted to our emergency department immediately after an episode of syncope. On arrival, we noted global aphasia without any focal neurologic signs. Computed tomography (CT) perfusion scans showed a large hypodense region over his left perisylvian area. Under the impression of acute ischaemic stroke, he received recombinant tissue plasminogen activator (rtPA) injection and was treated as an inpatient. The patient was later discharged with GAWH status and received regular speech rehabilitation. After 14 months of rehabilitation, the patient gradually recovered his language expression ability. The degree of aphasia was evaluated with the Concise Chinese Aphasia Test (CCAT), and we obtained brain single photon emission computed tomography (SPECT) scans to assess cerebral blood flow. Conclusion: A patient with severe impairments of Broca's and Wernicke's areas was able to talk fluently despite being unintelligible. SPECT revealed a relatively high level of radioactivity uptake in the right frontal lobe, suggesting that the deficits in speech fluency could have been compensated for by the right hemisphere. Although this is a single case demonstration, the results may strengthen the role of the right hemisphere in GAWH patients and suggest additional studies examining the possible benefits of stimulating activity at right homologous regions for recovering language function after global aphasia.
abstract_id: PUBMED:33832678 Reading and alexia. Alexia refers to a reading disorder caused by some form of acquired brain pathology, most commonly a stroke or tumor, in a previously literate subject. In neuropsychology, a distinction is made between central alexia (commonly seen in aphasia) and peripheral alexia (a perceptual or attentional deficit). The prototypical peripheral alexia is alexia without agraphia (pure alexia), where patients can write but are impaired in reading words and letters. Pure alexia is associated with damage to the left ventral occipitotemporal cortex (vOT) or its connections. Hemianopic alexia is associated with less extensive occipital damage and is caused by a visual field defect, which creates problems reading longer words and passages of text. Reading impairment can also arise due to attentional deficits, most commonly following right hemisphere or bilateral lesions. Studying patients with alexia, along with functional imaging studies of normal readers, has improved our understanding of the neurobiological processes involved in reading. A key question is whether an area in the left ventral occipitotemporal cortex is specialized for or selectively involved in word processing, or whether reading relies on tuning of more general purpose perceptual areas. Reading deficits may also be observed in dementia and traumatic brain injury, but often with less consistent deficit patterns than in patients with focal lesions.
abstract_id: PUBMED:26748167 Glioblastoma Presenting with Pure Alexia and Palinopsia Involving the Left Inferior Occipital Gyrus and Visual Word Form Area Evaluated with Functional Magnetic Resonance Imaging and Diffusion Tensor Imaging Tractography. Background: Rapid word recognition and reading fluency is a specialized cortical process governed by the visual word form area (VWFA), which is localized to the dominant posterior lateral occipitotemporal sulcus/fusiform gyrus. A lesion of the VWFA results in pure alexia without agraphia characterized by letter-by-letter reading. Palinopsia is a visual processing distortion characterized by persistent afterimages and has been reported in lesions involving the nondominant occipitotemporal cortex. Case Description: A 67-year-old right-handed woman with no neurologic history presented to our emergency department with acute cortical ischemic symptoms that began with a transient episode of receptive aphasia. She also reported inability to read, albeit with retained writing ability. She also saw afterimages of objects. During her stroke workup, an intra-axial circumscribed enhancing mass lesion was discovered involving her dominant posterolateral occipitotemporal lobe. Given the eloquent brain involvement, she underwent preoperative functional magnetic resonance imaging with diffusion tensor imaging tractography and awake craniotomy to maximize resection and preserve function. Conclusions: Many organic lesions involving these regions have been reported in the literature, but to the best of our knowledge, glioblastoma involving the VWFA resulting in both clinical syndromes of pure alexia and palinopsia with superimposed functional magnetic resonance imaging and fiber tract mapping has never been reported before.
abstract_id: PUBMED:3868058 Auditory perception in patients with CVA without aphasia. Tests of central auditory function were performed variously on 64 patients with hemiplegia who had suffered a single stroke but with minimal or no aphasia symptoms and with essentially normal hearing.
Both right and left hemisphere lesions were represented for comparison of effects on the tests. Reports of central auditory dysfunction generally assume localization in the temporal lobes. Since it was apparent that our tests were revealing positive findings in patients in whom there was no recognized evidence of temporal lobe involvement, we deliberately selected for one study 34 patients without clinical signs of such lesions. The tests required selective listening in which patients had to a) perceive and repeat a prime message in the presence of a simultaneously competing meaningful message, either under earphones or in the sound field (loudspeaker), and b) repeat all words in a classic dichotic listening task. The tests revealed different dysfunctions in right versus left hemisphere damage and documented that compensatory transfer of function from the usually dominant left side to the right hemisphere usually occurs only in young children. The dysfunctions appear to be unrelated to temporal lobe lesions.
abstract_id: PUBMED:10084536 Global aphasia without hemiparesis: language profiles and lesion distribution. Objectives: Global aphasia without hemiparesis (GAWH) is an uncommon stroke syndrome involving receptive and expressive language impairment, without the hemiparesis typically manifested by patients with global aphasia after large left perisylvian lesions. A few cases of GAWH have been reported with conflicting conclusions regarding pathogenesis, lesion localisation, and recovery. The current study was conducted to attempt to clarify these issues. Methods: Ten cases of GAWH were prospectively studied with language profiles and lesion analysis; five patients had multiple lesions, four patients had a single lesion, and one had a subarachnoid haemorrhage. Eight patients met criteria for cardioembolic ischaemic stroke. Results: Cluster analysis based on acute language profiles disclosed three subtypes of patients with GAWH; these clusters persisted on follow-up language assessment. Each cluster evolved into a different aphasia subtype: persistent GAWH, Wernicke's aphasia, or transcortical motor aphasia (TCM). Composite lesion analysis showed that persistent GAWH was related to lesioning of the left superior temporal gyrus. Patients with acute GAWH who evolved into TCM type aphasia had common lesioning of the left inferior frontal gyrus and adjacent subcortical white matter. Patients with acute GAWH who evolved into Wernicke's type aphasia were characterised by lesioning of the left precentral and postcentral gyri. Recovery of language was poor in all but one patient. Conclusions: Although patients with acute GAWH are similar on neurological examination, they are heterogeneous with respect to early aphasia profile, language recovery, and lesion profile.
abstract_id: PUBMED:27054313 A Comparison of the Visual Attention Patterns of People With Aphasia and Adults Without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes. Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes.
Results: Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Conclusions: Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.
abstract_id: PUBMED:10973121 Global aphasia without hemiparesis: case report. Symptoms and signs of a stroke indicate which areas of the brain are affected and may also suggest the pathophysiology. We report herein a case of global aphasia without hemiparesis due to embolic infarct. Our case suggests that this situation may be an important sign for embolic cerebral infarction, as reported in the literature.
Answer: The impact of stroke on individuals can vary significantly, and the presence of aphasia can influence outcomes and experiences post-stroke. People with aphasia (PWA) and people without aphasia experience different challenges and recovery trajectories. According to a study exploring long-term outcomes in a cohort of people admitted to two acute stroke units, although functional outcomes improved over time for all stroke survivors, at 6 months post-stroke, individuals continued to face substantial functional limitations. Notably, PWA participated in fewer activities and reported worse quality of life than those without aphasia, even when their physical abilities, well-being, and social support were comparable (PUBMED:20712416). This suggests that aphasia has a unique impact on the quality of life and participation in activities beyond the physical impairments caused by a stroke. The study found that at 3 months post-stroke, PWA were significantly more likely to experience high psychological distress compared to those without aphasia, although this difference was not observed across time for psychological distress, activities of daily living (ADL), and social support. However, there were significant differences in extended ADL and quality of life, with PWA faring worse (PUBMED:20712416). These findings indicate that the presence of aphasia can lead to more pronounced participation limitations and a lower quality of life, highlighting the need for targeted interventions and support for PWA to address these specific challenges. The impact of stroke on people with aphasia is therefore distinct from its impact on those without aphasia, affecting not just communication abilities but also broader aspects of life and well-being.