Instruction: Nasal and hand carriage of Staphylococcus aureus in staff at a Department for Thoracic and Cardiovascular Surgery: endogenous or exogenous source? Abstracts: abstract_id: PUBMED:14510252 Nasal and hand carriage of Staphylococcus aureus in staff at a Department for Thoracic and Cardiovascular Surgery: endogenous or exogenous source? Objective: To investigate the rates of Staphylococcus aureus carriage on the hands and in the noses of healthcare workers (HCWs) and the relatedness of S. aureus isolates found in the two sites. Design: Point-prevalence study. Setting: Department for Thoracic and Cardiovascular Surgery at the University Hospital of Uppsala, Uppsala, Sweden. Subjects And Methods: Samples were obtained from 133 individuals, 18 men and 115 women, using imprints of each hand on blood agar and a swab from the nose. S. aureus isolates were identified by standard methods and typed by pulsed-field gel electrophoresis. Results: S. aureus was found on the hands of 16.7% of the men and 9.6% of the women, and in the noses of 33.3% of the men and 17.4% of the women. The risk ratio for S. aureus carriage on the hands with nasal carriage was 7.4 (95% confidence interval, 2.7 to 20.2; P < .001). Among the 14 HCWs carrying S. aureus on their hands, strain likeness to the nasal isolate was documented for 7 (50%). Conclusions: Half of the HCWs acquired S. aureus on the hands from patients or the environment and half did so by apparent self-inoculation from the nose. Regardless of the source of contamination, good compliance with hand hygiene is needed from all HCWs to protect patients from nosocomial infections. The moderate rate of S. aureus carriage on hands in this setting could be the result of the routine use of alcoholic hand antisepsis. abstract_id: PUBMED:23543837 The prevalence of nasal carriage of Staphylococcus aureus among healthcare workers at a tertiary care hospital in Assam with special reference to MRSA. Background: Recent years have witnessed increasing resistance of Staphylococcus aureus to many antimicrobial agents. The most notable example is the emergence of methicillin-resistant Staphylococcus aureus (MRSA), which was reported just one year after the launch of methicillin. The ecological niche of S. aureus strains is the anterior nares. The identification of Staphylococcus aureus using a proper antibiogram and the detection of methicillin-resistant Staphylococcus aureus greatly contribute towards the effective treatment of patients. Aims And Objectives: To isolate Staphylococcus aureus from the nasal swabs of healthcare workers (HCWs) and to study their antimicrobial susceptibility patterns, including methicillin resistance. Materials And Methods: Nasal swabs were collected from the healthcare workers of various clinical departments of the hospital over a period of one year. The isolation of Staphylococcus aureus and antimicrobial susceptibility testing were carried out by standard bacteriological procedures. Results: Staphylococcus aureus was isolated in 70 cases (22.22%). The prevalence of S. aureus nasal carriage was higher among the male HCWs (54.28%) than among the female HCWs (45.71%). The carriage rate was highest in the orthopaedics department, followed by the surgery and gynaecology departments. All the Staphylococcus aureus isolates were sensitive to vancomycin and linezolid (100%). Resistance was highest to penicillin and ampicillin (90% and 88.6%, respectively).
Methicillin resistance was seen in 11.43% of the S. aureus isolates, both by the disc diffusion test and by the Oxacillin Resistance Screen Agar (ORSA) test. Conclusions: Compliance of health professionals with sanitary and antibacterial guidelines is the single most important factor in preventing nosocomial infections. Simple preventive measures such as hand washing before and after patient examination, the use of sterile aprons and masks in postoperative wards, vigilance when examining immunocompromised patients, and avoiding touching one's nose during work can reduce the transmission rate considerably. abstract_id: PUBMED:29685170 Nasal carriage, risk factors and antimicrobial susceptibility pattern of methicillin-resistant Staphylococcus aureus among healthcare workers in Adigrat and Wukro hospitals, Tigray, Northern Ethiopia. Objective: The aim of this study was to determine nasal carriage, risk factors and antimicrobial susceptibility pattern of methicillin-resistant Staphylococcus aureus among healthcare workers of Adigrat and Wukro hospitals, Northern Ethiopia. Results: The overall prevalence of S. aureus and methicillin-resistant S. aureus (MRSA) in the present study were 12% (29/242) and 5.8% (14/242), respectively. The rate of MRSA among S. aureus was 48.3% (14/29). In this study, MRSA carriage was particularly high among nurses (7.8%) and in the surgical ward (17.1%). None of the MRSA isolates were sensitive to penicillin and ampicillin. However, low resistance was found for chloramphenicol and clindamycin. Diabetes and use of hand rub were significantly associated with MRSA colonization. abstract_id: PUBMED:19468188 Nasal carriage of methicillin-resistant Staphylococcus aureus among surgical unit staff. Methicillin-resistant Staphylococcus aureus (MRSA) is a problem within healthcare organizations and in the community. The aims of this study were to identify the prevalence of S. aureus in the anterior nares of surgical unit staff, to analyse their antibiogram with special reference to methicillin resistance, and to compare the isolates among surgical unit staff and in relation to the wards where they worked. Sterile swabs were used to collect the samples from the anterior nares of 100 healthcare workers working in 5 surgical wards who satisfied rigid inclusion and exclusion criteria. Standard procedures were followed for isolation, identification, and antibiotic sensitivity testing. S. aureus carrier status was observed in 13 individuals, of whom 2 (15.4%) were resistant to methicillin. All the isolates of S. aureus were multidrug-resistant but sensitive to vancomycin and bacitracin. One of the 13 was resistant to linezolid. Sixty-three of the staff were carriers of coagulase-negative Staphylococcus. The presence of methicillin resistance may cause problems in hospital infection control programs and may indicate emerging issues. This study suggests the need for periodic screening of hospital personnel in order to monitor trends and take steps to treat carriers. abstract_id: PUBMED:36964269 Staphylococcus aureus nasal colonization level and intracellular reservoir: a prospective cohort study. Staphylococcus aureus is a major pathogen in humans. The nasal vestibule is considered the main reservoir of S. aureus. However, even though the nasal cavity may also be colonized by S. aureus, the relationships between the two sites are still unclear. We conducted a prospective study in humans to assess the S.
aureus colonization profiles in the vestibule and nasal cavity, and to investigate the presence of intracellular S. aureus in the two sites. Patients undergoing ear, nose, and throat surgery were swabbed during endoscopy to determine S. aureus nasal load, genotype, and presence of intracellular S. aureus. Among perioperative samples from 90 patients, the prevalence of S. aureus carriage was 32.2% and 33.3% in the vestibule and the nasal cavity, respectively. The mean S. aureus load was 4.10 and 4.25 log10 CFU/swab for the nasal vestibule and nasal cavity, respectively (P > 0.05). Genotyping of S. aureus revealed that all nasal strains isolated from a given individual belonged to the same clonal complex and spa-type. Intracellular carriage was observed in 5.6% of the patients, all of whom exhibited an S. aureus vestibule load higher than 3 log10 CFU/swab. An intracellular niche was observed in the vestibule as well as in the nasal cavity. In conclusion, the nasal cavity was also found to be a major site of S. aureus carriage in humans and should draw attention when studying host-pathogen interactions related to the risk of infection associated with colonization. abstract_id: PUBMED:31191769 Preoperative screening for nasal carriage of methicillin-resistant Staphylococcus aureus in patients undergoing general thoracic surgery. Objectives: Nasal carriage of methicillin-resistant Staphylococcus aureus (MRSA) is a risk factor for surgical site infections (SSIs). However, few studies have evaluated the rate of nasal carriage of MRSA and its effect on SSIs in patients undergoing general thoracic surgery. We investigated the importance of preoperative screening for nasal carriage of MRSA in patients undergoing general thoracic surgery. Patients and Methods: We retrospectively analyzed 238 patients with thoracic diseases who underwent thoracic surgery. We reviewed the rates of nasal carriage of MRSA and SSIs. Results: Results of MRSA screening were positive in 11 of 238 patients (4.6%), and 9 of these 11 patients received nasal mupirocin. SSIs occurred in 4 patients (1.8%). All 4 patients developed pneumonia; however, MRSA pneumonia occurred in only 1 of these 4 patients. No patient developed wound infection, empyema, or mediastinitis. SSIs did not occur in any of the 11 patients with positive results on MRSA screening. Conclusions: The rates of nasal carriage of MRSA and SSIs were low in this case series. Surveillance is important to determine the prevalence of MRSA carriage and infection in hospitals, particularly in the intensive care unit. However, routine preoperative screening for nasal carriage of MRSA is not recommended in patients undergoing general thoracic surgery. abstract_id: PUBMED:34074157 Rectal Staphylococcus aureus Carriage and Recurrence After Endoscopic Sinus Surgery for Chronic Rhinosinusitis With Nasal Polyps: A Prospective Cohort Study. Objective: Chronic rhinosinusitis with nasal polyps (CRSwNP) remains a major challenge due to its high recurrence rate after endoscopic sinus surgery (ESS). We aimed to investigate the risk factors of recurrence among patients who underwent ESS for chronic rhinosinusitis (CRS). Methods: In this prospective cohort study, 391 cases receiving ESS at a single institution between 2014 and 2017 were included for analysis. Baseline characteristics, including rectal Staphylococcus aureus (S. aureus) carriage, were recorded in patients receiving ESS for CRSwNP. The primary outcome was the recurrence of CRSwNP.
A multivariate regression model was established to identify independent predictors of recurrence. Results: Overall, 142 cases (36.3%) recurred within 2 years after ESS. After variable selection, the multivariate regression model consisted of 4 variables: asthma (odds ratio [OR] = 3.41; P < .001), nonsteroidal anti-inflammatory drug allergy (OR = 2.27; P = .005), previous ESS (OR = 3.64; P < .001), and preoperative rectal carriage of S. aureus (OR = 2.34; P = .001). Conclusions: Based on our results, surgeons could predict certain groups of patients who are at high risk for recurrence after ESS. Rectal carriage of S. aureus was more strongly associated with recurrence of CRSwNP after ESS than skin or nasal carriage. abstract_id: PUBMED:26457182 Nasal carriage of methicillin-resistant Staphylococcus aureus among healthcare workers at a tertiary care hospital in Western Nepal. Background: Staphylococcus aureus is a frequent cause of infections in both the community and hospital. Methicillin-resistant Staphylococcus aureus continues to be an important nosocomial pathogen, and infections are often difficult to manage due to its resistance to multiple antibiotics. Healthcare workers are an important source of nosocomial transmission of MRSA. This study aimed to determine the nasal carriage rate of S. aureus and MRSA among healthcare workers at Universal College of Medical Sciences and Teaching Hospital, Nepal and to determine the antibiotic susceptibility pattern of the isolates. Methods: A cross-sectional study involving 204 healthcare workers was conducted. Nasal swabs were collected and cultured on Mannitol salt agar. Mannitol-fermenting colonies that were gram-positive cocci, catalase positive, and coagulase positive were identified as S. aureus. Antibiotic susceptibility testing was performed by the modified Kirby-Bauer disc diffusion method. Methicillin resistance was detected using the cefoxitin disc diffusion method. Results: Of 204 healthcare workers, 32 (15.7%) were nasal carriers of S. aureus, and among them 7 (21.9%) were carriers of MRSA. The overall nasal carriage rate of MRSA was 3.4% (7/204). The highest MRSA nasal carriage rate, 7.8% (4/51), was found among nurses. Healthcare workers from surgical wards and the operating room each accounted for 28.6% (2/7) of MRSA carriers. Among MRSA isolates, inducible clindamycin resistance was observed in 66.7% (2/3) of erythromycin-resistant isolates. Conclusions: High nasal carriage of S. aureus and MRSA among healthcare workers (especially in the surgical ward and operating room) necessitates improved infection control measures to limit MRSA transmission in our setting. abstract_id: PUBMED:17933699 Nasal carriage of meticillin-resistant Staphylococcus aureus: the prevalence, patients at risk and the effect of elimination on outcomes among outclinic haemodialysis patients. Objective: Haemodialysis (HD) patients with meticillin-resistant Staphylococcus aureus (MRSA) infections face high morbidity and mortality. Nasal carriage of Staphylococcus aureus is known to play an important role as an endogenous source for HD-access-related infections that contribute significantly to morbidity, mortality and cost of end-stage renal disease (ESRD) management.
This prospective investigation in regular outpatient haemodialysis patients was undertaken to estimate the prevalence of S. aureus nasal carriage, to define patient groups at risk, and to evaluate the effect of elimination on outcomes. Methods: 136 HD patients without signs of overt clinical infection (48 women, 88 men, age 22-88 years) were screened at least twice for nasal carriage of meticillin-susceptible S. aureus (MSSA) or meticillin-resistant S. aureus (MRSA). Nasal carriage of S. aureus was related to demographic factors (age, gender, duration on HD), comorbidity (diabetes, malignancy), and exposure to health care (dialysis staff, hospitalisation). Nasal carriers of MRSA received standardized mupirocin therapy and were followed up for elimination and infections for 1 year. Results: The prevalence of nasal carriage of Staphylococcus aureus was 53% (41% MSSA, 12% MRSA). Compared with patients showing no colonization or with MSSA carriers, the 16 patients with nasal carriage of MRSA were older and more likely to have acquired the bacteria while hospitalised. Genotyping of MRSA isolates revealed different strains in patients and care providers. Mupirocin eliminated MRSA in all patients, and none of them experienced an infection caused by Staphylococcus aureus, confirming the value of MRSA elimination reported in other studies. Conclusions: Elderly patients hospitalised for surgery constitute a high-risk group for nasal carriage of MRSA. Early diagnosis may help prevent clinically relevant infection. Elimination of colonization by mupirocin appears to be an attractive preventive strategy. abstract_id: PUBMED:17086484 Nasal carriage of methicillin-resistant Staphylococcus aureus in vascular surgery. The purpose of this study was to determine the prevalence of nasal carriage of methicillin-resistant Staphylococcus aureus (MRSA) and to define risk factors allowing identification of high-risk patients for MRSA nasal carriage at admission to the vascular surgery unit. From March 23, 2004 to July 13, 2004, screening for nasal carriage of MRSA was conducted at admission to the vascular surgery unit and 1 week thereafter. To analyze risk factors for MRSA nasal carriage at admission to the vascular surgery unit, a case-control study was carried out in patients presenting with colonization at the time of admission. A total of 308 patients underwent nasal screening for MRSA. Thirteen were colonized with MRSA (nine at admission and four acquired), i.e., 4.2% of patients. Methicillin-susceptible Staphylococcus aureus (MSSA) was found in 11.4% of patients who underwent screening. Six patients with MRSA infection were identified during the study period. The two patients who acquired infection were colonized at the time of admission. Arrival from another healthcare facility or from another department was a significant risk factor for MRSA carriage. The prevalence of nasal carriage in vascular surgery was 4.2%. Nasal screening is highly cost-effective since 60% of MRSA carriers were undetected using diagnostic specimens alone. French recommendations issued for cardiac and orthopedic surgery by the consensus conference on preoperative management of infectious risk on March 5, 2004, should be extended to vascular surgery.
Answer: The study conducted at the Department for Thoracic and Cardiovascular Surgery at the University Hospital of Uppsala, Sweden, investigated the rates of Staphylococcus aureus carriage on the hands and in the noses of healthcare workers (HCWs) to determine whether the source of S. aureus was endogenous or exogenous. The results showed that S. aureus was found on the hands of 16.7% of the men and 9.6% of the women, and in the noses of 33.3% of the men and 17.4% of the women. The risk ratio for S. aureus carriage on the hands with nasal carriage was 7.4, indicating a strong association between nasal carriage and hand contamination. Among the 14 HCWs carrying S. aureus on their hands, strain likeness to the nasal isolate was documented for 7 (50%). This suggests that half of the HCWs acquired S. aureus on their hands from patients or the environment (exogenous source) and half did so by apparent self-inoculation from the nose (endogenous source). The study concluded that regardless of the source of contamination, good compliance with hand hygiene is essential for all HCWs to protect patients from nosocomial infections (PUBMED:14510252).
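To make the headline statistic in PUBMED:14510252 concrete, the following Python sketch recomputes a risk ratio and its Wald (Katz) 95% confidence interval from a 2x2 table. The cell counts are hypothetical: the abstract does not report them, so the numbers below were chosen only to be consistent with the published figures (RR 7.4, 95% CI roughly 2.7 to 20.2).

```python
import math

# Hypothetical 2x2 table, reconstructed to be consistent with the reported
# RR of 7.4 (95% CI 2.7-20.2); the abstract does not give the cell counts.
hand_pos_nasal_pos = 9    # hand carriers among ~26 nasal carriers
nasal_pos_total    = 26
hand_pos_nasal_neg = 5    # hand carriers among ~107 non-carriers
nasal_neg_total    = 107

risk_exposed   = hand_pos_nasal_pos / nasal_pos_total
risk_unexposed = hand_pos_nasal_neg / nasal_neg_total
rr = risk_exposed / risk_unexposed

# Standard error of log(RR), then a Wald interval on the log scale.
se_log_rr = math.sqrt(
    1 / hand_pos_nasal_pos - 1 / nasal_pos_total
    + 1 / hand_pos_nasal_neg - 1 / nasal_neg_total
)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")  # RR = 7.4, 95% CI (2.7, 20.3)
```

The log-scale interval is the textbook construction for a risk ratio; any counts with the same ratio structure would illustrate the same calculation.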
Instruction: Does HER2/neu expression provide prognostic information in patients with advanced urothelial carcinoma? Abstracts: abstract_id: PUBMED:12209684 Does HER2/neu expression provide prognostic information in patients with advanced urothelial carcinoma? Background: Muscle-invasive urothelial carcinoma of the bladder is a highly lethal malignancy, particularly in the setting of locally advanced or metastatic disease. Prior reports of HER2/neu (c-erbB-2 or HER2) expression in bladder carcinoma have been mixed; therefore, its value in predicting metastasis or response to therapy has not been established in this tumor type. Thus, the authors evaluated a possible correlation between HER2 expression and outcome in patients with high-grade, muscle-invasive urothelial carcinoma of the bladder who received paclitaxel-based chemotherapy. Methods: Archival tumor tissues from patients with advanced urothelial carcinoma who were enrolled on two clinical trials of paclitaxel-based chemotherapy regimens were analyzed for HER2/neu expression by immunohistochemistry (IHC). The authors correlated HER2 expression by IHC with clinical outcomes, such as response rate, progression-free survival, and overall survival, using univariate analysis. Results: Thirty-nine tumor specimens were assessed for HER2 expression, most of which (70%) were collected from patients with metastatic disease. All were high-grade urothelial carcinomas (transitional cell carcinomas, Grade 3). Strong HER2 expression (2+/3+) was seen in 28 patients (71%). Patients with responding disease had an HER2 expression rate of 78%, similar to the rate seen in patients with stable disease (75%). In contrast, patients with progressive disease had an HER2 expression rate of 50%, although this difference did not reach statistical significance. However, univariate analysis showed that increased HER2 expression predicted an improvement in progression-free and overall survival. When HER2 status was used as a dichotomous variable, tumors with positive HER2 expression did not have any association with response or with progression-free survival; however, positive HER2 status was associated significantly with a decreased risk of death (P = 0.03). Conclusions: This study of HER2 expression in bladder carcinoma focused on patients who were treated prospectively in a standardized fashion, unlike prior studies that have evaluated banked, archival specimens. The authors confirmed the findings of others that high-grade, muscle-invasive urothelial carcinoma of the bladder has a significant rate of HER2 expression (71%). However, contrary to other reports, the current study found that HER2 expression in the context of paclitaxel-based chemotherapy decreased the risk of death significantly. Further research is warranted on the possible association of HER2 expression with chemosensitivity in urothelial carcinoma as well as the efficacy of HER2-targeted therapies (such as trastuzumab) for patients with high-grade, muscle-invasive urothelial carcinoma of the bladder. abstract_id: PUBMED:17940352 Does HER2 immunoreactivity provide prognostic information in locally advanced urothelial carcinoma patients receiving adjuvant M-VEC chemotherapy? Introduction: To evaluate the impact of HER2 immunoreactivity on clinical outcome in locally advanced urothelial carcinoma patients who received surgery alone or methotrexate, vinblastine, epirubicin, and cisplatin (M-VEC) as adjuvant chemotherapy.
Materials And Methods: We studied 114 formalin-fixed, paraffin-embedded specimens obtained from locally advanced urothelial carcinoma patients receiving surgery alone or adjuvant M-VEC. The authors evaluated HER2 immunoreactivity using immunohistochemical staining and explored the influence of pathological parameters and HER2 immunoreactivity on progression-free survival (PFS) and disease-specific overall survival (OS) using univariate and multivariate Cox analyses. Results: Urothelial carcinoma of the bladder had a significantly higher frequency of HER2 immunoreactivity than that of the upper urinary tract (60.7 vs. 20.7%, p < 0.0001). Overall, nodal status was a strong and independent prognostic indicator for clinical outcome. HER2 immunoreactivity was significantly associated with PFS (p = 0.02) and disease-specific OS (p = 0.005) in advanced urothelial carcinoma patients. In patients receiving adjuvant M-VEC, HER2 immunoreactivity was a significant prognostic factor for PFS (p = 0.03) and disease-specific OS (p = 0.02) on univariate but not multivariate analysis, and it was not prognostic in patients managed by watchful waiting. Conclusions: HER2 immunoreactivity might have a limited prognostic value for advanced urothelial carcinoma patients with adjuvant M-VEC. abstract_id: PUBMED:20089105 Human epidermal growth factor receptor 2 expression status provides independent prognostic information in patients with urothelial carcinoma of the urinary bladder. Objective: To test whether the expression of human epidermal growth factor receptor 2 (HER-2) is of prognostic value in a contemporary cohort of patients with urothelial carcinoma of the urinary bladder (UCB). Patients And Methods: Tissue microarrays of 198 patients were constructed and immunohistochemical staining was performed on the primary tumours and on lymphatic nodal metastases. All patients were treated with radical cystectomy (RC) and regional lymphadenectomy for UCB. HER-2 expression was assessed using continuous HER-2 expression scores (ranging from 0.1 to 3.9) generated using an automated cellular imaging system. Scores of ≥ 1.0 in at least 10% of tumour cells were regarded as HER-2 positive. We correlated HER-2 scores with pathological and clinical variables, including disease recurrence and cancer-specific mortality. Results: Of 198 patients undergoing RC with lymphadenectomy, there was HER-2 positivity in 55 primary tumours (27.8%) compared with 44.2% of the evaluable positive lymph nodes (P < 0.001). HER-2 positivity was significantly associated with the presence of lymphovascular invasion (LVI; P = 0.026). With a median (range) follow-up of 35.4 (1.3-176.1) months, 101 patients (51.0%) had UCB recurrence and 82 patients (41.4%) died from the disease. In multivariable analyses that adjusted for the effects of pathological tumour stage, grade, LVI, lymph node metastasis and adjuvant chemotherapy, HER-2 positive patients were at increased risk for both UCB recurrence (hazard ratio [HR] 1.955, P = 0.003) and UCB-specific mortality (HR 2.066, P = 0.004) compared with patients with negative HER-2 expression. Conclusion: A positive HER-2 status is associated with aggressive UCB and provides independent prognostic information for UCB recurrence and mortality. Assessment of HER-2 status can be used to identify patients at high risk of disease progression who may benefit from adjuvant HER-2-targeted mono- or combined therapy after RC.
abstract_id: PUBMED:17899426 Prognostic significance of Her2/neu overexpression in patients with muscle-invasive urinary bladder cancer treated with radical cystectomy. Introduction: The aim of the study was to evaluate the status of Her2/neu protein expression in patients with muscle-invasive urothelial carcinomas of the bladder treated with radical cystectomy and to determine its prognostic significance. Material And Methods: We retrospectively analyzed the data of 90 patients who had undergone cystectomy for invasive transitional cell carcinoma of the urinary bladder. Immunohistochemical analysis for Her2/neu was done on paraffin-fixed tissues with CB11 antibodies (BioGenex, San Ramon, CA, USA). Sections with grade 2 and grade 3 staining were considered positive for Her2/neu. Results: Over a median follow-up period of 46 months (24-96 months), 46 patients were alive without disease recurrence and six were alive with recurrent disease, either at the local site or with distant metastases. The remaining 38 patients had died. The median overall survival time was 50 months, and median disease-free survival time was 40 months. Her2/neu status was significantly related to tumor stage (P = 0.001), lymph node involvement (77% in N+ vs 23% in N0; P = 0.001) and the grade of the disease (32% of grade 2 vs 71% of grade 3; P = 0.037). Kaplan-Meier curves showed significantly worse disease-related survival (log-rank P = 0.011) for patients with Her2-overexpressing tumors than for those without overexpression. In addition to tumor stage [P = 0.001; relative risk (RR) = 2.62] and lymph node status (P = 0.0001; RR = 2.95), Her2 status (P = 0.020; RR = 2.22) was identified as an independent predictor for disease-related survival in a multivariate analysis. Conclusion: These results suggest that Her2 expression might provide additional prognostic information for patients with muscle-invasive bladder cancer. Future studies on Her2 expression with chemosensitivity and the efficacy of Her2-targeted therapies in urothelial carcinomas are warranted. abstract_id: PUBMED:28750448 Role of the human ErbB family receptors in urothelial carcinoma of the bladder: mRNA expression status and prognostic relevance. Background: Altered expression of epidermal growth factor (EGF) family (ErbB) receptors in urothelial carcinoma of the urinary bladder (UCB) has been associated with adverse outcomes. Given the limited treatment options in UCB, EGFR and HER2 (ERBB2) represent established therapeutic targets in other entities. We assessed the expression of ErbB family receptors (ERBB1-4) on mRNA levels in correlation with histopathological and clinical parameters in patients treated with radical cystectomy (RC). Methods: 94 patients (female = 22; male = 72; median age: 66.5 years [range 39-88]) with UCB (pT1-4) treated with RC were included. Median follow-up was 28.2 months (range 0.6-139). ErbB mRNA expression levels were determined after extraction from formalin-fixed, paraffin-embedded tissue. Univariate and multivariate Cox proportional hazard models were used to assess recurrence-free survival (RFS) and cancer-specific survival (CSS). Results: Overexpression was observed in 18% (ERBB3), 39% (EGFR), 34% (HER2/ERBB2), and 30% (ERBB4) of patients. Higher pathological stage (p = 0.012), a positive nodal status (p = 0.0002), high ERBB4 (p = 0.012) and high HER2 (ERBB2) levels (p = 0.014) were significantly associated with reduced RFS.
A negative lymph node status (p = 0.0003) and low HER2 (ERBB2) levels (p = 0.042) had a favourable prognostic impact on CSS. In multivariate analysis, positive pN stage (p = 0.0011) and high ERBB4 expression (p = 0.0073) were independent predictors of reduced RFS. Higher pN stage (p = 0.0016) was an independent predictor of reduced CSS. Conclusions: Higher HER2 (ERBB2) expression is associated with an unfavourable prognosis in patients with UCB. However, it is not an independent predictor when measured on mRNA levels. Further analyses need to clarify which patients may still benefit from HER2 (ERBB2)-targeted drugs. abstract_id: PUBMED:20651405 HER-2/AKT expression in upper urinary tract urothelial carcinoma: prognostic implications. Aim: To assess HER-2 and p-AKT expression in upper urinary tract urothelial carcinoma (UTUC) in order to determine their value as prognostic factors of tumour progression and cancer-specific survival. Patients And Methods: One hundred consecutive UTUC patients treated between 1990 and 2004 were retrospectively included in 4 tissue microarrays for immunostaining. Median follow-up: 33.03 months. Results: Positive HER-2 expression was found in 10 cases and cytoplasmic p-AKT expression in 84 cases; expression intensity was strong in 30 cases, moderate in 28, and weak in 26. Nuclear p-AKT expression was found in 6 patients: 1 with strong and 5 with moderate intensity. Nuclear p-AKT expression was an independent factor for tumour progression (HR=4.145, p=0.013), together with grade (HR=4.557, p=0.009) and stage (HR=2.085, p=0.003). In cancer-specific survival analysis, nuclear p-AKT expression (HR=4.268, p=0.017), grade (HR=5.214, p=0.035), and stage (HR=2.666, p=0.002) were identified as independent prognostic factors. Conclusion: Nuclear p-AKT expression, together with stage and grade, constitutes an independent prognostic factor for tumour progression and cancer-specific survival. abstract_id: PUBMED:22277196 Prognostic role and HER2 expression of circulating tumor cells in peripheral blood of patients prior to radical cystectomy: a prospective study. Background: Preliminary research has suggested the potential prognostic value of circulating tumor cells (CTC) in patients with advanced nonmetastatic urothelial carcinoma of the bladder (UCB). Objective: To prospectively analyze the clinical relevance and human epidermal growth factor receptor 2 (HER2) expression of CTC in patients with clinically nonmetastatic UCB. Design, Setting, And Participants: Blood samples from 100 consecutive UCB patients treated with radical cystectomy (RC) were investigated for the presence of CTC (CellSearch system) and their HER2 expression status (immunohistochemistry). HER2 expression of the corresponding primary tumors and lymph node metastases was analyzed using fluorescence in situ hybridization. Intervention: Blood samples were taken preoperatively. Patients underwent RC with lymphadenectomy. Measurements: Outcomes were assessed according to CTC status. HER2 expression of CTC was compared with that of the corresponding primary tumor and lymph node metastasis. Results And Limitations: CTC were detected in 23 of 100 patients (23%) with nonmetastatic UCB (median: 1; range: 1-100). Presence, number, and HER2 status of CTC were not associated with clinicopathologic features. CTC-positive patients had significantly higher risks of disease recurrence and cancer-specific and overall mortality (all p ≤ 0.001).
After adjusting for the effects of standard clinicopathologic features, CTC positivity remained an independent predictor for all end points (hazard ratios: 4.6, 5.2, and 3.5, respectively; p values ≤ 0.003). HER2 was strongly positive in CTC from 3 of 22 patients (14%). There was discordance between HER2 expression on CTC and HER2 gene amplification status of the primary tumors in 23% of cases but concordance between CTC, primary tumors, and lymph node metastases in all CTC-positive cases (100%). The study was limited by its sample size. Conclusions: Preoperative CTC are already detectable in almost a quarter of patients with clinically nonmetastatic UCB treated with RC and were a powerful predictor of early disease recurrence and cancer-specific and overall mortality. Thus, CTC may serve as an indication for multimodal therapy. Molecular characterization of CTC may serve as a liquid biopsy to guide individual targeted therapy in future clinical trials. abstract_id: PUBMED:19810139 Prognostic impact of HER2/neu protein in urothelial bladder cancer. Survival analysis of 80 cases and an overview of almost 20 years' research. Purpose: This study was conducted to evaluate the quantitative assessment of HER2/neu immunohistochemical expression in urothelial bladder cancer in order to determine its prognostic significance. Materials And Methods: Archival tumor tissue from 80 patients with primary urothelial carcinoma was analysed for HER2/neu immunohistochemical expression. A highly reproducible standardized procedure on a Bond-X automated slide stainer was used. Results: HER2 protein was overexpressed in 41 of 80 patients (51.25%), with the expression rate increasing with advancing tumor stage (p=0.032) and tumor grade (p=0.0001). Kaplan-Meier analyses showed that positive membranous expression of HER2/neu was not associated with an increased probability of tumor recurrence (p=0.362). In contrast, HER2 scores correlated strongly with specific survival probability (p=0.002) and overall survival (p=0.025). Multivariate analysis revealed that only stage was a significant independent predictor of specific survival (p=0.016), while HER2 expression predicted specific survival with borderline statistical significance (p=0.08). Conclusion: HER2 overexpression represents a prognostic factor for adverse disease outcome. abstract_id: PUBMED:38155921 Advances in HER2-Targeted Treatment for Advanced/Metastatic Urothelial Carcinoma. Urothelial carcinoma (UC) represents a common malignancy of the urinary system that can involve the kidneys, ureter, bladder, and urethra. Advanced/metastatic UC (mUC) tends to have a poor prognosis. UC ranks third in terms of human epidermal growth factor receptor 2 (HER2) overexpression among all tumors. However, unlike in breast cancer, multiple studies have found variable degrees of HER2 positivity and poor consistency between HER2 protein overexpression and gene amplification. Trials involving trastuzumab, pertuzumab, lapatinib, afatinib, and neratinib have failed to prove their beneficial effect in patients with HER2-positive mUC, and a clinical trial on T-DM1 (trastuzumab emtansine) was terminated prematurely because of adverse reactions. However, a phase II trial showed that RC48-ADC was effective. In this review, we provide an in-depth overview of advances in research on HER2-targeted therapy and the role of HER2 in mUC.
Furthermore, we discuss potential strategies for overcoming anti-HER2 resistance and summarize the novel anti-HER2 approaches for the management of mUC used in recent clinical trials. abstract_id: PUBMED:38455962 Expression of HER2 in high-grade urothelial carcinoma based on Chinese expert consensus and the clinical effects of disitamab vedotin-tislelizumab combination therapy in the treatment of advanced patients. Background: Numerous studies have reported high levels of human epidermal growth factor receptor-2 (HER2) expression in urothelial carcinoma (UC), but without a uniform scoring system. Based on the 2021 edition of the clinical pathological expert consensus on HER-2 testing in UC in China, we investigated the expression level and clinical significance of HER2 in high-grade UC. Furthermore, we examined the prognosis of patients with locally advanced/metastatic UC treated with the HER2-targeting antibody-drug conjugate (ADC) disitamab vedotin (DV) combined with the programmed cell death protein 1 (PD-1) inhibitor tislelizumab. Patients and methods: From 2019 to 2022, we collected paraffin specimens of UC from the Department of Urology at the Provincial Hospital Affiliated to Shandong First Medical University. HER2 expression-related factors were investigated. Patients with advanced UC who had failed systemic chemotherapy at least once and had received immune checkpoint inhibitor (ICI) medication during second-line treatment were selected and treated with DV in combination with tislelizumab. We assessed the therapy's efficacy and safety. Results: A total of 185 patients with high-grade UC were included in this investigation, of whom 127 (68.7%) were HER2 positive (IHC 2+/3+) according to the 2021 clinical pathological expert consensus on HER2 testing in UC in China. The clinical stage of UC differed statistically significantly between the HER2- and HER2+ groups (p = 0.019). Sixteen advanced UC patients were treated with DV and tislelizumab for a median of 14 months. The disease control rate was 87.5%, while the objective response rate (ORR) was 62.5%. The ORR of HER2+ individuals was higher than that of HER2- individuals (70.0% vs. 50.0%). The median progression-free survival and overall survival were not reached. In this study, the incidence of treatment-related adverse events was 68.8% (11/16), all of them grade 1 or 2 adverse reactions. Conclusion: HER2 protein was expressed at a high rate in UC, with 68.7% of patients HER2 positive (IHC 2+/3+). HER2+ expression was positively correlated with higher clinical stage of UC. The HER2-targeted ADC disitamab vedotin combined with the PD-1 inhibitor tislelizumab showed efficacy, safety, and manageable adverse reactions in the treatment of advanced UC. Answer: Yes, HER2/neu expression does provide prognostic information in patients with advanced urothelial carcinoma. Several studies have investigated the correlation between HER2 expression and clinical outcomes in urothelial carcinoma, with varying results. One study found that increased HER2 expression predicted an improvement in progression-free and overall survival in patients with high-grade, muscle-invasive urothelial carcinoma of the bladder who received paclitaxel-based chemotherapy. Positive HER2 status was significantly associated with a decreased risk of death (PUBMED:12209684).
Another study reported that HER2 immunoreactivity was significantly associated with progression-free survival (PFS) and disease-specific overall survival (OS) in advanced urothelial carcinoma patients. However, HER2 immunoreactivity was not an independent prognostic factor in multivariate analysis for patients receiving adjuvant M-VEC chemotherapy (PUBMED:17940352). A further study demonstrated that HER2 positivity was significantly associated with the presence of lymphovascular invasion and was an independent prognostic factor for both urothelial carcinoma recurrence and mortality. This suggests that HER2 status can be used to identify patients at high risk of disease progression who may benefit from adjuvant HER2-targeted therapy after radical cystectomy (PUBMED:20089105). In another study, HER2/neu status was significantly related to tumor stage, lymph node involvement, and the grade of the disease. HER2 overexpression was identified as an independent predictor for disease-related survival in patients with muscle-invasive bladder cancer treated with radical cystectomy (PUBMED:17899426). Additionally, one study found that higher HER2 (ERBB2) expression was associated with an unfavorable prognosis in patients with urothelial carcinoma of the bladder. However, it was not an independent predictor when measured on mRNA levels (PUBMED:28750448). Overall, these studies suggest that HER2/neu expression can provide valuable prognostic information in patients with advanced urothelial carcinoma, although its role as an independent prognostic factor may vary depending on the study and the treatment context.
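Several of these abstracts (for example PUBMED:20089105 and PUBMED:22277196) report adjusted hazard ratios from multivariable Cox proportional hazards models. As a rough illustration of how such an adjusted hazard ratio for HER2 status is obtained, here is a minimal sketch using the lifelines library on synthetic data; the cohort, covariates, and effect sizes are invented for illustration and are not the published data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200

# Synthetic cohort loosely mimicking a cystectomy series: HER2 status plus
# the kinds of covariates the multivariable models above adjust for.
df = pd.DataFrame({
    "her2_pos":   rng.integers(0, 2, n),
    "stage_high": rng.integers(0, 2, n),
    "lvi":        rng.integers(0, 2, n),
    "node_pos":   rng.integers(0, 2, n),
})

# Simulate recurrence times where HER2 positivity roughly doubles the hazard.
baseline = rng.exponential(60, n)
time = baseline / np.exp(0.7 * df["her2_pos"] + 0.5 * df["node_pos"])
df["months"] = np.minimum(time, 120)          # administrative censoring at 10 years
df["recurred"] = (time <= 120).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
print(cph.hazard_ratios_)  # adjusted HR for her2_pos should be near exp(0.7) ≈ 2
```

The point of the model is that the HER2 hazard ratio is estimated while holding the other covariates fixed, which is what allows the abstracts to call HER2 an "independent" predictor.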
Instruction: Can an inexperienced observer accurately plot disc contours using Heidelberg retinal Tomograph? Abstracts: abstract_id: PUBMED:24862770 Can an inexperienced observer accurately plot disc contours using Heidelberg retinal Tomograph? Objective: To examine the reliability of inexperienced observers in plotting optic disc contours on Heidelberg retinal tomography images before and after training. Design: Observational study. Participants: One hundred eyes randomly selected from the Singapore Indian Eye Study. Methods: Both eyes of subjects were imaged with the Heidelberg Retina Tomograph 3 (HRT-3; Heidelberg Engineering, Heidelberg, Germany). Optic disc contours were plotted on the same images by 2 new observers on 2 separate occasions, before and after a 2-hour standardized training session on the skills and tools available to accurately identify and delineate optic disc contours. These plottings were compared with those of an experienced, trained glaucoma expert (gold standard). Agreement and variability were analyzed by interclass correlation tests and Bland-Altman plots. Results: A total of 182 images (18 excluded because of poor quality) from 89 Indian subjects were included. The mean age was 53.27 ± 7.25 years and 54.8% were male. There was moderate-to-high agreement between the pretraining results of both new observers and the experienced observer's results (interclass correlation values ranging from 0.76 to 0.99). The interclass correlation improved for all the HRT-3 parameters after the 2 new observers were adequately trained. Comparing the interclass correlation values before and after training, the differences for mean retinal nerve fibre layer thickness for Observer 1 and all the HRT-3 parameters for Observer 2 were statistically significant. Conclusions: This study shows that it is easy to train a new inexperienced observer to plot optic disc contours on HRT images, which translates into improved and acceptable interobserver variability and agreement. abstract_id: PUBMED:20922037 Optic disc measurements using the Heidelberg Retina Tomograph in amblyopia. Purpose: To investigate the characteristics of optic disc parameters in amblyopic eyes in which retinal involvement is uncertain. Methods: A total of 44 patients with a history of unilateral amblyopia (27 patients with persistent amblyopia and 17 patients with resolved amblyopia) were examined using the Heidelberg Retina Tomograph (HRT) II. Parameters examined included disc area, cup area, cup volume, rim area, rim volume, cup-to-disc area ratio, and mean retinal nerve fiber layer thickness. Results: In patients with persistent amblyopia, the amblyopic eyes were significantly more hyperopic than the fellow eyes. Among the HRT parameters, there were no significant differences between the amblyopic and fellow eyes. In addition, after adjusting for refraction, the presence of strabismus, and the disc area, there was no significant difference in any HRT parameter between the amblyopic eyes of patients with persistent amblyopia and the previously amblyopic eyes of patients with resolved amblyopia. Conclusions: We did not find any strong evidence of optic disc deformity in amblyopic eyes. abstract_id: PUBMED:9044977 Effect of the contour line on cup surface using the Heidelberg Retina Tomograph. Background: The significance of topometric follow-up examinations of the optic nerve head in glaucomatous eyes depends on the reproducibility of the calculated parameters.
Since the definition of the standard reference plane in software version 1.11 of the Heidelberg Retina Tomograph has been changed, intrapapillary parameters depend directly on the position of the contour line in the sector between -10 and -4 degrees, and therefore on observer variability in determining the disc border. We evaluated intra- and interobserver variability and present a simple approach to increase reproducibility. Method: The disc borders of 4 glaucomatous eyes, 3 ocular hypertensive eyes, and 3 eyes of healthy subjects were traced by two observers, 5 times using the free-draw mode and 5 times by the addition of contour-line circles. Results: We found a median variability of the mean disc radius in the sector -10 to -4 degrees of 51 microns, which defines the position of the standard reference plane, resulting in a median variability of the position of the standard reference plane of 33 microns, which caused a variability of 81 microns² in the cup area. Addition of contour-line circles smoothing the final contour line along the border of the optic disc decreased the coefficient of variation of the standard reference plane by 3.76% (6.76% vs. 3.0%), of the cup area by 2.34% (3.87% vs. 1.53%), and of the rim volume by 3.41% (9.75% vs. 6.34%). Conclusion: The calculation of the cup area using software version 1.11 of the Heidelberg Retina Tomograph depends on observer variability. The addition of contour-line circles to define the final contour line along the disc border increases reproducibility. However, for follow-up topometric examinations of the optic nerve head, the software-supported transfer mode should be used. Comparing topometric data of an individual optic disc over follow-up presupposes the same definition of the contour line. Therefore, topometric data evaluated using software version 1.10 or earlier needs to be recalculated. abstract_id: PUBMED:26799143 Comparison of Heidelberg Retina Tomograph with disc-macula distance to disc diameter ratio in diagnosing optic nerve hypoplasia. Purpose: To evaluate whether the Heidelberg Retinal Tomograph (HRT) is a valid test for diagnosing congenital optic nerve hypoplasia (CONH) compared to the ratio of the distance between the centre of the optic disc and the centre of the macula to the mean optic disc diameter (DM:DD ratio). Furthermore, to determine the optimal cut-off value of HRT disc area to differentiate a hypoplastic disc from a normal optic disc. Methods: A total of 33 subjects with CONH (4-67 years old) and 160 normal subjects (5-65 years old) were recruited and underwent comprehensive eye examinations, fundus photography and HRT. Receiver operating characteristic curves for DM:DD ratio and HRT disc area were constructed based on data from the 46 CONH eyes and 160 control eyes. Results: Mean (±S.D.) HRT disc area was 1.94 (±0.54) mm² for the control eyes and 0.84 (±0.35) mm² for the CONH eyes (p < 0.0001). The area under the curve (AUC) for DM:DD ratio was 0.83 (95% confidence interval: 0.76-0.90). The AUC for HRT disc area was 0.96 (95% confidence interval: 0.94-0.99). A statistically significant difference was found between the AUC for HRT disc area and that for DM:DD ratio (p = 0.0004). The optimal cut-off value for HRT disc area was 1.42 mm² with 95% sensitivity and 85% specificity. The optimal cut-off value for DM:DD ratio was 3.20 with 78% sensitivity and 78% specificity. Conclusions: Both HRT and the DM:DD ratio are valid tests to aid diagnosis of CONH.
HRT is superior to DM:DD ratio in diagnosing CONH, with higher sensitivity and specificity. We suggest an optimal cut-off value for HRT disc area of 1.42 mm² in order to discriminate a hypoplastic disc from a normal optic disc. abstract_id: PUBMED:23601764 Retinal nerve fibre layer imaging: comparison of Cirrus optical coherence tomography and Heidelberg retinal tomograph 3. Background: The purpose of this study was to analyze the relationship between retinal nerve fibre layer thickness measured by spectral domain optical coherence tomography and confocal scanning laser ophthalmoscope. Design: Prospective, cross-sectional study. Hospital setting. Participants: One hundred seventy-three subjects (85 glaucoma and 88 normal subjects). Methods: One eye from each individual was selected randomly for imaging by the spectral domain Cirrus optical coherence tomography and Heidelberg retinal tomograph 3. Main Outcome Measures: Global thickness and measurements in the four quadrants around the optic disc. Results: Measurements as determined by Heidelberg retinal tomograph 3 were significantly larger than measurements done by Cirrus optical coherence tomography (respectively, in μm: global thickness, 200.0 ± 87.2 and 80.7 ± 14.7; temporal quadrant, 75.3 ± 31.9 and 59.1 ± 13.8; superior quadrant, 223.2 ± 128.4 and 97.7 ± 20.9; nasal quadrant, 208.0 ± 102.9 and 66.8 ± 11.8; and inferior quadrant, 224.4 ± 116.9 and 99.1 ± 26.6; all P < 0.01). Significant correlation was found for all measurements (P ≤ 0.009), but a pattern of proportional bias was demonstrated. The agreement of categorical classification (within normal limits, borderline or outside normal limits) ranged between poor and fair. Conclusions: The thickness measurements by the two technologies are strongly correlated but significantly different. The differences are substantial and proportional to the retinal nerve fibre layer thickness. The normative diagnostic classification of the two technologies may not agree. The results preclude interchangeable use of these measurements in clinical practice. abstract_id: PUBMED:27413492 Effect of Photorefractive Keratectomy on Optic Nerve Head Topography and Retinal Nerve Fiber Layer Thickness Measured by Heidelberg Retina Tomograph 3. Purpose: To determine whether photorefractive keratectomy (PRK) has a significant effect on optic nerve head (ONH) parameters and peripapillary retinal nerve fiber layer (RNFL) thickness measured by the Heidelberg Retina Tomograph 3 (Heidelberg Engineering GmbH, Heidelberg, Germany) in eyes with low to moderate myopia. Methods: This prospective, interventional case series included 43 consecutive myopic eyes assessed on the day of PRK and 3 months postoperatively using the HRT3. Among the stereometric parameters, we compared disc area, linear cup disc ratio, cup shape measure, global rim area, global rim volume, RNFL height variation contour and mean RNFL thickness; within the Glaucoma Probability Score (GPS), we assessed changes in global value, rim steepness temporal/superior and temporal/inferior, and cup size and cup depth before and after PRK. Results: Mean refractive errors before and after PRK were -3.24 ± 1.31 and -0.20 ± 0.42 diopters, respectively.
No significant change occurred in disc area, linear cup disc ratio, cup shape measure, rim area, or rim volume among the stereometric parameters, nor in rim steepness temporal/superior and temporal/inferior in the GPS, before and after PRK using the default average keratometry. However, RNFL height variation contour, mean RNFL thickness, and cup size and depth were significantly altered after PRK (P < 0.05). Conclusion: PRK can affect some HRT3 parameters, although the most important stereometric parameters for differentiating normal, suspect, or glaucomatous eyes, such as rim and cup measurements, were unchanged. abstract_id: PUBMED:19668399 False negative results in glaucoma detection with Heidelberg Retina Tomograph II. Purpose: To evaluate the rate of false negative results with the Heidelberg Retina Tomograph (HRT II) in a glaucoma practice. Design: Cross-sectional study. Methods: We analyzed the HRTs taken between October 2002 and October 2003 in our glaucoma clinic, and selected the patients who had a good-quality image (SD < 40 μm) with a normal Moorfields Regression Analysis (MRA). A masked independent observer classified those patients as normal, glaucoma suspect, or glaucomatous on the basis of optic disc stereo photos (ODP) and at least 2 consecutive reliable automated perimetries. The diagnosis of glaucoma was based on a glaucomatous optic disc with a congruent, reproducible visual field defect. Results: Four hundred and fifty patients who had undergone an HRT examination were analyzed. One hundred and nine patients had an HRT classified as normal on the MRA and a good-quality image. Fifteen of those 109 patients (13.7%) were classified as glaucomatous on the basis of an abnormal ODP with a corresponding visual field defect. Seven patients (6.4%) were classified as glaucoma suspects. Conclusion: Fourteen percent of patients with glaucoma remained undetected when the HRT II Moorfields regression analysis was used as the sole means to detect glaucoma. abstract_id: PUBMED:10464730 Interobserver agreement of Heidelberg retina tomograph parameters. Purpose: To measure the interobserver agreement of Heidelberg Retina Tomograph (HRT; Heidelberg Engineering, Heidelberg, Germany) parameters as a result of different observers' contour line placement. Methods: The optic nerve heads of 50 patients with glaucoma were imaged with the HRT. Five observers traced each disc margin with a contour line. Each observer was masked to the contour line tracings of the other observers, and there was no formal discussion as to where to place the contour line. The following stereometric parameters were calculated for each image for each observer: disc area, mean height of contour, cup shape, rim volume using the standard reference plane from software version 1.11, rim volume using a reference plane of 320 microns below the retinal plane, and volume above curved surface. Agreement between the five observers was tested for each parameter using intraclass correlation coefficients (ICCs). Results: Interobserver agreement between the five observers was substantial for both rim volumes (ICC = 0.73) and for disc area (ICC = 0.67). Agreement was almost perfect for mean height of contour (ICC = 0.94), cup shape (ICC = 0.92), and volume above curved surface (ICC = 0.83). Conclusion: The interobserver agreement for the HRT parameters was substantial to almost perfect, indicating that the HRT results as defined by the five observers were interchangeable.
abstract_id: PUBMED:24716836 Optic nerve head assessment: comparison of Cirrus optic coherence tomography and Heidelberg Retinal Tomograph 3. Background: The purpose of this study was to analyse the relationship between optic nerve head (ONH) parameters measured by spectral domain optical coherence tomography and confocal scanning laser ophthalmoscope. Design: Prospective, cross-sectional study. Hospital setting. Participants: One hundred seventy-three subjects (85 glaucoma and 88 normal subjects). Methods: One eye from each individual was selected randomly for ONH imaging by the spectral domain Cirrus OCT and Heidelberg Retinal Tomograph 3 (HRT3). Main Outcome Measures: Four ONH parameters that are measured by both technologies (average cup-to-disc ratio [CDR], rim area, disc area and cup volume) were analysed and compared for differences, agreement of the categorical classification, diagnostic sensitivities and specificities, and the area under the receiver operating characteristic curves (AUC). Results: ONH parameters, as determined by the two technologies, were significantly different but strongly correlated. Proportional bias was demonstrated for all measurements. The agreement of categorical classification was excellent for CDR (κ = 0.94) and good for rim area and cup volume (κ = 0.63 and 0.71, respectively). The highest sensitivities at fixed specificities were achieved by Cirrus OCT. AUCs for CDR, rim area, disc area and cup volume were not significantly different between the two technologies. Conclusions: The diagnostic capability of ONH measurements by both technologies is similar. Paired ONH measurements by Cirrus OCT and HRT3 are strongly correlated but significantly different and proportionally biased. The results preclude interchangeable use of the absolute values, but categorical classification of ONH parameters may be interchangeable in clinical practice. abstract_id: PUBMED:9469546 Comparison of optic disc measurements by Heidelberg Retina Tomograph and manual planimetric techniques. The inner edge of the scleral ring is used as a reference plane for manual, computer-assisted optic disc measurements. As it is possible to identify this level with the Heidelberg Retina Tomograph, our aim was to compare manual measurements to those taken with the Heidelberg Retina Tomograph. The material of this pilot study consisted of 12 eyes of 12 patients with early glaucomatous optic disc, retinal nerve fiber layer and/or visual field abnormalities. Measurements of the optic disc and neuroretinal rim area were obtained using our manual planimetric techniques and the Heidelberg Retina Tomograph. The mean optic disc area measured smaller with the Heidelberg Retina Tomograph than with the manual techniques (average difference 0.13 mm²; p < 0.05). The neuroretinal rim area measurements and cup-to-disc area ratio using manual and Heidelberg Retina Tomograph techniques did not differ significantly. With the scleral ring as a reference plane, comparable optic disc measurements can be obtained with manual and laser techniques. Answer: Yes, an inexperienced observer can accurately plot disc contours using the Heidelberg Retina Tomograph (HRT) after receiving adequate training. A study examined the reliability of inexperienced observers in plotting optic disc contours on HRT images before and after they underwent a 2-hour standardized training session.
The study found that there was moderate-to-high agreement between the pretraining results of the new observers and those of an experienced observer. After training, the intraclass correlation improved for all HRT-3 parameters, indicating that training led to improved and acceptable interobserver variability and agreement. The differences in mean retinal nerve fibre layer thickness for one observer, and in all HRT-3 parameters for the second observer, before and after training were statistically significant, demonstrating that training can effectively increase the accuracy of inexperienced observers in plotting optic disc contours on HRT images (PUBMED:24862770).
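A note on the agreement statistic cited above: the abstracts do not state which form of the intraclass correlation coefficient (ICC) was used, but a common one-way random-effects form for k observers is ICC = (MS_B − MS_W) / (MS_B + (k − 1) · MS_W), where MS_B and MS_W are the between-subject and within-subject mean squares from a one-way ANOVA (k = 5 observers in PUBMED:10464730). On the customary Landis-Koch benchmarks, 0.61-0.80 is read as "substantial" and 0.81-1.00 as "almost perfect" agreement, which matches the wording of that abstract; the exact ICC form used in these studies is an assumption here.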
Instruction: Are tablets a practical source of protein substitute in phenylketonuria? Abstracts: abstract_id: PUBMED:12651759 Are tablets a practical source of protein substitute in phenylketonuria? Background: A phenylalanine-free amino acid based protein substitute is necessary to provide the major source of protein in phenylketonuria (PKU). Protein substitutes in PKU are usually given as drinks. These are unpalatable and compliance is often poor. Tablets containing a suitable mixture of phenylalanine-free amino acids (Aminogran Food Supplement, UCB) are now available. Aims: To compare the effectiveness and acceptability of these tablets with conventional protein substitute drinks. Methods: Twenty one subjects with PKU, aged 8-25 years, participated in a randomised crossover study. During one phase, subjects received at least 40% of their protein substitute requirements from the amino acid tablets and the rest from their usual protein substitute. During the other phase, they received their usual protein substitute. Each period lasted 12 weeks. Blood phenylalanine concentrations were measured at least once every two weeks and other plasma amino acids were measured at the beginning, at crossover, and at the end of the study. The subjects kept a diary of all protein substitute taken. Results: Compliance appeared to be better with the new tablets than with patients' usual protein substitutes. Ninety per cent (18/20) recorded that they took the tablets as prescribed, compared with 65% (13/20) fully compliant with their usual protein substitute. Moreover, plasma phenylalanine was lower on the amino acid tablets, and the median difference in blood concentrations between the two groups was 46 μmol/l (95% CI 14.8 to 89.0, p = 0.02). Tyrosine increased by a median of 16 μmol/l daily on the amino acid tablets (95% CI 7.1 to 40.5, p = 0.01). Most subjects (70%) preferred incorporating the new tablets into their usual protein substitute regimen. Conclusions: Amino acid tablets are an effective and relatively popular protein substitute in older children, teenagers, and adults with PKU. abstract_id: PUBMED:24724767 Randomized controlled trial of a protein substitute with prolonged release on the protein status of children with phenylketonuria. Objective: To examine whether a phenylalanine-free protein substitute with prolonged release may be beneficial to the protein status of children with phenylketonuria (PKU) compared to conventional substitutes. Methods: Sixty children with PKU, 7 to 16 years of age, were randomly allocated to receive either a prolonged-release (test) or the current conventional protein substitute for 30 days. Subjects were additionally sex and age matched with 60 subjects with mild hyperphenylalaninemia and 60 unaffected subjects. The protein status in children with PKU was assessed by albumin, transthyretin, and retinol-binding protein (RBP), and changes throughout the trial period were the primary outcome measures. Results: Children with PKU did not differ in anthropometry from children with mild hyperphenylalaninemia or unaffected children but they ingested lower amounts of protein (p < 0.01). No differences occurred throughout the trial between or within children with PKU who received the test or conventional substitute for macronutrient intake. Albumin and RBP concentrations were within the age-specific reference range for all children.
The rate of protein insufficiency (transthyretin concentration less than 20 mg/dL) did not differ statistically between children receiving test or conventional substitute (recruitment 51.8% vs 53.6%; end of the trial 44.4% vs 50.0%), but mean transthyretin recovered over 20 mg/dL in children who received the test substitute, increasing from 19.1 to 20.7 mg/dL (mean change, 1.6 mg/dL; 95% confidence interval 0.4 to 2.8 mg/dL). In children receiving the conventional substitute, mean transthyretin changed from 19.0 to 19.2 mg/dL (mean change, 0.2 mg/dL; 95% confidence interval −0.2 to 0.6 mg/dL). Conclusions: Protein substitutes with prolonged release might be beneficial to protein status in children with phenylketonuria. abstract_id: PUBMED:37365866 Introducing a granule based protein substitute to the diet of a child with phenylketonuria to address reluctance to ingest phenylalanine-free protein substitute: A case report. Phenylalanine (Phe)-free protein substitutes are used within the management of phenylketonuria (PKU). However, adherence to the Phe-restricted diet is often challenging. A child (age 4.5 years) with PKU rejected the Phe-free protein substitutes used within her therapeutic diet, causing stress for herself and her family at mealtimes. Switching to a new Phe-free protein substitute that can be mixed into other foods [PKU GOLIKE® (3-16)] provided an alternative strategy that was acceptable to the child. Good control of blood Phe was maintained. Newer Phe-free protein substitutes may provide a strategy for maintaining the therapeutic diet for PKU where the patient has difficulty doing so on standard substitutes. Here, the use of a Phe-free protein substitute with improved palatability and ease of use supported maintenance of the Phe-restricted diet for a child with PKU who struggled to maintain the diet on standard substitutes. abstract_id: PUBMED:32899129 An Observational Study Evaluating the Introduction of a Prolonged-Release Protein Substitute to the Dietary Management of Children with Phenylketonuria. Dietary restriction of phenylalanine combined with a protein substitute prevents intellectual disability in patients with phenylketonuria (PKU). However, current protein substitutes are associated with low adherence owing to unpalatability and burdensome administration regimens. This prospective, observational acceptability study in children with PKU assessed the use of a prolonged-release protein substitute designed with an ethyl cellulose and arginate coating that masks the bitter taste and smell and reduces the osmolarity of the free amino acids. The study product was mixed with the subject's food or drink and replaced ≥1 dose per day of the subject's usual protein substitute for 7 days. Seven of 13 subjects were able to take their prescribed dose over the 7 day period. Most subjects mixed the test protein substitute with food or fruit juice. Reduced blood phenylalanine levels (n = 5) and an improved phenylalanine/tyrosine ratio (n = 4) were recorded from baseline to Day 7. Four subjects reported fewer gastrointestinal symptoms compared to baseline. There were no cases of diarrhoea, constipation, bloating, nausea or vomiting. No adverse reactions were reported. In conclusion, the novel prolonged-release protein substitute was taken in a different way to a typical protein substitute and enabled satisfactory blood phenylalanine control. The study product was well tolerated; subjects experienced fewer gastrointestinal symptoms than with their previous treatment.
Although the results of this pilot study provide reassuring data, longer-term studies evaluating adherence and blood phenylalanine control are necessary. abstract_id: PUBMED:36303225 Transitioning of protein substitutes in patients with phenylketonuria: evaluation of current practice. Background: In children with phenylketonuria (PKU), transitioning protein substitutes at the appropriate developmental age is essential to help with their long-term acceptance and ease of administration. We assessed the parental experiences in transitioning from a second stage to third stage liquid or powdered protein substitute in patients with PKU. Results: Sixteen interviews (23 open-ended questions) were carried out with parents/caregivers of children with PKU (8 females, 50%) with a median age of 8 years (range 5-11 years), continuously treated with diet, and on a third stage protein substitute. Parents/caregivers identified common facilitators and barriers during the third stage protein substitute transition process. The main facilitators were: child and parent motivation, parent knowledge of the transition process, a role model with PKU, low volume and easy preparation of the third stage protein substitute (liquid/powder), anticipation of increasing child independence, lower parent workload, attractive packaging, better taste and smell, school and teacher support, dietetic plans and guidance, PKU social events, child educational materials and written resources. The main barriers were child aversion to new protein substitutes, poor child behaviour, child aged > 5 years, parental fear of change, the necessity for parental time and persistence, loss of parental control, high product volume, different taste, smell, and texture of new protein substitutes, and peer bullying. Conclusion: A stepwise, supportive approach is necessary when transitioning from second to third stage protein substitutes in PKU. Future studies are needed to develop guidance to assist parents/caregivers, health professionals, and teachers during the transition process. abstract_id: PUBMED:23628728 Maternal phenylketonuria. Elevated maternal phenylalanine levels during pregnancy are teratogenic, and may result in embryo-foetopathy, which could lead to stillbirth, significant psychomotor handicaps and birth defects. This foetal damage is known as maternal phenylketonuria. Women of childbearing age with all forms of phenylketonuria, including mild variants such as hyperphenylalaninaemia, should receive detailed counselling regarding their risks for adverse foetal effects, optimally before contemplating pregnancy. The most assured way to prevent maternal phenylketonuria is to maintain the maternal phenylalanine levels within the optimal range already before conception and throughout the whole pregnancy. The authors review the comprehensive programme for the prevention of maternal phenylketonuria at the Metabolic Center of Budapest, survey the practical approach to continuous maternal metabolic control, and outline the outcomes of pregnancies in mothers with phenylketonuria from the introduction of newborn screening to the present. abstract_id: PUBMED:33807079 Protein Substitute Requirements of Patients with Phenylketonuria on BH4 Treatment: A Systematic Review and Meta-Analysis. The traditional treatment for phenylketonuria (PKU) is a phenylalanine (Phe)-restricted diet, supplemented with a Phe-free/low-Phe protein substitute.
Pharmaceutical treatment with synthetic tetrahydrobiopterin (BH4), an enzyme cofactor, allows a patient subgroup to relax their diet. However, dietary protocols guiding the adjustments of protein equivalent intake from protein substitute with BH4 treatment are lacking. We systematically reviewed protein substitute usage with long-term BH4 therapy. Electronic databases were searched for articles published between January 2000 and March 2020. Eighteen studies (306 PKU patients) were eligible. Meta-analyses demonstrated a significant increase in Phe and natural protein intakes and a significant decrease in protein equivalent intake from protein substitute with cofactor therapy. Protein substitute could be discontinued in 51% of responsive patients, but was still required in 49%, despite improvement in Phe tolerance. Normal growth was maintained, but micronutrient deficiency was observed with BH4 treatment. A systematic protocol to increase natural protein intake while reducing protein substitute dose should be followed to ensure protein and micronutrient requirements are met and sustained. We propose recommendations to guide healthcare professionals when adjusting dietary prescriptions of PKU patients on BH4. Studies investigating new therapeutic options in PKU should systematically collect data on protein substitute and natural protein intakes, as well as other nutritional factors. abstract_id: PUBMED:28940742 Fifteen years of using a second stage protein substitute for weaning in phenylketonuria: a retrospective study. Background: In phenylketonuria (PKU), during weaning, it is necessary to introduce a second stage phenylalanine (Phe)-free protein substitute (PS) to help meet non-Phe protein requirements. Semi-solid weaning Phe-free PS have been available for >15 years, although no long-term studies have reported their efficacy. Methods: Retrospective data from 31 children with PKU who commenced a weaning PS were collected from clinical records from age of weaning to 2 years, on: gender; birth order; weaning age; anthropometry; blood Phe levels; age commenced and dosage of weaning PS and Phe-free infant L-amino acid formula; natural protein intake; and issues with administration of PS or food. Results: Median commencement age for weaning was 17 weeks (range 12-25 weeks) and, for weaning PS, 20 weeks (range 13-37 weeks). Median natural protein was 4 g/day (range 3-11 g/day) and total protein intake was >2 g/kg/day from weaning to 2 years of age. Children started on 2-4 g/day protein equivalent (5-10 g/day of powder) from weaning PS, increasing by 0.2 g/kg/day (2 g/day) monthly to 12 months of age. Teething and illness adversely affected the administration of weaning PS and the acceptance of solid foods. Altogether, 32% of children had delayed introduction of more textured foods, associated with birth order (firstborn 80% versus 38%; P = 0.05) and food refusal when teething (80% versus 29%; P = 0.02). Conclusions: Timing of introduction of solid foods and weaning PS, progression onto more textured foods and consistent feeding routines were important in aiding their acceptance. Any negative behaviour with weaning PS was mainly associated with food refusal, teething and illness. abstract_id: PUBMED:18843667 Protein substitute for children and adults with phenylketonuria. Background: Phenylketonuria is an inherited metabolic disorder characterised by an absence or deficiency of the enzyme phenylalanine hydroxylase.
The aim of treatment is to lower blood phenylalanine concentrations to the recommended therapeutic range to prevent developmental delay and support normal growth. Current treatment consists of a low-phenylalanine diet in combination with a protein substitute which is free from or low in phenylalanine. Guidance regarding the use, dosage, and distribution of dosage of the protein substitute over a 24-hour period is unclear, and there is variation in recommendations among treatment centres. Objectives: To assess the benefits and adverse effects of protein substitute, its dosage, and distribution of dose in children and adults with phenylketonuria who are adhering to a low-phenylalanine diet. Search Strategy: We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register which consists of references identified from comprehensive electronic database searches and hand searches of relevant journals and abstract books of conference proceedings. We also contacted manufacturers of the phenylalanine-free and low-phenylalanine protein substitutes for any data from published and unpublished randomised controlled trials. Date of the most recent search of the Group's Trials Register: April 2008. Selection Criteria: All randomised or quasi-randomised controlled trials comparing: any dose of protein substitute with no protein substitute; an alternative dosage; or the same dose, but given as frequent small doses throughout the day compared with the same total daily dose given as larger boluses less frequently. Data Collection And Analysis: Both authors independently extracted data and assessed trial quality. Main Results: Three trials (69 participants) are included in this review. One trial investigated the use of protein substitute in 16 participants, while a further two trials investigated the dosage of protein substitute in a total of 53 participants. Due to issues with data presentation in each trial, described in full in the review, formal statistical analyses of the data were impossible. Investigators are being contacted for further information. Authors' Conclusions: No conclusions could be drawn about the short- or long-term use of protein substitute in phenylketonuria due to the lack of adequate or analysable trial data. Additional data and randomised controlled trials are needed to investigate the use of protein substitute in phenylketonuria. Until further evidence is available, current practice in the use of protein substitute should continue to be monitored with care. abstract_id: PUBMED:25723866 Protein substitute for children and adults with phenylketonuria. Background: Phenylketonuria is an inherited metabolic disorder characterised by an absence or deficiency of the enzyme phenylalanine hydroxylase. The aim of treatment is to lower blood phenylalanine concentrations to the recommended therapeutic range to prevent developmental delay and support normal growth. Current treatment consists of a low-phenylalanine diet in combination with a protein substitute which is free from or low in phenylalanine. Guidance regarding the use, dosage, and distribution of dosage of the protein substitute over a 24-hour period is unclear, and there is variation in recommendations among treatment centres. This is an update of a Cochrane review first published in 2005, and previously updated in 2008.
Objectives: To assess the benefits and adverse effects of protein substitute, its dosage, and distribution of dose in children and adults with phenylketonuria who are adhering to a low-phenylalanine diet. Search Methods: We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register which consists of references identified from comprehensive electronic database searches and hand searches of relevant journals and abstract books of conference proceedings. We also contacted manufacturers of the phenylalanine-free and low-phenylalanine protein substitutes for any data from published and unpublished randomised controlled trials. Date of the most recent search of the Group's Inborn Errors of Metabolism Trials Register: 03 April 2014. Selection Criteria: All randomised or quasi-randomised controlled trials comparing: any dose of protein substitute with no protein substitute; an alternative dosage; or the same dose, but given as frequent small doses throughout the day compared with the same total daily dose given as larger boluses less frequently. Data Collection And Analysis: Both authors independently extracted data and assessed trial quality. Main Results: Three trials (69 participants) are included in this review. One trial investigated the use of protein substitute in 16 participants, while a further two trials investigated the dosage of protein substitute in a total of 53 participants. Due to issues with data presentation in each trial, described in full in the review, formal statistical analyses of the data were impossible. Investigators will be contacted for further information. Authors' Conclusions: No conclusions could be drawn about the short- or long-term use of protein substitute in phenylketonuria due to the lack of adequate or analysable trial data. Additional data and randomised controlled trials are needed to investigate the use of protein substitute in phenylketonuria. Until further evidence is available, current practice in the use of protein substitute should continue to be monitored with care. Answer: Yes, tablets appear to be a practical source of protein substitute in phenylketonuria (PKU). A study comparing the effectiveness and acceptability of amino acid tablets with conventional protein substitute drinks found that compliance was better with the new tablets than with patients' usual protein substitutes. Ninety percent of participants recorded that they took the tablets as prescribed, compared with 65% compliance with their usual protein substitute. Additionally, plasma phenylalanine levels were lower on the amino acid tablets, and most subjects (70%) preferred incorporating the new tablets into their usual protein substitute regimen (PUBMED:12651759). This suggests that amino acid tablets are an effective and relatively popular protein substitute in older children, teenagers, and adults with PKU.
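A brief arithmetic note on the weaning doses reported in PUBMED:28940742, using only the figures quoted there: 2-4 g/day of protein equivalent supplied by 5-10 g/day of powder implies a powder that is roughly 40% protein equivalent by weight (2/5 = 4/10 = 0.4), and the stated monthly increment of 0.2 g/kg/day matches the bracketed 2 g/day for a body weight of about 10 kg (0.2 × 10 = 2 g/day). The 10 kg weight is an illustrative inference, not a value given in the abstract.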
Instruction: Does the presence of von Willebrand factor in FVIII-deficient plasma influence the measurement of FVIII inhibitor titres in haemophilia A patients? Abstracts: abstract_id: PUBMED:24815078 Does the presence of von Willebrand factor in FVIII-deficient plasma influence the measurement of FVIII inhibitor titres in haemophilia A patients? Introduction: Reliable measurement of FVIII inhibitor is critical in the follow-up of haemophilia A patients. We performed a multicentre study to evaluate whether the presence of von Willebrand factor (VWF) in FVIII-deficient plasma (FVIII-DP) influences FVIII inhibitor titres. Methods: Six French haematology laboratories participated in this study. Three samples with varying FVIII inhibitor titres (1, 5 and 15 BU/mL) and one sample without any detectable FVIII inhibitor were tested using four different procedures for FVIII inhibitor assay. The Nijmegen method and a modified assay with imidazole were performed using FVIII-DP with and without VWF in the control mixture and as substrate plasma in the FVIII one-stage assay (OSA). Each mixture (reference and test) was incubated for two hours at 37 °C with buffered normal pool plasma. Results: Higher inhibitor titres were measured in 5 and 15 BU/mL samples when assays were performed with the Nijmegen method and FVIII-DP without VWF. When samples were diluted in imidazole buffer, similar inhibitor titres, close to expected values, were measured whether VWF was present in the FVIII-DP or not. The data obtained were also more accurate when residual FVIII activity levels between 40% and 60% were used to calculate inhibitor titres, despite linear type I reaction kinetics. Conclusion: These results support the hypothesis that reliable FVIII inhibitor titres can be measured without the use of FVIII-DP containing VWF when an imidazole-modified assay is used. abstract_id: PUBMED:24745722 Inhibitors in patients with haemophilia A. Inhibitor development is the most problematic and costly complication of haemophilia treatment. Inhibitor development depends on a complex multifactorial immune response that is influenced by patient- and treatment-related factors. Considerable research is focussed on inhibitor development as well as the mechanism of eradication through immune tolerance induction (ITI). Once an inhibitor develops, two general treatment options are available: to treat acute bleeds through bypassing agents, and to eradicate the inhibitor permanently through ITI. Previously untreated haemophilia A patients (PUPs) are at greatest risk of inhibitor development within the first 20 exposure days to factor VIII (FVIII). Inhibitor incidence in PUP studies ranges from 0% to as high as 52%. Plasma-derived FVIII concentrates have repeatedly been shown in cohort studies to be associated with a decreased inhibitor risk compared with recombinant FVIII concentrates, but results from randomized clinical trials are lacking, although one such trial is ongoing (SIPPET study). The occurrence of an inhibitor represents a major hardship for the patient and his family, and can result in high morbidity and a significant reduction in quality of life. Inhibitor eradication often requires demanding and expensive treatment strategies aimed at inducing immune tolerance or bypassing the inhibitor. The role of von Willebrand factor (VWF) in immunoprotection is currently under review.
The high-purity, pasteurized, plasma-derived FVIII concentrate, Beriate®, contains sufficient amounts of VWF to not only bind all FVIII molecules but also provide additional FVIII binding sites, and may have additional beneficial effects that reduce the general immunogenicity of FVIII. abstract_id: PUBMED:29902361 The Japanese Immune Tolerance Induction (J-ITI) study in haemophilia patients with inhibitor: Outcomes and successful predictors of ITI treatment. Introduction: Immune tolerance induction (ITI) is the primary therapeutic approach to eradicate inhibitors in haemophilia patients. Several large ITI registries have been reported, but predictors of successful ITI outcome are still debated. No reports are available on large ITI studies in non-Caucasian countries. Aim: We designed a retrospective cohort study of ITI in Japanese haemophilia patients with inhibitor. Methods: Retrospective data were collected from 155 haemophilia A (HA; 140 severe-type) and 7 haemophilia B (HB; 7 severe-type) patients treated at 45 institutions. ITI outcome was centrally reviewed. We defined "success" as undetectable inhibitor after 2 consecutive measurements. Results: The ITI success rate was 71.2% for HA and 83.3% for HB. Cumulative success rates for HA reached 50% and 75% at 0.7 and 2 years after treatment, respectively. Significant predictors of success in HA were low-responding inhibitors compared to high-responding inhibitors, shorter time to the start of ITI, and lower historical and treatment peak titres of inhibitor. Dose regimen (high dose, ≥90 IU/kg every day; low dose, ≤75 IU/kg 3 days/week) and the type of therapeutic product did not affect outcomes. The success rate of salvage ITI using von Willebrand factor-containing factor VIII was 50% (n = 6/12), and patient age at the start of salvage ITI was a significant predictor. The inhibitor recurred in 6 HA cases (3.9%). Conclusion: The results provided potentially important information for improving future success rates for ITI in inhibitor patients. abstract_id: PUBMED:27528280 The burden of inhibitors in haemophilia patients. The burden of disease in haemophilia patients has wide-ranging implications for the family and for society. There is evidence that having a current inhibitor increases the risk of morbidity and mortality. Morbidity is increased by the inability to treat adequately and its consequent disabilities, which equates to a poorer quality of life compared with non-inhibitor patients. The societal cost of care, or 'burden of inhibitors', increases with the ongoing presence of an inhibitor. Therefore, it is clear that successful eradication of inhibitors by immune tolerance induction (ITI) is the single most important milestone one can achieve in an inhibitor patient. The type of factor VIII (FVIII) product used in ITI regimens varies worldwide. Despite ongoing debate, there is in vitro and retrospective clinical evidence to support the use of plasma-derived VWF-containing FVIII concentrates in ITI regimens in order to achieve early and high inhibitor eradication success rates. abstract_id: PUBMED:34797008 Low-dose immune tolerance induction therapy in children of Arab descent with severe haemophilia A, high inhibitor titres and poor prognostic factors for immune tolerance induction treatment success. Introduction: Immune Tolerance Induction (ITI) is the first-choice therapy to eradicate Factor VIII (FVIII) neutralizing antibodies in patients with haemophilia A (HA). There are limited published data on ITI from East Mediterranean countries.
Aim: To assess the effectiveness of a low-dose ITI regimen to eradicate FVIII neutralizing antibodies in children with severe HA and high-titre inhibitors. Methods: A prospective, single-arm study was conducted in children with HA (FVIII < 1 IU/dl), high-titre inhibitors and poor prognostic factors for successful ITI. Patients were treated with ∼50 IU/kg plasma-derived FVIII containing von Willebrand factor (pdFVIII/VWF) concentrate (Koate-DVI, Grifols) three times a week. Time to achieve tolerance, total and partial success were analysed after ITI. Annual bleeding rate (ABR), number of target joints, FVIII recovery and school absence were compared before and after ITI. Results: Twenty patients with median (range) age of 6.2 (3-12) years and pre-ITI inhibitor titre of 36.5 (12-169) BU were enrolled. ITI lasted ≤12 months (early tolerization) in 45% of patients. Median follow-up was 12 months (3-22) and the total response rate was 80% (60% total success; 20% partial success). Patients with two and three poor prognosis factors achieved overall success rates of 60% and 50%, respectively. ABR, target joints and school absence were reduced after ITI by 60%, 50% and 44.1%, respectively. In successfully tolerized patients, FVIII recovery was 90% (range 60-100%). Conclusion: A low-dose ITI therapy using a pdFVIII/VWF concentrate achieved at least partial tolerance in 80% of patients, and reduced annual bleeds in children with high inhibitor titres and at least one poor prognosis factor for ITI treatment success. abstract_id: PUBMED:27214015 Variation in factor VIII inhibitor reactivity with different commercial factor VIII preparations. During treatment of a haemophilia A patient with a high-responding inhibitor against factor VIII coagulant activity (VIII:C), we observed a difference in recovery of VIII:C depending upon which factor concentrate was infused. Inhibitor plasma samples or IgG fraction from seven patients were tested against a panel of seven different commercially available factor VIII concentrates, of which five were plasma-derived and two recombinant. In two of the plasma samples, inhibitor titres manifested a wide range of values depending upon which concentrate was used in the test system. Thus, inhibitor neutralization was less and VIII:C recovery greater when factor VIII concentrates containing large amounts of von Willebrand factor were used than when highly purified concentrates containing no von Willebrand factor or only trace amounts were used. In both of these patients the inhibitor was directed against the light chain of factor VIII, and it is possible that the epitope of the light chain with which the inhibitor reacts is partly blocked by the von Willebrand factor. We conclude that inhibitors may differ in their reactivity with factor VIII molecules contained in clotting factor concentrates, and that there is factor VIII epitope variation between different concentrates. These findings have implications for the selection of concentrates for the treatment of inhibitor patients, and the haemostatic effect may be improved if a concentrate giving the lowest inhibitor titre is chosen. Thus, in vitro testing of inhibitor reactivity with a panel of concentrates is recommended when treatment of inhibitor patients with factor VIII concentrates is considered. abstract_id: PUBMED:27878207 Plasma-derived versus recombinant factor concentrates in PUPs: a never ending debate? Inhibitor development in haemophilia is a serious complication of treatment with factor concentrates.
Since the advent of purer products, especially those developed using recombinant DNA technology, some studies have shown an increased incidence of inhibitors in previously untreated patients (PUPs) receiving recombinant products, whereas plasma-derived concentrates have sometimes been claimed to have a protective role, probably due to the content of von Willebrand factor (VWF). In fact, experiments indicate that VWF may block uptake of factor VIII into macrophages for further processing to the immune system. Also, a competition between VWF and inhibitor binding to the C2 domain of factor VIII has been suggested. Recently, large cohort and surveillance studies have created a vigorous debate about the role of product class in inhibitor development, as results have been conflicting. The only randomised prospective study, the SIPPET study, was published in 2016, and substantiated previous reports claiming that plasma-derived concentrates give rise to fewer inhibitors in patients with severe haemophilia A not previously exposed to factor VIII. The debate will continue. abstract_id: PUBMED:35654086 Plasma-derived FVIII/VWF complex shows higher protection against inhibitors than isolated FVIII after infusion in haemophilic patients: A translational study. Introduction: Presence of von Willebrand factor (VWF) in FVIII concentrates offers protection against neutralizing inhibitors in haemophilia A (HA). Whether this protection is more evident in plasma-derived (pd) FVIII/VWF or recombinant (r) FVIII concentrates remains controversial. Aim: We investigated the protection exerted by VWF against FVIII inhibitors in an in vivo mouse model of HA exposed to pdFVIII/VWF or to various rFVIII concentrates. Methods: Haemophilia A mice received the different FVIII concentrates after administration of vehicle or an inhibitory IgG purified from a commercial pool of HA plasma with inhibitors, and FVIII:C recoveries were measured. Furthermore, using a novel clinically oriented ex vivo approach, Bethesda inhibitory activities (BU) of a commercial pool of HA plasma with inhibitors were assessed using normal plasma, or plasma from severe HA patients without inhibitors, after treatment with the same concentrates. Results: In vivo studies showed that pdFVIII/VWF offers markedly higher protection against inhibitors when compared with any of the FVIII products without VWF. More importantly, in the ex vivo studies, plasma from patients treated with pdFVIII/VWF showed higher protection against inhibitors (P values ranging from .05 to .001) in comparison with that observed in plasma from patients who received FVIII products without VWF, regardless of the type of product evaluated. Conclusion: Data indicate that FVIII+VWF complexes assembled in the circulation after rFVIII infusion are not equivalent to the naturally formed complex in pdFVIII/VWF. Therefore, rFVIII infused into HA patients with inhibitors would be less protected by VWF than the FVIII in pdFVIII/VWF concentrates. abstract_id: PUBMED:11776311 The type of factor VIII deficient plasma used influences the performance of the Nijmegen modification of the Bethesda assay for factor VIII inhibitors. We have investigated the influence of the type of factor VIII-deficient plasma used on the assay results of the Nijmegen modification of the Bethesda method for factor VIII inhibitors.
Immunodepleted factor VIII-deficient plasmas, which lack von Willebrand factor in addition to factor VIII, gave decreased inhibitor titres compared with assay results obtained with factor VIII-deficient plasmas containing von Willebrand factor, suggesting that the latter is needed in the test system for the stability of factor VIII:C. Moreover, the performance of the assay with immunodepleted plasma was compromised in one type of this plasma by contamination with a factor VIII:C inhibitor. Chemically depleted factor VIII-deficient plasma appeared to give falsely elevated titres when used in combination with other types of deficient plasmas as substrate plasma in the factor VIII:C assay, due to the presence of activated factor Va in the preparation. Suggestions for addressing these limitations in order to obtain reliable results are described. abstract_id: PUBMED:29314439 Low incidence of factor VIII inhibitors in previously untreated patients with severe haemophilia A treated with octanate®: Final report from a prospective study. Introduction: Octanate® is a human, plasma-derived, von Willebrand factor-stabilized coagulation factor VIII (FVIII) concentrate with demonstrated haemostatic efficacy in previously treated patients with haemophilia A. Aim: This prospective, open-label study aimed to assess the immunogenicity of octanate® in previously untreated patients (PUPs). Methods: The study monitored development of FVIII inhibitors in 51 PUPs. Tolerability, viral safety, FVIII recovery and efficacy of octanate® for the prevention and treatment of bleeds and in surgical procedures were also assessed. Results: Five (9.8%) of the 51 patients developed inhibitors during the study, 4 of which (7.8%) were high titre. Three inhibitor cases (5.9%) were considered clinically relevant; 2 were transient inhibitors that disappeared during regular octanate® treatment without a change in dose or treatment frequency. Amongst 45 patients with FVIII:C <1% at baseline and who received ≥20 exposure days (EDs) or had <20 EDs but developed an inhibitor, inhibitor incidence was 11.1% (6.7% clinically relevant). All clinically relevant inhibitors developed within 20 EDs of on-demand treatment. No inhibitors developed in PUPs receiving prophylaxis. All patients who developed inhibitors had either intron 22 inversions or large deletions. Irrespective of the reason for administration, haemostatic efficacy was rated as "excellent" in 99.6% of all infusions (4700 of 4717 infusions), and no complications were reported in 23 surgical procedures. Mean incremental in vivo recovery was 2.0%/IU/kg (±0.7) and 1.9%/IU/kg (±0.5) for the first and second assessments, respectively. Tolerability was rated "very good" in 99.9% of infusions. Conclusion: In PUPs with severe haemophilia A, octanate® demonstrated haemostatic efficacy with a low rate of inhibitor development. Answer: Yes, the presence of von Willebrand factor (VWF) in factor VIII (FVIII)-deficient plasma does influence the measurement of FVIII inhibitor titres in haemophilia A patients. A multicentre study evaluated whether the presence of VWF in FVIII-deficient plasma (FVIII-DP) affects FVIII inhibitor titres and found that higher inhibitor titres were measured in samples when assays were performed with the Nijmegen method and FVIII-DP without VWF.
This suggests that reliable FVIII inhibitor titres can be measured without the use of FVIII-DP containing VWF when an imidazole-modified assay is used (PUBMED:24815078). Additionally, other studies have indicated that the type of FVIII product used in inhibitor assays can affect the measurement of inhibitor titres. Inhibitor reactivity can vary with different commercial FVIII preparations, and concentrates containing large amounts of VWF may result in lower inhibitor titres and greater VIII:C recovery compared to highly purified concentrates without VWF (PUBMED:27214015). This is because VWF may block the uptake of FVIII into macrophages for further processing to the immune system and compete with inhibitor binding to the C2 domain of FVIII (PUBMED:27878207). Furthermore, plasma-derived FVIII/VWF complex has been shown to offer higher protection against inhibitors than isolated FVIII after infusion in haemophilic patients (PUBMED:35654086). In conclusion, the presence of VWF in FVIII-DP can influence the measurement of FVIII inhibitor titres, and the use of plasma-derived FVIII/VWF concentrates may provide a more accurate assessment of inhibitor levels in haemophilia A patients.
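As background to the titres discussed throughout this entry (not spelled out in the abstracts themselves): by the classical Bethesda definition, 1 BU/mL is the amount of inhibitor that neutralizes 50% of the FVIII activity in the incubation mixture, so the titre is calculated from the residual FVIII activity RA (in %) as BU/mL = log2(100/RA) = (2 − log10 RA)/log10 2. For example, RA = 50% gives 1 BU/mL and RA = 25% gives 2 BU/mL, which is why residual activities near the middle of the measurable range, such as the 40-60% window cited in PUBMED:24815078, yield the most accurate titres, with more concentrated inhibitors measured after predilution.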
Instruction: Should Immunomodulation Therapy Alter the Surgical Management in Patients With Rectovaginal Fistula and Crohn's Disease? Abstracts: abstract_id: PUBMED:27270520 Should Immunomodulation Therapy Alter the Surgical Management in Patients With Rectovaginal Fistula and Crohn's Disease? Background: Rectovaginal fistula in Crohn's disease is challenging for both healthcare providers and patients. The impact of immunomodulation therapy on healing after surgery is unclear. Objective: The purpose of this study was to examine whether immunomodulation therapy impacts healing after surgery for rectovaginal fistula in Crohn's disease. Design: This was a retrospective analysis with a follow-up telephone survey. Settings: The study was conducted at two major tertiary referral centers. Patients: All of the patients who underwent rectovaginal fistula repair from 1997 to 2013 at our centers were included. Main Outcome Measures: A χ² test and logistic regression analysis were used to study treatment outcomes according to type of procedure, recent use of immunosuppressives, and number of previous attempted repairs. Age, BMI, smoking, comorbidities, previous vaginal delivery/obstetric injury, use of probiotics, diverting stoma, and use of seton were also analyzed. Results: A total of 120 (62%) patients were contacted, and 99 (51%) of them agreed to participate in the study. Mean follow-up after surgical repair was 39 months. Procedures included advancement flap (n = 59), transvaginal repair (n = 14), muscle interposition (n = 14), episioproctotomy (n = 6), sphincteroplasty (n = 3), and other (n = 3); overall, 63% of patients experienced healing. Sixty-eight patients underwent recent immunomodulation therapy, but this did not show a statistically significant effect on outcome after surgical repair. In the subset of patients with fistula related to obstetric injury, a 74% (n = 26) healing rate after surgical repair was observed. Age, BMI, diabetes mellitus, use of steroids, probiotics, seton before repair, fecal diversion, and number of repairs did not affect healing. Limitations: This was a retrospective analysis; the high-volume tertiary referral inflammatory bowel disease centers studied may not be reflective of rectovaginal fistula presentation, treatment, or results in all patients, and the 3-year follow-up may not be sufficiently long. Conclusions: Despite a relatively low success rate (63%) in healing after surgical repair of a rectovaginal fistula, the recent use of immunomodulation therapy did not negatively impact healing. However, tissue interposition techniques had the highest success rates. abstract_id: PUBMED:23177070 Surgical management of Crohn's disease. Although medical management can control symptoms in a recurring incurable disease, such as Crohn's disease, surgical management is reserved for disease complications or those problems refractory to medical management. In this article, we cover general principles for the surgical management of Crohn's disease, ranging from skin tags, abscesses, fistulae, and stenoses to small bowel and extraintestinal disease. abstract_id: PUBMED:7973923 Current surgical management of inflammatory bowel disease. When surgery is required for complications of inflammatory bowel disease (IBD) or for failure of medical management, numerous options exist. This review focuses on surgical alternatives, technical considerations, and complications for both routine and unusual problems associated with IBD.
Restorative proctocolectomy for chronic ulcerative colitis, intestine-sparing procedures for Crohn's disease, and the management of Crohn's disease in difficult anatomic sites or with unusual complications are discussed. abstract_id: PUBMED:24011379 Surgical repair of rectovaginal fistulas in patients with Crohn's disease. Objectives: To report the surgical outcomes of patients with a history of Crohn's disease who underwent rectovaginal fistula (RVF) repair utilizing several reconstructive techniques. Study Design: Retrospective case series of women (n=6) with Crohn's disease surgically treated with either vaginal or rectal advancement flaps. Demographic information and data specific to Crohn's disease at the time of surgery were collected. In addition, operative reports and postoperative follow-up visits were reviewed. Results: During the study period, six women with the diagnosis of Crohn's disease and RVF underwent surgical management. Five patients had a vaginal advancement flap (VAF) performed by Female Pelvic Medicine and Reconstructive Surgery, and one patient was treated with a rectal advancement flap (RAF) by Colorectal Surgery. The failure rate in our study population was 33% (2/6). Of note, two of the patients who had a successful VAF had a previous failure after RAF. In addition, four patients who had a repair via the transvaginal approach had a concomitant pedicled flap procedure (i.e. Martius or gracilis flap). The average follow-up for all our patients was 5 months (±6.5 months). No patients failed if they received a VAF with a concomitant flap procedure. Conclusions: This case series illustrates several techniques utilized for the repair of RVF in patients with Crohn's disease. The use of a bulbocavernosus flap during the primary repair of RVF in this patient population may be considered to bolster the rectovaginal septum. abstract_id: PUBMED:30207932 Imaging and Surgical Management of Anorectal Vaginal Fistulas. Anorectal vaginal fistulas (ARVFs) can result in substantial morbidity and potentially embarrassing symptoms in adult women of all ages. Despite having what may be obvious clinical manifestations, the fistulas themselves can be difficult to identify with imaging. MRI is the modality of choice for the diagnosis and characterization of ARVFs. A dedicated protocol involving the use of vaginal gel and optimized imaging planes with respect to the vagina, as well as an understanding of the MRI pelvic floor anatomy, is crucial for reporting surgically relevant details. Ancillary findings such as postsurgical changes, inflammation, abscess, sphincter destruction, and neoplasm are well evaluated. Vaginography, contrast enema, endoscopic US, and CT can be highly useful complementary diagnostic examinations. The entities that result in ARVFs may be obstetric, inflammatory (e.g., Crohn disease and diverticulitis), neoplastic, iatrogenic, and/or radiation induced. Surgical management is heavily dependent on the cause and complexity of the fistulizing disease, which are related to the location of the fistula in the vagina, the type and extent of fistula branching, the number of fistulas, sphincter tears, inflammation, and abscess. abstract_id: PUBMED:1855419 Surgical repair of rectovaginal fistulas in patients with Crohn's disease: transvaginal approach. The surgical management of rectovaginal fistulas complicating Crohn's disease has been associated with unacceptably high failure rates.
We sought to modify the available surgical techniques to provide a solution to this challenging problem. Between December 1983 and January 1990, 14 patients with Crohn's disease underwent repair of a rectovaginal fistula. A modified transvaginal approach was employed by the authors. A diverting loop ileostomy was performed on all patients, either as the initial step in the staged management of intractable perianal disease or concurrent with the repair of the rectovaginal fistula. The fistula was completely eradicated in 13 of the 14 women and did not recur during the mean follow-up period of 55.0 months (range, 3-77 months). Intestinal continuity was reestablished in these 13 patients within 6 months after the initial fistula repair. One patient with a very low-lying fistula constituted our only failure. We have found the transvaginal method preferable to the transanal approach because of the relative ease in raising the vaginal flap as compared with a flap of fibrotic and inflamed anorectal mucosa. On the basis of this study, we conclude that a modified transvaginal approach is an effective method for repair of rectovaginal fistulas secondary to Crohn's disease. abstract_id: PUBMED:25400993 Contemporary surgical management of rectovaginal fistula in Crohn's disease. Rectovaginal fistula is a disastrous complication of Crohn's disease (CD) that is exceedingly difficult to treat. It is a disabling condition that negatively impacts a woman's quality of life. Successful management is possible only after accurate and complete assessment of the entire gastrointestinal tract has been performed. Current treatment algorithms range from observation to medical management to the need for surgical intervention. A wide variety of success rates have been reported for all management options. The choice of surgical repair methods depends on various fistula and patient characteristics. Before treatment is undertaken, establishing reasonable goals and expectations of therapy is essential for both the patient and surgeon. This article aims to highlight the various surgical techniques and their outcomes for repair of CD-associated rectovaginal fistula. abstract_id: PUBMED:28395390 Surgical Treatment of Rectovaginal Fistula in Crohn's Disease: A Tertiary Center Experience. Background: Rectovaginal fistula (RVF) is a disastrous complication of Crohn's disease (CD) that is exceedingly difficult to treat. It is a disabling condition that negatively impacts a woman's quality of life. Current treatment algorithms range from observation to medical management to the need for surgical intervention. A wide variety of success rates have been reported for all management options. The choice of surgical repair methods depends on various fistula and patient characteristics, and published success rates vary, with initial success around 50%, rising to 80% with repeated surgery. Several surgical, sphincter-sparing approaches have been described for the management of rectovaginal fistula, aimed at minimizing recurrence and preserving continence. Materials And Methods: A retrospective study was performed for RVF repair between 2008 and 2014 in our tertiary centre at the University Hospital of Tor Vergata, Italy. All the patients were affected by Crohn's disease and underwent surgery for an RVF performed by the same senior surgeon. All patients were prospectively evaluated. Results: All 43 patients who underwent surgery for RVF were affected by Crohn's disease. The median age was 43 years (range 21-53).
Five different surgical approaches were performed: drainage and seton, rectal advancement flap (RAF), vaginal advancement flap (VAF), transperineal approach using porcine dermal matrix (PDM), and Martius flap (MF). The median time to success was six months (range 2-11). None of the patients were lost during the 18 months of follow-up. The failure rate was 19%, and the healing rate was 81%. No demographic or disease-related factors were found to influence healing. Conclusion: The case series of this study supports the dogma that "there are no absolute rules when treating Crohn's fistula". There is no gold standard technique; however, it is mandatory to minimize recurrence with a sphincter-saving technique. Randomized trials are needed to find a standard surgical approach. abstract_id: PUBMED:28000189 Diagnosis and surgical treatment for rectovaginal fistula. Rectovaginal fistulas are distressing conditions for patients and present a therapeutic challenge to surgeons. Whether the etiology of the fistula is obstetric, Crohn's disease-related, or cryptoglandular, a thorough anatomic evaluation is required in order to select the correct repair. At present, no single surgical technique is suitable for all rectovaginal fistulas. Less invasive surgery should be selected for primary repair, and endorectal advancement flap repair is recommended as first-line therapy in most guidelines for the treatment of rectovaginal fistulas. Preoperative fecal diversion has not been shown consistently to lead to better outcomes; thus, most surgeons suggest that a diverting stoma is not imperative in the majority of patients unless tissue interposition is undertaken. Tissue interposition or transabdominal repair should be considered for multiply failed or recurrent complex rectovaginal fistulas. Familiarity with the various surgical techniques described and the ability to apply the appropriate surgery to the correct patients will increase the chance of a successful intervention. abstract_id: PUBMED:16735205 Surgical management of pouch-vaginal fistula after restorative proctocolectomy. Background: Pouch-vaginal fistula (PVF) is a devastating complication after restorative proctocolectomy with ileal pouch anal anastomosis (IPAA). The aim of this study was to evaluate the surgical management of PVF. Methods: After Institutional Review Board approval, all patients treated for PVF between 1988 and 2003 were retrospectively reviewed. Success of treatment was defined as the complete absence of symptoms or no radiologic evidence of fistula. Results: The study included 23 female patients; indications for IPAA were mucosal ulcerative colitis in 20 (87%), indeterminate colitis in 1 (4.3%), and familial adenomatous polyposis in 2 (8.7%) patients. Seven patients with mucosal ulcerative colitis were postoperatively diagnosed with Crohn's disease. Mean time interval from initial IPAA to development of symptomatic fistula was 17.2 months. Mean number of surgical treatments was 2.2. Overall, success was achieved in 17 (73.9%) patients at a mean followup of 52.3 months. Fistulas in patients with Crohn's disease occurred relatively late after IPAA (p = 0.015) and required a median of three (p = 0.001) surgical procedures, compared with patients without Crohn's disease. Pelvic sepsis after original IPAA occurred in eight (35.8%) patients, four (50%) of whom ultimately required pouch excision.
Conclusions: Fecal diversion and local procedures are effective in the majority of patients with PVF after IPAA. Patients with Crohn's disease tend to have a delayed onset of fistula occurrence and require more extensive surgical management. Pelvic sepsis can be a predictive factor of poor outcomes. Answer: The impact of immunomodulation therapy on the healing after surgery for rectovaginal fistula in Crohn's disease is not entirely clear. A retrospective analysis with a follow-up telephone survey conducted at two major tertiary referral centers included patients who underwent rectovaginal fistula repair from 1997 to 2013. The study found that recent use of immunomodulation therapy did not negatively impact healing after surgical repair of a rectovaginal fistula in Crohn's disease patients. The overall healing rate after surgical repair was 63%, and tissue interposition techniques had the highest success rates. The study concluded that despite the relatively low success rate, immunomodulation therapy should not alter the surgical management in patients with rectovaginal fistula and Crohn's disease (PUBMED:27270520). Other literature on the surgical management of Crohn's disease suggests that surgery is reserved for disease complications or problems refractory to medical management (PUBMED:23177070). Surgical alternatives, technical considerations, and complications for both routine and unusual problems associated with inflammatory bowel disease, including Crohn's disease, are diverse and numerous (PUBMED:7973923). Surgical outcomes of patients with Crohn's disease who underwent rectovaginal fistula repair using several reconstructive techniques have been reported, with varying success rates and techniques utilized (PUBMED:24011379, PUBMED:25400993, PUBMED:28395390, PUBMED:28000189, PUBMED:16735205). In conclusion, while immunomodulation therapy does not appear to negatively impact the healing outcomes after surgical repair of rectovaginal fistulas in Crohn's disease, the decision to alter surgical management should be based on a comprehensive assessment of the patient's condition, the complexity of the fistula, and the surgeon's experience with various surgical techniques. The literature suggests that no single surgical technique is suitable for all cases, and a tailored approach that considers the patient's specific circumstances and the characteristics of the fistula is essential for successful management (PUBMED:28000189).
Instruction: Acute retinopathia praematurorum. Is plasma prorenin level of prognostic value? Abstracts: abstract_id: PUBMED:11105545 Acute retinopathia praematurorum. Is plasma prorenin level of prognostic value? Background: In patients with diabetes mellitus an elevated level of plasma prorenin (PP) may be associated with proliferative diabetic retinopathy. Although retinopathy of prematurity (ROP) is also characterized by retinal vasoproliferation, no study on PP in ROP appears to have been carried out. This study investigated PP prospectively in preterm infants at high risk of ROP. Patients And Methods: In 304 preterm infants (gestational age 24-36 weeks, mean ± SD 29.8 ± 2.6 weeks; birth weight 570-1750 g, 1180 ± 294 g) PP was examined prospectively between 3 and 14 weeks postnatally. Renin and total renin (after cryoactivation) were determined by radioimmunoassay. Total renin minus renin is the PP level; PP was correlated with the presence of ROP, stage of ROP, gestational age, birth weight, and postnatal age. Results: There was no significant difference between mean PP in 112 preterm infants with ROP (682 ± 666 ng/l) and that in 192 preterm infants without ROP (622 ± 454 ng/l). There was no correlation between PP and the stage of ROP, gestational age, or birth weight. Mean PP decreased with increasing postnatal age (postnatal age 3-4 weeks: 906 ± 587 ng/l; 7-8 weeks: 585 ± 423 ng/l; 13-14 weeks: 326 ± 205 ng/l). Conclusion: This study found no significant difference in PP between preterm infants with and those without ROP. Thus PP is not a valid predictor or indicator of ROP. However, the study showed a hitherto unknown correlation between PP and postnatal age in preterm infants. abstract_id: PUBMED:35689092 Plasma and serum prorenin concentrations in diabetes, hypertension, and renal disease. Although the renin-angiotensin-aldosterone system plays a crucial role in fluid homeostasis and cardiovascular disease pathophysiology, measurements of plasma prorenin levels are still unavailable in clinical practice. We previously found that prorenin molecules in human blood underwent significant posttranslational modifications and were undetectable using immunological assays that utilized antibodies specifically recognizing unmodified recombinant prorenin. Using a sandwich enzyme-linked immunosorbent assay that captures posttranslationally modified prorenins with their prosegment antibodies, we measured plasma and serum prorenin concentrations in 219 patients with diabetes mellitus, hypertension and/or renal disease and compared them with those of 40 healthy controls. The measured values were not significantly different from those of the healthy controls and were 1,000- to 100,000-fold higher than previously reported levels determined using conventional assay kits. Multiple regression analyses showed that body weight, serum albumin levels, and serum creatinine levels negatively correlated with plasma prorenin levels, while the use of loop diuretics was associated with elevated plasma prorenin levels. Blood pressure, HbA1c, and plasma renin activity were not independent variables affecting plasma prorenin levels. In contrast, serum prorenin levels were unaffected by any of the above clinical parameters. The association of the plasma prorenin concentration with indices reflecting body fluid status suggests the need to scrutinize its role as a biomarker, while serum prorenins are less likely to have immediate diagnostic value.
abstract_id: PUBMED:33564180 Circulating prorenin: its molecular forms and plasma concentrations. The renin-angiotensin-aldosterone system plays pivotal roles in the maintenance of fluid homeostasis and in the pathophysiology of major human diseases. However, the molecular forms of plasma renin/prorenin have not been fully elucidated, and measurements of plasma prorenin levels are still unavailable for clinical practice. We attempted to evaluate the molecular forms of human plasma prorenin and to directly measure its concentration without converting it to renin to determine its activity. Polyacrylamide gel electrophoresis and subsequent immunoblotting using antibodies that specifically recognise prosegment sequences were used to analyse its molecular forms in plasma. We also created a sandwich enzyme-linked immunosorbent assay suitable for directly quantifying the plasma concentration. The plasma level in healthy people was 3.0-13.4 μg/mL, which is from 3 to 4 orders of magnitude higher than the levels reported thus far. Plasma immunoreactive prorenin consists of three major distinct components: a posttranslationally modified full-length protein, an albumin-bound form and a smaller protein truncated at the common C-terminal renin/prorenin portion. In contrast to plasma renin activity, plasma prorenin concentrations were not affected by the postural changes of the donor. Hence, plasma prorenin molecules may be posttranslationally modified/processed or bound to albumin and are present in far higher concentrations than previously thought. abstract_id: PUBMED:21234784 The association of plasma prorenin level with an oxidative stress marker, 8-OHdG, in nondiabetic hemodialysis patients. Background: Circulating prorenin contributes to the pathogenesis of tissue damage leading to cardiovascular disease (CVD) in hypertension and diabetes mellitus (DM) by activating the tissue renin-angiotensin-aldosterone system (RAAS); however, little is known about its roles in hemodialysis (HD) patients. Methods: We evaluated plasma prorenin level and prorenin receptor [(P)RR] expression in peripheral blood mononuclear cells (PBMCs) in 49 nondiabetic HD (non-DM-HD) patients. Then we investigated the association between plasma prorenin level or (P)RR expression level in PBMCs and CVD-predictive biomarkers. Results: The plasma prorenin level increased in non-DM-HD patients [147.1 ± 118.9 pg/ml (standard value <100 pg/ml)]. The (P)RR mRNA expression level in PBMCs also increased 1.41 ± 0.39-fold in non-DM-HD patients compared with that in healthy control subjects (p < 0.001). Although plasma prorenin level did not correlate with plasma BNP level and plasma high-sensitivity C-reactive protein level, it significantly correlated with plasma 8-hydroxydeoxyguanosine (8-OHdG) level (r = 0.535, p < 0.001). The plasma prorenin level did not correlate with plasma renin activity (PRA), plasma angiotensin I (AT I) level, plasma angiotensin II (AT II) level and plasma aldosterone (Ald) level. PRA, plasma AT I level, plasma AT II level and plasma Ald level did not correlate with the level of any CVD-predictive biomarker. (P)RR expression level in PBMCs did not correlate with the level of any CVD-predictive biomarker. Conclusions: The plasma prorenin level and (P)RR expression level in PBMCs increased, and the plasma prorenin level was associated with plasma 8-OHdG level independent of the circulating RAAS in non-DM-HD patients.
abstract_id: PUBMED:26167285 Higher plasma prorenin concentration plays a role in the development of coronary artery disease. Background: Prorenin and renin are both involved in atherosclerosis. However, the role of plasma prorenin and renin in the development and progression of coronary artery disease (CAD) is still not clear. Thus, we aimed to examine the relationships among plasma prorenin concentration, CAD and clinical parameters. Methods: We measured plasma prorenin and renin concentrations and other parameters in 85 patients who underwent coronary angiography. Patients were divided into a CAD group (≥75% stenosis in one or more coronary arteries) and a non-CAD group. Results: There was a weak correlation between prorenin and plasma renin concentration (r = 0.35, p = 0.001), and between prorenin and plasma renin activity (r = 0.34, p = 0.001). There was no significant difference in the plasma prorenin concentration between the CAD group and non-CAD group. However, patients with a high plasma prorenin concentration frequently suffered CAD. Receiver-operating-characteristic curve analysis showed that the optimal cutoff value of plasma prorenin concentration to detect CAD was 1,100 pg/ml, with a positive predictive value of 94% and a negative predictive value of 36%. Conclusion: The plasma prorenin concentration increases with increases in plasma renin concentration. Higher plasma prorenin concentration (>1,100 pg/ml) plays a role in the development of CAD. abstract_id: PUBMED:32605403 The prognostic value of plasma fibrinogen level in patients with acute myeloid leukemia: a systematic review and meta-analysis. Increasing evidence has revealed that plasma fibrinogen levels may serve as prognostic indicators in patients with acute myeloid leukemia (AML), yet the exact association is still elusive. We conducted a systematic review and meta-analysis of all available studies concerning the relationship between plasma fibrinogen level and survival in AML patients. The pooled hazard ratio (HR) and 95% confidence intervals (CIs) for overall survival (OS) were calculated to evaluate the effect. A random-effects model was applied and the robustness of the pooled results was confirmed by subgroup and sensitivity analysis. A total of 9 studies were eligible to assess the association between plasma fibrinogen level and prognosis in AML. Among these, 5 studies adopted OS as their outcome indicator and were selected for the final meta-analysis. The pooled result suggested that plasma fibrinogen level was significantly associated with increased mortality risk in AML patients (HR = 1.21, 95% CI: 1.01-1.44, p < 0.001, I² = 85.4%). In conclusion, high plasma fibrinogen level may independently predict worse OS in patients with AML. abstract_id: PUBMED:35141939 Prognostic value of pulmonary ultrasound score combined with plasma miR-21-3p expression in patients with acute lung injury. Purpose: The aim of this study was to explore the combined value of the lung ultrasound score (LUS) and plasma miR-21-3p expression in predicting the prognosis of patients with acute lung injury (ALI). Patients And Methods: A total of 136 ALI patients were divided into survival (n = 86) and death groups (n = 50), or into low/middle-risk (n = 77) and high-risk groups (n = 59) according to APACHE II scores. Bioinformatics was used to explore the mechanism of action of miR-21-3p in humans. Real-time fluorescent quantitative PCR was used to detect the expression of miR-21-3p in plasma, and LUS was recorded.
Receiver operating characteristic (ROC) curve analysis and Pearson correlation were also used. Results: The LUS and expression level of plasma miR-21-3p in the death and high-risk groups were significantly higher than those in the survival and low/middle-risk groups (p < 0.01 and p < 0.05). miR-21-3p expression leads to pulmonary fibrosis and promotes the deterioration of ALI patients by regulating fibroblast growth factor and other target genes. ROC curve analysis showed that the best cutoff values for LUS and plasma miR-21-3p expression were 18.60 points and 1.75, respectively. LUS score and miR-21-3p combined predicted the death of ALI patients with the largest area under the curve (0.907, 95% CI: 0.850-0.964), with sensitivity and specificity of 91.6% and 85.2%, respectively. The expression level of plasma miR-21-3p was positively correlated with LUS in the death group (r = 0.827, p < 0.01). Conclusion: LUS and expression level of miR-21-3p in plasma are correlated with the severity and prognosis of ALI patients, and their combination has a high value for the prognostic assessment of ALI patients. abstract_id: PUBMED:35535372 Prognostic Value of Estimated Plasma Volume Status in Patients With Sepsis. Background: In patients with sepsis, timely risk stratification is important to improve prognosis. Although several clinical scoring systems are currently being used to predict the outcome of sepsis, they all have certain limitations. The objective of this study was to evaluate the prognostic value of estimated plasma volume status (ePVS) in patients admitted to the intensive care unit (ICU) with sepsis or septic shock. Methods: This single-center, prospective observational study, included 100 patients admitted to the ICU with sepsis or septic shock. Informed consent, blood samples, and co-morbidity data were obtained from the patients on admission, and the severity of sepsis was recorded. The primary outcome was in-hospital mortality and multivariable logistic regression analysis was used to adjust for confounding factors to determine the significant prognostic factor. Results: The in-hospital mortality was 47%. The ePVS was correlated with the amount of total fluids administered 24 hours before the ICU admission. The mean ePVS in patients who died was higher than in those who survived (7.7 ± 2.1 dL/g vs. 6.6 ± 1.6 dL/g, P = 0.003). To evaluate the utility of ePVS in predicting in-hospital mortality, a receiver operating characteristic curve was produced. Sensitivity and specificity were optimal at a cut-off point of 7.09 dL/g, with an area under the curve of 0.655. In the multivariate analysis, higher ePVS was significantly associated with higher in-hospital mortality (adjusted odds ratio, 1.39; 95% confidence interval, 1.04-1.85, P = 0.028). The Kaplan-Meier curve showed that an ePVS value above 7.09 was associated with an increased risk of in-hospital mortality compared with the rest of the population (P = 0.004). Conclusion: The ePVS was correlated with the amount of intravenous fluid resuscitation and may be used as a simple and novel prognostic factor in patients with sepsis or septic shock who are admitted to the ICU. abstract_id: PUBMED:3294878 Sequential changes in plasma luteinizing hormone and plasma prorenin during the menstrual cycle. Prorenin, the enzymatically inactive biosynthetic precursor of renin, is secreted by the kidneys. However, the ovaries appear to be the source of the cyclical increase in plasma prorenin that occurs in the middle of the menstrual cycle.
In this study we examined the temporal relationship between changes in plasma prorenin and LH in normal women to determine whether ovarian prorenin secretion might be stimulated by LH. Blood was collected from nine normal women daily for 7 days in the midcycle period and from six of them every 8 h on 6 of these days. Time zero was taken as the highest plasma LH value. The initial rise in LH (-24 h) preceded the initial rise in prorenin (-8 h) and the LH peak preceded the prorenin peak by 8-16 h. These sequential increases in plasma LH and prorenin occurred in the presence of high plasma estradiol levels. While LH fell in parallel with estradiol, the prorenin peak was more sustained and plasma prorenin remained above baseline at 40 and 48 h, at a time when both estradiol and LH had reached a new basal level. These results suggest that gonadotropins stimulate ovarian prorenin release. The timing of the changes in plasma prorenin and its presence in high concentrations in ovarian follicular fluid suggest that prorenin may be involved in the process of ovulation. The results also suggest that changes in plasma prorenin may determine the activity of an ovarian renin system that functions independently of circulating active renin. abstract_id: PUBMED:34422339 Prognostic value of copeptin in patients with acute myocardial infarction treated with percutaneous coronary intervention: a prospective cohort study. Background: Ischemic myocardial injury leads to neurohormonal system activation and increased release of copeptin. Although diagnostic value of copeptin has been widely described, data on its prognostic performance in patients with myocardial infarction is inconclusive. The aim of this study was to asses if elevated copeptin concentration provides prognostic information for long-term adverse cardiac events in a cohort of first acute myocardial infarction patients treated with percutaneous coronary intervention. Methods: Copeptin concentration was assessed in a cohort of 100 consecutive patients (39% women; mean age 63±7 years) presenting with first acute myocardial infarction and subjected to percutaneous coronary intervention. Samples were collected at the time of admission and on the 4th/5th day of hospitalisation. All patients were followed-up prospectively for 12 months for the occurrence of major adverse cardiovascular events defined as reinfarction, unscheduled coronary revascularisation and all-cause death. Results: Elevated copeptin concentration on the 4th/5th day of hospitalisation was identified as a predictor of major adverse cardiovascular events (P=0.0445). The increase between copeptin level on admission and on day 4th/5th was associated with the requirement for unscheduled coronary revascularisation in receiver operating characteristics (ROC) analysis (AUC =0.639; 95% CI: 0.504-0.773; P=0.0430). In a multivariate analysis, copeptin concentration on the 4th/5th day of hospitalisation and left ventricular ejection fraction assessed by transthoracic echocardiography, were the only predictors for major adverse cardiac events during follow-up (P=0.024 and P=0.001, respectively). Conclusions: Copeptin seems to be a prognostic marker in patients with first myocardial infarction treated with percutaneous coronary intervention. Answer: The study on acute retinopathy of prematurity (ROP) and plasma prorenin (PP) levels in preterm infants found no significant difference in PP between preterm infants with and without ROP. 
The study included 304 preterm infants and examined PP prospectively between 3 and 14 weeks postnatally. The results showed no correlation between PP and the stage of ROP, gestational age, or birth weight. Although the study did observe a correlation between PP and postnatal age, with mean PP decreasing as postnatal age increased, it concluded that PP is not a valid predictor or indicator of ROP (PUBMED:11105545). Therefore, based on this study, plasma prorenin level is not of prognostic value for acute retinopathy of prematurity.
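As an illustrative aside, the group comparison reported in that answer can be re-derived from the summary statistics quoted in the abstract alone (ROP: n = 112, 682 ± 666 ng/l; no ROP: n = 192, 622 ± 454 ng/l). The sketch below is a minimal example, assuming a Welch-type two-sample t-test computed from summary statistics; the abstract does not name the exact test the authors used.

```python
# Minimal sketch: two-sample t-test from the summary statistics quoted in
# PUBMED:11105545. Welch's t-test (equal_var=False) is an assumption here,
# since the abstract does not state which test was applied.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=682, std1=666, nobs1=112,  # preterm infants with ROP
    mean2=622, std2=454, nobs2=192,  # preterm infants without ROP
    equal_var=False,                 # Welch's correction for unequal variances
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.2f}")
# Yields roughly t = 0.85, p = 0.40, consistent with the abstract's finding
# of no significant difference in mean PP between the two groups.
```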
Instruction: Do personality differences between teachers and learners impact students' evaluations of a surgery clerkship? Abstracts: abstract_id: PUBMED:21481802 Do personality differences between teachers and learners impact students' evaluations of a surgery clerkship? Objective: Historically, the surgery clerkship at the Indiana University School of Medicine (IUSM) has received poor evaluations from medical students, and the authors of this article hypothesized that this negative feedback may reflect, at least in part, inherent differences in the personality styles of the learners compared with those of the surgery teachers (faculty and residents). Differences between teachers and learners could impede effective communication and impact adversely students' perception of, and satisfaction with, the learning environment. The objective of this study was to compare the inherent personality styles of surgery teachers and medical students. Design: Using the Myers-Briggs Type Indicator (MBTI) to assess personality styles, we administered the instrument to 154 teachers in the surgery department and to 1395 medical students. Aggregate MBTI data for teachers and learners were analyzed based on four dichotomous scales. Chi square tests of independence were performed to examine the relationship between teachers and learners on the MBTI scales. Setting: The study was undertaken at IUSM, which has been engaged in a process of cultural change for over 10 years, in part to ensure that both the formal curriculum and the learning environment support the development of self-awareness and professionalism among our graduates. Results: We found that teachers were similar to learners on the Introversion/Extraversion scale and dissimilar from learners on the three remaining scales: Sensing/Intuition scale (p < 0.008), Thinking/Feeling scale (p < 0.000), and the Judging/Perceiving scale (p < 0.022). Conclusions: These results suggest that differences in personality styles may affect the teacher-learner interaction during the surgery clerkship and may influence negatively students' perception of the learning environment. abstract_id: PUBMED:24666987 Correlating surgical clerkship evaluations with performance on the National Board of Medical Examiners examination. Background: Evaluation of medical students during the surgical clerkship is controversial. Performance is often based on subjective scoring, whereas objective knowledge is based on written examinations. Whether these measures correspond or are relevant to assess student performance is unknown. We hypothesized that student evaluations correlate with performance on the National Board Of Medical Examiners (NBME) examination. Methods: Data were collected from the 2011-2012 academic year. Medical students underwent a ward evaluation using a seven-point Likert scale assessing six educational competencies. Students also undertook the NBME examination, where performance was recorded as a percentile score adjusted to national standards. Results: A total of 129 medical students were studied. Scores on the NBME ranged from the 52nd to the 96th percentile with an average in the 75th percentile (±9). Clerkship scores ranged from 3.2-7.0 with a mean of 5.7 (±0.8). There was a strong positive association between higher NBME scores and higher clerkship evaluations shown by a Pearson correlation coefficient of 0.47 (P<0.001). 
Students clustered with below average ward evaluations (3.0-4.0) were in the 69.5th percentile of NBME scores, whereas students clustered with above average ward evaluations (6.0-7.0) were in the 79.2nd percentile (P<0.001). Conclusions: A strong positive relationship exists between subjective ward evaluations and NBME performance. These data may afford some confidence in the ability of surgical faculty and residents to accurately evaluate medical students during clinical clerkships. Understanding factors in student performance may help in improving the surgical clerkship experience. abstract_id: PUBMED:33129771 Implicit Gender Bias in Third-Year Surgery Clerkship MSPE Narratives. Objective: To assess implicit gender bias in surgery clerkship evaluations of third-year medical students at a large, academic hospital in the Southeast. Methods: University of North Carolina at Chapel Hill School of Medicine has multiple branch campuses where students can complete their surgical clerkship, including 1 large academic center, 1 hybrid academic and community-based practice, and 3 community-based hospitals. All residents and faculty evaluations of medical students who completed the surgery clerkship from March 1, 2018 to February 28, 2019 were analyzed. Evaluations were anonymized and names and pronouns were removed to mitigate evaluator bias. A word dictionary, guided by previous literature, was created to categorize descriptive adjectives into 4 categories: ability, grindstone, standout, and personality traits. Adjectives used to describe students, and references to the student using gendered pronouns or gender-fair language, were coded and quantified as percentage of total evaluation word content. These percentages were compared between male and female students. A subsequent analysis was completed to assess the effects of gendered pronouns on linguistic patterns. Results: A total of 583 evaluations from the surgery clerkship were available for 183 students (51.9% female, 48.1% male). When gender-fair language was used, there was no difference in the adjectives used to describe female and male students. Male evaluators were more likely to use female gendered pronouns compared to male gendered pronouns (3.1% vs 2.3%, p = 0.028). When gendered pronouns were present, evaluations of female students were more likely to contain grindstone adjectives but less likely to contain standout terms compared to evaluations of male students (4.4% vs 2.8%, p = 0.006; 0.6% vs 1.3%, p = 0.006). Conclusion: For students who have completed their surgical clerkship, the language patterns in evaluations differ between female students and male students. When the female pronoun was used, narratives contained more grindstone adjectives and fewer standout adjectives. Our results are consistent with previous literature and may be a manifestation of "othering" or a compensatory means of describing female students. This is a potential manifestation of implicit gender bias. abstract_id: PUBMED:38123386 When I Don't see me, Am I seen? Race and student perception of the surgery clerkship. Introduction: Increasing interest in general surgery from students who are Under-Represented in Medicine (URiM) is imperative to advancing diversity, equity, and inclusion efforts. We examined third-year medical students' surgery clerkship evaluations quantitatively and qualitatively to understand the experiences of URiM and non-URiM learners at our institution.
Methods: Evaluations from 235 medical students who graduated between 2019 and 2021 were analyzed. T-tests were used to compare numerical data. Free-text comments were qualitatively analyzed using inductive thematic analysis by two independent reviewers, with conflicts resolved by a third. Results: Evaluations were completed by 214 non-URiM students (91.1%) and 21 (8.9%) URiM students. There were no significant differences between URiM and non-URiM students in ratings of faculty and resident teaching. When asked whether residents were positive role models for patient care, non-URiM students were more likely than URiM students to agree (3.284 vs. 2.864, p = 0.040). When asked whether they considered faculty to be positive role models, non-URiM students were also more likely to answer affirmatively than URiM students (3.394 vs. 2.909, p = 0.013). Qualitative comments were similar between the two groups. When asked what the strengths of the clerkship were, the most commonly evoked theme was "interactions with team," with subthemes of "team integration," "feeling valued," and positive "faculty" or "resident" interactions. "Operative experience" was the second most commonly evoked strength of the clerkship. The most common criticisms of the clerkship involved "negative interactions with team," with subthemes of "not prioritized above other learners" and "ignored." Negative "academic experience" was the next most commonly evoked weakness, with an affiliated theme of "lack of teaching." Conclusions: URiM students are less likely than non-URiM students to see surgical residents and faculty as positive role models. Integrating medical students into the team, taking time to teach, and allowing students to feel valued in their roles improves the clerkship experience for trainees and can contribute to recruitment efforts. abstract_id: PUBMED:36462056 The impact of preclinical clerkship in general surgery on medical students' attitude to a surgical career. Purpose: With the advent of a new program for postgraduate medical students in 2004, the number of applicants choosing surgical careers in Japan has been declining. We conducted this study to evaluate the impact of preclinical clerkship and how it affects students' attitudes toward a surgical career. Methods: The subjects of our study were fifth-year medical students who participated in a clinical clerkship in general surgery in our department between April 2021 and March 2022. We conducted pre- and post-preclinical clerkship surveys to assess the perceived image of surgeons and the impact of the clerkship on surgical career interest. Results: Among 132 medical students (77 men and 55 women) who rotated through the preclinical clerkship in our department, 125 participated in the survey and 66% expressed interest in a surgical career. In the post-clerkship survey, an increased interest in a surgical career was expressed by 79% of the students, notably including those who had initially expressed interest. Approximately 77% of students were satisfied with the practical skill training they received. Conclusion: Engaging medical students early in surgical experience through a preclinical clerkship for general surgery appears to promote their interest in a surgical career. abstract_id: PUBMED:9679473 Evaluation of students' learning in an interdisciplinary medicine-surgery clerkship.
Purpose: To evaluate the impact of an interdisciplinary medicine-surgery clerkship (created to foster generalist education) on students' performances on National Board of Medical Examiners' (NBME) subject examinations. Method: Test data for the 226 students who participated in the 16-week combined clerkship and for the 265 students who had completed the traditional clerkships (12 weeks of medicine, 12 weeks of surgery) were compiled and analyzed using t-tests for independent samples. Results: Mean scores on the NBME subject examination in medicine increased significantly after the combined medicine-surgery clerkship (from 433 to 455, p ≤ 0.05). Mean scores on the NBME subject examination in surgery were similar to those achieved in the traditional clerkship years. Conclusion: Since the medicine and surgery clerkships were combined into a single, interdisciplinary clerkship, students' scores have increased on the medicine NBME subject examination and have remained relatively unchanged on the surgery NBME subject examination, despite a substantial reduction in students' clinical experience in the combined clerkship from the traditional clerkships (16 vs 24 weeks). abstract_id: PUBMED:30217776 Team-Based Learning in the Surgery Clerkship: Impact on Student Examination Scores, Evaluations, and Perceptions. Objective: There is little evidence for the effectiveness of team-based learning (TBL) in specialties such as Surgery. We developed and instituted TBLs in the surgery clerkship and compared National Board of Medical Examiners (NBME) Surgery Subject Exam scores before and after implementation. We also analyzed students' feedback for their perception of TBLs. Design, Setting, And Participants: The TBLs were transitioned into the curriculum during the 2013-2014 academic year. The "before" and "after" implementation periods were 2011-2013 and 2014-2016, respectively. NBME Surgery Subject Examination scores at our institution and nationally were compared using the independent samples t test. Satisfaction with the clerkship was assessed with Association of American Medical Colleges Graduate Questionnaire data. Student feedback regarding TBL was gathered at the end of each surgery rotation and analyzed for themes, both positive and negative. Results: Mean NBME score was higher at our institution than nationally, both before (77.10 ± 8.75 vs. 75.20 ± 8.95, p = 0.032) and after (74.65 ± 8.0 vs. 73.10 ± 8.55, p = 0.071) TBL implementation. The mean score decreased following TBL implementation at our medical school (77.10 ± 8.75 vs. 74.65 ± 8.00, p = 0.039), but it was also lower nationally (75.20 ± 8.95 vs. 73.10 ± 8.55, p < 0.001). Further, students were more likely to rate the surgery clerkship as "good and/or excellent" on the Association of American Medical Colleges Graduate Questionnaire after TBL implementation (84.6% vs. 73.7%). In qualitative assessment, learners stated that TBLs were informative and helpful in studying for the shelf exam, viewed them as an opportunity for interactive learning, and thus requested more TBLs. Areas for improvement included reading materials, directions, and organization of sessions. Conclusions: Student perception of TBL in our surgery clerkship has been positive and has provided feedback for improvement. In addition, our medical school graduates have continued to assess their surgery experience as "good" or "excellent" by a large majority. Concurrently, our NBME scores remain above the national mean.
We believe our medical students benefit from a well-organized TBL and its active approach to learning during the surgery clerkship with no loss of fundamental surgery knowledge. abstract_id: PUBMED:26505109 How Do Surgery Students Use Written Language to Say What They See? A Framework to Understand Medical Students' Written Evaluations of Their Teachers. Background: There remains debate regarding the value of the written comments that medical students are traditionally asked to provide to evaluate the teaching they receive. The purpose of this study was to examine written teaching evaluations to understand how medical students conceptualize teachers' behaviors and performance. Method: All written comments collected from medical students about teachers in the two surgery clerkships at the University of Alberta in 2009-2010 and 2010-2011 were collated and anonymized. A grounded theory approach was used for analysis, with iterative reading and open coding to identify recurring themes. A framework capturing variations observed in the data was generated until data saturation was achieved. Domains and subdomains were named using an in situ coding approach. Results: The conceptual framework contained three main domains: "Physician as Teacher," "Physician as Person," and "Physician as Physician." Under "Physician as Teacher," students commented on specific acts of teaching and subjective perceptions of an educator's teaching values. Under the "Physician as Physician" domain, students commented on elements of their educator's physicianship, including communication and collaborative skills, medical expertise, professionalism, and role modeling. Under "Physician as Person," students commented on how both positive and negative personality traits impacted their learning. Conclusions: This framework describes how medical students perceive their teachers and how they use written language to attach meaning to the behaviors they observe. Such a framework can be used to help students provide more constructive feedback to teachers and to assist in faculty development efforts aimed at improving teaching performance. abstract_id: PUBMED:36208957 Medical students and mattering on the surgery clerkship rotation. Background: Mattering is a psychosocial construct that describes an individual's perception that they make a difference in the lives of others and that they are significant in the world. The purpose of this study was to explore the current perception of behaviors that impact mattering among third-year medical students on their surgery clerkship with the goal of improving the clerkship experience. Methods: A qualitative interview study was conducted during the 2019-2020 academic year. Medical students who had completed their surgery clerkship at a single institution volunteered to participate. Students' comments during interviews were analyzed using qualitative thematic analysis and categorized into the three primary domains of mattering: awareness, importance, and reliance. Results: Six medical student interviews were conducted and responses were coded for the three primary domains of mattering. Eight subthemes emerged highlighting positive observations that may influence student mattering on the surgery clerkship. Awareness behaviors included acknowledging the student's presence, maintaining eye contact, educators offering students their undivided attention, and getting to know the students as individuals. Importance behaviors included taking time to teach, setting expectations early, and providing timely feedback.
Reliance behaviors involve developing trust to match autonomy with experience and depending on students to provide unique information about patient care to the team. Conclusions: These findings can help educators recognize the words, actions, and behaviors that make medical students feel they matter on their surgery clerkship. Interventions should continue to focus on how to increase the sense of awareness, importance, and reliance for both the students and faculty. abstract_id: PUBMED:22461753 Creating a surgery clerkship in a changing environment: reality, simulation, and the rules of engagement. This review describes the current challenges associated with creating a successful surgical clerkship and the ways in which teacher-focused and curriculum-focused initiatives can address these challenges. The challenges are both systemic (reflected by changes in our health care system and training programs) and institutional (reflected by factors that affect curriculum design and faculty advancement). Particular attention is paid to residents as teachers, faculty as mentors, the educational impact of the operating room, and the role of simulation. Strategies for engaging students, residents, and faculty are explored. The premise and impact of a comprehensive simulation course on the clinical education of medical students are detailed. Emphasis is placed on the educational validity of accountability and engagement of both the teachers and the learners. Answer: Yes, personality differences between teachers and learners can impact students' evaluations of a surgery clerkship. A study conducted at the Indiana University School of Medicine (IUSM) used the Myers-Briggs Type Indicator (MBTI) to assess personality styles and found that teachers were similar to learners on the Introversion/Extraversion scale but dissimilar on the Sensing/Intuition scale, Thinking/Feeling scale, and Judging/Perceiving scale. These differences in personality styles may affect the teacher-learner interaction during the surgery clerkship and may negatively influence students' perception of the learning environment (PUBMED:21481802).
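The MBTI comparison summarized in that answer rests on chi-square tests of independence between teacher and learner distributions on each dichotomous scale. The abstract reports only group sizes (154 teachers, 1395 students) and p-values, not the underlying contingency tables, so the counts in the sketch below are hypothetical and serve only to illustrate the procedure.

```python
# Illustrative sketch of a chi-square test of independence, as used in
# PUBMED:21481802. The cell counts are hypothetical; only the group totals
# (154 teachers, 1395 students) come from the abstract.
from scipy.stats import chi2_contingency

# Rows: teachers, learners; columns: e.g., Thinking vs. Feeling preference.
table = [
    [110, 44],   # 154 surgery teachers (hypothetical split)
    [780, 615],  # 1395 medical students (hypothetical split)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value indicates that preference on the scale is not independent
# of group membership (teacher vs. learner), mirroring the reported result.
```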
Instruction: Is there correlation between cognition and functionality in severe dementia? Abstracts: abstract_id: PUBMED:25410450 Is there correlation between cognition and functionality in severe dementia? The value of a performance-based ecological assessment for Alzheimer's disease. Objective: Besides significant cognitive decline, patients in later stages of Alzheimer's disease (AD) also present global functional impairment, usually reported by their caregivers. This study searched for preserved activities of daily living by investigating correlations between specific instruments for severe dementia and a performance-based functional scale. Method: A sample of 95 moderate to severe AD patients and their caregivers underwent a neuropsychological battery consisting of screening tools, the Functional Assessment Staging Test (FAST), the Severe Mini-Mental State Examination (MMSEsev) and a performance-based ecological scale, the Performance Test of Activities of Daily Living (PADL). Results: Consistent findings emerged from the comparisons among tests. PADL showed significant statistical correlation with MMSEsev (p < 0.001), according to FAST subdivisions. Conclusion: Upon suspicion of unreliable caregiver reports, ecological scales may be useful for disease staging. Variable degrees of functionality and cognition may be present even in later stages of AD, requiring proper assessment. abstract_id: PUBMED:31647053 Modelling the impact of functionality, cognition, and mood state on awareness in people with Alzheimer's disease. Objectives: To investigate the nature of the relationship between cognitive function, mood state, and functionality in predicting awareness in a non-clinically depressed sample of participants with mild to moderate Alzheimer's disease (AD) in Brazil. Methods: People with AD (PwAD) aged 60 years or older were recruited from an outpatient unit at the Center of AD of the Federal University of Rio de Janeiro, Brazil. Measures of awareness of condition (Assessment Scale of the Psychosocial Impact of the Diagnosis of Dementia), cognitive function (Mini-Mental State Examination), mood state (Cornell Scale for Depression in Dementia), and functionality (Pfeffer Functional Activities Questionnaire) were applied to 264 people with mild to moderate AD and their caregivers. Hypotheses were tested statistically using an SEM approach. Three competing models were compared. Results: The first model, in which the influence of mood state and cognitive function on awareness was mediated by functionality, showed a very good fit to the data and a medium effect size. The competing models, in which the mediating variables were mood state and cognitive function, respectively, only showed poor model fit. Conclusion: Our model supports the notion that the relationship between different factors and awareness in AD is mediated by functionality and not by depressive mood state or cognitive level. The proposed direct and indirect effects on awareness are discussed, as well as the missing direct influence of mood state on awareness. The understanding of awareness in dementia is crucial and our model gives one possible explanation of its underlying structure in AD. abstract_id: PUBMED:30517240 Cognition, functionality and symptoms in patients under home palliative care. Objective: To evaluate the degree of cognition and functionality, the presence of symptoms, and the medications prescribed for patients under palliative home care.
Method: Descriptive, cross-sectional study in which 55 patients under palliative home care were interviewed. Cognition was evaluated using the Mini-Mental State Examination (MM), with patients being separated into two groups: preserved cognitive ability (MM>24) or altered (MM<24). Functionality was verified by the Palliative Performance Scale (PPS) and the patients were divided into two groups: PPS≤50 and PPS≥60. The presence of symptoms was evaluated by the ESAS (Edmonton Symptom Assessment System), with symptoms considered mild (ESAS 1-3), moderate (ESAS 4-6), or severe (ESAS 7-10). Medications prescribed to control the symptoms were registered. Statistical analysis used Student's t test (p <0.05). Results: Most of the 55 patients were women (63.6%), 70.9% of these had MM>24, 83.6% had PPS≤50 and 78.2% presented chronic non-neoplastic degenerative disease. There was a significant relationship between PPS≤50 and MM≤24. Symptoms were present in 98% of patients. Asthenia was the most frequently reported symptom and was not treated in 67% of the cases. Severe pain was present in 27.3%: 46% without medication and 13% with medication, if necessary. Most patients with severe dyspnea used oxygen. Conclusions: Most of the analysed patients had their cognition preserved, presented low functionality and 98% reported the presence of symptoms. Severe pain was present in almost 1/3 of the patients without effective treatment. Re-evaluation of palliative home care is suggested to optimize patients' quality of life. abstract_id: PUBMED:33439392 Diabetes and impaired fasting glucose in a population-based sample of individuals aged 75+ years: associations with cognition, major depressive disorder, functionality and quality of life-the Pietà study. Objectives: To investigate the rates of diabetes mellitus (DM) and impaired fasting glucose (IFG) in a population-based sample of individuals aged 75+ years old and their associations with cognitive performance, depression, functionality, and quality of life (QoL). Study Design: Overall, 350 people participated in the study. Assessments of cognition, mood, functionality and QoL were performed using the mini-mental state examination (MMSE), clock-drawing, category fluency tests, the Mini-International Neuropsychiatric Interview, Pfeffer's Functional Activities Questionnaire, and the WHO Quality of Life-Old (WHOQOL-OLD). Results: IFG (ADA criteria) was identified in 42.1% of the sample, while the DM rate was 24.1%. Lack of knowledge of the DM diagnosis and lack of treatment occurred in 27% and 39% of the sample, respectively. Rates of dementia and depression, MMSE, category fluency scores, and previous cardiovascular events did not differ between the glycaemic groups. Individuals with DM performed worse on the clock-drawing test, functionality, and WHOQOL-OLD than the other participants. Individuals with IFG presented similar QoL and functionality when compared with the group without DM. Conclusions: IFG and DM were common in this population-based sample aged 75+ years old, as were inadequate diagnoses and treatments of DM. DM individuals presented poor performance in the executive function test, functionality, and QoL. Further studies are recommended to investigate the value of an IFG diagnosis among the most elderly population.
Background: The prevalence of neurocognitive disorders, especially dementia, is rising due to an increase in longevity. Early detection and diagnosis of neurocognitive impairments are important for early interventions and appropriate management of reversible causes, especially by the primary health workers. This study aimed to determine the prevalence and associated factors of severe neurocognitive impairment among elderly persons attending a tertiary hospital in Uganda. Methods: This cross-sectional survey was conducted in a Ugandan hospital setting, where older adults go for treatment for their chronic health problems. Following the inclusion criteria, interviews were conducted, where information about socio-demographics was collected, whereas neurocognitive impairment and functionality were assessed by the Mini-Mental State Examination and the Barthel Index, respectively. Chi-square test, Pearson correlation test, and logistic regression were performed to determine the factors associated with severe neurocognitive impairment. Results: A total of 507 elderly persons aged 60 years and above were enrolled in this study (mean age 68.62 ± 7.95 years), and the prevalence of severe neurocognitive impairment was 28.01%. Advanced age, female gender, lower education level, and functional dependency were significantly associated with severe neurocognitive impairment. Conclusion: Severe neurocognitive impairment is prevalent among hospital-attending Ugandan elderly persons with functional dependency. This suggests a need to routinely screen for cognitive disorders among older persons who visit the healthcare facilities with other physical complaints, to enable early detection and treatment of reversible causes of neurocognitive impairment, such as depression and delirium, and thereby enable better functionality. abstract_id: PUBMED:29205252 Correlation between Cognition and Function across the Spectrum of Alzheimer's Disease. Background: Both cognitive and functional deterioration are characteristic of the clinical progression of Alzheimer's disease (AD). Objectives: To systematically assess correlations between widely used measures of cognition and function across the spectrum of AD. Design: Spearman rank correlations were calculated for cognitive and functional measures across datasets from various AD patient populations. Setting: Post-hoc analysis from existing databases. Participants: Pooled data from placebo-treated patients with mild (MMSE score ≥20 and ≤26) and moderate (MMSE score ≥16 and ≤19) AD dementia from two Phase 3 solanezumab (EXPEDITION/2) and two semagacestat (IDENTITY/2) studies and normal, late mild cognitive impairment (LMCI) and mild AD patients from the Alzheimer's Disease Neuroimaging Initiative 2-Grand Opportunity (ADNI-2/GO). Intervention (if any): Placebo (EXPEDITION/2 and IDENTITY/2 subjects). Measurements: Cognitive and functional abilities were measured in all datasets. Data were collected at baseline and every three months for 18 months in the EXPEDITION and IDENTITY studies, and at baseline, 6, 12, and 24 months in the ADNI dataset. Results: The relationship of cognition and function became stronger over time as AD patients progressed from preclinical to moderate dementia disease stages, with the magnitude of correlations dependent on disease stage and the complexity of the functional task. The correlations were minimal in the normal control population, but became stronger with disease progression.
Conclusions: This analysis found that measures of cognition and function become more strongly correlated with disease progression from preclinical to moderate dementia across multiple datasets. These findings improve the understanding of the relationship between cognitive and functional clinical measures during the course of AD progression and how cognition and function measures relate to each other in AD clinical trials. abstract_id: PUBMED:32662123 Social cognition: Patterns of impairments in mild and moderate Alzheimer's disease. Objective: Social cognition (SC) deficits in Alzheimer's Disease (AD) are commonly associated with the progression of the disease, mainly as a result of global cognitive deterioration. We aimed to investigate the relationship between SC, global cognition, and other clinical variables in people with mild and moderate AD and their caregivers. We also investigated the differences between self-reported SC and family caregivers' ratings of SC. Methods: We included 137 dyads of people with AD (87 mild and 50 moderate) and caregivers. We evaluated social cognition, global cognition, quality of life, dementia severity, mood, functionality, neuropsychiatric symptoms, and caregiver burden. Results: SC presented a specific pattern of impairment, especially when related to global cognition deficits. Although the moderate AD group showed significant worsening in cognition, functionality, and neuropsychiatric symptoms compared to the mild group, SC did not differ significantly between the groups. The multivariate regression analysis showed that in the mild group, self-reported SC was related to age and years of education. In the moderate group, SC was related to gender. For caregivers, in the mild group, SC was related to functionality and quality of life, while in the moderate group, it was associated with quality of life. Conclusion: The pattern of impairment of SC may be more stable as it implies interaction with cognition, mainly in the mild stage, but it also includes subjective factors such as personal perceptions about oneself and others, values, and beliefs that evoke individual, social, cultural, and contextual factors. abstract_id: PUBMED:34879370 Evaluating Residual Cognition in Advanced Cognitive Impairment: The Residual Cognition Assessment. Background: In nursing homes, most of the patients with dementia are affected by a severe cognitive disorder. Care interventions follow an accurate and recurring multidimensional assessment, including cognitive status. There is still a need to develop new performance-based scales for moderate-to-advanced dementia. Objectives: The development of the Residual Cognition Assessment (RCA) responds to the need to create new scales for global cognitive screening in advanced dementia, with some distinctive features: performance-based, brief (<5 min), available without specific training, and suitable for nonverbal patients with minimal distress. Methods: Two raters administered the RCA and the Severe Impairment Battery-short version (SIB-S) to 84 participants with MMSE = 0. After 2-3 weeks, the same sample was retested. The RCA was also administered to 40 participants with MMSE 1-10 for comparison. Results: The RCA exhibited excellent values for test-retest reliability (intraclass correlation [ICC] = 0.956) as well as for inter-rater reliability (ICC = 0.997).
The concurrent validity analyses showed strong correlations between the RCA and the SIB-S (ρ = 0.807, p < 0.01) and between the RCA and the Clinical Dementia Rating (CDR) (ρ = -0.663, p < 0.01). A moderate correlation was found between the RCA and the Functional Assessment Staging Scale (ρ = -0.435, p < 0.01). The instrument also showed high internal reliability (total: α = 0.899). The RCA has a low floor effect (2%) relative to the SIB-S (58%) but shows a ceiling effect in the MMSE 1-10 sample (50%). The ROC curve analyses demonstrate that the RCA is acceptably able to discriminate between subjects with CDR 4/5, with an AUC of 0.92. Exploratory factor analysis showed 3 factors, defined as three major degrees of cognitive performance in advanced dementia, hierarchically structured in three possible levels of decline. Conclusions: The RCA showed excellent validity and reliability as well as good sensitivity in identifying advanced cognitive impairment in dementia, without a floor effect. The RCA seems complementary to the MMSE and is therefore advisable when the latter reaches 0. Administration and scoring are simple, and only a few minutes are required to assess the patient. The RCA can discriminate at least 3 different major stages in advanced dementias: severe, profound, and late. abstract_id: PUBMED:38467581 Association of Alcohol Consumption with Cognition and Functionality in Older Adults Aged 75+ Years: The Pietà Study. The relationship between alcohol consumption and cognition is still controversial. This is a cross-sectional population-based study conducted in Caeté (MG), Brazil, where 602 individuals aged 75+ years, 63.6% female, and with a mean education of 2.68 years, were submitted to thorough clinical assessments and categorized according to the number of alcoholic beverages consumed weekly. The prevalence rates of previous and current alcohol consumption were 34.6% and 12.3%, respectively. No association emerged between cognitive diagnoses and current/previous alcohol consumption categories. Considering current alcohol intake as a dichotomous variable, the absence of alcohol consumption was associated with dementia (OR = 2.34; 95%CI: 1.39-3.90) and worse functionality (p = 0.001). Previous consumption of cachaça (sugar cane liquor) increased the risk of dementia (OR = 2.52; 95%CI: 1.25-5.04). The association between the consumption of cachaça and dementia diagnosis has not been described before. abstract_id: PUBMED:36051206 The effect of risk factors on cognition in adult cochlear implant candidates with severe to profound hearing loss. Hearing loss has been identified as a major modifiable risk factor for dementia. Adult candidates for cochlear implantation (CI) represent a population at risk of hearing loss-associated cognitive decline. This study investigated the effect of demographics, habits, and medical and psychological risk factors on cognition within such a cohort. Data from 34 consecutive adults with post-lingual deafness scheduled for CI were analyzed. Pure tone audiometry (PTA4) and Speech Discrimination Score (SDS) were recorded. The Repeatable Battery for Assessment of Neuropsychological Status for Hearing impaired individuals (RBANS-H) was used to measure cognition. Demographics (sex, age, years of education), habits (smoking, alcohol intake, physical inactivity), and medical factors (hypertension, diabetes, traumatic brain injury) were evaluated.
Depression was measured using the Hospital Anxiety and Depression Scale (HADS), and social inhibition with the Type D questionnaire (DS14). All participants (mean age 62 ± 15 years) suffered from severe to profound hearing loss (PTA4: 129 ± 60 dB; SDS: 14 ± 24%). The mean RBANS-H total score was 83 ± 16. Participants reported a mean of 12 ± 5 years of formal education. The prevalence of habits and medical risk factors was: physical inactivity (29%), body mass index >30 (28%), traumatic brain injury (25%), hypertension (24%), heavy alcohol consumption (13%), smoking (13%), and diabetes (0%). Regarding psychological factors, the mean scores of social inhibition and depression were 10 ± 6 and 6 ± 5, respectively. The number of years of education was significantly correlated with the RBANS-H total score (p < 0.001), and with the domains "Immediate memory" (p = 0.003), "Visuospatial/constructional" (p < 0.001), and "Attention" (p < 0.001). The mean RBANS-H total score in participants who had university-level studies or higher (12/34) was 97 ± 9, with the remaining participants reporting a mean score of 75 ± 15. Men performed better in the "Visuospatial/constructional" domain (p = 0.008). Physical inactivity was associated with lower scores in the "Delayed memory" domain (p = 0.031); hypertension correlated with lower RBANS-H total scores (p = 0.025) and lower "Attention" scores (p = 0.006). Depression and social inhibition were negatively correlated with the RBANS-H total score and with the "Immediate memory," "Visuospatial/constructional," and "Attention" domains (all p < 0.05). In adults with late-onset deafness scheduled for CI, educational level has a significant effect. Additionally, sex, physical inactivity, hypertension, and psychological traits of social inhibition and depression may also influence cognitive status. Long-term studies with more participants would enable us to better understand the effects of different risk factors on cognitive status. Answer: Yes, there is a correlation between cognition and functionality in severe dementia. Studies have shown that even in the later stages of Alzheimer's disease (AD), there can be variable degrees of functionality and cognition present, which require proper assessment. The Performance Test of Activities of Daily Living (PADL) has demonstrated significant statistical correlation with the Severe Mini-Mental State Examination (MMSEsev), according to subdivisions of the Functional Assessment Staging Test (FAST) (PUBMED:25410450). Additionally, the relationship between cognitive function and functionality becomes stronger over time as AD patients progress from preclinical to moderate dementia disease stages (PUBMED:29205252). Furthermore, the Residual Cognition Assessment (RCA) has shown excellent validity and reliability in identifying advanced cognitive impairment in dementia without a floor effect, indicating that it can discriminate between different major stages in advanced dementias (PUBMED:34879370). Moreover, the prevalence of severe neurocognitive impairment among elderly persons attending a Ugandan hospital was found to be 28.01%, with advanced age, female gender, lower education level, and functional dependency significantly associated with severe neurocognitive impairment (PUBMED:34326633). This suggests the importance of routinely screening for cognitive disorders among older persons to enable early detection and treatment of reversible causes of neurocognitive impairment, which can improve functionality.
In summary, the correlation between cognition and functionality in severe dementia is evident, and performance-based functional scales, such as the PADL and RCA, can be useful tools for assessing this relationship and guiding care interventions.
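As a minimal sketch of the correlation analyses these findings rely on (for example, PADL against MMSEsev, or any paired cognition and function scores), a Spearman rank correlation can be computed as below. The score vectors are hypothetical stand-ins, since the abstracts report only aggregate statistics, not patient-level data.

```python
# Illustrative sketch: Spearman rank correlation between a cognitive score
# and a performance-based functional score, in the spirit of PUBMED:25410450
# and PUBMED:29205252. The data points are hypothetical stand-ins.
from scipy.stats import spearmanr

mmse_sev = [3, 8, 12, 15, 18, 21, 25, 27]  # hypothetical MMSEsev scores
padl = [1, 2, 4, 5, 7, 8, 10, 11]          # hypothetical PADL scores

rho, p = spearmanr(mmse_sev, padl)
print(f"rho = {rho:.2f}, p = {p:.4f}")
# A rho close to 1 with a small p-value mirrors the reported pattern that
# cognition and functionality track each other in moderate-to-severe AD.
```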
Instruction: Private care and public health: do vaccination and prenatal care rates differ between users of private versus public sector care in India? Abstracts: abstract_id: PUBMED:15544642 Private care and public health: do vaccination and prenatal care rates differ between users of private versus public sector care in India? Objective: To determine whether patients who use private sector providers for curative services have lower vaccination rates and are less likely to receive prenatal care. Data Sources/study Setting: This study uses data from the 52nd round of the National Sample Survey, a nationally representative socioeconomic and health survey of 120,942 rural and urban Indian households conducted in 1995-1996. Study Design: Using logistic regression, we estimate the relationship between receipt of preventive care at any time (vaccinations for children, prenatal care for pregnant women) and use of public or private care for outpatient curative services, controlling for demographics, household socioeconomic status, and state of residence. Data Collection/extraction Methods: We analyzed samples of children ages 0 to 4 and pregnant women who used medical care within a 15-day window prior to the survey. Principal Findings: With the exception of measles vaccination, predicted probabilities of the receipt of vaccinations and prenatal care do not differ based on the type of provider at which children and women sought curative care. Children and pregnant women in households who use private care are almost twice as likely to receive preventive care from private sources, but the majority still obtains preventive care from public providers. Conclusions: We do not find support for the hypothesis that children and pregnant women who use private care are less likely to receive public health services. Results are consistent with the notion that Indian households are able to successfully navigate the coexisting public and private systems, and obtain services selectively from each. However, because the study employed an observational, cross-sectional study design, findings should be interpreted cautiously. abstract_id: PUBMED:24337056 Quality of prenatal care in public and private services. Purpose: To analyze prenatal care in public and private services. Methods: A cross-sectional, retrospective and analytic study was conducted based on the audit of files of pregnant women who had given birth at a reference hospital for low risk cases in the area of Campos Gerais - Paraná State, in the first semester of 2011. The Yates chi-squared test or Fisher's exact test was used to determine the association between the lack of registration files for pregnant women regarding prenatal assistance in the public and private services, with the level of significance set at p ≤ 0.05. The quality of prenatal care was determined based on the percentage of non-registrations. Results: A total of 500 prenatal files were analyzed. Attendance at six or more prenatal visits was high, with a larger proportion in the private service (91.9%). The laboratory and obstetric exams most frequently not registered in the public and in the private services were, respectively: hepatitis B (79.3 and 48.4%), hemoglobin and hematocrit values (35.6 and 21.8%), anti-HIV serology (29.3 and 12.9%), fetal movement (84.3 and 58.9%) and length (60.4 and 88.7%), edema verification (60.9 and 54.8%), and fetal presentation (52.4 and 61.3%).
The audit of the files of pregnant women allowed us to determine the quality of the prenatal care provided and confirmed differences in assistance according to the setting, showing excellent and good quality of private care, and regular public care for ultrasonography and blood type/Rh factor; regular quality of private care and poor quality of public care for urine tests and weight. For the other types of laboratory and obstetric exams and vaccines, the quality was poor or very poor in both types of services. Conclusion: The differences between the services showed that there is a need for actions aiming at the improvement of the prenatal care provided by public services. abstract_id: PUBMED:17540472 'Where is the public health sector?' Public and private sector healthcare provision in Madhya Pradesh, India. Objective: This paper aims to empirically demonstrate the size and composition of the private health care sector in one of India's largest provinces, Madhya Pradesh. Methodology: It is based on a field survey of all health care providers in Madhya Pradesh (60.4 million in 52,117 villages and 394 towns). Seventy-five percent of the population is rural and 37% live below the poverty line. This survey was done as part of the development of a health management information system. Findings: The distribution of health care providers in the province with regard to sector of work (public/private), rural-urban location, qualification, commercial orientation and institutional set-up are described. Of the 24,807 qualified doctors mapped in the survey, 18,757 (75.6%) work in the private sector. Fifteen thousand one hundred forty-two (80%) of these private physicians work in urban areas. Overall, 72.1% (67,793) of all qualified paramedical staff work in the private sector, mostly in rural areas. Conclusion: The paper empirically demonstrates the dominant heterogeneous private health sector and the overall disparity in healthcare provision in rural and urban areas. It argues for a new role for the public health sector, one of constructive oversight over the entire health sector (public and private) balanced with direct provision of services where necessary. It emphasizes the need to build strong public-private partnerships to ensure equitable access to healthcare for all. abstract_id: PUBMED:15459165 Health care of female outpatients in south-central India: comparing public and private sector provision. The object of this study was to compare components of quality of care provided to female outpatients by practitioners working in the private and public sectors in Karnataka State, India. Consultations conducted by 18 private practitioners and 25 public-sector practitioners were observed for 5 days using a structured protocol. Private practitioners were selected from members of the Indian Medical Association in a predominantly rural sub-district of Kolar District. Government doctors were selected from a random sample of hospitals and health centres in three sub-districts of Mysore District. A total of 451 private-sector and 650 public-sector consultations were observed; in each sector about half involved a female practitioner. The mean length of consultation was 2.81 minutes in the public sector and 6.68 minutes in the private sector. Compared with public-sector practitioners, private practitioners were significantly more likely to undertake a physical examination and to explain their diagnosis and prognosis to the patient. Privacy was much better in the private sector.
One-third of public-sector patients received an injection compared with two-thirds of private patients. The mean cost of drugs dispensed or prescribed was Rupees 37 and 74 in public and private sectors, respectively. In terms of both thoroughness of diagnosis and doctor-patient communication, the quality of care appears to be much higher in the private than in the public sector. However, over-prescription of drugs by private practitioners may be occurring. abstract_id: PUBMED:12797696 The private-public divide: impact of conflicting perceptions between the private and public health care sectors in India. Setting: India's private health care sector manages half the nation's tuberculosis (TB) patients, accounting for an estimated sixth of global TB cases. While several studies have demonstrated private physicians' dubious diagnosis and treatment styles and lack of cooperation with public physicians, very little is still known about the private sector. Objectives: Using a detailed questionnaire to randomly survey private and public practitioners in Ahmedabad, Gujarat, India, we quantified perceptions held by each sector. Study Design: Cross-sectional survey of private and public physicians. Results: Significant conflicts in perception were found regarding interpretation of general facts, attitudes towards each sector, and effectiveness and social implications of DOTS. We also found that such differences in perception were likely to result in mistrust, differing views on reform propositions, conflicting mindsets about social agendas, and unwillingness to cooperate. Conclusion: Our data suggest that reconciliation is attainable by obtaining and distributing unbiased, evidence-based information and exposing physicians to both private and public health care sectors in a professional setting. abstract_id: PUBMED:16438997 Public-private partnerships for equity of access to care for tuberculosis and HIV/AIDS: lessons from Pune, India. The private medical sector is an important and rapidly growing source of health care in India. Private medical providers (PMP) are a diverse group, known to be poorly regulated by government policies and variable in the quality of services provided. Studies of their practices have documented inappropriate prescribing as well as violation of ethical guidelines on patient care. However, despite the critique that inequitable services characterise the private medical sector, PMPs remain important and preferred providers of primary care. This paper argues that their greater involvement in the public health framework is imperative to addressing the goal of health equity. Through a review of two research studies conducted in Pune, India, to examine the role of PMPs in tuberculosis (TB) and HIV/AIDS care, the themes of equity and access arising in private sector delivery of care for TB and HIV/AIDS are explored and the future policy directions for involving PMPs in public health programmes are highlighted. The paper concludes that public-private partnerships can enhance continuity of care for patients with TB and HIV/AIDS and argues that interventions to involve PMPs must be supported by appropriate research, along with political commitment and leadership from both public and private sectors. abstract_id: PUBMED:31278642 Evaluation of biomedical waste management practices in public and private sector of health care facilities in India. Proper management of biomedical waste (BMW) is required to avoid environmental and human health risks.
The current study evaluated the BMW practices in public and private health care facilities of Fatehgarh Sahib District in Punjab, India. The study was conducted using a modified World Health Organization (WHO) tool in 120 health care facilities randomly selected from rural and urban areas. At the primary health care level, BMW management guidelines were followed in 67.2% of the public sector and 40.4% of the private sector facilities, whereas in the secondary health care sector both private and public sector facilities showed 100% compliance. Health facilities were graded into different categories according to median score, i.e., scores < 2.5 were categorized as red (no credible BMW management system in place), scores between 2.5 and 7.5 as yellow (system present but needs major improvement) and scores > 7.5 as green (good system in place for BMW). It was observed that among primary health care facilities, 85% of the public sector and 64% of private sector facilities fell in the red category, whereas for secondary health care facilities only 8% fell in the red category. Logistic regression helped to identify the major factors that affect the performance of a health care facility, and showed that regular training on BMW and improved infrastructure can improve BMW management practices. Further, proper management of BMW requires multi-sectoral coordination, which can be better addressed through policies and by providing periodical training to all stakeholders. abstract_id: PUBMED:12567924 Prenatal care services in the public and private arena. Purpose: This exploratory study described the prenatal care experience in the public and private arena from the perceptions of childbearing women using interpretive interactionism. Data Sources: A face-to-face interview consisting of eight open-ended questions was used to obtain pregnant women's perceptions of their prenatal care experience and prenatal care needs. The purposive sample consisted of six women who received private prenatal care and 14 women who received public prenatal care. Conclusions: Five essential elements of the prenatal care experience were identified. Prenatal care was viewed as a cooperative effort between informal self-care and formal care by health professionals. Issues related to individuality and normality were important considerations in the delivery of prenatal care. Implications For Practice: Controversy exists over the effectiveness of prenatal care in preventing poor outcomes, as the definition of what constitutes adequate prenatal care remains unclear. Advanced practice nurses (APNs) continue to play a pivotal role in the provision of prenatal care services. The expanded knowledge and skills possessed by APNs place them in a pivotal position to develop and implement individualized, developmentally appropriate prenatal care that the women in this study so desperately wanted. In addition, they can assist women in continuing the health-promoting behaviors initiated prenatally throughout their lifespan. abstract_id: PUBMED:26970466 Public-private mix for TB care in India: Concept, evolution, progress. To achieve "Universal access to TB care and treatment for all", the Revised National Tuberculosis Control Programme (RNTCP) has taken steps to reach the unreached by synergizing the efforts of all partners and stakeholders. RNTCP is engaging with private sector partners in major cities of India with primary focus on notification through innovative partnership mechanisms.
The manuscript details the concept behind the public-private mix for TB Care in RNTCP, its evolution and progress over the decades in India. abstract_id: PUBMED:31625556 The quality of care in outpatient primary care in public and private sectors in Malaysia. In Malaysia, first-contact, primary care is provided by parallel public and private sectors, which are completely separate in organization, financing and governance. As the country considers new approaches to financing, including using public schemes to pay for private care, it is crucial to examine the quality of clinical care in the two sectors to make informed decisions on public policy. This study intends to measure and compare the quality of clinical care between public and private primary care services in Malaysia and, to the extent possible, assess quality with the developed economies that Malaysia aspires to join. We carried out a retrospective analysis of the National Medical Care Survey 2014, a nationally representative survey of doctor-patient encounters in Malaysia. We assessed clinical quality for 27 587 patient encounters using data on 66 internationally validated quality indicators. Aggregate scores were constructed, and comparisons made between the public and private sectors. Overall, patients received the recommended care just over half the time (56.5%). The public sector performed better than the private sector, especially in the treatment of acute conditions, chronic conditions and in prescribing practices. Both sectors performed poorly in the indicators that are most resource intensive, suggesting that resource constraints limit overall quality. A comparison with 2003 data from the USA, suggests that performance in Malaysia was similar to that a decade earlier in the USA for common indicators. The public sector showed better performance in clinical care than the private sector, contrary to common perceptions in Malaysia and despite providing worse consumer quality. The overall quality of outpatient clinical care in Malaysia appears comparable to other developed countries, yet there are gaps in quality, such as in the management of hypertension, which should be tackled to improve overall health outcomes. Answer: Based on the provided abstracts, it appears that there are differences in vaccination and prenatal care rates between users of private versus public sector care in India, although the extent and nature of these differences vary. One study found that, with the exception of measles vaccination, predicted probabilities of the receipt of vaccinations and prenatal care do not differ based on the type of provider at which children and women sought curative care. It also noted that children and pregnant women in households who use private care are almost twice as likely to receive preventive care from private sources, but the majority still obtains preventive care from public providers (PUBMED:15544642). Another study highlighted that the quality of prenatal care was better in private services, with a significant attendance of six or more prenatal visits, and better quality of private care for certain laboratory and obstetric exams. However, it also pointed out that public care was of regular quality for some services, and poor for others, indicating a need for improvement in the public sector (PUBMED:24337056). 
The abstract discussing healthcare provision in Madhya Pradesh, India, did not directly address vaccination and prenatal care rates but emphasized the dominant role of the private health sector and the disparity in healthcare provision in rural and urban areas. It suggested a new role for the public health sector in overseeing the entire health sector, including private providers, to ensure equitable access to healthcare (PUBMED:17540472). In summary, while there are differences in the quality and extent of prenatal care and vaccination rates between private and public sector users in India, the evidence suggests that households are able to navigate both systems to obtain services. However, there is a clear indication that the quality of care, particularly prenatal care, is generally better in the private sector, and there is room for improvement in the public sector to achieve better health outcomes (PUBMED:15544642, PUBMED:24337056, PUBMED:17540472).
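As a methodological aside, the sketch below shows the general shape of the logistic-regression analysis described in PUBMED:15544642 (receipt of preventive care regressed on the sector used for curative care, with covariates). It is a minimal illustration only: the column names, covariates, and data are invented here and do not come from the National Sample Survey.

```python
# Hedged sketch of a preventive-care-vs-sector logistic regression.
# All variable names and the toy data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "private_curative": rng.integers(0, 2, n),  # 1 = used private provider
    "urban": rng.integers(0, 2, n),
    "wealth_quintile": rng.integers(1, 6, n),
})
# Toy outcome with no built-in sector effect, mirroring the study's null finding.
logit_p = -0.5 + 0.2 * df["urban"] + 0.1 * df["wealth_quintile"]
df["got_vaccination"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "got_vaccination ~ private_curative + urban + wealth_quintile",
    data=df,
).fit(disp=0)
print(model.params)          # log-odds coefficients
print(np.exp(model.params))  # odds ratios; private_curative near 1 => no effect
```

The coefficient on the sector indicator (on the odds-ratio scale) is what answers the question of whether private-sector curative care predicts lower odds of receiving preventive services.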
Instruction: Does the maxillary midline diastema close after frenectomy? Abstracts: abstract_id: PUBMED:35248905 Association between superior labial frenum and maxillary midline diastema - a systematic review. Background: Pediatric otolaryngologists have seen an increased focus on upper lip frenum as a possible culprit for feeding difficulties and the development of maxillary midline diastema (MMD). This increase may be encouraged by parents' exposure to medical advice over the internet about breastfeeding and potential long-term aesthetic concerns for their children. Subsequently, there has been increased pressure on pediatric otolaryngologists to perform superior labial frenectomies. There has been a reported 10-fold increase in frenectomies since the year 2000. However, there is no consensus within the literature regarding the benefit of superior labial frenectomy in preventing midline diastema. Objective: To provide physicians and parents with the most updated information by systematically reviewing the available literature for the association between superior labial frenum and midline diastema. Methods: A literature search was performed in MEDLINE (PubMed), EMBASE, Web of Science, the Cochrane Library and Dental and Oral Sciences Source (DOSS). Using the Covidence platform, a systematic review was conducted. The initial 314 articles identified underwent systematic review and 11 studies were included in the final review. Results/discussion: Available data, primarily from the dental literature, showed that two subtypes of frenum, papillary and papillary penetrating, are associated with maxillary midline diastema. Superior labial frenectomy should be delayed until permanent lateral incisors have erupted, as this can spontaneously close the physiological MMD. Current literature recommends against frenectomy before addressing the diastema with orthodontics, which helps to prevent diastema relapse. It is also imperative to rule out other odontogenic and oral cavity causes of diastema, such as thumb sucking and dental agenesis. Online information may not always be fully representative and should be interpreted in the full context of the patient's medical history before referral for surgical intervention. abstract_id: PUBMED:24392496 Does the maxillary midline diastema close after frenectomy? Objective: To analyze the closure, persistence or reopening of the maxillary midline diastema after frenectomy in patients with and without subsequent orthodontic treatment. Method And Materials: All patients undergoing frenectomy with a CO2 laser were included in this retrospective study during the period of September 2002 to June 2011. Age and sex, the dimension of the diastema, eruption status of the maxillary canines, and the presence of an orthodontic treatment were recorded on the day of frenectomy and during follow-up. Results: Of the 59 patients fulfilling the inclusion criteria, 31 (52.5%) had active orthodontic therapy, while 27 (45.8%) had a frenectomy without orthodontic treatment. For one patient, information concerning orthodontic treatment was not available. In the first follow-up (2 to 12 weeks), only four diastemas closed after frenectomy and orthodontic treatment, and none after frenectomy alone. In the second follow-up (4 to 19 months), statistically significantly (P = .002) more diastemas (n = 20) closed with frenectomy and orthodontic treatment than with frenectomy alone (n = 3).
At the long-term (21 to 121 months) follow-up, only four patients had a persisting diastema, and in three patients orthodontic treatment was ongoing. Conclusion: Closure of the maxillary midline diastema with a prominent frenum is more predictable with frenectomy and concomitant orthodontic treatment than with frenectomy alone. This study demonstrates the importance of an interdisciplinary approach to treat maxillary midline diastemas, ideally including general practitioners, oral surgeons, periodontists, and orthodontists. abstract_id: PUBMED:35283058 Safety and efficacy of maxillary labial frenectomy in children: A retrospective comparative cohort study. Background: Maxillary frenectomy in children is a common procedure, but concerns about scar tissue affecting diastema closure prevent many clinicians from treating prior to orthodontics. Objectives: To determine if maxillary frenectomy is safe and if diastema size is affected by early treatment. Materials And Methods: Paediatric patients with hypertrophic maxillary frena were treated under local anaesthesia with diode laser and CO2 laser. Diastema width was compared by calibrating and digitally measuring initial and postoperative intraoral photographs. Results: In total, 109 patients were included: 95 patients with primary dentition (39% male; mean age 1.9±1.5 years) and 14 with mixed dentition (43% male; mean age 8.1±1.3 years) with a mean follow-up of 18.0±13.2 months. No adverse outcomes were noted other than minor pain and swelling. In the primary dentition, a decrease in diastema width was observed in 94.7%, with a mean closure of -1.4±1.0 mm (range +0.7 to -5.1 mm). In the mixed dentition, a decrease in diastema width was observed in 92.9%, with a mean closure of -1.8±0.8 mm (range 0 to -3.5 mm). 74.5% of patients in the primary dentition and 75% of patients in the mixed dentition with preoperative diastema >2 mm improved to <2 mm width postoperatively. Conclusions: Frenectomy is associated with cosmetic and oral hygiene benefits and, when performed properly, does not impede diastema closure and may aid closure. Technique and case selection are critical to successful outcomes. IRB ethics approval was obtained from Solutions IRB protocol #2018/12/8, and this investigation was self-funded. abstract_id: PUBMED:23285469 Frenectomy: a review with the reports of surgical techniques. The frenum is a mucous membrane fold that attaches the lip and the cheek to the alveolar mucosa, the gingiva, and the underlying periosteum. The frena may jeopardize the gingival health when they are attached too closely to the gingival margin, either due to an interference in the plaque control or due to a muscle pull. In addition to this, the maxillary frenum may present aesthetic problems or compromise the orthodontic result in the midline diastema cases, thus causing a recurrence after the treatment. The management of such an aberrant frenum is accomplished by performing a frenectomy. The present article is a compilation of a brief overview about the frenum, with a focus on the indications, contraindications, advantages and the disadvantages of various frenectomy techniques, like Miller's technique, V-Y plasty, Z-plasty and frenectomy by using electrocautery. A series of clinical cases of frenectomy which were approached by various techniques has also been reported. abstract_id: PUBMED:34250477 Evaluation of the distance between the central teeth after frenectomy: a randomized clinical study.
Purpose: The present study aimed to evaluate the periodontal status and the distance between the teeth one year after frenectomy in patients with abnormal frenums in the maxillary and mandibular midline. Materials And Methods: This study included 50 patients (24 men and 26 women) between the ages of 13 and 53 who had frenum-induced diastemas between the incisors. The abnormal frenums were removed via conventional frenectomy. The distances between the teeth before and one year after the surgery were measured with a caliper. To determine the periodontal status, the pocket depth, plaque index, and bleeding on probing were measured from four surfaces. In addition, the amount of attached gingiva and degree of gingival recession were recorded and were statistically analysed. Results: A significant decrease in the distance between teeth before and after frenectomy was observed (p<0.05). There was a statistically significant difference in the amount of gingival attachment, pocket depth, degree of gingival recession, plaque index, and bleeding on probing (p<0.05). Conclusion: The removal of abnormal frenums with frenectomy can contribute to the reduction in the distance between the teeth. In addition, frenectomy increases the amount of gingiva and decreases the depth of the pocket, gingival recession, amount of plaque, and bleeding. abstract_id: PUBMED:36407220 Multidisciplinary Approach to Treatment of Midline Diastema With Edge-to-Edge Bite. Aesthetic treatments have gained massive popularity in the recent past. Midline diastema and spacing are among the most common complaints reported to an orthodontic clinic. The major complaint with such cases is the poor aesthetics that accompany them. Although many restorative treatment options are available to treat these cases, their long-term success is still questionable. The primary aetiology is abnormal frenal attachment, as seen in the case. Eliminating the etiologic factor is vital to attaining a stable treatment outcome. In the present case, a frenectomy was performed to correct the abnormal frenal attachment. Even after correcting the aetiology, a correct retention protocol is equally essential. The present article presents the treatment of a case with midline diastema and an edge-to-edge bite, and a high frenal attachment. abstract_id: PUBMED:17456963 Spontaneous closure of midline diastema following frenectomy. Maxillary midline diastema is a common aesthetic problem in mixed and early permanent dentitions. The space can occur either as a transient malocclusion or be created by developmental, pathological or iatrogenic factors. Many innovative therapies varying from restorative procedures such as composite build-up to surgery (frenectomies) and orthodontics are available. Although the literature says every frenectomy procedure should be preceded by orthodontic treatment, we opted for a frenectomy technique without any orthodontic intervention. Presented herewith is a case report of a 9-year-old girl with a high frenal attachment that had caused spacing of the maxillary central incisors. A spontaneous closure of the midline diastema was noted within 2 months following frenectomy. The patient was followed up for 4 months, after which the space remained closed and there was no necessity for orthodontic treatment at a later stage. abstract_id: PUBMED:23928441 Modified frenectomy: a review of 3 cases with concerns for esthetics. The maxillary labial frenum is a normal anatomical structure in the oral cavity.
An abnormal labial frenum causes localized gingival recession and midline diastema, both of which can interfere with oral hygiene procedures, and eventually affect esthetics. When the frenum maintains its high papillary attachment, frenectomy is the treatment of choice. Though this technique has undergone many modifications, the zone of attachment and esthetics in the anterior maxillary region have been neglected. This article highlights a new frenectomy technique that results in good esthetics, excellent color match, gain in attached gingiva, and healing by primary intention at the site of thick, extensive abnormal frena. abstract_id: PUBMED:36946620 Maxillary midline diastema closure with sectional feldspathic porcelain veneers: A case series followed 1 to 4 years. Objective: To evaluate clinical outcome of maxillary midline diastema closure using sectional feldspathic porcelain veneers up to 4 years. Materials And Methods: Five female patients with stable maxillary midline diastema were included in the current study and restored with minimally invasive sectional feldspathic porcelain veneers under a dental microscope. Esthetic, functional, as well as biologic properties of sectional veneers were evaluated according to the World Dental Federation (FDI) criteria after 1 to 4 years' follow-up. Results: Based on FDI criteria, all five cases were evaluated as clinically excellent to satisfactory during a mean observation period of 26.4 months. Slight to moderate cervical marginal staining was observed in two cases after 3 to 4 years' follow-up. A slight white line or ditching of the labial surface margin was found in most cases, but it is not easily noticed when saliva covers the margins. Conclusions: Sectional feldspathic porcelain veneers presented a satisfying clinical outcome with a potential long-term esthetic risk for maxillary midline diastema closure. Clinical Significance: 1. Sectional feldspathic porcelain veneers presented satisfying short- to medium-term clinical performance when applied to close maxillary midline diastema. 2. Exposure of the labial surface margin may increase the long-term esthetic risk. abstract_id: PUBMED:36457638 Midline Diastema Closure Using an M Spring in a Young Adult: A Case Report. The malocclusion known as maxillary midline diastema frequently occurs. For the management of diastema, a variety of therapy approaches are recommended. This article gives a case report of a female patient, age 24, who underwent treatment for a 4-mm maxillary midline diastema following the extraction of an additional tooth between her upper central incisors. The orthodontic treatment index was minimal (grade 2). The maxillary central incisors were fitted with bonded McLaughlin-Bennett-Trevisi (MBT) 0.018" brackets. A rectangular archwire was used after the circular archwire to ligate and activate the M spring. The diastema was successfully closed after a total of four months of treatment. This approach is effective and efficient and uses minimal inventory. This shortens the length of the orthodontic treatment and conserves valuable chairside time. Answer: The closure of maxillary midline diastema (MMD) after frenectomy depends on several factors, including whether orthodontic treatment is used in conjunction with the frenectomy. According to a retrospective study, closure of MMD with a prominent frenum is more predictable with frenectomy and concomitant orthodontic treatment than with frenectomy alone (PUBMED:24392496).
This study found that in the first follow-up, only four diastemas closed after frenectomy and orthodontic treatment, and none after frenectomy alone. In the second follow-up, statistically significantly more diastemas closed with frenectomy and orthodontic treatment than with frenectomy alone. Another study reported that a significant decrease in the distance between teeth before and after frenectomy was observed, suggesting that the removal of abnormal frenums with frenectomy can contribute to the reduction in the distance between the teeth (PUBMED:34250477). Additionally, a case report presented a spontaneous closure of the midline diastema within 2 months following frenectomy without any orthodontic intervention (PUBMED:17456963). However, it is important to note that the literature recommends against performing a frenectomy before addressing the diastema with orthodontics to help prevent diastema relapse (PUBMED:35248905). Moreover, a systematic review indicated that superior labial frenectomy should be delayed until permanent lateral incisors have erupted, as this can spontaneously close the physiological MMD (PUBMED:35248905). In summary, while frenectomy can contribute to the closure of maxillary midline diastema, the procedure is more effective when combined with orthodontic treatment, and the timing of the intervention should be carefully considered to optimize outcomes.
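As a worked arithmetic note on the group comparison reported in PUBMED:24392496 (second follow-up: 20 of 31 diastemas closed with frenectomy plus orthodontics versus 3 of 27 with frenectomy alone, reported P = .002), the snippet below runs a Fisher's exact test on that 2x2 table. The paper does not state which test it used, so this is an approximate re-check of the published comparison, not a reproduction of its analysis.

```python
# 2x2 table from the counts reported in the abstract.
from scipy.stats import fisher_exact

table = [[20, 31 - 20],   # frenectomy + orthodontics: closed / not closed
         [3, 27 - 3]]     # frenectomy alone:           closed / not closed
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
```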
Instruction: Do clinicians conceptualize DSM-IV disorders hierarchically? Abstracts: abstract_id: PUBMED:22549235 Do clinicians conceptualize DSM-IV disorders hierarchically? Objective: All classification systems of psychopathology use hierarchical categories. The purpose of the two studies in this article was to test whether clinicians think hierarchically about mental disorders. Method: Seventy-six clinicians were asked to sort 67 diagnostic categories into groups using different instruction sets, either to make progressively larger and smaller groups of diagnoses (Study 1) or to place similar groups next to each other (Study 1 and Study 2). Results: Clinicians' sortings of mental disorders had a hierarchical structure regardless of the methodology, profession, expertise, and instructional set used. Conclusions: Given that all modern diagnostic systems have been hierarchical, it is important to know that clinicians' thinking is also hierarchical. abstract_id: PUBMED:8582301 DSM-IV The fourth edition of the Diagnostic and Statistical Manual of Mental Disorders was published by the American Psychiatric Association in 1994. DSM-IV relies upon the same basic concepts as DSM-III and DSM-III-R: explicit diagnostic criteria, multiaxial system, and a descriptive approach that attempts to be neutral with respect to theories of etiology. The DSM-IV revision process has included comprehensive and systematic reviews of the published literature, reanalyses of already-collected data sets and extensive issue-focused field trials. Considerable efforts have been made to ensure that the codes and terms provided in DSM-IV are fully compatible with both ICD-9-CM and ICD-10. According to the authors of DSM-IV, the major innovation of DSM-IV lies not in any of its specific content changes but rather in the systematic and explicit process by which it was constructed and documented. abstract_id: PUBMED:8582310 DSM IV and training: the limits Since its third edition, DSM has been considered to be an excellent tool for psychiatric research. The primary objective of this classificatory system was to put forward internationally accepted standard definitions. DSM diagnostic criteria are now indispensable for any publication in the scientific literature. It appears however that this work has gradually lost sight of its initial objective and is used as an educational tool for training of clinicians. What are the limits and risks of such a use? Can the DSM IV philosophy be reconciled with the objectives of training? Are the criteria in force for the selection of homogeneous patient groups identical to those which enable knowledge acquisition required for identification of disorders and their treatment? What is the heuristic value of enumerating symptoms and syndromes isolated from any theoretical context? Can symptoms be separated from the patient's history and personality? Is the excessive use of concurrent disorders not likely to be a source of conceptual and therapeutic inflation? Is a purely descriptive approach to psychiatric disorders not likely to run the risk of overestimating them? These points are discussed in turn by the authors. abstract_id: PUBMED:25750592 Alcohol Use Disorders: Translational Utility of DSM-IV Liabilities to the DSM-5 System. Objectives: Young adults have some of the highest rates of problem drinking and alcohol use disorders (AUDs) relative to any other age.
However, recent evidence suggests that the DSM-IV hierarchical classification system of AUDs does not validly represent symptoms in the population; instead, it evinces a unitary, dimensional classification scheme. The DSM-5 has been altered to fit this changing, evidence-based conceptualization. Nevertheless, little is understood about the degree to which known risk factors for DSM-IV AUD diagnoses will transfer to the new DSM-5 guidelines in this group of high-risk drinkers. The current study built a coherent model of liabilities for DSM-IV AUDs in young adults and tested for transferability to DSM-5. Methods: N = 496 college students (51.10% male) were assessed on a variety of factors related to AUD risk, including demographics, substance use (past 90 days), and drinking motives. Liability models were created using all variables in Structural Equation Modeling to test direct and indirect effects on DSM diagnostic status. The best model under the DSM-IV was chosen based on fit and parsimony. This model was then applied to the DSM-5 system to test for transferability. Results: The best-fitting model for DSM-IV included direct influences of drug use, quantity-frequency of alcohol consumption, and social and coping drinking motives. Improved model fit was found when the DSM-5 system was the outcome. Conclusions: Knowledge of risk factors for AUDs appears to transfer well to the new diagnostic system. abstract_id: PUBMED:23932575 DSM-IV personality disorders and associations with externalizing and internalizing disorders: results from the National Epidemiologic Survey on Alcohol and Related Conditions. Background: Although associations between personality disorders and psychiatric disorders are well established in general population studies, their association with liability dimensions for externalizing and internalizing disorders has not been fully assessed. The purpose of this study is to examine associations between personality disorders (PDs) and lifetime externalizing and internalizing Axis I disorders. Methods: Data were obtained from the total sample of 34,653 respondents from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). Drawing on the literature, a 3-factor exploratory structural equation model was selected to simultaneously assess the measurement relations among DSM-IV Axis I substance use and mood and anxiety disorders and the structural relations between the latent internalizing-externalizing dimensions and DSM-IV PDs, adjusting for gender, age, race/ethnicity, and marital status. Results: Antisocial, histrionic, and borderline PDs were strong predictors for the externalizing factor, while schizotypal, borderline, avoidant, and obsessive-compulsive PDs had significantly larger effects on the internalizing fear factor when compared to the internalizing misery factor. Paranoid, schizoid, narcissistic, and dependent PDs provided limited discrimination between and among the three factors. An overarching latent factor representing general personality dysfunction was significantly greater on the internalizing fear factor followed by the externalizing factor, and weakest for the internalizing misery factor. Conclusion: Personality disorders offer important opportunities for studies on the externalizing-internalizing spectrum of common psychiatric disorders. Future studies based on panic, anxiety, and depressive symptoms may elucidate PD associations with the internalizing spectrum of disorders.
abstract_id: PUBMED:24643833 Addictive behaviours from DSM-IV to DSM-5 Background: The 5th edition of the DSM was published in May 2013. The new edition incorporates important changes in the classification of addiction. Aim: To compare the classification of addictive behaviours presented in DSM-IV with the classification presented in DSM-5 and to comment on the changes introduced into the new version. Method: First of all, the historical developments of the concept of addiction and the classification of addictive behaviours up to DSM-IV are summarised. Then the changes that have been incorporated into DSM-5 are described. Results: The main changes are: (1) DSM-IV substance-related disorders and DSM-IV pathological gambling have been combined into one new DSM-5 category, namely 'Substance-Related and Addictive Disorders'; (2) DSM-IV abuse and dependence have been combined into one new DSM-5 diagnosis, namely 'Substance Use Disorder'; (2a) the DSM-IV abuse criterion 'recurrent substance-related legal problems' has been deleted and the DSM-5 criterion 'craving' has been introduced; and (2b) the criteria for (partial) remission have been sharpened. Conclusion: DSM-5 is an improvement on DSM-IV, but for the diagnosis of a psychiatric disorder and the treatment of a psychiatric patient, classification needs to be complemented with staging and profiling. abstract_id: PUBMED:9661099 Diagnostic assignment of criteria: clinicians and DSM-IV. The study examined clinician matching of diagnostic criteria to selected DSM-IV Axis I and II disorders. A national sample of clinical psychologists and psychiatrists assigned symptom criteria, presented in scrambled order by axis, to DSM-IV diagnoses with which they believed the criteria belonged, without using the DSM. On average, clinicians assigned 69% of Axis I criteria and 75% of Axis II criteria to the designated DSM-IV diagnosis. The Axis II data represent increased agreement over the 66% found for DSM-III-R. Reasons for the increase are discussed, focusing on modifications made in DSM-IV and increased familiarity with personality disorders. The significantly higher rate of agreement for Axis II over Axis I contrasts with typical reliability data, which suggest that Axis I disorders are better defined. Specific points of disagreement between clinician criteria assignments and the DSM-IV are discussed. abstract_id: PUBMED:18729621 An alternative hierarchical organization of the mental disorders of the DSM-IV. With the approaching publication of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM), alternative organizations of the DSM (4th ed.; DSM-IV; American Psychiatric Association, 1994) categories have been proposed. This article compares several published alternative organizations to clinicians' organization of the DSM-IV categories. As demonstrations of their organization of DSM-IV categories, psychologists and psychiatrists sorted 66 DSM-IV diagnostic categories into groups of similar diagnoses and then made progressively larger and smaller groups of diagnoses or placed similar groups next to each other on a table. Hierarchical agglomerative data analysis of clinicians' individual sortings showed that clinicians retained many lower level DSM-IV categories (e.g., anxiety disorders, mood disorders), but not the higher level DSM-IV categories (e.g., Axis I vs. Axis II).
Instead, at the highest hierarchical level, clinicians' categories resembled the structure of the first edition of the DSM (American Psychiatric Association, 1952), which followed clinicians' diagnostic decision-making scheme, dividing mental disorders into organic versus nonorganic and then psychotic versus neurotic disorders. At minimum, these data suggest a DSM organization that makes sense to clinicians. abstract_id: PUBMED:23627600 Axis IV--psychosocial and environmental problems--in the DSM-IV. The aim of this study was to further explore the properties of axis IV in the Diagnostic and statistical manual of mental disorders, 4th edition (DSM-IV). In a naturalistic cross-sectional design, a group (n = 163) of young (18-25 years old) Swedish psychiatric outpatients was assessed according to DSM-IV. Psychosocial and environmental problems/axis IV were evaluated through structured interviewing by a social worker and by self-assessment on a questionnaire. Reliability between professional assessment and self-assessment of axis IV was examined. Concurrent validity of axis IV was also examined. Reliability between professional and self-assessed axis IV was fair to almost perfect, 0.31-0.83, according to prevalence and bias-adjusted kappa. Categories of psychosocial stress and environmental problems were related to the presence of axis I disorders, co-morbidity, personality disorders and decreasing Global Assessment of Functioning (GAF) values. The revised axis IV according to DSM-IV seems to have concurrent validity, but is still hampered by limited reliability. abstract_id: PUBMED:28963857 Prevalence and severity of eating disorders: A comparison of DSM-IV and DSM-5 among German adolescents. Objective: Changes in the DSM-5 eating disorders criteria sought to increase the clarity of the diagnostic categories and to decrease the preponderance of nonspecified eating disorders. The first objective of this study was to analyze how these revisions affect threshold and EDNOS/OSFED eating disorder diagnoses in terms of prevalence, sex ratios, and diagnostic distribution in a student sample. Second, we aimed to compare the impairment levels of participants with a threshold, an EDNOS/OSFED and no diagnosis using both DSM-IV and DSM-5. Method: A sample of 1654 7th and 8th grade students completed self-report questionnaires to determine diagnoses and impairment levels in the context of an eating disorder prevention program in nine German secondary schools. Height and weight were measured. Results: The prevalence of threshold disorders increased from .48% (DSM-IV) to 1.15% (DSM-5). EDNOS disorders increased from 2.90 to 6.23% when using OSFED categories. A higher proportion of girls was found throughout all the diagnostic categories, and the sex ratios remained stable. The effect sizes of DSM-5 group differences regarding impairment levels were equal to or larger than those of the DSM-IV comparisons, ranging from small to medium. Discussion: We provide an in-depth overview of changes resulting from the revisions of DSM eating disorder criteria in a German adolescent sample. Despite the overall increase in prevalence estimates, the results suggest that the DSM-5 criteria differentiate participants with threshold disorders and OSFED from those with no diagnosis as well as or even more distinctly than the DSM-IV criteria. Answer: Yes, clinicians do conceptualize DSM-IV disorders hierarchically.
A study involving 76 clinicians who were asked to sort 67 diagnostic categories into groups using different instruction sets found that clinicians' sortings of mental disorders had a hierarchical structure regardless of the methodology, profession, expertise, and instructional set used (PUBMED:22549235). This suggests that clinicians naturally organize mental disorders in a hierarchical manner, which aligns with the hierarchical structure of modern diagnostic systems like the DSM-IV.
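To make the sorting methodology of PUBMED:22549235 and PUBMED:18729621 concrete, the sketch below applies hierarchical agglomerative clustering to a toy co-sorting matrix (the fraction of clinicians who grouped each pair of diagnoses together). The diagnoses and proportions are invented for illustration; the studies' actual 66-67 category data are not reproduced here.

```python
# Hedged sketch: cluster a toy co-sorting matrix the way a hierarchical
# agglomerative analysis of clinicians' sortings might.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

diagnoses = ["panic disorder", "GAD", "major depression",
             "dysthymia", "alcohol use", "cannabis use"]
# Toy pairwise co-sorting proportions (symmetric, 1.0 on the diagonal).
co_sort = np.array([
    [1.0, 0.9, 0.3, 0.3, 0.1, 0.1],
    [0.9, 1.0, 0.4, 0.3, 0.1, 0.1],
    [0.3, 0.4, 1.0, 0.8, 0.2, 0.1],
    [0.3, 0.3, 0.8, 1.0, 0.2, 0.1],
    [0.1, 0.1, 0.2, 0.2, 1.0, 0.9],
    [0.1, 0.1, 0.1, 0.1, 0.9, 1.0],
])
distance = 1.0 - co_sort           # pairs sorted together more often are "closer"
np.fill_diagonal(distance, 0.0)    # required by squareform
Z = linkage(squareform(distance), method="average")
dendrogram(Z, labels=diagnoses, no_plot=True)  # structure only; plot if desired
print(Z)  # each row: merged clusters, merge distance, resulting cluster size
```

The resulting dendrogram is exactly the kind of object in which one can read off whether lower-level groupings (anxiety, mood, substance) nest under higher-level splits.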
Instruction: Late potential analysis: is a mathematically-derived X,Y,Z lead system comparable to a true orthogonal X,Y,Z lead system? Abstracts: abstract_id: PUBMED:12431307 Late potential analysis: is a mathematically-derived X,Y,Z lead system comparable to a true orthogonal X,Y,Z lead system? Background: Analysis of ventricular late potentials (LP) with signal-averaged ECG (SAECG) using three bipolar, orthogonal X,Y, Z leads is a validated method of risk-stratification in patients prone to ventricular tachycardia. The aim of this study was to validate an ECG system, which allows LP analysis using X,Y, Z leads mathematically derived from the standard 12-lead ECG. Methods And Results: In 36 patients (age 56 +/- 12 years, coronary artery disease 71%, LVEF 46 +/- 14%) with known or suspected ventricular tachyarrhythmia, two consecutive SAECGs were recorded, one with mathematically derived and another one with true X,Y, Z leads. Time domain measurements with these different lead systems were compared using linear regression analysis and "Bland-Altman" plots. Correlation was good (r = 0.92) for the filtered QRS complex duration, but poor for the terminal QRS amplitude (RMS) and duration (LAS) criteria (r = 0.66 and 0.61, respectively; P < 0.0001). Defining LPs as present if at least two of the three time domain criteria were abnormal, the result matched in 28 (78%), but differed in 8 (22%) patients. Conclusion: SAECG using X,Y, Z leads mathematically derived from the standard 12-lead ECG compared to true bipolar X,Y, Z leads show a close correlation in filtered QRS duration, but can differ considerably in the other time domain measurements, resulting in different interpretation of LP analysis in 22%. Therefore, SAECG registration should currently be performed with true X,Y, Z leads, until the accuracy of other approaches is validated. abstract_id: PUBMED:510349 An anatomical orthogonal four-electrode X-Y-Z lead system for universal ECG recording. Features of the P, QRS and T waves in the normal 12-lead ECG have been measured and the information displayed is estimated to be 75% redundant. The high level of redundancy results in an excessive volume of superfluous data. A simple 4-electrode 3-lead X-Y-Z system has been developed and is proposed for wide use in electrocardiology. The electrodes are anatomically orthogonal rather than electrically orthogonal. In a clinical test using only 3 of the standard leads, 95 of 100 records could be adequately interpreted. This high level of satisfactory interpretation with 3 leads has been experienced by other investigators. The 4-electrode system is currently being used for studies in high fidelity electrocardiology. It is suggested that a more appropriate name for vectorcardiogram (VCG) would be correlocardiogram (CCG). The 4-electrode system would simplify and be useful for this application. A simple 4-electrode 3-lead X-Y-Z system would facilitate the teaching, recording and interpretation of ECG information by eliminating excessive redundant data. Evaluation of this lead system by other investigators is invited. abstract_id: PUBMED:33020917 A method for direct imaging of x-z cross-sections of fluorescent samples. The x-z cross-sectional profiles of fluorescent objects can be distorted in confocal microscopy, in large part due to mismatch between the refractive index of the immersion medium of typical high numerical aperture objectives and the refractive index of the medium in which the sample is present.
Here, we introduce a method to mount fluorescent samples parallel to the optical axis. This mounting allows direct imaging of what would normally be an x-z cross-section of the object, in the x-y plane of the microscope. With this approach, the x-y cross-sections of fluorescent beads were seen to have significantly lower shape-distortions as compared to x-z cross-sections reconstructed from confocal z-stacks. We further tested the method for imaging of nuclear and cellular heights in cultured cells, and found that they are significantly flatter than previously reported. This approach allows improved imaging of the x-z cross-section of fluorescent samples. LAY DESCRIPTION: Optical distortions are common in confocal microscopy. In particular, the mismatch between the refractive index of the immersion medium of the microscope objective and the refractive index of the sample medium distorts the shapes of fluorescent objects in the x-z plane of the microscope. Here, we introduced a method to eliminate the shape-distortion in the x-z cross-sections. This was achieved by mounting fluorescent samples on vertical glass slides such that the cross-sections orthogonal to the glass surface could be imaged in the x-y plane of the microscope. Our method successfully improved the imaging of nuclear and cellular heights in cultured cells and revealed that the heights were significantly flatter than previously reported with conventional approaches. abstract_id: PUBMED:36892759 Crystallization of Z-DNA in Complex with Chemical and Z-DNA Binding Z-Alpha Protein. The molecular basis of Z-DNA recognition and stabilization is mostly discovered via X-ray crystallography. Sequences composed of alternating purines and pyrimidines are known to adopt the Z-DNA conformation. Due to the energy penalty for forming Z-DNA, a small-molecule stabilizer or a Z-DNA-specific binding protein is required for DNA to adopt the Z conformation prior to crystallizing Z-DNA. Here we describe in detail the methods ranging from preparation of DNA and Z-alpha protein to crystallization of Z-DNA. abstract_id: PUBMED:15058266 Protein Z Protein Z (PZ) is a 6.2 kDa vitamin K-dependent protein, synthesized in the liver. The gene for human PZ is localized to chromosome 13 at band q34. The structure of PZ is very similar to that of factors VII, IX, X and protein C. Very low plasma levels of protein Z were observed under oral anticoagulant treatment. The cause of this phenomenon might be increased protein Z binding on the surface of endothelial cells. Protein Z is consumed during coagulopathy. About 60% of humans suffering from a bleeding tendency of unknown origin presented with decreased plasma levels of protein Z. PZ forms a Ca(2+)-dependent complex with activated factor X (Xa) on phospholipid surfaces, which leads to the inhibition of factor Xa and a decrease in thrombin generation. Inhibition of factor Xa may be caused directly by protein Z or indirectly by the activity of protein Z-dependent protease inhibitor (ZPI). ZPI is a 72 kDa member of the serpin family of proteinase inhibitors, synthesized in the liver. ZPI circulates in plasma in complex with protein Z. ZPI in the presence of Ca2+ and phospholipids inhibits factor Xa. The presence of protein Z enhances this process by more than 1000 times. ZPI also inhibits activated factor XI in the absence of protein Z, Ca2+ or phospholipids. Protein Z deficiency may induce bleeding as well as prothrombotic tendencies and might occur as an inherited disorder.
Protein Z deficiency may aggravate mild bleeding tendency in subjects with diagnosed borderline decrease in von Willebrand factor and factor VII activity. Patients presenting with factor V Leiden mutation and low protein Z levels show earlier onset and higher frequency of thromboembolic events compared to patients with normal protein Z levels. abstract_id: PUBMED:11848125 L-shell x-ray fluorescence measurements of lead in bone: system development. This paper reports on the development of an L-shell x-ray fluorescence (XRF) bone lead measurement system. A secondary target gave greater lead x-ray peak signal-to-background ratios than partially plane polarized XRF. Filtration did not improve the lead x-ray peak signal-to-background ratio: the gains in spectrum quality were outweighed by the losses caused by attenuation. There was a substantial matrix effect: the signal from a calcium-rich matrix was far lower than that from a calcium-free matrix. The effect of attenuation was, as expected, profound for the lead L x-rays: detection limits ranged from 18 to 217 microg Pb/g plaster with attenuation equivalent to 0-2.1 mm of skin or 0-3.7 mm of adipose tissue for the Pb Lalpha x-ray group (10.5 keV), and from 16 to 184 microg Pb/g plaster with attenuation equivalent to 0-1.3 mm of skin or 0-2.3 mm of adipose tissue for the Pb Lbeta x-ray group (12.6 keV). abstract_id: PUBMED:17441240 A 4 x 500 mm2 cloverleaf detector system for in vivo bone lead measurement. A 4 x 500 mm2 "cloverleaf" low energy germanium detector array has been assembled for the purpose of in vivo bone lead measurement through x-ray fluorescence. Using 109Cd as an exciting source, results are reported from a leg phantom simulating measurement of lead in a human tibia. For high activity (4.0-4.4 GBq) and low activity (0.18-0.19 GBq) sources, measurement results are reported for both the cloverleaf system and a conventional single detector system of equivalent surface area (2000 mm2). The mean uncertainty and reproducibility of measurement were both significantly improved for the cloverleaf system with a high activity 109Cd source. When using a source activity of 4.4 GBq, measurement of the phantom resulted in an average bone lead uncertainty of 0.79 microg/g and a reproducibility of 0.84 microg/g. These results represent the highest precision yet reported from a bone lead x-ray fluorescence system. abstract_id: PUBMED:12499590 Taxumairols X--Z, new taxoids from Taiwanese Taxus mairei. In addition to 19-hydroxybaccatin III, 1beta-hydroxy-5 alpha-deacetylbaccatin I, taxayuntin G and 13-O-deacetyltaxumairol Z (4), three new taxane diterpenoids, taxumairols X (1), Y (2), Z (3) have been isolated from extracts of the Formosan Taxus mairei (LEMEE & LEVL.) S. Y. HU. Compounds 1-2 belong to the 11(15→1)-abeo-taxane system, having a tetrahydrofuran ring at C-2, C-3, C-4 and C-20. The new compound 3 and compound 4, which was previously misidentified, are derivatives of 11(15→1)-abeo-taxane with an intact oxirane system. The structures of compounds 1-4 were elucidated on the basis of extensive two-dimensional (2D) NMR analysis. abstract_id: PUBMED:432262 An X-ray fluorescence technique for in vivo determination of lead concentration in a bone matrix. We have previously reported the in vivo detection of lead in the skeleton of man by means of X-ray fluorescence analysis using a 740 MBq 57Co source for excitation and a 1 cm(3) Ge(Li) detector for registration of the Pb Kalpha and Kbeta radiation.
The varying geometry, density and atomic composition of the tissues of interest (mainly fingers) introduce several problems in estimation of the true concentration of a given element. A two-component cylindrical finger phantom was therefore constructed from silica paraffin wax and animal bone ash. The diameter of the finger bone was estimated from X-ray examinations in two orthogonal projections. The bone mineral concentration was then estimated from the quotient of the number of coherent and Compton scattered primary photons. The lead concentration in the finger bones was then derived from a measurement on a finger phantom made of silica paraffin wax and bone ash with the same size and bone mineral concentration as the real bone. The minimum detectable lead concentration in a finger bone was 14 microgram g(-1) for 15 min measuring time. The lead concentration measured in workers from a metal industry was found to be in the range of 40-100 microgram g(-1). abstract_id: PUBMED:11545507 Calibration and characterization of a digital X-ray fluorescence bone lead system. Five different combinations of digital shaping parameters were tested for a newly assembled 109Cd source, K X-ray fluorescence bone lead system. System calibration results are presented, along with analyses of measurement uncertainty and reproducibility obtained from repeat measurements of a bone phantom and a human tibia. Digital shaping parameters of 2.4 micros for a rise time/fall time and 1.2 micros for a flat top width were identified as superior. The digital system provided significant improvements in overall measurement precision, with gains of at least 25-35% over conventional system results. Answer: Late potential analysis using a mathematically derived X,Y,Z lead system is not fully comparable to a true orthogonal X,Y,Z lead system. A study that aimed to validate an ECG system allowing late potential (LP) analysis using X,Y,Z leads mathematically derived from the standard 12-lead ECG found that while there was a good correlation for the filtered QRS complex duration (r = 0.92), the correlation was poor for the terminal QRS amplitude (RMS) and duration (LAS) criteria (r = 0.66 and 0.61, respectively; P < 0.0001). When defining LPs as present if at least two of the three time domain criteria were abnormal, the results matched in 78% of patients but differed in 22%. Therefore, the study concluded that SAECG registration should currently be performed with true X,Y,Z leads, until the accuracy of other approaches is validated (PUBMED:12431307).
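To illustrate the Bland-Altman method that PUBMED:12431307 used to compare derived and true X,Y,Z leads, the sketch below computes the bias and 95% limits of agreement on synthetic filtered-QRS durations. The numbers are simulated, not the study's data; only the procedure is the point.

```python
# Hedged Bland-Altman sketch on simulated paired measurements.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
true_qrs = rng.normal(110, 15, 36)             # ms, toy "true-lead" values
derived_qrs = true_qrs + rng.normal(0, 4, 36)  # ms, toy "derived-lead" values

mean_pair = (true_qrs + derived_qrs) / 2
diff_pair = derived_qrs - true_qrs
bias = diff_pair.mean()
loa = 1.96 * diff_pair.std(ddof=1)             # 95% limits of agreement

plt.scatter(mean_pair, diff_pair)
for y in (bias, bias - loa, bias + loa):
    plt.axhline(y, linestyle="--")
plt.xlabel("Mean of the two methods (ms)")
plt.ylabel("Derived minus true (ms)")
plt.title("Bland-Altman: derived vs true X,Y,Z leads (synthetic data)")
plt.show()
```

A narrow band around zero indicates agreement; systematic bias or wide limits, as the study found for the RMS and LAS criteria, argues against treating the two lead systems as interchangeable.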
Instruction: Pilot study in young Thai children with delayed bottle-weaning at Queen Sirikit National Institute of Child Health: does it affect iron status? Abstracts: abstract_id: PUBMED:19253508 Situation of baby bottle use: is it suitable to recommend weaning by the age of one year? Background: Inappropriate baby bottle use is associated with many adverse health effects, such as dental caries and food refusal. International pediatric institutes suggest weaning by the age of 1 year. Establishing a practical recommendation for Thai children requires a situation analysis. Objective: To determine the percentage of baby bottle use, including late-night feeding, behavior contributing to baby bottle addiction, and the chance of adverse health effects. Material And Method: A cross-sectional descriptive study was performed in the well child clinic at QSNICH during November 2003-December 2007. One thousand thirty-eight caretakers from 13 groups of children aged 1 month-4 years were randomly included. Questionnaires were used and analyzed with the SPSS program. Results: A total of 1,038 caretakers were interviewed. Parents comprised 70% of the caretakers. Bottle feeding persisted in 92%, 70%, and 42% of children aged 1-2 years, 2-3 years, and 3-4 years, respectively, and night feeding persisted in 70%, 50%, and 37%, respectively. More specifically, up to 85% of children at the age of six months received night feedings. The weaning ages from the bottle were widely distributed: the earliest was one year (1%), the mean age was 2.5 years (SD = 0.612), and the mode was at the age of two years (13%). Forty-six percent of children aged 6 months-4 years received a bottle to fall asleep with, and 34% of caretakers offered bottle feeding whenever the child merely stirred. After bedtime mouth care, 48% of children went back to bottle-feeding. Eighteen percent of bottle-fed children aged 2-4 years received more milk volume than recommended, with a maximum amount of 56 ounces a day. Fifty-six percent of children at 2 years and 70% at 4 years received bottle-feeding more frequently than recommended, with a maximum of 11 times a day. Eighty-eight percent of the caretakers did not know the recommended age of weaning. Conclusion: The children in this study still used baby bottles and had night feedings far beyond the recommended age, including the practices of taking a bottle to sleep and returning to the bottle after dental care, which can lead to addiction and adverse health effects. The ages at which the children quit, and caretakers' awareness of when to quit, were scattered. Suggestion: The recommendation should be weaning at the age of one year, flexible by up to half a year to the age of one and a half years, with encouragement of appropriate use and preparation for the weaning process. abstract_id: PUBMED:28511552 R147W in PROC Gene Is a Risk Factor of Thromboembolism in Thai Children. The p.R147W mutation, c.C6152T in exon 7 of the PROC gene, causing a change in amino acid from arginine to tryptophan, has been reported as a common mutation in Taiwanese populations with venous thromboembolism (VTE). The present study aimed to identify the prevalence of p.R147W in the Thai population and children with TE and the risk of developing TE. Patients aged ≤18 years diagnosed with TE were enrolled. The PROC gene was amplified by polymerase chain reaction using a specific primer in exon 7. A restriction fragment length polymorphism assay was designed using the MwoI restriction enzyme. A total of 184 patients and 690 controls were enrolled.
The most common diagnosis of TE was arterial ischemic stroke (AIS), at 100 (54.3%), followed by VTE, at 38 (20.6%), and cerebral venous sinus thrombosis (CVST), at 23 (12.5%). The prevalence of heterozygous and homozygous p.R147W in patients and controls was 9.5% versus 5.8% and 2.7% versus 0.1%, respectively. Heterozygous p.R147W had odds ratios (ORs) of 1.8 (95% confidence interval [CI]: 1.0-3.2, P = .04), 3.2 (95% CI: 1.2-8.2, P = .009), and 4.5 (95% CI: 1.6-12.8, P = .002) of developing overall TE, VTE, and CVST, respectively. Homozygous p.R147W had ORs of 20.2 (95% CI: 2.3-173.7, P < .001), 21.4 (95% CI: 2.2-207.9, P < .001), and 43.3 (95% CI: 3.8-490.6, P < .001) of developing overall TE, AIS, and CVST, respectively. This study suggested that p.R147W is a common mutation that increases the risk of TE in Thai children. abstract_id: PUBMED:35863207 HLA-DRB1∗1502 Is Associated With Anti-N-Methyl-D-aspartate Receptor Encephalitis in Thai Children. Background: Anti-N-methyl-d-aspartate receptor encephalitis (anti-NMDARE) is one of the most common types of autoimmune encephalitis. Most patients have no apparent immunologic triggers, which suggests a genetic predisposition. This study was conducted to identify human leukocyte antigen (HLA) class II alleles associated with anti-NMDARE in Thai children. Methods: This case-control study enrolled patients younger than 18 years who were diagnosed with anti-NMDARE between January 2010 and December 2020. A "good outcome" was defined as a modified Rankin scale score of less than 2 at any follow-up visit. HLA genotypes were determined at four-digit alleles using reverse sequence-specific oligonucleotide probe hybridization. The HLA class II allele frequency in patients was compared with that in a database of 101 healthy control Thai children. Results: Thirty-four patients were enrolled with a mean age of 12.8 ± 5.6 years (females 85.3%). The HLA-DRB1∗1502 allele frequency was significantly higher in patients than in controls (odds ratio, 2.32; 95% confidence interval, 1.11-4.8, P = 0.023). A good outcome was noted in 14 of 14 (100%) HLA-DRB1∗1502-positive patients (median time to a good outcome, 6 months) and 14 of 17 (82.3%) HLA-DRB1∗1502-negative patients (median time to a good outcome, 3 months). Two (11.8%) HLA-DRB1∗1502-positive patients had one relapse each, and six (35.3%) HLA-DRB1∗1502-negative patients had one to three relapses. Conclusions: HLA-DRB1∗1502 was significantly associated with anti-NMDARE in our patients. Most patients had good outcomes. HLA-DRB1∗1502-positive patients tended to require a longer time to achieve a good outcome but had less frequent relapses than HLA-DRB1∗1502-negative patients. abstract_id: PUBMED:25902157 Evolutionary relationship of 5'-untranslated regions among Thai dengue-3 viruses, Bangkok isolates, during 24 years of evolution. Objective: To study the evolutionary relationship of the 5'-untranslated regions (5'UTRs) in low-passage dengue-3 viruses (DEN3) isolated from hospitalized children with different clinical manifestations in Bangkok during 24 years of evolution (1977-2000), compared with the DEN3 prototype (H87). Methods: The 5'UTRs of these Thai DEN3 and the H87 prototype were amplified by RT-PCR and sequenced. Their multiple sequence alignments were done with CodonCode Aligner v 4.0.4 software and their RNA secondary structures were predicted with MFOLD software. Replication of five Thai DEN3 candidates, compared with the H87 prototype, was assessed in human (HepG2) and mosquito (C6/36) cell lines.
Results: Among these Thai DEN3, the first 89 nucleotides were completely identical, the high-order secondary structure of the 5'UTR was conserved, and three SNPs were found: the predominant C90T and two minor SNPs, A109G and A112G. The C90T of the Thai DEN3 Bangkok isolates was predominant from before 1977. Five Thai DEN3 candidates with the predominant C90T were shown to replicate in human (HepG2) and mosquito (C6/36) cell lines better than the H87 prototype. However, their highly conserved sequences as well as the SNPs of the 5'UTR did not appear to correlate with disease severity in humans. Conclusions: Our findings highlighted the evolutionary relationship of the completely identical 89-nucleotide sequence, the high-order secondary structure and the predominant C90T of the 5'UTR of these Thai DEN3 during 24 years of evolution, further suggesting that they may serve as genetic markers and attractive targets for future research on antiviral therapy as well as vaccine approaches for Thai DEN3. abstract_id: PUBMED:22043763 Preliminary study on assessment of lead exposure in Thai children aged between 3-7 years old who live in Umphang district, Tak Province. Background: The Centers for Disease Control of the United States of America (CDC) informed the Ministry of Public Health, Thailand, that up to 13% of Burmese refugee children who were transferred to the United States of America during 2007-2009 had elevated blood lead levels (EBLL, blood lead level ≥10 microg/dl). These were children from a number of refugee camps in Tak Province; two camps are near Umphang but other camps are not. In June 2008, the investigation by the Centers for Disease Control/Thailand Ministry of Public Health Collaboration (CDC/TUC) and the International Organization for Migration, Thailand, indicated that 33 of 64 children aged 6 months to 15 years (5.1%) living in the Mae La, Umpiem and Nupo camps had elevated blood lead levels. However, no study had examined how Thai children living near those camps were exposed to lead. Subsequently, Queen Sirikit National Institute of Child Health, Bangkok, Thailand, contacted relevant organizations in Tak Province in order to investigate lead exposure and evaluate the health status of Thai children who live close to Burmese refugee camps. Objective: 1) To evaluate lead exposure of Thai children living near Burmese refugee camps; 2) To assess risk factors for lead exposure in these children. Material And Method: The present study was a retrospective study based on information gathered from a health assessment of 213 Thai children aged 3-7 years living near Burmese refugee camps. The health assessment was conducted from April 30th, 2010 to May 5th, 2010. The information came from 3 sources. The first source was blood sampling to assess lead and ferritin levels. The next source was interviews of primary caregivers to identify risk factors for lead exposure in the target children. The last source was physical examination and developmental assessment conducted by pediatricians and specialist child development nurses to identify health and developmental problems. Results: The study population comprised 213 Thai children aged 3-7 years, with an average age of 54.54 +/- 12.41 months. The average blood lead level was 7.71 +/- 4.62 microg/dl (range = 3-25 microg/dl). Fifty-seven children (26%) had a blood lead level of 10 microg/dl or more.
Analysis of adjusted odds ratios (adjusted OR), controlling for all risk factors affecting blood lead level (≥10 microg/dl), indicated that only gender and source of drinking water were risk factors. Male children had a 2.8 times higher risk than female children, and children who drank tap water or canal water had 15 times and 72 times higher risk, respectively, than children drinking bottled water. Conclusion: The present study shows that one in four Thai children aged 3-7 years in Umphang district, Tak Province, who lived near Burmese refugee camps had a blood lead level above the level of concern. Thus, it is necessary to identify risk factors for lead exposure and to consider a blood lead screening policy in some areas of Thailand. abstract_id: PUBMED:28684918 Genotype and phenotype correlation in intracranial hemorrhage in neonatal factor VII deficiency among Thai children. Congenital factor VII (FVII) deficiency is a rare inherited coagulopathy. The clinical manifestations and findings vary widely, ranging from asymptomatic to life-threatening bleeding, including intracranial hemorrhage (ICH), with prolonged prothrombin time, normal partial thromboplastin time and normal platelet counts, confirmed by a low FVII assay level. Treatment consists of fresh frozen plasma (FFP), prothrombin complex concentrates (PCCs), and recombinant activated FVII to treat bleeding, as well as prophylactic therapy. Here, we report four patients with FVII levels <5% (severe type) who presented with ICH during the neonatal period. The IVS6+1G>T was the most common (50%) mutation identified in our study, followed by the K376X nonsense mutation (37.5%). In our study, we found that genetic information affected the severity of congenital FVII deficiency with ICH. abstract_id: PUBMED:34890117 Young-onset diabetes patients in Thailand: Data from Thai Type 1 Diabetes and Diabetes diagnosed Age before 30 years Registry, Care and Network (T1DDAR CN). Aims/introduction: There is a lack of current information regarding young-onset diabetes in Thailand. Thus, the objectives of this study were to describe the types of diabetes, the clinical characteristics, the treatment regimens and the achievement of glycemic control in Thai patients with young-onset diabetes. Materials And Methods: Data of 2,844 patients with diabetes onset before 30 years of age were retrospectively reviewed from a diabetes registry comprising 31 hospitals in Thailand. Gestational diabetes was excluded. Results: Based on clinical criteria, type 1 diabetes was identified in 62.6% of patients, type 2 diabetes in 30.7%, neonatal diabetes in 0.8%, other monogenic diabetes in 1.7%, secondary diabetes in 3.0%, genetic syndromes associated with diabetes in 0.9% and other types of diabetes in 0.4%. Type 1 diabetes accounted for 72.3% of patients with age of onset <20 years. The proportion of type 2 diabetes was 61.0% among patients with age of onset from 20 to <30 years. Intensive insulin treatment was prescribed to 55.2% of type 1 diabetes patients. Oral antidiabetic agents alone were used in 50.8% of type 2 diabetes patients, whereas 44.1% received insulin treatment. Most monogenic diabetes, secondary diabetes and genetic syndromes associated with diabetes required insulin treatment.
Achievement of glycemic control was identified in 12.4% of type 1 diabetes patients, 30% of type 2 diabetes patients, 36.4% of neonatal diabetes patients, 28.3% of other monogenic diabetes patients, 45.6% of secondary diabetes patients and 28% of patients with genetic syndromes associated with diabetes. Conclusion: In this registry, type 1 diabetes remains the most common type and the prevalence of type 2 diabetes increases with age. The majority of patients did not achieve the glycemic target, especially type 1 diabetes patients. abstract_id: PUBMED:27996280 A comparative pilot study of the efficacy and safety of nebulized magnesium sulfate and intravenous magnesium sulfate in children with severe acute asthma. Introduction: Severe asthma attacks are life-threatening and require serious medical attention. Intravenous MgSO₄ is an effective medication proven to improve outcomes. To date, most research has focused on administration of nebulized MgSO₄ in adults with critical asthma; however, its benefits for treating childhood asthma have been little investigated. This study compared the clinical efficacy and adverse effects of nebulized MgSO₄ and intravenous MgSO₄ in the treatment of children with severe acute asthma. Method: A prospective, open-label, randomized, controlled pilot study was conducted in children with severe asthma exacerbation admitted to the Queen Sirikit National Institute of Child Health. Twenty-eight patients were randomized to receive three intermittent doses of nebulized or intravenous MgSO₄. The Modified Wood's Clinical Asthma Score was determined prior to, and at 20, 40, 60, 120, 180 and 240 minutes after, treatment administration. The length of hospital stay was also recorded. Results: Fifteen patients received nebulized isotonic MgSO₄ and 13 were administered intravenous MgSO₄. There were no differences in the baseline characteristics of the two groups, including their initial asthma severity scores (4.87 ± 0.92 vs. 5.0 ± 0.82; p = 0.69). No statistically significant differences between the two groups were identified from 60 minutes (2.47 ± 0.83 vs. 2.77 ± 0.93; p = 0.37) through 240 minutes. The length of hospital stay for both groups was also similar (4.0 ± 1.2 vs. 4.54 ± 2.7; p = 0.51). No adverse effects from MgSO₄ administration were observed among the participants. Conclusions: In this small sample, we demonstrated that nebulized MgSO₄ and intravenous MgSO₄ are both clinically beneficial and safe for Thai children suffering from severe asthma exacerbation. abstract_id: PUBMED:33822359 Rapid exome sequencing as the first-tier investigation for diagnosis of acutely and severely ill children and adults in Thailand. The use of rapid DNA sequencing technology in severely ill children in developed countries can accurately identify diagnoses and positively impact patient outcomes. This study sought to evaluate the outcome of deploying rapid whole exome sequencing (rWES) in Thai children and adults with critical illnesses of unknown etiology. We recruited 54 unrelated patients from 11 hospitals throughout Thailand. The median age was 3 months (range, 2 days-55 years), including 47 children and 7 adults, with 52% males. The median time from obtaining blood samples to issuing the rWES report was 12 days (range, 5-27 days). A molecular diagnosis was established in 25 patients (46%), resulting in a change in clinical management for 24 patients (44%) and improved clinical outcomes in 16 patients (30%).
Four out of seven adult patients (57%) received a molecular diagnosis, which led to a change in management. The 25 diagnoses comprised 23 different diseases. Of the 34 identified variants, 15 had never been previously reported. This study suggests that use of rWES as a first-tier investigation tool can provide tremendous benefits in critically ill patients with unknown etiology across age groups in Thailand. abstract_id: PUBMED:22043765 Metabolic syndrome in obese Thai children and adolescents. Background: The prevalence of childhood obesity in Thailand is increasing. Obese children are at risk of metabolic syndrome. Objective: To determine the prevalence of metabolic syndrome in obese Thai children with various degrees of obesity and its association with severity of obesity, insulin resistance and C-reactive protein. Material And Method: A cross-sectional study of 89 obese Thai children and adolescents was conducted at the Department of Pediatrics, Faculty of Medicine, Ramathibodi Hospital, Mahidol University. Family histories of diabetes mellitus, hypertension, obesity and dyslipidemia were assessed. Anthropometry and cardiovascular risks, including fasting blood sugar, oral glucose tolerance test, insulin, C-reactive protein (CRP) and lipid profile, were determined. Metabolic syndrome was defined using International Diabetes Federation criteria adjusted for age and sex. Univariate and logistic regression analyses were used to identify independently associated factors. Results: The overall prevalence of metabolic syndrome in the present study was 16.9%. The percentages of metabolic syndrome in subjects with moderate, severe and morbid obesity were 10.5%, 23.1% and 22.2%, respectively. Univariate analysis revealed that metabolic syndrome had a statistically significant association with an insulin level over 25 microIU/mL, a homeostasis model assessment of insulin resistance (HOMA-IR) value of 3.16 or more, and CRP over 3 mg/L. Logistic regression analysis revealed that only an insulin level over 25 microIU/mL was independently associated with metabolic syndrome (OR 7.24; 95% CI: 2.01-26.10). Conclusion: The prevalence of metabolic syndrome is high among obese Thai children and adolescents. Prevention and proper management of metabolic syndrome, including treatment of obesity, should be considered in obese children. Answer: The provided abstracts do not contain any information about a pilot study specifically addressing the impact of delayed bottle weaning on iron status in young Thai children at the Queen Sirikit National Institute of Child Health. The abstracts cover various topics related to child health in Thailand, including baby bottle use and weaning recommendations (PUBMED:19253508), genetic factors associated with thromboembolism (PUBMED:28511552), HLA associations with anti-NMDA receptor encephalitis (PUBMED:35863207), evolutionary relationships of dengue virus (PUBMED:25902157), lead exposure in children (PUBMED:22043763), genotype-phenotype correlation in factor VII deficiency (PUBMED:28684918), young-onset diabetes (PUBMED:34890117), efficacy and safety of nebulized magnesium sulfate in acute asthma (PUBMED:27996280), rapid exome sequencing for diagnosis in critically ill patients (PUBMED:33822359), and metabolic syndrome in obese Thai children (PUBMED:22043765). None of these abstracts mention a study about delayed bottle weaning and its effect on iron status.
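Note: several abstracts in this record report odds ratios with 95% confidence intervals (PUBMED:28511552, PUBMED:22043763, PUBMED:22043765). The Python sketch below shows the standard calculation — an odds ratio with a Woolf (log-based Wald) confidence interval from a 2x2 table. The counts are hypothetical, chosen only so the output lands near the reported heterozygous p.R147W OR of 1.8; they are not the studies' raw data.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a, b = carriers/non-carriers among cases; c, d = carriers/non-carriers among controls.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 18 carriers among 184 cases, 40 carriers among 690 controls.
or_, lo, hi = odds_ratio_ci(18, 184 - 18, 40, 690 - 40)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")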
Instruction: Is Mycoplasma hominis a vaginal pathogen? Abstracts: abstract_id: PUBMED:11158693 Is Mycoplasma hominis a vaginal pathogen? Objective: To evaluate the role of Mycoplasma hominis as a vaginal pathogen. Design: Prospective study comprising detailed history, clinical examination, sexually transmitted infection (STI) and bacterial vaginosis screen, vaginal swabs for mycoplasmas and other organisms, follow-up of bacterial vaginosis patients, and analysis of results using the SPSS package. Setting: Genitourinary medicine clinic, Royal Liverpool University Hospital. Participants: 1200 consecutive unselected new patients who had not received an antimicrobial in the preceding 3 weeks and who were seen by the principal author between June 1987 and May 1995. Main Outcome Measures: Relation of the M. hominis isolation rate and colony count to: (a) vaginal symptoms and the number of polymorphonuclear leucocytes (PMN) per high power field in the Gram stained vaginal smear in patients with a single condition--that is, candidiasis, bacterial vaginosis, genital warts, chlamydial infection, or trichomoniasis, as well as in patients with no genital infection; (b) epidemiological characteristics of bacterial vaginosis. Results: 1568 diagnoses were made (the numbers with a single condition are in parentheses). These included 291 (154) cases of candidiasis, 208 (123) cases of bacterial vaginosis, 240 (93) with genital warts, 140 (42) chlamydial infections, 54 (29) cases of trichomoniasis, and 249 women with no condition requiring treatment. M. hominis was found in the vagina in 341 women, but its isolation rates and colony counts among those with symptoms were not significantly different from those without symptoms in the single condition categories. There was no association between M. hominis and the number of PMN in Gram stained vaginal smears, whether M. hominis was present alone or in combination with another single condition. M. hominis had no impact on the epidemiological characteristics of bacterial vaginosis. Conclusion: This study shows no evidence that M. hominis is a vaginal pathogen in adults. abstract_id: PUBMED:9024109 Vaginal flora changes associated with Mycoplasma hominis. Objective: The aim of this study was to investigate any association between vaginal carriage of Mycoplasma hominis and genital signs and symptoms, other microbial findings, and some risk behavior factors in women with and without bacterial vaginosis. Study Design: Women who had attended two family planning clinics and a youth clinic for contraceptive advice were divided depending on the result of vaginal culture for Mycoplasma hominis and the occurrence of bacterial vaginosis. The study population included 123 (12.3%) women who harbored Mycoplasma hominis. The 873 (87.7%) with a negative culture for Mycoplasma hominis served as a comparison group. In the former group, 50 (40.7%) had bacterial vaginosis, as was the case in 81 (9.3%) of the women in the comparison group. The groups were compared with regard to genital signs and symptoms; results of vaginal wet smear microscopy and other office tests; vaginal flora changes as detected by culture and other means; and detection of sexually transmitted diseases. Any history of sexually transmitted diseases and other genital infections, reproductive history, use of oral contraceptives, and smoking habits were registered.
Results: Women who harbored Mycoplasma hominis had significantly more often complained of a fishy odor, and more often had a positive amine test, a vaginal pH > 4.7, and clue cells, than the comparison group; all of these findings held both before and after bacterial vaginosis had been excluded. Vaginal discharge was not complained of significantly more often, and a pathologic discharge was not more often detected, in the Mycoplasma hominis carriers. Ureaplasma urealyticum occurred in 75% of the Mycoplasma hominis-positive women and in 59% of the comparison group (p = 0.001). The leukocyte/epithelial cell ratio did not differ significantly from that of the Mycoplasma hominis culture-negative controls. Conclusion: The study suggests that Mycoplasma hominis is associated with a number of genital signs and symptoms even after exclusion of bacterial vaginosis. abstract_id: PUBMED:28695118 Mycoplasma hominis and Mycoplasma genitalium in the Vaginal Microbiota and Persistent High-Risk Human Papillomavirus Infection. Background: Recent studies have suggested that the vaginal microenvironment plays a role in persistence of high-risk human papillomavirus (hrHPV) infection and thus cervical carcinogenesis. Furthermore, it has been shown that some mycoplasmas are efficient methylators and may facilitate carcinogenesis through methylation of hrHPV and cervical somatic cells. We examined associations between the prevalence and persistence of Mycoplasma spp. in the vaginal microbiota and prevalent as well as persistent hrHPV infections. Methods: We examined 194 Nigerian women who were tested for hrHPV infection using SPF25/LiPA10, and we identified Mycoplasma genitalium and Mycoplasma hominis in their vaginal microbiota, established by sequencing the V3-V4 hypervariable regions of the 16S rRNA gene. We defined the prevalence of M. genitalium, M. hominis, and hrHPV based on a positive result of baseline tests, while persistence was defined as positive results from two consecutive tests. We used exact logistic regression models to estimate associations between Mycoplasma spp. and hrHPV infections. Results: The mean (SD) age of the study participants was 38 (8) years; 71% were HIV positive, 30% M. genitalium positive, 45% M. hominis positive, and 40% hrHPV positive at baseline. At follow-up, 16% of the women remained positive for M. genitalium, 30% for M. hominis, and 31% for hrHPV. There was a significant association between persistent M. hominis and persistent hrHPV (OR 8.78, 95% CI 1.49-51.6, p 0.01). Women who were positive for HIV and had persistent M. hominis had a threefold increase in the odds of having persistent hrHPV infection (OR 3.28, 95% CI 1.31-8.74, p 0.008), compared to women who were negative for both. Conclusion: We found a significant association between persistent M. hominis in the vaginal microbiota and persistent hrHPV in this study, but we could not rule out reverse causation. Our findings need to be replicated in larger, longitudinal studies and, if confirmed, could have important diagnostic and therapeutic implications. abstract_id: PUBMED:16922160 Mycoplasma hominis in the female genital tract. Depending on a woman's health, the female genital tract is colonised by different microorganisms. Mycoplasma hominis was the first mycoplasma of human origin to be isolated. M. hominis, a common inhabitant of the vagina of healthy women, becomes pathogenic once it invades the internal genital organs. M.
hominis is associated with bacterial vaginosis, but it is still unclear whether the organism really contributes to a pathological process in which so many different bacteria are involved. The aim of this article is to summarize known information about these microorganisms. abstract_id: PUBMED:15959994 Impact of Mycoplasma hominis and Ureaplasma urealyticum on the concentration of proinflammatory cytokines in vaginal fluid. The main aim of this study was to determine the impact of Mycoplasma hominis and Ureaplasma urealyticum on the concentrations of selected proinflammatory cytokines in vaginal fluid in pregnant women. Samples were obtained from 120 pregnant women at 22 to 36 weeks of gestation. Vaginal fluid was analyzed for the concentrations of IL-1 alpha, IL-1 beta, IL-6 and IL-8 using a standard enzyme-linked immunosorbent assay (ELISA) technique, and cervical fluid for the presence of Mycoplasma hominis and Ureaplasma urealyticum. Genital mycoplasmas were diagnosed in 36 of 120 pregnant women (30%): in 17 of 36 women (47.2%) both M. hominis and U. urealyticum, in 14 women (38.9%) only U. urealyticum, and in 5 cases (13.8%) only M. hominis were diagnosed. Vaginal levels of IL-8 were significantly higher among women with genital mycoplasma infection compared to the group without these bacteria (p = 0.033), while there was no correlation between IL-1 alpha, IL-1 beta and IL-6 concentrations and genital mycoplasma infection. Future studies should concentrate on evaluating the impact of other lower genital tract bacteria on the concentration of IL-8 and other proinflammatory cytokines. abstract_id: PUBMED:32131651 Vaginal Ureaplasma urealyticum or Mycoplasma hominis and preterm delivery in women with threatened preterm labor. Background: Amniotic fluid infection with Ureaplasma urealyticum or Mycoplasma hominis can cause chorioamnionitis and preterm birth. The aim of this study was to examine whether vaginal Ureaplasma urealyticum/Mycoplasma hominis colonization is predictive of preterm delivery in patients exhibiting signs of threatened preterm birth or those with an asymptomatic short cervix. Methods: The present retrospective study, which was performed in a perinatal tertiary center, included patients carrying a singleton pregnancy who were referred to the emergency Ob/Gyn unit because of regular preterm uterine contractions and/or short cervical length (<20 mm) at 22-33 weeks of gestation, and in whom a vaginal U. urealyticum/M. hominis examination (Urea-arginine LYO-2, BioMerieux®) was performed. Univariate and multivariate analyses were performed to assess the association between vaginal U. urealyticum or M. hominis and chorioamnionitis or preterm delivery. Results: The median gestational age of the 94 enrolled patients was 29.9 weeks, and 54 (57%) of the patients were vaginal U. urealyticum/M. hominis-positive. The preterm delivery rate in the positive group was higher than in the negative group (53% versus 25%; p = .007). Vaginal U. urealyticum/M. hominis positivity was found to be an independent risk factor for preterm birth at <37 weeks of gestation (adjusted odds ratio = 4.0, 95% confidence interval, 1.1-15.3) in a multivariate analysis adjusted for age, history of preterm delivery and conization, gestational age, cervical length, presence of vaginal bleeding, vaginal fetal fibronectin and serum C-reactive protein at the time of testing. U. urealyticum/M. hominis positivity was not associated with delivery at <34 weeks or with chorioamnionitis. Conclusion: A positive vaginal U. urealyticum/M.
hominis culture is an independent predictive factor for preterm birth in patients with symptomatic threatened preterm labor and/or a short cervix. abstract_id: PUBMED:28838864 Pelvic Abscess Secondary to Mycoplasma Hominis after Vaginal Laceration. Background: Mycoplasma hominis frequently colonizes the urogenital and respiratory tracts of healthy individuals. It has also been associated with genitourinary tract and extragenital syndromes. Case: We present a 14-year-old girl who developed a pelvic abscess secondary to M. hominis after a vaginal laceration during sexual intercourse. Despite drainage and broad-spectrum antimicrobial therapy, the patient remained symptomatic until M. hominis was identified and specific therapy instituted. Summary And Conclusion: Health care providers need to be aware of the potential for M. hominis as a causal agent in patients who present with pelvic abscesses after vaginal trauma. This case highlights the challenges that exist in the diagnosis and treatment of M. hominis, because bacterial cultures are often negative and empiric antimicrobial agents do not provide adequate antimicrobial coverage. abstract_id: PUBMED:29054551 Mycoplasma hominis bacteremia. An underestimated etiological agent. Mycoplasma hominis is a fastidious bacterium which usually colonizes the lower urogenital tract and may cause systemic infections in neonates and genital infections in adults. It can also be the cause of serious extra-genital infections, mainly in immunosuppressed or predisposed subjects. Case Presentation: We describe a case of bacteremia caused by M. hominis in a previously healthy woman after uterine curettage due to an incomplete abortion. M. hominis could be an underestimated cause of bacteremia in immunocompetent patients. Mycoplasma organisms have fastidious growth requirements, are often difficult to culture on a cell-free medium and have no cell wall, so conventional detection methods may fail. This is the first report of M. hominis isolation from a positive automated blood culture (BD BACTEC, USA). abstract_id: PUBMED:23427443 Diagnosis of Mycoplasma hominis, Ureaplasma parvum and Ureaplasma urealyticum in patients with bacterial vaginosis. An observational descriptive study to determine the frequency of Mycoplasma hominis, Ureaplasma parvum and Ureaplasma urealyticum isolates in patients with bacterial vaginosis was carried out in 296 patients who had vaginal secretions and were seen at two hospitals. The diagnosis was based on Amsel's criteria. Endocervical swabs were taken from women positive for this disease for M. hominis and Ureaplasma spp. diagnosis by traditional methods. Polymerase chain reaction identified U. parvum and U. urealyticum. Bacterial vaginosis was diagnosed in 30.1% of the women, and in 77.5% of them the studied urogenital mycoplasmas were present. M. hominis was the most common species (71%), whereas U. parvum and U. urealyticum were detected in 23.2% and 5.8% of cases, respectively. Diagnosis of mycoplasmas and ureaplasmas should be performed in women with bacterial vaginosis, to allow adequate therapeutic control and avoid future pathologies of the genital tract. abstract_id: PUBMED:33610259 Mycoplasma hominis: An under recognized pathogen. Mycoplasma hominis, a commensal of the genital tract, is a potentially underestimated pathogen causing both genitourinary and extragenital infections, including neonatal infections.
Septic arthritis, prosthetic joint infection, central nervous system (CNS) infections, infective endocarditis and abscess formation are common extragenital infections, associated mainly with immunocompromised patients. Mycoplasma hominis lipoproteins play an important role in pathogenicity and directly interact with the host immune system. Polymerase chain reaction (PCR) is the mainstay of diagnosis. Increasing resistance to tetracyclines and quinolones, which are used for treatment, is a matter of global concern. We reviewed the PubMed literature and the Google search engine for recent developments on the association of Mycoplasma hominis with various diseases, and on its pathogenesis, diagnosis and treatment. Answer: The role of Mycoplasma hominis as a vaginal pathogen is debated in the literature. One study suggests that M. hominis is not a vaginal pathogen in adults, as its isolation rates and colony counts were not significantly different among symptomatic and asymptomatic women, and there was no association with the number of polymorphonuclear leucocytes in Gram-stained vaginal smears (PUBMED:11158693). However, another study found that women who harbored M. hominis had significantly more often complained of a fishy odor, had a positive amine test, a vaginal pH > 4.7, and clue cells, even after excluding bacterial vaginosis, suggesting an association with genital signs and symptoms (PUBMED:9024109). Furthermore, a significant association was found between persistent M. hominis in the vaginal microbiota and persistent high-risk human papillomavirus (hrHPV) infection, indicating a potential role in the persistence of hrHPV infections (PUBMED:28695118). M. hominis is also known to be a common inhabitant of the vagina of healthy women, but it can become pathogenic once it invades the internal genital organs, and it is associated with bacterial vaginosis, although its contribution to the pathological process is unclear (PUBMED:16922160). Additionally, M. hominis has been linked to increased concentrations of proinflammatory cytokines in vaginal fluid, which could suggest a role in inflammatory processes (PUBMED:15959994). It has also been identified as an independent risk factor for preterm birth in patients with symptomatic threatened preterm labor and/or short cervix (PUBMED:32131651). Cases of pelvic abscess secondary to M. hominis after vaginal trauma have been reported, highlighting the challenges in diagnosis and treatment due to its fastidious nature and potential resistance to empirical antimicrobial agents (PUBMED:28838864). In summary, while some studies do not support the role of M. hominis as a primary vaginal pathogen, others link it to genital signs and symptoms, inflammation, persistent hrHPV infection, preterm birth, and abscess formation, so its pathogenic potential cannot be dismissed.
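Note: a recurring pattern in this record is the comparison of two proportions, for example U. urealyticum in 75% of 123 M. hominis-positive women versus 59% of 873 culture-negative women (PUBMED:9024109). The Python sketch below runs a pooled two-proportion z-test for that kind of comparison; the numerator counts (92 and 515) are approximate reconstructions from the published percentages and denominators, used here only to illustrate the arithmetic.

import math

def two_proportion_z(x1, n1, x2, n2):
    # Pooled two-proportion z-test; returns z and the two-sided p-value.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # normal-approximation p
    return z, p_two_sided

z, p = two_proportion_z(92, 123, 515, 873)
print(f"z = {z:.2f}, p = {p:.4f}")  # close to the reported p = 0.001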
Instruction: Are RECIST criteria sufficient to assess response to therapy in neuroendocrine tumors? Abstracts: abstract_id: PUBMED:22726975 Are RECIST criteria sufficient to assess response to therapy in neuroendocrine tumors? Material And Methods: Within the group of 47 patients treated with peptide receptor radionuclide therapy (PRRT), four patients were chosen: three with inoperable tumors without liver metastases and one with two lesions in the pancreas and metastases. Results: In all patients, after PRRT, the changes in the sum of the longest diameters of tumors were between -1% and -21%, resulting in stable disease being reported [strict Response Evaluation Criteria in Solid Tumors (RECIST)]. However, measurements of tumor volume and attenuation on computed tomography, and of the tumor to nontumor ratio on somatostatin receptor scintigraphy, resulted in different response assessments. Conclusions: The RECIST standard may not be sufficient to properly assess the therapy response in patients with neuroendocrine tumors. abstract_id: PUBMED:34989825 Defining disease status in gastroenteropancreatic neuroendocrine tumors: Choi-criteria or RECIST? Purpose: Adequate monitoring of changes in tumor load is fundamental for the assessment of the course of disease and response to treatment. There is an ongoing debate on the utility of RECIST v1.1 in gastroenteropancreatic neuroendocrine tumors (GEP-NETs). Methods: In this retrospective real-life cohort study, Choi-criteria were compared with RECIST v1.1. The agreement between both criteria and the association with survival endpoints were evaluated. Results: Seventy-five patients were included, with a median follow-up of 35 months (range 8-53). Median progression-free survival (mPFS) according to RECIST v1.1 was 15 months (range 2-50), compared to 14 months (range 2-50) with Choi. According to RECIST, 33 (44%) patients were classified as having stable disease (SD), 40 (53%) as progressive disease (PD) and two (3%) patients as partial response (PR), compared to 9 (12%) patients classified as SD, 50 (67%) as PD and 16 (21%) as PR according to Choi-criteria. Overall concordance between the criteria was moderate (Cohen's Kappa = 0.408, p < 0.001) and agreement varied between 57 and 69% at each consecutive scan (p < 0.001). Survival analysis showed significant differences in overall survival (OS) for RECIST v1.1 categories PD and non-PD (log-rank p = 0.02); however, in Choi no significant differences in OS were found (p = 0.27). Conclusion: RECIST v1.1 had better clinical utility and prognostic value compared to Choi-criteria. Still, RECIST were also not sufficient to adequately predict OS. This outlines the need for new tools that provide accurate information on the disease course and treatment response to support precise prognostication in patients with GEP-NETs. abstract_id: PUBMED:33871762 Are RECIST criteria adequate in assessing the response to therapy in metastatic NEN? Response to therapy criteria, known as RECIST (Response Evaluation Criteria in Solid Tumours), are widely used to evaluate neuroendocrine tumours (NET) metastatic to the liver under treatment. RECIST criteria do not take into account various distinct features, such as tumour growth, secretory capacity and anatomical localisation, given the wide variation in clinical and biological presentation of different NETs.
Key features of RECIST include definitions of the minimal size of measurable lesions, instructions on how many lesions to measure and follow, and the use of unidimensional, rather than bidimensional, measures for overall evaluation of tumour burden. These measurements are currently done with computed tomography (CT) or magnetic resonance imaging (MRI). RECIST criteria are accurate in assessing tumour progression but sometimes inaccurate in assessing tumour response after locoregional therapy or under molecular targeted therapy, tumour vessels being part of the target of such treatments. There is poor correlation between so-called tumour necrosis and conventional methods of response assessment, which raises the question of how best to quantify the efficacy of these targeted therapies. Variations in tumour density on computed tomography (CT) could theoretically be associated with tumour necrosis. This hypothesis has been studied by proposing alternative CT criteria for response evaluation in metastatic digestive NET treated with targeted therapy. If further studies confirm the preliminary finding that the correlation between CT density evolution curves (derived from Choi criteria) and PFS is poor, and that the correlation between density change and response to non-targeted treatment is weak, then contrast injection will probably not be mandatory for appropriate evaluation. abstract_id: PUBMED:35745849 Comparison of Choi, RECIST and Somatostatin Receptor PET/CT Based Criteria for the Evaluation of Response and Response Prediction to PRRT. Aim: The most suitable method for assessment of response to peptide receptor radionuclide therapy (PRRT) of neuroendocrine tumors (NET) is still under debate. In this study we aimed to compare size (RECIST 1.1), density (Choi), Standardized Uptake Value (SUV) and a newly defined combined parameter, ZP, derived from Somatostatin Receptor (SSR) PET/CT for prediction of both response to PRRT and overall survival (OS). Material and Methods: Thirty-four NET patients with progressive disease (F:M 23:11; mean age 61.2 y; SD ± 12) treated with PRRT using either Lu-177 DOTATOC or Lu-177 DOTATATE and imaged with Ga-68 SSR PET/CT approximately 10-12 weeks prior to and after each treatment cycle were retrospectively analyzed. Median duration of follow-up after the first cycle was 63.9 months (range 6.2-86.2). A total of 77 lesions (2-8 per patient) were analyzed. Response assessment was performed according to RECIST 1.1, Choi and modified EORTC (MORE) criteria. In addition, a new parameter named ZP, the product of Hounsfield units (HU) and SUVmean (standardized uptake value) of a tumor lesion, was tested. Further, SUV values (max and mean) of the tumor were normalized to the SUV of normal liver parenchyma. Tumor response was defined as CR, PR, or SD. The gold standard for comparison of baseline parameters for prediction of response of individual target lesions to PRRT was change in lesion size according to RECIST 1.1. For prediction of overall survival, the response after the first and second PRRT were tested. Results: Based on RECIST 1.1, Choi, MORE, and ZP, 85.3%, 64.7%, 61.8%, and 70.6% achieved a response, whereas 14.7%, 35.3%, 38.2%, and 29.4% demonstrated PD (progressive disease), respectively. Baseline ZP and ZPnormalized were found to be the only parameters predictive of lesion progression after three PRRT cycles (AUC ZP 0.753; 95% CI 0.6-0.9, p 0.037; AUC ZPnormalized 0.766; 95% CI 0.6-0.9; p 0.029).
Based on a cut-off value of 1201, ZP achieved a sensitivity of 86% and a specificity of 67%, while ZPnormalized reached a sensitivity of 86% and a specificity of 76% at a cut-off value of 198. Median OS in the total cohort was not reached. In univariate analysis, amongst all parameters, only patients having progressive disease according to MORE after the second cycle of PRRT were found to have significantly shorter overall survival (median OS not reached in objective responders, 29.2 months in PD; p 0.015). Patients progressive after two cycles of PRRT according to ZP had shorter OS compared to those responding (median OS not reached for responders, 47.2 months for PD; p 0.066). Conclusions: In this explorative study, we showed that Choi, RECIST 1.1, and SUVmax-based response evaluations varied significantly from each other. Only patients showing progressive disease after two PRRT cycles according to MORE criteria had a worse prognosis, while baseline ZP and ZPnormalized performed best in predicting lesion progression after three cycles of PRRT. abstract_id: PUBMED:30406360 The RECIST criteria compared to conventional response evaluation after peptide receptor radionuclide therapy in patients with neuroendocrine neoplasms. Objective: The Response Evaluation Criteria In Solid Tumors (RECIST) is the most widely used radiological method for evaluating response after peptide receptor radionuclide therapy (PRRT) in patients with neuroendocrine tumors. This method may give too positive estimates of response in slow-growing tumors, as it allows a substantial increase in tumor size before patients are classified as having progressive disease. We wanted to compare RECIST with a conventional method in routine use for estimating treatment effect, based on defining any unequivocal increase in tumor load as progressive disease. We also wanted to investigate whether any differences had clinical implications. Methods: Patients treated with 177Lutetium-DOTA-octreotate having at least one follow-up radiological response evaluation were included. Radiological examinations were retrospectively evaluated by RECIST and compared to the radiological evaluations performed at regular follow-up examinations. Results: Seventy-nine patients were included; 33 (42%) were women, median age 65 years. The primary tumor was located in the small intestine in 35 (44%) and in the pancreas in 27 (34%) of the patients. The indication for treatment was progressive disease in 71 (90%) patients. Based on RECIST, 67 (85%) patients had objective response or stable disease as best effect, versus 59 (75%) patients based on the conventional method (p < 0.001). Median progression free survival was 33 months estimated by RECIST and 28 months estimated with the conventional method (p < 0.001). Eight (10%) patients received tumor-targeted therapy due to progressive disease based on the conventional method while still having stable disease according to RECIST. Conclusion: Response evaluation after PRRT with RECIST gave more positive estimates for treatment effects compared to a method where any unequivocal change in tumor load was regarded as significant. These differences had clinical implications. abstract_id: PUBMED:37345276 Proposal of early CT morphological criteria for response of liver metastases to systemic treatments in gastroenteropancreatic neuroendocrine tumors: Alternatives to RECIST.
RECIST 1.1 criteria are commonly used with computed tomography (CT) to evaluate the efficacy of systemic treatments in patients with neuroendocrine tumors (NETs) and liver metastases (LMs), but their relevance is questioned in this setting. We aimed to explore alternative criteria using different numbers of measured LMs and thresholds of size and density variation. We retrospectively studied patients with advanced pancreatic or small intestine NETs with LMs, treated with systemic treatment in the first- and/or second-line, without early progression, in 14 European expert centers. We compared time to treatment failure (TTF) between responders and non-responders according to various criteria defined by a 0%, 10%, 20% or 30% decrease in the sum of LM sizes, and/or by a 10%, 15% or 20% decrease in LM density, measured on two, three or five LMs, on baseline (≤1 month before treatment initiation) and first reevaluation (≤6 months) contrast-enhanced CT scans. Multivariable Cox proportional hazard models were performed to adjust the association between response criteria and TTF for prognostic factors. We included 129 systemic treatments (long-acting somatostatin analogs 41.9%, chemotherapy 26.4%, targeted therapies 31.8%), administered as first-line (53.5%) or second-line (46.5%) therapies in 91 patients. A decrease ≥10% in the size of three LMs was the response criterion that best predicted prolonged TTF, with significance at multivariable analysis (HR 1.90; 95% CI: 1.06-3.40; p = .03). Conversely, response defined by RECIST 1.1 did not predict prolonged TTF (p = .91), and neither did criteria based on changes in LM density. A ≥10% decrease in the size of three LMs could be a more clinically relevant criterion than the current 30% threshold utilized by RECIST 1.1 for the evaluation of treatment efficacy in patients with advanced NETs. Its implementation in clinical trials is mandatory for prospective validation. Criteria based on changes in LM density were not predictive of treatment efficacy. CLINICAL TRIAL REGISTRATION: Registered at CNIL-CERB, Assistance publique hopitaux de Paris as "E-NETNET-L-E-CT", July 2018. No number was assigned. Approved by the Medical Ethics Review Board of University Medical Center Groningen. abstract_id: PUBMED:31477779 Evaluating radiological response in pancreatic neuroendocrine tumours treated with sunitinib: comparison of Choi versus RECIST criteria (CRIPNET_GETNE1504 study). Background: The purpose of our study was to analyse the usefulness of Choi criteria versus RECIST in patients with pancreatic neuroendocrine tumours (PanNETs) treated with sunitinib. Method: A multicentre, prospective study was conducted in 10 Spanish centres. Computed tomographies, at least every 6 months, were centrally evaluated until tumour progression. Results: One hundred and seven patients were included. Median progression-free survival (PFS) by RECIST and Choi was 11.42 months (95% confidence interval [CI], 9.7-15.9) and 15.8 months (95% CI, 13.9-25.7), respectively. PFS by Choi (Kendall's τ = 0.72) exhibited greater correlation with overall survival (OS) than PFS by RECIST (Kendall's τ = 0.43). RECIST incorrectly estimated prognosis in 49.6%. The partial response rate increased from 12.8% to 47.4% with Choi criteria. Twenty-four percent of patients with progressive disease according to Choi had stable disease as per RECIST, overestimating treatment effect. Choi criteria predicted PFS/OS. Changes in attenuation occurred early and accounted for 21% of the variations in tumour volume.
Attenuation and tumour growth rate (TGR) were associated with improved survival. Conclusion: Choi criteria were able to capture sunitinib's activity in a clinically significant manner better than RECIST; their implementation in standard clinical practice should be strongly considered in PanNET patients treated with this drug. abstract_id: PUBMED:32778165 Early response assessment and prediction of overall survival after peptide receptor radionuclide therapy. Background: Response after peptide receptor radionuclide therapy (PRRT) can be evaluated using anatomical imaging (CT/MRI), somatostatin receptor imaging ([68Ga]Ga-DOTA-TATE PET/CT), and serum Chromogranin-A (CgA). The aim of this retrospective study is to assess the role of these response evaluation methods and their predictive value for overall survival (OS). Methods: Imaging and CgA levels were acquired prior to the start of PRRT, and 3 and 9 months after completion. Tumour size was measured on anatomical imaging and response was categorized according to RECIST 1.1 and Choi criteria. [68Ga]Ga-DOTA-TATE uptake was quantified in both target lesions depicted on anatomical imaging and separately identified PET target lesions, which were either followed over time or newly identified on each scan with PERCIST-based criteria. Response evaluation methods were compared with Cox regression analyses and log-rank tests for association with OS. Results: A total of 44 patients were included, with a median follow-up of 31 months (IQR 26-36 months) and a median OS of 39 months (IQR 32 months-not reached). Progressive disease after 9 months (according to RECIST 1.1) was significantly associated with worse OS compared to stable disease [HR 9.04 (95% CI 2.10-38.85)], however not compared to patients with partial response. According to Choi criteria, progressive disease was also significantly associated with worse OS compared to stable disease [HR 6.10 (95% CI 1.38-27.05)] and compared to patients with partial response [HR 22.66 (95% CI 2.33-219.99)]. In some patients, new lesions were detected earlier with [68Ga]Ga-DOTA-TATE PET/CT than with anatomical imaging. After 3 months, new lesions on [68Ga]Ga-DOTA-TATE PET/CT which were not visible on anatomical imaging were detected in 4/41 (10%) patients, and in another 3/27 (11%) patients after 9 months. However, no association between change in uptake on [68Ga]Ga-DOTA-TATE PET/CT or serum CgA measurements and OS was observed. Conclusions: Progression on anatomical imaging performed 9 months after PRRT is associated with worse OS compared to stable disease or partial response. Although new lesions were detected earlier with [68Ga]Ga-DOTA-TATE PET/CT than with anatomical imaging, [68Ga]Ga-DOTA-TATE uptake and serum CgA after PRRT were not predictive of OS in this cohort with a limited number of patients and follow-up time. abstract_id: PUBMED:19088924 Treatment response to transcatheter arterial embolization and chemoembolization in primary and metastatic tumors of the liver. Introduction: Transcatheter arterial embolization (TAE) and chemoembolization (TACE) are increasingly used to treat unresectable primary and metastatic liver tumors. The purpose of this study was to determine the objective response to TAE and TACE in unresectable hepatic malignancies and to identify clinicopathologic predictors of response. Materials And Methods: Seventy-nine consecutive patients who underwent 119 TAE/TACE procedures between 1998 and 2006 were reviewed.
The change in maximal diameter of 121 evaluable lesions in 56 patients was calculated from pre- and post-procedure imaging. Response rates were determined using Response Evaluation Criteria in Solid Tumors (RECIST) guidelines. The Kaplan-Meier method was used to compare survival in responders vs. non-responders and in primary vs. metastatic histologies. Results: TAE and TACE resulted in a mean decrease in lesion size of 10.3% ± 1.9% (p < 0.001). TACE (vs. TAE) and carcinoid tumors were associated with a greater response (p < 0.05). Lesion response was not predicted by pre-treatment size, vascularity, or histology. The RECIST partial response (PR) rate was 12.3%, and all partial responders were in the TACE group. Neuroendocrine tumors, and specifically carcinoid lesions, had a significantly greater PR rate (p < 0.05). Overall survival, however, was not associated with histology or radiologic response. Discussion: TAE and TACE produce a significant objective treatment response by RECIST criteria. Response is greatest in neuroendocrine tumors and is independent of vascularity and lesion size. TACE appears to be superior to TAE. Although an association of response with improved survival was not demonstrated, large cohort studies are necessary to further define this relationship. abstract_id: PUBMED:33322819 The Challenge of Evaluating Response to Peptide Receptor Radionuclide Therapy in Gastroenteropancreatic Neuroendocrine Tumors: The Present and the Future. The NETTER-1 study has proven peptide receptor radionuclide therapy (PRRT) to be one of the most effective therapeutic options for metastatic neuroendocrine tumors (NETs), improving progression-free survival and overall survival. However, PRRT response assessment is challenging, and no consensus on methods and timing has yet been reached among experts in the field. This issue stems from the suboptimal sensitivity and specificity of clinical biomarkers, the limitations of morphological response criteria in slowly growing tumors and necrotic changes after therapy, a lack of standardized parameters and timing of functional imaging, and the heterogeneity of PRRT protocols in the literature. The aim of this article is to review the most relevant current approaches for PRRT efficacy prediction and response assessment criteria in order to provide an overview of suitable tools for safe and efficacious PRRT. Answer: The RECIST criteria are widely used to evaluate the response to therapy in neuroendocrine tumors (NETs), but their sufficiency has been questioned in several studies. In a study of patients treated with peptide receptor radionuclide therapy (PRRT), RECIST criteria showed stable disease based on changes in the longest diameters of tumors, but other assessments such as tumor volume, attenuation in computed tomography, and tumor to nontumor ratio in somatostatin receptor scintigraphy suggested different responses, indicating that RECIST may not be sufficient to properly assess therapy response in NET patients (PUBMED:22726975). A retrospective study comparing Choi-criteria with RECIST v1.1 in gastroenteropancreatic neuroendocrine tumors (GEP-NETs) found that RECIST v1.1 had better clinical utility and prognostic value compared to Choi-criteria, but still was not sufficient to adequately predict overall survival (OS), highlighting the need for new tools for accurate assessment of disease course and treatment response (PUBMED:34989825).
Another study pointed out that RECIST criteria do not account for various distinct features of NETs, such as tumor growth, secretory capacity, and anatomical localization, and may be inaccurate in assessing tumor response after locoregional or molecular targeted therapy, suggesting that variations in tumor density could be associated with tumor necrosis and might be an alternative criterion for response evaluation (PUBMED:33871762). A comparison of Choi, RECIST, and Somatostatin Receptor PET/CT-based criteria for evaluating response to PRRT in NETs showed significant variation among the methods, with only patients showing progressive disease after two PRRT cycles according to modified EORTC (MORE) criteria having a worse prognosis, while baseline ZP and ZPnormalized were predictive of lesion progression after three cycles of PRRT (PUBMED:35745849). In a study comparing RECIST with a conventional method where any unequivocal increase in tumor size was considered progressive disease, RECIST provided more positive estimates for treatment effects, which had clinical implications as some patients received tumor-targeted therapy based on the conventional method while still classified as stable disease by RECIST (PUBMED:30406360). An exploration of alternative criteria using different numbers of measured liver metastases and thresholds of size and density variation found that a ≥10% decrease in the size of three liver metastases was the response criterion that best predicted prolonged time to treatment failure (TTF), suggesting that this could be a more clinically relevant criterion than the current 30% threshold utilized by RECIST 1.1 for evaluating treatment efficacy in patients with advanced NETs. However, criteria based on changes in liver metastasis density were not predictive of treatment efficacy (PUBMED:37345276). In the CRIPNET_GETNE1504 study, the usefulness of Choi criteria versus RECIST was analyzed in patients with pancreatic neuroendocrine tumors (PanNETs) treated with sunitinib. The study found that Choi criteria better captured sunitinib's activity and were more closely correlated with overall survival than RECIST, suggesting that Choi criteria should be strongly considered in clinical practice for PanNET patients treated with sunitinib (PUBMED:31477779). A retrospective study assessing the role of various response evaluation methods after PRRT and their predictive value for overall survival found that progression on anatomical imaging performed 9 months after PRRT was associated with worse overall survival compared to stable disease or partial response. However, changes in uptake on [68Ga]Ga-DOTA-TATE PET/CT or serum Chromogranin-A measurements after PRRT were not predictive for overall survival in the cohort studied (PUBMED:32778165). Lastly, a study on the objective response to transcatheter arterial embolization (TAE) and chemoembolization (TACE) in unresectable hepatic malignancies found that TAE and TACE produced a significant objective treatment response by RECIST criteria, especially in neuroendocrine tumors. However, an association of response with improved survival was not demonstrated, indicating the need for larger cohort studies to define this relationship (PUBMED:19088924). In summary, while RECIST criteria are a standard tool for assessing response to therapy in NETs, multiple studies suggest that they may not be sufficient due to their limitations in capturing the complexity of NETs' response to various treatments. 
Alternative criteria, such as those based on tumor density changes, the Choi criteria, and other novel parameters, may provide more accurate assessments and have been proposed for use in clinical practice and future research (PUBMED:33322819).
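For context, the size thresholds referenced throughout this answer follow the standard published definitions (stated here for reference; they are not drawn from the abstracts themselves). Under RECIST 1.1, response is scored on the sum of the longest diameters (SLD) of target lesions:
\[ \text{PR}: \frac{\mathrm{SLD}_{\text{baseline}} - \mathrm{SLD}_{\text{current}}}{\mathrm{SLD}_{\text{baseline}}} \geq 30\%, \qquad \text{PD}: \frac{\mathrm{SLD}_{\text{current}} - \mathrm{SLD}_{\text{nadir}}}{\mathrm{SLD}_{\text{nadir}}} \geq 20\%. \]
The Choi criteria, by contrast, call a partial response at a ≥10% decrease in unidimensional size or a ≥15% decrease in CT attenuation (Hounsfield units), which is why the ≥10% size criterion evaluated in PUBMED:37345276 represents a lower bar than the 30% RECIST threshold.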
Instruction: Does urban extent from satellite images relate to symptoms of asthma, rhinoconjunctivitis and eczema in children? Abstracts: abstract_id: PUBMED:27211111 Does urban extent from satellite images relate to symptoms of asthma, rhinoconjunctivitis and eczema in children? A cross-sectional study from ISAAC Phase Three. Objective: The relationship between urbanisation and the symptom prevalence of asthma, rhinoconjunctivitis and eczema is not clear, and varying definitions of urban extent have been used. Furthermore, a global analysis has not been undertaken. This study aimed to determine whether the symptom prevalence of asthma, rhinoconjunctivitis and eczema in centres involved in the International Study of Asthma and Allergies in Childhood (ISAAC) was higher in urban than rural centres, using a definition of urban extent as land cover from satellite data. Methods: A global map of urban extent from satellite images (MOD500 map) was used to define the urban extent criterion. Maps from the ISAAC centres were digitised and merged with the MOD500 map to describe the urban percentage of each centre. We investigated the association between the symptom prevalence of asthma, rhinoconjunctivitis and eczema and the percentage of urban extent by centre. Results: A weak negative relationship was found between the percentage of urban extent of each ISAAC centre and current wheeze in the 13-14-year age group. This association was not statistically significant after adjusting for region of the world and gross national income. No other relationship was found between urban extent and symptoms of asthma, rhinoconjunctivitis and eczema. Conclusions: In this study, the prevalence of symptoms of asthma, rhinoconjunctivitis and eczema in children was not associated with urbanisation, according to the land cover definition of urban extent from satellite data. Comparable standardised definitions of urbanisation need to be developed so that global comparisons can be made. abstract_id: PUBMED:24612913 Prevalence of asthma, rhinitis and eczema symptoms in rural and urban school-aged children from Oropeza Province - Bolivia: a cross-sectional study. Background: Asthma and allergies are common chronic diseases worldwide among children and young people. Little information is available about the prevalence of these diseases in rural areas of Latin America. This study assesses the prevalence of symptoms of asthma and allergies among children in urban and rural areas of Oropeza Province in Bolivia. Methods: The Spanish version of the ISAAC standardized questionnaire and the ISAAC video questionnaire were administered to 2584 children attending the fifth elementary grade in 36 schools in Oropeza province (response 91%). Lifetime, 12-month and severity prevalences were determined for asthma, rhinitis and eczema symptoms. Odds ratios (OR) with 95% confidence intervals (95% CI) were calculated adjusting for age using generalized linear mixed-effects models. Results: Median age of children was 11 years, 74.8% attended public schools, and 52.1% were female. While children attending urban schools had a lower prevalence of self-reported wheeze in the written questionnaire (adjusted OR 0.6; 95% CI 0.4-1.9), they were more likely than children attending rural schools to report wheeze in the video questionnaire (aOR 2.1; 95% CI 1.0-2.6). They also more frequently reported severe rhinoconjunctivitis (aOR 2.8; 95% CI 1.2-6.6) and severe eczema symptoms (aOR 3.3; 95% CI 1.0-11.0).
Conclusion: Overall, in accordance with the hygiene hypothesis, children living in urban areas of Bolivia seem to have a higher prevalence of symptoms of asthma and allergies compared to children living in the countryside. In order to develop primary prevention strategies, environmental factors need to be identified in future studies. abstract_id: PUBMED:23766726 Childhood asthma and allergies in urban, semiurban, and rural residential sectors in Chile. While rural living protects from asthma and allergies in many countries, results are conflicting in Latin America. We studied the prevalence of asthma and asthma symptoms in children from urban, semiurban, and rural sectors in south Chile. A cross-sectional questionnaire study was conducted in semiurban and rural sectors in the province of Valdivia (n = 559) using the ISAAC (International Study of Asthma and Allergies in Childhood) questionnaire. Results were compared to prevalence in urban Valdivia (n = 3105) by using data from the ISAAC III study. Odds ratios (with 95% confidence intervals) were calculated. No statistically significant differences were found for asthma ever and eczema symptoms stratified by residential sector, but a gradient could be shown for current asthma and rhinoconjunctivitis symptoms, with urban living having the highest and rural living the lowest prevalence. Rural living was inversely and statistically significantly associated with current asthma (OR: 0.4; 95% CI: 0.2-0.9) and rhinoconjunctivitis symptoms (OR: 0.3; 95% CI: 0.2-0.7) in logistic regression analyses. Rural living seems to protect from asthma and respiratory allergies also in Chile, a South American country facing epidemiological transition. These data would be improved by clinical studies of allergic symptoms observed in the studied sectors. abstract_id: PUBMED:21055079 The comparison of the indoor environmental factors associated with asthma and related allergies among school children between urban and suburban areas in Beijing Objective: To study the indoor environmental factors associated with the prevalence of asthma and related allergies among school children. Methods: A cluster sampling method was used and the ISAAC questionnaire was administered. A total of 4612 elementary students up to Grade Five from 7 schools were enrolled in the survey of the impact of indoor environmental factors on the prevalence of asthma and related allergies in several urban and suburban schools of Beijing. Results: A total of 4060 samples were finally analyzed, including 1992 urban and 2068 suburban children. The prevalence of wheeze, allergic rhinoconjunctivitis and atopic eczema in the past 12 months was 3.1% (61/1992), 5.3% (106/1992), and 1.1% (22/1992) among urban children, versus 1.3% (27/2068), 3.1% (65/2068), and 1.0% (22/2068) among suburban children, respectively. The prevalence of wheeze and allergic rhinoconjunctivitis in the past 12 months in urban children was significantly higher than in suburban children (χ² = 14.77 and 11.93, P < 0.01). The incidences of ever having asthma and eczema among urban children (5.3% (105/1992) and 29.4% (586/1992)) were significantly higher (χ² = 39.03 and 147.22, P < 0.01) than those among suburban children (1.7% (35/2068) and 13.8% (285/2068)). Although the distributions of indoor environmental factors were similar in both areas, passive smoking and interior decoration had different influences on the prevalence of asthma and related allergies among school children in the two areas.
A significant impact of passive smoking on ever having asthma was observed among suburban children (OR = 2.70, 95% CI = 1.17-6.23), while no significant result was found among urban children (OR = 1.06, 95% CI = 0.71-1.58); the percentage of interior decoration was 84.0% (1673/1992) among urban children and 80.0% (1655/2068) among suburban children, and there was a significant impact of interior decoration on the prevalence of ever having eczema among urban children (OR = 1.57, 95% CI = 1.17-2.10), but no significant result was found in the suburban sample (OR = 1.06, 95% CI = 0.76-1.48). Conclusion: The prevalence of asthma and related allergies among school children is much higher in urban areas than in suburban areas, and indoor environmental factors such as passive smoking and interior decoration may differently explain the prevalence of asthma and related allergies in the two areas. abstract_id: PUBMED:23591930 Effects of air pollution on lung function and symptoms of asthma, rhinitis and eczema in primary school children. Health effects of ambient air pollution were studied in three groups of schoolchildren living in areas (suburban, urban and urban-traffic) with different air pollution levels in Eskişehir, Turkey. This study involved 1,880 students aged between 9 and 13 years from 16 public primary schools. This two-season study was conducted from January 2008 through March 2009. Symptoms of asthma, rhinitis and eczema were determined by the International Study of Asthma and Allergies in Childhood questionnaire in 2008. Two lung function tests were performed by each child for the summer and winter seasons, with simultaneous ambient air measurements of ozone (O3), nitrogen dioxide (NO2) and sulfur dioxide (SO2) by passive sampling. Effects of air pollution on impaired lung function and symptoms in schoolchildren were estimated by multivariate logistic regression analyses. Girls with impaired lung function (only for the summer season evaluation) were more frequently observed in suburban and urban areas than in the urban-traffic area ([odds ratio (OR) = 1.49; 95% confidence interval (CI) 1.04-2.14] and [OR = 1.69 (95% CI 1.06-2.71)] for suburban vs. urban-traffic and urban vs. urban-traffic, respectively). A significant association between ambient ozone concentrations and impaired lung function (per 10 μg/m³ increase) was found only for girls in the summer season evaluation [OR = 1.11 (95% CI 1.03-1.19)]. No association was found for boys or for the winter season evaluation. No association was found between any of the measured air pollutants and symptoms of current wheeze, current rhinoconjunctivitis and current itchy rash. The results of this study showed that increasing ozone concentrations may cause a sub-acute impairment in lung function of school-aged children. abstract_id: PUBMED:10065208 Prevalence and severity of symptoms of asthma, allergic rhino-conjunctivitis and atopic eczema in secondary school children in Ibadan, Nigeria. This study was part of the effort of the International Study of Asthma and Allergies in Childhood (ISAAC) Steering Committee to evaluate the epidemiology of asthma and allergic diseases around the world. Three thousand and fifty-eight randomly selected children aged 13-14 years were studied, using a standard questionnaire developed and field tested by the ISAAC Steering Committee, to establish the prevalence and severity of symptoms of asthma, allergic rhinoconjunctivitis and atopic eczema.
Of the 3,058 children, there were 1,659 (54.3%) females and 1,399 (45.7%) males (F:M ratio 1.2:1). The cumulative prevalence rates of wheezing, rhinitis other than common cold, and symptoms of eczema were 16.4%, 54.1% and 26.1%, respectively, while within the immediate 12-month period, the rates were 10.7%, 45.2% and 22.4%, respectively. However, rhinitis associated with itchy eyes (allergic rhinoconjunctivitis) was reported by 39.2% of the school children. The prevalence of doctor-diagnosed asthma was 18.4%. Multiple logistic regression analysis showed that a higher prevalence of wheezing and rhinitis was associated with itchy eyes. The prevalence of severe symptoms of asthma, allergic rhinitis and eczema was higher when compared with a similar study in Kenya. However, the prevalence of symptoms of asthma was lower and that of allergic rhinoconjunctivitis higher in our series. There is a need for further studies to investigate the risk factors which might be responsible for the apparently different patterns in these two African countries. abstract_id: PUBMED:34200291 Lack of Consistent Association between Asthma, Allergic Diseases, and Intestinal Helminth Infection in School-Aged Children in the Province of Bengo, Angola. Epidemiological studies have shown conflicting findings on the relationship between asthma, atopy, and intestinal helminth infections. There are no such studies from Angola; therefore, we aimed to evaluate the relationship between asthma, allergic diseases, atopy, and intestinal helminth infection in Angolan schoolchildren. We performed a cross-sectional study of schoolchildren between September and November 2017. Five schools (three urban, two rural) were randomly selected. Asthma, rhinoconjunctivitis, and eczema were defined by appropriate symptoms in the previous 12 months; atopy was defined by positive skin prick tests (SPT) or aeroallergen-specific IgE; intestinal helminths were detected by faecal sample microscopy. In total, 1023 children were evaluated (48.4% female; 57.6% aged 10-14 years; 60.5% urban). Asthma, rhinoconjunctivitis, or eczema were present in 9%, 6%, and 16% of the studied children, respectively. Only 8% of children had positive SPT, but 64% had positive sIgE. Additionally, 40% were infected with any intestinal helminth (A. lumbricoides 25.9%, T. trichiura 7.6%, and H. nana 6.3%). There were no consistent associations between intestinal helminth infections and asthma, allergic diseases, or atopy, except for A. lumbricoides, which was inversely associated with rhinoconjunctivitis and directly associated with aeroallergen-specific IgE. We concluded that, overall, intestinal helminth infections were not consistently associated with allergic symptoms or atopy. Future, preferably longitudinal, studies should collect more detailed information on helminth infections as part of clusters of environmental determinants of allergies. abstract_id: PUBMED:30031263 Associations between allergic symptoms and phosphate flame retardants in dust and their urinary metabolites among school children. Background: Phosphate flame retardants (PFRs) are ubiquitously detected in indoor environments. Despite increasing health concerns pertaining to PFR exposure, few epidemiological studies have examined PFR exposure and its effect on children's allergies. Objectives: To investigate the association between PFRs in house dust, their metabolites in urine, and symptoms of wheeze and allergies among school-aged children.
Methods: A total of 128 elementary school-aged children were enrolled. House dust samples were collected from upper-surface objects. Urine samples were collected from the first morning void. Levels of 11 PFRs in dust and 14 PFR metabolites in urine were measured. Parent-reported symptoms of wheeze, rhinoconjunctivitis, and eczema were evaluated using the International Study of Asthma and Allergies in Childhood questionnaire. The odds ratios (ORs) of the ln-transformed PFR concentrations and categorical values were calculated using a logistic regression model adjusted for sex, grade, dampness index, annual house income, and creatinine level (for PFR metabolites only). Results: The prevalence rates of wheeze, rhinoconjunctivitis, and eczema were 22.7%, 36.7%, and 28.1%, respectively. A significant association between tris(1,3-dichloroisopropyl) phosphate (TDCIPP) in dust and eczema was observed: OR (95% confidence interval), 1.44 (1.13-1.82) (>limit of detection (LOD) vs <LOD). The ORs for rhinoconjunctivitis (OR = 5.01 [1.53-16.5]) and for at least one symptom of allergy (OR = 3.87 [1.22-12.3]) in the 4th quartile of Σtris(2-chloro-isopropyl) phosphate (TCIPP) metabolites were significantly higher than those in the 1st quartile, with significant p-values for trend (Ptrend) (0.013 and 0.024, respectively). A high OR of 2.86 (1.04-7.85) (>LOD vs <LOD) was found for hydroxy tris(2-butoxyethyl) phosphate (TBOEP-OH) and eczema. The OR for the 3rd tertile of bis(1,3-dichloro-2-propyl) phosphate (BDCIPP) was higher than that for the 1st tertile (reference) for at least one symptom (OR = 3.91 [1.25-12.3]), with a significant Ptrend = 0.020. Conclusions: We found that TDCIPP in house dust and metabolites of TDCIPP, TBOEP and TCIPP were associated with children's allergic symptoms. Despite some limitations of this study, these results indicate that children's exposure to PFRs may impact their allergic symptoms. abstract_id: PUBMED:19559967 The prevalence of atopic symptoms in children with otitis media with effusion. Objective: To determine the prevalence of allergic symptoms in children with otitis media with effusion (OME). Study Design: A validated questionnaire from the International Study of Asthma and Allergies in Childhood was used to determine the prevalence of allergic symptoms in children. The questionnaire was completed by the parents of children with OME undergoing ventilation tube insertion, and the results were compared with a large reference group of school children of the same age. Subjects And Methods: Children aged 6 or 7 years old with OME confirmed intraoperatively during ventilation tube insertion between 2001 and 2005 (n=89). The prevalence of allergic symptoms and nasal symptoms in children with OME was compared with an age-matched reference group. Results: There was no difference in the prevalence of allergic symptoms suggesting rhinoconjunctivitis, asthma, or eczema between the OME and reference group. The prevalence of nasal symptoms, however, was greater in the children with OME than in the reference group (38.2 percent versus 23.5 percent; odds ratio = 2.01; 95% confidence interval, 1.30-3.10; P < 0.001). Conclusion: The prevalence of allergic symptoms was similar in 6- to 7-year-old children with OME and the reference group, suggesting a limited effect of allergy in the pathogenesis of OME in this age group. Nasal symptoms were more common in the OME group, which may reflect a higher prevalence of adenoidal hyperplasia.
abstract_id: PUBMED:32058144 Combined exposure to phthalate esters and phosphate flame retardants and plasticizers and their associations with wheeze and allergy symptoms among school children. Background: Phthalate esters and phosphate flame retardants and plasticizers (PFRs) are both used as plasticizers and are commonly detected in indoor environments. Although both phthalates and PFRs are known to be associated with children's wheeze and allergic symptoms, there have been no previous studies examining the effects of mixtures of these exposures. Objectives: To investigate the association between exposure to mixtures of phthalate esters and PFRs, and wheeze and allergic symptoms among school-aged children. Methods: A total of 128 elementary school-aged children were enrolled. Metabolites of 3 phthalate esters and 7 PFRs were measured in urine samples. Parent-reported symptoms of wheeze, rhinoconjunctivitis, and eczema were evaluated using the International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire. In the primary model, we created a phthalate ester and PFR mixture exposure index, and estimated odds ratios (ORs) using weighted quantile sum (WQS) regression and quantile g (qg)-computation. The two highest chemicals according to qg-computation weight %s were combined to create a combination high × high exposure estimate, with ORs calculated using the "low × low" exposure group as the reference category. Concentrations of each metabolite were corrected by multiplying this value by the sex- and body size-standardised creatinine concentration and dividing by the observed creatinine value. All models were adjusted for sex, grade, dampness index and annual house income. Results: The odds ratio of rhinoconjunctivitis for the association with exposure to chemical mixtures in the WQS index positive model was OR = 2.60 (95% confidence interval [CI]: 1.38-5.14). However, neither wheeze nor eczema in the WQS index positive model, nor any of the WQS index negative models or qg-computation results, yielded statistically significant results. Combined exposure to the two highest WQS weight %s, "high × high" ΣTCIPP and ΣTPHP, was associated with an increased prevalence of rhinoconjunctivitis, OR = 5.78 (1.81-18.43), relative to the "low × low" group. Conclusions: A significant association between mixed exposures to phthalates and PFRs and an increased prevalence of rhinoconjunctivitis was found among elementary school-aged children in the WQS positive model. Mixed exposures were not associated with any of the allergic symptoms in the WQS negative model or the qg-computation approach. However, the combined effects of exposure to two PFRs suggested an additive and/or multiplicative interaction, potentially increasing the prevalence of rhinoconjunctivitis. A further study with a larger sample size is needed to confirm these results. Answer: The relationship between urban extent from satellite images and the prevalence of symptoms of asthma, rhinoconjunctivitis, and eczema in children is not straightforward. A study using the International Study of Asthma and Allergies in Childhood (ISAAC) data found a weak negative relationship between the percentage of urban extent of each ISAAC center and current wheeze in the 13-14-year age group, but this association was not statistically significant after adjusting for the region of the world and gross national income. No other relationship was found between urban extent and symptoms of asthma, rhinoconjunctivitis, and eczema (PUBMED:27211111).
However, other studies have reported varying results. For instance, in Bolivia, children living in urban areas seemed to have a higher prevalence of symptoms of asthma and allergies compared to children living in the countryside (PUBMED:24612913). Similarly, in Chile, a gradient was observed with urban living having the highest and rural living having the lowest prevalence of current asthma and rhinoconjunctivitis symptoms (PUBMED:23766726). In Beijing, the prevalence of asthma and related allergies was much higher in urban areas than in suburban areas, and indoor environmental factors such as passive smoking and interior decoration may differently explain the prevalence of asthma and related allergies in the two areas (PUBMED:21055079). In Turkey, ambient ozone concentrations were associated with impaired lung function in school-aged children, but no association was found between measured air pollutants and symptoms of current wheeze, current rhinoconjunctivitis, and current itchy rash (PUBMED:23591930). In Nigeria, the prevalence of severe symptoms of asthma, allergic rhinitis, and eczema was higher compared to a similar study in Kenya, suggesting different patterns in these two African countries (PUBMED:10065208). In Angola, no consistent associations were found between intestinal helminth infections and asthma, allergic diseases, or atopy (PUBMED:34200291). A study on the prevalence of allergic symptoms in children with otitis media with effusion (OME) found no difference in the prevalence of allergic symptoms suggesting rhinoconjunctivitis, asthma, or eczema between the OME and reference group, although nasal symptoms were more common in the OME group (PUBMED:19559967). Exposure to certain chemicals such as phosphate flame retardants (PFRs) in house dust and their metabolites in urine were associated with children's allergic symptoms. Specifically, tris(1,3-dichloroisopropyl) phosphate (TDCIPP) in dust was associated with eczema, and metabolites of TDCIPP, tris(2-butoxyethyl) phosphate (TBOEP), and tris(2-chloro-isopropyl) phosphate (TCIPP) were associated with allergic symptoms (PUBMED:30031263). Another study found significant associations between mixed exposures to phthalates and PFRs and increased prevalence of rhinoconjunctivitis among elementary school-aged children (PUBMED:32058144). In summary, while one study using satellite data did not find a significant association between urban extent and symptoms of asthma, rhinoconjunctivitis, and eczema (PUBMED:27211111), other studies have reported that children living in urban areas may have a higher prevalence of these symptoms compared to those in rural areas (PUBMED:24612913, PUBMED:23766726, PUBMED:21055079). Additionally, environmental factors such as air pollution and exposure to certain chemicals in indoor environments may also play a role in the prevalence of these symptoms (PUBMED:23591930, PUBMED:30031263, PUBMED:32058144).
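As a brief methodological note on the adjusted odds ratios (aOR) cited above (a generic sketch; the exact covariate sets differ across the cited studies): these estimates typically come from a logistic regression of symptom presence on an urban/rural indicator plus confounders,
\[ \log\frac{p}{1-p} = \beta_0 + \beta_1\,\mathrm{urban} + \sum_k \gamma_k z_k, \qquad \mathrm{aOR} = e^{\beta_1}, \]
so an aOR above 1 (e.g., 2.1 for video-questionnaire wheeze in Bolivia) indicates higher adjusted odds of the symptom among urban children, while a 95% CI that spans 1 (e.g., 0.4-1.9) indicates a statistically non-significant association.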
Instruction: PSA use and incidence of prostate biopsy in the Tuscany region: is opportunistic screening discounting biopsy in subjects with PSA elevation? Abstracts: abstract_id: PUBMED:18822688 PSA use and incidence of prostate biopsy in the Tuscany region: is opportunistic screening discounting biopsy in subjects with PSA elevation? Aims And Background: To assess PSA use in the general population and estimate the biopsy rate subsequent to opportunistic screening. Methods And Study Design: We report on PSA testing and related prostate biopsy frequency in the Tuscany Region during 2004-2005 to establish current patterns of care. We used population data sources to survey PSA testing and biopsy, and estimated expected PSA values and expected recommended biopsies (PSA ≥ 4 ng/ml) from the ongoing Florence arm of the European Study of Screening for Prostate Cancer (ERSPC). Results: PSA testing was common in both years and across age groups, increasing with age and over the 2 years, and peaking at 70-74 years (37.6% in 2004, 41.9% in 2005). PSA use in the 55-69 years cohort (screening age in ERSPC) was 28.3% in 2004 and 30.4% in 2005. Repeat PSA testing was also common and repeat PSA probability increased with age, peaking at age 70-74 (60.9%); repeat PSA testing at age 55-69 was 53.7%. Overall, 1.3% and 1.2% of men had a biopsy following PSA testing in 2004 and 2005. Observed/expected biopsy incidence was 14.3% in 2004 and 13.2% in 2005. ERSPC compliance with recommended biopsy was 77% or 60% at first or repeat screening. Conclusions: A discordance was identified between high PSA testing prevalence and low prostate biopsy rate. Based on projections from the ERSPC, this indicates a much lower observed biopsy rate than expected in organized screening. Although the implications of this are difficult to quantify in the absence of evidence on screening efficacy, it suggests inefficient practice. abstract_id: PUBMED:24975792 PSA testing, biopsy and cancer and benign prostate hyperplasia in France Introduction: PSA testing rates are high in France. The aim of this study was to estimate the frequency of prostate-specific antigen (PSA) testing, biopsy and newly diagnosed cancer (PCa) according to the presence or absence of treated benign prostatic hyperplasia (BPH). Patients And Methods: This study concerned men 40 years and older covered by the main French national health insurance scheme (73 % of all men of this age). Data were collected from the national health insurance information system (SNIIRAM). This database comprehensively records all of the outpatient prescriptions and healthcare services reimbursed. This information is linked to data collected during hospitalisations. Results: The frequency of men without diagnosed PCa (10.9 million) with at least one PSA test was very high in 2011 (men aged 40 years and older: 30 %, 70-74 years: 56 %, 85 years and older: 33 %; and without BPH: 25 %, 41 % and 19 %). Men with treated BPH accounted for 9 % of the study population, but 18 % of the men with at least one PSA test, 44 % of those with at least one prostate biopsy and 40 % of those with newly managed PCa. Over a 3-year period, excluding men with PCa, 88 % of men with BPH had at least one PSA test and 52 % had three or more PSA tests, versus 52 % and 15 % for men without BPH. One year after PSA testing, men of 55-69 years with BPH more frequently underwent prostate biopsy than those without BPH (5.4 % vs 1.8 %) and presented PCa (1.9 % vs 0.9 %).
Conclusions: PSA testing frequencies in France are very high even after exclusion of men with BPH, who represent a group with more frequently managed PCa. Level Of Evidence: 4. abstract_id: PUBMED:26995328 Impact of Prostate-specific Antigen (PSA) Screening Trials and Revised PSA Screening Guidelines on Rates of Prostate Biopsy and Postbiopsy Complications. Background: Prostate biopsy and postbiopsy complications represent important risks of prostate-specific antigen (PSA) screening. Although landmark randomized trials and updated guidelines have challenged routine PSA screening, it is unclear whether these publications have affected rates of biopsy or postbiopsy complications. Objective: To evaluate whether publication of the 2008 and 2012 US Preventive Services Task Force (USPSTF) recommendations, the 2009 European Randomized Study of Screening for Prostate Cancer and the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial, or the 2013 American Urological Association (AUA) guidelines was associated with changes in rates of biopsy or postbiopsy complications, and to identify predictors of postbiopsy complications. Design, Setting, And Participants: This quasiexperimental study used administrative claims of 5,279,315 commercially insured US men aged ≥40 yr from 2005 to 2014, of whom 104,584 underwent biopsy. Interventions: Publications on PSA screening. Outcome Measurements And Statistical Analysis: Interrupted time-series analysis was used to evaluate the association of publications with rates of biopsy and 30-d complications. Logistic regression was performed to identify predictors of complications. Results And Limitations: From 2005 to 2014, biopsy rates fell 33% from 64.1 to 42.8 per 100,000 person-months, with immediate reductions following the 2008 USPSTF recommendations (-10.1; 95% confidence interval [CI], -17.1 to -3.0; p<0.001), 2012 USPSTF recommendations (-13.8; 95% CI, -21.0 to -6.7; p<0.001), and 2013 AUA guidelines (-8.8; 95% CI, -16.7 to -0.92; p=0.03). Concurrently, complication rates decreased 10% from 8.7 to 7.8 per 100,000 person-months, with a reduction following the 2012 USPSTF recommendations (-2.5; 95% CI, -4.5 to -0.45; p=0.02). However, the proportion of men undergoing biopsy who experienced complications increased from 14% to 18%, driven by nonsepsis infectious complications (p<0.001). Predictors of complications included prior fluoroquinolone use (odds ratio [OR]: 1.27; 95% CI, 1.22-1.32; p<0.001), anticoagulant use (OR: 1.14; 95% CI, 1.04-1.25; p=0.004), and age ≥70 yr (OR: 1.25; 95% CI, 1.15-1.36; p<0.001). Limitations included the retrospective design. Conclusions: Although there has been an absolute reduction in rates of biopsy and 30-d complications, the relative morbidity of biopsy continues to increase. These observations suggest a need to reduce the morbidity of biopsy. Patient Summary: Absolute rates of biopsy and postbiopsy complications have decreased following landmark publications about prostate-specific antigen screening; however, the relative morbidity of biopsy continues to increase. abstract_id: PUBMED:31156017 Prostate cancer incidence and diagnosis in men with PSA levels >20 ng/ml: is it possible to decrease the number of biopsy cores? Objectives: To determine whether a smaller number of cores would be sufficient to diagnose prostate cancer (PCa) in men with PSA levels >20 ng/ml and to reveal the cancer detection rates in this population.
Methods: The data of the men who had 12-core prostate biopsy with a PSA value >20 ng/ml were reviewed. We recorded age, prostate volume, PSA level, and pathology report findings. Patients were grouped according to PSA levels and compared for PCa detection rates and several parameters. We created 16 prostate biopsy scenarios (S1-S16) and applied these to our database to find out the best biopsy protocol to detect PCa. Results: A total of 336 patients with a mean age of 70.5 (47-91) years were included. Mean PSA level was 190.6 (20-5474) ng/ml. PCa detection rates were 55.3%, 81.0%, and 97.7% in patients with PSA levels 20-49.99, 50-99.99, and ≥100 ng/ml, respectively. PSA level was correlated with clinically more important digital rectal examination findings. We selected 2 cores in S1-S6, 4 cores in S7-S12, and 6 cores in S13-S16. We calculated the sensitivity of each scenario and found that all scenarios in PSA Group 3 had a sensitivity >95%. In Group 2, S8, S10, S13, and S14 had sensitivity >95%, while in Group 1 only S14 did. Conclusions: It is not necessary to take 10-12 core biopsy samples in men with PSA levels >20 ng/ml. We recommend taking 2, 4, and 6 samples for patients with PSA levels ≥100 ng/ml, 50-99.99 ng/ml, and 20-49.99 ng/ml, respectively. abstract_id: PUBMED:24053128 Population-based analysis of prostate-specific antigen (PSA) screening in younger men (<55 years) in Australia. Objective: To analyse the trends in opportunistic PSA screening in Australia, focusing on younger men (<55 years of age), to examine the effects of this screening on transrectal ultrasonography (TRUS)-guided biopsy rates and to determine the nature of prostate cancers (PCas) being detected. Subjects And Methods: All men who received an opportunistic screening PSA test and TRUS-guided biopsy between 2001 and 2008 in Australia were analysed using data from the Australian Cancer registry (Australian Institute of Health and Welfare) and Medicare databases. The Victorian cancer registry was used to obtain Gleason scores. Age-standardized and age-specific rates were calculated, along with the incidence of PCa, and correlated with Gleason scores. Results: A total of 5,174,031 PSA tests detected 128,167 PCas in the period 2001-2008. During this period, PSA testing increased by 146% (a mean of 4629 tests per 100,000 men annually), with 80% and 59% increases in the rates of TRUS-guided biopsy and incidence of PCa, respectively. The highest increases in PSA screening occurred in men <55 years old, and up to 1101 men had to be screened to detect one incident case of PCa (0.01%). Screening resulted in two-thirds of men aged <55 years receiving a negative TRUS biopsy. There was no correlation with Gleason >7 tumours in patients aged <55 years. Conclusion: Despite the ongoing controversy about the merits of PCa screening, there was an increase in PSA testing, especially in men <55 years old, leading to a modestly higher incidence of PCa in Australia. Overall, PSA screening was associated with high rates of negative TRUS biopsy and the detection of low/intermediate-grade PCa among younger patients. abstract_id: PUBMED:35789453 Digital rectal examination impact on PSA derivatives and prostate biopsy triggers: a contemporary study. Objective: To evaluate the impact of the digital rectal exam (DRE) on PSA measurements and clinical decision-making. Methods: Healthy male volunteers between 50 and 70 years old were recruited during a 30-day public screening program.
PSA levels were measured using two different methods (standard enhanced chemiluminescence immunoassay, ECLIA, and a novel immunochromatography assay, ICA/rapid PSA) in the same blood sample. Two blood samples were drawn: the first before DRE and the second 30-40 min after DRE. The effect of DRE on PSA levels and its impact on clinical decision-making for individual patients were evaluated based on different biopsy trigger cutoffs. Results: ECLIA-PSA was measured in 74 participants both pre-DRE and 37 ± 5 min post-DRE; mean age was 57.2 ± 8.3 years and mean prostate volume 33.6 (20-80) cm³. Both total and free ECLIA-PSA increased significantly after DRE (mean increase of 0.47 and 0.26 ng/ml, respectively, both p < 0.001). Different internationally accepted biopsy triggers were reached only after DRE: in 5 patients total PSA > 3 ng/ml, in 13 an increase > 0.75 ng/ml, in 3 PSA density > 0.15, and in 1 free/total PSA < 0.18. On two occasions, patients were pushed below the biopsy trigger after DRE due to free/total PSA > 0.18. ICA-PSA was detectable (> 2.0 ng/ml) in 5 of 45 measured samples (11%) before DRE and 13/45 (29%) after DRE, p = 0.0316. Four of the five detectable ICA-PSA tests increased after DRE. Conclusion: Performing DRE immediately before PSA measurement might change clinical decision-making on a significant number of occasions (roughly 1 in 3), even though the mean increase (0.47 ng/ml) looks deceptively small. Further studies are required that include gold-standard tests (biopsy or imaging). abstract_id: PUBMED:33839027 New recommendations for prostate cancer screening with PSA Prostate cancer is the most frequently diagnosed cancer in men and the second leading cause of cancer death in men worldwide. The fact that it is a tumor with a long latency period has led to confusion about the value of its diagnosis and treatment at an early stage. Classically, European and American societies have not recommended prostate cancer screening with PSA, leaving physicians to take this decision. In 2012, after many years of controversy, the United States Preventive Services Task Force recommended abandoning its use. These statements were followed by an increase in the incidence of metastatic prostate cancer and, therefore, a rise in its mortality. In 2018, in light of these consequences, the European Association of Urology released new recommendations in favor of PSA-based screening for the first time. In 2019, the guidelines were updated with no changes to these recommendations. abstract_id: PUBMED:12868147 Is it convenient in a screening programme for prostate tumour to do biopsy in people with PSA between 3 and 3.9 ng/ml? Purpose Of The Work: Is it worthwhile to perform prostate biopsy in patients with PSA between 3 and 3.9 ng/ml? Materials And Methods: From January 1998 to April 2001 we performed 421 ultrasound-guided transrectal prostate biopsies, 84 of which (20%) were done on patients with a PSA between 3 and 3.9 ng/ml. Of these 84, 34 (40%) had a suspicious digital rectal examination or transrectal prostate ultrasound regardless of the free/total PSA ratio, and 50 (60%) had a free/total PSA ratio below 0.15 even though they did not have a suspicious digital rectal examination or transrectal prostate ultrasound. Results: Of the 84 patients with PSA between 3 and 3.9 ng/ml who had undergone a prostate biopsy, 15 (16.6%) were positive for prostate adenocarcinoma.
Of the 34 patients who, besides having a PSA between 3 and 3.9 ng/ml, had a suspicious digital rectal examination or transrectal prostate ultrasound, 7 (20.5%) were positive for prostate tumour; 4 of these (57.14%) had a Gleason score greater than 7 and 3 (42.86%) a Gleason score less than 7. Of the 50 patients who had a free/total PSA ratio less than 0.15 and did not have a suspicious digital rectal examination or transrectal prostate ultrasound, 8 (16%) were positive for prostate adenocarcinoma; 7 of these (87.5%) had a Gleason score greater than 7 and 1 (12.5%) a Gleason score less than 7. Considering that 173 overall diagnoses of prostate tumour were obtained during the period considered, 15 diagnoses (8.66%) were in patients with a PSA between 3 and 3.9 ng/ml; of these, 7 (4.04%) had a suspicious digital rectal examination or transrectal prostate ultrasound and 8 (4.62%) had a free/total PSA ratio below 0.15. During the period considered, there were 60 (35%) overall diagnoses of prostate tumour with a Gleason score greater than 7; according to the data of this study, 10 (16%) of these were in patients with a PSA between 3 and 3.9 ng/ml. Of these 10, 7 patients (11%) had a pathological free/total PSA ratio and 3 patients (5%) had a pathological digital rectal examination or suspicious ultrasound. By contrast, there were 113 (65%) overall diagnoses of prostate tumour with a Gleason score less than 7; 5 (4.4%) were in patients with a PSA between 3 and 3.9 ng/ml, of whom 4 (3.5%) had a suspicious digital rectal examination and 1 (0.8%) had a pathological free/total PSA ratio. Conclusions: Performing biopsy on patients with PSA between 3 and 3.9 ng/ml and a suspicious digital rectal examination or transrectal prostate ultrasound does not yield a remarkable increase in diagnoses of prostate tumour (4.4%). Performing prostate biopsy on patients with PSA between 3 and 3.9 ng/ml and a pathological free/total PSA ratio allows a more considerable increase in diagnoses of prostate tumour with Gleason score greater than 7, even if it does not yield a remarkable increase in positive diagnoses (4.62%). abstract_id: PUBMED:26989367 Can the Free/Total PSA Ratio Predict the Gleason Score Before Prostate Biopsy? Objectives: To determine whether there is a correlation between high Gleason score and free/total (f/t) prostate specific antigen (PSA) in patients newly diagnosed with prostate carcinoma. Materials And Methods: The study included 272 prostate biopsy patients whose total PSA value ranged from 4 to 10 ng/ml. The patients were divided into 2 groups according to the f/t PSA ratio: Group 1 ≤ 15% and Group 2 > 15%. Furthermore, the groups were compared to each other in terms of mild (≤ 6), moderate (= 7), and high (≥ 8) Gleason score. Results: Group 1 consisted of 135 (49.6%) patients and Group 2 consisted of 137 (50.4%) patients. While 27 (20%) patients had a high Gleason score in Group 1, only 10 (7.3%) patients had a high Gleason score in Group 2 (p = 0.008). Using Spearman's correlation test, we found that the f/t PSA ratios decreased significantly with increasing Gleason scores (p = 0.002, r = -0.185). Conclusion: According to our study, there is a relationship between higher Gleason score and decreased f/t PSA ratio. Therefore, f/t PSA can be an indicator for predicting the Gleason score. abstract_id: PUBMED:34186135 Prostate Cancer Incidence and Mortality Following a Negative Biopsy in a Population Undergoing PSA Screening. Objective: Transrectal ultrasound-guided biopsy for diagnostic workup for prostate cancer (PCa) has a substantial false negative rate.
We sought to estimate PCa incidence and mortality following negative biopsy in a cohort of men undergoing prostate cancer screening. Subjects And Methods: The Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial randomized participants 55-74 years to an intervention vs control arm. Intervention arm men received annual prostate-specific antigen (PSA) tests for 6 years and digital rectal exams (DRE) for 4 years. We examined the cohort of men with a positive PSA (> 4 ng/mL) or DRE screen followed within one year by a negative biopsy. PCa incidence and mortality rates from time of first negative biopsy were analyzed as a function of PSA level at diagnosis and other factors. Cumulative incidence and mortality rates accounting for competing risk were estimated. Multivariate proportional hazards regression was utilized to estimate hazard ratios (HRs) of PCa outcomes by PSA level, controlling for age and race. Results: The negative biopsy cohort included 2855 men. Median (25th/75th) age at biopsy was 65 (61/69) years; biopsies occurred between 1994 and 2006. Median (25th/75th) follow-up was 13.2 (6.5/16.8) years for incidence and 16.6 (12.3/19.2) years for mortality. A total of 740 PCa cases were diagnosed, with 33 PCa deaths. Overall 20-year cumulative PCa incidence and mortality rates were 26.4% (95% CI: 24.8-28.1) and 1.2% (95% CI: 0.9-1.7), respectively. HRs for PCa incidence and mortality increased significantly with increasing PSA. Conclusion: The mortality rate from PCa through 20 years following a negative biopsy is low. Answer: The study conducted in the Tuscany region of Italy assessed PSA use in the general population and estimated the biopsy rate subsequent to opportunistic screening for prostate cancer. The results indicated that PSA testing was common across age groups, with the highest prevalence in men aged 70-74 years. Despite the high rate of PSA testing, the rate of prostate biopsies following PSA testing was low, at 1.3% in 2004 and 1.2% in 2005. The observed biopsy incidence was much lower than expected based on projections from the European Study of Screening for Prostate Cancer (ERSPC), with only 14.3% in 2004 and 13.2% in 2005 of the expected biopsies being performed. This suggests a discordance between the high prevalence of PSA testing and the low rate of follow-up biopsies, indicating potentially inefficient practice in the region (PUBMED:18822688).
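As a back-of-envelope reading of the reported figures (this calculation is implied by, not stated in, the abstract): with an observed biopsy rate of 1.3% among PSA-tested men in 2004 and an observed/expected ratio of 14.3%, the implied ERSPC-based expected biopsy rate is
\[ E \approx \frac{O}{O/E} = \frac{1.3\%}{0.143} \approx 9.1\%, \]
that is, roughly seven times as many PSA-tested men would have been expected to proceed to biopsy under organized-screening assumptions as actually did.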
Instruction: Is church attendance associated with Latinas' health practices and self-reported health? Abstracts: abstract_id: PUBMED:16336105 Is church attendance associated with Latinas' health practices and self-reported health? Objectives: To evaluate (a) the relation between frequency of church attendance, self-rated health, and health behaviors controlling for potential confounders and (b) the influence of acculturation on church attendance and health behaviors. Methods: Physical activity and dietary patterns, demographics, and acculturation levels were compared among Latinas who attended church frequently, infrequently, and not at all. Results: Church attendance was independently and positively associated with healthier dietary and physical activity behaviors, but not with self-rated health. Acculturation attenuated the relation between physical activity and church attendance. Conclusion: Latinas' health behaviors and self-rated health may be related to other variables that explain the salutary effects of church attendance. abstract_id: PUBMED:25966802 Church Attendance as a Predictor of Number of Sexual Health Topics Discussed Among High-Risk HIV-Negative Black Women. Research suggests that sexual health communication is associated with safer sex practices. In this study, we examined the relationship between church attendance and sexual health topics discussed with both friends and sexual partners among a sample of urban Black women. Participants were 434 HIV-negative Black women who were at high risk for contracting HIV through heterosexual sex. They were recruited from Baltimore, Maryland using a network-based sampling approach. Data were collected through face-to-face interviews and Audio-Computer-Assisted Self-Interviews. Fifty-four percent of the participants attended church once a month or more (regular attendees). Multivariate logistic regression analyses revealed that regular church attendance among high-risk HIV-negative Black women was a significant predictor of the number of sexual health topics discussed with both friends (AOR = 1.85, p = .003) and sexual partners (AOR = 1.68, p = .014). Future efforts to reduce HIV incidence among high-risk Black women may benefit from partnerships with churches that equip faith leaders and congregants with the tools to discuss sexual health topics with both their sexual partners and friends. abstract_id: PUBMED:27761758 Good for All? Hardly! Attending Church Does Not Benefit Religiously Unaffiliated. The existing literature addressing Religion and Spirituality supports the idea that attending church is positively associated with health outcomes. However, within this literature there has been an impoverished effort to determine whether the Religiously Unaffiliated will report these positive relationships. Using representative data from Ontario (N = 3620), the relationships between Religious/Spiritual variables (Attendance, Prayer/Meditation, and Religiosity) and health outcomes (Happiness, Self-Rated Health, and Satisfaction with Life) were assessed. Results focused on three recurring trends: the Religiously Unaffiliated experienced attending church less positively than Christians; when compared at the highest level of Attendance, the Religiously Unaffiliated were less healthy than Christians; and when only considering the Religiously Unaffiliated, Religious/Spiritual variables were not significant and positive predictors of health outcomes. 
The discussion focused on the need to delineate between how Christians and the Religiously Unaffiliated experience Religious/Spiritual variables, and the need to stop over-generalizing the positive relationship between Religious/Spiritual variables and health. abstract_id: PUBMED:14652060 Is going to church good or bad for you? Denomination, attendance and mental health of children in West Scotland. Religiosity is often associated with mental health in adult populations, but not in a consistent direction. Conflicting results reflect the multidimensional nature of both concepts. Few studies have addressed the relationship between religiosity and mental health among children. In this paper, we examine the relation of weekly church attendance to measures of mental health for 11-year-olds from the two main Christian denominations in West Scotland. Levels of church attendance were low among those affiliated with the Church of Scotland and relatively high among Catholics. The only mental health measure to show a similar relationship with church attendance in both denominations was aggression, which was less prevalent among weekly attenders. Self-esteem, anxiety and depression all demonstrated an interaction, such that weekly church attendance was associated either with advantage for Catholics, disadvantage for children with a Church of Scotland affiliation, or both. Teasing/bullying acted in a small way as a mediating factor in these relationships. In an education system with separate Catholic and 'non-denominational' schools, we hypothesise that the relationship between church attendance and mental health may be contingent on whether church attendance is normative within the peer group. abstract_id: PUBMED:14687276 African American church participation and health care practices. Background: While religious involvement is associated with improvements in health, little is known about the relationship between church participation and health care practices. Objectives: To determine 1) the prevalence of church participation; 2) whether church participation influences positive health care practices; and 3) whether gender, age, insurance status, and levels of comorbidity modified these relationships. Design: A cross-sectional analysis using survey data from 2196 residents of a low-income, African-American neighborhood. Measurements: Our independent variable measured the frequency of church attendance. Dependent variables were: 1) Pap smear; 2) mammogram; and 3) dental visit (all taking place within 2 years); 4) blood pressure measurement within 1 year; 5) having a regular source of care; and 6) no perceived delays in care in the previous year. We controlled for socioeconomic factors and the number of comorbid conditions and also tested for interactions. Results: Thirty-seven percent of community members went to church at least monthly. Church attendance was associated with increased likelihood of positive health care practices by 20% to 80%. In multivariate analyses, church attendance was related to dental visits (odds ratio [OR], 1.5; 95% confidence interval [CI], 1.3 to 1.9) and blood pressure measurements (OR, 1.6; 95% CI, 1.2 to 2.1). Insurance status and number of comorbid conditions modified the relationship between church attendance and Pap smear, with increased practices noted for the uninsured (OR, 2.3; 95% CI, 1.2 to 4.1) and for women with 2 or more comorbid conditions (OR, 1.9; 95% CI, 1.1 to 3.5).
Conclusion: Church attendance is an important correlate of positive health care practices, especially for the most vulnerable subgroups, the uninsured and chronically ill. Community- and faith-based organizations present additional opportunities to improve the health of low-income and minority populations. abstract_id: PUBMED:31485287 Church Attendance and CMV Herpes Virus Latency Among Bereaved and Non-Bereaved Adults. Objective: There is widespread literature linking church attendance to physical health. However, little is known about the association of church attendance and the immune system, particularly during difficult life transitions. This study investigated the association between church attendance and CMV herpes-virus latency by assessing Cytomegalovirus (CMV) IgG antibody titers among bereaved and non-bereaved individuals. Methods: Participants included 44 bereaved individuals and 44 controls with a mean age of 68 (SD=12.84). CMV herpes-virus latency was measured using CMV IgG antibody titers. Church attendance was measured using three items from the Community Healthy Activities Model Program for Seniors (CHAMPS) Questionnaire. Results: After adjusting for participants' age, gender, education, minority status, weekly alcohol consumption, smoking, depression, body mass index (BMI) and comorbidities, church attendance was associated with lower CMV IgG antibody titers among bereaved and control participants. Further, there was a significant moderating effect of church attendance in the association between bereavement status and CMV IgG antibody titers, such that bereaved individuals attending church were found to have less herpes-virus reactivation (lower CMV IgG antibody titers) when compared to their bereaved counterparts who do not attend church. Conclusion: This study demonstrated that church attendance is associated with less herpes-virus reactivation as indexed by lower levels of CMV IgG antibody titers, particularly among the bereaved. Future studies should focus on further understanding the pathways by which church attendance impacts CMV herpes-virus latency during stressful life events, such as bereavement. abstract_id: PUBMED:19210025 A prospective study of church attendance and health over the lifespan. Objective: The objective of the current study was to help clarify the previously ambiguous results concerning the relationship between church attendance and later physical health. Design: The current study examined the effect of church attendance on 4 different indicators of later health in a sample of inner-city men followed throughout their life course. Measures of previous health status, mood, substance abuse, smoking, education, and social class were used as covariates in regression analyses predicting health at age 70 from church attendance at age 47. Main Outcome Measures: Health at age 70 was assessed by 4 indicators: mortality, objective physical health, subjective physical health, and subjective well-being. Results: Though church attendance was related to later physical health, this was only through indirect means, as both physical health and church attendance were associated with substance use and mood. However, findings do suggest a more direct link between church attendance and well-being. Conclusion: Indirect effects of church attendance on health were clearly observed, with alcohol use/dependence, smoking, and mood being possible mediators of the church attendance-health relationship.
The effects of church attendance on more subjective ratings of health, however, may be more direct. abstract_id: PUBMED:27495252 Cultural events - does attendance improve health? Evidence from a Polish longitudinal study. Background: Although there is strong advocacy for uptake of both the arts and creative activities as determinants of individual health conditions, studies evaluating causal influence of attendance at cultural events on population health using individual population data on health are scarce. If available, results are often only of an associative nature. In this light, this study investigated the causative impact of attendance at cultural events on self-reported and physical health in the Polish population. Methods: Four recent waves (2009, 2011, 2013 and 2015) of the biennial longitudinal Polish household panel study, Social Diagnosis, were analysed. The data, representative for the Polish population aged over 16, with respect to age, gender, classes of place of residence and NUTS 2 regions, were collected from self-report questionnaires. Causative influence of cultural attendance on population health was established using longitudinal population representative data. To account for unobserved heterogeneity of individuals and to mitigate issues caused by omitted variables, a panel data model with a fixed effects estimator was applied. The endogeneity problem (those who enjoy good health are more likely to participate in cultural activities more frequently) was circumvented by application of instrumental variables. Results: Results confirmed a positive association between cultural attendance and self-reported health. However, in contrast to the often suggested positive causative relationship, such a link was not confirmed by the study. Additionally, no evidence was found to corroborate a positive impact from cultural attendance on physical health. Both findings were reinforced when the analysis was extended to the longitudinal, causal framework. Conclusions: We showed the relation between attendance at cultural events and self-reported health could only be confirmed as associational. Therefore, this study provided little justification to encourage use of passive cultural participation as a measure of health promotion (improvement). Our study did not confirm any identifiable benefit to physical health from passive participation in culture. Future research should investigate the causative influence of active participation in creative activities on health outcomes as, in contrast to passive attendance, it may be influential. abstract_id: PUBMED:26936149 Reported Church Attendance at the Time of Entry into HIV Care is Associated with Viral Load Suppression at 12 Months. The Southeast has high rates of both church attendance and HIV infection. We evaluated the relationship between church attendance and HIV viremia in a Southeastern US HIV-infected cohort. Viremia (viral load ≥200 copies/ml) was analyzed 12 months after initiation of care. Univariate and multivariable logistic regression models were fit for variables potentially related to viremia. Of 382 patients, 74% were virally suppressed at 12 months. Protective variables included church attendance (AOR 0.5; 95% CI 0.2, 0.9), being on antiretroviral therapy (AOR 0.01; 95% CI 0.004, 0.04), CD4+ T lymphocyte count 200-350 cells/mm3 at care entry (AOR 0.3; 95% CI 0.1, 0.9), and education (AOR 0.5; 95% CI 0.2, 0.9).
Variables predicting viremia included black race (AOR 3.2; 95% CI 1.4, 7.4) and selective disclosure of HIV status (AOR 2.7; 95% CI 1.2, 5.6). Church attendance may provide needed support for patients entering HIV care for the first time. abstract_id: PUBMED:8153204 Church attendance, religiosity, and health. Many studies have reported a beneficial effect of church attendance on one's health. Little attempt has been made, however, to ascertain what aspect of the behavior might be responsible for the effect, so church attendance, religiosity, and health were examined in a sample of 174 graduate students (ages 19 to 69 years). Neither attendance nor religiosity was significantly associated in this sample with the frequency or seriousness of illness experienced over the previous 12-month period. These findings are discussed in relation to sample characteristics. Answer: Church attendance is positively associated with healthier dietary and physical activity behaviors among Latinas, but it is not associated with self-rated health (PUBMED:16336105). This suggests that while Latinas who attend church may engage in health practices that are beneficial, this does not necessarily translate to a higher subjective assessment of their overall health.
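Several of the abstracts in this record report adjusted odds ratios (AORs) with 95% confidence intervals from multivariable logistic regression. As an editorial illustration of how such figures are typically derived, here is a minimal Python sketch using statsmodels; the data frame, variable names, and resulting effect sizes are hypothetical and are not taken from any of the cited studies.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data, one row per respondent; none of these values
# come from the cited studies.
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "church_monthly": rng.integers(0, 2, n),  # attends church at least monthly
    "age": rng.integers(18, 90, n),
    "insured": rng.integers(0, 2, n),
    "comorbidities": rng.integers(0, 4, n),
    "dental_visit": rng.integers(0, 2, n),    # outcome: dental visit in past 2 years
})

# Multivariable logistic regression: exponentiating the coefficient for
# church_monthly yields the adjusted odds ratio (AOR) with its 95% CI.
X = sm.add_constant(df[["church_monthly", "age", "insured", "comorbidities"]])
fit = sm.Logit(df["dental_visit"], X).fit(disp=0)

aor = np.exp(fit.params["church_monthly"])
ci_low, ci_high = np.exp(fit.conf_int().loc["church_monthly"])
print(f"AOR = {aor:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

The exponentiated coefficient and its confidence bounds are what the abstracts report as, for example, "OR, 1.5; 95% CI, 1.3 to 1.9".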
Instruction: Does long-course radiotherapy influence postoperative perineal morbidity after abdominoperineal resection of the rectum for cancer? Abstracts: abstract_id: PUBMED:21176061 Does long-course radiotherapy influence postoperative perineal morbidity after abdominoperineal resection of the rectum for cancer? Aim: The aim of the study was to define risk factors for perineal wound complications after abdominoperineal resection (APR), with particular reference to preoperative radiotherapy. Method: Patients undergoing APR at our institution between 1985 and 2009 were reviewed. Wound complications were classified according to the Centers for Disease Control and Prevention classification of surgical site infection (SSI). Perineal complications were identified in patients who had preoperative long-course radiotherapy (Group 1) and those who had surgery alone (Group 2). Results: One hundred and fifty-seven patients met the inclusion criteria. Preoperative radiotherapy was performed in 68 (43.3%) patients (Group 1), and 89 (56.7%) patients (Group 2) underwent surgery alone. The overall rate of perineal wound complications was 14.8%. The wound infection rate was similar in each group (Group 1, 10/68, 14.7%; Group 2, 13/89, 14.6%; P = 0.9). An elevated BMI (>30) was the only factor correlated with perineal morbidity on univariate analysis (P = 0.01). Conclusion: Preoperative radiotherapy does not influence perineal healing other than in patients with obesity. abstract_id: PUBMED:20706069 Benefits of perineal colostomy on perineal morbidity after abdominoperineal resection. Purpose: Abdominoperineal resection has a high rate of postoperative morbidity of the perineal wound. This study aimed to determine the effects of perineal colostomy on perineal morbidity after abdominoperineal resection. Methods: All patients who underwent an abdominoperineal resection for rectal adenocarcinoma between 1993 and 2007 were studied. Two groups, those who had undergone an iliac colostomy and those who had undergone a perineal colostomy, were identified and compared. Results: The analysis included 110 patients (iliac colostomy group, n = 41; perineal colostomy group, n = 69). There were fewer instances of pelviperineal morbidity (P = .008) and fewer instances of wound dehiscence (P = .02) in the perineal colostomy group, which resulted in a shorter time to healing (35.3 vs 45.1 d, respectively; P = .04). There was no specific postoperative morbidity in any patient and no difference between the 2 groups regarding long-term perineal morbidity. The benefits from perineal colostomy were statistically significant in patients who received radiation therapy in terms of pelviperineal morbidity (P = .01) and healing time (50.8 vs 35.9 days, respectively; P = .02), whereas no difference was found in patients who had not received radiation therapy. Conclusion: Perineal colostomy is a safe and functionally acceptable procedure for perineal reconstruction after abdominoperineal resection for rectal adenocarcinoma. In the present study, there was no additional morbidity related to perineal colostomy, and this procedure was associated with a decrease in perineal morbidity and healing time compared with primary perineal closure, in particular, after radiotherapy treatment.
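A methodological aside: the "P = 0.9" group comparison in PUBMED:21176061 is a standard two-by-two contingency test. The sketch below reproduces that comparison from the counts given in the abstract (10/68 infections with radiotherapy vs. 13/89 with surgery alone); the choice of chi-square versus Fisher's exact test is ours, since the abstract does not state which test was used.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Wound-infection counts as reported in PUBMED:21176061:
#   Group 1 (preoperative radiotherapy): 10 infections among 68 patients
#   Group 2 (surgery alone):             13 infections among 89 patients
table = [[10, 68 - 10],
         [13, 89 - 13]]

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio, p_exact = fisher_exact(table)
print(f"chi-square P = {p:.2f}; Fisher exact P = {p_exact:.2f}")
# Both P values come out near 0.9-1.0, consistent with the abstract's "P = 0.9".
```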
abstract_id: PUBMED:21176893 Pelvic reconstruction after abdominoperineal resection of the rectum. Despite the advances in the treatment of cancer of the rectum and the expansion of the multimodal therapeutic technique, abdominoperineal resection (APR) still needs to be performed as radical treatment in 20-30% of cases. APR of the rectum involves significant morbidity, including intestinal obstruction and wound complications, with radiotherapy-induced enteritis developing in 15% of cases subjected to post-operative radiotherapy. Furthermore, with the aim of improving local oncology results, an extended APR is recommended, a technique that requires a perineal reconstruction allowing tension-free closure in previously irradiated tissue and that may prevent perineal hernias from developing. The objective of this article is to review pelvic and perineal repair methods after APR due to cancer, with special attention to the new prosthetic repair techniques. abstract_id: PUBMED:33578769 Perineal Wound Closure Following Abdominoperineal Resection and Pelvic Exenteration for Cancer: A Systematic Review and Meta-Analysis. Background: Abdominoperineal resection (APR) and pelvic exenteration (PE) for the treatment of cancer require extensive pelvic resection with a high rate of postoperative complications. The objective of this work was to systematically review and meta-analyze the effects of vertical rectus abdominis myocutaneous flap (VRAMf) and mesh closure on perineal morbidity following APR and PE (mainly for anal and rectal cancers). Methods: We searched PubMed, Cochrane, and EMBASE for eligible studies as of the year 2000. After data extraction, a meta-analysis was performed to compare perineal wound morbidity. The studies were distributed as follows: Group A comparing primary closure (PC) and VRAMf, Group B comparing PC and mesh closure, and Group C comparing PC and VRAMf in PE. Results: Our systematic review yielded 18 eligible studies involving 2180 patients (1206 primary closures, 647 flap closures, 327 mesh closures). The meta-analysis of Groups A and B showed PC to be associated with an increase in the rate of total (Group A: OR 0.55, 95% CI 0.43-0.71; p < 0.01/Group B: OR 0.54, CI 0.17-1.68; p = 0.18) and major perineal wound complications (Group A: OR 0.49, 95% CI 0.35-0.68; p < 0.001/Group B: OR 0.38, 95% CI 0.12-1.17; p < 0.01). PC was associated with a decrease in total (OR 2.46, 95% CI 1.39-4.35; p < 0.01) and major (OR 1.67, 95% CI 0.90-3.08; p = 0.1) perineal complications in Group C. Conclusion: Our results confirm the contribution of the VRAMf in reducing major complications in APR. Similarly, biological prostheses offer an interesting alternative in pelvic reconstruction. For PE, an adapted reconstruction must be proposed with specialized expertise. abstract_id: PUBMED:15714245 Sutured perineal omentoplasty after abdominoperineal resection for adenocarcinoma of the lower rectum. Purpose: This study was designed to describe and evaluate the efficacy of sutured perineal omentoplasty on perineal wound healing after abdominoperineal resection for adenocarcinoma of the lower rectum. Methods: Charts of patients who underwent abdominoperineal resection for adenocarcinoma of the rectum from June 1995 to December 2001 were reviewed for mortality, morbidity, and perineal healing. Abdominoperineal resection was accomplished according to Miles combined with total mesorectal excision.
The omentum was pedicled on the left gastroepiploic artery and tightly sewn to the subcutaneous fatty tissue. The perineal skin was then closed primarily. Results: A total of 104 patients were included in the study. The mean age at surgery was 65 (range, 13-91) years. The distance of the tumor from the anal sphincters was 0.45 +/- 0.9 mm (range, 0-50). During the study period, 92 patients (88 percent) had sutured perineal omentoplasty. The rate of primary perineal wound healing was 80 percent. Postoperative perineal wound complications consisted of perineal abscess in seven patients. Six of these patients had a sutured perineal omentoplasty (6 percent). Only four patients required surgical drainage. Minor perineal suppuration occurred in four patients (4 percent), whereas partial perineal wound dehiscence occurred in eight patients (8 percent). All wounds healed completely at three months. Intestinal obstruction occurred in three patients (3 percent). No complication of the pedicled omentoplasty was observed. Conclusions: This study demonstrated that sutured perineal omentoplasty is possible in the majority of patients after abdominoperineal resection for adenocarcinoma of the lower rectum with excellent primary perineal wound healing. abstract_id: PUBMED:30690283 Robot-assisted laparoscopic repair of perineal hernia after abdominoperineal resection: A case report and review of the literature. Introduction: Perineal hernia is a protrusion of the pelvic floor containing intra-abdominal viscera. The occurrence of postoperative perineal hernia after abdominoperineal resection (APR) is rare, but reports have indicated a recent increase in occurrence following surgical treatment for rectal cancer. This has been attributed to a shift towards extralevator abdominoperineal resection, together with more frequent and long-term use of neoadjuvant therapy. Presentation Of Case: Here, we report the case of a patient who underwent APR for cancer. Twenty months postoperatively, a perineal hernia was detected. The patient was electively scheduled for surgery. Robot-assisted laparoscopy was performed using the da Vinci Surgical System. The perineal hernia was repaired by primary closure with the placement of Symbotex Composite mesh as reinforcement for the pelvic floor. The surgery was performed without any adverse events, and the patient was discharged the day after surgery. Clinical follow-up proceeded at the designated time intervals without difficulties. Discussion: Recurrence rates of perineal hernia remain high, and surgeons face numerous challenges related to poor view, suturing and mesh placement in the deep pelvis. Numerous approaches have been described, but there is still no consensus as to the optimal repair technique for perineal hernia. Conclusion: Symptomatic perineal hernias can feasibly be repaired with robot-assisted laparoscopy. Furthermore, suturing and mesh placement require less effort with the robot approach when compared to the open and laparoscopic approaches. These promising findings are demonstrated in the included video. abstract_id: PUBMED:37580449 Purse string closure of perineal defects after abdominoperineal excision. Purpose: The aim of this study was to describe a new technique of perineal closure following abdominoperineal excision (APE) using purse-string perineal skin closure (PSPC). Material And Methods: Between January 2016 and May 2021, 15 consecutive patients who had an APE procedure were included in this retrospective single-center study.
All indications of APE were considered, as well as all types of APE. We analyzed the patient characteristics and peri-operative features, including overall (Clavien 1 to 5) and severe (Clavien 3 and 4) postoperative morbidity, length of stay (LOS), and long-term results (median time to perineal wound closure and rate of perineal incisional hernia). Results: The patients included 11 men and 4 women, with a mean age of 64 ± 13 [33-80] years. The indication of APE was an epidermoid carcinoma of the anal canal (n = 5) or an adenocarcinoma of the rectum (n = 10). The mean operating time was 220 ± 88.64 [70-360] min. The overall morbidity rate was 60%, the severe morbidity rate 26%, and the reoperation rate 26%. The median length of stay was 9 ± 6.5 days. After a mean follow-up of 23.5 ± 20.3 months, the median time to perineal wound closure was 96 ± 60 days, the persistent perineal sinus rate was 6% (n = 2), and one patient developed a perineal incisional hernia. Conclusion: Purse-string closure of perineal wounds is a safe and effective technique for perineal wound closure after APE. The short LOS allowed an early return home. abstract_id: PUBMED:37898965 Uterine retroversion and gluteal transposition flap for postoperative perineal evisceration after extralevator abdominoperineal resection. Anal squamous cell carcinoma (ASCC) is the most common histological subtype of malignant tumor affecting the anal canal. Chemoradiotherapy (CRT) is the first-line treatment in nearly all cases, ensuring complete clinical response in up to 80% of patients. Abdominoperineal resection (APR) is typically reserved as salvage therapy in those patients with persistent or recurrent tumor after CRT. In locally advanced tumors, an extralevator abdominoperineal excision (ELAPE), which entails excision of the anal canal and levator muscles, might be indicated to obtain negative resection margins. In this setting, the combination of highly irradiated tissue and large surgical defect increases the risk of developing postoperative perineal wound complications. One of the most dreadful complications is perineal evisceration (PE), which requires immediate surgical treatment to avoid irreversible organ damage. Different techniques have been described to prevent perineal complications after ELAPE, although none of them have reached consensus. In this technical note, we present a case of PE after ELAPE performed for a recurrent ASCC. Perineal evisceration was approached by combining a uterine retroversion with a gluteal transposition flap to obtain wound healing and reinforcement of the pelvic floor at once, when mesh placement is not recommended. abstract_id: PUBMED:20011400 Perineal wound complications after abdominoperineal resection. Perineal wound complications following abdominoperineal resection (APR) are a common occurrence. Risk factors such as operative technique, preoperative radiation therapy, and indication for surgery (i.e., rectal cancer, anal cancer, or inflammatory bowel disease [IBD]) are strong predictors of these complications. Patient risk factors include diabetes, obesity, and smoking. Intraoperative perineal wound management has evolved from open wound packing to primary closure with closed suctioned transabdominal pelvic drains. Wide excision is used to gain local control in cancer patients, and coupled with the increased use of pelvic radiation therapy, we have experienced increased challenges with primary closure of the perineal wound.
Tissue transfer techniques such as omental pedicle flaps and vertical rectus abdominis and gracilis muscle or myocutaneous flaps are being used to reconstruct large perineal defects and decrease the incidence of perineal wound complications. Wound failure is frequently managed by wet-to-dry dressing changes, but can result in prolonged hospital stay, hospital readmission, home nursing wound care needs, and the expenditure of significant medical costs. Adjuvant therapies to conservative wound care have been suggested, but evidence is still lacking. The use of the vacuum-assisted closure device has shown promise in chronic soft tissue wounds; however, experience is lacking, likely due to the difficulty of the application technique. abstract_id: PUBMED:34272997 A meta-analysis of low Hartmann's procedure versus abdominoperineal resection for non-restorative treatment of rectal cancer. Background: Non-restorative surgery for rectal cancer is indicated in patients with comorbidities, advanced disease and poor continence. The aim of this meta-analysis was to compare the postoperative morbidity of Hartmann's procedure (HP) with that of extrasphincteric and intersphincteric abdominoperineal resection (APR) in the treatment of rectal cancer. Methods: The Medline, Embase and Cochrane databases were searched for publications comparing postoperative morbidity of HP and APR. The incidence of overall surgical complications, pelvic-perineal complications and pelvic abscess was analysed as primary endpoints. Readmissions requiring reintervention and postoperative mortality were also compared. Results: A cumulative analysis showed a significantly higher rate of overall complications (odds ratio (OR) 0.553, 95% confidence interval (CI) 0.320 to 0.953 and P value 0.033) and pelvic-perineal complications (OR 0.464, 95% CI 0.250 to 0.861 and P value 0.015) after APR. The incidence of isolated pelvic abscess formation was significantly higher after HP (OR 2.523, 95% CI 1.383 to 4.602 and P value 0.003). A subgroup analysis of intersphincteric APR compared with HP did not show any significant difference in the incidence of overall complications, pelvic-perineal complications or pelvic abscess formation (P values of 0.452, 0.258 and 0.100, respectively). There was no significant difference in readmissions, reinterventions and mortality after HP and APR (P values of 0.992, 0.198 and 0.151). Conclusion: An extrasphincteric APR is associated with higher overall and pelvic-perineal complications and may be reserved for tumours invading the anal sphincter complex. In the absence of sphincter involvement, both HP and intersphincteric APR are better alternatives with comparable morbidity. Answer: Long-course radiotherapy does not appear to significantly influence postoperative perineal morbidity after abdominoperineal resection (APR) of the rectum for cancer. A study that specifically aimed to define risk factors for perineal wound complications after APR, with particular reference to preoperative radiotherapy, found that the overall rate of perineal wound complications was similar regardless of whether patients received preoperative radiotherapy or not. The only factor correlated with perineal morbidity on univariate analysis was an elevated BMI (>30) (PUBMED:21176061).
Additionally, another study found that perineal colostomy, which is a procedure for perineal reconstruction after APR, was associated with a decrease in perineal morbidity and healing time compared with primary perineal closure, particularly after radiotherapy treatment. This suggests that while long-course radiotherapy itself may not influence perineal healing, the choice of perineal reconstruction technique can have beneficial effects on perineal morbidity and healing time in patients who have received radiotherapy (PUBMED:20706069). In summary, the evidence suggests that long-course radiotherapy does not have a significant impact on postoperative perineal morbidity after APR for rectal cancer, but factors such as obesity and the type of perineal reconstruction performed may influence healing outcomes.
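For readers unfamiliar with how the pooled odds ratios in the meta-analyses cited in this record (PUBMED:33578769, PUBMED:34272997) are combined, the following is a minimal fixed-effect inverse-variance sketch in Python. The three study-level odds ratios and confidence intervals are hypothetical placeholders, not the actual per-study data, which the abstracts do not report.

```python
import numpy as np

# Hypothetical study-level odds ratios with 95% CIs, for illustration only;
# the abstracts do not report the individual per-study estimates.
ors     = np.array([0.62, 0.48, 0.55])
ci_low  = np.array([0.40, 0.30, 0.35])
ci_high = np.array([0.96, 0.77, 0.86])

log_or = np.log(ors)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE recovered from CI width
w = 1.0 / se**2                                       # inverse-variance weights

pooled_log = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi = np.exp(pooled_log + np.array([-1.96, 1.96]) * pooled_se)
print(f"pooled OR = {np.exp(pooled_log):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Published meta-analyses such as the one cited often use random-effects models instead; the fixed-effect version is shown here only because it is the simplest correct instance of inverse-variance pooling.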
Instruction: Source of drugs for prescription opioid analgesic abusers: a role for the Internet? Abstracts: abstract_id: PUBMED:18816331 Source of drugs for prescription opioid analgesic abusers: a role for the Internet? Objective: There has been a sharp increase in the abuse of prescription opioid analgesics in the United States in the past decade. It has been asserted, particularly by several governmental and regulatory agencies, that the Internet has become a significant source of these drugs, which may account to a great extent for the surge in abuse. We have studied whether this is correct. Design: We asked 1,116 prescription drug abusers admitted for treatment, through standardized questionnaires, where they obtained their drugs. We also attempted to purchase Schedule II and III drugs from a random sample of Internet sites offering such sales. Results: Dealers, friends or relatives, and doctors' prescriptions were listed as a source of drugs with equal frequency (approximately 50-65%), with theft and forgery far behind at 20%. The Internet was mentioned by fewer than 6% of the total responders. Because these data suggest either lack of availability or that our sample has not yet realized that the Internet is a potential source, we attempted to purchase Schedule II and III opioids and the unscheduled opioid, tramadol, from a random sample of 10% of the sites listing such sales. We were unsuccessful in purchasing a single scheduled opioid analgesic, but found that tramadol, as an unscheduled drug, was freely available. Conclusions: The assertion that the Internet has become a dangerous new avenue for the diversion of scheduled prescription opioid analgesics appears to be based on no empirical evidence and is largely incorrect. abstract_id: PUBMED:18781245 Impact of Internet pharmacy regulation on opioid analgesic availability. Objective: Access to prescription opioid analgesics has made Internet pharmacies the object of increased regulatory scrutiny, but the effectiveness of regulatory changes in curtailing availability of opioid analgesics from online sources has not been assessed. As part of an ongoing investigation into the relationship between the Internet and substance abuse, we examined the availability of prescription opioid analgesics from online pharmacies. Method: From a pharmacy watch Web site, we constructed a data set of postings entered every 3 months beginning November 1, 2005, that were related to the purchase of prescription opioid analgesics. Trained examiners assessed whether the final post described accessibility of pain medications that was increasing or decreasing. Results: We identified 45 threads related to the availability of opioid analgesics from Internet pharmacies. Of the 41 (91%) threads describing the declining availability of opioid analgesic agents from Internet pharmacies, 34 (82%) received posts on November 1, 2007. Despite the subjective nature of the research question, there was high interobserver agreement between coders (kappa = 0.845) that availability of opioid analgesics from online pharmacies had decreased. This finding was supported by a dramatic rise in the number of pageviews (an accepted measure of Web site visitor interest in a page's content) of Web pages describing decreased availability of opioid analgesics. Conclusions: These data suggest striking decreases in the availability of prescription opioid analgesic pharmaceuticals.
This self-reported change in drug availability may be related to increased regulation of and law enforcement operations directed against Internet pharmacies. abstract_id: PUBMED:20227199 Prescription drugs purchased through the internet: who are the end users? Although prescription drugs are readily available on the Internet, little is known about the prevalence of Internet use for the purchase of medications without a legitimate prescription, and the characteristics of those who obtain non-prescribed drugs through online sources. The scientific literature on this topic is limited to anecdotal reports or studies plagued by small sample sizes. Within this context, the focus of this paper is an examination of five national data sets from the U.S. with the purpose of estimating: (1) how common obtaining prescription medications from the Internet actually is, (2) who are the typical populations of "end users" of these non-prescribed medications, and (3) which drugs are being purchased without a prescription. Three of the data sets are drawn from the RADARS (Researched Abuse Diversion and Addiction-Related Surveillance) System, a comprehensive series of studies designed to collect timely and geographically specific data on the abuse and diversion of a number of prescription stimulants and opioid analgesics. The remaining data sets include the National Survey on Drug Use and Health (NSDUH) and the Monitoring the Future (MTF) survey. Our analysis yielded uniformly low rates of prescription drug acquisition from online sources across all five data systems we examined. The consistency of this finding across very diverse populations suggests that the Internet is a relatively minor source for illicit purchases of prescription medications by the individual end-users of these drugs. abstract_id: PUBMED:30931773 Pattern of Opioid Analgesic Prescription for Adults by Dentists in Nova Scotia, Canada. Global consumption of prescription opioid analgesics has increased dramatically in the past 2 decades, outpacing that of illicit drugs in some countries. The increase has been partly ascribed to the widespread availability of prescription opioid analgesics and their subsequent nonmedical use, which may have contributed to the epidemic of opioid abuse, addiction, and overdose-related deaths. International studies report that dentists may be among the leading prescribers of opioid analgesics, thus adding to the societal impact of this epidemic. Between 2009 and 2011, dentists in the United States prescribed 8% to 12% of opioid analgesics dispensed. There is little information on the pattern of opioid analgesic prescription by dentists in Canada. The aim of this study was to examine the pattern of opioid analgesics prescription by dentists in Nova Scotia (NS), Canada. This retrospective observational study used the provincial prescription monitoring program's record of oral opioid analgesics and combinations dispensed to persons 16 years and older at community pharmacies that were prescribed by dentists from January 2011 to December 2015. During the study period, more than 70% of licensed dentists in NS wrote a prescription for dispensed opioid analgesics, comprising about 17% of all opioid analgesic prescribers. However, dentists were responsible for less than 4% of all prescriptions for dispensed opioid analgesics, prescribing less than 0.5% of the total morphine milligram equivalent (MMEq) of opioid analgesics dispensed over the 5 years.
There was a significant downward trend in total MMEq of dispensed opioid analgesics prescribed by dentists from about 2.23 million MMEq in 2011 to 1.93 million MMEq in 2015 (r = -0.97; P = 0.006). Opioid prescription is common among dentists, but their contribution to the overall availability of opioid analgesics is low. Furthermore, there has been a downward trend in total dispensed MMEq of opioid analgesics prescribed by dentists. Knowledge Transfer Statement: This study will serve to inform dentists and policy makers on the types and dosage of opioid analgesics being prescribed by dentists. The study may prompt dentists to reflect on and adjust their practice of opioid analgesic prescription in view of the current opioid analgesic epidemic. abstract_id: PUBMED:17457555 Illegal purchase of psychotropic drugs from the internet Several national institutions are registering a significant increase in sales of prescription and illegal drugs from internet pharmacies. Psychoactive drugs are preferred; the clients are particularly young. Considering the current amount of data available, the extent and relevance to addiction medicine remain unclear. In the following report we present the case of a patient from our outpatient department who has suffered from an opioid dependency for several years and has been using a Spanish internet pharmacy to purchase tramadol without prescription. abstract_id: PUBMED:19154446 The availability of prescription-only analgesics purchased from the internet in the UK. What Is Already Known About This Subject: Increasing numbers of people are accessing medicines from the internet. This online market is poorly regulated and represents a potential threat to the health of patients and members of the public. What This Study Adds: Prescription-only analgesics, including controlled opioids, are readily available to the UK public through internet pharmacies that are easily identified by popular search engines. The majority of websites do not require the customer to possess a valid prescription for the drug. Less than half provide an online health screen to assess suitability for supply. The majority have no registered geographical location. Analgesic medicines are usually purchased at prices significantly above British National Formulary prices and are often supplied in large quantities. These findings are of particular relevance to pain-management specialists who are trying to improve the rational use of analgesic drugs. Aims: To explore the availability to the UK population of prescription-only analgesics from the internet. Methods: Websites were identified by using several keywords in the most popular internet search engines. From 2000 websites, details of 96 were entered into a database. Results: Forty-six (48%) websites sold prescription analgesics, including seven opioids, two non-opioids and 18 nonsteroidal anti-inflammatory drugs. Thirty-five (76%) of these did not require the customer to possess a valid prescription. Conclusion: Prescription-only analgesics, including controlled opioids, are readily available from internet websites, often without a valid prescription. abstract_id: PUBMED:36987981 Trends in the Source of Prescription Drugs for Misuse between 2015 and 2019. Background: Opioid and benzodiazepine-related deaths have been at all-time highs despite numerous changes to guidelines for prescribing these substances. 
Although prescribing guidelines appear to have resulted in fewer prescriptions from doctors, no recent study has looked at changes to where prescription drugs of misuse are obtained. Objectives: The purpose of this study was to examine trends in the source of prescription drug misuse between 2015 and 2019. Methods: Data were from the 2015-2019 National Survey on Drug Use and Health. Trend analysis was performed using logistic regression models with year as a predictor of prescription drug source. Results: The odds of receiving a prescription opioid or benzodiazepine for misuse from a friend or relative for free have significantly decreased from 2015 to 2019 (opioid: AOR = 0.96; benzodiazepine: AOR = 0.93), while the odds of purchasing benzodiazepines from a drug dealer or stranger have increased (AOR = 1.08). No significant changes were observed for obtaining misused prescription drugs from a doctor. Additional significant trends were observed among age groups. Conclusion: Overall, changes in prescribing guidelines for opioids do not appear to have affected the proportion of prescription drug misusers receiving opioids from doctors, though the willingness or ability of family members and friends to give prescription medications away appears to have decreased. Additionally, increases in purchases of prescription drugs from drug dealers and strangers are concerning as they may also increase risks involved in prescription drug misuse (PDM). abstract_id: PUBMED:16968618 The Internet as a source of drugs of abuse. The Internet is a vital medium for communication, entertainment, and commerce, with more than 1 billion individuals connected worldwide. In addition to the many positive functions served by the Internet, it also has been used to facilitate the illicit sale of controlled substances. No-prescription websites (NPWs) offer, and then actually sell, controlled substances over the Internet without a valid prescription. NPW monitoring studies have focused primarily on the availability of prescription opioid medications, although many other drugs of abuse also are available online. Research indicates that these NPW sites are prevalent. Google or Yahoo searches simply using the term "Vicodin" return 40% to 50% NPWs in the top 100 sites. Thus, NPWs represent an important development in the sale of illicit drugs because of the ease with which controlled substances can be sold with relative anonymity. The emergence of NPWs requires new law enforcement and public health initiatives; continued monitoring efforts will determine whether efforts to reduce the availability of NPWs are successful. abstract_id: PUBMED:33325317 Misuse of Prescription and Illicit Drugs in Middle Adulthood in the Context of the Opioid Epidemic. Background: The United States' opioid epidemic continues to escalate overdose deaths. Understanding its extent is complicated by concurrent misuse of other prescription or illicit drugs, increasing risk for overdose. Current surveillance using electronic medical records and police data has limitations and frequently fails to distinguish middle-aged adults from other age groups in reporting. Objectives: The purpose of this analysis is to (1) describe characteristics of middle-aged US adults who report misusing prescription and illicit drugs and (2) evaluate if misusing prescription opioids increases risk of misusing other drugs. Methods: We analyzed data from 12,300 adults ages 32-42 from Wave V of the Add Health study collected from 2016 to 2018.
Self-reported past 30-day misuse of prescription sedatives, tranquilizers, stimulants, and opioids, as well as cocaine, crystal methamphetamine, heroin, and other illicit drugs, was analyzed for associations with demographic characteristics in weighted bivariate analysis and multivariable logistic regression. Results: Those misusing prescription opioids were more likely to misuse prescription sedatives, tranquilizers, and stimulants compared to those not misusing prescription opioids. Those misusing prescription opioids were also more likely to misuse heroin, crystal meth, cocaine, and other illicit drugs. Higher levels of education and personal income were protective for prescription opioid misuse, any prescription drug misuse, and any illicit drug misuse. Race/ethnicity was not significantly associated with prescription opioid misuse. Conclusions/Importance: Our analysis shows those misusing prescription opioids are at high risk of misusing other prescription and illicit drugs. Practitioners and researchers should consider concurrent drug misuse when treating and studying opioid misuse disorders. abstract_id: PUBMED:37802247 Trends in Repeat Opioid Analgesic Prescription Utilization for Acute Pain in Children: 2013-2018. Objective: Our study examined the change in repeat opioid analgesic prescription trends in children and adolescents experiencing acute pain between 2013 and 2018. Methods: Eligible individuals were children and adolescents between 1 and 17 years of age enrolled in a Medicaid Managed Care plan who filled an incident opioid analgesic prescription from 2013 to 2018. A repeat opioid prescription was defined as receiving a subsequent opioid prescription within 30 days from the end of the incident opioid prescription. A generalized linear regression analysis was conducted to examine changes in repeat opioid analgesic dispensing over time at quarterly intervals from January 1, 2013, to December 31, 2018. Results: The cohort comprised 17,086 children and adolescents receiving an incident opioid analgesic. Of these, 1780 (10.4%) filled a repeat opioid analgesic prescription. There was a significant decline in the repeat opioid analgesic trend from 11.5% in Q1 2013 to 9.6% in Q4 2018. Stratified analyses by age, sex, and race and ethnicity in a sub-cohort of patients undergoing surgical procedures showed that a significant decline in repeat opioid utilization over time has been observed in all racial/ethnic groups stratified by age and sex, with the most significant decline found in non-Hispanic White children and Hispanic adolescents. At the end of the 6-year follow-up, the racial and ethnic variations in repeat opioid utilization associated with surgical procedures had significantly decreased in children yet persisted among adolescents. Conclusions: Approximately 10% of incident pediatric opioid analgesic recipients received a repeat opioid prescription. There has been a moderate but steady decline (∼7% per quarter) in repeat opioid analgesic utilization between 2013 and 2018. Answer: The role of the Internet as a source for prescription opioid analgesic abusers appears to be relatively minor based on the available data. A study that asked 1,116 prescription drug abusers admitted for treatment about their sources of drugs found that dealers, friends or relatives, and doctors' prescriptions were the most frequently listed sources, with each mentioned by approximately 50-65% of respondents. In contrast, the Internet was mentioned by fewer than 6% of the total responders.
Additionally, the study attempted to purchase Schedule II and III opioids from a random sample of Internet sites and was unsuccessful in obtaining any scheduled opioid analgesics, although tramadol, an unscheduled opioid, was freely available (PUBMED:18816331). Another study examining the impact of Internet pharmacy regulation on opioid analgesic availability found a significant decrease in the availability of these drugs from online pharmacies, which may be related to increased regulation and law enforcement operations directed against Internet pharmacies (PUBMED:18781245). Further analysis of five national data sets from the U.S. also yielded uniformly low rates of prescription drug acquisition from online sources across diverse populations, suggesting that the Internet is a relatively minor source for illicit purchases of prescription medications by individual end-users (PUBMED:20227199). In the UK, however, a study found that prescription-only analgesics, including controlled opioids, were readily available from Internet websites, often without a valid prescription, indicating a potential threat to patient health (PUBMED:19154446). Overall, while the Internet does provide a channel for the illegal purchase of drugs, including psychotropic drugs and opioids, as evidenced by anecdotal reports and specific cases (PUBMED:17457555), the data suggest that it is not the primary source for prescription opioid analgesic abusers. Instead, traditional sources such as dealers, friends, relatives, and prescriptions remain the predominant means of obtaining these drugs for misuse (PUBMED:36987981). The Internet's role as a source of drugs of abuse is an area that requires ongoing monitoring and law enforcement efforts to assess and mitigate its impact (PUBMED:16968618).
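One recurring methods detail in this record is inter-rater agreement: PUBMED:18781245 reports a kappa of 0.845 between coders judging forum threads. As a hedged illustration of that statistic, the sketch below computes Cohen's kappa for two hypothetical coders; the ratings are invented for demonstration and do not reproduce the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes from two independent raters of forum threads:
# 1 = "availability of opioid analgesics decreasing", 0 = otherwise.
rater_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0]
rater_b = [1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.3f}")  # values near 1 indicate strong agreement
```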
Instruction: Can community surgeons perform laparoscopic colorectal surgery with outcomes similar to tertiary care centres? Abstracts: abstract_id: PUBMED:17550713 Can community surgeons perform laparoscopic colorectal surgery with outcomes similar to tertiary care centres? Introduction: The use of the laparoscopic approach in colorectal surgery (LCS) is the subject of active debate. Studies demonstrating its safety and feasibility in tertiary care centres are now available. The aim of this study was to examine the results of LCS performed in a community hospital setting. Methods: We prospectively studied 100 patients who underwent an LCS at the North Bay District Hospital (a 200-bed community hospital located 350 km away from the nearest tertiary care centre). All operations were performed by 2 community surgeons who transitioned themselves from an open to a laparoscopic approach. Results: Between October 2000 and December 2003, 100 patients (56 women and 44 men, mean age 64 yr) underwent an LCS for benign (n = 54) and malignant (n = 46) disease. Median operating time was 165 minutes (range 70-350 min), and the conversion rate was 10%. The intraoperative complication rate was 3%. There were 10 major postoperative complications and 14 minor postoperative complications. There was no intraoperative mortality and one 30-day mortality secondary to cardiogenic shock. The median length of stay was 4.5 days (range 2-45 d). At a mean follow-up of 18 months, no trocar site or wound recurrences were noted. The mean number of resected lymph nodes was 10.6. Conclusion: Our study suggests that it is possible for community surgeons to transition themselves from an open to a laparoscopic approach and to perform LCS with outcomes similar to those of tertiary care centres. abstract_id: PUBMED:18437481 Can community surgeons perform laparoscopic colorectal surgery with outcomes equivalent to tertiary care centers? Background: Laparoscopic colorectal surgery (LCS) performed in tertiary care centers has been well studied. It has been shown to provide improved short-term outcomes and comparable long-term outcomes to the conventional open approach. However, LCS performed in a community hospital setting has not been well studied. In a previous paper, we presented the short-term outcomes of 100 LCS performed by two community surgeons with no formal training in LCS. In this follow-up study, we present both short- and longer-term outcomes for 250 patients who underwent LCS. Methods: This is a prospective study of 250 consecutive patients who underwent LCS at the North Bay District Hospital (a 200-bed community hospital located 350 km away from the nearest tertiary care center). Results: Between October 2000 and October 2006, 250 consecutive patients (130 women and 120 men, mean age of 64.4 +/- 13.7 years) underwent LCS for benign (N = 129) and malignant (N = 121) disease. Median operating time was 215.0 min (58.0-475.0 min) and the conversion rate was 7.2%. The intraoperative complication rate was 2.8%. There were 20 (8.0%) major postoperative complications and 42 (16.8%) minor postoperative complications. There was no intraoperative mortality. There were six 30-day mortalities due to ischemic bowel (1), stroke (1), myocardial infarction (3), and pneumonia (1). The median length of stay was 4.0 days (2.0-55.0 days). Disease-free survival for stages I-IV colorectal cancer (CRC) was 100, 97.2, 71.4, and 10% for a mean follow-up time of 36.9, 29.3, 27.9, and 21.1 months, respectively.
The mean number of resected lymph nodes was 11.5 +/- 8.6. Conclusion: We note that both our short- and longer-term outcomes are similar to tertiary care centers. We therefore conclude that LCS can be performed in a community hospital setting with both short- and longer-term outcomes similar to tertiary care centers. abstract_id: PUBMED:34051717 Awareness and Practice of Laparoscopic Surgery among Trainee Surgeons in Nigerian Tertiary Hospitals. Background: The advent of laparoscopy has been a notable landmark in surgery; however, there has been slow progress toward widespread utilization in West Africa. Aims: To study the awareness and practice of laparoscopic surgery among trainee surgeons in Nigerian tertiary hospitals while highlighting measures to mitigate challenges. Materials And Methods: A cross-sectional study was conducted during a 2-week West African College of Surgeons update course in September 2018 at Ilorin, Kwara State, Nigeria. A structured questionnaire was distributed to registered trainee surgeons for completion. Data collated included demographics, cognitive knowledge, common procedures in centres, referrals, routine practice, and performing laparoscopic surgeon. Statistical analysis was done using IBM SPSS Statistics for Windows version 20 (Armonk, NY, USA). Results: There were 184 registered trainee surgeons with 80 respondents from 26 Nigerian tertiary health facilities. The age range was 29-51 years (mean 35.0 ± 4.4) and a mean training duration of 3.3 years (R2 = 0.12). Seven (63.6%) senior registrars and 54 (76.3%) registrars were reported as first assistants in the laparoscopic surgeries performed, but no unassisted surgery was reported. Four (15.4%) represented centres had no laparoscopy equipment or expertise. A non-referral rate of 52/80 (65.0%) for laparoscopic surgery was recorded. Conclusion: Laparoscopic surgery is practiced in some Nigerian tertiary hospitals, with trainee surgeons actively involved in performing these surgeries. However, there is limited unassisted experience by trainee surgeons in the basic laparoscopic surgeries predominantly performed. abstract_id: PUBMED:38449997 Short-Term Outcomes of First 100 Laparoscopic Colorectal Surgeries at a Newly Developed Surgical Setup at Peshawar. Background: The incidence of colorectal cancer (CRC) has risen steadily, necessitating innovative strategies for diagnosis and treatment. Minimally invasive surgery, exemplified by laparoscopic techniques, has emerged as a transformative approach in colorectal surgical practices. Laparoscopy offers advantages such as improved aesthetic outcomes, reduced post-operative pain, early patient mobilization, and shorter hospital stays. Objective: This study aims to present the short-term surgical outcomes of the first 100 elective laparoscopic CRC resections performed at a newly established tertiary care cancer center in Peshawar, Pakistan. Materials And Methods: Data were prospectively collected for CRC resections performed between April 2021 and February 2022. The study included patients above 18 years of age with biopsy-proven CRC. Surgical procedures were performed by two dedicated colorectal surgeons trained in minimally invasive surgery. Patient demographics, pre-operative factors, intraoperative parameters, and post-operative outcomes were systematically recorded and analyzed. Results: Among the 100 cases included in the study, laparoscopic colorectal surgeries were successfully performed without any conversions to open surgery.
The mean age of the study population was 52.5 years, with a male-to-female ratio of 2:1. Cases were divided between colon (48%) and anorectal (52%) cancers. The mean lymph node yield was 18.29 (range 6-49). Only one patient required a re-look laparoscopy for a pelvic hematoma, and overall mortality was reported at 1%. Conclusion: Laparoscopic colorectal surgery is a safe and effective treatment option for elective colorectal operations with minimal post-operative complications and favorable short-term outcomes. abstract_id: PUBMED:26490770 The current status of emergent laparoscopic colectomy: a population-based study of clinical and financial outcomes. Background: Population-based studies evaluating laparoscopic colectomy and outcomes compared with open surgery have concentrated on elective resections. As such, data assessing non-elective laparoscopic colectomies are limited. Our goal was to evaluate the current usage and outcomes of laparoscopy in the urgent and emergent setting in the USA. Methods: A national inpatient database was reviewed from 2008 to 2011 for right, left, and sigmoid colectomies in the non-elective setting. Cases were stratified by approach into open or laparoscopic groups. Demographics, perioperative clinical variables, and financial outcomes were compared across each group. Results: A total of 22,719 non-elective colectomies were analyzed. The vast majority (95.8%) were open. Most cases were performed in an urban setting at non-teaching hospitals by general surgeons. Colorectal surgeons were significantly more likely to perform a case laparoscopically than general surgeons (p < 0.001). Demographics were similar between open and laparoscopic groups; however, the disease distribution by approach varied, with significantly more severe cases in the open colectomy arm (p < 0.001). Cases performed laparoscopically had significantly better mortality and complication rates. Laparoscopic cases also had significantly improved outcomes, including shorter length of stay and lower hospital costs (all p < 0.001). Conclusions: Our analysis revealed less than 5% of urgent and emergent colectomies in the USA are performed laparoscopically. Colorectal surgeons were more likely to approach a case laparoscopically than general surgeons. Outcomes following laparoscopic colectomy in this setting resulted in reduced length of stay, lower complication rates, and lower costs. Increased adoption of laparoscopy in the non-elective setting should be considered. abstract_id: PUBMED:35245094 Consecutive Laparoscopic Colorectal Resections in a Single Workday by the Same Surgeon: Efficient or Risky? Background: As laparoscopic colorectal surgery (LCS) continues to increase worldwide, surgeons may need to perform more than one LCS per day to accommodate this higher demand. We aimed to determine the safety of performing consecutive LCSs by the same surgeon in a single workday. Materials and Methods: Consecutive LCSs performed by the same surgeon from 2006 to 2019 were included. The sample was divided into two groups: patients who underwent the first (G1) and those who underwent the second and the third (G2) colorectal resections in a single workday. LCSs were stratified into level I (low complexity), level II (medium complexity), and level III (high complexity). Demographics, operative variables, and postoperative outcomes were compared between groups. Results: From a total of 1433 LCSs, 142 (10%) were included in G1 and 158 (11%) in G2.
There was a higher rate of complexity level III LCS (G1: 23% versus G2: 6%, P < .0001) and a longer operative time (G1: 160 minutes versus G2: 139 minutes, P = .002) in G1. There were no differences in anastomotic leak, overall morbidity, or mortality rates. Mean length of hospital stay and readmission rates were similar between groups. Conclusion: Multiple consecutive laparoscopic colorectal resections can be safely performed by the same surgeon in a single workday. This efficient strategy should be encouraged at high-volume centers with experienced colorectal surgeons. abstract_id: PUBMED:26527560 Obese patients have similar short-term outcomes to non-obese in laparoscopic colorectal surgery. Aim: To determine whether obese patients undergoing laparoscopic surgery within an enhanced recovery program had worse short-term outcomes. Methods: A prospective study of consecutive patients undergoing laparoscopic colorectal resection was carried out between 2008 and 2011 in a single institution. Patients were divided into groups based on body mass index (BMI). Short-term outcomes including operative data, length of stay, complications and readmission rates were recorded and compared between the groups. Continuous data were analysed using the t-test or one-way analysis of variance. The χ2 test was used to compare categorical data. Results: Two hundred and fifty-four patients were included over the study period. The majority of individuals (41.7%) recruited were of a healthy weight (BMI < 25), whilst 50 patients were classified as obese (19.6%). Patients were matched in terms of the presence of co-morbidities and previous abdominal surgery. Obese patients were found to have a statistically significant difference in American Society of Anesthesiologists grade. Length of surgery and intra-operative blood loss were no different according to BMI. Conclusion: Obesity (BMI > 25) does not lead to worse short-term outcomes in laparoscopic colorectal surgery and therefore such patients should not be precluded from laparoscopic surgery. abstract_id: PUBMED:33784279 Safeness and reproductivity of a laparoscopic colorectal program in two tertiary care academic centers of South America.
Objective: To evaluate the feasibility and safeness of a colorectal laparoscopic program in two centers from South America. Method: We retrospectively reviewed the records of patients who underwent laparoscopic colorectal surgery from 2012 to 2018 in two tertiary care academic centers. Surgical indication, operative time, conversion rate, lymph nodes harvested, surgical margins and complications were analyzed. These results were then compared with those of the open approach. Results: We collected data from 505 patients, mean age 63.4, 50.9% male. The most frequent indication was colon cancer, and mean operative time was 175 minutes. The conversion rate was 9.5%, and the mean number of nodes harvested was 15.9, with free resection margins in every case. Morbidity was 35.4% at 30 days, most of them minor complications. The leak rate was 11.7%. The 30-day mortality was 2.5%. Conclusion: The laparoscopic approach for colorectal surgery might represent a safe and feasible option in a tertiary care hospital from a developing country. abstract_id: PUBMED:28255631 Mentored Trainees have Similar Short-Term Outcomes to a Consultant Trainer Following Laparoscopic Colorectal Resection. Background: Laparoscopic colorectal surgery has a long learning curve. Using a modular-based training programme may shorten this. Concerns with laparoscopic surgery have centred on oncological compromise and poor surgical outcomes when training more junior surgeons. This study aimed to compare operative and oncological outcomes between trainees undergoing a mentored training programme and a consultant trainer. Methods: A prospective study of all elective laparoscopic colorectal resections was undertaken in a single institution. Operative and oncological outcomes were recorded. All trainees were mentored by a National Laparoscopic Trainer (Lapco), and results between trainer and trainees were compared. Results: Three hundred cases were included, with 198 (66%) performed for cancer. The trainer undertook 199 (66%) of operations, whilst trainees performed 101 (34%). Anterior resection was the commonest operation (n = 124, 41%). There were no differences between trainer and trainees for the majority of surgical outcomes, including blood loss (p = 0.598), conversion to open (p = 0.113), anastomotic leak (p = 0.263), readmission (p = 1.000) and death rates (p = 0.549). Only length of stay (p = 0.034), stoma formation (p < 0.01) and operative duration (p = 0.007) were higher in the trainer cohort, reflecting the more complex cases undertaken. Overall, there were no significant differences in either short- or longer-term oncology outcomes according to the grade of operating surgeon, including lymph nodes in specimen, circumferential resection margin and 1- and 2-year radiological recurrence. Conclusion: When a modular-based training system was combined with case selection, both clinical and histopathological outcomes following resectional laparoscopic colorectal surgery were similar between trainees and trainer. This should encourage the use of more training opportunities in laparoscopic colorectal surgery. abstract_id: PUBMED:27928671 Effective stepwise training and procedure standardization for young surgeons to perform laparoscopic left hepatectomy. Background: Laparoscopic hepatectomy remains one of the most difficult procedures for young surgeons to perform. We recently developed a new training method and standardization procedure for teaching young surgeons to perform laparoscopic left hepatectomy (Lap-LHx).
The aim of this study was to assess the effectiveness of our method. Methods: In 2004, we standardized a laparoscopic procedure for Lap-LHx, using a laparoscopy-assisted method as a stepping stone. The laparoscopic training method comprised the following three steps: (1) training in fundamental procedures using a dry box and checking by mentors; (2) detailed preoperative simulation using Vincent three-dimensional software for each patient; and (3) self-assessment including understanding of relevant anatomy and completion grade for each procedure using a check sheet and feedback by both mentors and a professor. Twenty-three Lap-LHx procedures performed during the study period were divided into two groups: those performed by young non-board-certified surgeons (n = 9) and those performed by senior board-certified surgeons (n = 14). Results: The blood loss and operative time were similar in the young surgeon (194 g and 336 min, respectively) and senior surgeon groups (208 g and 322 min, respectively). Conclusion: Our standardized Lap-LHx procedure and stepwise training to perform it enable young surgeons to perform Lap-LHx as confidently and safely as more experienced surgeons. Answer: Yes, community surgeons can perform laparoscopic colorectal surgery (LCS) with outcomes similar to tertiary care centers. Studies have shown that with appropriate training and experience, community surgeons can achieve comparable results. For instance, a study conducted at the North Bay District Hospital, a 200-bed community hospital, demonstrated that two community surgeons who transitioned from an open to a laparoscopic approach were able to perform LCS with outcomes similar to those of tertiary care centers. The study reported a median operating time of 165 minutes, a conversion rate of 10%, and a median length of stay of 4.5 days, with no trocar site or wound recurrences noted at a mean follow-up of 18 months. The mean number of resected lymph nodes was 10.6, indicating adequate oncologic resection (PUBMED:17550713). A follow-up study with a larger sample size of 250 patients at the same community hospital showed similar findings, with a median operating time of 215.0 minutes, a conversion rate of 7.2%, and a median length of stay of 4.0 days. The study also reported disease-free survival rates for stages I-IV colorectal cancer that were comparable to tertiary care centers, with a mean number of resected lymph nodes of 11.5. These results suggest that LCS can be performed in a community hospital setting with both short- and longer-term outcomes similar to tertiary care centers (PUBMED:18437481). Therefore, the evidence suggests that with proper training and experience, community surgeons can indeed perform laparoscopic colorectal surgery with outcomes that are equivalent to those achieved in tertiary care centers.
Instruction: Is there a relationship between fatigue perception and the serum levels of thyrotropin and free thyroxine in euthyroid subjects? Abstracts: abstract_id: PUBMED:22966868 Is there a relationship between fatigue perception and the serum levels of thyrotropin and free thyroxine in euthyroid subjects? Background: Thyrotoxicosis and hypothyroidism are associated with fatigue. Here we studied euthyroid subjects to determine if there was a relationship between serum thyrotropin (TSH), free thyroxine (FT(4)) and thyroperoxidase antibodies and fatigue. Methods: A total of 5897 participants of the Nijmegen Biomedical Study received a questionnaire and serum TSH (normal range 0.4-4.0 mIU/L) and FT(4) (normal range 8-22 pmol/L) were measured. Fatigue was evaluated by the RAND-36 and the shortened fatigue questionnaire (SFQ). Results: Euthyroid subjects with a serum TSH level of 0.4-1.0 mIU/L had a lower RAND-36 vitality score (65.2 vs. 66.8; regression coefficient (RC) -1.6 [95% confidence interval (CI) -2.6 to -0.5]; p=0.005) and a higher SFQ score (11.7 vs. 11.0; RC 0.6 [CI 0.2-1.0]; p=0.004) than those with a TSH of 1.0-2.0 mIU/L. Those with a serum FT(4) of 18.5-22 pmol/L reported fatigue more often (52.5% vs. 33.3%; relative risk (RR) 1.4 [CI 1.0-1.9]; p=0.03), had a lower RAND-36 vitality score (61.7 vs. 66.6; RC -4.4 [CI -8.1 to -0.6]; p=0.02) and a higher SFQ score (13.2 vs. 11.0; RC 1.9 [CI 0.4-3.3]; p=0.01) than subjects with a FT(4) level of 11.5-15 pmol/L. In comparison to euthyroid subjects without known thyroid disease, euthyroid subjects with previously known thyroid disease reported fatigue more often (52.3% vs. 34.0%; RR 1.3 [CI 1.0-1.5]; p=0.025), had a lower RAND-36 vitality score (61.4 vs. 66.3; RC -2.9 [CI -5.3 to -0.6]; p=0.015) and a higher SFQ score (13.7 vs. 11.1; RC 1.4 [CI 0.5-2.3]; p=0.002). Conclusion: In euthyroid individuals without a history of thyroid disease, there is a modest relationship between thyroid function and fatigue with subjects having an apparently higher production of T(4) experiencing more fatigue. Subjects with a history of thyroid disease, but with normal TSH and FT(4) concentrations, experience more fatigue than the general population. The reasons for this are unclear, but subtle abnormalities in the dynamics of thyroid hormone secretion should be considered. abstract_id: PUBMED:29144817 SEASONAL VARIATION OF VITAMIN D AND SERUM THYROTROPIN LEVELS AND ITS RELATIONSHIP IN A EUTHYROID CAUCASIAN POPULATION. Objective: It is unclear whether seasonal variations in vitamin D concentrations affect the hypothalamo-pituitary-thyroid axis. We investigated the seasonal variability of vitamin D and serum thyrotropin (TSH) levels and their interrelationship. Methods: Analysis of 401 patients referred with nonspecific symptoms of tiredness who had simultaneous measurements of 25-hydroxyvitamin D3 (25[OH]D3) and thyroid function. Patients were categorized according to the season of blood sampling and their vitamin D status. Results: 25(OH)D3 levels were higher in spring-summer season compared to autumn-winter (47.9 ± 22.2 nmol/L vs. 42.8 ± 21.8 nmol/L; P = .02). Higher median (interquartile range) TSH levels were found in autumn-winter (1.9 [1.2] mU/L vs. 1.8 [1.1] mU/L; P = .10). Across different seasons, 25(OH)D3 levels were observed to be higher in lower quartiles of TSH, and the inverse relationship was maintained uniformly in the higher quartiles of TSH. 
An independent inverse relationship could be established between 25(OH)D3 levels and TSH by regression analysis across both season groups (autumn-winter: r = -0.0248; P < .00001 and spring-summer: r = -0.0209; P < .00001). We also observed that TSH varied according to 25(OH)D3 status, with higher TSH found in patients with vitamin D insufficiency or deficiency in comparison to patients who had sufficient or optimal levels across different seasons. Conclusion: Our study shows seasonal variability in 25(OH)D3 production and TSH secretion in euthyroid subjects and that an inverse relationship exists between them. Further studies are needed to see if vitamin D replacement would be beneficial in patients with borderline thyroid function abnormalities. Abbreviations: 25(OH)D2 = 25-hydroxyvitamin D2; 25(OH)D3 = 25-hydroxyvitamin D3; AITD = autoimmune thyroid disease; FT4 = free thyroxine; TFT = thyroid function test; TSH = thyrotropin; UVB = ultraviolet B. abstract_id: PUBMED:12015476 Heart failure accompanied by sick euthyroid syndrome and exercise training. Sick euthyroid syndrome is defined as a decrease of serum free triiodothyronine with normal free L-thyroxine and thyrotropin. Its appearance in patients with chronic heart failure is an indicator of severity. Exercise training through a wide variety of mechanisms reverses sick euthyroid syndrome (normalization of free triiodothyronine levels) and improves the ability to exercise. There is a connection during exercise among dyspnea, hyperventilation, fatigue, catecholamines, a decrease in the number and function of beta-adrenergic receptors, and elevation of serum free triiodothyronine. It is not known whether sick euthyroid syndrome contributes to the development of heart failure or is only an attendant syndrome. abstract_id: PUBMED:8323787 Thyroxine prescription in the community: serum thyroid stimulating hormone level assays as an indicator of undertreatment or overtreatment. Examination of thyroxine usage in a study in the United States of America revealed that many patients were prescribed thyroxine for non-thyroid indications, such as obesity and fatigue. Many of those receiving thyroxine had high or low serum thyroid stimulating hormone levels, indicating prescription of incorrect doses or lack of patient compliance with therapy. Long term thyroxine therapy may have effects upon the risk of osteoporosis. The aims of this study were to investigate indications for thyroxine prescription in the United Kingdom and to examine the frequency of abnormal serum thyroid stimulating hormone concentrations in those prescribed thyroxine for hypothyroidism. This was in order to determine the relevance of measurement of thyroid stimulating hormone level in monitoring thyroxine therapy. Subjects receiving thyroxine were identified from the computerized prescribing records of four general practices in the West Midlands. Of 18,944 patients registered, 146 (0.8%) were being prescribed thyroxine; 134 of these had primary hypothyroidism and the remainder had other thyroid or pituitary diseases prior to treatment. Of the 97 patients with primary hypothyroidism who agreed to have their thyroid stimulating hormone level measured, abnormal serum levels were found in 48%, high levels in 27% and low levels in 21%.
There was a significant relationship between prescribed thyroxine dose and median serum thyroid stimulating hormone level: high hormone levels were found in 47% of those prescribed less than 100 micrograms thyroxine per day, while low levels were found in 24% of those prescribed 100 micrograms or more. Thus, thyroxine prescription was common in the four practices sampled, although indications for its use were appropriate. abstract_id: PUBMED:15622018 Preliminary study of the relationship between thyroid status and cognitive and neuropsychiatric functioning in euthyroid patients with Alzheimer dementia. Objective: To investigate whether variations within normal ranges of thyroid functioning are related to cognitive and neuropsychiatric functioning in Alzheimer disease (AD). Background: Mild alterations of thyroid hormone levels, even in the normal range, are associated with changes in mood and cognitive functioning in older, nondemented adults, and lower concentrations of thyroid hormones have been shown to be associated with an increased risk for cognitive decline. Less is known about the relationship between thyroid hormone levels and cognitive and neuropsychiatric dysfunction in AD. Method: Twenty-eight euthyroid patients with AD on donepezil underwent evaluation of thyroid status, including measures of thyroid-stimulating hormone (TSH) and free thyroxine (FT4), and cognitive and neuropsychiatric assessment with the Alzheimer's Disease Assessment Scale, Neuropsychiatric Inventory, and Visual Analog Mood Scales. Results: Correlational analyses indicated statistically significant associations between FT4 concentrations and self-reported feelings of fear and fatigue. Fear and fatigue were negatively correlated with FT4. There were no significant relationships between thyroid hormones and cognition and other depressive and anxiety symptoms. Conclusions: Results of this preliminary study support a relationship between thyroid status and neuropsychiatric symptoms in euthyroid individuals with AD, with lower concentrations of FT4 associated with fear and fatigue. abstract_id: PUBMED:22077961 Higher free thyroxine levels are associated with frailty in older men: the Health In Men Study. Objective: Frailty is common in the elderly and predisposes to ill-health. Some symptoms of frailty overlap those of thyroid dysfunction, but it is unclear whether differences in thyroid status influence risk of frailty. We evaluated associations between thyroid status and frailty in older men. Design: Cross-sectional epidemiological study. Participants: Community-dwelling men aged 70-89 years. Measurements: Circulating thyrotropin (TSH) and free thyroxine (FT(4)) were assayed. Frailty was assessed as ≥3 of the Fatigue, Resistance, Ambulation, Illnesses and Loss (FRAIL) scale's 5 domains: fatigue; resistance (difficulty climbing a flight of stairs); ambulation (difficulty walking 100 m); illness (>5); or weight loss (>5%), blinded to hormone results. Results: Of 3943 men, 27 had subclinical hyperthyroidism, 431 subclinical hypothyroidism and 608 were classified as being frail (15·4%). There was an inverse log-linear association of TSH with FT(4). There was no association between TSH and frailty. After adjusting for covariates, men with FT(4) in the highest two quartiles had increased odds of being frail (Q3:Q1, odds ratio [OR] = 1·32, 95% confidence interval [CI] = 1·01-1·73 and Q4:Q1, OR = 1·36, 95% CI = 1·04-1·79, P = 0·010 for trend).
Higher FT(4) was associated with fatigue (P = 0·038) and weight loss (P < 0·001). The association between FT(4) and frailty remained significant when the analysis was restricted to euthyroid men. Conclusions: High-normal FT(4) level is an independent predictor of frailty among ageing men. This suggests that even within the euthyroid range, circulating thyroxine may contribute to reduced physical capability. Further studies are needed to clarify the utility of thyroid function testing and the feasibility of preventing or reversing frailty in older men. abstract_id: PUBMED:21615309 Effects of physical activity on body composition and fatigue perception in patients on thyrotropin-suppressive therapy for differentiated thyroid carcinoma. Background: Subclinical thyrotoxicosis (scTox) may be associated with alterations in body composition and fatigue that can be possibly reversed with physical activity. The aim of the present study was to evaluate whether the systematic practice of physical activity improves lower extremity muscle mass and fatigue perception in patients with scTox. Materials And Methods: We studied 36 patients (2 men) with median age of 48.0 (43.0-51.0) years, body mass index of 27.4 (22.1-30.2) kg/m(2), thyrotropin <0.4 mU/L, and free thyroxine between 0.8 and 1.9 ng/dL and 48 control subjects (C group; 7 men). Patients were randomly divided into two groups according to their adherence to the exercise training: scTox-Tr (n = 19), patients who adhered to the exercise intervention, and scTox-Sed (n = 17), patients who did not adhere to it. The C group did not participate in the randomization. The exercise training was supervised by a physical education instructor, and it was composed of 60 minutes of aerobic activity and stretching exercises, twice a week, for 12 weeks. In both groups, body composition was assessed (anthropometric method), and the Chalder Fatigue Scale was determined at baseline and after 3 months of intervention (scTox-Tr group) or observation (scTox-Sed group). Results: At baseline, patients with scTox had lower muscle mass and mid-thigh girth and more fatigue on the Chalder Fatigue Scale than euthyroid control subjects. The scTox-Tr group had an increase in muscle mass, reduction in the variables reflecting whole body fat, and lesser perception of fatigue during the exercise training period (p ≤ 0.05 for these parameters at the start and end of the exercise training period). Conclusions: scTox is associated with lower muscle mass and mid-thigh girth and more fatigue. Physical activity training can partially ameliorate these characteristics. More studies are needed to determine what training program would be optimum, both in terms of beneficial effects and for avoiding potential adverse responses. abstract_id: PUBMED:11904108 A 6-month randomized trial of thyroxine treatment in women with mild subclinical hypothyroidism. Purpose: The role of thyroxine replacement in subclinical hypothyroidism remains unclear. We performed a 6-month randomized, double-blind, placebo-controlled trial to evaluate the effects of thyroxine treatment for mild subclinical hypothyroidism, defined as a serum thyroid-stimulating hormone level between 5 and 10 microU/mL with a normal serum free thyroxine level (0.8-1.6 ng/dL). Subjects And Methods: We randomly assigned 40 women with mild subclinical hypothyroidism who had presented to their family practitioners to either thyroxine treatment (n = 23; 50 to 100 microg daily) or placebo (n = 17).
Health-related quality of life (Hospital Anxiety and Depression scale, 30-item General Health Questionnaire), fasting lipid profiles, body weight, and resting energy expenditure were measured at baseline and 6 months. Results: The most common presenting symptoms were fatigue (n = 33 [83%]) and weight gain (n = 32 [80%]). At presentation, 20 women (50%) had elevated anxiety scores and 22 (56%) had elevated scores on the General Health Questionnaire. Thirty-five women completed the study. There were no significant differences in the changes from baseline to 6 months between women in the thyroxine group and the placebo group for any of the metabolic, lipid, or anthropometric variables measured, expressed as the mean change in the thyroxine group minus the mean change in the placebo group: body mass index, -0.3 kg/m(2) (95% confidence interval [CI]: -0.9 to 0.4 kg/m(2)); resting energy expenditure, -0.2 kcal/kg/24 h (95% CI: -1.3 to 1.0 kcal/kg/24 h); and low-density lipoprotein cholesterol, -4 mg/dL (95% CI: -23 to 15 mg/dL). There was a significant worsening in anxiety scores in the thyroxine group (scores increased in 8 of 20 women and were unchanged in 2 of 20) compared with the placebo group (scores increased in 1 of 14 women and were unchanged in 6 of 14; P = 0.03). Conclusions: We observed no clinically relevant benefits from 6 months of thyroxine treatment in women with mild subclinical hypothyroidism. abstract_id: PUBMED:36378137 Association between thyroid hormone levels and frailty in the community-dwelling oldest-old: a cross-sectional study. Background: Changes in thyroid hormone levels are commonly recognized characteristics of the elderly and have been reported to potentially influence incident frailty. Therefore, we examined the cross-sectional associations of thyroid hormones (THs) with frailty as well as the five components characterizing frailty (fatigue, resistance, ambulation, number of illnesses, and loss of weight) among the oldest-old. Methods: Four hundred and eighty-seven community-dwelling oldest-old from a local community in Haidian District, Beijing, participated in our recruitment campaign between April 2019 and May 2020. The primary outcomes were a definitive diagnosis of frailty according to the FRAIL scale (Fatigue, Resistance, Ambulation, Illnesses, Loss of weight) and a positive score for each frailty subdomain. Demographic information (age, sex, marital status, and educational status), comorbidities, and details on the participants' lifestyles were recorded. Serum THs including free triiodothyronine (fT3), triiodothyronine (T3), free thyroxine (fT4), and thyroxine (T4) and thyroid stimulating hormone (TSH) levels were also measured at the beginning of our study. Logistic regressions were conducted to screen for potential risk factors for frailty and its subdomains. Results: Among the total 487 subjects at enrollment, 60 (12.23%) were diagnosed with subclinical hypothyroidism and 110 (22.59%) of the total population scored positive for frailty. Logistic regression analyses adjusted for all potential confounders showed that frailty was significantly associated with the serum TSH concentration (odds ratio [OR]: 1.06), fT3 concentration (OR: 0.54), and subclinical hypothyroidism score (OR: 2.18). The association between fT4 and frailty was absent in our observational study. The fT3/fT4 ratio characterizing peripheral hormone conversion was also found to be correlated with frailty.
Conclusion: Subclinical hypothyroidism, higher TSH level, lower fT3 level, and decreased fT3/fT4 ratio were all associated with frailty assessed by the FRAIL scale among the community-dwelling oldest-old, suggesting a relevant role of thyroid function in aging. Future longitudinal studies are warranted to determine the causal relationship between thyroid dysfunction and frailty in the oldest-old. abstract_id: PUBMED:17473069 Health status, mood, and cognition in experimentally induced subclinical hypothyroidism. Objective: The objective of the study was to determine whether subclinical hypothyroidism causes decrements in health status, mood, and/or cognitive function. Design: This was a double-blinded, randomized, crossover study of usual dose l-thyroxine (L-T4) (euthyroid arm) vs. lower dose L-T4 (subclinical hypothyroid arm) in hypothyroid subjects. Patients: Nineteen subjects on L-T4 therapy for primary hypothyroidism participated in the study. Measurements: Subjects underwent measurements of health status, mood, and cognition using validated instruments: Short Form 36, Profile of Mood States, and tests of declarative memory (paragraph recall, complex figure), working memory (N-back, subject ordered pointing, digit span backward), and motor learning (pursuit rotor). The same measures were repeated after 12 wk on each of the study arms. Results: Mean TSH levels increased to 17 mU/liter on the subclinical hypothyroid arm (P < 0.0001). Mean free T4 and free T3 levels remained within the normal range. The Profile of Mood States fatigue subscale and Short Form 36 general health subscale were slightly worse during the subclinical hypothyroid arm. Measures of working memory (N-back, subject ordered pointing) were worse during the subclinical hypothyroid arm. These differences did not depend on mood or health status but were related to changes in free T4 or free T3 levels. There were no decrements in declarative memory or motor learning. Conclusions: We found mild decrements in health status and mood in L-T4-treated hypothyroid subjects when subclinical hypothyroidism was induced in a blinded, randomized fashion. More importantly, there were independent decrements in working memory, which suggests that subclinical hypothyroidism specifically impacts brain areas responsible for working memory. Answer: Yes, there is a relationship between fatigue perception and the serum levels of thyrotropin (TSH) and free thyroxine (FT4) in euthyroid subjects. A study involving 5897 participants of the Nijmegen Biomedical Study found that euthyroid subjects with a serum TSH level of 0.4-1.0 mIU/L had a lower RAND-36 vitality score and a higher shortened fatigue questionnaire (SFQ) score than those with a TSH of 1.0-2.0 mIU/L. Additionally, those with a serum FT4 of 18.5-22 pmol/L reported fatigue more often, had a lower RAND-36 vitality score, and a higher SFQ score than subjects with a FT4 level of 11.5-15 pmol/L. Euthyroid subjects with previously known thyroid disease also reported more fatigue than the general population, despite having normal TSH and FT4 concentrations (PUBMED:22966868). Furthermore, a study on the effects of physical activity on body composition and fatigue perception in patients on thyrotropin-suppressive therapy for differentiated thyroid carcinoma found that subclinical thyrotoxicosis (scTox) is associated with lower muscle mass and more fatigue. Physical activity training can partially ameliorate these characteristics (PUBMED:21615309).
Another study investigating the relationship between thyroid status and cognitive and neuropsychiatric functioning in euthyroid patients with Alzheimer dementia found significant associations between FT4 concentrations and self-reported feelings of fear and fatigue, with fear and fatigue being negatively correlated with FT4 (PUBMED:15622018). Lastly, a study on the association between thyroid hormone levels and frailty in older men found that high-normal FT4 level is an independent predictor of frailty among aging men, suggesting that even within the euthyroid range, circulating thyroxine may contribute to reduced physical capability (PUBMED:22077961). In summary, these studies indicate that there is a modest but significant relationship between thyroid function within the euthyroid range and the perception of fatigue.
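A note on the statistics quoted in this answer: group differences in fatigue are typically summarized either as a regression coefficient with a 95% confidence interval or as a relative risk (RR), such as the RR 1.4 [CI 1.0-1.9] for fatigue at high-normal FT4 reported in PUBMED:22966868. The short Python sketch below shows the standard textbook calculation of an RR with an approximate 95% CI from 2x2 counts; the counts used here are hypothetical illustrations, not data from the cited studies.

import math

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed, z=1.96):
    """Relative risk with an approximate 95% CI via the log-RR standard error."""
    p1 = events_exposed / n_exposed        # risk in the exposed group
    p0 = events_unexposed / n_unexposed    # risk in the comparison group
    rr = p1 / p0
    # standard error of ln(RR) for two independent binomial proportions
    se = math.sqrt(1/events_exposed - 1/n_exposed + 1/events_unexposed - 1/n_unexposed)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Hypothetical example: 40/100 high-FT4 subjects vs. 29/100 mid-range subjects report fatigue.
rr, (lo, hi) = relative_risk(40, 100, 29, 100)
print(f"RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")   # RR = 1.38, 95% CI = (0.93, 2.04)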
Instruction: Are there different predictors of analgesic response between antidepressants and anticonvulsants in painful diabetic neuropathy? Abstracts: abstract_id: PUBMED:26311228 Are there different predictors of analgesic response between antidepressants and anticonvulsants in painful diabetic neuropathy? Background: To investigate baseline demographics and disease characteristics as predictors of the analgesic effect of duloxetine and pregabalin on diabetic peripheral neuropathic pain (DPNP). Methods: Based on data from the COMBO-DN study, a multinational clinical trial in DPNP, the potential impact of baseline characteristics on pain relief after 8-week monotherapy with 60 mg/day duloxetine or 300 mg/day pregabalin was assessed using analyses of covariance. Subgroups of interest were characterized regarding their baseline characteristics and efficacy outcomes. Results: A total of 804 patients were evaluated at baseline. A significant interaction with treatment was observed in the mood symptom subgroups with a larger pain reduction in duloxetine-treated patients having no mood symptoms [Hospital Anxiety and Depression Scale (HADS) depression or anxiety subscale score <11; -2.33 (duloxetine); -1.52 (pregabalin); p = 0.024]. There were no significant interactions with treatment for subgroups by age (<65 or ≥65 years), gender, baseline pain severity [Brief Pain Inventory Modified Short Form (BPI-MSF) average pain <6 or ≥6], diabetic neuropathy duration (≤2 or >2 years), baseline haemoglobin A1c (HbA1c) (<8% or ≥8%), presence of comorbidities and concomitant medication use. Conclusions: Our analyses suggest that the efficacy of duloxetine and pregabalin for initial 8-week treatment in DPNP was consistent across examined subgroups based on demographics and disease characteristics at baseline except for the presence of mood symptoms. Duloxetine treatment appeared to be particularly beneficial in DPNP patients having no mood symptoms. abstract_id: PUBMED:11131263 Antidepressants and anticonvulsants for diabetic neuropathy and postherpetic neuralgia: a quantitative systematic review. To determine the relative efficacy and adverse effects of antidepressants and anticonvulsants in the treatment of diabetic neuropathy and postherpetic neuralgia, published reports were identified from a variety of electronic databases, including Medline, EMBASE, the Cochrane Library and the Oxford Pain Relief Database, and from two previously published reviews. Additional studies were identified from the reference lists of retrieved reports. The relative benefit (RB) and number-needed-to-treat (NNT) for one patient to achieve at least 50% pain relief was calculated from available dichotomous data, as was the relative risk (RR) and number-needed-to-harm (NNH) for minor adverse effects and drug-related study withdrawal. In diabetic neuropathy, 16 reports compared antidepressants with placebo (491 patient episodes) and three compared anticonvulsants with placebo (321). The NNT for at least 50% pain relief with antidepressants was 3.4 (95% confidence interval 2.6-4.7) and with anticonvulsants 2.7 (2.2-3.8). In postherpetic neuralgia, three reports compared antidepressants with placebo (145 patient episodes) and one compared anticonvulsants with placebo (225), giving an NNT with antidepressants of 2.1 (1.7-3) and with anticonvulsants 3.2 (2.4-5).
There was little difference in the incidence of minor adverse effects with either antidepressants or anticonvulsants compared with placebo, with NNH (minor) values of about 3. For drug-related study withdrawal, antidepressants had an NNH (major) of 17 (11-43) compared with placebo, whereas with anticonvulsants there was no significant difference from placebo. Antidepressants and anticonvulsants had the same efficacy and incidence of minor adverse effects in these two neuropathic pain conditions. There was no evidence that selective serotonin reuptake inhibitors (SSRIs) were better than older antidepressants, and no evidence that gabapentin was better than older anticonvulsants. In these trials patients were more likely to stop taking antidepressants than anticonvulsants because of adverse effects. abstract_id: PUBMED:19157324 Use of anticonvulsant drugs for neuropathic painful conditions. Neuropathic pain, a form of chronic pain initiated and sustained by an insult to the peripheral or central nervous system, is a challenge to clinicians as it does not respond well to traditional pain therapies. Although the exact pathophysiology is not known, similarities between epilepsy models and neuropathic pain models justify the rationale for the use of anticonvulsant drugs in the symptomatic management of neuropathic pain disorders. The role of anticonvulsant drugs in the treatment of neuropathic pain is evolving, and various clinical trials have used these anticonvulsants and shown positive results in the treatment of trigeminal neuralgia, painful diabetic neuropathy and postherpetic neuralgia. The availability of newer anticonvulsants tested in higher quality clinical trials has marked a new era in the treatment of neuropathic pain. Gabapentin has the most clearly demonstrated analgesic effect for the treatment of neuropathic pain, specifically for treatment of painful diabetic neuropathy and postherpetic neuralgia. Pregabalin is a newer drug and will soon gain popularity in clinical practice. There is a need for further advances in our understanding of the neuropathic pain syndromes to establish the role of anticonvulsants in the treatment of neuropathic pain. abstract_id: PUBMED:35775075 Diabetes: how to manage diabetic peripheral neuropathy. Diabetic peripheral neuropathy (DPN) is a major complication of diabetes mellitus. Tight glycaemic management focused on lowering haemoglobin A1C and increasing time in the target glucose range along with metabolic risk factor management form the cornerstone of DPN prevention. However, there is limited evidence supporting the efficacy of glycaemic and metabolic control in reducing the symptoms and complications of DPN, including pain once painful DPN develops. DPN treatments include pharmacological agents and non-pharmacological interventions such as foot care and lifestyle modifications. Pharmacological agents primarily address pain symptoms, which affect 25-35% of people with DPN. First-line agents include the anticonvulsants pregabalin and gabapentin, the serotonin-norepinephrine reuptake inhibitors duloxetine and venlafaxine, and secondary amine tricyclic antidepressants, including nortriptyline and desipramine. All agents have unique pharmacological, safety and clinical profiles, and agent selection should be guided by the presence of comorbidities, potential for adverse effects, drug interactions and costs.
Even with the current treatment options, people are commonly prescribed less than the recommended dose of medications, leading to poor management of DPN symptoms and treatment discontinuation. By keeping up with the latest therapy algorithms and treatment options, healthcare professionals can improve the care for people with DPN. abstract_id: PUBMED:10870743 Anticonvulsants (antineuropathics) for neuropathic pain syndromes. Our knowledge about the pathogenesis of neuropathic pain has grown significantly during last two decades. Basic research with animal models of neuropathic pain and human clinical trials with neuropathic pain have accumulated solid evidence that a number of pathophysiologic and biochemical changes take place in the nervous system at a peripheral or central level as a result of the insult or disease. Many similarities between the pathophysiologic phenomena observed in some epilepsy models and neuropathic pain models justify the rationale for the use of anticonvulsant drugs in the symptomatic management of neuropathic pain disorders. Carbamazepine (CBZ) was the first representative from this class of drugs to be studied in clinical trials. It has been used for the treatment of neuropathic pain syndromes, in particular, trigeminal neuralgia (TN), for the longest time of any of the drugs in this class. Results from clinical trials support the use of CBZ in the treatment of TN, painful diabetic neuropathy, and postherpetic neuralgia. The use of CBZ was not studied for complex regional pain syndrome, phantom limb pain, and other neuropathic conditions, however. Phenytoin was the first anticonvulsant to be used as an antinociceptive agent, but based on clinical trials, there is no evidence for its efficacy in relieving neuropathic pain. Newer anticonvulsants have marked a new era in the treatment of neuropathic pain, with clinical trials of higher quality standards. Gabapentin (GBP) has most clearly demonstrated an analgesic effect for the treatment of neuropathic pain, specifically for the treatment of painful diabetic neuropathy and postherpetic neuralgia. Gabapentin has a favorable side effects profile, and based on the results of these studies, it should be considered a first-line treatment for neuropathic pain. Gabapentin mechanisms of action are still not thoroughly defined, but GBP is effective in relieving indexes of allodynia and hyperalgesia in animal models. It still remains to be seen whether GBP is as effective in other painful disorders. One small clinical trial with lamotrigine demonstrated improved pain control in TN. Evidence in support of the efficacy of anticonvulsant drugs in the treatment of neuropathic pain continues to evolve, and benefits have been clearly demonstrated in the case of GBP and CBZ. More advances in our understanding of the mechanisms underlying neuropathic pain syndromes should further our opportunities to establish the role of anticonvulsants in the treatment of neuropathic pain. abstract_id: PUBMED:25897354 Diabetic neuropathic pain: Physiopathology and treatment. Diabetic neuropathy is a common complication of both type 1 and type 2 diabetes, which affects over 90% of the diabetic patients. Although pain is one of the main symptoms of diabetic neuropathy, its pathophysiological mechanisms are not yet fully known. It is widely accepted that the toxic effects of hyperglycemia play an important role in the development of this complication, but several other hypotheses have been postulated. 
The management of diabetic neuropathic pain consists basically of excluding other causes of painful peripheral neuropathy, improving glycemic control as a prophylactic therapy and using medications to alleviate pain. First line drugs for pain relief include anticonvulsants, such as pregabalin and gabapentin, and antidepressants, especially those that act to inhibit the reuptake of serotonin and noradrenaline. In addition, there is experimental and clinical evidence that opioids can be helpful in pain control, mainly if associated with first line drugs. Other agents, including for topical application, such as capsaicin cream and lidocaine patches, have also been proposed to be useful as adjuvants in the control of diabetic neuropathic pain, but the clinical evidence is insufficient to support their use. In conclusion, a better understanding of the mechanisms underlying diabetic neuropathic pain will contribute to the search for new therapies, but also to the improvement of the guidelines to optimize pain control with the drugs currently available. abstract_id: PUBMED:12525267 Antidepressants for chronic neuropathic pain. Tricyclic antidepressants have been used to manage pain for several decades, and are superior treatments for some patients suffering from neuropathic pain. Unfortunately, older antidepressants have dose-limiting side effects that can lead to drug intolerance. The most common are anticholinergic side effects, although some patients experience sexual dysfunction. Cognitive impairment, sedation, and orthostatic hypotension also are relatively common. Tricyclic antidepressants can be lethal in overdose. Several weeks of therapy may be required before antinociception occurs, but tricyclic antidepressants in optimal doses appear to be the most effective treatment for neuropathic pain; this is supported by systematic reviews comparing them with other agents. Newer medications such as atypical antidepressants and anticonvulsants may be overtaking older antidepressants, but the latter should not be overlooked as important options for the management of pain. abstract_id: PUBMED:11129121 Anticonvulsants for neuropathic pain syndromes: mechanisms of action and place in therapy. Neuropathic pain, a form of chronic pain caused by injury to or disease of the peripheral or central nervous system, is a formidable therapeutic challenge to clinicians because it does not respond well to traditional pain therapies. Our knowledge about the pathogenesis of neuropathic pain has grown significantly over the last 2 decades. Basic research with animal and human models of neuropathic pain has shown that a number of pathophysiological and biochemical changes take place in the nervous system as a result of an insult. This property of the nervous system to adapt morphologically and functionally to external stimuli is known as neuroplasticity and plays a crucial role in the onset and maintenance of pain symptoms. Many similarities between the pathophysiological phenomena observed in some epilepsy models and in neuropathic pain models justify the rationale for the use of anticonvulsant drugs in the symptomatic management of neuropathic pain disorders. Carbamazepine, the first anticonvulsant studied in clinical trials, probably alleviates pain by decreasing conductance in Na+ channels and inhibiting ectopic discharges. Results from clinical trials have been positive in the treatment of trigeminal neuralgia, painful diabetic neuropathy and postherpetic neuralgia.
The availability of newer anticonvulsants tested in higher quality clinical trials has marked a new era in the treatment of neuropathic pain. Gabapentin has the most clearly demonstrated analgesic effect for the treatment of neuropathic pain, specifically for treatment of painful diabetic neuropathy and postherpetic neuralgia. Based on the positive results of these studies and its favourable adverse effect profile, gabapentin should be considered the first choice of therapy for neuropathic pain. Evidence for the efficacy of phenytoin as an antinociceptive agent is, at best, weak to modest. Lamotrigine has good potential to modulate and control neuropathic pain, as shown in 2 controlled clinical trials, although another randomised trial showed no effect. There is potential for phenobarbital, clonazepam, valproic acid, topiramate, pregabalin and tiagabine to have antihyperalgesic and antinociceptive activities based on results in animal models of neuropathic pain, but the efficacy of these drugs in the treatment of human neuropathic pain has not yet been fully determined in clinical trials. The role of anticonvulsant drugs in the treatment of neuropathic pain is evolving and has been clearly demonstrated with gabapentin and carbamazepine. Further advances in our understanding of the mechanisms underlying neuropathic pain syndromes and well-designed clinical trials should further the opportunities to establish the role of anticonvulsants in the treatment of neuropathic pain.
Less common side-effects include haematological changes and cardiac arrhythmia with phenytoin and carbamazepine. The introduction of a mechanism-based classification of neuropathic pain, together with new anticonvulsants with a more specific pharmacological action, may lead to more rational treatment for the individual patient with neuropathic pain. abstract_id: PUBMED:24284851 Comparative efficacy and safety of six antidepressants and anticonvulsants in painful diabetic neuropathy: a network meta-analysis. Background: Anticonvulsants and antidepressants are mostly used in the management of painful diabetic neuropathy (PDN). However, there are few direct comparisons between drugs of these classes, making evidence-based decision-making in the treatment of painful diabetic neuropathy difficult. Objectives: This study aimed to perform a network meta-analysis and benefit-risk analysis to evaluate the comparative efficacy and safety of these drugs in PDN treatment. Study Design: Comparative effectiveness study. Setting: Medical Education and Research facility in India. Methods: A comprehensive data search was done in PubMed, Cochrane, and Embase up to August 2012. We then systematically reviewed the studies which compared any of 6 drugs for the management of PDN: amitriptyline, duloxetine, gabapentin, pregabalin, valproate, and venlafaxine or any of their combinations. We performed a random-effects network meta-analysis to rank treatments in terms of efficacy and safety. We chose the number of patients experiencing ≥50% reduction in pain and the number of patient withdrawals due to adverse events (AE) as primary outcomes for efficacy and safety, respectively. We also performed benefit-risk analysis, taking the efficacy outcome as benefit and the safety outcome as risk. Analysis was intention-to-treat. Results: We included 21 published trials in the analysis. Duloxetine, gabapentin, pregabalin, and venlafaxine were shown to be significantly efficacious compared to placebo with odds ratios (OR) of 2.12, 3.98, 2.78, and 4.43, respectively. Amitriptyline (OR: 7.03, 95% confidence interval [CI]: 1.87, 29.05) and duloxetine (OR: 3.26, 95% CI: 1.04, 9.97) caused more withdrawals than gabapentin. The ranking order of efficacy was gabapentin, venlafaxine, pregabalin, duloxetine/gabapentin, duloxetine, amitriptyline, and placebo and the ranking order of safety was placebo, gabapentin, pregabalin, venlafaxine, duloxetine/gabapentin combination, duloxetine, and amitriptyline. Benefit-risk balance favored the order: gabapentin, venlafaxine, pregabalin, duloxetine/gabapentin combination, duloxetine, placebo, and amitriptyline. Limitations: We could not include valproate in our analysis owing to the lack of studies reporting the dichotomous efficacy and safety outcomes. Conclusion: Gabapentin was found to be most efficacious and amitriptyline to be least safe among the treatments included in the study. Gabapentin showed the most favorable balance between efficacy and safety. Answer: Yes, there are different predictors of analgesic response between antidepressants and anticonvulsants in painful diabetic neuropathy (PDN). According to the data from the COMBO-DN study, the presence of mood symptoms was found to be a significant predictor of the analgesic effect of duloxetine, an antidepressant, in patients with diabetic peripheral neuropathic pain (DPNP). Patients treated with duloxetine who had no mood symptoms experienced a larger pain reduction compared to those treated with pregabalin, an anticonvulsant (PUBMED:26311228).
This suggests that mood symptoms may influence the response to antidepressants differently than to anticonvulsants in the treatment of PDN. Furthermore, a network meta-analysis found that gabapentin, an anticonvulsant, was the most efficacious treatment for PDN, while amitriptyline, an antidepressant, was the least safe. The benefit-risk balance favored gabapentin over other treatments, including antidepressants (PUBMED:24284851). This implies that the efficacy and safety profiles of these drugs can be predictors of their analgesic response in PDN. Additionally, a quantitative systematic review found that both antidepressants and anticonvulsants had similar efficacy and incidence of minor adverse effects in treating neuropathic pain conditions like diabetic neuropathy. However, patients were more likely to stop taking antidepressants than anticonvulsants due to adverse effects, which could also be considered a predictor of analgesic response (PUBMED:11131263). Overall, the predictors of analgesic response between antidepressants and anticonvulsants in PDN can include the presence of mood symptoms, the efficacy and safety profiles of the drugs, and the likelihood of adverse effects leading to treatment discontinuation.
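For readers who want to check figures like the NNT of 3.4 for antidepressants (PUBMED:11131263) or the odds ratios from the network meta-analysis (PUBMED:24284851), the Python sketch below shows the underlying arithmetic: NNT is the reciprocal of the absolute risk reduction, and an odds ratio compares the odds of response between two arms. The input numbers are hypothetical stand-ins, not the pooled data from either review.

import math

def nnt(p_treatment, p_control):
    """Number needed to treat for one additional responder (e.g., >=50% pain relief)."""
    return 1 / (p_treatment - p_control)   # reciprocal of the absolute risk reduction

def odds_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Odds ratio with an approximate 95% CI via the log-OR standard error."""
    a, b = events_a, n_a - events_a        # responders / non-responders, arm A
    c, d = events_b, n_b - events_b        # responders / non-responders, arm B
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# If 60% respond on active drug vs. 30% on placebo, NNT = 1/0.30, about 3.3,
# in the same range as the values reported in the systematic review.
print(f"NNT = {nnt(0.60, 0.30):.1f}")
or_, (lo, hi) = odds_ratio(45, 75, 22, 75)  # hypothetical counts
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")   # OR = 3.61, 95% CI = (1.83, 7.13)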
Instruction: Does increased electrocautery during adenoidectomy lead to neck pain? Abstracts: abstract_id: PUBMED:37109697 Use and Abuse of Electrocautery in Adenoidectomy Hemostasis. Background and objectives: Bipolar electrocautery is commonly used to control bleeding after cold-instrument pediatric adenoidectomy, but the surgeon should be aware of the possible side effects. Objective: The aim of our study is to investigate the effects of bipolar electrocautery when used for bleeding control at the end of an adenoidectomy procedure. Materials and Methods: We evaluated the effect of electrocautery on postoperative pain, velopharyngeal insufficiency symptoms, postoperative nasal obstruction, and rhinorrhea in a group of 90 children undergoing adenoidectomy in our ENT department over a period of 3 months. Results: After statistically analyzing the data, we found that the duration of postoperative pain, the duration of rhinorrhea and nasal obstruction, and the duration of painkiller administration, as well as the velopharyngeal insufficiency symptoms, were significantly longer in patients in whom electrocautery was used for hemostasis. A significantly higher incidence of posterior neck pain and halitosis (oral malodor) was noted in the patients in whom electrocautery was used for adenoidectomy hemostasis. Conclusions: Bipolar electrocautery use should be limited during pediatric adenoidectomy hemostasis because of the possible side effects: longer postoperative pain, prolonged nasal obstruction, rhinorrhea and velopharyngeal insufficiency, and halitosis. We noted some side effects that were specific to electrocautery use during adenoidectomy: posterior neck pain and oral malodor. Acknowledging the risk for these symptoms can help to alleviate the anxiety of both the parents and the patients regarding the expected postoperative outcomes. abstract_id: PUBMED:16213929 Does increased electrocautery during adenoidectomy lead to neck pain? Objectives: The objective was to assess the impact of electrocautery on complications in adenoidectomy. We sought to quantify cautery-related temperature changes in prevertebral fascia that may occur during the procedure, retrospectively evaluate the incidence of cautery-related complications, and prospectively assess the role of cautery in postoperative neck pain. Methods: Three consecutive related trials were performed. Initially, adenoidectomy was performed on 20 fresh cadavers, using a thermistor to evaluate temperature changes in the prevertebral fascia after electrocautery (30 watts over a 30-second period). Next, retrospective analysis of adenoidectomy complications in 1206 children over a 5-year period was performed. Based on these findings, a prospective study of the incidence of neck pain following adenoidectomy was performed in a cohort of 276 children. Adenoidectomy technique, wattage, and duration of electrocautery were recorded for each child. Children with significant neck pain were evaluated with MRI. Results: Peak thermistor readings averaged 74 degrees C, for a mean change of 51.8 degrees C. Complications observed in retrospective analysis included neck pain (3), Grisel's syndrome (1), prolonged velopharyngeal insufficiency (1), retropharyngeal edema (1), and severe nasopharyngeal stenosis (1). The incidence of neck pain in the prospective study was 12% (33 pts), and was independent of adenoidectomy technique, cautery wattage, or duration of cautery use. MRIs revealed edema without abscess.
Conclusions: Cautery can result in substantial temperature changes in the surgical adenoid bed. Despite this, the incidence of complications, specifically neck pain, associated with adenoidectomy is low, although underreported. Complications appear to be independent of adenoidectomy technique and cautery use. abstract_id: PUBMED:16647547 Coblation adenotonsillectomy: an improvement over electrocautery technique? Objectives: To compare postoperative complication rates of coblation and electrocautery adenotonsillectomies. Study Design: Retrospective chart review. Results: From January 2000 to June 2004, 1997 pediatric patients underwent adenotonsillectomy; 745 coblation and 1252 electrocautery tonsillectomies were performed. Primary bleed, secondary bleed, and dehydration were seen in 3, 35, and 23 coblation, and 9, 41, and 64 electrocautery tonsillectomies, respectively. Data analysis revealed no significant difference in primary and secondary hemorrhage rate, but a higher dehydration rate in the electrocautery group (P=0.0423). A total of 602 coblation, 763 curette/cautery, and 632 electrocautery adenoidectomies were performed. Neck pain was seen in 0, 17, and 3 patients, respectively. Data analysis showed a higher incidence of neck pain with the curette/cautery technique compared with coblator and cautery techniques (P=0.0006 and P=0.0119, respectively). Conclusions: Coblation tonsillectomy had similar rates of primary and secondary hemorrhage when compared with electrocautery tonsillectomy but a lower incidence of postoperative dehydration. Coblation adenoidectomy caused less postoperative neck pain than curette/cautery adenoidectomy without significant advantage over cautery adenoidectomy. EBM Rating: B-3b. abstract_id: PUBMED:28384895 Combined Conventional and Endoscopic Microdebrider-Assisted Adenoidectomy: A Tertiary Centre Experience. Introduction: Adenoidectomy is one of the most commonly performed surgical procedures in children. Conventional adenoidectomy is associated with incomplete adenoid tissue removal with persistence of symptoms. The advent of rigid nasal endoscopes, cold light source, fiber optics and powered instruments used in functional endoscopic sinus surgery helped in the development of endoscopic microdebrider-assisted adenoidectomy. Aim: To establish the safety and efficacy of the combined conventional and endoscopic microdebrider-assisted adenoidectomy procedure. Materials And Methods: This is a prospective study of 60 child patients who underwent combined conventional and endoscopic microdebrider-assisted adenoidectomy. The study was conducted from September 2013 to September 2015. Only child patients with grade 3 and grade 4 Adenoid Hypertrophy (AH) were included in the study. At the end of conventional adenoidectomy and after the combined procedure, the AH was graded again. Post-operative complications like neck pain, hypernasality and swallowing problems were noted. Their symptom score was reviewed before surgery and after one month and one year of surgery. The duration of surgery and amount of blood loss was recorded. Results: By this technique, complete clearance of adenoid tissue was obtained in all 60 (100%) cases. The mean pre-operative symptom score for AH was 3.7, which improved to 0 after one month of combined conventional and endoscopic microdebrider-assisted adenoidectomy. All child patients were symptom-free at the end of one month and one year.
The duration of conventional adenoidectomy was 5 minutes 12 seconds, while the total duration of the combined conventional and endoscopic microdebrider-assisted adenoidectomy was 14 minutes 45 seconds. There was no significant blood loss (approximately 15±3 ml). There were no major complications in this study. Conclusion: The combined approach of conventional curette along with endoscopic microdebrider-assisted adenoidectomy is a safe and effective method for complete and accurate removal of large adenoids. abstract_id: PUBMED:36544967 Adverse events of coblation or microdebrider in pediatric adenoidectomy: A retrospective analysis in 468 patients. Objective: Childhood obstructive sleep apnea hypopnea syndrome (OSAHS) is a common clinical disease that can cause serious complications if not treated in time. Adenoidectomy with or without tonsillectomy is the most important first-line surgical treatment of obstructive sleep apnea in children. The aim of this study was to compare the differences between these two surgical procedures for adenoidectomy in terms of operation time, intraoperative blood loss, proportion of patients experiencing postoperative delayed hemorrhage, and incidence of adverse events. Study Design: Retrospective analysis. Methods: We performed a retrospective systematic analysis of patient data using the in-house electronic patient records and considered a 2-year period from 2016 to 2017. In total, 468 patients who underwent adenoidectomy under nasal endoscopy with coblation or microdebrider were identified. Results: The coblation adenoidectomy technique was associated with significantly reduced blood loss and operation time. However, the incidences of fever, neck pain, and halitosis were significantly lower in the microdebrider adenoidectomy group (p < .01). The difference in the postoperative primary and secondary hemorrhage between the two groups was not statistically significant (p > .05). Conclusion: Coblation adenoidectomy had a significantly higher incidence of adverse events such as halitosis, neck pain, and fever. Therefore, otorhinolaryngologists should consider the differences in adverse events when selecting use of coblation adenoidectomy for pediatric patients. Level Of Evidence: IV. abstract_id: PUBMED:8604892 Atlanto-axial subluxation and cervical osteomyelitis: two unusual complications of adenoidectomy. Grisel's syndrome (atlanto-axial subluxation) and cervical osteomyelitis are two unusual complications of adenoidectomy. We present two patients: one with atlanto-axial subluxation following uncomplicated tonsillectomy and adenoidectomy, and one with cervical osteomyelitis following uncomplicated adenoidectomy. Both patients presented with persistent postoperative neck pain. Surgical intervention, as well as long-term intravenous antibiotics, was required. A high index of suspicion, as well as cervical spinal series with flexion-extension views, is necessary for diagnosis. Flexible nasopharyngoscopy and computed tomography of the cervical spine also aided in diagnosis and treatment planning. With early diagnosis and proper treatment, the prognosis is good. Neurologic sequelae were prevented in both of our patients. abstract_id: PUBMED:28570360 Atlanto-Axial Subluxation After Adenoidectomy. Atlanto-axial subluxation is a rare but potentially serious complication after otolaryngological procedures. We describe a case of a 4-year-old child who developed atlanto-axial subluxation of the cervical spine after adenoidectomy.
Our patient underwent adenoidectomy and, 18 days later, presented to the emergency department with her neck tilted to the left in a cock-robin position and complaining of neck pain persisting since the surgery. A multiplanar 3-dimensional computed tomography scan was obtained and confirmed the diagnosis of an atlanto-axial subluxation (Fielding type 3). She was managed conservatively with the application of a cervical collar and anti-inflammatory medication, with manual reduction under anesthesia performed later in the course because of persistence of her symptoms. It is important to consider this diagnosis in any child who, after an ENT surgical procedure, complains of neck pain or persistently holds the head in a fixed position. Early diagnosis is important because shortening the time between the onset of symptoms and reduction lowers the risk of needing surgical intervention. abstract_id: PUBMED:16482983 Grisel's syndrome: a rare complication following adenoidectomy. Grisel's syndrome, defined as subluxation of the atlanto-axial joint, not associated with trauma or bone disease, is found primarily in children. There are few references to this syndrome in the ENT literature, but it may occur in association with any condition that results in hyperaemia and pathological relaxation of the transverse ligament of the atlanto-axial joint. Several common otolaryngologic conditions have been associated with the syndrome: pharyngitis, adenotonsillitis, tonsillar abscess, cervical abscess, and otitis media. Moreover, the syndrome has been observed after numerous otolaryngologic procedures such as tonsillectomy, adenoidectomy and mastoidectomy. Non-traumatic subluxation of the atlanto-axial joint should be suspected in cases of persistent neck pain and stiffness. X-rays and computed tomography scans of the cervical spine can confirm the diagnosis. Early management, consisting of cervical immobilization and medical treatment, is considered the key factor for a satisfactory outcome. Inappropriate treatment may result in a permanent and painful neck deformity that may even require surgical fusion. Neurological complications have been reported in the literature, with outcomes ranging from mild paresthesia and clonus to quadriplegia, acute respiratory failure and death. The case is described of an 8-year-old boy who developed Grisel's syndrome following adenoidectomy. The pathogenesis, classification, diagnosis, and treatment of this condition are discussed.
The patient underwent a neuroimaging study that documented the presence of atlantoaxial rotatory subluxation. The patient underwent C1-C3 internal fixation and fusion, using lateral mass, laminar, and pars interarticularis screws. On the third postoperative day he was mobilized with a rigid collar. Postoperative computed tomography scans showed the resolution of the rotational deformity and a solid fusion. Conclusion: Early treatment of Grisel syndrome is of utmost importance to avoid neurologic complications and surgical intervention. In a patient with torticollis following ENT procedures, Grisel syndrome should always be suspected. In case of failure of conservative treatment or in case of delayed diagnosis, rigid C1-C2 or C1-C2-C3 fixation is a straightforward and valid surgical technique, even in children, because it provides immediate spinal stability in all planes at the atlantoaxial complex, avoiding the need for prolonged rigid external bracing. abstract_id: PUBMED:17099343 Cervical complications following routine tonsillectomy and adenoidectomy. Purpose Of Review: Fortunately, patients undergoing adenotonsillectomies have far more complaints than complications. Sore throats, dysphagia, and neck pain are frequent and expected symptoms after surgery. Differentiating these symptoms from early signs of severe cervical complications can be difficult. Such complications are rare but include atlanto-axial subluxation (Grisel's syndrome), cervical necrotizing fasciitis, cervical emphysema and cervical osteomyelitis. Due to the frequency with which adenotonsillectomies are performed, most otolaryngologists will encounter these events during their career. This article is therefore intended to elucidate the early warnings, appropriate diagnostic workup, and therapeutic modalities for cervical complications following adenotonsillectomies. Recent Findings: Early recognition and intervention can prevent the devastating morbidity and mortality described with cervical complications. Computed tomography scanning remains the gold standard for diagnosing cervical complications after adenotonsillectomy. Although a range of severity exists in cervical complications, most cases can be managed conservatively with broad-spectrum antibiotics, observation, bedrest and, in cases of Grisel's syndrome, immobilization. Cervical necrotizing fasciitis requires a high index of suspicion and urgent management to avoid fatal consequences. Summary: This is a review of the most frequently encountered, although rare, cervical complications following adenotonsillectomies. It gives the reader an insight into the efficient diagnosis and management of these complications. Answer: The use of electrocautery during adenoidectomy has been associated with neck pain in some studies. A retrospective analysis of adenoidectomy complications in 1206 children over a 5-year period found that neck pain was one of the observed complications, although the incidence was low and appeared to be independent of adenoidectomy technique, cautery wattage, or duration of cautery use (PUBMED:16213929). The same report also included a prospective analysis in which the incidence of neck pain was 12%, likewise independent of adenoidectomy technique, cautery wattage, or duration of cautery use (PUBMED:16213929).
However, a study comparing postoperative complication rates of coblation and electrocautery adenotonsillectomies found that coblation adenoidectomy caused less postoperative neck pain than curette/cautery adenoidectomy, without significant advantage over cautery adenoidectomy (PUBMED:16647547). Additionally, a retrospective analysis in 468 patients comparing coblation and microdebrider adenoidectomy techniques found that the incidence of neck pain was significantly lower in the microdebrider adenoidectomy group than in the coblation group (PUBMED:36544967). It is important to note that while some studies suggest an association between electrocautery use and neck pain, the overall incidence of neck pain related to adenoidectomy is considered low. Moreover, complications such as neck pain can occur due to other factors, and the relationship between electrocautery use and neck pain may not be causal. For instance, Grisel's syndrome, a nontraumatic subluxation of the atlanto-axial joint, can occur after adenoidectomy and present with persistent neck pain, but it is a rare complication (PUBMED:8604892, PUBMED:28570360, PUBMED:16482983, PUBMED:25959249, PUBMED:17099343). In conclusion, while there is some evidence to suggest that increased electrocautery during adenoidectomy may be associated with neck pain, the relationship is not definitive, and the incidence of neck pain as a complication is generally low. Other factors may also contribute to neck pain after adenoidectomy, and further research may be needed to fully understand the relationship between electrocautery use and neck pain.
Instruction: Are poor responders patients at higher risk for producing aneuploid embryos in vitro? Abstracts: abstract_id: PUBMED:21110079 Are poor responders patients at higher risk for producing aneuploid embryos in vitro? Purpose: To test the hypothesis that aged women with poor ovarian response show an increase in embryo chromosomal alterations when compared to aged women with a normal response. Methods: Couples undergoing intracytoplasmic sperm injection cycles with preimplantation genetic screening were subdivided into two groups: Poor Responder group (n = 34), patients who produced ≤4 oocytes; and Normoresponder group (n = 50), patients who produced ≥5 oocytes. Groups were compared regarding cycle outcomes and aneuploidy frequency. Results: There were no significant differences between the groups regarding the fertilization rate (p = 0.6861), clinical pregnancy (p = 0.9208), implantation (p = 0.6863), miscarriage (p = 0.6788) and the percentage of aneuploid embryos (p = 0.270). The embryo transfer rate was significantly lower in the poor responder group (p = 0.0128) and logistic regression confirmed the influence of poor response on the chance of embryo transfer (p = 0.016). Conclusions: Aged females responding poorly to gonadotrophins are not at a higher risk for producing aneuploid embryos in vitro. abstract_id: PUBMED:35367012 When is the right time to stop autologous in vitro fertilization treatment in poor responders? Declining oocyte quality and quantity with age are the main limiting factors in female reproductive success. Age of the female partner, ovarian reserve, the patient's previous fertility treatment outcomes, and the fertility center's pregnancy success data for specific patient profiles are used to predict live birth rates with in vitro fertilization (IVF) treatment. The chance of finding a euploid blastocyst or achieving live birth after the age of 45 is close to zero. Therefore, any IVF cycle using autologous oocytes after the age of 45 can be accepted as futile and should be discouraged. The number of mature eggs retrieved and the number of embryos available for transfer are the second most important predictors of pregnancy and live birth after female age. For patients aged ≤45 years, the recommendation for attempting IVF should be given considering the patient's age and the expected ovarian response. Before the start of the IVF cycle, patients with a very poor prognosis must be fully informed of the prognosis, risks, costs, and alternatives, including using donor oocytes. Alternative treatments to improve oocyte quality and decrease aneuploidy have the potential to change how clinicians treat poor responders. However, these treatments are not yet ready for clinical use. abstract_id: PUBMED:10773399 Gonadal activity and chromosomal constitution of in vitro generated embryos. Chromosomal analysis of pre-implantation embryos was carried out in patients with a poor prognosis of full-term pregnancy who underwent induction of multiple follicular growth. In all, 1034 embryos generated from 191 stimulated cycles were screened for aneuploidy of nine chromosomes by using the multicolour fluorescence in situ hybridisation technique. Thirty-five percent of the diagnosed embryos were chromosomally normal, whereas the remainder presented with numerical abnormalities, which made them not suitable for transfer. The results obtained confirmed that the incidence of abnormalities is mostly dependent on age; however, monosomy and trisomy are more frequent in poor responders.
Accordingly, the pregnancy rate per started cycle was significantly higher in women with a normal response to gonadotropic stimulation (33% vs. 8%, P<0.001). These findings indicate that poor responder patients are physiologically exposed not only to reduced chances of implantation, but also to an increased risk of spontaneous abortion and trisomic pregnancies. abstract_id: PUBMED:12032382 Poor responders: does the protocol make a difference? An inadequate response to gonadotropins during in-vitro fertilization treatment may result in cycle cancellation, fewer embryos available for transfer and decreased pregnancy rates. For these reasons, numerous strategies to improve ovarian stimulation in poor responders have been proposed. These include variations in the type, dose and timing of gonadotropins, gonadotropin-releasing hormone agonists and gonadotropin-releasing hormone antagonists. Unfortunately, despite optimism generated by studies using retrospective controls, epidemiologically sound trials have been scarce. Indeed, of the three prospective randomized trials performed in poor responders to date, no compelling advantage for one stimulation protocol over another has been established. Although this lack of improvement may reflect inadequate sample sizes, an alternative explanation is simply that the protocol, after a certain point, does not make a difference. Aside from oocyte donation, greater hope for poor responders may rest in aneuploidy screening, in-vitro oocyte maturation and cytoplasm/nuclear transfer. abstract_id: PUBMED:35260238 Alteration of final maturation and laboratory techniques in low responders. The number and quality of embryos generated from the limited number of oocytes retrieved from low responders are important aspects of infertility treatment for these patients. This article focuses on 5 aspects relating to final maturation and laboratory techniques: follicular size at trigger, dual trigger, artificial oocyte activation (AOA), blastocyst transfer, and the role of preimplantation genetic testing for aneuploidy (PGT-A). There is a lack of data regarding the role of follicular size specifically in low-responder patients, but consideration should be given to using broader follicular size criteria when retrieving oocytes in this patient group. Use of dual trigger seems to be a good strategy in low-responder patients on the basis of initial evidence. Use of AOA with calcium ionophore may improve fertilization, embryonic development, and outcomes in cases with previous developmental problems. There is a lack of data for low responders, but this promising technique deserves further study. In unselected patients, clinical trial data on blastocyst transfer are conflicting, and no high-quality studies have evaluated whether the live birth rate is higher after blastocyst transfer than after cleavage-stage embryo transfer in low responders. Specific evidence for PGT-A in low-responder patients is also lacking. Preimplantation genetic testing for aneuploidy should be considered in POSEIDON group 2 patients, especially those aged >38 years. Overall, applying the limited data available in combination with patient preference and individual patient characteristics will ensure a patient-centered and evidence-based approach that should optimize fertility outcomes for low responders. abstract_id: PUBMED:9604763 Incidence of chromosomal abnormalities from a morphologically normal cohort of embryos in poor-prognosis patients.
Purpose: Preimplantation genetic diagnosis of aneuploidy was performed on the embryos yielded by 70 poor-prognosis patients, with the aim of transferring those with a normal chromosomal complement, thus possibly increasing the chances of pregnancy. Methods: Multicolor fluorescence in situ hybridization (FISH) was applied for the simultaneous detection of chromosomes X, Y, 13, 16, 18, and 21. Inclusion criteria were (1) a maternal age of 36 years or older (n = 33), (2) three or more previous in vitro fertilization cycles (n = 20), and (3) an altered karyotype (n = 17). Results: A total of 412 embryos underwent FISH, resulting in 234 (57%) that were chromosomally abnormal. Euploid embryos were available for transfer in 59 patients, generating 19 pregnancies (32%), with an implantation rate of 19.9%. Conclusions: High rates of chromosomally abnormal embryos in poor-prognosis patients can determine repeated in vitro fertilization failures when embryo selection is performed on the basis of morphological criteria alone. Hence, FISH analysis could represent the prevailing approach for the identification of embryos possessing full potential for developing to term. abstract_id: PUBMED:33267960 Female obesity increases the risk of miscarriage of euploid embryos. Objective: To determine whether female body mass index (BMI) is associated with an increased risk of miscarriage after euploid embryo transfer. Design: A retrospective, observational, multicenter cohort study. Setting: University-affiliated in vitro fertilization center. Patient(s): In this study, 3,480 cycles of in vitro fertilization with preimplantation genetic testing for aneuploidy (PGT-A) in the blastocyst stage and euploid embryo transfer were divided into four groups according to patient BMI. Intervention(s): In vitro fertilization with PGT-A. Main Outcome Measure(s): The primary outcome was the miscarriage rate, which included both biochemical and clinical miscarriages. Secondary outcomes were implantation, pregnancy, clinical pregnancy, and live birth rates. Result(s): Cycles were divided into four groups according to BMI (kg/m2): underweight (<18.5; n = 155), normal weight (18.5-24.9; n = 2,549), overweight (25-29.9; n = 591), and obese (≥30; n = 185). The number of PGT-A cycles per patient was similar in the four groups. Fertilization rate, day of embryo biopsy, technique of chromosomal analysis, number of euploid embryos, number of transferred embryos, and method of endometrial preparation for embryo transfer were similar in the four BMI groups. Miscarriage rates were significantly higher in women with obesity compared to women with normal weight, mainly due to a significant increase in the clinical miscarriage rates. Live birth rates were also lower in women with obesity. Obesity in women and day 6 trophectoderm biopsy were found to influence the reduced live birth rate. Conclusion(s): Women with obesity experience a higher rate of miscarriage after euploid embryo transfer than women with a normal weight, suggesting that mechanisms other than aneuploidy are responsible for this outcome. abstract_id: PUBMED:8215225 Cytogenetic study of fragmented embryos not transferred in in vitro fertilization. A cytogenetic analysis was performed on a sample of 411 human grade IV embryos (i.e. poor morphological quality embryos, never transferred in our in vitro fertilization (IVF) program) in order to investigate the chromosomal status of these embryos. One hundred eighteen were successfully karyotyped from at least one metaphase.
Only 10% displayed normal diploid metaphases. Aneuploidy was the most frequently observed abnormality, with a rate of 36.4%. Six cases of single chromatids were noted and 9 embryos showed structural aberrations. Polyploidy (from 3n to 7n) and haploidy were also observed, suggesting parthenogenetic activation, polyspermy or chromosomal duplication. Mosaicism constituted 6% of the abnormalities. Thirty embryos exhibited fragmented chromosome sets, which might result from delayed in vitro fertilization. abstract_id: PUBMED:11576729 Chromosomal abnormalities in embryos. Chromosomal analysis was performed on 1620 embryos generated in vitro by patients with a poor prognosis of pregnancy. A diagnosis was yielded in 1596 embryos: 536 (34%) were euploid and 1060 (66%) carried chromosomal abnormalities. The results revealed a strong association between chromosomal abnormalities, cellular stage and percentage of fragmentation. In addition, 92% of embryos with multinucleated cells were diagnosed as mosaics, whereas the presence of cytoplasmic concentration was associated with 86% chromosomal abnormalities. The rate of development to expanded blastocysts was dependent on both the cleavage stage at the time of blastomere biopsy and the chromosomal status of the embryos. The highest percentage of blastocyst growth derived from embryos with 7-8 cells on the morning of day 3: 68% were generated from euploid embryos and 43% from chromosomally abnormal embryos. These findings suggest that morphological criteria alone are not sufficient in poor-prognosis patients to guarantee the best embryo selection. abstract_id: PUBMED:32285297 Preimplantation genetic testing for aneuploidy in poor ovarian responders with four or fewer oocytes retrieved. Purpose: To assess whether preimplantation genetic testing for aneuploidies (PGT-A) at the blastocyst stage improves clinical outcomes compared with transfer of embryos without PGT-A in poor ovarian response (POR) patients. Methods: Retrospective cohort study of IVF cycles from 2016 to 2019 at a single academic fertility center. IVF cycles with POR and four or fewer oocytes retrieved were stratified into PGT-A (n = 241) and non-PGT (n = 112) groups. In PGT-A cycles, trophectoderm biopsy, next-generation sequencing with 24-chromosome screening, and single euploid frozen embryo transfer were performed. In non-PGT cycles, fresh or frozen transfer of untested embryos on day 3 or 5 was performed. Main outcomes included live birth rate and miscarriage rate per retrieval. Result(s): Patients who underwent PGT-A cycles were significantly less likely to reach embryo transfer compared with those who underwent non-PGT cycles (13.7% vs 70.6%). The live birth rate per retrieval did not differ between the PGT-A and non-PGT groups (6.6% vs 5.4%). Overall, the miscarriage rate was low. The PGT-A group demonstrated a significantly lower miscarriage rate per retrieval (0.4% vs 3.6%) as well as per pregnancy (5.9% vs 40.0%) compared with the non-PGT group. The number needed to treat to avoid one clinical miscarriage was 31 PGT-A cycles. Conclusion(s): PGT-A did not improve live birth rate per retrieval in POR patients with four or fewer oocytes retrieved. PGT-A was associated with a lower miscarriage rate; however, a fairly large number of PGT-A cycles were needed to prevent one miscarriage. Answer: The relationship between poor ovarian response and the risk of producing aneuploid embryos in vitro is not straightforward.
Some studies suggest that poor responders may have an increased risk of aneuploidy in their embryos. For instance, a study found that the incidence of chromosomal abnormalities is mostly dependent on age, but poor responders also had a higher frequency of monosomy and trisomy, leading to a lower pregnancy rate and increased risk of spontaneous abortion and trisomic pregnancies (PUBMED:10773399). Another study reported high rates of chromosomally abnormal embryos in poor-prognosis patients, which could lead to repeated in vitro fertilization failures when embryo selection is based solely on morphological criteria (PUBMED:9604763). Additionally, chromosomal analysis of embryos from patients with a poor prognosis of pregnancy revealed a strong association between chromosomal abnormalities and certain morphological features, suggesting that poor prognosis patients may not be best served by morphological criteria alone for embryo selection (PUBMED:11576729). However, other research indicates that poor responders are not necessarily at a higher risk for producing aneuploid embryos. A study specifically aimed at testing this hypothesis found no significant difference in the percentage of aneuploid embryos between poor responders and normoresponders, suggesting that aged females responding poorly to gonadotrophins are not at a higher risk for producing aneuploid embryos in vitro (PUBMED:21110079). Moreover, a retrospective cohort study indicated that preimplantation genetic testing for aneuploidies (PGT-A) did not improve the live birth rate per retrieval in poor ovarian response patients with four or fewer oocytes retrieved, although it was associated with a lower miscarriage rate (PUBMED:32285297). In summary, while there is evidence to suggest that poor responders may have an increased risk of aneuploidy, this is not a consistent finding across all studies. Age and other factors also play a significant role in the risk of aneuploidy, and the use of PGT-A may help in reducing the miscarriage rate but does not necessarily improve the live birth rate per retrieval in poor responders (PUBMED:21110079, PUBMED:32285297).
Instruction: Do white matter changes have clinical significance in Alzheimer's disease? Abstracts: abstract_id: PUBMED:37174620 Basal Ganglia Compensatory White Matter Changes on DTI in Alzheimer's Disease. The volume reduction of the gray matter structures in patients with Alzheimer's disease is often accompanied by an asymmetric increase in the number of white matter fibers located close to these structures. The present study aims to investigate the white matter structure changes in the motor basal ganglia in Alzheimer's disease patients compared to healthy controls using diffusion tensor imaging. The numbers of tracts, tract length, tract volume, quantitative anisotropy, and general fractional anisotropy were measured in ten patients with Alzheimer's disease and ten healthy controls. A significant decrease in the number of tracts and general fractional anisotropy was found in patients with Alzheimer's disease compared to controls in the right caudate nucleus, while an increase was found in the left and the right putamen. Further, a significant decrease in the structural volume of the left and the right putamen was observed. An increase in the white matter diffusion tensor imaging parameters in patients with Alzheimer's disease was observed only in the putamen bilaterally. The right caudate showed a decrease in both the diffusion tensor imaging parameters and the volume in Alzheimer's disease patients. The right pallidum showed an increase in the diffusion tensor imaging parameters but a decrease in volume in Alzheimer's disease patients. abstract_id: PUBMED:31008274 Topographic distribution of white matter changes and lacunar infarcts in neurodegenerative and vascular dementia syndromes: A post-mortem 7.0-tesla magnetic resonance imaging study. Background: White matter changes and lacunar infarcts are regarded as linked to the same underlying small-vessel pathology. On magnetic resonance imaging, white matter changes are frequently observed, while the number of lacunar infarcts is probably underestimated. The present post-mortem 7.0-tesla magnetic resonance imaging study compares the severity and the distribution of white matter changes and lacunar infarcts in different neurodegenerative and vascular dementia syndromes in order to determine their impact on the disease evolution. Patients And Methods: Eighty-four post-mortem brains, consisting of 15 patients with pure Alzheimer's disease and 12 with associated cerebral amyloid angiopathy, 14 patients with frontotemporal lobar degeneration, 7 with Lewy body dementia, 10 with progressive supranuclear palsy, 14 with vascular dementia and 12 control brains, were examined. Six hemispheric coronal sections of each brain underwent 7.0-tesla magnetic resonance imaging. Location and severity of white matter changes and lacunar infarcts were evaluated semi-quantitatively in each section separately. Results: White matter changes predominated in the prefrontal and frontal sections of frontotemporal lobar degeneration brains and in the post-central section of associated cerebral amyloid angiopathy brains, while they were increased overall in vascular dementia cases. Lacunar infarcts were more frequent in the vascular dementia brains and mainly increased in the centrum semiovale. Conclusions: White matter changes have a different topographic distribution in neurodegenerative diseases and are most severe and extended in vascular dementia. Lacunar infarcts predominate in the deep white matter of vascular dementia compared to the neurodegenerative diseases.
Vascular cognitive impairment is mainly linked to white matter changes due to chronic ischaemia as well as to lacunar infarcts due to small-vessel occlusion. abstract_id: PUBMED:25639959 White matter changes in familial Alzheimer's disease. Background: Familial Alzheimer's disease (FAD) resulting from gene mutations in PSEN1, PSEN2 and APP is associated with changes in the brain. Objective: The aim of this study was to investigate changes in grey matter (GM), white matter (WM) and the cerebrospinal fluid (CSF) in FAD. Subjects: Ten mutation carriers (MCs) with three different mutations in PSEN1 and APP and 20 noncarriers (NCs) were included in the study. Three MCs were symptomatic and seven were presymptomatic (pre-MCs). Methods: Whole-brain GM volume, as well as fractional anisotropy (FA) and mean diffusivity (MD), were compared between MCs and NCs using voxel-based morphometry and tract-based spatial statistics analyses, respectively. FA and MD maps were obtained from diffusion tensor imaging. Results: A significant increase in MD was found in the left inferior longitudinal fasciculus, cingulum and bilateral superior longitudinal fasciculus in pre-MCs compared with NCs. After inclusion of the three symptomatic MCs in the analysis, the affected regions became more extensive. The mean MD of these regions showed a significant negative correlation with the CSF level of Aβ42, and positive correlations with P-tau181p and T-tau. No differences were observed in GM volume and FA between the groups. Conclusions: The results of this study suggest that FAD gene mutations affect WM diffusivity before changes in GM volume can be detected. The WM changes observed were related to changes in the CSF, with similar patterns previously observed in sporadic Alzheimer's disease. abstract_id: PUBMED:15258430 Do white matter changes have clinical significance in Alzheimer's disease? Background: Although white matter changes visible with MRI are generally considered to result from ischemia, it has become clear that these changes also appear in patients with Alzheimer's disease (AD). However, their significance in AD is unknown. Objective: We evaluated the clinical significance of white matter changes in AD. Methods: Ninety-six AD patients (79.4 +/- 5.92 years old) and 48 age-matched control subjects (80.0 +/- 7.03 years old) participated in the study. Three neuroradiologists assessed the degree of periventricular hyperintensities (PVH) and deep white matter hyperintensities (DWMH) using a modified Fazekas' rating scale. We examined whether there was a difference in the severity and the histogram pattern of the white matter changes, or in vascular factors (hypertension, diabetes mellitus, and ischemic heart disease) between the two groups. We also analyzed the association between the severity of the white matter changes and the degree of dementia (MMSE score and disease duration). Results: There were no differences in the vascular factors between AD and control subjects. The degree of PVH in AD was more severe than that in the control subjects. In histograms of the number of subjects with each degree of PVH severity, the distribution of AD patients had peaks at both the low and intermediate degrees of PVH, while most of the controls had a low degree of PVH. There was no difference in the degree or the histogram pattern of DWMH between the two groups. The severity of white matter changes was not associated with severity of dementia in AD.
Conclusions: Although PVH might have several causative factors, and may have some clinical significance, the change itself does not contribute to the progression of AD. abstract_id: PUBMED:36495726 Discriminative patterns of white matter changes in Alzheimer's. Changes in structural connectivity of the Alzheimer's brain have not been widely studied utilizing cutting-edge methodologies. This study develops an efficient structural connectome-based convolutional neural network (CNN) to classify AD and uses explanations of the CNN's classification choices to pinpoint the discriminative changes in white matter connectivity in AD. A CNN architecture has been developed to classify normal control (NC) and AD subjects from the weighted structural connectome. Then, the CNN classification decision is visually analyzed using gradient-based localization techniques to identify the discriminative changes in white matter connectivity in Alzheimer's. The cortical regions involved in the identified discriminative structural connectivity changes in AD are largely concentrated in the temporal/subcortical regions. A specific pattern is identified in the discriminative changes in structural connectivity of AD, where the white matter changes are revealed within the temporal/subcortical regions and from the temporal/subcortical regions to the frontal and parietal regions in both left and right hemispheres. The proposed approach has the potential to comprehensively analyze the discriminative structural connectivity differences in AD, change the way of detecting biomarkers, and help clinicians better understand the structural changes in AD and provide them with more confidence in automated diagnostic systems. abstract_id: PUBMED:29499767 White matter changes in Alzheimer's disease: a focus on myelin and oligodendrocytes. Alzheimer's disease (AD) is conceptualized as a progressive consequence of two hallmark pathological changes in grey matter: extracellular amyloid plaques and neurofibrillary tangles. However, over the past several years, neuroimaging studies have implicated micro- and macrostructural abnormalities in white matter in the risk and progression of AD, suggesting that in addition to the neuronal pathology characteristic of the disease, white matter degeneration and demyelination may also be important pathophysiological features. Here we review the evidence for white matter abnormalities in AD with a focus on myelin and oligodendrocytes, the only source of myelination in the central nervous system, and discuss the relationship between white matter changes and the hallmarks of Alzheimer's disease. We review several mechanisms such as ischemia, oxidative stress, excitotoxicity, iron overload, Aβ toxicity and tauopathy, which could affect oligodendrocytes. We conclude that white matter abnormalities, and in particular myelin and oligodendrocytes, could be mechanistically important in AD pathology and could be potential treatment targets. abstract_id: PUBMED:32334962 White-matter changes in early and late stages of mild cognitive impairment. Mild Cognitive Impairment (MCI) is characterized by cognitive deficits that exceed age-related decline but do not interfere with daily living activities. The amnestic type of the disorder (aMCI) is known to carry a high risk of progressing to Alzheimer's Disease (AD), the most common type of dementia. Identification of very early structural changes in the brain related to the cognitive decline in MCI patients would further contribute to the understanding of the dementias.
In the current study, we aimed to investigate whether the white-matter changes are related to structural changes, as well as to the cognitive performance of MCI patients. Forty-nine MCI patients were classified as Early MCI (E-MCI, n = 24) and Late MCI (L-MCI, n = 25) according to their performance on the Free and Cued Selective Reminding Test (FCSRT). The Age-Related White-Matter Changes (ARWMC) scale was used to evaluate the white-matter changes in the brain. Volumes of specific brain regions were calculated with the FreeSurfer program. Both group and correlation analyses were conducted to show if there was any association between white-matter hyperintensities (WMHs) and structural changes and cognitive performance. Our results indicate that L-MCI patients had significantly more WMHs than E-MCI patients, but only in the frontal regions. In addition, ARWMC scores were not correlated with total hippocampal and white-matter volumes. It can be concluded that WMHs play an important role in MCI and that the cognitive functions of MCI patients are affected by white-matter changes, especially in the frontal regions. abstract_id: PUBMED:26289958 Clinical significance of circulating vascular cell adhesion molecule-1 to white matter disintegrity in Alzheimer's dementia. Endothelial dysfunction leads to worse cognitive performance in Alzheimer's dementia (AD). While both cerebrovascular risk factors and endothelial dysfunction lead to activation of vascular cell adhesion molecule-1 (VCAM-1), intercellular adhesion molecule-1 (ICAM-1) and E-selectin, it is not known whether these biomarkers extend the diagnostic repertoire in reflecting intracerebral structural damage or cognitive performance. A total of 110 AD patients and 50 age-matched controls were enrolled. Plasma levels of VCAM-1, ICAM-1 and E-selectin were measured and correlated with the cognitive performance, white matter macro-structural changes, and major tract-specific fractional anisotropy quantification. The AD patients were further stratified by clinical dementia rating score (mild dementia, n=60; moderate-to-severe dementia, n=50). Compared with the controls, plasma levels of VCAM-1 (p < 0.001), ICAM-1 (p=0.028) and E-selectin (p=0.016) were significantly higher in the patients, but only VCAM-1 levels significantly reflected the severity of dementia (p < 0.001). In addition, only VCAM-1 levels showed an association with macro- and micro-structural white matter changes, especially in the superior longitudinal fasciculus (p < 0.001), posterior thalamic radiation (p=0.002), stria terminalis (p=0.002) and corpus callosum (p=0.009), and were independent of age and cortical volume. These tracts show significant associations with MMSE, short-term memory and visuospatial function. Moreover, although VCAM-1 levels correlated significantly with short-term memory (p=0.026) and drawing (p=0.025) scores in the AD patients after adjusting for age and education, the significance disappeared after adjusting for global FA. Endothelial activation, especially VCAM-1, was of clinical significance in AD, reflecting macro- and micro-structural changes as well as poor short-term memory and visuospatial function. abstract_id: PUBMED:30717182 White Matter Changes in Patients with Alzheimer's Disease and Associated Factors. Alzheimer's disease (AD) is traditionally thought of as a neurodegenerative disease.
Recent evidence shows that beta amyloid-independent vascular changes and beta amyloid-dependent neuronal dysfunction both equally influence the disease, leading to loss of structural and functional connectivity. White matter changes (WMCs) in the brain are commonly observed in dementia patients. The effect of vascular factors on WMCs and the relationship between WMCs and severity of AD in patients remain to be clarified. We recruited 501 clinically diagnosed probable AD patients with a series of comprehensive neuropsychological tests and brain imaging. The WMCs in cerebral CT or MRI were rated using both the modified Fazekas scale and the combined CT-MRI age-related WMC (ARWMC) rating scale. Periventricular WMCs were observed in 79.4% of the patients and deep WMCs were also seen in 48.7% of the patients. WMC scores were significantly higher in the advanced dementia stage for periventricular WMCs (p = 0.001) and total ARWMCs (p < 0.001). Age and disease severity were both independently associated with WMC scores, particularly in the total, frontal and parieto-occipital areas. Vascular factors, including hypertension and diabetes mellitus, as well as gender, were not significantly associated with WMCs. In conclusion, both age and severity of dementia were significantly associated with WMCs in AD patients. These associations highlight future research targets. abstract_id: PUBMED:34412149 Orientational changes of white matter fibers in Alzheimer's disease and amnestic mild cognitive impairment. White matter abnormalities represent early neuropathological events in neurodegenerative diseases such as Alzheimer's disease (AD); investigating these white matter alterations would likely provide valuable insights into pathological changes over the course of AD. Using a novel mathematical framework called "Director Field Analysis" (DFA), we investigated the geometric microstructural properties (i.e., splay, bend, twist, and total distortion) in the orientation of white matter fibers in AD, amnestic mild cognitive impairment (aMCI), and cognitively normal (CN) individuals from the Alzheimer's Disease Neuroimaging Initiative 2 database. Results revealed that AD patients had extensive orientational changes in the bilateral anterior thalamic radiation, corticospinal tract, inferior and superior longitudinal fasciculus, inferior fronto-occipital fasciculus, and uncinate fasciculus in comparison with CN individuals. We postulate that these orientational changes of white matter fibers may be partially caused by the expansion of the lateral ventricles, white matter atrophy, and gray matter atrophy in AD. In contrast, aMCI individuals showed subtle orientational changes in the left inferior longitudinal fasciculus and right uncinate fasciculus, which showed a significant association with cognitive performance, suggesting that these regions may be preferentially vulnerable to breakdown by neurodegenerative brain disorders, thereby resulting in the patients' cognitive impairment. To our knowledge, this article is the first to examine geometric microstructural changes in the orientation of white matter fibers in AD and aMCI. Our findings demonstrate that the orientational information of white matter fibers could provide novel insight into the underlying biological and pathological changes in AD and aMCI. Answer: White matter changes have been observed in Alzheimer's disease (AD) and are considered to have clinical significance.
Studies have shown that white matter abnormalities are present not only in AD but also in other neurodegenerative diseases, and they may be related to the risk and progression of the disease (PUBMED:31008274). In familial Alzheimer's disease (FAD), white matter diffusivity changes have been detected before changes in grey matter volume can be observed, suggesting that white matter changes could be an early indicator of the disease (PUBMED:25639959). Moreover, white matter changes have been associated with cognitive performance in AD patients. For instance, endothelial activation, particularly vascular cell adhesion molecule-1 (VCAM-1), reflects macro- and micro-structural changes and correlates with poor short-term memory and visuospatial function in AD patients (PUBMED:26289958). Additionally, the severity of white matter changes has been linked to the severity of dementia, with higher white matter change scores observed in advanced dementia stages (PUBMED:30717182). However, it is important to note that while periventricular white matter changes were more severe in AD patients compared to control subjects, the severity of these changes was not associated with the severity of dementia, suggesting that the presence of white matter changes alone may not contribute to the progression of AD (PUBMED:15258430). In summary, white matter changes in AD have clinical significance as they are associated with cognitive decline and may serve as early indicators of the disease. They also reflect structural brain changes that are relevant to the understanding of AD pathology and could potentially be targets for treatment (PUBMED:29499767). Nonetheless, the exact contribution of white matter changes to the progression of AD remains to be fully elucidated.
Instruction: Easing of suffering in children with cancer at the end of life: is care changing? Abstracts: abstract_id: PUBMED:32484136 Pediatric Brain Tumors: Narrating Suffering and End-of-Life Decisionmaking. When talking about decisionmaking for children with a life-threatening condition, the death of children with brain tumors deserves special attention. The last days of the lives of these children can be particularly harsh for bystanders, and raise questions about the suffering of these children themselves. In the Netherlands, these children are part of the group for whom a wide range of end-of-life decisions are discussed, and questions raised. What does the end of life for these children look like, and what motivates physicians and parents to make decisions that may affect the life and death of these children? This article highlights the story of the parents of the sisters Roos and Noor. When both their daughters were diagnosed with a hereditary brain tumor, they had to make similar decisions twice. Their story sheds light on the suffering of children in the terminal phase, and how this suffering may motivate parents and physicians to make decisions that influence the end of these children's lives. We argue that complete knowledge about suffering in the terminal phase of children with brain tumors is impossible. However, by collecting experiences like those of Roos and Noor, we can move toward an experience-based understanding and better guide parents and physicians through these hardest of decisions. abstract_id: PUBMED:18375901 Easing of suffering in children with cancer at the end of life: is care changing? Purpose: In the past decade studies have documented substantial suffering among children dying of cancer, prompting national attention to the quality of end-of-life care and the development of a palliative care service in our institutions. We sought to determine whether national and local efforts have led to changes in patterns of care, advanced care planning, and symptom control among children with cancer at the end of life. Methods: Retrospective cohort study from a US tertiary level pediatric institution. Parent survey and chart review data from 119 children who died between 1997 and 2004 (follow-up cohort) were compared with 102 children who died between 1990 and 1997 (baseline cohort). Results: In the follow-up cohort, hospice discussions occurred more often (76% v 54%; adjusted risk difference [RD], 22%; P < .001) and earlier (adjusted geometric mean 52 days v 28 days before death; P = .002) compared with the baseline cohort. Do-not-resuscitate orders were also documented earlier (18 v 12 days; P = .031). Deaths in the intensive care unit or other hospitals decreased significantly (RD, 16%; P = .024). Parents reported less child suffering from pain (RD, 19%; P = .018) and dyspnea (RD, 21%; P = .020). A larger proportion of parents felt more prepared during the child's last month of life (RD, 29%; P < .001) and at the time of death (RD, 24%; P = .002). Conclusion: Children dying of cancer are currently receiving care that is more consistent with optimal palliative care and, according to parents, are experiencing less suffering. With ongoing growth of the field of hospice and palliative medicine, further advancements are likely. abstract_id: PUBMED:21990213 Suffering and distress at the end-of-life. Objective: Suffering frequently occurs in the context of chronic and progressive medical illnesses and emerges with great intensity at the end of life.
A review of the literature on suffering and distress-related factors was conducted to illustrate the integrative nature of suffering in this context. We hope it will result in a comprehensive approach, centered on the patient-family unit, which will alleviate or eliminate unnecessary suffering and provide well-being, when possible. Methods: An extensive search of the literature on suffering and distress in end-of-life patients was conducted. While the present review is not a systematic one, an in-depth search using the terms 'Suffering', 'Distress', 'End-of-Life', 'Palliative Care', and 'Terminal illness' was conducted using search engines such as PubMed, PsycINFO, MEDLINE, EBSCO-Host, OVID, and SciELO. Results: Taking into account the comprehensive and integrative nature of suffering, factors related to the physical, psychological, spiritual, and social human dimensions are described. In addition, some treatment considerations in the palliative care context are briefly discussed. Conclusions: Suffering is individual, unique, and inherent to each person. Assessment processes require keeping in mind the complexity, multi-dimensionality, and subjectivity of symptoms and experiences. Optimal palliative care is based on continuous and multidimensional evaluation and treatment of symptoms and syndromes. It should take place in a clinical context where the psychological, spiritual, and socio-cultural needs of the patient-family unit are taken care of simultaneously. A deep knowledge of the nature of suffering and its associated factors is central to alleviating unnecessary suffering. abstract_id: PUBMED:23727449 Suffering indicators in terminally ill children from the parental perspective. Purpose: Suffering is a complex multifaceted phenomenon, which has received limited attention in relation to children with terminal illness. As part of a wider study we interviewed parents of children with terminal illness to elicit their perspectives on suffering, in order to provide an initial understanding from which to develop observational indicators and further research. Methods: Qualitative descriptive study with semi-structured interviews made "ad hoc". Selection through deliberate sampling of mothers and fathers of hospitalised children (0-16 years old) with a terminal illness in Granada (Spain). Key Results: 13 parents were interviewed. They described children's suffering as manifested through sadness, apathy, and anger towards their parents and the professionals. The isolation from their natural environment, the uncertainty towards the future, and the anticipation of pain caused suffering in children. The pain is experienced as an assault that their parents allow to occur. Conclusions: The analysis of the interviews with the parents about their perception of their ill children's suffering at the end of their lives is a valuable source of information for designing supportive interventions for children and parents in health care settings. An outline summary of the assessed aspects of suffering, the indicators and aspects for health professional consideration is proposed. abstract_id: PUBMED:34311060 Symptoms and Suffering at End of Life for Children With Complex Chronic Conditions. Context: Children with cancer and cardiac disease experience a high symptom burden at end of life (EOL). Little is known about the EOL experience for children with other complex chronic conditions (CCCs).
Objectives: To evaluate symptoms and suffering at EOL for children with noncancer, noncardiac CCCs as well as parental distress related to child suffering. Methods: This study is a secondary data analysis of a cross-sectional, single-center survey of bereaved parents of children with CCCs who died between 2006 and 2015. The primary outcome was parent-reported child suffering in the final two days of life. Results: Among 211 eligible parents contacted for participation, 114 completed the survey, and 99 had complete primary outcome data (participation rate 47%). Most children had congenital/chromosomal (42%) or progressive central nervous system (22%) conditions. Twenty-eight percent of parents reported high child suffering in the final two days of life. Parents reported that pain and difficulty breathing caused the greatest suffering for children and distress among themselves. Some parents also reported distress related to uncertainty about child suffering. Parents were less likely to report high child suffering if they were confident in knowing what to expect when their child was dying (AOR 0.20; 95% CI 0.07-0.60) or felt prepared for medical problems at EOL (AOR 0.12; 95% CI 0.04-0.42). Conclusion: Nearly one-third of parents of children with CCCs report high suffering in their child's final days of life. Parent preparedness was associated with lower perceived child suffering. Future research should target symptoms contributing to parent and child distress and assess whether enhancing parent preparedness reduces perceived child suffering. abstract_id: PUBMED:10655532 Symptoms and suffering at the end of life in children with cancer. Background: Cancer is the second leading cause of death in children, after accidents. Little is known, however, about the symptoms and suffering at the end of life in children with cancer. Methods: In 1997 and 1998, we interviewed the parents of children who had died of cancer between 1990 and 1997 and who were cared for at Children's Hospital, the Dana-Farber Cancer Institute, or both. Additional data were obtained by reviewing medical records. Results: Of 165 eligible parents, we interviewed 103 (62 percent), 98 by telephone and 5 in person. The interviews were conducted a mean (+/-SD) of 3.1+/-1.6 years after the death of the child. Almost 80 percent died of progressive disease, and the rest died of treatment-related complications. Forty-nine percent of the children died in the hospital; nearly half of these deaths occurred in the intensive care unit. According to the parents, 89 percent of the children suffered "a lot" or "a great deal" from at least one symptom in their last month of life, most commonly pain, fatigue, or dyspnea. Of the children who were treated for specific symptoms, treatment was successful in 27 percent of those with pain and 16 percent of those with dyspnea. On the basis of a review of the medical records, parents were significantly more likely than physicians to report that their child had fatigue, poor appetite, constipation, and diarrhea. Suffering from pain was more likely in children whose parents reported that the physician was not actively involved in providing end-of-life care (odds ratio, 2.6; 95 percent confidence interval, 1.0 to 6.7). Conclusions: Children who die of cancer receive aggressive treatment at the end of life. Many have substantial suffering in the last month of life, and attempts to control their symptoms are often unsuccessful.
Greater attention must be paid to palliative care for children who are dying of cancer. abstract_id: PUBMED:35463792 Palliative Care in Children With Advanced Heart Disease in a Tertiary Care Environment: A Mini Review. Palliative care for children continues to evolve. More recently, this has also been true in the field of pediatric cardiology, particularly for children with advanced heart disease. In these children, similarly to children with cancer, treatment successes are offset by the risks of long-term morbidities, including premature death. This mini review aims to provide an overview of current knowledge on children suffering from advanced heart disease, their medical care during various phases of illness (including the palliative and end-of-life phase), symptom burden, experiences of parents, prognostic understanding of parents and physicians, and the current status of the involvement of pediatric palliative care. In conclusion, the suffering of these children at the end of their young lives is pronounced, and many parents feel prepared neither for medical problems nor for the child's death. An effective and mutually trusting partnership between pediatric cardiology and pediatric palliative care would appear to be a prerequisite for the timely involvement of palliative care in further supporting these children and their families. abstract_id: PUBMED:33318855 Striving to reduce suffering: A Phenomenological Study of nurses' experience in caring for children with cancer in Ghana. Aim: To provide insights into nurses' lived experiences in caring for children with cancer. Background: Little is known about paediatric oncology nurses' shared practices of caring for children with cancer in Ghana. Design: A hermeneutic phenomenological qualitative study. Methods: Semi-structured interviews were conducted with 14 purposively sampled Ghanaian paediatric oncology nurses. Findings were analysed using Diekelman, Allen and Tanner's approach. Results: The theme "Striving to reduce suffering" and three relational subthemes, "Knowing children's needs," "Rendering a hopeful fight" and "Ensuring continuity and coordination of care," emerged. Increased awareness of this phenomenon among the nurses who care for these children is vital to ensure holistic, high-quality care that is meaningful and satisfying for both nurses and children with cancer. Paediatric oncology nurses can use the results of the study to evaluate their caring practice and as an avenue to develop better caring practices. abstract_id: PUBMED:30613519 The barriers and facilitators in providing spiritual care for parents who have children suffering from cancer. Background: Given the importance of spirituality in crisis situations, including life-threatening diseases such as cancer, due attention must be paid to this aspect of care. This study aims to investigate the barriers and facilitators in providing spiritual care for parents who have children suffering from cancer. Methods: This study was conducted using a qualitative approach and the common content analysis method. The 15 participants comprised 11 mothers and 4 fathers. A purposive sampling method was used. The research setting comprised the oncology and hematology departments of state children's hospitals around the country. Semi-structured interviews were conducted individually.
Results: Analyzing the data yielded "crossing the rocky route" as the main category in barriers to spiritual care, which included the subcategories "spiritual escape," "lack of supporting resources," and "family influenceability." In addition, "multiple support" was obtained as the main category in facilitators of spiritual care, which included the subcategories "emotional support," "healthcare support," "social support," "spiritual support," and "supporting the child." Conclusion: The results of the study showed that spirituality plays an important role in helping parents deal with and adjust to their children's disease. Therefore, it is of great importance to pay due attention to the different aspects of spiritual care and enforce factors that can act as facilitators in this area and remove the barriers to spiritual care. This aim is attainable through educating the parents and medical staff. abstract_id: PUBMED:25637944 The meaning of suffering in patients with advanced progressive cancer. Background: There is a lack of research into suffering and what it means to the individual patient with advanced cancer and its importance in end-of-life care. Aims: To explore the concept of suffering and distress by eliciting what individual patients with advanced cancer perceived as suffering and how they utilised their own resources to manage suffering. Method: A qualitative study design of focused narrative interviews analysed by thematic analysis was conducted with a heterogeneous sample of 49 palliative day care patients. Results: For those patients who perceived they were suffering, this study revealed a wide spectrum of definitions which could be described within the following areas: the unavoidable and avoidable suffering, physical and emotional suffering, suffering as loss and finally transformation through suffering. Conclusions: These findings provide important insight that broadens and enriches our understanding of the concept of suffering and how patients utilise their resources, which may have important implications for end-of-life care. Answer: The care for children with cancer at the end of life appears to be changing, with improvements noted in patterns of care, advanced care planning, and symptom control. A retrospective cohort study from a US tertiary-level pediatric institution found that in the follow-up cohort, hospice discussions occurred more often and earlier compared to the baseline cohort. Do-not-resuscitate orders were also documented earlier, and there was a significant decrease in deaths occurring in intensive care units or other hospitals. Parents reported less child suffering from pain and dyspnea, and a larger proportion of parents felt more prepared during the child's last month of life and at the time of death (PUBMED:18375901). This suggests that children dying of cancer are receiving care that is more consistent with optimal palliative care, and parents perceive that their children are experiencing less suffering. Furthermore, a study on suffering indicators in terminally ill children from the parental perspective revealed that parents described children's suffering as manifested through sadness, apathy, and anger. The isolation from their natural environment, uncertainty towards the future, and anticipation of pain were also sources of suffering. Pain was experienced as an assault that their parents allowed to occur (PUBMED:23727449).
This highlights the importance of understanding parental perceptions of suffering to guide supportive interventions for both children and parents in healthcare settings. Additionally, a study on symptoms and suffering at the end of life for children with complex chronic conditions (CCCs) found that nearly one-third of parents reported high suffering in their child's final days of life. Parent preparedness was associated with lower perceived child suffering, indicating that enhancing parent preparedness could potentially reduce perceived child suffering (PUBMED:34311060). Overall, these studies suggest that there is a shift towards better palliative care practices for children with cancer at the end of life, with an emphasis on early and improved communication, symptom management, and parental preparedness, which may contribute to easing the suffering of these children.
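Note on method: the adjusted odds ratios quoted in this item (e.g., AOR 0.20 for parental confidence in knowing what to expect) are the kind of estimate produced by multivariable logistic regression. The following is an illustrative sketch only; the variable names and simulated data are hypothetical, not the study's actual dataset or code:

```python
# Hypothetical sketch: estimating an adjusted odds ratio (AOR) with
# multivariable logistic regression, as in the bereaved-parent survey above.
# All variable names and data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 99  # size of the analysis sample reported above
df = pd.DataFrame({
    "high_suffering": rng.integers(0, 2, n),  # parent-reported outcome (0/1)
    "felt_prepared": rng.integers(0, 2, n),   # exposure of interest (0/1)
    "child_age": rng.normal(8.0, 4.0, n),     # example adjustment covariate
})

fit = smf.logit("high_suffering ~ felt_prepared + child_age", data=df).fit(disp=0)
aor = np.exp(fit.params["felt_prepared"])             # adjusted odds ratio
lo, hi = np.exp(fit.conf_int().loc["felt_prepared"])  # 95% CI bounds
print(f"AOR {aor:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An AOR below 1, as reported for preparedness, indicates lower odds of the outcome after adjustment for the other covariates.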
Instruction: Is diabetes mellitus a negative prognostic factor for the treatment of advanced non-small-cell lung cancer? Abstracts: abstract_id: PUBMED:24210228 Is diabetes mellitus a negative prognostic factor for the treatment of advanced non-small-cell lung cancer? Background: Many different prognostic factors have been shown to be worthy of consideration, whereas diabetes mellitus (DM) has not been clearly or consistently identified as having prognostic value in advanced non-small cell lung cancer (NSCLC). The aim of this study was to investigate the prognostic significance of the characteristics of patients in advanced NSCLC. Specifically, we investigated the impact of DM on progression-free survival (PFS) and overall survival (OS) in patients receiving first-line platinum-based doublet chemotherapy. Methods: We retrospectively reviewed 442 patients with advanced NSCLC. DM and other potential prognostic variables were chosen for analysis in this study. Univariate and multivariate analyses were conducted to identify prognostic factors associated with survival. Results: Univariate analysis identified the following factors as having prognostic significance for OS: performance status (p<0.001), stage (p<0.001), DM (p<0.001), liver metastasis (p=0.02) and brain metastasis (p<0.001). Stage, diabetes mellitus, and liver metastasis were identified as having prognostic significance for PFS. Multivariate analysis showed that poor performance status, presence of DM and advanced stage were independent negative prognostic factors for OS (p=0.001, p<0.001 and p<0.001, respectively). Furthermore, DM and stage were independent negative prognostic factors for PFS (p=0.005 and p=0.001, respectively). Conclusion: DM at the time of diagnosis was associated with negative prognostic importance for PFS and OS in advanced-stage patients receiving first-line platinum-based doublet chemotherapy. In addition, poor performance status and advanced stage were identified as negative prognostic factors. abstract_id: PUBMED:24051083 Evaluation of the Simplified Comorbidity Score (Colinet) as a prognostic indicator for patients with lung cancer: a cancer registry study. Introduction: A Simplified Comorbidity Score (SCS) provided additional prognostic information to the established factors in patients with non-small cell lung cancer. We undertook this analysis to test the prognostic value of the SCS in a population-based study. Patients And Methods: Retrospective survey of all Victorians diagnosed with lung cancer in January-June 2003, identified from the Victorian Cancer Registry. Results: There were 921 patients, with data available for 841 (91.3%). Median age was 72 years (range 30-94) and 63.1% were male. A tissue diagnosis was made for 89.9%, of which 86.6% were non-small cell (NSCLC), and 13.4% small cell carcinoma (SCLC). Comorbidities on which the SCS is based were distributed: cardiovascular 54.6%; respiratory 38.9%; neoplastic 19.9%; renal 4.6%; diabetes 11.7%; alcoholism 5.5%; and tobacco 83.1%. In patients with NSCLC, a higher SCS score (>9) was associated with increasing stage, ECOG performance status, male sex, increasing age, tobacco consumption and not receiving treatment. Using Cox regression, survival was analysed by SCS score after adjusting for the effect of age, sex, cell type (NSCLC, SCLC, no histology), ECOG performance status and stage for all patients and then restricted to NSCLC.
As a continuous or dichotomous (≤9 or >9) variable, SCS was not a significant prognostic factor for all patients or when restricted to NSCLC. Conclusion: In this retrospective analysis of population-based registry patients, SCS did not provide additional prognostic information in patients with lung cancer. ECOG performance status may be a substitute for the effect of comorbidity. abstract_id: PUBMED:37835539 Diabetes Mellitus Is a Strong Independent Negative Prognostic Factor in Patients with Brain Metastases Treated with Radiotherapy. Background: Brain metastases (BM) cause relevant morbidity and mortality in cancer patients. The presence of cerebrovascular diseases can alter the tumor microenvironment, cellular proliferation and treatment resistance. However, it is largely unknown if the presence of distinct cerebrovascular risk factors may alter the prognosis of patients with BM. Methods: Patients admitted for the radiotherapy of BM at a large tertiary cancer center were included. Patient and survival data, including cerebrovascular risk factors (diabetes mellitus (DM), smoking, arterial hypertension, peripheral arterial occlusive disease and hypercholesterolemia) were recorded. Results: 203 patients were included. Patients with DM (n = 39) had significantly shorter overall survival (OS) (HR 1.75 (1.20-2.56), p = 0.003, log-rank). Other vascular comorbidities were not associated with differences in OS. DM remained prognostically significant in the multivariate Cox regression including established prognostic factors (HR 1.92 (1.20-3.06), p = 0.006). Furthermore, subgroup analyses revealed a prognostic role of DM in patients with non-small cell lung cancer, both in univariate (HR 1.68 (0.97-2.93), p = 0.066) and multivariate analysis (HR 2.73 (1.33-5.63), p = 0.006), and a trend in melanoma patients. Conclusion: DM is associated with reduced survival in patients with BM. Further research is necessary to better understand the molecular mechanisms and therapeutic implications of this important interaction. abstract_id: PUBMED:16234816 A new simplified comorbidity score as a prognostic factor in non-small-cell lung cancer patients: description and comparison with the Charlson's index. Treatment of non-small-cell lung cancer (NSCLC) might take into account comorbidities as an important variable. The aim of this study was to generate a new simplified comorbidity score (SCS) and to determine whether or not it improves the possibility of predicting prognosis of NSCLC patients. A two-step methodology was used. Step 1: an SCS was developed and its prognostic value was compared with classical prognostic determinants in the outcome of 735 previously untreated NSCLC patients. Step 2: the SCS reliability as a prognostic determinant was tested in a different population of 136 prospectively accrued NSCLC patients with a formal comparison between SCS and the classical Charlson comorbidity index (CCI). Prognosis was analysed using both univariate and multivariate (Cox model) statistics. The SCS summarised the following variables: tobacco consumption, diabetes mellitus and renal insufficiency (respective weightings 7, 5 and 4), respiratory, neoplastic and cardiovascular comorbidities and alcoholism (weighting = 1 for each item). In step 1, aside from classical variables such as age, stage of the disease and performance status, SCS was a statistically significant prognostic variable in univariate analyses (a minimal computational sketch of this weighting follows below).
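Before the multivariate results, here is the sketch of the SCS weighting just described; the boolean flags below are a simplification of the original case definitions, which this snippet does not attempt to reproduce:

```python
# Minimal sketch of the Simplified Comorbidity Score (SCS) weighting described
# above: tobacco 7, diabetes 5, renal insufficiency 4; respiratory, neoplastic
# and cardiovascular comorbidities and alcoholism weigh 1 each.
# Boolean inputs are a simplification of the original clinical definitions.
def simplified_comorbidity_score(tobacco: bool, diabetes: bool, renal: bool,
                                 respiratory: bool, neoplastic: bool,
                                 cardiovascular: bool, alcoholism: bool) -> int:
    score = 7 * tobacco + 5 * diabetes + 4 * renal
    score += respiratory + neoplastic + cardiovascular + alcoholism
    return score

# Dichotomization used in the registry analysis above: SCS > 9 vs. <= 9.
scs = simplified_comorbidity_score(tobacco=True, diabetes=False, renal=False,
                                   respiratory=True, neoplastic=False,
                                   cardiovascular=True, alcoholism=False)
print(scs, "high (>9)" if scs > 9 else "low (<=9)")  # -> 9 low (<=9)
```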
In the Cox model, weight loss, stage grouping, performance status and SCS were independent determinants of a poor outcome. There was a trend towards statistical significance for age (P=0.08) and leucocyte count (P=0.06). In Step 2, both SCS and well-known prognostic variables were found to be significant determinants in univariate analyses. There was a trend towards a negative prognostic effect for CCI. In multivariate analysis, stage grouping, performance status, histology, leucocytes, lymphocytes, lactate dehydrogenase, CYFRA 21-1 and SCS were independent determinants of a poor prognosis. CCI was removed from the Cox model. In conclusion, the SCS, constructed as an independent prognostic factor in a large NSCLC patient population, is validated in another prospective population and appears more informative than the CCI in predicting NSCLC patient outcome. abstract_id: PUBMED:22799319 Prognostic factors for second-line treatment of advanced non-small-cell lung cancer: retrospective analysis at a single institution. Background: Platinum-based chemotherapy for advanced non-small cell lung cancer (NSCLC) is still considered the first choice, presenting a modest survival advantage. However, the patients eventually experience disease progression and require second-line therapy. While there are reliable predictors to identify patients receiving first-line chemotherapy, very little knowledge is available about the prognostic factors in patients who receive second-line treatments. The present study was therefore performed. Methods: We retrospectively reviewed 107 patients receiving second-line treatments from August 2002 to March 2012 in the Dicle University, School of Medicine, Department of Medical Oncology. Fourteen potential prognostic variables were chosen for analysis in this study. Univariate and multivariate analyses were conducted to identify prognostic factors associated with survival. Results: Univariate analysis identified the following factors as having prognostic significance for overall survival (OS): performance status (PS), stage, response to first-line chemotherapy, response to second-line chemotherapy and number of metastases. PS, diabetes mellitus (DM), response to first-line chemotherapy and response to second-line chemotherapy were identified as having prognostic significance for progression-free survival (PFS). Multivariate analysis showed that PS, response to first-line chemotherapy and response to second-line chemotherapy were independent prognostic factors for OS. Furthermore, PS and response to second-line chemotherapy were independent prognostic factors for PFS. Conclusion: In conclusion, PS and response to first- and second-line chemotherapy were identified as important prognostic factors for OS in advanced NSCLC patients undergoing second-line palliative treatment. Furthermore, PS and response to second-line chemotherapy were considered independent prognostic factors for PFS. These findings may facilitate pretreatment prediction of survival and can be used for selecting patients for the correct choice of treatment. abstract_id: PUBMED:33002203 Clinicopathological and prognostic features of operable non-small cell lung cancer patients with diabetes mellitus. Background: The aim of this study was to investigate the clinicopathological and prognostic features of operable non-small cell lung cancer (NSCLC) patients with diabetes mellitus (DM).
Methods: A total of 1231 surgically resected NSCLC patients were retrospectively reviewed. Clinicopathological characteristics were compared between patients with DM (DM group, n = 139) and those without DM (non-DM group, n = 1092). The clinical factors associated with postoperative complications and prognostic factors were identified. Results: The DM group had distinct clinicopathological features. No significant differences in histological invasiveness or stage were found. The presence and control status of DM were independent predictors of postoperative complications. No significant differences in recurrence-free survival or cancer-specific survival were observed; however, the DM group had worse overall survival (OS). The DM group had a higher number of deaths from other diseases than the non-DM group, and these patients had significantly higher postoperative hemoglobin A1c levels than patients with cancer-related death. Conclusion: The presence and control status of preoperative DM are useful predictors of both postoperative complications and OS in operable NSCLC patients. Concomitant diabetes-related complications have a negative effect on long-term survival in diabetic NSCLC patients, and long-term glycemic control is important to prolong OS. abstract_id: PUBMED:26690494 Prognostic significance of diabetes mellitus in locally advanced non-small cell lung cancer. Background: To investigate the prognostic significance of patient characteristics and clinical laboratory test results in locally advanced non-small cell lung cancer (NSCLC), and in particular the impact of diabetes mellitus (DM) on the survival of patients who underwent chemoradiotherapy. Methods: We retrospectively reviewed 159 patients with locally advanced NSCLC with a focus on DM and other potential prognostic factors, using the log-rank test, and univariate and multivariate analyses to assess their association with survival. Results: Five significant prognostic factors were identified in univariate analysis: stage (p < 0.001), DM (p = 0.04), hemoglobin levels (p = 0.003), serum albumin (p < 0.001) and lactate dehydrogenase (LDH) levels (p = 0.01). Furthermore, among the factors tested using Fisher's exact test and the Wilcoxon rank sum test, gender (p = 0.019) and plasma glucose level (p < 0.001) were found to have prognostic significance. Multivariate analysis showed that stage, DM, serum albumin and LDH levels were independent prognostic factors for survival (p = 0.007, p = 0.024, p = 0.007 and p = 0.005, respectively). Conclusions: The presence of DM at the time of diagnosis was identified as an independent and significant prognostic factor for predicting negative outcome in locally advanced NSCLC patients. abstract_id: PUBMED:26961089 Prognostic value of pre-operative glucose-corrected maximum standardized uptake value in patients with non-small cell lung cancer after complete surgical resection and 5-year follow-up. Introduction: In this study we evaluated the value of pre-operative glucose-corrected maximum standardized uptake value (GC-SUVmax) as a prognostic factor in patients with early stage non-small cell lung cancer (NSCLC) after complete surgical resection. Methods: This study was designed as a retrospectively evaluated single-center study with prospective data registry. Inclusion criteria were: histologically proven stage I NSCLC, 18F-FDG-PET/CT scan prior to surgery, complete resection (R0) and follow-up in our outpatient department.
Exclusion criteria were: history of malignancy other than NSCLC, diabetes and (neo)adjuvant therapy. The follow-up period was 5 years. Results: Between 2006 and 2008 a total of 33 patients (16 males, 17 females) met the inclusion criteria. SUVmax and GC-SUVmax were strongly correlated (Spearman's ρ = 0.97). The five-year overall survival (OS) rate was 70% (95% CI = 56-87%). Patients who died within 5 years of follow-up had significantly higher pre-operative GC-SUVmax (median = 10.6, IQR = 8.3-14.4) than patients who were alive at 5-year follow-up (median = 6.4, IQR = 3.0-9.8), p = 0.04. SUVmax showed similar differences: 10.4 (8-12.9) vs. 6.6 (3.0-8.8), p = 0.047. The area under the receiver-operating characteristic (ROC) curve at 5 years was 0.70 (95% CI = 0.50-0.90) for GC-SUVmax and 0.71 (95% CI = 0.51-0.91) for SUVmax (p = 0.75). Conclusion: Pre-operative FDG tumor uptake in patients with NSCLC is predictive for survival after complete surgical resection. GC-SUVmax, as an additional value to SUVmax, may better approach competitive inhibition of FDG and glucose in tumors; however, in this study this potential advantage, if any, was very small. abstract_id: PUBMED:20460557 Clinical utility of routine proteinuria evaluation in treatment decisions of patients receiving bevacizumab for metastatic solid tumors. Background: Bevacizumab is an anti-vascular endothelial growth factor monoclonal antibody approved for use in treatment of patients with metastatic breast, colorectal, and non-small cell lung cancer. In the pivotal Phase 3 clinical trials, grades 3-4 proteinuria occurred in <5% of patients. The manufacturer recommends monitoring for the development of proteinuria but does not provide specific recommendations, except to discontinue treatment if the patient develops nephrotic syndrome. Objective: To determine the incidence and severity of elevated proteinuria and the frequency of changes in bevacizumab administration due to elevated proteinuria; secondary objectives included analysis of the cost of routine proteinuria monitoring and the relationship of proteinuria with other patient comorbidities such as diabetes, hypertension, chronic kidney disease, and viral hepatitis. Methods: A retrospective chart review was performed at the University of Washington Medical Center, a large academic teaching hospital, and its affiliated ambulatory clinics at the Seattle Cancer Care Alliance. Patients treated with bevacizumab and seen in the breast, lung, and gastrointestinal cancer clinics from June 1, 2005, to November 30, 2007, were included in the study. Results: A total of 243 patients were included in the analysis. Only 1.6% of these patients developed grades 3-4 proteinuria. All 4 of these patients had a history of hypertension, 2 of these patients had prior chronic kidney disease, and 3 patients had prior viral hepatitis. Elevated proteinuria affected treatment decisions in 2% of patients. Over $130,000 was charged to patients for monitoring of proteinuria. Conclusions: These results demonstrate that the development of grades 3-4 proteinuria with bevacizumab is rare and affects treatment decisions in few patients with metastatic solid tumors. Furthermore, routine proteinuria monitoring is associated with high cost and may not be required before each administration. abstract_id: PUBMED:26341687 Synergistic effects of metformin in combination with EGFR-TKI in the treatment of patients with advanced non-small cell lung cancer and type 2 diabetes.
Background: Acquired resistance has become the bottleneck affecting the efficacy of epidermal growth factor receptor (EGFR) tyrosine kinase inhibitor (TKI) treatment. Studies have shown that the antidiabetic drug metformin could effectively increase the sensitivity of TKI-resistant lung cancer cells to EGFR-TKI. This study aimed to evaluate the effect of metformin in combination with EGFR-TKI on the prognosis of non-small cell lung cancer (NSCLC) patients with diabetes mellitus type 2 (DM2). Methods: Data of NSCLC patients with DM2 who received treatment in six hospitals in China between January 2006 and January 2014 were reviewed retrospectively. They were divided into two groups: Group A, where the patients (n = 44) received EGFR-TKI plus metformin; and Group B, where the patients (n = 46) received EGFR-TKI plus hypoglycemic agents other than metformin. Prognostic differences between the two groups were assessed. Results: The median progression-free survival (PFS) and median overall survival (OS) in Group A were significantly longer than those in Group B (19.0 months vs. 8.0 months, P = .005; 32.0 months vs. 23.0 months, P = .002). The objective response rate (ORR) and disease control rate (DCR) in Group A were significantly higher than those in Group B (70.5% vs. 45.7%, P = .017; 97.7% vs. 80.4%, P = .009). Secondary data analysis showed that metformin use significantly prolonged the median PFS in subgroups using either first-line EGFR-TKI or second-line EGFR-TKI. Conclusions: Metformin and EGFR-TKI have a synergistic effect in the treatment of DM2 NSCLC patients harboring EGFR-activating mutations. Metformin use is associated with improved survival and delayed onset of acquired resistance to EGFR-TKI. Answer: Yes, diabetes mellitus (DM) has been identified as a negative prognostic factor for the treatment of advanced non-small cell lung cancer (NSCLC). Several studies have demonstrated that DM is associated with poorer outcomes in patients with advanced NSCLC. For instance, one study found that DM at the time of diagnosis was associated with negative prognostic importance for progression-free survival (PFS) and overall survival (OS) in patients receiving first-line platinum-based doublets chemotherapy. Poor performance status and advanced stage were also identified as negative prognostic factors (PUBMED:24210228). Another study indicated that DM is a strong independent negative prognostic factor in patients with brain metastases treated with radiotherapy, particularly in patients with NSCLC and melanoma (PUBMED:37835539). Furthermore, the presence of DM was identified as an independent and significant prognostic factor for predicting negative outcomes in locally advanced NSCLC patients undergoing chemoradiotherapy (PUBMED:26690494). However, it is important to note that the prognostic value of DM can vary depending on the context and the presence of other factors. For example, one study found that the Simplified Comorbidity Score (SCS), which includes DM as one of its components, did not provide additional prognostic information in patients with lung cancer when considering other established factors (PUBMED:24051083). In contrast, another study suggested that the SCS, which includes DM, was a significant prognostic variable and more informative than the Charlson comorbidity index (CCI) in predicting NSCLC patient outcomes (PUBMED:16234816). 
In the context of operable NSCLC, the presence and control status of DM were predictors of postoperative complications and OS, with diabetic patients experiencing worse OS due to a higher number of deaths from other diseases (PUBMED:33002203). Additionally, metformin, a common medication for DM, has been shown to have a synergistic effect when combined with EGFR-TKI in the treatment of NSCLC patients with type 2 diabetes, leading to improved survival and delayed onset of acquired resistance to EGFR-TKI (PUBMED:26341687). In summary, DM is generally considered a negative prognostic factor in advanced NSCLC, affecting survival outcomes and treatment efficacy.
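As an illustration of the survival analyses that recur throughout this item, the sketch below fits a multivariate Cox proportional hazards model testing whether diabetes remains prognostic after adjustment for stage and performance status. It is a hypothetical example on simulated data, not any study's actual code:

```python
# Illustrative Cox proportional hazards model of the kind used above
# (lifelines library); covariate names and data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "diabetes": rng.integers(0, 2, n),
    "stage_iv": rng.integers(0, 2, n),
    "poor_ps":  rng.integers(0, 2, n),   # e.g., ECOG performance status >= 2
})
# Simulate shorter survival for diabetic, stage IV and poor-PS patients.
hazard = 0.05 * np.exp(0.5 * df["diabetes"] + 0.7 * df["stage_iv"] + 0.6 * df["poor_ps"])
df["os_months"] = rng.exponential(1.0 / hazard)
df["death"] = (df["os_months"] < 36).astype(int)   # administrative censoring
df["os_months"] = df["os_months"].clip(upper=36)

cph = CoxPHFitter().fit(df, duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios with 95% CIs and p-values per covariate
```

A hazard ratio above 1 for the diabetes indicator in such a model corresponds to the adverse prognostic effect the abstracts above report.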
Instruction: Magnetic resonance imaging of the spinal cord in cervical ossification of the posterior longitudinal ligament. Can it predict surgical outcome? Abstracts: abstract_id: PUBMED:8272953 Magnetic resonance imaging study on the results of surgery for cervical compression myelopathy. The morphologic changes and signal intensity of the spinal cord on preoperative magnetic resonance images were correlated with postoperative outcomes in 74 patients undergoing decompressive cervical surgery for compressive myelopathy. The transverse area of the spinal cord on T1-weighted images at the level of maximum compression was closely correlated with the severity of myelopathy, duration of disease, and recovery rate as determined by the Japanese Orthopaedic Association score. In patients with ossification of the posterior longitudinal ligament or cervical spondylotic myelopathy, the increased intramedullary T2-weighted magnetic resonance imaging signal at the site of maximal cord compression and duration of disease significantly influenced the rate of recovery. A multiple regression equation was then developed with these three variables to predict surgical outcomes. abstract_id: PUBMED:1440032 Preoperative and postoperative magnetic resonance image evaluations of the spinal cord in cervical myelopathy. To evaluate the morphologic changes of the spinal cord in patients with cervical myelopathy due to cervical spondylosis and ossification of the posterior longitudinal ligament, the authors measured the thickness and signal intensity of the cervical cord with magnetic resonance imaging in healthy adults and patients with cervical myelopathy, and compared these findings. In patients with cervical myelopathy, the preoperative and postoperative magnetic resonance imaging findings were compared with the severity of myelopathy and postoperative results. In healthy adults, the anteroposterior diameter of the cervical cord was 7.8 mm at the C3 level and decreased at lower levels. In the patients with cervical myelopathy, the preoperative spinal anteroposterior diameter was significantly reduced at various levels corresponding to the stenosis site within the vertebral canal. In the group with ossification of the posterior longitudinal ligament, the minimal anteroposterior diameter of the cervical cord tended to decrease with increasing severity of myelopathy. However, no relationship was observed between the two parameters in the cervical spondylotic myelopathy group. In the group with ossification of the posterior longitudinal ligament, surgical results were good when the postoperative anteroposterior diameter was increased, whereas in the cervical spondylotic myelopathy group there was no relationship between the two parameters. In the patients with myelopathy, a high intensity area was observed in about 40% of all patients before operation and about 30% after operation. However, the presence or absence of a high intensity area did not correlate with the severity of myelopathy or with surgical results in the group with ossification of the posterior longitudinal ligament and the cervical spondylotic myelopathy groups. abstract_id: PUBMED:9460150 Magnetic resonance imaging of the spinal cord in cervical ossification of the posterior longitudinal ligament. Can it predict surgical outcome?
Study Design: Magnetic resonance imaging findings of an increased signal in the cervical cord in patients undergoing surgery for ossification of the posterior longitudinal ligament were analyzed to determine whether an increased signal on T2-weighted images correlated with a poorer outcome. Objectives: To clarify whether preoperative magnetic resonance imaging findings of a high signal in the cord constitute a poor prognostic factor. Summary Of Background Data: The importance of a high, T2-weighted, intramedullary signal on preoperative magnetic resonance studies in patients undergoing surgery for ossification of the posterior longitudinal ligament requires further clarification. Methods: Of 91 patients having cervical surgery for ossification of the posterior longitudinal ligament, 26 had a history of minor trauma. High, T2-weighted signals in the cord were noted in 23 patients who had sustained trauma and in 39 patients who had no history of trauma. Patients were divided into four groups according to the presence or absence of a high cord signal and/or a trauma history. Pre- and postoperative Japanese Orthopaedic Association scores and recovery ratios were then evaluated. Results: The pre- and postoperative Japanese Orthopaedic Association scores and recovery ratios of the patients with a high signal and a trauma history were significantly less than those with no high signal but with a trauma history. Among the patients with no history of trauma, however, there were no significant differences in the pre- and postoperative JOA scores and recovery ratios between the patients with a high signal and those with no high signal. Conclusion: A high preoperative cord signal on T2-weighted magnetic resonance images for patients undergoing surgery for ossification of the posterior longitudinal ligament constitutes a poor prognostic factor when trauma has played a role. abstract_id: PUBMED:9266466 Plasticity of the spinal cord contributes to neurological improvement after treatment by cervical decompression. A magnetic resonance imaging study. To investigate the relationship between morphological plasticity of the spinal cord and neurological outcome after surgery for compressive lesions, we correlated the transverse area of the cervical spinal cord measured by transaxial magnetic resonance imaging (MRI) obtained during the early postoperative period (1-6 months) with neurological function assessed at a median postoperative follow-up period of 2.5 years. Measurements on MRI in 56 patients (35 men and 21 women) included evaluation of the cross-sectional area of the cervical cord and the subarachnoidal space at the level of decompression. The transverse area of the cervical cord increased by 30 to 62% postoperatively and that of the subarachnoidal space by 57 to 95%. Neurological improvement was noted in all patients and averaged 63% in our assessment scale. Expansion of the cervical cord during the early postoperative period correlated significantly with the late postoperative neurological status (P = 0.009). Our results suggest that an increase in the cross-sectional area of the cervical spinal cord, representing spinal cord morphological plasticity, is a significant factor in determining the late neurological improvement following decompressive surgery. abstract_id: PUBMED:11389390 Correlation between operative outcomes of cervical compression myelopathy and MRI of the spinal cord.
Study Design: Magnetic resonance images of cervical compression myelopathy were retrospectively analyzed in comparison with surgical outcomes. Objectives: To investigate which magnetic resonance findings in patients with cervical compression myelopathy reflect the clinical symptoms and prognosis, and to determine the radiographic and clinical factors that correlate with the prognosis. Summary Of Background Data: Signal intensity changes of the spinal cord on magnetic resonance imaging in chronic cervical myelopathy are thought to be indicative of the prognosis. However, the prognostic significance of signal intensity change remains controversial. Methods: The participants in this study were 73 patients who underwent cervical expansive laminoplasty for cervical compression myelopathy. Their mean age was 64 years, and the mean postoperative follow-up period was 3.4 years. The pathologic conditions were cervical spondylotic myelopathy in 42 patients and ossification of the posterior longitudinal ligament in 31 patients. Magnetic resonance imaging (spin-echo sequence) was performed in all the patients. The transverse area of the spinal cord at the site of maximal compression was computed, and spinal cord signal intensity changes were evaluated before and after surgery. Three patterns of spinal cord signal intensity changes on T1-weighted sequences/T2-weighted sequences were detected as follows: normal/normal, normal/high-signal intensity changes, and low-signal/high-signal intensity changes. Surgical outcomes were compared among these three groups. The most useful combination of parameters for predicting prognosis was determined using a stepwise regression analysis. Results: The findings showed 2 patients with normal/normal, 67 patients with normal/high-signal, and 4 patients with low-signal/high-signal change patterns before surgery. Regarding postoperative recovery, the preoperative low-signal/high-signal group was significantly inferior to the preoperative normal/high-signal group. There was no significant difference between the transverse area of the spinal cord at the site of maximal compression in the normal/high-signal group and the low-signal/high-signal group. A stepwise regression analysis showed that the best combination of surgical outcome predictors included age (correlation coefficient R = -0.348), preoperative signal pattern, and duration of symptoms (correlation coefficient R = -0.231). Conclusions: The low-signal intensity changes on T1-weighted sequences indicated a poor prognosis. The authors speculate that high-signal intensity changes on T2-weighted images include a broad spectrum of compressive myelomalacic pathologies and reflect a broad spectrum of spinal cord recuperative potentials. Predictors of surgical outcomes are preoperative signal intensity change pattern of the spinal cord on radiologic evaluations, age at the time of surgery, and chronicity of the disease. abstract_id: PUBMED:28623403 Diffusion tensor imaging can predict surgical outcomes of patients with cervical compression myelopathy. Purpose: The aim of this study was to assess the potential role of diffusion tensor imaging (DTI) as a predictor of surgical outcomes in patients with cervical compressive myelopathy (CCM). Surgical decompression is often recommended for symptomatic CCM. It is important to know the prognosis of surgical outcomes and to recommend appropriate timing for surgery. Methods: We enrolled 26 patients with CCM who underwent surgery.
The Japanese Orthopaedic Association (JOA) score for cervical myelopathy was evaluated before and 6 months after surgery. Surgical outcomes were regarded as good if there was a change in JOA score of three points or more, or if the recovery rate of the JOA score was 50% or more. The patients were examined using a 3.0 T magnetic resonance system before surgery. Measured diffusion parameters were fractional anisotropy (FA) and mean diffusivity (MD). The correlations between DTI parameters and surgical outcomes were analyzed. Results: Both the change and the recovery rate of the JOA score moderately correlated with FA. Furthermore, the area under the receiver-operating characteristic curve based on FA for prognostic precision of surgical outcomes indicates that FA is a good predictive factor. The cut-off values of FA for predicting good surgical outcomes evaluated by change and recovery rate of JOA score were 0.65 and 0.57, respectively. Neither the change nor the recovery rate of the JOA score correlated with MD. Conclusions: FA in spinal cord DTI can moderately predict surgical outcomes. DTI can serve as a supplementary tool for decision-making to guide surgical intervention in patients with CCM. abstract_id: PUBMED:20043766 Long-term surgical outcome and risk factors in patients with cervical myelopathy and a change in signal intensity of intramedullary spinal cord on Magnetic Resonance imaging. Object: The goal of this study was to determine the long-term clinical significance of and the risk factors for intramedullary signal intensity change on MR images in patients with cervical compression myelopathy (CCM), an entity most commonly seen with cervical spondylotic myelopathy and ossification of the posterior longitudinal ligament (OPLL). Methods: One hundred seventy-four patients with CCM but without cervical disc herniation, severe OPLL (in which the cervical canal is < 10 mm due to OPLL), or severe kyphotic deformity (> 15 degrees of cervical kyphosis) who underwent surgery were initially selected. One hundred eight of these patients were followed for > 36 months, and the 71 patients who agreed to MR imaging examinations both pre- and postsurgery were enrolled in the study (the mean follow-up duration was 60.6 months). All patients underwent cervical laminoplasty. The authors used the Japanese Orthopaedic Association (JOA) score and recovery ratio for evaluation of pre- and postoperative outcomes. The multifactorial effects of variables such as age, sex, a history of smoking, diabetes mellitus, duration of symptoms, postoperative expansion of the high signal intensity area of the spinal cord on MR imaging, sagittal arrangement of the cervical spine, presence of ventral spinal cord compression, and presence of an unstable cervical spine were studied. Results: Change in intramedullary signal intensity was observed in 50 of the 71 patients preoperatively. The pre- and postoperative JOA scores and the recovery ratio were significantly lower in the patients with signal intensity change. The mean JOA score of the upper extremities was also significantly lower in these patients. Twenty-one patients showed hypointensity in their T1-weighted images, and a nonsignificant correlation was observed between intensity in the T1-weighted image and the mean JOA score and recovery ratio. The risk factors for signal intensity change were instability of the cervical spine (OR 8.255, p = 0.037) and ventral spinal cord compression (OR 5.502, p < 0.01).
Among these patients, 16 had postoperative expansion of the high signal intensity area of the spinal cord. The mean JOA score and the recovery ratio at the final follow-up were significantly lower in these patients. The risk factor for postoperative expansion of the high signal intensity area was instability of the cervical spine (OR 5.509, p = 0.022). No significant correlation was observed between signal intensity on T1-weighted MR images and postoperative expansion of the intramedullary high signal intensity area on T2-weighted MR images. Conclusions: Long-term clinical outcome was significantly worse in patients with intramedullary signal intensity changes on MR images. The risk factors were instability of the cervical spine and severe ventral spinal compression. The long-term clinical outcome was also significantly worse in patients with postoperative expansion of the high signal intensity area. The fact that cervical instability was a risk factor for the postoperative expansion of the high signal intensity indicates that this high signal intensity area occurred not only from necrosis secondary to ischemia of the anterior spinal artery, but also from the repeated minor traumas inflicted on the spinal cord by an unstable cervical spine. The long-term neurological outcome found in the preliminary study of patients with CCM who had cervical instability and intramedullary signal intensity changes on MR images suggests that surgical treatment should include posterior fixation along with cervical laminoplasty or anterior spinal fusion. abstract_id: PUBMED:36342593 Magnetic resonance image segmentation of the compressed spinal cord in patients with degenerative cervical myelopathy using convolutional neural networks. Purpose: Spinal cord segmentation is the first step in atlas-based spinal cord image analysis, but segmentation of compressed spinal cords from patients with degenerative cervical myelopathy (DCM) is challenging. We applied convolutional neural network (CNN) models to segment the spinal cord from T2-weighted axial magnetic resonance images of DCM patients. Furthermore, we assessed the correlation between the cross-sectional area (CSA) segmented by this network and the neurological symptoms of the patients. Methods: The CNN architectures were built using U-Net and DeepLabv3+, implemented in PyTorch. The CNN was trained on 2762 axial slices from 174 patients, and an additional 517 axial slices from 33 patients were held out for validation and 777 axial slices from 46 patients for testing. The performance of the CNN was evaluated on a test dataset with Dice coefficients as the outcome measure. The ratio of the CSA at the maximum compression level to the CSA at the C2 level, as segmented by the CNN, was calculated. The correlation between the spinal cord CSA ratio and the Japanese Orthopaedic Association (JOA) score in DCM patients from the test dataset was investigated using Spearman's rank correlation coefficient. Results: The best Dice coefficient was achieved when U-Net was used as the architecture and EfficientNet-b7 as the model for transfer learning. Spearman's rs between the spinal cord CSA ratio and the JOA score of DCM patients was 0.38 (p = 0.007), showing a weak correlation. Conclusion: Using deep learning with magnetic resonance images of deformed spinal cords as training data, we were able to segment compressed spinal cords of DCM patients with a high concordance with expert manual segmentation. In addition, the spinal cord CSA ratio was weakly, but significantly, correlated with neurological symptoms.
Our study demonstrated the first steps needed to implement automated atlas-based analysis of DCM patients. abstract_id: PUBMED:28939166 Spinal cord MRI signal changes at 1 year after cervical decompression surgery is useful for predicting midterm clinical outcome: an observational study using propensity scores. Background Context: There is little information on the relationship between magnetic resonance imaging (MRI) T2-weighted high signal change (T2HSC) in the spinal cord and surgical outcome for cervical myelopathy. We therefore examined whether T2HSC regression at 1 year postoperatively reflected a 5-year prognosis after adjustment using propensity scores for potential confounding variables, which have been a disadvantage of earlier observational studies. Purpose: The objective of this study was to clarify the usefulness of MRI signal changes for the prediction of midterm surgical outcome in patients with cervical myelopathy. Study Design/Setting: This is a retrospective cohort study. Patient Sample: We recruited 137 patients with cervical myelopathy who had undergone surgery between 2007 and 2012 at a median age of 69 years (range: 39-87 years). Outcome Measures: The outcome measures were the recovery rates of the Japanese Orthopaedic Association (JOA) scores and the visual analog scale (VAS) scores for complaints at several body regions. Materials And Methods: The subjects were divided according to the spinal MRI results at 1 year post surgery into the MRI regression group (Reg+ group, 37 cases) with fading of T2HSC, or the non-regression group (Reg- group, 100 cases) with either no change or an enlargement of T2HSC. The recovery rates of JOA scores from 1 to 5 years postoperatively along with the 5-year postoperative VAS scores were compared between the groups using the t-test. Outcome scores were adjusted for age, sex, diagnosis, symptom duration, and preoperative JOA score by the inverse probability weighting method using propensity scores. Results: The mean recovery rates in the Reg- group were 35.1%, 34.6%, 27.6%, 28.0%, and 30.1% from 1 to 5 years post surgery, respectively, whereas those in the Reg+ group were 52.0%, 52.0%, 51.1%, 49.0%, and 50.1%, respectively. The recovery rates in the Reg+ group were significantly higher at all observation points. At 5 years postoperatively, the VAS score for pain or numbness in the arms or hands of the patients in the Reg+ group (24.7 mm) was significantly milder than that of the patients in the Reg- group (42.2 mm). Conclusions: Spinal T2HSC improvement at 1 year postoperatively may predict a favorable recovery until up to 5 years after surgery. abstract_id: PUBMED:7946816 Does increased signal intensity of the spinal cord on MR images due to cervical myelopathy predict prognosis? We examined whether or not high signal intensity change on magnetic resonance imaging of the spinal cord of patients with cervical myelopathy is related to the clinical symptoms and prognosis. Twenty-five patients with cervical myelopathy were treated by decompressive surgery which involved laminoplasty or decompressive anterior interbody fusion. The pathological conditions were cervical disc herniation (n = 8), ossification of the posterior longitudinal ligament in the cervical spine (n = 7), and cervical spondylotic myelopathy (n = 10). The spinal cord compression and the intramedullary signal intensity at the site of maximum compression were evaluated pre- and postoperatively using T1- and T2-weighted images.
There was no significant relationship between spinal cord compressive change and clinical symptoms. Patients in whom the high signal change of the spinal cord on the T2-weighted sequence recovered after decompressive surgery had better recovery from clinical symptoms, but this did not reach statistical significance. We suggest that signal changes on T2-weighted images may reflect pathological changes but cannot be used to predict prognosis at present. Answer: Magnetic resonance imaging (MRI) of the spinal cord in cervical ossification of the posterior longitudinal ligament (OPLL) can provide valuable information that may help predict surgical outcomes. Several studies have investigated the relationship between MRI findings and postoperative recovery in patients with OPLL. One study found that an increased intramedullary T2-weighted MRI signal at the site of maximal cord compression and the duration of the disease significantly influenced the rate of recovery in patients with OPLL or cervical spondylotic myelopathy. A multiple regression equation was developed with these variables to predict surgical outcomes (PUBMED:8272953). Another study analyzed whether a high signal on T2-weighted images correlated with a poorer outcome. The results indicated that for patients with a history of minor trauma, a high preoperative cord signal on T2-weighted MRI images constituted a poor prognostic factor. However, among patients with no history of trauma, there were no significant differences in pre- and postoperative Japanese Orthopaedic Association (JOA) scores and recovery ratios between patients with a high signal and those without (PUBMED:9460150). Furthermore, a study that investigated the correlation between operative outcomes of cervical compression myelopathy and MRI of the spinal cord found that low-signal intensity changes on T1-weighted sequences indicated a poor prognosis. The study suggested that predictors of surgical outcomes include preoperative signal intensity change patterns of the spinal cord on radiologic evaluations, age at the time of surgery, and chronicity of the disease (PUBMED:11389390). In summary, MRI findings, particularly the presence of increased signal intensity on T2-weighted images and signal intensity changes on T1-weighted images, can be indicative of surgical outcomes in patients with cervical OPLL. These imaging characteristics, along with other factors such as the duration of symptoms and patient age, can be used to predict the likelihood of recovery after decompressive surgery for cervical compression myelopathy associated with OPLL.
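One methodological aside before the next item: the segmentation study cited above (PUBMED:36342593) scores agreement between CNN and expert masks with the Dice coefficient. A minimal sketch of that overlap measure, assuming binary NumPy masks, is:

```python
# Minimal sketch of the Dice coefficient used above to score spinal cord
# segmentations against expert manual masks; assumes binary arrays.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1    # toy prediction
truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:4] = 1  # toy ground truth
print(round(dice_coefficient(pred, truth), 3))  # 0.8
```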
Instruction: Does transient cART started during primary HIV infection undermine the long-term immunologic and virologic response on cART resumption? Abstracts: abstract_id: PUBMED:25888386 Does transient cART started during primary HIV infection undermine the long-term immunologic and virologic response on cART resumption? Background: We explored the impact of transient cART started during primary HIV infection (PHI) on the long-term immunologic and virologic response on cART resumption, by comparison with treatment initiation during the chronic phase of HIV infection (CHI). Methods: We analyzed data on 1450 patients enrolled during PHI in the ANRS PRIMO cohort between 1996 and 2013. "Treatment resumption" was defined as at least 3 months of resumed treatment following interruption of at least 1 month of treatment initiated during PHI. "Treatment initiation during CHI" was defined as cART initiated ≥6 months after PHI. The virologic response to resumed treatment and to treatment initiated during CHI was analyzed with survival models. The CD4 cell count dynamics was modeled with piecewise linear mixed models. Results: 136 patients who resumed cART for a median (IQR) of 32 (18-51) months were compared with 377 patients who started cART during CHI for a median of 45 (22-57) months. Most patients (97%) achieved HIV-RNA <50 cp/mL after similar times in the two groups. The CD4 cell count rose similarly in the two groups during the first 12 months. However, after 12 months, patients who started cART during CHI had a better immunological response than those who resumed cART (p = 0.01); therefore, at 36 months, the gains in √CD4 cells/mm³ and CD4% were significantly greater in patients who started treatment during CHI. Conclusion: These results suggest that interruption of cART started during PHI has a significant, albeit modest, negative impact on CD4 cell recovery on cART resumption. abstract_id: PUBMED:23936260 Virologic and immunologic response to cART by HIV-1 subtype in the CASCADE collaboration. Background: We aimed to compare rates of virologic response and CD4 changes after combination antiretroviral (cART) initiation in individuals infected with B and specific non-B HIV subtypes. Methods: Using CASCADE data we analyzed HIV-RNA and CD4 counts for persons infected in 1996 or later, ≥15 years of age. We used survival and longitudinal modeling to estimate probabilities of virologic response (confirmed HIV-RNA <500 c/ml) and failure (HIV-RNA >500 c/ml at 6 months or ≥1000 c/ml following response) and CD4 increase after cART initiation. Results: 2003 (1706 B, 142 CRF02_AG, 55 A, 53 C, 47 CRF01_AE) seroconverters were included in the analysis. There was no evidence of a subtype effect overall for response or failure (p = 0.075 and 0.317, respectively), although there was a suggestion that those infected with subtypes CRF01_AE and A responded sooner than those with subtype B infection [HR (95% CI): 1.37 (1.01-1.86) and 1.29 (0.96-1.72), respectively]. Rates of CD4 increase were similar in all subtypes except subtype A, which tended to have lower initial, but faster long-term, increases. Conclusions: Virologic and immunologic response to cART was similar across all studied subtypes, but statistical power was limited by the rarity of some non-B subtypes. Current antiretroviral agents seem to have similar efficacy in subtype B and most widely encountered non-B infections in high-income countries.
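The PRIMO and CASCADE analyses above treat virologic response as a time-to-event outcome. A hypothetical sketch of that framing (invented data; a Kaplan-Meier estimator rather than the cohorts' full survival models) might look like this:

```python
# Illustrative time-to-event view of virologic response (time to confirmed
# HIV-RNA < 500 c/mL after cART start). Data are invented; suppressed = 0
# marks patients censored at their last visit.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "months":     [2, 3, 3, 5, 6, 8, 12, 12, 15, 18],
    "suppressed": [1, 1, 1, 1, 0, 1, 1, 0, 1, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["suppressed"], label="time to <500 c/mL")
print(kmf.median_survival_time_)  # median time to suppression in this toy data
```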
abstract_id: PUBMED:35140523 A Comparison of Adherence and CD4 Cell Count with Respect to Virologic Failure Among HIV-Infected Adults Under Combination Antiretroviral Therapy (cART) at Felege Hiwot Teaching and Specialized Hospital, Bahir Dar, Ethiopia. Background: Medication adherence plays a significant role in the success of combination antiretroviral therapy (cART). Therefore, the current investigation was conducted with the objective of comparing adherence and CD4 cell count with respect to virologic failure among HIV-infected adults under cART. Methods: A retrospective study design was conducted on 792 randomly selected HIV-infected adult patients who initiated first-line cART, enrolled in the first 10 months of 2012 and followed up to August 2018, selected by a simple random sampling technique based on their identification number. Results: The main outcome for the current investigation was virologic failure, which decreased with successive visits. The areas under the receiver operating characteristic curve for adherence and CD4 cell count change were 0.68 and 0.63 (χ2 = 21.2; p-value <0.001) for the 12-month assessment. Similarly, these areas for the 36th and 60th month assessments were 0.71 and 0.66 (χ2 = 23.2; p-value <0.001), and 0.73 and 0.71 (χ2 = 24.3; p-value <0.001) for adherence and CD4 cell count, respectively. Conclusion: Pill-count adherence was more accurate than CD4 cell count change for assessing virologic responses. Therefore, because of its easy access, simple use, cost-effectiveness, and accuracy, adherence to cART was favored over CD4 cell count change for monitoring the healthcare quality of HIV-infected patients. abstract_id: PUBMED:31554200 Rates and Correlates of Short Term Virologic Response among Treatment-Naïve HIV-Infected Children Initiating Antiretroviral Therapy in Ethiopia: A Multi-Center Prospective Cohort Study. There are limited data on virologic outcome and its correlates among HIV-infected children in resource-limited settings. We investigated rates and correlates of virologic outcome among treatment-naïve HIV-infected Ethiopian children initiating cART, who were followed prospectively at baseline and at 8, 12, 24 and 48 weeks using plasma viral load, clinical examination, laboratory tests and pretreatment HIV drug resistance (PDR) screening. Virologic outcome was assessed using two endpoints: virological suppression, defined as having an "undetectable" plasma viral load (<150 RNA copies/mL), and rebound, defined as a viral load ≥150 copies/mL after achieving suppression. Cox proportional hazards regression was employed to assess correlates of outcome. At the end of follow-up, virologic outcome was measured for 110 participants. Overall, 94 (85.5%) achieved virological suppression, of which 36 (38.3%) experienced virologic rebound. At 48 weeks, 9 (8.2%) children developed WHO-defined virological treatment failure. Taking a tenofovir-containing regimen (hazard ratio (HR) 3.1 [95% confidence interval (CI) 1.0-9.6], p = 0.049) and absence of pretreatment HIV drug resistance (HR 11.7 [95% CI 1.3-104.2], p = 0.028) were independently associated with earlier virologic suppression. In conclusion, PDR and cART regimen type correlate with the rate of virologic suppression, which was prominent during the first year of cART initiation. However, the impact of viral rebound in 38.3% of the children needs evaluation.
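The adherence-versus-CD4 comparison above is a receiver operating characteristic (ROC) analysis. The sketch below reproduces the idea on simulated labels and scores; the AUC values printed will not match the study's 0.68 and 0.63:

```python
# Sketch of an ROC/AUC comparison of two candidate predictors of virologic
# failure (adherence vs. CD4 change), as in the abstract above. Simulated data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 792  # cohort size reported above
failure = rng.integers(0, 2, n)                       # 1 = virologic failure
adherence = rng.normal(90, 8, n) - 10 * failure       # % of pills taken
cd4_change = rng.normal(150, 80, n) - 40 * failure    # cells/mm3 gained

# Lower adherence / smaller CD4 gain should flag failure, so negate the scores.
print("AUC, adherence:", round(roc_auc_score(failure, -adherence), 2))
print("AUC, CD4 change:", round(roc_auc_score(failure, -cd4_change), 2))
```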
abstract_id: PUBMED:34074181 Tobacco smoking and HIV-related immunologic and virologic response among individuals of the Canadian HIV Observational Cohort (CANOC). We assessed the relationship between tobacco smoking and immunologic and virologic response among people living with HIV (PLWH) initiating combination antiretroviral therapy (cART) in the Canadian HIV Observational Cohort (CANOC). Positive immunologic and virologic responses were defined, respectively, as a ≥50 cells/mm3 CD4 count increase (CD4+) and viral suppression ≤50 copies/mL (VL+) within 6 months of cART initiation. Using multinomial regression, we examined the relationship between smoking and immunologic and virologic response category. Model A adjusted for birth sex, baseline age, enrolling province, and era of cohort entry; models B and C further adjusted for neighbourhood-level material deprivation and history of injection drug use (IDU), respectively. Among 4267 individuals (32.7%) with smoking status data, a concordant positive (CD4+/VL+) response was achieved by 64.2% of never, 66.9% of former, and 59.4% of current smokers. In the unadjusted analysis, current smoking was significantly associated with concordant negative response (odds ratio [OR] 1.85, 95% confidence interval [CI] 1.40-2.45). Similarly, models A and B showed an increased odds of concordant negative response in current smokers (adjusted OR [aOR] 1.78, 95% CI 1.32-2.39 and 1.74, 95% CI 1.29-2.34, respectively). The association between current smoking and concordant negative response was no longer significant in model C (aOR 1.18, 95% CI 0.85-1.65). abstract_id: PUBMED:37877716 Epitope-dependent effect of long-term cART on maintenance and recovery of HIV-1-specific CD8+ T cells. Importance: HIV-1-specific CD8+ T cells are anticipated to become effector cells for curative treatment using the "shock and kill" approach in people living with HIV-1 (PLWH) under combined antiretroviral therapy (cART). Previous studies demonstrated that the frequency of HIV-1-specific CD8+ T cells is reduced under cART and their functional ability remains impaired. These studies analyzed T-cell responses to a small number of HIV-1 epitopes or overlapping HIV-1 peptides. Therefore, the features of CD8+ T cells specific for HIV-1 epitopes under cART remain only partially clarified. Here, we analyzed CD8+ T cells specific for 63 well-characterized epitopes in 90 PLWH. We demonstrated that CD8+ T cells specific for large numbers of HIV-1 epitopes were maintained in an epitope-dependent fashion under long-term cART and that long-term cART enhanced or restored the ability of HIV-1-specific T cells to proliferate in vitro. This study implies that some HIV-1-specific T cells would be useful as effector cells for curative treatment. abstract_id: PUBMED:30925831 HIV and cART-Associated Dyslipidemia Among HIV-Infected Children. Background: Persistent dyslipidemia in children is associated with risks of cardiovascular accidents and poor combination antiretroviral therapy (cART) outcome. We report on the first evaluation of the prevalence of, and associations with, dyslipidemia due to HIV and cART among HIV-infected Ethiopian children. Methods: 105 cART-naïve and 215 treatment-experienced HIV-infected children were enrolled from nine HIV centers. Demographic and clinical data, lipid profile, cART type, adherence to and duration on cART were recorded.
Total, low density (LDLc) and high density (HDLc) cholesterol values >200 mg/dL, >130 mg/dL, <40 mg/dL, respectively, and/or triglyceride values >150 mg/dL defined cases of dyslipidemia. Prevalence and predictors of dyslipidemia were compared between the two groups. Results: Prevalence of dyslipidemia was significantly higher among cART-experienced (70.2%) than treatment-naïve (58.1%) children (p = 0.03). Prevalence of low HDLc (40.2% versus 23.4%, p = 0.006) and hypertriglyceridemia (47.2% versus 35.8%, p = 0.02) was higher among cART-experienced than naïve children. There was no difference in total hypercholesterolemia and high LDLc levels. Nutrition state was associated with dyslipidemia among cART-naïve children (p = 0.01). Conclusion: High prevalence of cART-associated dyslipidemia, particularly low HDLc and hypertriglyceridemia, was observed among treatment-experienced HIV-infected children. The findings underscore the need for regular follow up of children on cART for lipid abnormalities. abstract_id: PUBMED:33717191 Long-Term Suppressive cART Is Not Sufficient to Restore Intestinal Permeability and Gut Microbiota Compositional Changes. Background: We explored the long-term effects of cART on markers of gut damage, microbial translocation, and paired gut/blood microbiota composition, with a focus on the role exerted by different drug classes. Methods: We enrolled 41 cART-naïve HIV-infected subjects, undergoing blood and fecal sampling prior to cART (T0) and after 12 (T12) and 24 (T24) months of therapy. Fifteen HIV-uninfected individuals were enrolled as controls. We analyzed: (i) T-cell homeostasis (flow cytometry); (ii) microbial translocation (sCD14, EndoCab, 16S rDNA); (iii) intestinal permeability and damage markers (LAC/MAN, I-FABP, fecal calprotectin); (iv) plasma and fecal microbiota composition (alpha- and beta-diversity, relative abundance); (v) functional metagenome predictions (PICRUSt). Results: Twelve- and twenty-four-month successful cART resulted in a rise in EndoCAb (p = 0.0001) and I-FABP (p = 0.039) vis-à-vis stable 16S rDNA, sCD14, calprotectin and LAC/MAN, along with reduced immune activation in the periphery. Furthermore, cART did not lead to substantial modifications of microbial composition in either plasma or feces, or of metabolic metagenome predictions. The stratification according to cART regimens revealed a feeble effect on microbiota composition in patients on NNRTI-based or INSTI-based regimens, but not PI-based regimens. Conclusions: We hereby show that 24 months of viro-immunologically effective cART, while containing peripheral hyperactivation, exerts only minor effects on the gastrointestinal tract. Persistent alteration of plasma markers indicative of gut structural and functional impairment seemingly parallels enduring fecal dysbiosis, irrespective of drug classes, with no effect on metabolic metagenome predictions. abstract_id: PUBMED:28239376 A Mature NK Profile at the Time of HIV Primary Infection Is Associated with an Early Response to cART. Natural killer (NK) cells are major effectors of the innate immune response. Despite an overall defect in their function associated with chronic human immunodeficiency virus (HIV) infection, their role in primary HIV infection is poorly understood.
We investigated the modifications of the NK cell compartment in patients from the ANRS-147-Optiprim trial, a study designed to examine the benefits of intensive combination antiretroviral therapy (cART) in patients with acute or early primary HIV infection. Multiparametric flow cytometry combined with bioinformatics analyses identified the NK phenotypes in blood samples from 30 primary HIV-infected patients collected at inclusion and after 3 months of cART. NK phenotypes were revealed by co-expression of CD56/CD16/NKG2A/NKG2C and CD57, five markers known to delineate stages of NK maturation. Three groups of patients were formed according to their distributions of the 12 NK cell phenotypes identified. Their virological and immunological characteristics were compared along with the early outcome of cART. At inclusion, HIV-infected individuals could be grouped into those with predominantly immature/early differentiated NK cells and those with predominantly mature NK cells. Several virological and immunological markers were improved in patients with mature NK profiles, including lower HIV viral loads, lower immune activation markers on NK and dendritic cells (DCs), lower levels of plasma IL-6 and IP-10, and a trend to normal DC counts. Whereas all patients showed a decrease of viremia higher than 3 log10 copies/ml after 3 months of treatment, patients with a mature NK profile at inclusion reached this threshold more rapidly than patients with an immature NK profile (70 vs. 38%). In conclusion, a better early response to cART is observed in patients whose NK profile is skewed to maturation at inclusion. Whether the mature NK cells contributed directly or indirectly to HIV control through a better immune environment under cART is unknown. The NK maturation status of primary infected patients should be considered as a relevant marker of an immune process contributing to the early outcome of cART that could help in the management of HIV-infected patients. abstract_id: PUBMED:17208897 Comparison of single and boosted protease inhibitor versus nonnucleoside reverse transcriptase inhibitor-containing cART regimens in antiretroviral-naïve patients starting cART after January 1, 2000. Background: Few published studies have considered both the short- and long-term virologic or immunologic response to combination antiretroviral therapy (cART) and the impact of different cART strategies. Purpose: To compare time to initial virologic (<500 copies/mL) or immunologic (>200/mm3 cell increase) response in antiretroviral-naïve patients starting either a single protease inhibitor (PI; n = 183), a ritonavir-boosted PI regimen (n = 197), or a nonnucleoside reverse transcriptase inhibitor (NNRTI)-based cART regimen (n = 447) after January 1, 2000, and the odds of lack of virologic or immunologic response at 3 years after starting cART. Method: Cox proportional hazards models and logistic regression. Results: After adjustment, compared to patients taking an NNRTI regimen, patients taking a single-PI regimen were significantly less likely to achieve a viral load (VL) <500 copies/mL (relative hazard [RH] 0.74, 95% CI 0.54-0.84, p = .0005); there was no difference between the boosted-PI regimen and the NNRTI regimen (p = .72). There were no differences between regimens in the risk of >200/mm3 CD4 cell increase after starting cART (p > .3).
At 3 years after starting cART, patients taking a single-PI-based regimen were more likely not to have virologic suppression (<500 copies/mL; odds ratio [OR] 1.60, 95% CI 1.06-2.40, p = .024), while there were no differences in the odds of having an immunologic response (>200/mm3 increase; p > .15). This model was adjusted for CD4 and VL at starting cART, age, prior AIDS diagnosis, year of starting cART, and region of Europe. Conclusion: Compared to patients starting an NNRTI-based regimen, patients starting a single-PI regimen were less likely to be virologically suppressed at 3 years after starting cART. These results should be interpreted with caution, because of the potential biases associated with observational studies. Ultimately, clinical outcomes, such as new AIDS diagnoses or deaths, will be the measure of efficacy of cART regimens, which requires the follow-up of a very large number of patients over many years. Answer: Based on the findings from the ANRS PRIMO cohort study, transient cART started during primary HIV infection (PHI) does have a modest negative impact on the long-term immunologic response upon cART resumption. The study compared 136 patients who resumed cART after an interruption with 377 patients who started cART during the chronic phase of HIV infection (CHI). Although most patients in both groups achieved HIV-RNA <50 cp/mL after similar times, the CD4 cell count dynamics differed. During the first 12 months, the CD4 cell count rose similarly in both groups. However, after 12 months, patients who initiated cART during CHI had a better immunological response than those who resumed cART. Consequently, at 36 months, the gains in √CD4 cells/mm^3 and CD4% were significantly greater in patients who started treatment during CHI (PUBMED:25888386). This suggests that while transient cART during PHI does not prevent virologic suppression upon resumption, it may lead to a less robust recovery of CD4+ T cells in the long term.
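A piece of arithmetic that recurs throughout the abstracts above is converting a fitted log-odds coefficient into the reported odds ratio with a Wald 95% confidence interval: OR = exp(beta), CI = exp(beta +/- 1.96 x SE). The sketch below shows that conversion; the coefficient and standard error are invented, chosen only so the output lands near the single-PI odds ratio reported in PUBMED:17208897, purely for illustration.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a log-odds coefficient and its SE."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical logistic-regression output: coefficient 0.47, SE 0.21
or_, lo, hi = odds_ratio_ci(0.47, 0.21)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 1.60 (95% CI 1.06-2.41)
```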
Instruction: Liver echogenicity: measurement or visual grading? Abstracts: abstract_id: PUBMED:28932621 The SCHEIE Visual Field Grading System. Objective: No method of grading visual field (VF) defects has been widely accepted throughout the glaucoma community. The SCHEIE (Systematic Classification of Humphrey visual fields-Easy Interpretation and Evaluation) grading system for glaucomatous visual fields was created to convey qualitative and quantitative information regarding visual field defects in an objective, reproducible, and easily applicable manner for research purposes. Methods: The SCHEIE grading system is composed of a qualitative and quantitative score. The qualitative score consists of designation in one or more of the following categories: normal, central scotoma, paracentral scotoma, paracentral crescent, temporal quadrant, nasal quadrant, peripheral arcuate defect, expansive arcuate, or altitudinal defect. The quantitative component incorporates the Humphrey visual field index (VFI), location of visual defects for superior and inferior hemifields, and blind spot involvement. Accuracy and speed at grading using the qualitative and quantitative components were calculated for non-physician graders. Results: Graders had a median accuracy of 96.67% for their qualitative scores and a median accuracy of 98.75% for their quantitative scores. Graders took a mean of 56 seconds per visual field to assign a qualitative score and 20 seconds per visual field to assign a quantitative score. Conclusion: The SCHEIE grading system is a reproducible tool that combines qualitative and quantitative measurements to grade glaucomatous visual field defects. The system aims to standardize clinical staging and to make specific visual field defects more easily identifiable. Specific patterns of visual field loss may also be associated with genetic variants in future genetic analysis. abstract_id: PUBMED:15249074 Liver echogenicity: measurement or visual grading? Objective: Two methods to assess liver echogenicity were compared. Methods: Liver/kidney echogenicity ratio was measured in 41 persons with the ultrasound software and visually graded by two radiologists and a radiographer. These echogenicity ratios and grades were related to risk factors for fatty liver and to liver enzyme levels. Results: These determinants explained 55% of the radiologists' mean grades, 14% of the radiographer's and 31% of the measured echogenicity ratios. Conclusion: Radiologists' visual gradings correlated best with the indirect determinants of early liver pathology. Computerized measurements may be inferior to visual grading due to the lack of holistic tissue diagnostics. abstract_id: PUBMED:27342594 Tumor grading of the hepatobiliary system. Tumors of the liver, intrahepatic and extrahepatic bile ducts as well as the gallbladder are very heterogeneous and show different biological behavior. The 4‑stage (i.e. well, moderately, poorly and undifferentiated) grading system for hepatocellular carcinoma proposed by the WHO takes tumor size and architecture as well as the extent of cell and nuclear pleomorphism into account. In addition, the WHO defines some special forms of hepatocellular carcinoma. For carcinomas of intrahepatic bile ducts the WHO provides a 3‑stage (well, moderately and poorly differentiated) grading system, which is based on architectural and cytological changes. At this localization there are also additional special histological forms that have to be dealt with outside the grading system described.
The WHO proposes a 3‑stage (well, moderately and poorly differentiated) grading system for carcinomas of the extrahepatic bile ducts and the gallbladder, which considers the proportion of glands contained within the adenocarcinoma. Similar to cancers of the liver and intrahepatic bile ducts there are also numerous special histological forms, which are explained in this article. abstract_id: PUBMED:31179397 Effects of intraocular lens glistenings on visual function: a prospective study and presentation of a new glistenings grading methodology. Objective: To investigate the effect of intraocular lens (IOL) glistenings on visual performance and evaluate a new glistenings grading methodology. Methods And Analysis: Thirty-four patients (34 eyes) were recruited. Corrected distance visual acuity (CDVA), mesopic gap acuity (MGA), functional contrast sensitivity (FCS) and forward light scatter were measured (Advanced Vision and Optometric Tests, City Occupational, London, UK). The IOL centre was imaged and glistenings density graded by three observers using the Miyata scale and a new system. Inter-rater reliability, association between the two grading scales, and correlations between glistenings grades and visual performance parameters were evaluated. Results: The intraclass correlation coefficient between graders for the new grading system was 0.769 (95% confidence interval [CI] 0.636 to 0.868). There was a significant association between the Miyata scale and the new grading system for all graders (rs=0.533-0.895, p≤0.001). There was no association between CDVA or MGA and glistenings grade (rs=-0.098, p=0.583 and rs=0.171, p=0.359, respectively). There was no association between FCS at mesopic light levels and glistenings grade (rs=-0.032, p=0.864), or the straylight parameter and glistenings grade (rs=0.021, p=0.916). No association was found between the integrated straylight parameter and glistenings grade (rs=0.078, p=0.701). Conclusion: The new glistenings grading scale was highly reproducible. In this cohort, glistenings in the same hydrophobic acrylic IOL after cataract surgery were not associated with changes in visual function, as assessed by a series of tests not previously used in glistenings research. abstract_id: PUBMED:17641368 Unenhanced CT for assessment of macrovesicular hepatic steatosis in living liver donors: comparison of visual grading with liver attenuation index. Purpose: To retrospectively compare the accuracy of visual grading and the liver attenuation index in the computed tomographic (CT) diagnosis of 30% or higher macrovesicular steatosis in living hepatic donors, by using histologic analysis as the reference standard. Materials And Methods: Institutional review board approval was obtained with waiver of informed consent. Of 703 consecutive hepatic donor candidates, 24 patients (22 men and two women; mean age ± standard deviation, 36.3 years ± 9.7) who had 30% or higher macrovesicular steatosis at histologic analysis and same-day CT with subsequent needle biopsy in the right hepatic lobe (at least two samples per patient) were evaluated. An age- and sex-matched control group of 24 subjects included those who had less than 30% macrovesicular steatosis but otherwise met the same criteria as the patient group. A diagnostically difficult setting was made by selecting those with the highest degree of macrovesicular steatosis when there were multiple control subjects matched for a particular subject in the patient group.
Two independent radiologists assessed steatosis of the right hepatic lobe by using two methods: a five-point visual grading system that used attenuation comparison between the liver and hepatic vessels and the liver attenuation index (CT(L-S)), defined as hepatic attenuation minus splenic attenuation and calculated with region of interest measurements of hepatic attenuation. Interobserver agreement was assessed. Accuracy in the diagnosis of 30% or higher macrovesicular steatosis was compared by using a multireader, multicase receiver operating characteristic (ROC) analysis. Results: For visual grading, kappa = 0.905 (95% CI: 0.834, 0.976). Intraclass correlation coefficient for CT(L-S) was 0.962 (95% CI: 0.893, 0.983). The areas under the ROC curve for visual grading and CT(L-S) were 0.927 (95% CI: 0.822, 1) and 0.929 (95% CI: 0.874, 0.983), respectively, indicating no statistically significant difference (P = .975). Conclusion: Both visual grading and CT(L-S) are highly reliable and similarly accurate in the diagnosis of 30% or higher macrovesicular steatosis in living hepatic donor candidates. abstract_id: PUBMED:27393141 Grading of prostate cancer. The current grading of prostate cancer is based on the classification system of the International Society of Urological Pathology (ISUP) following a consensus conference in Chicago in 2014. The foundations are based on the frequently modified grading system of Gleason. This article presents a brief description of the development leading to the current ISUP grading system. abstract_id: PUBMED:34126105 Development and prognostic relevance of a histologic grading and staging system for alcohol-related liver disease. Background & Aims: The SALVE Histopathology Group (SHG) developed and validated a grading and staging system for the clinical and full histological spectrum of alcohol-related liver disease (ALD) and evaluated its prognostic utility in a multinational cohort of 445 patients. Methods: SALVE grade was described by semiquantitative scores for steatosis, activity (hepatocellular injury and lobular neutrophils) and cholestasis. The histological diagnosis of steatohepatitis due to ALD (histological ASH, hASH) was based on the presence of hepatocellular ballooning and lobular neutrophils. Fibrosis staging was adapted from the Clinical Research Network staging system for non-alcoholic fatty liver disease and the Laennec staging system and reflects the pattern and extent of ALD fibrosis. There are 7 SALVE fibrosis stages (SFS) ranging from no fibrosis to severe cirrhosis. Results: Interobserver κ-value for each grading and staging parameter was >0.6. In the whole study cohort, long-term outcome was associated with activity grade and cholestasis, as well as cirrhosis with very broad septa (severe cirrhosis) (p <0.001 for all parameters). In decompensated ALD, adverse short-term outcome was associated with activity grade, hASH and cholestasis (p = 0.038, 0.012 and 0.001, respectively), whereas in compensated ALD, hASH and severe fibrosis/cirrhosis were associated with decompensation-free survival (p = 0.011 and 0.001, respectively). On multivariable analysis, severe cirrhosis emerged as an independent histological predictor of long-term survival in the whole study cohort. Severe cirrhosis and hASH were identified as independent predictors of short-term survival in decompensated ALD, and also as independent predictors of decompensation-free survival in compensated ALD.
Conclusion: The SALVE grading and staging system is a reproducible and prognostically relevant method for the histological assessment of disease activity and fibrosis in ALD. Lay Summary: Patients with alcohol-related liver disease (ALD) may undergo liver biopsy to assess disease severity. We developed a system to classify ALD under the microscope by grading ALD activity and staging the extent of liver scarring. We validated the prognostic performance of this system in 445 patients from 4 European centers. abstract_id: PUBMED:32976320 A New Visual Transient Elastography Technique for Grading Liver Fibrosis in Patients With Chronic Hepatitis B. Abstract: Liver fibrosis is evaluated to assess the prognosis and guide the treatment of chronic hepatitis B (CHB). To compare the efficiency of 2 transient elastography techniques for grading liver fibrosis in CHB: visual transient elastography (ViTE) with real-time image guidance and FibroScan (FS) with no image guidance. All of the CHB patients in this study underwent both FS and ViTE examinations. The final diagnosis was based on the histological findings of a liver biopsy. According to the severity of liver fibrosis (based on the Scheuer criteria), the area under the receiver operating characteristic curve values for diagnostic efficiency were calculated for the 2 elastography techniques. This study enrolled 227 patients (79 [39.1%] women; mean age, 45.8 ± 16.8 years). The ViTE and FS liver elasticity measurements were highly correlated with liver fibrosis stage (r = 0.852 and r = 0.813, respectively). The area under the receiver operating characteristic curve value was larger for ViTE compared with FS, with respect to differentiating liver fibrosis stage, but not significantly (P > 0.05). The ViTE and FS can be used to detect and stage liver fibrosis. ViTE, easier and quicker to perform with superior interoperator reproducibility, is a stable and reliable elastography technique that benefits from real-time visual guidance. abstract_id: PUBMED:27356985 Grading of lung cancer. In comparison with other tumor entities there is no common generally accepted grading system for lung cancer with clearly defined criteria and clinical relevance. In the recent fourth edition of the World Health Organization (WHO) classification from 2015 of tumors of the lungs, pleura, thymus and heart, there is no generally applicable grading for pulmonary adenocarcinomas, squamous cell carcinomas or rarer forms of carcinoma. Since the new IASLC/ATS/ERS classification of adenocarcinomas published in 2011, 5 different subtypes with significantly different prognosis are proposed. This results in an architectural (histologic) grading, which is usually applied to resection specimens. For squamous cell carcinoma the number of different histological subtypes in the new WHO classification was reduced compared to earlier versions but without a common grading system. In recent publications nesting and budding were proposed as the main (histologic) criteria for a grading of squamous cell carcinomas. The grading of neuroendocrine tumors (NET) of the lungs in comparison with NET in other organs is presented in a separate article in this issue. Certain rare tumor types are high grade by definition: small cell, large cell and pleomorphic carcinomas, carcinosarcomas and pulmonary blastomas. In the future it is to be expected that these developments will be further refined, e.g.
by adding further subtypes for adenocarcinomas and cytologic and/or nuclear criteria for adenocarcinoma and/or squamous cell carcinomas. abstract_id: PUBMED:34232824 Tarsier Anterior Chamber Cell Grading: Improving the SUN Grading Scheme with a Visual Analog Scale. Purpose: To compare an analog visual scale in grading anterior chamber cells (ACC) to a modified Standardization of Uveitis Nomenclature (SUN) ACC scale. Method: A graphical representation of anterior chamber cells as a reference and a test set was created and shown to two groups of experienced uveitis experts. Group 1 was given the analog scale in written format, while Group 2 was given the reference images for comparison. Each test subject was asked to provide the best approximation for each grade. Results: Eleven graders participated in phase 1. Correct grading occurred in 87.4% of cases. Discrepancies were seen at all grades. Only 3 of 11 graders were able to achieve a perfect score. Seven graders participated in phase 2. Agreement was 95.2% with 4/7 graders achieving a perfect score. Discrepancies were seen at higher grades only. Conclusions: ACC grading is improved by a visual grading scale, and interobserver variability is reduced. Answer: Liver echogenicity can be assessed through two primary methods: measurement and visual grading. The study described in abstract PUBMED:15249074 compared these two methods by examining the liver/kidney echogenicity ratio measured with ultrasound software and visually graded by two radiologists and a radiographer. The echogenicity ratios and grades were then related to risk factors for fatty liver and to liver enzyme levels. The results indicated that the radiologists' visual gradings correlated best with the indirect determinants of early liver pathology: those determinants explained more of the variance in the radiologists' mean grades (55%) than in the measured echogenicity ratios (31%). This suggests that visual grading may be superior to computerized measurement, plausibly because of the holistic tissue assessment it provides, and that it captures the nuances of early liver tissue changes more effectively.
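Two abstracts in this record reduce images to simple indices: PUBMED:15249074 uses a liver/kidney echogenicity ratio from ultrasound region-of-interest (ROI) measurements, and PUBMED:17641368 uses the liver attenuation index CT(L-S), defined as mean hepatic attenuation minus mean splenic attenuation. A minimal Python sketch of that arithmetic follows; the ROI values and the screening cutoff are placeholders, since neither abstract reports a specific threshold.

```python
def echogenicity_ratio(liver_roi_mean, kidney_roi_mean):
    """Liver/kidney echogenicity ratio from mean ROI brightness values."""
    return liver_roi_mean / kidney_roi_mean

def liver_attenuation_index(liver_hu, spleen_hu):
    """CT(L-S): mean hepatic attenuation minus mean splenic attenuation, in HU."""
    return liver_hu - spleen_hu

# Hypothetical region-of-interest measurements
print(f"L/K echogenicity ratio: {echogenicity_ratio(112.0, 74.0):.2f}")

ct_ls = liver_attenuation_index(38.0, 52.0)
# Illustrative threshold only; neither abstract reports a specific cutoff
flag = "flag for steatosis work-up" if ct_ls < -10 else "no flag"
print(f"CT(L-S): {ct_ls:.0f} HU -> {flag}")
```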
Instruction: Are ICD-10 codes appropriate for performance assessment in asthma and COPD in general practice? Abstracts: abstract_id: PUBMED:15683548 Are ICD-10 codes appropriate for performance assessment in asthma and COPD in general practice? Results of a cross-sectional observational study. Background: The increasing prevalence and impact of obstructive lung diseases and new insights, reflected in clinical guidelines, have led to concerns about the diagnosis and therapy of asthma and COPD in primary care. In Germany diagnoses written in medical records are used for reimbursement, which may influence physicians' documentation behaviour. For that reason it is unclear to what extent ICD-10 codes reflect the real problems of the patients in general practice. The aim of this study was to assess the appropriateness of the recorded diagnoses and to determine what diagnostic information is used to guide medical treatment. Methods: All patients with lower airway symptoms (n = 857) who had attended six general practices between January and June 2003 were included in this cross-sectional observational study. Patients were selected from the computerised medical record systems, focusing on ICD-10-codes concerning lower airway diseases (J20-J22, J40-J47, J98 and R05). The performed diagnostic procedures and actual medication for each identified patient were extracted manually. Then we examined the associations between recorded diagnoses, diagnostic procedures and prescribed treatment for asthma and COPD in general practice. Results: Spirometry was used in 30% of the patients with a recorded diagnosis of asthma and in 58% of the patients with a recorded diagnosis of COPD. Logistic regression analysis showed an improved use of spirometry when inhaled corticosteroids were prescribed for asthma (OR = 5.2; CI 2.9-9.2) or COPD (OR = 4.7; CI 2.0-10.6). Spirometry was also used more often when sympathomimetics were prescribed (asthma: OR = 2.3; CI 1.2-4.2; COPD: OR = 4.1; CI 1.8-9.4). Conclusions: This study revealed that spirometry was used more often when corticosteroids or sympathomimetics were prescribed. The findings suggest that treatment was based on diagnostic test results rather than on recorded diagnoses. The documented ICD-10 codes may not always reflect the real status of the patients. Thus medical care for asthma and COPD in general practice may be better than initially found on the basis of recorded diagnoses, although further improvement of practice patterns in asthma and COPD is still necessary. abstract_id: PUBMED:26070795 Presentation of respiratory symptoms prior to diagnosis in general practice: a case-control study examining free text and morbidity codes. Objective: General practitioners can record patients' presenting symptoms by using a code or free text. We compared breathlessness and wheeze symptom codes and free text recorded prior to diagnosis of ischaemic heart disease (IHD), chronic obstructive pulmonary disease (COPD) and asthma. Design: A case-control study. Setting: 11 general practices in North Staffordshire, UK, contributing to the Consultations in Primary Care Archive consultation database. Participants: Cases with an incident diagnosis of IHD, COPD or asthma in 2010 were matched to controls (four per case) with no such diagnosis. All prior consultations with codes for breathlessness or wheeze symptoms between 2004 and 2010 were identified. Free text of cases and controls was also searched for mention of these symptoms.
Results: 592 cases were identified, 194 (33%) with IHD, 182 (31%) with COPD and 216 (37%) with asthma. 148 (25%) cases and 125 (5%) controls had a prior coded consultation for breathlessness. Prevalence of a prior coded symptom of breathlessness or wheeze was 30% in cases, 6% in controls. Median time from first coded symptom to diagnosis among cases was 57 weeks. After adding symptoms recorded in text, prevalence rose to 62% in cases and 25% in controls. Median time from first recorded symptom increased to 144 weeks. The associations between diagnosis of cases and prior symptom codes were strong: IHD relative risk ratio (RRR) 3.21 (2.15 to 4.79); COPD RRR 9.56 (6.74 to 13.60); asthma RRR 10.30 (7.17 to 14.90). Conclusions: There is an association between IHD, COPD and asthma diagnosis and earlier consultation for respiratory symptoms. Symptoms are often noted in free text by GPs long before they are coded. Free text searching may aid investigation of early presentation of long-term conditions using GP databases, and may be an important direction for future research. abstract_id: PUBMED:28609599 Assessment and management of asthma and chronic obstructive pulmonary disease in Australian general practice. Background: Dispensing data suggest potential issues with the quality use of medicines for airways disease. Objective: The objective of this article was to describe the management of asthma and chronic obstructive pulmonary disease (COPD) in general practice, and investigate the appropriateness of prescribing. Methods: The method used for this study consisted of a national cross‑sectional survey of 91 Australian general practitioners (GPs) participating in the Bettering the Evaluation and Care of Health (BEACH) program. Results: Data were available for 2589 patients (288 asthma; 135 COPD). For the patients with asthma, GPs classified asthma as well controlled in 76.4%; 54.3% were prescribed inhaled corticosteroids (ICS), mostly (84.9%) as combination therapy, and mostly at moderate-high dose; only 26.3% had a written action plan. GPs classified COPD as mild for 42.9%. Most patients with COPD (60.9%) were prescribed combination ICS therapy and 36.7% were prescribed triple therapy. Discussion: There were substantial differences between guideline-based and GP-recorded assessment and prescription for asthma and COPD. Further research is needed to improve care and optimise patient outcomes with scarce health resources. abstract_id: PUBMED:34476724 Diagnostic coding of chronic physical conditions in Irish general practice. Background: Chronic conditions are responsible for significant mortality and morbidity among the population in Ireland. It is estimated that almost one million people are affected by one of the four main categories of chronic disease (cardiovascular disease, chronic obstructive pulmonary disease, asthma, and diabetes). Primary healthcare is an essential cornerstone for individuals, families, and the community and, as such, should play a central role in all aspects of chronic disease management. Aim: The aim of the project was to examine the extent of chronic disease coding of four chronic physical conditions in the general practice setting. Methods: The design was a descriptive cross-sectional study with anonymous retrospective data extracted from practices. Results: Overall, 8.8% of the adult population in the six participating practices were coded with at least one chronic condition.
Only 0.7% of adult patients were coded with asthma, 0.3% with COPD, 3% with diabetes, and 3.3% with CVD. Male patients who visited their GP in the last year were more likely to be coded with any of the four chronic diseases in comparison with female patients. A significant relationship between gender and being coded with diabetes and CVD was found. Conclusions: For a likely multitude of reasons, diagnostic coding in Irish general practice clinics in this study is low and insufficient for an accurate estimation of chronic disease prevalence. Monitoring of information provided through diagnostic coding is important for patients' care and safety, and therefore appropriate training and reimbursement for these services is essential. abstract_id: PUBMED:20797841 The Physicians' Practice Assessment Questionnaire on asthma and COPD. We describe a new tool, the Physicians' Practice Assessment Questionnaire (PPAQ), designed for the global self-assessment of implementation of asthma and COPD guidelines, as determined by the percentage of patients in whom physicians estimate that they implement the guidelines' key recommendations. Some of its properties were assessed by a group of 47 general practitioners (GPs), and test-retest data were obtained by repeating the questionnaire at a 5-week interval without intervention in a sub-group of 28 practitioners. Answers to the various questions were globally reproducible. The lowest scores (recommendations implemented in less than 50% of their patients) were: 1) for both asthma and COPD: referral for patient education, provision of a written action plan and regular assessment of inhaler technique, 2) for asthma: referral to a specialist for difficult-to-control asthma or uncertain diagnosis, and 3) for COPD: assessment of lung function and disability according to specific criteria and referral to a rehabilitation program. The analysis showed sufficient internal consistency for both questionnaires (Cronbach alphas 0.7617 for asthma and 0.8317 for COPD). Pearson's correlations indicated good test-retest (r = 0.6421, p = 0.0002 for asthma; r = 0.6801, p < 0.0001 for COPD). In conclusion, the PPAQ is a new tool to assess implementation of asthma and COPD guidelines; it has the potential to identify care gaps that can be specifically targeted for intervention. abstract_id: PUBMED:15314736 Guidelines for the sociomedical assessment of performance in patients suffering from chronic obstructive lung diseases (COPD) and bronchial asthma. II: Sociomedical assessment of performance, Annex Part B, bibliography see Part I, Gesundheitswesen 2004, 66: 263f. The following guidelines were developed for the medical assessment services of the German Federal Insurance Institute for Salaried Employees (BfA). Starting from day-to-day practice, criteria and attributes to guide decisions for a systemisation of the sociomedical assessment of performance in chronic obstructive pulmonary diseases (COPD) and bronchial asthma were compiled. The guidelines aim at standardising the sociomedical assessment of performance and help to make the decision-making process more transparent, e.g. for the assessment of applications for decreased earning capacity benefits. Part II outlines assessment of the individual's capacity, taking occupational factors into account. Following the determination of dysfunctions the remaining abilities and disabilities, respectively, are deduced and compared with occupational demands. Finally, inferences are drawn regarding the occupational capacity of the individual.
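The PPAQ abstract above summarizes internal consistency as Cronbach's alpha. For reference, alpha for k items is k/(k-1) * (1 - sum(item variances)/variance(total scores)). The self-contained Python sketch below applies that formula to invented questionnaire responses; the values are purely illustrative and unrelated to the PPAQ data.

```python
def cronbach_alpha(items):
    """items: list of per-item response lists (same respondents in each).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical 4-item questionnaire answered by 6 practitioners (0-10 scale)
responses = [
    [7, 8, 6, 9, 5, 7],
    [6, 8, 5, 9, 4, 6],
    [7, 9, 6, 8, 5, 6],
    [5, 7, 6, 8, 4, 6],
]
print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")
```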
abstract_id: PUBMED:7972971 Management of chronic airflow obstruction: differences in practice between respiratory and general physicians. An audit of inpatient care of diseases characterized by chronic airflow obstruction, namely chronic bronchitis, emphysema and chronic obstructive airways disease (ICD Code Nos. 490-2 & 496), was performed and the practice of respiratory and general physicians compared. One hundred cases were sampled at random from 279 cases admitted to hospitals serving the West of Glasgow in 1988. Fifty cases were selected from those admitted under the care of respiratory physicians and 50 from those under general physicians; 89 were suitable for analysis. The main outcome measurements consisted of the use of routine respiratory investigations, comparison of the use of standard therapies during the admission and at discharge, length of stay, inpatient deaths, follow-up and readmission rates. The groups were similar in age, smoking history, gender and there was no significant difference in admission arterial blood gas values. The pulse rate on admission was higher in the general group (102 beats per min) in comparison to the respiratory group (91 beats per min) (P < 0.004). A similar use of chest radiograph and arterial blood gas analysis was noted between the groups. Ninety-six per cent of respiratory patients had either spirometry or peak expiratory flow measured compared to 62% in the general group (P = 0.0001). No significant differences were noted in the use of antibiotics, bronchodilators, corticosteroids, oxygen or respiratory stimulants. The mean length of stay was similar. Two patients (4%) in the respiratory group compared with seven (18%) in the general group died during the admission (P = 0.01); there were no further early deaths at 1 month from discharge. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:31431060 COPD patients prescribed inhaled corticosteroid in general practice: Based on disease characteristics according to guidelines? In a primary care setting, our aim was to investigate characteristics of patients classified as having chronic obstructive pulmonary disease (COPD) and currently being prescribed inhaled corticosteroids (ICSs). The electronic patient record system in each participating general practice was searched for patients coded as COPD (ICPC, Second Edition code R95) and treated with ICS (ACT code R03AK and R03BA, that is, ICS in combination with a long-acting β2-agonist) or ICS as monotherapy. Data, if available, on demographics, smoking habits, spirometry, COPD medication, symptom score, blood eosinophils, co-morbidity and exacerbation history were retrieved from the medical records for all identified cases. Of all patients registered in the 138 participating general practices, 12,560 (3%) were coded as COPD, of whom 32% were prescribed ICS. The final study sample comprised 2,289 COPD patients currently prescribed ICS (98% also prescribed long-acting β2-agonist), with 24% being coded as both COPD and asthma. Post-bronchodilator spirometry was available in 79% (mean forced expiratory volume in 1 second 60% pred (standard deviation (SD) 23.3)), symptom severity score in 53% (mean Medical Research Council score 2.7 (SD 1.1)) and 56% of the COPD patients had had no exacerbation in the previous year (and 45% not within the 2 previous years). Blood eosinophils were measured in 67% of the patients.
Information on severity of airflow limitation was missing in 15% of the patients, and the combined information on symptom severity and exacerbation history was missing in 46%. Most of the patients (74%) were managed only by their general practitioner. Although only one-third of the COPD patients were prescribed ICSs, our findings from this study of a large cohort of patients prescribed ICSs for COPD in general practice suggest that more detailed assessment of diagnosis and disease characteristics is likely to improve the risk-benefit ratio of maintenance therapy with ICSs in COPD patients managed in primary care. abstract_id: PUBMED:9115793 Spirometry in general practice: the performance of practice assistants scored by lung function technicians. Recently, spirometers have become available for use within general practice as an alternative to peak flow meters. This study investigates whether practice assistants, after comprehensive training, can effectively carry out spirometry with patients suffering from asthma and other chronic obstructive pulmonary diseases. A scoring system, consisting of 20 items, was devised to determine the effectiveness of the assistants' instructions and to assess the patient's subsequent use of the spirometer. About half the instructions and half the patient performance items were considered to be carried out satisfactorily. abstract_id: PUBMED:9282425 Peak flow meters and spirometers in general practice. Since peak flow meters were first made available on FP10 prescription in 1990, they have become widely used in general practice for the diagnosis and monitoring of asthma. Patients can also use them at home as part of a self-management plan. Now there is increasing interest in the use of spirometers in general practice for diagnosing chronic obstructive pulmonary disease and monitoring patients' progress. We consider how peak flow monitoring and spirometry can be used to greatest advantage in general practice. Answer: The appropriateness of ICD-10 codes for performance assessment in asthma and COPD in general practice has been questioned in several studies. One study found that spirometry was used more often when corticosteroids or sympathomimetics were prescribed, suggesting that treatment was based on diagnostic test results rather than on recorded diagnoses. The documented ICD-10 codes may not always reflect the real status of the patients, indicating that medical care for asthma and COPD in general practice may be better than initially found based on recorded diagnoses, although further improvement of practice patterns in asthma and COPD is still necessary (PUBMED:15683548). Another study highlighted that symptoms of respiratory conditions are often noted in free text by GPs long before they are coded, which suggests that ICD-10 codes alone may not capture the early presentation of long-term conditions like asthma and COPD. This study also found a strong association between the diagnosis of IHD, COPD, and asthma and earlier consultation for respiratory symptoms, indicating that ICD-10 codes may not fully represent the patient's history and symptomatology (PUBMED:26070795). Furthermore, a study on the management of asthma and COPD in Australian general practice revealed substantial differences between guideline-based and GP-recorded assessment and prescription for these conditions.
This suggests that ICD-10 codes used in general practice may not align with guideline-based management, potentially impacting the appropriateness of these codes for performance assessment (PUBMED:28609599). In Irish general practice, diagnostic coding of chronic physical conditions, including COPD and asthma, was found to be low and insufficient for an accurate estimation of chronic disease prevalence. This indicates that reliance on ICD-10 codes for performance assessment may not provide a complete picture of chronic disease management in general practice (PUBMED:34476724). Overall, while ICD-10 codes are a standardized way to record diagnoses, these studies suggest that they may not be entirely appropriate for performance assessment in asthma and COPD in general practice due to underutilization of diagnostic tests, discrepancies between recorded codes and actual patient status, and differences from guideline-based management.
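PUBMED:26070795 above found that breathlessness and wheeze were often recorded only in GP free text long before a morbidity code appeared, and suggests free-text searching as a research tool. Below is a minimal Python sketch of such a search; the consultation notes and the regular expression (including the "SOB" shorthand) are invented for illustration and are not drawn from the study's database.

```python
import re

# Case-insensitive pattern for breathlessness/wheeze mentions,
# including the common shorthand "SOB" (short of breath).
SYMPTOM_RE = re.compile(
    r"\b(breathless\w*|wheez\w*|short(ness)? of breath|sob)\b",
    re.IGNORECASE,
)

# Hypothetical free-text consultation notes
notes = [
    "Pt c/o SOB on exertion, no chest pain.",
    "Routine BP check, no new symptoms.",
    "Audible wheeze on examination; trial of salbutamol.",
]

for i, note in enumerate(notes, 1):
    match = SYMPTOM_RE.search(note)
    result = f"symptom mention: {match.group(0)}" if match else "no mention"
    print(f"note {i}: {result}")
```

In practice such a pattern would need validation against clinician-annotated notes, since free text is rife with negations ("no wheeze") and abbreviations that a naive regex misclassifies.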
Instruction: Do retail clinics increase early return visits for pediatric patients? Abstracts: abstract_id: PUBMED:18772304 Do retail clinics increase early return visits for pediatric patients? Objective: The purpose of this study was to assess the risk of early return visits for pediatric patients using a retail clinic. Methods: We used medical records of pediatric patients seen in a large group practice in Minnesota in the first 2 months of 2008. A retrospective analysis of electronic patient records was performed on 2 groups of patients: those using the retail clinic (n = 200) and a comparison group using a same-day acute family medicine clinic in a medical office (n = 200). Two measures of early return visits were used as dependent variables: office visits within 2 weeks for any reason and office visits within 2 weeks for the same reason. Multiple logistic regression analysis was used to adjust for case mix differences between groups. Trained medical records abstractors reviewed electronic medical records to obtain the data. Results: After adjustment for baseline differences in age, acuity, and number of office visits in the previous 6 months, no significant differences in risk of early return visits were found among clinic types. Conclusions: Retail clinic visits were not associated with early return visits. abstract_id: PUBMED:22207018 Early return visits by pediatric primary care patients with otitis media: a retail nurse practitioner clinic versus standard medical office care. Purpose: To compare outpatient return visits within 2 weeks experienced by pediatric patients diagnosed with otitis media using retail nurse practitioner clinics to similar patients using standard medical office clinics. Background: The impact of retail clinics on return visit rates has not been extensively studied. Data Source: Electronic medical records of pediatric primary care patients seen in a large group practice in Minnesota in 2009 for otitis media. Sample: Patients seen in retail walk-in clinics staffed by nurse practitioners (N = 627) or regular office clinics (N = 2353). Outcome Measure: A return visit to any site within 2 weeks. Results: The percentage returning was higher in standard care patients than in retail medicine patients (21.0 vs 11.2, P < .001). The odds of a return visit within 2 weeks were higher in standard care patients than in retail medicine patients after adjusting for propensity to use services, age, and gender (odds ratio = 1.54, P < 0.01). Conclusion: In this group practice, the odds of return visits within 2 weeks for pediatric patients treated for otitis media were lower in retail medicine clinics than in standard office clinics. abstract_id: PUBMED:22409532 Early return visits by primary care patients: a retail nurse practitioner clinic versus standard medical office care. The purpose of this study was to compare return visits made by patients within 2 weeks after using retail nurse practitioner clinics to return visits made by similar patients after using standard medical office clinics. Retail medicine clinics have become widely available. However, their impact on return visit rates compared to standard medical office visits for similar patients has not been extensively studied. Electronic medical records of adult primary care patients seen in a large group practice in Minnesota in 2009 were analyzed for this study. Patients who were treated for sinusitis were selected.
Two groups of patients were studied: those who used one of 2 retail walk-in clinics staffed by nurse practitioners and a comparison group who used one of 4 regular office clinics. The dependent variable was a return office visit to any site within 2 weeks. Multiple logistic regression analysis was used to adjust for case-mix differences between groups. Unadjusted odds of return visits were lower for retail clinic patients than for standard office care patients. After adjustment for case mix, patients with more outpatient visits in the previous 6 months had higher odds of return visits within 2 weeks (2-6 prior visits: odds ratio [OR]=1.99, P=0.00; 6 or more prior visits: OR=6.80, P=0.00). The odds of a return visit within 2 weeks were not different by clinic type after adjusting for propensity to use services (OR=1.17, P=0.28). After adjusting for case mix differences, return visit rates did not differ by clinic type. abstract_id: PUBMED:18780911 Retail clinics, primary care physicians, and emergency departments: a comparison of patients' visits. In this study we compared the demographics of and reasons for visits in national samples of visits to retail clinics, primary care physicians (PCPs), and emergency departments (EDs). We found that retail clinics appear to be serving a patient population that is underserved by PCPs. Ten clinical problems such as sinusitis and immunizations encompass more than 90 percent of retail clinic visits. These same ten clinical problems make up 13 percent of adult PCP visits, 30 percent of pediatric PCP visits, and 12 percent of ED visits. Whether there will be a future shift of care from EDs or PCPs to retail clinics is unknown. abstract_id: PUBMED:19148026 Impact of retail walk-in care on early return visits by adult primary care patients: evaluation via triangulation. Background: Retail medicine clinics have become widely available. However, few studies have been published reporting on the outcomes of care from these clinics. The purpose of this study was to assess the risk of early return visits for patients using a retail walk-in clinic. Design: Medical records of patients seen in a large group practice in Minnesota in the first 2 months of 2008 were analyzed for this study. Three groups of patients were studied: those using the retail walk-in clinic (n = 300), a comparison group using regular office care in the previous year (n = 373), and a same-day acute care clinic in a medical office (n = 204). The dependent variable was a return office visit within 2 weeks. Multiple logistic regression analysis was used to adjust for case-mix differences between groups. Results: The percentage of office visits within 2 weeks for these groups was 31.7 for retail walk-in patients, 38.9 for office visit patients, and 37.1 for same-day acute care clinic patients, respectively (P = .13). The corresponding percentages of return office visits within 2 weeks for the same reasons were 14.0, 24.4, and 20.6 (P < .01). After adjustment for age, sex, marital status, acuity, and number of office visits in the previous 6 months, no significant differences in risk of early return visits were found among clinic types. Conclusion: Our retail walk-in clinic appeared to increase access without increasing early return visits. abstract_id: PUBMED:32787928 The rates of hospital admissions and return visits to a rapidly growing pediatric emergency department as measures of quality of care.
Background: Return visits to the emergency department are viewed as a quality measure of patient management. Avoiding unnecessary admissions to the ward can potentially cause an increase in return visits, thus affecting quality assessment. Methods: After implementing an educational process, the relationship between admissions and return visits was assessed over time at a rapidly growing pediatric emergency department. Results: There was a 264% increase in visits from 2004 to 2017. In the study period admission rates declined from 25% to 14%. This was achieved without a rise in return visits and with a stable percentage of admissions from return visits. Conclusions: Interventions aimed at decreasing unnecessary admissions do not lead to increased return visits and return visit admissions. abstract_id: PUBMED:30790114 The cost of callbacks: return visits for diagnostic imaging discrepancies in a pediatric emergency department. Purpose: Diagnostic imaging has mirrored the steady growth of healthcare utilization in the USA. This has created greater opportunity for diagnostic errors, which can be costly in terms of morbidity and mortality as well as dollars and cents. The purposes of this study were to describe all return visits to a tertiary care urban pediatric emergency department (PED) resulting from diagnostic imaging discrepancies and to calculate the costs of these return visits. Methods: From July 2014 to February 2015, all children who underwent a diagnostic imaging study during an ED visit were assembled. Analysis was performed on all children who were called back and returned to the ED following a discrepant read. Direct and indirect costs to the patient, family, hospital, and society for these return visits were calculated. Results: During the study period, 8310 diagnostic imaging studies were performed, with 207 (2.5%) discrepant reads. Among the discrepant reads, 37 (0.4% of total, 17.9% of discrepant) patients had a return visit to the ED for further management. Including ED charges, time and travel costs to the family, and costs of radiation exposure, return visits for radiologic discrepancies over this 8-month period cost a total of $84,686.47, averaging $2288.82 per patient. Conclusions: Though the overall diagnostic imaging discrepancy rate among our study population was low, the clinically significant discrepancies requiring return ED visits were potentially high risk, and costly for the patient, family, and healthcare system. abstract_id: PUBMED:27139638 Comparison of Primary Care Provider Office Hours and Pediatric Emergency Department Return Visits. Objective: The aim of this study was to evaluate the influence of primary care office hours of operation on 48-hour return visits (RVs) to a pediatric emergency department (ED). We compared characteristics of patients who return with those who follow up as outpatients to determine the feasibility of opening off-hour clinics to decrease the RV rate. Methods: The study was a retrospective chart review of patients presenting to a pediatric ED for a 3-year period. A subset of patients with a hospital-affiliated primary care provider was evaluated to compare those with 48-hour ED RVs with those with office follow-up. Results: Patients with a hospital-affiliated primary care provider had 30,231 visits, of whom 842 had a 48-hour return (2.79%). A significant number (48.5%) of those who returned had seen their primary care doctor between emergency visits.
The percentage of RVs occurring at night (55.7%) was slightly lower than the percentage of all visits occurring off hours (58.1%). Patients with more acute presentation at initial visit (emergency severity index level acuity 2, >20 orders placed) were more likely to follow up with their provider than return to the ED. Conclusions: The findings from this study show no significant increase in RVs during the evening and overnight hours and many patients with outpatient follow-up before returning to the ED. Opening a clinic at our hospital during nontraditional hours would not likely significantly decrease RV rate. abstract_id: PUBMED:36963176 Early unplanned return visits to pediatric emergency departments in Israel during the SARS-CoV-2 pandemic. Introduction: During the SARS-CoV-2 pandemic there was a considerable drop in the number of visits to Pediatric Emergency Departments (PED). Unplanned return visits (URV) might represent inadequate emergency care. We assessed the impact of the pandemic on early URV to PEDs in Israel. Methods: This multicenter cross-sectional study analyzed the 72-h URV to PEDs among patients under the age of 18 years during a one-year pandemic period (March 1st, 2020, to February 28th, 2021), and compared them with the 72-h URV of the corresponding pre-pandemic period (March 1st, 2019, to February 28th, 2020). Data was extracted from Clalit Health Services (CHS), the largest public health care organization in Israel. Results: The pandemic and pre-pandemic early URV rates were 5465 (5.1%) and 8775 (5.6%), respectively (OR = 0.90, 95% CI 0.92-0.99). The rates of return-visit admissions to hospital wards during these periods were 29.5% and 32.1%, respectively (OR = 0.83, 95% CI 0.86-0.98). The rates of return-visit admissions to ICUs during these periods were 0.64% and 0.52%, respectively (OR = 1.11, 95% CI 0.67-1.62). On return-visit, 3 (0.055%) and 5 (0.057%) URV patients were declared dead on arrival during the pandemic and pre-pandemic periods, respectively (OR = 0.96, 95% CI 0.23-4.03). The distributions of the time interval from index visit to return visit remained consistent between the periods. Discussion: In our study, early URVs to PEDs were only mildly influenced by the SARS-CoV-2 pandemic. abstract_id: PUBMED:32029989 Pattern of Use and Determinants of Return Visits at Community or Mohalla Clinics of Delhi, India. Background: Mohalla or Community Clinics of Delhi, India, have made primary care accessible, equitable, and affordable for women, elderly, and children in the underserved areas. Objectives: To understand the population subgroups which use these clinics and to analyze why people use these facilities and the determinants of the return visits for health seeking. Materials And Methods: This was a community-based cross-sectional study, with primary data collection from 25 localities across Delhi. A pretested semi-structured interview schedule was used for data collection. Two regression models were used for data analysis: a linear probability model to understand the factors contributing to the use of these clinics and a probit regression model to understand the determinants of return visits to these facilities. Results: Four hundred ninety-three ever-married women residing in study settings were included. The age of beneficiaries, marital status, distance from the clinics, and awareness about the services were found to be positively associated with the use of Mohalla Clinics.
The proximity to households, waiting time at clinics, interaction time with the doctor, perceived performance of doctor, and effectiveness of treatment influenced the decision on a return visit for care seeking. Conclusions: Improved information on service provision, proximity to the facility, assured provision of doctors and laboratory services, and increased patient-doctor interaction time have the potential to increase the use and return visits to these Community or Mohalla Clinics. The lessons from this study can be used to design government primary health-care facilities in urban settings, for increased use by the target populations. Answer: Based on the provided abstracts, retail clinics do not appear to increase early return visits for pediatric patients. A study assessing the risk of early return visits for pediatric patients using a retail clinic found no significant differences in the risk of early return visits among clinic types after adjusting for baseline differences in age, acuity, and number of office visits in the previous 6 months (PUBMED:18772304). Another study comparing outpatient return visits within 2 weeks for pediatric patients diagnosed with otitis media found that the odds of return visits within 2 weeks were lower in retail medicine clinics than in standard office clinics (PUBMED:22207018). Additionally, a study comparing return visits made by patients within 2 weeks after using retail nurse practitioner clinics to return visits made by similar patients after using standard medical office clinics found that after adjusting for case mix differences, return visit rates did not differ by clinic type (PUBMED:22409532). Furthermore, a study evaluating the impact of retail walk-in care on early return visits by adult primary care patients found that after adjustment for age, sex, marital status, acuity, and number of office visits in the previous 6 months, no significant differences in risk of early return visits were found among clinic types (PUBMED:19148026). These findings suggest that retail clinics do not increase the likelihood of early return visits for pediatric patients and may provide a comparable level of care in terms of the need for early follow-up visits when compared to traditional medical office visits.
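As a rough arithmetic check on the odds ratio quoted in PUBMED:36963176 above (the visit denominators below are back-computed from the reported counts and percentages; they are not stated in the source): an odds ratio compares the odds of an event in two groups, OR = [a / (N1 - a)] / [b / (N2 - b)]. With a = 5465 early URVs at a 5.1% rate, N1 ≈ 5465 / 0.051 ≈ 107,000 pandemic visits, and with b = 8775 at 5.6%, N2 ≈ 8775 / 0.056 ≈ 157,000 pre-pandemic visits, giving OR ≈ (5465 / 101,500) / (8775 / 148,200) ≈ 0.91, consistent with the reported point estimate of 0.90. (The printed interval 0.92-0.99 does not bracket that estimate and is likely a transcription artifact.)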
Instruction: Retrograde holmium:YAG laser disintegration of stones in pelvic ectopic kidneys: would it minimize the risk of surgery? Abstracts: abstract_id: PUBMED:18370611 Retrograde holmium:YAG laser disintegration of stones in pelvic ectopic kidneys: would it minimize the risk of surgery? Purpose: To study the safety and efficacy of the holmium:YAG laser for disintegration of stones in pelvic ectopic kidneys. Patients And Methods: Between October 2005 and October 2006, four consecutive patients with large obstructing calculi (>3 cm in diameter) in the pelves of pelvic ectopic kidneys were prepared to be treated using retrograde ureterorenoscopy and holmium:YAG laser lithotripsy. All the patients were investigated with x-rays of the kidney, ureters, and bladder (KUB) and intravenous urography (IVU). Holmium:YAG laser lithotripsy was performed in a retrograde manner using energy ranging from 1 to 1.5 J/pulse with a frequency ranging from 15 to 20 Hz. Results: Four patients were included in the study. The average age of the patients was 44 years (range 35-56 years). The average operative time for the laser lithotripsy procedure was 120 minutes (range 100-180 minutes). Three of the patients (75%) were rendered stone-free at 3 months. None of the patients developed back-pressure changes, gross hematuria, or abdominal pain during the follow-up period. One of the patients could not be treated endoscopically and required open surgery. Hospital stay ranged between 2 and 3 days. Conclusion: Retrograde ureteroscopy and holmium:YAG laser lithotripsy is efficacious for managing patients with stones in pelvic kidneys. The procedure is safe and effective and avoids the complications of open surgery. abstract_id: PUBMED:37144307 The role of retrograde intrarenal surgery in kidney stones of upper urinary system anomalies. Introduction: Fusion, pelvic, and duplicated urinary tract anomalies of the kidney are rarely seen. There might be some difficulties in stone treatment in these patients, in the administration of extracorporeal shockwave lithotripsy (ESWL), retrograde intrarenal surgery (RIRS), percutaneous nephrolithotomy (PCNL), and laparoscopic pyelolithotomy procedures, due to the anatomical variations in kidneys with anomalies. Aim: To evaluate RIRS results in patients with upper urinary tract anomalies. Materials And Methods: Data of 35 patients with horseshoe kidney, pelvic ectopic kidney, and double urinary system in two referral centers were reviewed retrospectively. Demographic data, stone characteristics, and postoperative characteristics of the patients were evaluated. Results: The mean age of patients (n=35, 6 women and 29 men) was 50 years. Thirty-nine stones were detected. The total mean stone surface area in all anomaly groups was found to be 140 mm², and the mean operative time was 54.7±24.7 minutes. The rate of using a ureteral access sheath (UAS) was very low (5/35). Eight patients needed auxiliary treatment after the operation. The residual rate, which was 33.3% in the first 15 days, decreased to 22.6% at the third-month follow-up. Four patients had minor complications. In patients with horseshoe kidney and duplicated ureteral systems, it was observed that the risk factor increasing the presence of residual stones was the total stone volume. Conclusions: RIRS for anomalous kidneys with low and medium stone volumes is an effective treatment method with high stone-free and low complication rates.
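A quick unit check on the lithotripsy settings reported in PUBMED:18370611 (this arithmetic is illustrative and not stated in the source): average laser power is pulse energy times pulse frequency, P = E × f, so the quoted ranges of 1-1.5 J/pulse at 15-20 Hz correspond to average powers between 1.0 J × 15 Hz = 15 W and 1.5 J × 20 Hz = 30 W.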
abstract_id: PUBMED:36188640 Hybrid flexible ureteroscopy strategy in the management of renal stones - a narrative review. The introduction of single-use flexible ureteroscopes (suFURSs) in daily practice tends to overcome the main limitations of reusable ureteroscopes (reFURSs), in terms of high acquisition costs, maintenance, breakage and repair costs, and reprocessing and sterilization, as retrograde intrarenal surgery (RIRS) is promoted as first-line treatment of renal stones in most cases. A hybrid strategy implies having both instruments in the armamentarium of endourology and choosing the best strategy for cost-efficiency, protecting expensive reusable instruments in selected cases at high risk for breakage, such as large stones of the inferior calyx, a steep infundibulopelvic angle or narrow infundibulum, or abnormal anatomy as in horseshoe and ectopic kidneys. In terms of safety and efficiency, data present suFURSs as a safe alternative considering operating times, stone-free rates, and complication rates. An important aspect is highlighted by several authors regarding reusable instrument disinfection, as various pathogens are still detected after proper sterilization. This comprehensive narrative review aims to analyze available data comparing suFURSs and reFURSs, considering economic, technical, and operative aspects of the two types of instruments, as well as the strategy of adopting a hybrid approach to selecting the most appropriate flexible ureteroscope in each case. abstract_id: PUBMED:26834410 Laparoscopic-assisted mini percutaneous nephrolithotomy in the ectopic pelvic kidney: Outcomes with the laser dusting technique. Introduction: The treatment of renal lithiasis has undergone a sea change with the advent of extracorporeal shock wave lithotripsy (ESWL) and endourological procedures such as percutaneous nephrolithotomy (PCNL), ureterorenoscopy and retrograde intrarenal surgery (RIRS). The presence of anatomical anomalies, such as the ectopic pelvic kidney, imposes limitations on such therapeutic procedures. This study aimed to find a simple and effective way to treat stones in the ectopic kidney. Materials And Methods: From 2010 to 2014, nine patients underwent laparoscopic-assisted mini PCNL with laser dusting for calculi in ectopic pelvic kidneys at our hospital. Retrograde pyelography was done to locate the kidney. Laparoscopy was performed; after mobilizing the bowel and peritoneum, the kidney was punctured and, using a rigid mini nephroscope, the stones were dusted with the laser. Results: The median (interquartile range, IQR) stone size was 18 (6.5) mm. Median (IQR) duration of the procedure was 90 (40) min. The median (IQR) duration of postoperative hospital stay was 4 (2) days. The stone clearance in our series was 88.9%, with only one patient having a residual stone. No intra- or post-operative complications were encountered. Conclusion: Laparoscopy-assisted mini PCNL with laser dusting offers advantages in ectopic pelvic kidneys in achieving good stone clearance, especially in patients with a large stone burden or failed ESWL or RIRS. abstract_id: PUBMED:35510004 Delayed Bleeding After Retrograde Intrarenal Surgery: A Rare Complication in Ectopic Pelvic Kidney. Anatomical variations in the pelvic ectopic kidney (PEK) present many challenges to stone treatment. Retrograde intrarenal surgery (RIRS) has emerged as the treatment of choice for small to medium stones. We present a case of delayed hemorrhage due to an arteriocaliceal fistula.
A 57-year-old man with a 12 mm middle calyx stone underwent uneventful RIRS, despite a high grade of scope deflection. Recovery was unremarkable until 37 days after surgery, when the patient developed recurrent hematuria and clot retention. Renal angiography revealed a bleeding vessel from an arteriocaliceal fistula. Superselective arterial embolization was successfully performed. An anomalous collecting system and vasculature can increase the risk of complications in PEKs. Massive bleeding from an unusual arterial blood supply was effectively treated by angioembolization. abstract_id: PUBMED:32257806 Management of staghorn stones in special situations. Staghorn stones have always been a challenge for urologists, especially in some special situations, such as horseshoe kidney, ectopic kidney, paediatric kidney, and solitary kidney. The treatment of these staghorn stones must be aggressive because they can lead to renal function loss and serious complications. The gold-standard management for staghorn stones is surgical treatment with the aim of clearing the stones and preserving renal function. Treatment methods for staghorn stones have developed rapidly, such as extracorporeal shock wave lithotripsy, retrograde intrarenal surgery, percutaneous nephrolithotomy, laparoscopy, and open surgery. Whether the standard procedures for staghorn stones can also apply to these stones in special situations is still not agreed upon. The decision should be made individually according to the circumstances of the patient. In this review, we evaluate the previous studies and comment on the management of staghorn stones under special situations in the hope of guiding the optimal choice for urologists. abstract_id: PUBMED:34715241 Real-world Global Outcomes of Retrograde Intrarenal Surgery in Anomalous Kidneys: A High Volume International Multicenter Study. Objective: To analyze the trends and outcomes of retrograde intrarenal surgery for treatment of urolithiasis in anomalous kidneys in a large international multicenter series. Materials And Methods: We designed a multicentric retrospective study. Nineteen high-volume centers worldwide were included. Pre-, peri- and postoperative data were collected, and a subgroup analysis was performed according to renal anomaly. Results: We analyzed 414 procedures: 119 (28.7%) were horseshoe kidneys, 102 (24.6%) pelvic ectopic kidneys, 69 (16.7%) malrotated kidneys and 50 (12.1%) diverticular calculi. The average size (SD) of the stone was 13.9 (±6) millimeters and 193 (46.6%) patients had a pre-operative stent. In 249 cases (60.1%) a disposable scope was used. A UAS (ureteral access sheath) was used in 373 (90%) patients. A holmium laser was used in 391 (94.4%) patients. The average (SD) operating time was 65.3 (±24.2) minutes. Hematuria, caliceal perforation and difficulty in stone localisation were mostly seen with diverticular stones, while difficulty in UAS placement and lithotripsy was mostly seen in cases of renal malrotation. The overall complication rate was 12%. The global stone-free rate was 79.2%. Residual fragments (RF) were significantly fewer in the pre-stented group (P < .05). The diverticular calculi group had more RF and more often needed ancillary procedures (P < .05). Conclusion: Retrograde intrarenal surgery in patients with anomalous kidneys is safe and effective with a high single-stage stone-free rate and low complication rate. There is a trend toward using smaller and disposable scopes and smaller UAS.
Diverticular stones can still be challenging, with higher rates of intraoperative hematuria, caliceal perforation and RF. abstract_id: PUBMED:16479212 Management of stones in patients with anomalously sited kidneys. Purpose Of Review: Congenital abnormalities in urology are very common. Horseshoe, malrotated and ectopic kidneys, as well as duplex systems, are the most frequent in this respect. The combination of both abnormalities and stones is of clinical importance. The question is whether standard procedures for stones also apply to stones in abnormal kidneys. Recent Findings: In general, open surgery, extracorporeal shock-wave lithotripsy, percutaneous procedures, endoscopic procedures and laparoscopy are possible procedures in both normal and abnormal kidneys. The importance of ureteric pelvic junction obstruction has to be taken into account and a metabolic work-up remains important. Summary: The trend for treatment of stones in abnormal kidneys goes towards endoscopic and laparoscopic procedures, whereas a combination of both seems to be appropriate in many cases. abstract_id: PUBMED:32240110 Retrograde intrarenal surgery as a tool for lithiasis management in renal anomalies. Four cases description. Objective: The management of stone disease in renal abnormalities is a challenge for urologists due to its rarity. The aim of the current manuscript is to report our experience with Retrograde Intrarenal Surgery (RIRS) in 4 complex abnormal cases using flexible videoureterorenoscopy. Material and Methods: Data were prospectively collected and retrospectively analyzed regarding our first 100 RIRS for stone disease with a flexible videoureterorenoscope (FLEX-X, 8.4 Fr, STORZ®) between 2017 and 2018. Four patients presented with renal anomalies and stone disease: one horseshoe kidney, one polycystic kidney, one fused renal ectopia and one caliceal diverticulum. We analyzed demographic variables (age and gender), stone size, previous treatment received, clinical presentation, stone-free rate and complication rate using the Dindo-Clavien classification. Results: 4 (4%) cases of renal stone disease associated with renal anomalies were identified. All procedures were ambulatory. The mean age was 56 years (range 43 to 65); 3 patients were male and 1 female. The average stone size was 16.25 mm (range 6 to 23). All cases represented recurrent stone disease, initially treated with a primary treatment such as extracorporeal shock wave or percutaneous lithotripsy. The mean surgical time was 57 minutes (range 43 to 79) and the stone-free rate was 100%. As complications, one patient presented low back pain at 48 hours after surgery, which did not respond to oral analgesics and required intravenous treatment, although without admission (Clavien II). Conclusion: Retrograde intrarenal surgery for the management of renal stones in kidney anomalies is safe, feasible and effective. However, more cases and comparative studies with percutaneous and extracorporeal lithotripsy are needed to optimize treatment decision making. abstract_id: PUBMED:33102728 Efficacy of the Endoscopic Combined Intrarenal Surgery for the Treatment of a Staghorn Calculus in Crossed Fused Renal Ectopia. Background: Crossed fused renal ectopia (CFRE) is an unusual anomaly in which both kidneys lie fused on one side, with a double pelvis and ureters draining into both sides of the bladder. Complex renal stones are a considerable challenge to endourologists, and when a staghorn stone is associated with abnormal anatomy, its treatment is even more difficult.
Today there is no consensus about the right treatment for complex renal stones in CFRE. The objective of this case report is therefore to present the efficacy of endoscopic combined intrarenal surgery (ECIRS) for the treatment of a staghorn renal stone in one patient with CFRE. Case Presentation: We describe a case of a 23-year-old man with long-lasting pain on the left flank associated with intermittent gross hematuria. Enhanced CT revealed a crossed fused kidney on the left side, drained by an intercommunicating pelvis and a single ureter, with a staghorn stone wholly occupying both renal units. The patient was effectively treated with a single session of ECIRS. Conclusion: ECIRS, with a reasonable success rate, is a good alternative to consider in patients with CFRE who have a staghorn calculus. Answer: The use of retrograde holmium:YAG laser disintegration for stones in pelvic ectopic kidneys appears to minimize the risk of surgery. A study conducted between October 2005 and October 2006 on four patients with large obstructing calculi in pelvic ectopic kidneys demonstrated that retrograde ureteroscopy and holmium:YAG laser lithotripsy is a safe and effective procedure. The average operative time was 120 minutes, and three out of four patients were rendered stone-free at 3 months without developing back-pressure changes, gross hematuria, or abdominal pain during the follow-up period. Only one patient required open surgery, suggesting that this minimally invasive approach could avoid the complications associated with more invasive surgical procedures (PUBMED:18370611). Additionally, retrograde intrarenal surgery (RIRS) has been found to be an effective treatment method for anomalous kidneys with low and medium stone volumes, with high stone-free and low complication rates (PUBMED:37144307). The use of hybrid flexible ureteroscopy strategies, which include both single-use and reusable ureteroscopes, can be cost-efficient and protect expensive instruments in high-risk cases, such as those with abnormal anatomy like ectopic kidneys (PUBMED:36188640). Laparoscopic-assisted mini percutaneous nephrolithotomy with laser dusting has also been shown to offer advantages in ectopic pelvic kidneys, achieving good stone clearance, especially in patients with a large stone burden or failed ESWL or RIRS (PUBMED:26834410). However, it is important to note that anatomical variations in pelvic ectopic kidneys can increase the risk of complications, as evidenced by a case of delayed hemorrhage due to an arteriocaliceal fistula after RIRS (PUBMED:35510004). Overall, the evidence suggests that minimally invasive techniques, such as retrograde holmium:YAG laser disintegration, can minimize the risks associated with surgery for stones in pelvic ectopic kidneys.
Instruction: Is routine preoperative screening echocardiogram indicated in all children with congenital duodenal obstruction? Abstracts: abstract_id: PUBMED:24682563 Is routine preoperative screening echocardiogram indicated in all children with congenital duodenal obstruction? Introduction: Congenital duodenal obstruction (DO) is frequently associated with congenital heart disease (CHD). Operative repair of DO is often postponed until an echocardiogram is completed, which may result in unnecessary delays. We aimed to identify and characterize CHD in children with DO to determine if appropriately selected patients could forgo a preoperative echocardiogram. Methods: A two-center retrospective review of all infants with DO undergoing operative repair with completed echocardiograms was performed (2003-2011). Demographics, co-morbid conditions, clinical exam findings, radiologic imaging, and need for cardiac surgery were recorded. Results: 67 children were identified. 47 (70.1%) had CHD on echocardiogram, of which 19 (40.5%) had significant CHD. Children without clinical findings, abnormalities on physical examination, and/or abnormal chest x-ray were unlikely to have CHD; i.e., no asymptomatic child had significant CHD. Sensitivity and specificity of clinical findings, physical exam, and/or chest x-ray were 100% (95% CI 0.79-1.0) and 37.5% (95% CI 0.24-0.53), respectively, for major CHD, and 87.2% (0.74-0.95) and 60% (0.36-0.80) for any CHD. Conclusion: Careful clinical assessment, evaluation with pulse oximetry, and chest x-ray may be sufficient to exclude significant CHD in children with DO. Identifying children at low risk for cardiac lesions may prevent unnecessary delays to operative intervention and may limit medical expenses. abstract_id: PUBMED:29866484 Neonatal echocardiogram in duodenal obstruction is unnecessary after normal fetal cardiac imaging. Background: Duodenal obstruction (DO) is associated with congenital cardiac anomalies that may complicate the delivery of anesthesia during surgical repair. As most infants undergo fetal ultrasounds that identify cardiac anomalies, our aim was to determine the utility of obtaining preoperative neonatal echocardiograms in all DO patients. Methods: We conducted a retrospective cohort study of all DO patients treated at two tertiary care children's hospitals between January 2005 and February 2016. Prenatal ultrasounds were compared to neonatal echocardiograms to determine concordance. Binomial exact analyses were used to estimate the negative predictive value (NPV) of prenatal imaging. Results: We identified 65 infants with DO. The majority of patients (93.8%) had prenatal ultrasounds, including twenty patients who underwent fetal echocardiography. Fourteen (21.5%) were diagnosed with cardiac lesions in utero, and neonatal echocardiograms confirmed 12 lesions, without identifying any new lesions. No changes to anesthetic management were made because of cardiac lesions. The NPV of prenatal imaging was 100% (95% Confidence Interval: 91.0-100.0). Conclusions: Neonatal echocardiogram is unlikely to identify new cardiac lesions in DO patients with negative fetal imaging, and delays in surgical care are unwarranted. Level Of Evidence: Study of Diagnostic Test, Level II. abstract_id: PUBMED:12124699 Prenatal ultrasonographic detection of gastrointestinal obstruction: results from 18 European congenital anomaly registries.
Objectives: We evaluated the prenatal detection of gastrointestinal obstruction (GIO, including atresia, stenosis, absence or fistula) by routine ultrasonographic examination in an unselected population all over Europe. Methods: Data from 18 congenital malformation registries in 11 European countries were analysed. These multisource registries used the same methodology. All fetuses/neonates with GIO confirmed within 1 week after birth who had prenatal sonography and were born during the study period (1 July 1996 to 31 December 1998) were included. Results: There were 670 793 births in the area covered and 349 fetuses/neonates had GIO. The prenatal detection rate of GIO was 34%; of these, 40% were detected at ≤24 weeks of gestation (WG). A total of 31% (60/192) of the isolated GIO were detected prenatally, as were 38% (59/157) of the associated GIO (p=0.26). The detection rate was 25% for esophageal obstruction (31/122), 52% for duodenal obstruction (33/64), 40% for small intestine obstruction (27/68) and 29% for large intestine obstruction (28/95) (p=0.002). The detection rate was higher in countries with a policy of routine obstetric ultrasound. Fifteen percent of pregnancies were terminated (51/349). Eleven of these had chromosomal anomalies, 31 multiple malformations, eight non-chromosomal recognized syndromes, and one isolated GIO. The participating registries reflect the various national policies for termination of pregnancy (TOP), but TOPs after 24 WG (11/51) do not appear to be performed more frequently in countries with a liberal TOP policy. Conclusion: This European study shows that the detection rate of GIO depends on the screening policy and on the sonographic detectability of GIO subgroups. abstract_id: PUBMED:36967411 Cardiac anomalies in children with congenital duodenal obstruction: a systematic review with meta-analysis. Background: Cardiac anomalies occur frequently in patients with congenital duodenal obstruction (DO). However, the exact occurrence and the type of associated anomalies remain unknown. Therefore, the aim of this systematic review is to aggregate the available literature on cardiac anomalies in patients with DO. Methods: In July 2022, a search was performed in PubMed and Embase.com. Studies describing cardiac anomalies in patients with congenital DO were considered eligible. Primary outcome was the pooled percentage of cardiac anomalies in patients with DO. Secondary outcomes were the pooled percentages of the types of cardiac anomalies, type of DO, and trisomy 21. A meta-analysis was performed to pool the reported data. Results: In total, 99 publications met our eligibility criteria, representing 6725 patients. The pooled percentage of cardiac anomalies was 29% (95% CI 0.26-0.32). The most common cardiac anomalies were persistent foramen ovale 35% (95% CI 0.20-0.54), ventricular septal defect 33% (95% CI 0.24-0.43), and atrial septal defect 33% (95% CI 0.26-0.41). The most prevalent type of obstruction was type 3 (complete atresias), with a pooled percentage of 54% (95% CI 0.48-0.60). The pooled percentage of trisomy 21 in patients with DO was 28% (95% CI 0.26-0.31). Conclusion: This review shows cardiac anomalies are found in one-third of the patients with DO regardless of the presence of trisomy 21. Therefore, we recommend that patients with DO should receive preoperative cardiac screening. Level Of Evidence: II. abstract_id: PUBMED:18377511 Abnormalities of intestinal rotation in patients with congenital heart disease and the heterotaxy syndrome.
Objective: Abnormalities of intestinal rotation (AIR) are seen in association with congenital heart disease and heterotaxy syndrome. The prevalence of these abnormalities and recommendations for management are unclear. Our objective was to determine the prevalence of screening for AIR by elective imaging among our group, and of prophylactic vs. emergent surgical intervention for AIR, in patients with congenital heart disease and heterotaxy syndrome. Methods: From October 1988 through October 2000, we identified 74 patients with congenital heart disease and heterotaxy syndrome: 44 (59%) with asplenia and 30 (41%) with polysplenia. Abdominal imaging was performed in 34 patients (45%). Twenty-four (32%) were found to have AIR. Of the 34 patients imaged, 22 (65%) were found to have AIR. Two patients not imaged were found to have AIR: one at autopsy, and the other incidentally during other abdominal surgery. Because imaging was performed based on individual cardiologists' practice styles, which did not change over the period of the study, and rarely secondary to symptoms, it is likely that the prevalence of AIR in the patients who were not electively imaged would be similar. Results: There was no statistical difference in the presence of AIR between asplenic (34% [15/44]) and polysplenic (30% [9/30]) patients. Of the 22 patients imaged with AIR, 18 underwent a Ladd procedure. Five of 12 imaged patients without AIR were found to have other significant gastrointestinal pathologies requiring intervention, including gastrostomy tube placement for reflux (3), duodenal web (1), and biliary atresia (1). Of the 40 patients who were not pre-emptively imaged, none suffered acute obstruction solely secondary to AIR. However, in 2 patients intestinal obstruction was suspected and subsequently discovered by imaging and/or laparotomy due to other intestinal anomalies. Conclusions: AIR is common among patients with heterotaxy syndrome and congenital heart disease. We recommend that patients with congenital heart disease and heterotaxy syndrome have routine elective abdominal imaging of their gastrointestinal tract at birth as part of their evaluation. abstract_id: PUBMED:22730264 A systematic review of studies of quality of life in children and adults with selected congenital anomalies. Background: Few studies have assessed quality of life (QOL) for children born with major structural congenital anomalies. We aimed to review studies reporting QOL in children and adults born with selected congenital anomalies involving the digestive system. Methods: Systematic review methods were applied to literature searches, development of the data extraction protocol, and the review process. We included studies published in English (1990-2010), which used validated instruments to assess QOL in individuals born with congenital diaphragmatic hernia, esophageal atresia, duodenal atresia or abdominal wall defects. Results: Of 200 papers identified through literature searches, 111 were excluded after applying restrictions and removing duplicates. After scanning 89 abstracts, 32 full-text papers were reviewed (none on duodenal atresia), of which 18 (nine in children or adolescents and nine in adults) were included. Studies measured health-related QOL, but did not assess subjective wellbeing. Instruments used to assess health-related QOL in children varied considerably. In adults, most studies used the Short Form 36. Many studies had methodological limitations, such as being from a single institution, retrospective cohorts, and low sample size.
The summarized evidence suggests that the health-related QOL of these children is affected by associated anomalies and ongoing morbidity, resulting in lower physical functioning and general health perception. In adults, health-related QOL is comparable with the general population. Conclusions: The reviewed studies considered health status and functioning as a major determinant of QOL. More studies assessing QOL in patients with major congenital anomalies are needed, and those involving children should use age-adjusted, validated instruments to measure both health-related QOL and self-reported subjective wellbeing. abstract_id: PUBMED:31391773 Comparison of outcomes between complete and incomplete congenital duodenal obstruction. Background: Congenital duodenal obstruction (CDO) can be complete (CCDO) or incomplete (ICDO). To date there is no outcome analysis available that compares both subtypes. Aim: To quantify and compare the association between CCDO and ICDO with outcome parameters. Methods: We retrospectively reviewed all patients who underwent operative repair of CCDO or ICDO in our tertiary care institution between January 2004 and January 2017. The demographics, clinical presentation, preoperative diagnostics and postoperative outcomes of 50 patients were compared between CCDO (n = 27; atresia type 1-3, annular pancreas) and ICDO (n = 23; annular pancreas, web, Ladd's bands). Results: In total, 50 patients who underwent CDO repair were enrolled and followed for a median of 5.2 and 3.9 years (CCDO and ICDO, resp.). CCDO was associated with a significantly higher prenatal ultrasonographic detection rate (88% versus 4%; CCDO vs ICDO, P < 0.01), lower gestational age at birth, lower age and weight at operation, higher rate of associated congenital heart disease (CHD), more extensive preoperative radiologic diagnostics, and higher morbidity according to the Clavien-Dindo classification and comprehensive complication index (all P ≤ 0.01). The subgroup analysis of patients without CHD and prematurity showed a longer time from operation to the initiation of enteral feeds in the CCDO group (P < 0.01). Conclusion: CCDO and ICDO differ with regard to prenatal detection rate, gestational age, age and weight at operation, rate of associated CHD, preoperative diagnostics and morbidity. The degree of CDO in mature patients without CHD influences the postoperative initiation of enteral feeding. abstract_id: PUBMED:37385804 Efficacy and safety of endoscopic diaphragm incision in children with congenital duodenal diaphragm. Objective: To explore the efficacy and safety of endoscopic diaphragm incision in pediatric congenital duodenal diaphragm. Methods: Eight children with duodenal diaphragm treated by endoscopic diaphragm incision in the Department of Gastroenterology of Guangzhou Women and Children's Medical Center from October 2019 to May 2022 were enrolled in this study. Their clinical data, including general conditions, clinical manifestations, laboratory and imaging examinations, endoscopic procedures and outcomes, were retrospectively analyzed. Results: Among the 8 children, 4 were males and 4 females. The diagnosis was confirmed at the age of 6-20 months; the age of onset was 0-12 months and the course of disease was 6-18 months. The main clinical manifestations were recurrent non-biliary vomiting, abdominal distension and malnutrition. One case complicated by refractory hyponatremia was first diagnosed with atypical congenital adrenal hyperplasia in the endocrinology department.
After treatment with hydrocortisone, the blood sodium returned to normal, but vomiting recurred. One patient had undergone laparoscopic rhomboid duodenal anastomosis in another hospital but had recurrent vomiting after the operation, and was diagnosed with a double duodenal diaphragm on endoscopy. No other malformations were found in any of the 8 cases. The duodenal diaphragm was located in the descending part of the duodenum, and the duodenal papilla was located below the diaphragm in all 8 cases. Three cases had the diaphragm dilated by balloon to explore the extent of the diaphragm opening before incision; the other 5 had the diaphragm incision performed after probing the diaphragm opening with a guide wire. All 8 cases were successfully treated by endoscopic incision of the duodenal diaphragm, with operation times of 12-30 minutes. There were no complications such as intestinal perforation, active bleeding or duodenal papilla injury. At one month of follow-up, their weight had increased by 0.4-1.5 kg, an increase of 5%-20%. Within the postoperative follow-up period of 2-20 months, all 8 children had their duodenal obstruction relieved, without vomiting or abdominal distension, and all resumed normal feeding. Gastroscopy review at 2-3 months after the operation in 3 cases found no deformation of the duodenal bulbar cavity, and the mucosa of the incision was smooth, with a duodenal diameter of 6-7 mm. Conclusion: Endoscopic diaphragm incision is safe, effective and less invasive in pediatric congenital duodenal diaphragm, with favorable clinical applicability. abstract_id: PUBMED:36553258 The Incidence of Associated Anomalies in Children with Congenital Duodenal Obstruction-A Retrospective Cohort Study of 112 Patients. Background: Duodenal obstruction (DO) is a congenital anomaly that is highly associated with other anomalies, such as cardiac anomalies and trisomy 21. However, an overview of additional anomalies and patient-specific risk factors for cardiac anomalies is lacking. Potential association with the vertebral, anorectal, cardiac, tracheo-esophageal, renal and limb anomalies (VACTERL) spectrum remains unknown. Therefore, we aim to examine the incidence of associated anomalies, a VACTERL-spectrum association and patient-specific risk factors for cardiac anomalies in patients with DO. Methods: A retrospective cohort study was performed between 1996 and 2021. Outcomes were the presence of any additional anomalies. Risk factors for cardiac anomalies were analyzed using multivariate logistic regression. Results: Of 112 neonates with DO, 47% (N = 53/112) had one associated anomaly and 38% (N = 20/53) had multiple anomalies. Cardiac anomalies (N = 35/112) and trisomy 21 (N = 35/112) were present in 31%. In four patients, a VACTERL-spectrum association was discovered, all with cardiac anomalies. Trisomy 21 was found to be a risk factor for cardiac anomalies (OR: 6.5; 95% CI: 2.6-16.1). Conclusion: Associated anomalies were present in half of the patients with DO, of which cardiac anomalies and trisomy 21 occurred most often, and the VACTERL spectrum was present in four patients. Trisomy 21 was a significant risk factor for cardiac anomalies. Therefore, we recommend a preoperative echocardiogram in patients with DO. If a cardiac anomaly is found without trisomy 21, VACTERL screening should be performed.
There are several causes of congenital duodenal obstruction, most of which require elective surgical correction, but the potential of malrotation with midgut volvulus demands prompt imaging evaluation. We review the various causes of congenital duodenal obstruction with an emphasis on the approach to imaging evaluation and diagnosis. Answer: The necessity of routine preoperative screening echocardiograms in all children with congenital duodenal obstruction (DO) is debated in the literature. Some studies suggest that careful clinical assessment, including evaluation with pulse oximetry and chest x-ray, may be sufficient to exclude significant congenital heart disease (CHD) in children with DO. One study found that children without clinical findings, abnormalities on physical examination, or abnormal chest x-ray were unlikely to have CHD, suggesting that not all children with DO may require a preoperative echocardiogram (PUBMED:24682563). Another study reported that neonatal echocardiograms are unlikely to identify new cardiac lesions in DO patients with negative fetal imaging, indicating that preoperative neonatal echocardiograms may not be necessary if fetal imaging did not show any cardiac anomalies (PUBMED:29866484). However, a systematic review with meta-analysis found that cardiac anomalies are present in about one-third of patients with DO, regardless of the presence of trisomy 21, leading to the recommendation that patients with DO should receive preoperative cardiac screening (PUBMED:36967411). This is supported by another study that found a significant incidence of associated anomalies, including cardiac anomalies, in patients with DO, with trisomy 21 being a significant risk factor for cardiac anomalies (PUBMED:36553258). In conclusion, while some studies suggest that not all children with DO may require a preoperative echocardiogram if they have no clinical signs of CHD or if fetal imaging was negative for cardiac lesions, other research indicates a high prevalence of cardiac anomalies in DO patients, recommending preoperative cardiac screening. The decision to perform a routine preoperative echocardiogram may depend on individual clinical assessments, the presence of risk factors such as trisomy 21, and institutional protocols.
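For reference alongside the screening statistics quoted in this section (these definitions are standard textbook material, not taken from the abstracts): with TP/FP/TN/FN denoting true/false positives and negatives, sensitivity = TP / (TP + FN), specificity = TN / (TN + FP), and negative predictive value NPV = TN / (TN + FN). For a perfect result such as the NPV of 100% in PUBMED:29866484, a 'binomial exact' (Clopper-Pearson) two-sided 95% interval for x = n successes out of n has upper bound 1 and lower bound (α/2)^(1/n) with α = 0.05, which is presumably how the 91.0-100.0 interval was derived; the denominator n used by the authors is not reported in the abstract.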
Instruction: Does subthalamic nucleus stimulation induce apathy in Parkinson's disease? Abstracts: abstract_id: PUBMED:31930749 Subthalamic Nucleus Stimulation Impairs Motivation: Implication for Apathy in Parkinson's Disease. Background: Apathy is one of the most disabling neuropsychiatric symptoms in Parkinson's disease (PD) patients and has a higher prevalence in patients under subthalamic nucleus deep brain stimulation. Indeed, despite its effectiveness for alleviating PD motor symptoms, its neuropsychiatric repercussions have not yet been fully uncovered. Because it can be alleviated by dopaminergic therapies, especially D2 and D3 dopaminergic receptor agonists, the commonest explanation proposed for apathy after subthalamic nucleus deep brain stimulation is an excessive reduction in dopaminergic treatment. The objective of this study was to determine whether subthalamic nucleus deep brain stimulation can induce apathetic behaviors, which remains an important matter of concern. Methods: We longitudinally assessed the motivational effects of chronic subthalamic nucleus deep brain stimulation by using innovative wireless microstimulators, allowing continuous stimulation of the subthalamic nucleus in freely moving rats, and a pharmacological therapeutic approach. Results: We showed for the first time that subthalamic nucleus deep brain stimulation induces a motivational deficit in naive rats and intensifies existing deficits in a rodent model of PD neuropsychiatric symptoms. As reported from clinical studies, this loss of motivation was fully reversed by chronic treatment with pramipexole, a D2 and D3 dopaminergic receptor agonist. Conclusions: Taken together, these data provide experimental evidence that chronic subthalamic nucleus deep brain stimulation by itself can induce loss of motivation, reminiscent of apathy, independently of the dopaminergic neurodegenerative process or reduction in dopamine replacement therapy, presumably reflecting a dopaminergic-driven deficit. Therefore, our data help to clarify and reconcile conflicting clinical observations by highlighting some of the mechanisms of the neuropsychiatric side effects induced by chronic subthalamic nucleus deep brain stimulation. abstract_id: PUBMED:37787488 Motivational and cognitive predictors of apathy after subthalamic nucleus stimulation in Parkinson's disease. Postoperative apathy is a frequent symptom in Parkinson's disease patients who have undergone bilateral deep brain stimulation of the subthalamic nucleus. Two main hypotheses for postoperative apathy have been suggested: (i) dopaminergic withdrawal syndrome relative to postoperative dopaminergic drug tapering; and (ii) a direct effect of chronic stimulation of the subthalamic nucleus. The primary objective of our study was to describe preoperative and 1-year postoperative apathy in Parkinson's disease patients who underwent chronic bilateral deep brain stimulation of the subthalamic nucleus. We also aimed to identify factors associated with 1-year postoperative apathy considering: (i) preoperative clinical phenotype; (ii) dopaminergic drug management; and (iii) volume of tissue activated within the subthalamic nucleus and the surrounding structures.
We investigated a prospective clinical cohort of 367 patients before and 1 year after chronic bilateral deep brain stimulation of the subthalamic nucleus. We assessed apathy using the Lille Apathy Rating Scale and carried out a systematic evaluation of motor, cognitive and behavioural signs. We modelled the volume of tissue activated in 161 patients using the Lead-DBS toolbox and analysed overlaps within motor, cognitive and limbic parts of the subthalamic nucleus. Of the 367 patients, 94 (25.6%) exhibited 1-year postoperative apathy: 67 (18.2%) with 'de novo apathy' and 27 (7.4%) with 'sustained apathy'. We observed disappearance of preoperative apathy in 22 (6.0%) patients, who were classified as having 'reversed apathy'. Lastly, 251 (68.4%) patients had neither preoperative nor postoperative apathy and were classified as having 'no apathy'. We identified preoperative apathy score [odds ratio (OR) 1.16; 95% confidence interval (CI) 1.10, 1.22; P < 0.001], preoperative episodic memory free recall score (OR 0.93; 95% CI 0.88, 0.97; P = 0.003) and 1-year postoperative motor responsiveness (OR 0.98; 95% CI 0.96, 0.99; P = 0.009) as the main factors associated with postoperative apathy. We showed that neither dopaminergic dose reduction nor subthalamic stimulation was associated with postoperative apathy. Patients with 'sustained apathy' had poorer preoperative fronto-striatal cognitive status and a higher preoperative action initiation apathy subscore. In these patients, apathy score and cognitive status worsened postoperatively despite a significantly lower reduction in dopamine agonists (P = 0.023), suggesting cognitive dopa-resistant apathy. Patients with 'reversed apathy' benefited from the psychostimulant effect of chronic stimulation of the limbic part of the left subthalamic nucleus (P = 0.043), suggesting motivational apathy. Our results highlight the need for careful preoperative assessment of motivational and cognitive components of apathy as well as executive functions in order to better prevent or manage postoperative apathy. abstract_id: PUBMED:37567462 Subacute alpha frequency (10Hz) subthalamic stimulation for emotional processing in Parkinson's disease. Background: Psychiatric comorbidities are common in Parkinson's disease (PD) and may change with high-frequency stimulation targeting the subthalamic nucleus. Numerous accounts indicate that subthalamic alpha-frequency oscillation is implicated in emotional processing. While intermittent alpha-frequency (10Hz) stimulation induces positive emotional effects, with more ventromedial contacts inducing larger effects, little is known about the subacute effect of ventral 10Hz subthalamic stimulation on emotional processing. Objective/hypothesis: To evaluate the subacute effect of 10Hz stimulation at the bilateral ventral subthalamic nucleus on emotional processing in PD patients using an affective task, compared to that of clinical-frequency stimulation and off-stimulation. Methods: Twenty PD patients with bilateral subthalamic deep brain stimulation for more than six months were tested with the affective task under three stimulation conditions (10Hz, 130Hz, and off-stimulation) in a double-blinded randomized design. Results: While 130Hz stimulation reduced arousal ratings in all patients, 10Hz stimulation increased arousal selectively in patients with higher depression scores.
Furthermore, 10Hz stimulation induced a positive shift in valence ratings of negative emotional stimuli in patients with lower apathy scores, and 130Hz stimulation led to more positive valence ratings of emotional stimuli in patients with higher apathy scores. Notably, we found correlational relationships between stimulation site and affective rating: arousal ratings increase with stimulation from anterior to posterior sites, and positive valence ratings increase with stimulation from dorsal to ventral sites of the ventral subthalamic nucleus. Conclusions: Our findings highlight the distinctive role of 10Hz stimulation in subjective emotional experience and unveil the spatial organization of the stimulation effect. abstract_id: PUBMED:19157719 The subthalamic nucleus is a key-structure of limbic basal ganglia functions. Among the basal ganglia nuclei, the subthalamic nucleus has a major function in the motor cortico-basal ganglia-thalamo-cortical circuit and is a target site for neurosurgical treatment, for example in parkinsonian patients with long-term motor fluctuations and dyskinesia. According to animal and human studies, the motor functions of the subthalamic nucleus have been well documented, whereas its implication in limbic functions is still less well understood and is only partially explained by anatomical and functional theories of basal ganglia organisation. After chronic subthalamic nucleus stimulation in patients with Parkinson's disease, many studies showed executive impairments, apathy, depression, hypomania, and impairment of recognition of negative facial emotions. The medial tip of the subthalamic nucleus represents its limbic part. This part receives inputs from the anterior cingulate cortex, the medial prefrontal cortex, the limbic part of the striatum (nucleus accumbens), the ventral tegmental area and the limbic ventral pallidum. The medial tip of the subthalamic nucleus projects to the limbic part of the substantia nigra and the ventral tegmental area. We propose a new functional scheme of the limbic system, establishing connections between limbic cortical structures (medial prefrontal cortex, amygdala and hippocampus) and the limbic part of the basal ganglia. This new circuit could be composed of a minor part based on the model of the cortico-basal ganglia-thalamo-cortical loop, and of a major part linking the subthalamic nucleus with the mesolimbic dopaminergic pathway via the ventral tegmental area and the nucleus accumbens, and with limbic cortical structures. This scheme could explain limbic impairments after subthalamic nucleus stimulation by disruption of limbic information inside the subthalamic nucleus and the ventral tegmental area.
Notwithstanding these adverse neuropsychiatric effects in PD, STN-DBS may also have a role in the treatment of refractory psychiatric disorders, as more is understood about the physiology of this nucleus and techniques in neuromodulation are refined. In this chapter, we link neuropsychiatric symptoms after STN-DBS for PD to the biological effects of electrode implantation, neurostimulation, and adjustments to dopaminergic medication, in the setting of neurodegeneration affecting cortico-striatal connectivity. We then provide an overview of clinical trials that have employed STN-DBS to treat obsessive-compulsive disorder and discuss future directions for subthalamic neuromodulation in psychiatry. abstract_id: PUBMED:30681186 Effects of subthalamic nucleus stimulation and levodopa on decision-making in Parkinson's disease. Background: Parkinson's disease (PD) is frequently associated with behavioral disorders, particularly within the spectrum of motivated behaviors such as apathy or impulsivity. Both pharmacological and neurosurgical treatments have an impact on these impairments. However, there is still controversy as to whether subthalamic nucleus deep brain stimulation (STN-DBS) can cause or reduce impulsive behaviors. Objectives: We aimed to identify the influence of functional surgery on decision-making processes in PD. Methods: We studied 13 PD patients and 13 healthy controls. The experimental task involved squeezing a dynamometer with variable force to obtain rewards of various values under four conditions: without treatment, with l-dopa or subthalamic stimulation alone, and with both l-dopa and subthalamic stimulation. Statistical analyses consisted of generalized linear mixed models including treatment condition, reward value, level of effort, and their interactions. We analyzed acceptance rate (the percentage of accepted trials), decision time, and force applied. Results: Compared to controls, patients without treatment exhibited a lower acceptance rate and applied less force. Patients under l-dopa alone did not exhibit an increased acceptance rate. With subthalamic stimulation, either with or without added l-dopa, all measures were improved so that patients' behaviors were indistinguishable from healthy controls'. Conclusions: Our study shows that l-dopa administration does not fully restore cost-benefit decision-making processes, whereas STN-DBS fully normalizes patients' behaviors. These findings suggest that dopamine is partly involved in cost-benefit valuation, and that STN-DBS can have a beneficial effect on motivated behaviors in PD and may improve certain forms of impulsive behaviors. abstract_id: PUBMED:22450611 Mood response to deep brain stimulation of the subthalamic nucleus in Parkinson's disease. Deep brain stimulation of the subthalamic nucleus (STN DBS) in Parkinson's disease (PD) improves motor functioning but has variable effects on mood. Little is known about the relationship between electrode contact location and mood response. The authors identified the anatomical location of electrode contacts and measured mood response to stimulation with the Visual Analog Scale in 24 STN DBS PD patients. Participants reported greater positive mood and decreased anxiety and apathy with bilateral and unilateral stimulation. Left DBS improved mood more than right DBS. The right DBS-induced increase in positive mood was related to more medial and dorsal contact locations.
These results highlight the functional heterogeneity of the STN. abstract_id: PUBMED:16607469 Does subthalamic nucleus stimulation induce apathy in Parkinson's disease? Background: Subthalamic Nucleus Deep Brain Stimulation (STN-DBS) has been shown to significantly improve motor symptoms in advanced Parkinson's disease (PD). Only a few studies, however, have focused on the non-motor effects of DBS. Methods: A consecutive series of 15 patients was assessed three months before (M-3), then three months (M3) and six months (M6) after surgery. Mean (± SD) age at surgery was 59.7 (7.6) years. Mean disease duration at surgery was 12.2 (2.8) years. The Mini International Neuropsychiatric Inventory was used to assess psychiatric disorders three months before surgery. Depression was evaluated using the Montgomery and Asberg Rating Scale (MADRS). Anxiety was evaluated using the AMDP system (Association for Methodology and Documentation in Psychiatry). Apathy was evaluated in particular using the Apathy Evaluation Scale (AES) and the Starkstein Scale. All these scales were administered at every evaluation. Results: Apathy worsened at M3 and M6 after STN-DBS in comparison with the preoperative evaluation: the mean AES score worsened significantly from the preoperative value (38.4±7.1) to both the postoperative M3 (44.6±9.5, p = 0.003) and M6 scores (46.0±10.9, p = 0.013). Significant worsening of apathy was confirmed using the Starkstein scale. There was no evidence of depression: the mean MADRS score did not differ between before surgery (9.1±7.4) and both M3 (8.6±8.2) and M6 (9.9±7.7) after STN-DBS. The anxiety level did not change between the preoperative (9.4±9.2) and both the M3 (5.5±4.5) and M6 (6.6±4.6) postoperative states. Conclusion: Although STN-DBS constitutes a therapeutic advance for severely disabled patients with Parkinson's disease, we should keep in mind that this surgical procedure may contribute to the induction of apathy. Our observation raises the issue of the direct influence of STN-DBS on the limbic system by diffusion of the stimulus to the medial limbic compartment of the STN. abstract_id: PUBMED:26228098 A preoperative metabolic marker of parkinsonian apathy following subthalamic nucleus stimulation. Background: Subthalamic nucleus deep brain stimulation (STN-DBS) in Parkinson's disease (PD) has been associated with the development of postoperative apathy. Debate on the causes of postoperative apathy continues, and the dominant hypothesis is that stimulation or dopaminergic drug reductions are causal in its development. We hypothesized that a preoperative predisposition to apathy could also exist. To this end, we sought to identify a preoperative metabolic pattern using [18F]fluorodeoxyglucose Positron Emission Tomography (PET) that could be associated with the occurrence of postoperative apathy after STN-DBS for PD. Methods: Thirty-four patients with PD, not clinically apathetic, underwent an [18F]fluorodeoxyglucose-PET scan before STN-DBS surgery, and were tested for the occurrence of apathy 1 y after surgery. Whole-brain voxel-based PET intergroup comparison (P < 0.005; corrected for the cluster) was evaluated between patients who developed apathy at 1 y and those who did not. Results: Eight patients (23.5%) became apathetic after surgery. Motor improvement and decrease in dopaminergic treatment were similar in both postoperative apathy and non-apathy groups.
We found a cluster of significantly greater metabolism in the postoperative apathy group within the cerebellum, brainstem (in particular the ventral tegmental area), temporal lobe, insula, amygdala, lentiform nucleus, subgenual anterior cingulate, and inferior frontal gyrus. A metabolic value above 68 could discriminate patients who would develop postoperative apathy with 100% sensitivity and 88.5% specificity. Conclusions: We describe a preoperative metabolic pattern associated with the development of apathy after STN-DBS in PD. This suggests the existence of a predisposition to apathy, which may further be triggered by perioperative drug modifications. abstract_id: PUBMED:15490353 Psychiatric symptoms of Parkinson's disease following deep brain stimulation surgery on the subthalamic nucleus. Aim: To review the increasing number of papers reporting diverse neuropsychiatric disorders that occur in patients diagnosed with Parkinson's disease who have undergone deep brain stimulation of the subthalamic nuclei with high-frequency current. Development: All patients who are to undergo this surgical procedure must be evaluated carefully, analyzing their previous psychiatric history and the possible appearance of psychiatric symptoms after surgery. Acute depression and euphoric moods (which can occur immediately after surgery) and major depression, obsession, generalized anxiety and substance abuse (among those of more delayed appearance) are examples of this pathology. Treatment of pre-existing psychiatric disorders is mandatory in all cases and especially relevant in major depression when suicidal ideation coexists.
Notably, neither dopaminergic dose reduction nor subthalamic stimulation was directly associated with postoperative apathy, suggesting that other factors, such as preoperative cognitive status and the action initiation apathy subscore, may play a role (PUBMED:37787488). Furthermore, research has indicated that STN-DBS may contribute to the induction of apathy by potentially disrupting limbic information within the STN and the ventral tegmental area, which are involved in limbic functions (PUBMED:16607469). A preoperative metabolic pattern has also been identified, suggesting a predisposition to apathy that may be triggered by perioperative drug modifications (PUBMED:26228098). In summary, STN-DBS can induce apathy in PD patients, and this effect may be influenced by a combination of preoperative factors, stimulation parameters, and changes in dopaminergic treatment. Careful preoperative assessment and management of neuropsychiatric symptoms are recommended to better prevent or manage postoperative apathy (PUBMED:37787488).
Instruction: Intestinal cancer after cholecystectomy: is bile involved in carcinogenesis? Abstracts: abstract_id: PUBMED:29163805 Thymine DNA Glycosylase (TDG) is involved in the pathogenesis of intestinal tumors with reduced APC expression. Thymine DNA Glycosylase (TDG) is a base excision repair enzyme that acts as a thymine and uracil DNA N-glycosylase on G:T and G:U mismatches, thus protecting CpG sites in the genome from mutagenesis by deamination. In addition, TDG has an epigenomic function by removing the novel cytosine derivatives 5-formylcytosine and 5-carboxylcytosine (5caC) generated by Ten-Eleven Translocation (TET) enzymes during active DNA demethylation. We and others previously reported that TDG is essential for mammalian development. However, its involvement in tumor formation is unknown. To study the role of TDG in tumorigenesis, we analyzed the effects of its inactivation in a well-characterized model of tumor predisposition, the ApcMin mouse strain. Mice bearing a conditional Tdgflox allele were crossed with Fabpl::Cre transgenic mice, in the context of the ApcMin mutation, in order to inactivate Tdg in the small intestinal and colonic epithelium. We observed an approximately 2-fold increase in the number of small intestinal adenomas in the test Tdg-mutant ApcMin mice in comparison to control genotypes (p=0.0001). This increase occurred in female mice, and is similar to the known increase in intestinal adenoma formation due to oophorectomy. In the human colorectal cancer (CRC) TCGA database, the subset of patients with TDG and APC expression in the lowest quartile exhibits an excess of female cases. We conclude that TDG inactivation plays a role in intestinal tumorigenesis initiated by mutation/underexpression of APC. Our results also indicate that TDG may be involved in sex-specific protection from CRC. abstract_id: PUBMED:9287968 The multiple endocrine neoplasia type I gene locus is involved in the pathogenesis of type II gastric carcinoids. Background & Aims: Both gastrin and genetic factors were suggested to underlie the pathogenesis of multiple gastric enterochromaffin-like (ECL) cell carcinoids. To assess the role of genetic alterations in carcinoid tumorigenesis, loss of heterozygosity (LOH) at the locus of the multiple endocrine neoplasia type 1 (MEN-1) gene was studied in gastric carcinoids of patients with MEN-1 and chronic atrophic type A gastritis (A-CAG), as well as in sporadically arising intestinal carcinoids. Methods: DNA extracted from archival tissue sections of 35 carcinoid tumors was assessed for LOH with eight polymorphic markers on chromosome 11q13. A combined tumor and family study was performed in 1 patient with MEN-1-Zollinger-Ellison syndrome (ZES). Results: LOH at 11q13 loci was detected in 15 of 20 (75%) MEN-1-ZES carcinoids, and each ECL-cell carcinoid with LOH showed deletion of the wild-type allele. Only 1 of 6 A-CAG carcinoids displayed LOH at the MEN-1 gene locus, and none of the 9 intestinal and rectal carcinoids showed 11q13 LOH. Conclusions: Gastric ECL-cell carcinoid is an independent tumor type of MEN-1 that shares a common developmental mechanism (via inactivation of the MEN-1 gene) with enteropancreatic and parathyroid MEN-1 tumors. Further analysis of sporadic and A-CAG carcinoids is needed to elucidate genetic factors involved in their tumorigenesis. abstract_id: PUBMED:11522737 Intestinal cancer after cholecystectomy: is bile involved in carcinogenesis?
Background & Aims: Results concerning an association between cholecystectomy and right-sided colon cancer are inconsistent. Little is known about the relation between cholecystectomy and small bowel cancer. Therefore, we evaluated cholecystectomy and the risk of bowel cancer. Methods: Cholecystectomized patients, identified through the Swedish Inpatient Register, from 1965 through 1997, were followed up for subsequent cancer. The standardized incidence ratio (SIR) was used to estimate relative risk. Results: In total, 278,460 cholecystectomized patients, contributing 3,519,682 person-years, were followed up for a maximum of 33 years after surgery. Cholecystectomized patients had an increased risk of proximal intestinal adenocarcinoma, which gradually declined with increasing distance from the common bile duct. The risk was significantly increased for adenocarcinoma (SIR, 1.77; 95% confidence interval [CI], 1.37-2.24) and carcinoids of the small bowel (SIR, 1.71; 95% CI, 1.39-2.08), and right-sided colon cancer (SIR, 1.16; 95% CI, 1.08-1.24). No association was found with more distal bowel cancer. The gradient was further pronounced when surgery of the common bile duct was included. The associations remained increased up to 33 years after cholecystectomy. No differences between sexes were found. Conclusions: Cholecystectomy increases the risk of intestinal cancer, a risk that declines with increasing distance from the common bile duct. Changes in the intestinal exposure to bile might be the underlying biological mechanism. abstract_id: PUBMED:20683002 Comprehensive analysis of genes involved in the malignancy of gastrointestinal stromal tumors. Background: During tumorigenesis of gastrointestinal stromal tumors (GISTs), the most frequent changes are reported to be gain-of-function mutations in the C-KIT proto-oncogene. However, we speculated that additional genetic alterations are required for the progression of GISTs. Patients And Methods: Using 15 cases diagnosed with GISTs, we searched for novel indicator genes by microarray analyses using an Oligo GEArray® PI3K-AKT Signaling Pathway Microarray Kit. In addition, we analyzed the mutational status of C-KIT and the proliferation status indicated by the Ki-67 index. Results: The tumor localizations of the 15 GISTs were as follows: 8 in the stomach; 2 in the small intestine; 2 in the mesentery; 1 in the duodenum; 1 in the rectum; and 1 in the liver. Regarding the C-KIT gene analysis, mutations in exon 11 were detected in 11 out of 13 patients. In 1 out of the 13 patients, mutations were detected in both exons 11 and 13. No genetic abnormalities were identified in 1 patient. The Ki-67 labeling indices were significantly lower for the low-risk and intermediate-risk groups than for the high-risk group (p=0.0440). No specific genes were overexpressed in the >1% Ki-67 group. Regarding the primary lesion sites, the following 6 genes were overexpressed in tumors in the stomach: RBL2, RHOA, SHC1, HSP90AB1, ACTB and BAS2C. Conclusion: Gene analysis is currently only useful for diagnostic assessment and predicting therapeutic effects. However, it may be possible for new malignancy-related factors to be identified by comparing and investigating gene expression levels and other factors using such analyses. abstract_id: PUBMED:34966170 Mapping of novel loci involved in lung and colon tumor susceptibility by the use of genetically selected mouse strains.
Two non-inbred mouse lines, phenotypically selected for maximal (AIRmax) and minimal (AIRmin) acute inflammatory response, show differential susceptibility/resistance to the development of several chemically induced tumor types. An intercross pedigree of these mice was generated and treated with the chemical carcinogen dimethylhydrazine, which induces lung and intestinal tumors. Genome-wide high-density genotyping with the restriction site-associated DNA (2B-RAD) technique was used to map genetic loci modulating individual genetic susceptibility to both lung and intestinal cancer. Our results reveal new common quantitative trait loci (QTL) for those phenotypes and provide an improved understanding of the relationship between genomic variation and individual genetic predisposition to tumorigenesis in different organs. abstract_id: PUBMED:18204079 Further upregulation of beta-catenin/Tcf transcription is involved in the development of macroscopic tumors in the colon of ApcMin/+ mice. The Apc(Min/+) mouse, a mouse model for human familial adenomatous polyposis, contains a truncating mutation in the Apc gene and spontaneously develops intestinal tumors. Our previous study revealed two distinct stages of tumorigenesis in the colon of the Apc(Min/+) mouse: microadenomas and macroscopic tumors. Microadenomas have already lost their remaining allele of Apc, and all microadenomas show accumulation of beta-catenin, indicating that activation of the canonical Wnt pathway is an initiating event in the tumorigenesis. This study shows that expression of nuclear beta-catenin in macroscopic tumors is further upregulated in comparison with that in microadenomas. Furthermore, transcriptional activity of beta-catenin/T-cell factor (Tcf) signaling, assessed using beta-catenin/Tcf reporter transgenic mice, is higher in the macroscopic tumors than that in microadenomas. In addition, the expression level of Dickkopf-1, which is known to be a negative modifier of the canonical Wnt pathway, was reduced only in colon tumors. These results suggest that activation of beta-catenin/Tcf transcription plays a role not only in the initiation stage but also in the promotion stage of colon carcinogenesis in Apc(Min/+) mice. abstract_id: PUBMED:30451877 An altered gene expression profile in tyramine-exposed intestinal cell cultures supports the genotoxicity of this biogenic amine at dietary concentrations. Tyramine, histamine and putrescine are the most commonly detected and most abundant biogenic amines (BA) in food. The consumption of food with high concentrations of these BA is discouraged by the main food safety agencies, but legal limits have only been set for histamine. The present work reports a transcriptomic investigation of the oncogenic potential of the above-mentioned BA, as assessed in the HT29 human intestinal epithelial cell line. Tyramine had a greater effect on the expression of genes involved in tumorigenesis than did histamine or putrescine. Since some of the genes that showed altered expression in tyramine-exposed cells are involved in DNA damage and repair, the effect of this BA on the expression of other genes involved in the DNA damage response was investigated. The results suggest that tyramine might be genotoxic for intestinal cells at concentrations easily found in BA-rich food. Moreover, a role in promoting intestinal cancer cannot be excluded.
abstract_id: PUBMED:21062980 Hepatocyte nuclear factor-4alpha promotes gut neoplasia in mice and protects against the production of reactive oxygen species. Hepatocyte nuclear factor-4α (Hnf4α) is a transcription factor that controls epithelial cell polarity and morphogenesis. Hnf4α conditional deletion during postnatal development has minor effects on intestinal epithelium integrity but promotes activation of the Wnt/β-catenin pathway without causing tumorigenesis. Here, we show that Hnf4α does not act as a tumor-suppressor gene but is crucial in promoting gut tumorigenesis in mice. Polyp multiplicity in ApcMin mice lacking Hnf4α is suppressed compared with littermate ApcMin controls. Analysis of microarray gene expression profiles from mice lacking Hnf4α in the intestinal epithelium identifies novel functions of this transcription factor in targeting oxidoreductase-related genes involved in the regulation of reactive oxygen species (ROS) levels. This role is supported by the demonstration that HNF4α is functionally involved in the protection against spontaneous and 5-fluorouracil chemotherapy-induced production of ROS in colorectal cancer cell lines. Analysis of a colorectal cancer patient cohort establishes that HNF4α is significantly upregulated compared with adjacent normal epithelial resections. Several genes involved in ROS neutralization are also induced in correlation with HNF4A expression. Altogether, the findings point to the nuclear receptor HNF4α as a potential therapeutic target to eradicate aberrant epithelial cell resistance to ROS production during intestinal tumorigenesis. abstract_id: PUBMED:25960239 Application of the Apc(Min/+) mouse model for studying inflammation-associated intestinal tumor. Chronic inflammatory diseases of the intestinal tract have been known to increase the risk of developing a form of colorectal cancer known as inflammation-associated cancer. The roles of inflammation in tumor formation and development in Apc(Min/+) mice have been broadly corroborated. The Apc(Min/+) mouse model contains a point mutation in the adenomatous polyposis coli (Apc) gene and only develops intestinal precancerous lesions, the benign adenomas. Thus, it provides an excellent in vivo system to investigate the molecular events involved in the inflammatory process which may contribute to multistep tumorigenesis and carcinogenesis. Recent investigations that employ this model studied the effects of gene alterations, intestinal microorganisms, drugs, diet, exercise and sleep on the inflammatory process and tumor development, and revealed the mechanisms involved in the formation, promotion and carcinogenesis of adenomas with the background of inflammation. Herein, we focus our review on the application of the Apc(Min/+) mouse model for studying inflammation-associated intestinal tumors and find that anti-inflammation is a possible strategy for combating intestinal tumors, but sometimes anti-inflammation cannot help reduce tumor burden. Moreover, various inflammation-related genes are involved in different mechanistic stages of tumors in Apc(Min/+) mice, and intricate regulatory effects of inflammation exist in the whole progression of intestinal tumors. abstract_id: PUBMED:30624523 THE CANONICAL WNT PATHWAY IN GASTRIC CARCINOMA. Background: It is believed that the Wnt pathway is one of the most important signaling pathways involved in gastric carcinogenesis. Aim: To analyze the protein expression of canonical and non-canonical Wnt pathways in gastric carcinoma.
Method: Immunohistochemistry was performed on 72 gastric carcinoma specimens to evaluate the expression of Wnt-5a, FZD5, GSK3β, axin, CK1, ubiquitin, cyclin D1 and c-myc. Results: There were significant differences in cytoplasmic and nuclear ubiquitin for moderately and well-differentiated tumors (p=0.03) and for those of the intestinal type of the Lauren classification (p=0.03). The absence of c-myc was related to Lauren's intestinal tumors (p=0.03). Expression of CK1 in the cytoplasm was related to a compromised margin (p=0.03). Expression of cyclin D1 protein was more intense in male patients (p=0.03). There was no relation of the positive or negative expression of Wnt-5a, FZD5, GSK3β and axin to any clinicopathological variables. Conclusion: The canonical WNT pathway is involved in gastric carcinoma. Answer: Yes, bile is suggested to be involved in carcinogenesis following cholecystectomy. A study that evaluated the risk of bowel cancer after cholecystectomy found that patients who had undergone the procedure had an increased risk of proximal intestinal adenocarcinoma, which gradually declined with increasing distance from the common bile duct. The risk was significantly increased for adenocarcinoma and carcinoids of the small bowel, as well as right-sided colon cancer. The study proposed that changes in the intestinal exposure to bile might be the underlying biological mechanism for the increased cancer risk, and this association remained increased up to 33 years after cholecystectomy (PUBMED:11522737).
Instruction: Are higher blood mercury levels associated with dry eye symptoms in adult Koreans? Abstracts: abstract_id: PUBMED:27121705 Are higher blood mercury levels associated with dry eye symptoms in adult Koreans? A population-based cross-sectional study. Objectives: The purpose of this study was to investigate whether blood mercury concentrations are associated with the presence of dry eye symptoms in a nationally representative Korean population. Methods: A population-based prospective cross-sectional study using the heavy metal data set of the 2010-2012 Korean National Health and Nutrition Examination Survey (KNHANES). A total of 4761 adult Koreans were the eligible population in this study. Of the 7162 survey participants, 2401 were excluded because they were <19 years of age, there were missing data in the heavy metal data set, or they had diabetes, rheumatoid arthritis, thyroid disease, asthma, depression and/or under-the-eye surgery. Blood mercury levels were measured on the day the participants completed a questionnaire regarding the presence of dry eye symptoms (persistent dryness or eye irritation). The population was divided into low and high groups by median level (4.26 and 2.89 µg/L for males and females, respectively). Results: Self-reported dry eye symptoms were present in 13.0% of the cohort. Participants with dry eye symptoms were significantly more likely to have blood mercury levels exceeding the median than those without dry eye symptoms (51.7% vs 45.7%, p=0.021). Logistic regression analysis showed that, after adjusting for age, gender, education, total household income, smoking status, heavy alcohol use, sleep time, perceived stress status, total cholesterol levels and atopy history, dry eye symptoms were significantly associated with blood mercury levels that exceeded the median (reference: lower mercury group; OR, 1.324; 95% CI 1.059 to 1.655; p<0.05). Conclusions: High blood mercury levels were associated with dry eye symptoms in a nationally representative Korean population. abstract_id: PUBMED:30369215 Factors Associated with Dry Eye Symptoms in Elderly Koreans: the Fifth Korea National Health and Nutrition Examination Survey 2010-2012. Background: Dry eye disease is an aging-related ophthalmic disease that not only affects daily activities but also causes deterioration in quality of life. This study aimed to evaluate the factors associated with dry eye symptoms in elderly Koreans. Methods: We investigated 4,185 subjects (men=1,787 and women=2,398) aged ≥65 years from the fifth Korea National Health and Nutrition Examination Survey 2010-2012. Data were analyzed using multiple logistic regressions to identify the relationships between dry eye symptoms and other factors. Results: The prevalence of dry eye symptoms was 17.9%. After adjustment for confounding factors, dry eye symptoms were significantly associated with female sex (adjusted odds ratio [aOR], 1.806; 95% confidence interval [CI], 1.410-2.313), a history of cataract (aOR, 1.683; 95% CI, 1.255-2.255), suicidal ideation (aOR, 1.414; 95% CI, 1.070-1.870), hypercholesterolemia (aOR, 1.289; 95% CI, 1.025-1.621), age ≥80 years (aOR, 0.538; 95% CI, 0.337-0.859), and sleep duration ≥9 h/d (aOR, 0.524; 95% CI, 0.330-0.834). Conclusion: Among elderly Koreans, female sex, a history of cataract, suicidal ideation, and hypercholesterolemia may be risk factors for dry eye symptoms, whereas sleep duration ≥9 h/d can be a protective factor against dry eye symptoms.
abstract_id: PUBMED:34441261 Ocular Surface Pathology in Patients Suffering from Mercury Intoxication. Purpose: To report the ocular surface pathology of patients suffering from acute/subacute mercury vapor intoxication. Design: Cross-sectional study. Participants: Male workers intoxicated with inorganic mercury referred for ophthalmic involvement and healthy control subjects. Methods: The following tests were performed: dry eye (DE)-related symptoms indicated by the Ocular Surface Disease Index (OSDI) questionnaire; tear osmolarity; analysis of 23 tear cytokine concentrations and principal component and hierarchical agglomerative cluster analyses; tear break-up time (T-BUT); corneal fluorescein and conjunctival lissamine green staining; tear production by Schirmer and tear lysozyme tests; mechanical and thermal corneal sensitivity (non-contact esthesiometry); and corneal nerve analysis and dendritic cell density by in vivo confocal microscopy (IVCM). Results: Twenty-two out of 29 evaluated patients entered the study. Most had DE-related symptoms (OSDI values > 12), which were severe in 63.6% of them. Tear osmolarity was elevated (>308 mOsm/L) in 83.4% of patients (mean 336.23 (28.71) mOsm/L). Corneal and conjunctival staining were unremarkable. T-BUT was low (<7 s) in 22.7% of patients. Schirmer test and tear lysozyme concentration were low in 13.6% and 27.3% of cases, respectively. Corneal esthesiometry showed patients' mechanical (mean 147.81 (53.36) mL/min) and thermal thresholds to heat (+2.35 (+1.10) °C) and cold (-2.57 (-1.24) °C) to be significantly higher than those of controls. Corneal IVCM revealed lower values for nerve density (6.4 (2.94) n/mm²), nerve branching density (2 (2.50) n/mm²), and dendritic cell density (9.1 (8.84) n/mm²) in patients. Tear levels of IL-12p70, IL-6, RANTES, and VEGF were increased, whereas EGF and IP-10/CXCL10 were decreased compared to controls. Based on cytokine levels, two clusters of patients were identified. Compared to Cluster 1, Cluster 2 patients had significantly increased tear levels of 18 cytokines, decreased tear lysozyme, lower nerve branching density, fewer dendritic cells, and higher urine mercury levels. Conclusions: Patients suffering from systemic mercury intoxication showed symptoms and signs of ocular surface pathology, mainly by targeting the trigeminal nerve, as shown by alterations in corneal sensitivity and sub-basal nerve morphology. abstract_id: PUBMED:29268725 Visual symptoms associated with refractive errors among Thangka artists of Kathmandu valley. Background: Prolonged near work, especially among people with uncorrected refractive error, is considered a potential source of visual symptoms. The present study aims to determine the visual symptoms and their association with refractive errors among Thangka artists. Methods: In a descriptive cross-sectional study, 242 (46.1%) of 525 Thangka artists examined, aged between 16 and 39 years and comprising 112 participants with significant refractive errors and 130 emmetropic participants, were enrolled from six Thangka painting schools. The visual symptoms were assessed using a structured questionnaire consisting of nine items, each scored on a 0 to 6 scale. The eye examination included detailed anterior and posterior segment examination, objective and subjective refraction, and assessment of heterophoria, vergence and accommodation. Symptoms were presented as percentages and medians.
Variation in the distribution of participants and symptoms was analysed using the Kruskal-Wallis test for means, and correlations with the Pearson correlation coefficient. A significance level of 0.05 was applied with 95% confidence intervals. The majority of participants (65.1%) in the refractive error group (REG) were above 30 years of age, with a male predominance (61.6%), compared to the normal cohort group (NCG), where the majority (72.3%) were below 30 years of age and female (51.5%). Results: Overall, visual symptoms are high among Thangka artists. However, blurred vision (p = 0.003) and dry eye (p = 0.004) are higher among the REG than the NCG. Females have slightly higher symptoms than males. Most of the symptoms, such as sore/aching eye (p = 0.003), feeling dry (p = 0.005) and blurred vision (p = 0.02), are significantly associated with astigmatism. Conclusion: Thangka artists present with a significant proportion of refractive errors and visual symptoms, especially among females. The most commonly reported symptoms are blurred vision, dry eye and watering of the eye. The visual symptoms are more correlated with astigmatism. abstract_id: PUBMED:32942541 Evidence of Pepsin-Related Ocular Surface Damage and Dry Eye (PROD Syndrome) in Patients with Laryngopharyngeal Reflux. Background: Patients with laryngopharyngeal reflux (LPR) showed detectable levels of tear pepsin, which may explain nasolacrimal obstruction. The purpose of this study was to determine whether patients with LPR show ocular surface changes and to investigate the relationship between lacrimal pepsin concentration and ocular alterations. Methods: Fifty patients with positive endoscopic signs for LPR and scores of 13 or higher on the Reflux Symptom Index and 7 or higher on the Reflux Finding Score were enrolled. Twenty healthy patients with no reflux disease or dry eye were included as the control group. After evaluation of ocular discomfort symptoms, the tear break-up time test, corneal staining, and tear sampling were performed. Tear pepsin levels were measured using the Pep-test™ kit. Results: Patients with LPR showed ocular surface changes including epithelial damage (48%) and impairment of lacrimal function (72%). Tear pepsin levels were detectable in 32 out of 50 (64%) patients with LPR (mean ± SD: 55.4 ± 67.5 ng/mL) and in none of the control subjects. Most of the LPR patients complained of ocular discomfort symptoms, including itching (38%), redness (56%), or foreign body sensation (40%). Tear pepsin levels were significantly correlated with the severity of LPR disease and with ocular surface changes. Conclusions: A multidisciplinary approach, including ophthalmological evaluation, should be considered in order to improve the management of patients with LPR. abstract_id: PUBMED:32589162 Stratification of Individual Symptoms of Contact Lens-Associated Dry Eye Using the iPhone App DryEyeRhythm: Crowdsourced Cross-Sectional Study. Background: Discontinuation of contact lens use is mainly caused by contact lens-associated dry eye. It is crucial to delineate contact lens-associated dry eye's multifaceted nature to tailor treatment to each patient's individual needs for future personalized medicine. Objective: This paper aims to quantify and stratify individual subjective symptoms of contact lens-associated dry eye and clarify its risk factors for future personalized medicine using the smartphone app DryEyeRhythm (Juntendo University).
Methods: This cross-sectional study included iPhone (Apple Inc) users in Japan who downloaded DryEyeRhythm. DryEyeRhythm was used to collect medical big data related to contact lens-associated dry eye between November 2016 and January 2018. The main outcome measure was the incidence of contact lens-associated dry eye. Univariate and multivariate adjusted odds ratios of risk factors for contact lens-associated dry eye were determined by logistic regression analyses. The t-distributed Stochastic Neighbor Embedding algorithm was used to depict the stratification of subjective symptoms of contact lens-associated dry eye. Results: The records of 4454 individuals (median age 27.9 years, SD 12.6), including 2972 female participants (66.73%), who completed all surveys were included in this study. Among the included participants, 1844 (41.40%) were using contact lenses, and among those who used contact lenses, 1447 (78.47%) had contact lens-associated dry eye. Multivariate adjusted odds ratios of risk factors for contact lens-associated dry eye were as follows: younger age, 0.98 (95% CI 0.96-0.99); female sex, 1.53 (95% CI 1.05-2.24); hay fever, 1.38 (95% CI 1.10-1.74); mental illness other than depression or schizophrenia, 2.51 (95% CI 1.13-5.57); past diagnosis of dry eye, 2.21 (95% CI 1.63-2.99); extended screen exposure time >8 hours, 1.61 (95% CI 1.13-2.28); and smoking, 2.07 (95% CI 1.49-2.88). The t-distributed Stochastic Neighbor Embedding analysis visualized and stratified 14 groups based on the subjective symptoms of contact lens-associated dry eye. Conclusions: This study identified and stratified individuals with contact lens-associated dry eye and its risk factors. Data on subjective symptoms of contact lens-associated dry eye could be used for prospective prevention of contact lens-associated dry eye progression. abstract_id: PUBMED:30746909 Association between Three Heavy Metals and Dry Eye Disease in Korean Adults: Results of the Korean National Health and Nutrition Examination Survey. Purpose: To investigate the associations between blood heavy metal concentrations and dry eye disease using a Korean population-based survey. Methods: This study included 23,376 participants >40 years of age who participated in the Korean National Health and Nutrition Examination Survey from 2010 to 2012. Blood concentrations of lead, cadmium, and mercury were measured in all participants. The associations between blood heavy metal concentrations and dry eye disease were assessed using multivariate logistic regression analyses. Results: After adjusting for potential confounders, including age, sex, lifestyle behaviors and sociodemographic factors, the analyses revealed an increased odds ratio (OR) for dry eye disease with higher blood mercury concentrations (tertile 2: OR, 1.22; 95% confidence interval [CI], 0.91 to 1.64; tertile 3: OR, 1.39; 95% CI, 1.02 to 1.89; p = 0.039). The prevalence of dry eye disease was not associated with blood lead (tertile 2: OR, 1.15; 95% CI, 0.87 to 1.51; tertile 3: OR, 0.83; 95% CI, 0.59 to 1.16; p = 0.283) or cadmium (tertile 2: OR, 1.05; 95% CI, 0.77 to 1.44; tertile 3: OR, 1.15; 95% CI, 0.84 to 1.58; p = 0.389) concentrations. There were no significant associations between any of the three heavy metals and dry eye disease in males after adjusting for potential confounding factors, but blood mercury concentrations in females were associated with dry eye disease (tertile 2: OR, 1.18; 95% CI, 0.83 to 1.69; tertile 3: OR, 1.58; 95% CI, 1.12 to 2.24; p = 0.009).
Conclusions: Mercury concentrations in blood were associated with dry eye disease. Our results suggested that controlling environmental exposure to mercury may be necessary to reduce the incidence of dry eye disease. abstract_id: PUBMED:37063600 Mercury intoxication and ophthalmic involvement: An update review. Human intoxication after mercury exposure is a rare condition that can cause severe damage to the central nervous, respiratory, cardiovascular, renal, gastrointestinal, skin, and visual systems and represents a major public health concern. Ophthalmic involvement includes impaired function of the extraocular muscles and the eyelids, as well as structural changes in the ocular surface, lens, retina, and optic nerve, causing potentially irreversible damage to the visual system. Although there are many pathways for poisoning depending on the mercury form, it has been suggested that tissue distribution does not differ in experimental animals when administered as mercury vapor, organic mercury, or inorganic mercury. Additionally, visual function alterations regarding central visual acuity, color discrimination, contrast sensitivity, visual field and electroretinogram responses have also been described widely. Nevertheless, there is still controversy about whether visual manifestations occur secondary to brain damage or as a direct effect on the eye, and which ocular structure is primarily affected. Although the use of imaging techniques such as in vivo confocal microscopy of the cornea and optical coherence tomography (OCT) of the retina and optic nerve, and functional tests such as electroretinography, has helped to partly resolve this debate, further studies incorporating other imaging modalities such as autofluorescence, OCT angiography or adaptive optics retinal imaging are needed. This review aims to summarize the published structural and functional alterations found in the visual system of patients suffering from mercury intoxication. abstract_id: PUBMED:28919183 Impact of oral vitamin D supplementation on the ocular surface in people with dry eye and/or low serum vitamin D. Purpose: To determine the possible association between serum vitamin D levels and dry eye symptoms, and the impact of an oral vitamin D supplement. Methods: Three linked studies were performed: (i) 29 older adult participants, (ii) 29 participants with dry eye, and (iii) a 2-month vitamin D supplementation study in 32 participants with dry eye and/or low serum vitamin D levels. All participants were assessed by the Ocular Surface Disease Index (OSDI) to determine dry eye symptoms, and by the phenol red thread test (PRT) and/or Schirmer's tear test, tear meniscus height, non-invasive tear break-up time, grading of ocular surface redness and fluorescein staining of the cornea to assess tear quality and ocular surface conditions. Blood samples were collected for serum vitamin D analysis and interleukin-6 (IL-6) levels. Results: Among older adult participants, vitamin D levels were negatively correlated with dry eye symptoms and the severity of dry eye, and were associated with the tired eye symptom. Vitamin D levels of people with a dry eye diagnosis were not correlated with OSDI scores or IL-6 levels, while IL-6 levels showed a correlation with tear production. In the supplementation study, vitamin D levels increased by 29 nmol/l, while dry eye symptoms and grading of corneal staining showed significant reductions. No significant changes in IL-6 levels.
Conclusions: Low vitamin D levels (<50 nmol/l) were associated with dry eye symptoms in older individuals but not in those diagnosed with dry eye. Vitamin D supplementation increased vitamin D levels and improved dry eye symptoms, tear quality and ocular surface conditions. abstract_id: PUBMED:31744796 Blood derived treatment from two allogeneic sources for severe dry eye associated to keratopathy: a multicentre randomised cross over clinical trial. Aim: To compare the efficacy of cord blood and peripheral adult donor blood serum eyedrops, controlled for growth factor content, in the treatment of severe dry eye disease (DED) resistant to conventional therapy. Methods: This was a multicentre randomised, double-masked, cross-over clinical trial. Sixty patients diagnosed with severe DED associated with persistent corneal epithelial defects were randomised and equally assigned to group A (treated with cord blood serum (CBS)) or group B (treated with PBS), with eyedrops administered eight times/day for 1 month. Primary outcome was the pretreatment and post-treatment change in corneal fluorescein staining. Secondary outcomes included the pretreatment and post-treatment change in Ocular Surface Disease Index (OSDI) questionnaire and Visual Analogue Score (VAS) of subjective symptoms, Schirmer I test, tear break-up time and conjunctival staining. Patients with relapse in signs or symptoms after a further 2 months switched to the remaining group for one additional month. Data were statistically analysed (p<0.05). Results: Corneal staining was more significantly reduced after CBS treatment; VAS and OSDI score reductions were observed in both groups, but group A reported significantly less grittiness and pain. Nineteen patients switched in the crossover period, and the within-individual comparison confirmed a better recovery in the CBS treatment period. Reduction in epithelial damage was positively associated with epidermal growth factor, transforming growth factor α and platelet-derived growth factor content. Levels of interleukin-13 (IL-13) were positively associated with symptom decrease. Conclusions: Overall, DED signs improved after both CBS and PBS treatments, with potential advantages of CBS for subjective symptoms and corneal damage reduction. Clinical Trial Registration: NCT03064984. Answer: Yes, higher blood mercury levels are associated with dry eye symptoms in adult Koreans. A population-based cross-sectional study using data from the Korean National Health and Nutrition Examination Survey (KNHANES) found that participants with dry eye symptoms were significantly more likely to have blood mercury levels exceeding the median than those without dry eye symptoms. After adjusting for various factors, the study concluded that dry eye symptoms were significantly associated with blood mercury levels that exceeded the median (PUBMED:27121705). Another study from the same survey also reported that blood mercury concentrations were associated with an increased odds ratio for dry eye disease after adjusting for potential confounders (PUBMED:30746909). These findings suggest that there is a relationship between blood mercury levels and the presence of dry eye symptoms in the Korean adult population.
Instruction: Is sentinel node biopsy feasible in endometrial cancer? Abstracts: abstract_id: PUBMED:16319767 Is sentinel node biopsy feasible in endometrial cancer? Results in 26 patients. Objectives: To evaluate the detection rate, topography and false negatives of the sentinel lymph node in endometrial cancer. Material And Methods: Twenty-six patients were included. Lymphoscintigraphy was performed the day before surgery. Intraoperative detection of the sentinel lymph node was performed with cervical blue dye injection and a gamma probe. Separate pathology examinations were performed for sentinel and non-sentinel lymph nodes. Sentinel lymph nodes were examined with hematoxylin-eosin-saffron stain, and immunohistochemistry if negative. Results: Twenty-six patients had a positive lymphoscintigraphy. Intraoperative detection was successful in 21 patients (80.8%): the detection rate with the isotopic method (19 cases, 73.1%) was superior to that with the blue dye (15 cases, 57.7%). No isolated lumbo-aortic sentinel lymph nodes were observed, and all sentinel lymph nodes were in the ilio-obturator region. Seven patients presented with lymphatic spread, and 4 of them had at least one sentinel node. There was one micrometastasis in a sentinel node, associated with isolated tumoral cells in the pelvic lymphadenectomy specimen. There were no false-negative sentinel nodes. Conclusion: The biopsy of the sentinel lymph node is a feasible procedure in endometrial cancer. There was one micrometastatic sentinel node. However, there was no isolated lumbo-aortic sentinel lymph node in this study. abstract_id: PUBMED:33299284 A Feasibility Study of Sentinel Lymph Node Biopsy in Endometrial Cancer Using Technetium 99m Nanocolloid. To study the feasibility of sentinel node biopsy in early-stage endometrial cancer and to analyse the detection rate of the sentinel lymph node (SLN) using preoperative cervical injection of Tc99m nanocolloid. Thirty-five patients with preoperative histological diagnosis of endometrial cancer without any extrauterine involvement on imaging were included in the study. Sentinel node mapping was done by cervical injection of Tc99m nanocolloid on the evening before surgery. Scintigraphic images were taken using a gamma camera. Intraoperatively, nodes showing radioactivity were detected using a hand-held gamma probe, dissected out separately and labelled as sentinel lymph nodes. Detection rate was calculated and analysed with respect to various parameters. Sentinel lymph node biopsy (SLNB) is feasible in endometrial cancer using cervical injection of Tc99m nanocolloid. SLN detection was done in 33 (94.3%) out of 35 patients. Bilateral detection was feasible in 19 patients (54.3%), with detection in the left and right hemipelvis being 74.3%. The detection rate of SLN was 93.7% in endometrioid adenocarcinoma. The sentinel node was detected in all the patients with non-endometrioid histology. SLNB using cervical injection of Tc99m nanocolloid is feasible in endometrial cancer. It is a safe and easily reproducible technique with a good detection rate and high sensitivity. Stage of the tumour, grade and myometrial invasion do not seem to have an influence on sentinel node detection. Cervical involvement, enlarged lymph nodes and obstructed lymphatics can affect sentinel node mapping adversely. abstract_id: PUBMED:28213057 Utilization of sentinel lymph node biopsy for uterine cancer. Background: To limit the potential short- and long-term morbidity of lymphadenectomy, sentinel lymph node biopsy has been proposed for endometrial cancer.
The principle of sentinel lymph node biopsy relies on removal of a small number of lymph nodes that are the first drainage basins from a tumor and thus the most likely to harbor tumor cells. While the procedure may reduce morbidity, efficacy data are limited and little is known about how commonly the procedure is performed. Objective: We examined the patterns and predictors of use of sentinel lymph node biopsy and outcomes of the procedure in women with endometrial cancer who underwent hysterectomy. Study Design: We used the Perspective database to identify women with uterine cancer who underwent hysterectomy from 2011 through 2015. Billing and charge codes were used to classify women as having undergone lymphadenectomy, sentinel lymph node biopsy, or no nodal assessment. Multivariable models were used to examine clinical, demographic, and hospital characteristics associated with use of sentinel lymph node biopsy. Length of stay and cost were compared among the different methods of nodal assessment. Results: Among 28,362 patients, 9327 (32.9%) did not undergo nodal assessment, 17,669 (62.3%) underwent lymphadenectomy, and 1366 (4.8%) underwent sentinel lymph node biopsy. Sentinel lymph node biopsy was performed in 1.3% (95% confidence interval, 1.0-1.6%) of abdominal hysterectomies, 3.4% (95% confidence interval, 2.7-4.1%) of laparoscopic hysterectomies, and 7.5% (95% confidence interval, 7.0-8.0%) of robotic-assisted hysterectomies. In a multivariable model, more recent year of surgery was associated with performance of sentinel lymph node biopsy. Compared to abdominal hysterectomy, those undergoing laparoscopic (adjusted risk ratio, 2.45; 95% confidence interval, 1.89-3.18) and robotic-assisted (adjusted risk ratio, 2.69; 95% confidence interval, 2.19-3.30) hysterectomy were more likely to undergo sentinel lymph node biopsy. Among women who underwent minimally invasive hysterectomy, length of stay and cost were lower for sentinel lymph node biopsy compared to lymphadenectomy. Conclusion: The use of sentinel lymph node biopsy for endometrial cancer increased from 2011 through 2015. The increased use was most notable in women who underwent a robotic-assisted hysterectomy. abstract_id: PUBMED:36362690 Applications and Safety of Sentinel Lymph Node Biopsy in Endometrial Cancer. Lymph node status is important in predicting the prognosis and guiding adjuvant treatment in endometrial cancer. However, previous studies showed that systematic lymphadenectomy conferred no therapeutic value in clinically early-stage endometrial cancer but might lead to substantial morbidity and an impact on patients' quality of life. The sentinel lymph node is the first lymph node that tumor cells drain to, and sentinel lymph node biopsy has emerged as an acceptable alternative to full lymphadenectomy in both low-risk and high-risk endometrial cancer. Evidence has demonstrated a high detection rate, sensitivity and negative predictive value of sentinel lymph node biopsy. It can also reduce surgical morbidity and improve the detection of lymph node metastases compared with systematic lymphadenectomy. This review summarizes the current techniques of sentinel lymph node mapping, the applications and oncological outcomes of sentinel lymph node biopsy in low-risk and high-risk endometrial cancer, and the management of isolated tumor cells in sentinel lymph nodes.
We also illustrate a revised sentinel lymph node biopsy algorithm and advocate repeating the tracer injection and exploring the presacral and paraaortic areas if sentinel lymph nodes are not found in the hemipelvis. abstract_id: PUBMED:35576340 Utilization and Outcomes of Sentinel Lymph Node Biopsy for Early Endometrial Cancer. Objective: To examine trends, characteristics, and oncologic outcomes of sentinel lymph node biopsy for early endometrial cancer. Methods: This observational study queried the National Cancer Institute's Surveillance, Epidemiology, and End Results Program by examining 83,139 women with endometrial cancer who underwent primary hysterectomy with nodal evaluation for T1 disease from 2003 to 2018. Primary outcome measures were the temporal trends in utilization of sentinel lymph node biopsy and patient characteristics associated with sentinel lymph node biopsy use, assessed by multivariable binary logistic regression models. Secondary outcome measure was endometrial cancer-specific mortality associated with sentinel lymph node biopsy, assessed by propensity score inverse probability of treatment weighting. Results: The utilization of sentinel lymph node biopsy increased from 0.2% to 29.7% from 2005 to 2018 (P<.001). The uptake was higher for women with endometrioid (0.3-31.6% between 2005 and 2018) compared with nonendometrioid (0.6-21.0% between 2006 and 2018) histologic subtypes (both P<.001). In a multivariable analysis, more recent year of surgery, endometrioid histology, well-differentiated tumors, T1a disease, and smaller tumor size were independently associated with sentinel lymph node biopsy use (P<.05). Performance of sentinel lymph node biopsy was not associated with increased endometrial cancer-specific mortality compared with lymphadenectomy for endometrioid tumors (subdistribution hazard ratio [HR] 0.96, 95% CI 0.82-1.13) or nonendometrioid tumors (subdistribution HR 0.85, 95% CI 0.69-1.04). For low-risk endometrial cancer, the increase in sentinel lymph node biopsy resulted in a 15.3 percentage-point (1.4-fold) increase in surgical nodal evaluation by 2018 (expected vs observed rates, 37.8 vs 53.1%). Conclusion: The landscape of surgical nodal evaluation is shifting from lymphadenectomy to sentinel lymph node biopsy for early endometrial cancer in the United States, with no indication of a negative effect on cancer-specific survival. abstract_id: PUBMED:15380747 Value of the sentinel node biopsy in uterine cancers. In cancer research, regional lymph node status is a major prognostic factor and a decision criterion for adjuvant therapy. The sentinel node procedure, which has emerged to reduce the morbidity of extensive lymphadenectomy, remains a major step in the surgical management of various cancers. The sentinel node procedure has become a standard technique for the determination of the nodal stage of the disease in patients with melanoma, vulvar cancer and recently in breast cancer. In cervical and endometrial cancers, the sentinel node biopsy is still at the stage of feasibility. In this article, we review the technical aspects, results and clinical implications of the sentinel node procedure in cervical and endometrial cancers. abstract_id: PUBMED:38006759 Sentinel-node biopsy in apparent early stage ovarian cancer: final results of a prospective multicentre study (SELLY).
Aim: To evaluate the sensitivity and specificity of sentinel-lymph-node mapping compared with the gold standard of systematic lymphadenectomy in detecting lymph node metastasis in apparent early stage ovarian cancer. Methods: A multicenter, prospective, phase II trial conducted in seven centers from March 2018 to July 2022. Patients with presumed stage I-II epithelial ovarian cancer planned for surgical staging were eligible. Patients received injection of indocyanine green in the infundibulo-pelvic and, when feasible, utero-ovarian ligaments, and sentinel lymph node biopsy followed by pelvic and para-aortic lymphadenectomy was performed. Histopathological examination of all nodes was performed, including an ultra-staging protocol for the sentinel lymph node. Results: 174 patients were enrolled and 169 (97.1%) received study interventions. 99 (58.6%) patients had successful mapping of at least one sentinel lymph node and 15 (15.1%) of them had positive nodes. Of these, 11 of 15 (73.3%) had a correct identification of the disease in the sentinel lymph node; 7 of 11 (63.6%) required the ultra-staging protocol to detect nodal metastasis. Four (26.7%) patients with node-positive disease had a negative sentinel lymph node (sensitivity 73.3% and specificity 100.0%). Conclusions: In a multicenter setting, identifying sentinel lymph nodes in apparent early stage epithelial ovarian cancer did not reach the expected sensitivity: 1 of 4 patients might have metastatic lymphatic disease unrecognized by sentinel-lymph-node biopsy. Nevertheless, 35.0% of node-positive patients were identified only thanks to the ultra-staging protocol on sentinel lymph nodes. abstract_id: PUBMED:24475571 Sentinel node biopsy in endometrial cancer: systematic review and meta-analysis of the literature. Purpose: Sentinel lymph node biopsy is a fairly new approach for staging of gynecological malignancies. In the current study, the authors comprehensively reviewed the available reports on sentinel node biopsy of endometrial cancer. Materials And Methods: The authors searched Medline, SCOPUS, ISI web of knowledge, Science Direct, Springer, OVID SP, and Google Scholar with the following search terms: "endometrium OR endometrial OR uterine OR uterus AND sentinel". The outcomes of interest were detection rate and sensitivity. Results: Overall, 35 studies had enough information for false negative rate evaluation and 51 studies (including the sub-groups of individual studies) for detection rate evaluation (2,071 patients overall). Pooled detection rate was 77.8% (95% CI: 73.5-81.5%) and pooled sensitivity was 89% (95% CI: 83-93%). Cervical injection, as well as using both blue dye and radiotracer, results in a higher detection rate and sensitivity. New techniques such as fluorescent dye injection and robotic-assisted surgery showed a high detection rate and sensitivity. Conclusion: Sentinel node mapping is feasible in endometrial cancer. Using both blue dye and radiotracer and cervical injection of the mapping material can optimize the sensitivity and detection rate of this technique. Larger studies are still needed to evaluate the false negative rate and the factors influencing the sensitivity before considering this method safe. abstract_id: PUBMED:38041023 A multicenter noninferior randomized controlled study of sentinel lymph node biopsy alone versus sentinel lymph node biopsy plus lymphadenectomy for patients with stage I endometrial cancer, INSEC trial concept.
Background: Up to the present time, there has remained a lack of strong evidence as to whether sentinel lymph node biopsy can replace lymphadenectomy for early endometrial cancer. The traditional surgery for endometrial cancer includes pelvic lymphadenectomy and paraaortic lymph node resection, but complications often seriously affect patients' quality of life. Two randomized controlled trials with large samples have proved that lymphadenectomy does not improve the overall recurrence rate and survival rate of patients. On the contrary, it increases the incidence of complications and even mortality. The current trial is designed to clarify whether sentinel lymph node biopsy can replace lymphadenectomy for early endometrial cancer patients with negative lymph nodes. Methods: This study is a randomized, open-label, multicenter and non-inferiority controlled clinical trial in China. Potential participants will be patients with pathologically confirmed endometrial cancer at the Zhejiang Cancer Hospital, Jiaxing Maternity and Child Health Care Hospital, and the First Hospital of Jiaxing in China. The total sample size for this study is 722. Patients will be randomly assigned in a 1:1 ratio to two groups. Patients in one group will undergo sentinel lymph node biopsy + total hysterectomy + bilateral salpingo-oophorectomy ± paraaortic lymph node resection. Patients in the other group will undergo sentinel lymph node biopsy + total hysterectomy + bilateral salpingo-oophorectomy + pelvic lymphadenectomy ± paraaortic lymph node resection. The 3-year disease-free survival rate, overall survival rate, quality of life (using the EORTC QLQ-C30 and QLQ-CX24), and perioperative indexes of the two groups will be compared. Results: We expect to find that for patients with early endometrial cancer, the 3-year disease-free survival rate following sentinel lymph node biopsy with indocyanine green combined with near-infrared fluorescence imaging is similar to that following lymphadenectomy. The operation time and the incidence of pelvic lymphocyst, lower-limb lymphedema, and vulvar edema in patients who undergo only sentinel lymph node biopsy are expected to be significantly lower than in patients who undergo lymphadenectomy. The quality of life of patients who undergo sentinel lymph node biopsy alone will be significantly better than that of patients who undergo lymph node dissection. Conclusion: This will prove that the prognosis of sentinel lymph node biopsy alone with indocyanine green combined with near-infrared fluorescence imaging is not inferior to that of sentinel lymph node biopsy plus lymphadenectomy for early stage endometrial cancer with negative nodal assessment intraoperatively. In addition, sentinel lymph node biopsy alone with indocyanine green combined with near-infrared fluorescence imaging results in fewer surgical complications and gives patients better quality of life. Trial Registration: chictr.org.cn, ChiCTR1900023161. Registered 14 May 2019, http://www.chictr.org.cn/edit.aspx?pid=38659&htm=4. abstract_id: PUBMED:34035551 Status of Sentinel Lymph Node Biopsy in Endometrial Cancer. The role of lymphadenectomy in surgical staging remains one of the biggest controversies in the management of endometrial cancer. The concept of sentinel lymph node biopsy in endometrial cancer has been evaluated for a number of years, with promising sensitivity rates and negative predictive values.
The possibility of adequate staging while avoiding systematic lymphadenectomy leads to a significant reduction in the rate of peri- and postoperative morbidity. Nevertheless, the status of sentinel lymph node biopsy in endometrial cancer has not yet been fully elucidated and is variously assessed internationally. According to current European guidelines and recommendations, sentinel lymph node biopsy in endometrial cancer should be performed only in the context of clinical studies. In this review article, the developments of the past decade are explored concisely. In addition, current data regarding the technical aspects, accuracy and prognostic relevance of sentinel lymph node biopsy are explained and evaluated critically. Answer: Yes, sentinel node biopsy is indeed feasible in endometrial cancer, as evidenced by multiple studies. Sentinel lymph node (SLN) detection using preoperative cervical injection of Tc99m nanocolloid was found to be high, with a detection rate of 94.3% in one study of 35 patients (PUBMED:33299284). Another study with 26 patients reported a successful intraoperative detection rate of 80.8% (PUBMED:16319767). Furthermore, sentinel lymph node biopsy has been shown to have a high detection rate, sensitivity, and negative predictive value, and it can reduce surgical morbidity while improving the detection of lymph node metastases compared to systematic lymphadenectomy (PUBMED:36362690). The use of sentinel lymph node biopsy for endometrial cancer has been increasing, particularly in women who undergo robotic-assisted hysterectomy (PUBMED:28213057). A systematic review and meta-analysis also supported the feasibility of sentinel node mapping in endometrial cancer, with pooled detection rates and sensitivity indicating that the technique is promising (PUBMED:24475571). Moreover, sentinel lymph node biopsy is being explored as an alternative to full lymphadenectomy to limit potential morbidity. It is considered an acceptable alternative in both low-risk and high-risk endometrial cancer (PUBMED:36362690). The utilization of sentinel lymph node biopsy has been increasing over time, and it has not been associated with increased endometrial cancer-specific mortality compared with lymphadenectomy (PUBMED:35576340). However, it is important to note that the status of sentinel lymph node biopsy in endometrial cancer has not been fully elucidated and is variously assessed internationally. Some guidelines recommend that sentinel lymph node biopsy in endometrial cancer should be performed only in the context of clinical studies (PUBMED:34035551). In conclusion, sentinel node biopsy is a feasible procedure in endometrial cancer with a high detection rate and sensitivity, and it is increasingly being used as an alternative to full lymphadenectomy. However, further research and clinical trials are needed to fully establish its role in the management of endometrial cancer.
Instruction: Does repeated hyperbaric exposure to 4 atmosphere absolute cause hearing impairment? Abstracts: abstract_id: PUBMED:14501446 Does repeated hyperbaric exposure to 4 atmosphere absolute cause hearing impairment? Study in Guinea pigs and clinical incidences. Hypothesis: Direct pressure applied on the inner ear cannot induce hearing loss. Background: Three possible causes have been described in the literature for inner ear permanent lesions during scuba diving: pressure imbalance between the middle ear and the external ear, appearance of microbubbles in the internal ear, and direct effect of pressure on the inner ear. We seek to determine whether this last factor can be involved. Methods: We submitted two groups of guinea pigs previously implanted with an electrode in the round window to a protocol of air diving in a hyperbaric chamber. Eardrums of animals in one of the two groups had been perforated beforehand. Twenty dives were performed over 4 weeks. We chose dive parameters consistent with common sport diving: maximal pressure of 4 atmosphere absolute and duration of 30 minutes. Auditory threshold and cochlear spontaneous activity were recorded at regular intervals. Furthermore, we recorded spontaneous cochlear activity in Heliox 400-m and 600-m dives to determine whether our conclusions hold for "extreme" diving. Results: In the group with perforated eardrums, no variation of those parameters was recorded, even in extreme diving. Important variations were noticed in the other group. Conclusions: Pressure applied directly on the inner ear during diving does not disturb cochlear activity. abstract_id: PUBMED:12839352 Hyperbaric oxygen therapy: current trends and applications. Hyperbaric medicine is the fascinating use of barometric pressure for delivering increased oxygen dissolved in plasma to body tissues. Hyperbaric oxygen therapy (HOT) or hyperbaric oxygen (HBO) involves intermittent inhalation of 100% oxygen under a pressure exceeding that of the atmosphere, that is, greater than 1 atmosphere absolute (ATA). Therapy is given in special therapeutic chambers which were earlier used primarily to treat illnesses of deep-sea divers. There has recently been a renewed interest in this field all over the world. Acute traumatic wounds, crush injuries, burns, gas gangrene and compartment syndrome are indications where the addition of hyperbaric oxygen may be life and limb saving. Patients suffering from non-healing ulcers, decubitus ulcers (bed sores) and all late sequelae of radiation therapy also benefit from HBO therapy. Acute hearing loss and many neurological illnesses are also now known to possibly benefit from hyperbaric oxygen therapy. This article aims to give a brief overview of the rationale, existing trends and applications of this therapy. abstract_id: PUBMED:11845376 Effects of Repetitive Exposure to Hyperbaric Oxygen (HBO) on Leukocyte Function. Objective: Despite favourable clinical data on the successful use of hyperbaric oxygen (HBO), only limited investigations have been carried out to date regarding the influence of hyperoxia on leukocyte function. In a murine model, the CD4+ T-cell population remained unchanged after repeated HBO exposure; however, CD8+ cells were found to be increased. The aim of this study was to investigate whether repetitive exposure to hyperoxia would affect human monocyte and lymphocyte function.
Methods: After Ethics Committee approval, the effects of elevated partial oxygen pressure were studied in the course of a ten-day HBO therapy (2.5 atmospheres absolute over a daily period of 90 min). Monocytes and lymphocytes of 30 patients with acute hearing loss were determined by flow cytometry before, throughout and after HBO therapy using monoclonal antibodies to CD3, CD4, CD8, CD14, CD25, CD45 and HLA-DR. Statistical analysis was performed by ANOVA (analysis of variance). Results: The relative percentage of CD3+, CD4+, CD8+, CD25+, CD14+, and HLA-DR+ cells remained unchanged during the course of and after HBO therapy. Conclusions: We conclude that repetitive exposure to hyperoxia does not influence human monocyte and lymphocyte functions, in contrast to experimental data. abstract_id: PUBMED:22183701 Suitability of the partially implantable active middle-ear amplifier Vibrant Soundbridge® to hyperbaric exposure. Introduction: Active middle-ear amplifiers represent a modern possibility to treat sensorineural, conductive and combined hearing loss. They may be in use in divers and in patients who need hyperbaric oxygen therapy. Therefore, active middle-ear amplifiers have to be tested to determine whether or not they are prone to implosion or function loss in hyperbaric conditions. Material And Methods: We asked three of the companies registered by the German health authorities as manufacturers of active middle-ear amplifiers to test their devices in hyperbaric conditions. Med-El agreed to support the study; Envoy stated that their devices were unable to withstand a pressure of 608 kPa; Otologics had no capacity to take part in this study. Twelve Vibrant Soundbridge® (Med-El) middle-ear amplifiers were tested in a water bath in a hyperbaric chamber. Four devices were pressurised to a maximum of 284 kPa, four devices to 405 kPa and four devices to 608 kPa, each for a maximum dive time of 78 minutes. Results: The functions of the 12 devices were tested by the manufacturer pre- and post-hyperbaric exposure. Visual inspections as well as laboratory function tests were normal in all 12 devices after hyperbaric exposure. Discussion And Conclusion: Hyperbaric exposure to more than one bar of pressure difference can result in structural damage, implosion or loss of function of the mechanical device. The Vibrant Soundbridge® middle-ear amplifier tolerated a single hyperbaric exposure to pressures of up to 608 kPa for 78 minutes with no loss of performance. abstract_id: PUBMED:17593107 Hyperbaric oxygen therapy for interstitial cystitis resistant to conventional treatments. We used hyperbaric oxygen (HBO) to treat two cases of interstitial cystitis (IC) that were resistant to some conventional therapies. Both patients underwent 20 sessions of 100% oxygen inhalation (2.0 atmosphere absolute for 60 min/day x 5 days/week for 4 weeks) in a hyperbaric chamber. The period of follow-up was 12 months for case 1 and 9 months for case 2. After a course of HBO, the bladder mucosal ulcer (Hunner's ulcer) disappeared, and the improvements from baseline in pain and urinary frequency were persistently maintained. There were no adverse events during the 20 treatment sessions. One woman (case 1) had mild Eustachian tube dysfunction, resulting in a transient hearing impairment. HBO seems to be an option for treatment of IC resistant to conventional therapies. abstract_id: PUBMED:26162417 Repeated Moderate Noise Exposure in the Rat--an Early Adulthood Noise Exposure Model.
In this study, we investigated the effects of varying intensity levels of repeated moderate noise exposures on hearing. The aim was to define an appropriate intensity level that could be repeated several times without giving rise to a permanent hearing loss, and thus establish a model for early adulthood moderate noise exposure in rats. Female Sprague-Dawley rats were exposed to broadband noise for 90 min, with a 50% duty cycle, at levels of 101, 104, 107, or 110 dB sound pressure level (SPL), and compared to a control group of non-exposed animals. Exposure was repeated every 6 weeks for a maximum of six repetitions or until a permanent hearing loss was observed. Hearing was assessed by the auditory brainstem response (ABR). Rats exposed to the higher intensities of 107 and 110 dB SPL showed permanent threshold shifts following the first exposure, while rats exposed to 101 and 104 dB SPL could be exposed at least six times without a sustained change in hearing thresholds. ABR amplitudes decreased over time for all groups, including the non-exposed control group, while the latencies were unaffected. A possible change in noise susceptibility following the repeated moderate noise exposures was tested by subjecting the animals to a high-intensity noise exposure of 110 dB for 4 h. Rats previously exposed repeatedly to 104 dB SPL were slightly more resistant to high-intensity noise exposure than non-exposed rats or rats exposed to 101 dB SPL. Repeated moderate exposure to 104 dB SPL broadband noise is a viable model for early adulthood noise exposure in rats and may be useful for studying the effects of noise exposure on age-related hearing loss. abstract_id: PUBMED:19860136 Hyperbaric oxygen improves nasal air flow. Objective: We investigated whether hyperbaric oxygen (HBO2) treatment is able to cause any changes in the nasal peak inspiratory flow (NPIF) values of patients submitted to this therapy. Study Design: NPIF was measured in a group of 13 patients who were submitted to at least 10 sessions of 75-minute HBO2 treatments over a period of 20 days. HBO2 was prescribed to the patients to treat hearing loss, diabetic ulcers or chronic inflammatory disease. Three timings were chosen to perform the NPIF measurements: during HBO2, five minutes before, and five minutes after the treatment. Methods: For NPIF evaluation, the highest inspiratory flow of three inspirations was recorded. To search for statistical differences between NPIF measurements at the three different timings of the HBO2 treatment, we analysed the data using the repeated-measures ANOVA test with the Epsilon lower-bound correction for the F ratio. Results: NPIF values were significantly higher when the patients were inside the HBO2 chamber compared with NPIF measurements obtained in the same individuals five minutes before starting or five minutes after ending the treatment. A small but significant increase in NPIF values was detected in patients five minutes after stopping the HBO2 treatment, in comparison with values obtained five minutes before initiating the therapy. NPIF values remained stable across the 10 HBO2 sessions, i.e. with repetition of the HBO2 treatments, NPIF values were not further enhanced. Conclusions: Exposure to HBO2 causes significant improvement in nasal air flow. This increase is restricted mostly to the period during which the patients are inside the hyperbaric chamber.
Further investigations are needed to determine the relative contributions of the enhancement in air pressure and in oxygen concentration (which characterize HBO2) to the enhancement of nasal air flow. The present finding may be helpful in future investigations on the treatment of nasal or sinus diseases. abstract_id: PUBMED:8677379 Therapeutic effect of hyperbaric oxygenation in acute acoustic trauma. Retrospectively, 78 patients with uni- or bilateral acute acoustic trauma (AAT) were evaluated to assess the therapeutic effect of hyperbaric oxygenation (HBO). All subjects received saline or dextran (Rheomacrodex) infusions with Ginkgo extracts (Tebonin) and prednisone. Thirty-six patients underwent additional hyperbaric oxygenation at a pressure of 2 atmospheres absolute for 60 minutes once daily. Both treatment groups were comparable as far as age, gender, initial hearing loss and prednisone dose are concerned. The delay of therapy onset was 15 hours in both groups and treatment was started within 72 hours in all cases. Control audiometry was performed after 6.5 days, when the HBO group had had 5 exposures to hyperbaric oxygenation. The average hearing gain in the group without HBO was 74.3 dB and in the group treated additionally with HBO 121.3 dB (P < 0.004). It is concluded that hyperbaric oxygenation significantly improves hearing recovery after AAT. Therefore, acute acoustic trauma with significant hearing threshold depression remains an otological emergency. Minimal therapy involving waiting for spontaneous recovery, which is mostly incomplete, leaving a residual C5 or C6 dip and handicapping tinnitus, is not the treatment of choice. Randomized prospective clinical trials with a larger patient series are needed and further experimental studies are required to understand the physiological mechanisms of HBO responsible for the clinical success in AAT. abstract_id: PUBMED:29390550 Successful treatment of sudden sensorineural hearing loss by means of pharmacotherapy combined with early hyperbaric oxygen therapy: Case report. Rationale: According to the World Health Organization reports, adult-onset hearing loss is the 15th leading cause of burden of disease, and is projected to move up to 7th by the year 2030, especially in high-income countries. Sudden sensorineural hearing loss is considered by otologists as a true otologic emergency. The current standard treatment for sudden hearing loss is a tapered course of oral high-dose corticosteroids. The described clinical case points to the validity of undertaking early hyperbaric oxygenation (HBO) therapy together with corticosteroids for full recovery of adult-onset idiopathic sudden sensorineural hearing loss. Patient Concerns: A 44-year-old woman complained of an abrupt hearing deterioration in the left ear, with a sensation of aural fullness and loud tinnitus, that had been present for 48 hours. The patient was admitted to the Department of Otolaryngology of a public hospital for diagnosis and treatment. Diagnoses: The patient was diagnosed with unilateral sudden idiopathic sensorineural hearing loss, assessed by measuring the tonal audiograms. Interventions: The patient received treatment including oral high-dose corticosteroids combined with an HBO protocol comprising 15 daily 1-hour exposures to 100% oxygen at 2.5 atmosphere absolute. Outcomes: Pharmacotherapy combined with early HBO resulted in full recovery of hearing. Lessons: Early addition of HBO to pharmacotherapy in sudden sensorineural hearing loss may lead to full recovery of hearing.
There is a need for systematic research to establish guidelines for the optimal number of HBO sessions in relation to the timeframe from the onset of hearing loss symptoms to the implementation of HBO therapy. abstract_id: PUBMED:25109087 Hyperbaric oxygen therapy for carbon monoxide poisoning. Despite established exposure limits and safety standards, and the availability of carbon monoxide (CO) alarms, each year 50,000 people in the United States visit emergency departments for CO poisoning. Carbon monoxide poisoning can occur from brief exposures to high levels of CO, or from longer exposures to lower levels. Common symptoms include headaches, nausea and vomiting, dizziness, general malaise, and altered mental status. Some patients may have chest pain, shortness of breath and myocardial ischemia, and may require mechanical ventilation and treatment of shock. Individuals poisoned by CO often go on to develop neurological problems, including cognitive sequelae, anxiety and depression, persistent headaches, dizziness, sleep problems, motor weakness, vestibular and balance problems, gaze abnormalities, peripheral neuropathies, hearing loss, tinnitus and Parkinsonian-like syndrome. While breathing oxygen hastens the removal of carboxyhemoglobin (COHb), hyperbaric oxygen (HBO2) hastens COHb elimination and favorably modulates inflammatory processes instigated by CO poisoning, an effect not observed with breathing normobaric oxygen. Hyperbaric oxygen improves mitochondrial function, inhibits lipid peroxidation transiently, impairs leukocyte adhesion to injured microvasculature, and reduces brain inflammation caused by the CO-induced adduct formation of myelin basic protein. Based upon three supportive randomized clinical trials in humans and considerable evidence from animal studies, HBO2 should be considered for all cases of acute symptomatic CO poisoning. Hyperbaric oxygen is indicated for CO poisoning complicated by cyanide poisoning, often concomitantly with smoke inhalation. Answer: Based on the provided abstracts, repeated hyperbaric exposure to 4 atmosphere absolute does not appear to cause hearing impairment. A study conducted on guinea pigs, which were submitted to a protocol of air diving in a hyperbaric chamber with parameters consistent with common sport diving (maximal pressure of 4 atmosphere absolute and duration of 30 minutes), found that direct pressure applied on the inner ear during diving does not disturb cochlear activity. This was observed in the group with perforated eardrums, in which no variation in auditory threshold or cochlear spontaneous activity was recorded, even in extreme diving conditions (PUBMED:14501446). Furthermore, another study involving patients with acute hearing loss undergoing hyperbaric oxygen therapy (HBO) at 2.5 atmospheres absolute over a daily period of 90 minutes did not show any influence on human monocyte and lymphocyte functions, suggesting that repetitive exposure to hyperoxia does not adversely affect these immune cells, which could be indicative of a lack of systemic adverse effects from the therapy (PUBMED:11845376). However, one case mentioned in the abstracts indicates that a patient undergoing hyperbaric oxygen therapy for interstitial cystitis experienced mild Eustachian tube dysfunction, resulting in transient hearing impairment (PUBMED:17593107).
This suggests that while direct pressure may not cause hearing loss, other factors associated with hyperbaric environments, such as pressure changes affecting the Eustachian tube, could potentially lead to temporary hearing issues. In conclusion, the evidence from the provided abstracts suggests that repeated hyperbaric exposure to 4 atmosphere absolute does not cause hearing impairment, although individual patients may experience temporary issues related to pressure changes in the ear.
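Because the abstracts above mix pressure units (atmospheres absolute in the diving and HBO studies, kilopascals in the Vibrant Soundbridge bench test), a minimal conversion sketch in Python helps line them up. It assumes only the standard value 1 ATA = 101.325 kPa and the usual diving approximation of one extra atmosphere per 10 m of seawater.

ATM_KPA = 101.325  # one standard atmosphere, in kilopascals

def ata_to_kpa(ata):
    """Convert absolute pressure in atmospheres (ATA) to kilopascals."""
    return ata * ATM_KPA

def ata_to_depth_msw(ata):
    """Approximate equivalent seawater depth: ~1 atm per 10 m on top of surface pressure."""
    return (ata - 1.0) * 10.0

for ata in (1.0, 2.0, 2.5, 4.0, 6.0):
    print(f"{ata:.1f} ATA = {ata_to_kpa(ata):6.1f} kPa (~{ata_to_depth_msw(ata):4.1f} m seawater)")

# 4 ATA works out to ~405 kPa, i.e. the middle test pressure applied to the
# Soundbridge devices, while 608 kPa corresponds to roughly 6 ATA (~50 m).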
Instruction: Transcutaneous electrogastrography: a non-invasive method to evaluate post-operative gastric disorders? Abstracts: abstract_id: PUBMED:10370700 Transcutaneous electrogastrography: a non-invasive method to evaluate post-operative gastric disorders? Background/aims: With the development of high-performance computer programs, transcutaneous electrogastrography has experienced a renaissance in the last few years and is widely recommended as a non-invasive diagnostic tool to evaluate functional gastric disorders. We assessed the clinical value of electrogastrography in symptomatic and asymptomatic patients after a variety of procedures of the upper gastrointestinal (GI) tract. Methodology: Electrogastrography tracings were recorded with a commercially available data logger using a recording frequency of 4 Hz. A standard meal was given between a 60 min preprandial and a 60 min postprandial period. The following parameters were analyzed pre- and postprandially utilizing Fourier and spectral analysis: regular gastric activity (2-4 cycles/minute), bradygastria (0.5-2 cycles/minute), tachygastria (4-9 cycles/minute), dominant frequency and power of the dominant frequency. Nineteen asymptomatic healthy volunteers served as a control group. Forty-nine patients, who had undergone upper intestinal surgery, were included in the study (cholecystectomy n = 10, Nissen fundoplication n = 10, subtotal gastrectomy n = 8, truncal vagotomy n = 15, and gastric pull-up as esophageal replacement n = 6). Twenty of these patients complained of epigastric symptoms post-operatively, while 12 of these 20 patients also had a scintigraphic gastric emptying study with a Tc99m-labeled semisolid meal. Results: Preprandial gastric electric activity was between 2 and 4 cycles/minute in 60-90% of the study time in healthy volunteers. In all study groups, the prevalence and power of normal electric activity increased significantly after the test meal (p < 0.001). After cholecystectomy, Nissen fundoplication, subtotal gastrectomy, or vagotomy and gastric pull-up, pre- and postprandial gastric electric activity showed a greater variability compared to normal volunteers (p < 0.05), but no typical electrogastrography pattern could be identified for the different surgical procedures. There was no significant difference in the electrogastrography pattern between asymptomatic and symptomatic patients or between patients with normal and abnormal scintigraphic gastric emptying curves. Conclusions: There is no specific electrogastrography pattern to differentiate between typical surgical procedures or epigastric symptoms. To date, electrogastrography does not contribute to the diagnosis and analysis of gastric motility disorders after upper intestinal surgery. abstract_id: PUBMED:14712792 Exactness of transcutaneous sonography in the diagnosis of gastric wall lesions. Objective: Our objective was to determine the sensitivity, specificity and predictive values of transcutaneous sonography for detecting gastric wall lesions. Materials And Methods: This prospective study was performed from March 1999 to April 2000 on 150 patients referred for transcutaneous sonography by the Endoscopic Service Unit. Sonographic examinations were performed using RT 4000 General Electric equipment with a 5 MHz transducer and filling of the stomach with fluid. All scanning was done by the same sonographer, who was unaware of endoscopic, tomographic, or upper gastrointestinal series features.
Results from sonography were compared with gastrointestinal tract endoscopy. Sensitivity, specificity, and predictive values were determined using a contingency-table procedure. The accuracy of the sonographic examination was assessed by evaluating the confidence intervals (CI) of sensitivity and specificity. The kappa index was calculated. Differences in diagnostic accuracy between tumoral and non-tumoral lesions on sonography were evaluated with the chi-square test. Results: A sensitivity of 85% (95% CI, from 75.2 to 94.8%) and a specificity of 90% (95% CI, from 86 to 93.9%) were obtained. Positive predictability was 78% and negative predictability was 94%. Diagnostic accuracy was 87%. The kappa index was 0.717. There were 35 true-positive results (19 tumoral lesions and 16 non-tumoral lesions), seven false-negative results (one tumoral lesion and six non-tumoral lesions) and 10 false-positive results (two tumoral lesions and eight non-tumoral lesions). Only one of 20 tumoral lesions was missed by ultrasound, whereas 6 of 22 non-tumoral lesions were missed (chi-square = 3.74, p > 0.05). Conclusion: Transcutaneous sonography is a rapid, low-cost and non-invasive method that may be useful for establishing a clinical diagnosis and in the first steps of evaluating gastric wall lesions, and it provides valuable diagnostic orientation for the referring clinician. abstract_id: PUBMED:37192662 A pattern-recognition-based clustering method for non-invasive diagnosis and classification of various gastric conditions. Conventional endoscopic biopsy tests are not suitable for early detection of the acute onset and progression of peptic ulcer as well as various gastric complications. This also limits their suitability for widespread population-based screening and, consequently, many people with complex gastric phenotypes remain undiagnosed. Here, we demonstrate a new non-invasive methodology for accurate diagnosis and classification of various gastric disorders exploiting a pattern-recognition-based cluster analysis of a breathomics dataset generated from a simple residual gas analyzer mass spectrometer. The clustering approach recognizes unique breathograms and "breathprint" signatures that clearly reflect the specific gastric condition of an individual person. The method can selectively distinguish the breath of peptic ulcer patients and of patients with other gastric dysfunctions like dyspepsia, gastritis, and gastroesophageal reflux disease from the exhaled breath of healthy individuals with high diagnostic sensitivity and specificity. Moreover, the clustering method exhibited a reasonable power to selectively classify the early-stage and high-risk gastric conditions with/without ulceration, thus opening a new non-invasive analytical avenue for early detection, follow-up, and fast population-based robust screening of gastric complications in the real-world clinical domain. abstract_id: PUBMED:27194259 Domestically produced Chinese minimally invasive surgical robot system "Micro Hand S" is applied to clinical surgery preliminarily in China. Objective: To develop and validate a low-cost, easy-to-use domestically produced Chinese minimally invasive surgical robot system, "Micro Hand S", that surgeons can use to address the challenge of complicated surgeries. Methods: From April 2014 to April 2015, one patient with gastric perforation, three patients with acute appendicitis, five patients with acute cholecystitis, and one patient with right colon cancer underwent robotic-assisted surgeries.
Eight of these patients were followed for 1 month, and pre- and postoperative changes in routine blood tests and hepatorenal function examinations, surgery duration, hospital stay, total robotic setup time, total robotic operation time, intraoperative blood loss, total postoperative drainage amount, and duration of drainage-tube placement were recorded. Two patients withdrew from the study for reasons of personal privacy. Results: All surgical procedures were accomplished using "Micro Hand S". No intraoperative complications or technical problems were encountered. All patients recovered and were discharged from hospital without complications. Conclusions: The domestic surgical robot system "Micro Hand S" was validated as safe and effective through these clinical cases. The proposed design method is an effective way to make "Micro Hand S" a low-cost and easy-to-use robot system. abstract_id: PUBMED:19086236 Clinical trial: alvimopan for the management of post-operative ileus after abdominal surgery: results of an international randomized, double-blind, multicentre, placebo-controlled clinical study. Background: Post-operative ileus (POI) affects most patients undergoing abdominal surgery. Aim: To evaluate the effect of alvimopan, a peripherally acting mu-opioid receptor antagonist, on POI by negating the impact of opioids on gastrointestinal (GI) motility without affecting analgesia, in patients outside North America. Methods: Adult subjects undergoing open abdominal surgery (n = 911) randomly received oral alvimopan 6 or 12 mg, or placebo, 2 h before, and twice daily following surgery. Opioids were administered as intravenous patient-controlled analgesia (PCA) or bolus injection. Time to recovery of GI function was assessed principally using composite endpoints in subjects undergoing bowel resection (n = 738). Results: A nonsignificant reduction in mean time to tolerate solid food and either first flatus or bowel movement (primary endpoint) was observed for both alvimopan 6 and 12 mg: 8.5 h (95% CI: 0.9, 16.0) and 4.8 h (95% CI: -3.2, 12.8), respectively. However, an exploratory post hoc analysis showed that alvimopan was more effective in the PCA (n = 317) group than in the non-PCA (n = 318) group. Alvimopan was well tolerated and did not reverse analgesia. Conclusion: Although the significant clinical effect of alvimopan on reducing POI observed in previous trials was not reproduced, this trial suggests potential benefit in bowel resection patients who received PCA. abstract_id: PUBMED:16501944 Evaluation of a technique for blind placement of post-pyloric feeding tubes in intensive care: application in patients with gastric ileus. Objective: To evaluate a blind 'active' technique for the bedside placement of post-pyloric enteral feeding tubes in a critically ill population with proven gastric ileus. Design And Setting: An open study to evaluate the success rate and duration of the technique in cardiothoracic and general intensive care units of a tertiary referral hospital. Patients: 20 consecutive, ventilated patients requiring enteral nutrition, where feeding had failed via the gastric route. Interventions: A previously described insertion technique, the Corpak 10-10-10 protocol, for post-pyloric enteral feeding tube placement, modified by insufflation of air into the stomach to promote pyloric opening after 20 min if placement had not been achieved. Measurements And Results: A standard protocol and a set method to identify final tube position were used in each case.
In 90% (18/20) of cases, tubes were placed on the first attempt, with an additional tube being successfully placed on the second attempt. The median time for tube placement was 18 min (range 3-55 min). In 20% (4/20), insufflation of air was required to aid trans-pyloric passage. Conclusions: The previously described technique, modified by insufflation of air into the stomach in prolonged attempts to achieve trans-pyloric passage, proved to be an effective and cost-efficient method to place post-pyloric enteral feeding tubes. This technique, even in the presence of gastric ileus, could be incorporated by all critical care facilities without the need for any additional equipment or costs. This approach avoids the costs of additional equipment, the time delays, and the necessity to transfer the patient from the ICU that accompany the more traditional techniques of endoscopy and radiographic screening. abstract_id: PUBMED:24927224 Learning curve for endoscopic submucosal dissection of gastric submucosal tumors: is it more difficult than it may seem? Background: Endoscopic submucosal dissection (ESD), as a minimally invasive technique, is gaining wide acceptance for treating epithelial neoplasms. More recently, some pioneers have developed ESD for the treatment of submucosal tumors (SMTs), but characterization of the learning curve is lacking. In this study, we aimed to evaluate the learning curve for ESD of gastric SMTs. Subjects And Methods: From September 2008 to April 2011, ESD was performed in 50 consecutive patients with gastric SMTs by a single experienced endoscopist at our high-volume institution. The cumulative sum (CUSUM) method was used to analyze the shifts in operative time (OT) and consequently to investigate the learning curve. Results: Analysis of the OT using the CUSUM method identified two distinct phases: Phase 1 (the initial 32 cases) and Phase 2 (the remaining 18 cases). Phase 1 represented the initial learning period, whereas Phase 2 showed the more skilled and higher-proficiency period, with a significant reduction in OT (90±29 minutes versus 55±20 minutes; P < .0001). The two phases did not differ significantly with respect to patient characteristics and other perioperative parameters. Conclusions: Mastery of the operative technique for ESD of SMTs is evidenced by a decrease in OT identified by CUSUM graphs. For endoscopists competent in basic endoscopic intervention skills, the learning curve should be achieved after approximately 32 cases. Offering this minimally invasive endoscopic intervention does not result in an increased complication rate, even in the early phase of the learning curve. abstract_id: PUBMED:1864531 Can transcutaneous recordings detect gastric electrical abnormalities? The ability of transcutaneous recordings of gastric electrical activity to detect gastric electrical abnormalities was determined by simultaneous measurements of gastric electrical activity with surgically implanted serosal electrodes and cutaneous electrodes in six patients undergoing abdominal operations. Transient abnormalities in gastric electrical activity were seen in five of the six patients during the postoperative period. Recognition of normal gastric electrical activity by visual analysis was possible 67% of the time and with computer analysis 95% of the time. Ninety-four per cent of abnormalities in frequency were detected by visual analysis and 93.7% by computer analysis. Abnormalities involving a loss of coupling, however, were not recognised by transcutaneous recordings.
Transcutaneous recordings of gastric electrical activity assessed by computer analysis can usually recognise normal gastric electrical activity and tachygastria. Current techniques, however, are unable to detect abnormalities in electrical coupling. abstract_id: PUBMED:9496486 Transcutaneous ultrasound of gastric pathology. Background/aims: Examination of the stomach during transcutaneous upper gastrointestinal ultrasound is often ignored. Two thousand seven hundred and eighty patients were referred for endoscopy over the period of August 1994 until August 1995. Nearly half of those patients underwent transcutaneous ultrasound. We report on the ultrasonographic demonstration of gastric pathology in 18 patients. Methodology: The stomach was examined in a collapsed state after an overnight fast. No paralytic agents or water distention were used. Results: Seven patients had gastric tumors. Six patients had diffuse gastric wall thickening. Large varices were seen in two patients. A patient with multiple ulcers showed irregular walls. Two patients had retained gastric contents. Conclusions: Results of the ultrasound matched well with endoscopic findings. We recommend that in all abdominal ultrasounds, the stomach should be examined carefully and evaluated systematically. abstract_id: PUBMED:25974066 Ameliorating effects and autonomic mechanisms of needle-less transcutaneous electrical stimulation at ST36 on stress-induced impairment in gastric slow waves. Background And Aim: Stress has long been documented to alter gastrointestinal motility. The effects of electroacupuncture (EA) on stress and gastric motility are relatively well known; however, whether EA has an ameliorating effect on stress-induced dysmotility remained unclear. This study aims to investigate the effects and mechanisms of needle-less transcutaneous electroacupuncture (TEA) on stress-induced impairment in gastric slow waves. Methods: A watch-size digital stimulator was developed. Ten healthy volunteers were involved in a four-session study (control, cold stress, TEA, and sham TEA). Electrogastrography was used to assess gastric slow waves, and an electrocardiogram was recorded for the assessment of autonomic functions. The recordings were made in each session with/without stress and with TEA at ST36 or sham points. Results: The results are as follows: (i) Cold stress induced gastric dysrhythmia and impaired normal slow waves (P < 0.01). TEA showed a preventive effect on cold stress-induced impairment in gastric slow waves. TEA at ST36, but not sham TEA, normalized slow waves (P = 0.03 vs stress; P = 0.44 vs control), attributed to the suppression of gastric dysrhythmia; (ii) Postprandially, there was a decrease in vagal activity in both the control (P = 0.004) and stress (P = 0.002) sessions; this decrease was prevented with TEA (P < 0.05). Similarly, there was a postprandial increase in sympathetic activity in both the control (P = 0.01) and stress (P = 0.002) sessions, and this increase was suppressed with TEA. Conclusions: Needle-less TEA at ST36 using a watch-size stimulator is able to improve stress-induced impairment in gastric slow waves, possibly mediated via the autonomic mechanism. Home-based needle-less TEA may be a viable therapy for stress-induced impairment in gastric motility functions. Answer: Transcutaneous electrogastrography (EGG) has been explored as a non-invasive diagnostic tool to evaluate functional gastric disorders, including post-operative gastric disorders.
However, the clinical value of EGG in this context appears to be limited. A study assessing the clinical value of EGG in symptomatic and asymptomatic patients after various upper gastrointestinal (GI) tract procedures found that while there was a significant increase in the prevalence and power of normal electric activity after a test meal, there was no specific EGG pattern that could differentiate between typical surgical procedures or epigastric symptoms. Furthermore, there was no significant difference in the EGG pattern between asymptomatic and symptomatic patients or between patients with normal or abnormal scintigraphic gastric emptying curves. The study concluded that EGG does not contribute to the diagnosis and analysis of gastric motility disorders after upper intestinal surgery (PUBMED:10370700). In contrast, other non-invasive methods have shown promise in the evaluation of gastric conditions. For example, transcutaneous sonography has demonstrated good sensitivity and specificity for detecting gastric wall lesions, with a diagnostic accuracy of 87% (PUBMED:14712792). Additionally, a pattern-recognition-based clustering method using a breathomics dataset generated from a simple residual gas analyzer mass spectrometer has been shown to accurately diagnose and classify various gastric disorders, including peptic ulcer and other gastric dysfunctions, with high diagnostic sensitivity and specificity (PUBMED:37192662). In summary, while transcutaneous electrogastrography has been recommended as a non-invasive diagnostic tool for functional gastric disorders, its specific utility in evaluating post-operative gastric disorders is questionable, as it has not shown the ability to differentiate between various conditions or surgical outcomes.
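The diagnostic figures quoted from PUBMED:14712792 can be checked directly against the 2x2 counts reported in that abstract. The Python sketch below assumes TN = 90 true negatives, a value the abstract does not state but which follows from the reported 90% specificity with 10 false positives; the computed kappa of ~0.718 matches the reported 0.717, while the small gaps in sensitivity (83.3% vs 85%) and NPV (92.8% vs 94%) are presumably rounding or exclusions in the original analysis.

# 2x2 table from the abstract: TP = 35, FN = 7, FP = 10; TN = 90 inferred.
tp, fn, fp, tn = 35, 7, 10, 90
n = tp + fn + fp + tn

sensitivity = tp / (tp + fn)              # 0.833 (reported: 0.85)
specificity = tn / (tn + fp)              # 0.900 (reported: 0.90)
ppv = tp / (tp + fp)                      # 0.778 (reported: 0.78)
npv = tn / (tn + fn)                      # 0.928 (reported: 0.94)
accuracy = (tp + tn) / n                  # 0.880 (reported: 0.87)

# Cohen's kappa: observed agreement corrected for chance agreement.
p_observed = accuracy
p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"kappa = {kappa:.3f}")             # ~0.718 (reported: 0.717)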
Instruction: Practicing medicine without borders: tele-consultations and tele-mentoring for improving paediatric care in a conflict setting in Somalia? Abstracts: abstract_id: PUBMED:22845678 Practicing medicine without borders: tele-consultations and tele-mentoring for improving paediatric care in a conflict setting in Somalia? Objectives: In a district hospital in conflict-torn Somalia, we assessed (i) the impact of introducing telemedicine on the quality of paediatric care, and (ii) the added value as perceived by local clinicians. Methods: A 'real-time' audio-visual exchange of information on paediatric cases (Audiosoft Technologies, Quebec, Canada) took place between clinicians in Somalia and a paediatrician in Nairobi. The study involved a retrospective analysis of programme data, and a perception study among the local clinicians. Results: Of 3920 paediatric admissions, 346 (9%) were referred for telemedicine. In 222 (64%) children, a significant change was made to initial case management, while in 88 (25%), a life-threatening condition was detected that had been initially missed. There was a progressive improvement in the capacity of clinicians to manage complicated cases, as demonstrated by a significant linear decrease in changes to initial case management for meningitis and convulsions (92-29%, P = 0.001), lower respiratory tract infection (75-45%, P = 0.02) and complicated malnutrition (86-40%, P = 0.002). Adverse outcomes (deaths and lost to follow-up) fell from 7.6% in 2010 (without telemedicine) to 5.4% in 2011 with telemedicine (30% reduction, odds ratio 0.70, 95% CI: 0.57-0.88, P = 0.001). The number needed to treat through telemedicine to prevent one adverse outcome was 45. All seven clinicians involved with telemedicine rated it to be of high added value. Conclusion: The introduction of telemedicine significantly improved the quality of paediatric care in a remote conflict setting and was of high added value to distant clinicians. abstract_id: PUBMED:26393014 Paediatric in-patient care in a conflict-torn region of Somalia: are hospital outcomes of acceptable quality? Setting: A district hospital in conflict-torn Somalia. Objective: To report on in-patient paediatric morbidity, case fatality and exit outcomes as indicators of quality of care. Design: Cross-sectional study. Results: Of 6211 children, lower respiratory tract infections (48%) and severe acute malnutrition (16%) were the leading reasons for admission. The highest case-fatality rate was for meningitis (20%). Adverse outcomes occurred in 378 (6%) children, including 205 (3.3%) deaths; 173 (2.8%) absconded. Conclusion: Hospital exit outcomes are good even in conflict-torn Somalia, and should boost efforts to ensure that such populations are not left out in the quest to achieve universal health coverage. abstract_id: PUBMED:37821559 Surgical tele-mentoring using a robotic platform: initial experience in a military institution. Background: Surgical tele-mentoring leverages technology by projecting surgical expertise to improve access to care and patient outcomes. We postulate that tele-mentoring will improve surgeon satisfaction, procedural competence, the timeliness of operative intervention, surgical procedure efficiency, and key intra-operative decision-making. As a first step, we performed a pilot study utilizing a proof-of-concept tele-mentoring process during robotic-assisted surgery to determine the effects on the perceptions of all members of the surgical team.
Methods: An IRB-approved prospective feasibility study was performed to determine the safety and efficacy of remote surgical consultation to local surgeons utilizing robotic surgery technology in the fields of general surgery, urology, gynecology and thoracic surgery. Surgical teams were provided a pre-operative face-to-face orientation. During the operation, the mentoring surgeon was located at the same institution in a separate tele-mentoring room. An evaluation was completed pre- and post-operatively by the operative team members and the mentor. Results: Fifteen operative cases were enrolled, including seven general surgery, four urology, one gynecology and three thoracic surgery operations. Surveys were collected from 67 paired survey respondents and 15 non-paired mentor respondents. Participation in the operation had a positive effect on participant responses regarding all questions surveyed (p < 0.05), indicating value to tele-mentoring integration. Connectivity remained uninterrupted, with clear delivery of audio and visual components and no perceived latency. Participant perception of leadership/administrative support was varied. Conclusions: Surgical tele-mentoring is safe and efficacious in providing remote surgical consultation to local surgeons utilizing robotic surgery technology in a military institution. Operative teams overwhelmingly perceived this capability as beneficial, with reliable audio-visual connectivity demonstrated between the main operative room and the Virtual Medical Center. Further study is needed to develop surgical tele-mentoring to improve patient care without geographic limitations during times of peace, war and pandemic outbreaks. abstract_id: PUBMED:30966860 Paediatric tele-emergency care: A study of two delivery models. Introduction: Tele-emergency models have been utilized for decades, with growing evidence of their effectiveness. Due to the variety of tele-emergency department (tele-ED) models used in practice, however, it is challenging to build standardized metrics for ongoing evaluation. This study describes two tele-ED programs, one specialized and one general, that provide care to paediatric populations. Through an examination of model structures and patient populations, we gain insight into how evaluative measures should reflect tele-ED model design and purpose. Methods: Qualitative descriptions of the two tele-ED models are presented. We show a retrospective cohort analysis describing paediatric patients' key characteristics, reasons for visit, and disposition status by case/control status. Case/control patient encounter data were collected October 2015 through December 2017, from 15 spoke hospitals within each tele-ED program. Results: The two tele-ED models serve distinct paediatric populations, and measures of tele-ED utilization and disposition reflect those differences. In the specialized University of California (UC) Davis Health program, tele-ED was utilized in 36% of paediatric critical care encounters and 78% of those were transferred. In the Avera eCARE program, tele-ED was activated in 1.7% of paediatric encounters and 50.6% of those were transferred. When Avera eCARE paediatric encounters were stratified by severity, measures of tele-ED use and disposition status among high-severity encounters were more similar to those of UC Davis Health. Discussion: This study describes how design choices of tele-ED models have implications for evaluative measures.
Measures of tele-ED model success need to reflect model purpose, populations served, and for whom tele-ED service use is appropriate. abstract_id: PUBMED:34256415 Towards development of a tele-mentoring framework for minimally invasive surgeries. Background: Tele-mentoring facilitates the transfer of surgical knowledge. The objective of this work is to develop a tele-mentoring framework that enables a specialist surgeon to mentor an operating surgeon by transferring information in the form of the surgical instruments' motion required during a minimally invasive surgery. Method: A tele-mentoring framework is developed to transfer a video stream of the surgical field, the poses of the scope, and port placement from the operating room to a remote location. From the remote location, the motion of virtual surgical instruments augmented onto the surgical field is sent to the operating room. Results: The proposed framework is suitable to be integrated with laparoscopic as well as robotic surgeries. It takes on average 1.56 s to send information from the operating room to the remote location and 0.089 s in the reverse direction over a local area network. Conclusions: The work demonstrates a tele-mentoring framework that enables a specialist surgeon to mentor an operating surgeon during a minimally invasive surgery. abstract_id: PUBMED:32189705 A Collaborative Tele-Neurology Outpatient Consultation Service in Karnataka: Seven Years of Experience From a Tele-Medicine Center. Background: Neurology services in rural and semi-urban parts of India are very limited, due to poor infrastructure, resources, and manpower. Tele-neurology consultations at a non-urban setup can be considered an alternative and innovative approach and have been quite successful in developed countries. Therefore, an initiative to bridge this health gap through Tele-Medicine has been taken by the Government of India. Aim: To study the sociodemographic and clinical profiles of patients who have received collaborative Tele-Neurology consultations from the Tele-Medicine Centre, National Institute of Mental Health and Neurosciences, Bengaluru. Methodology: We reviewed case files of such patients between December 2010 and March 2017. A total of 189 collaborative tele-neurology outpatient consultations were provided through the Tele-Medicine Centre, located at a tertiary hospital-based research centre in southern India. Results: The mean age of the patients was 39.6 (±19) years, and 65.6% were aged between 19 and 60 years; 50.8% were male. The most common diagnosis was a seizure disorder in 17.5%, followed by cerebrovascular accident/stroke in 14.8%. Interestingly, 87.3% were found to benefit from tele-neurology consultations through interventions such as a change of medications in 30.1%, referral to a specialist for review in 15.8%, and further evaluation of illness and inpatient care in 7.93%. Conclusion: This study has demonstrated the successful implementation of an outpatient-based collaborative tele-neurology consultation service in Karnataka.
Methods: A remote tele-mentoring system is implemented that generates visual cues in the form of virtual surgical instrument motion overlaid onto the live view of the operative field. The technical performance of the system is evaluated in a simulated environment, where the operating room and the central location of the mentor were physically located in different countries and connected over the internet. In addition, a user study was performed to assess the system as a mentoring tool. Results: On average, it took 260 ms to send a view of the operative field of 1920 × 1080 resolution from the operating room to the central location of the mentor and an average of 132 ms to receive the motion of virtual surgical instruments from the central location to the operating room. The user study showed that it is feasible for the mentor to demonstrate and for the mentee to understand and replicate the motion of surgical instruments. Conclusion: The work demonstrates the feasibility of transferring information over the internet from a mentor to a mentee in the form of virtual surgical instruments. Their motion is overlaid onto the live view of the operative field, enabling real-time interaction between the two surgeons. abstract_id: PUBMED:31210781 Essential newborn care practice at four primary health facilities in conflict-affected areas of Bossaso, Somalia: a cross-sectional study. Background: Newborn mortality is increasingly concentrated in contexts of conflict and political instability. However, there are limited guidelines and data on the availability and quality of newborn care in conflict settings. In 2016, an interagency collaboration developed the Newborn Health in Humanitarian Settings Field Guide - Interim version (Field Guide). In this study, we sought to understand the baseline availability and quality of essential newborn care in Bossaso, Somalia as part of an investigation to determine the feasibility and effectiveness of the Field Guide in improving newborn care in humanitarian settings. Methods: A cross-sectional study was conducted at four purposely selected health facilities serving internally displaced persons affected by conflict in Bossaso. Essential newborn care practice and patient experience with childbirth care received at the facilities were assessed via observation of clinical practice during childbirth and the immediate postnatal period, and through postnatal interviews of mothers. Descriptive statistics and logistic regression were employed to summarize and examine variation by health facility. Results: Of the 332 pregnant women approached, 253 (76.2%) consented and were enrolled. 97.2% (95% CI: 94.4, 98.9) had livebirths and 2.8% (95% CI: 1.1, 5.6) had stillbirths. The early newborn mortality rate was 1.7% (95% CI: 0.3, 4.8). Nearly all [95.7%, (95% CI: 92.4, 97.8)] births were attended by a skilled health worker. Similarly, 98.0% (95% CI: 95.3, 99.3) of newborns received immediate drying, and 99.2% (95% CI: 97.1, 99.9) had delayed bathing. Few [8.6%, (95% CI: 5.4, 12.9)] received immediate skin-to-skin contact, and the practice varied significantly by facility (p < 0.001). One-third of newborns [30.1%, (95% CI: 24.4, 36.2)] received early initiation of breastfeeding, and there was significant variation by facility (p < 0.001). While almost all [99.2%, (95% CI: 97.2, 100)] service providers wore gloves while attending births, handwashing was not as common [20.2%, (95% CI: 15.4, 25.6)] and varied by facility (p < 0.001).
Nearly all [92%, (95% CI: 86.9, 95.5)] mothers were either very happy or happy with the childbirth care received at the facility. Conclusion: Essential newborn care interventions were not universally available. Quality of care varied by health facility and type of intervention. Training and supervision using the Field Guide could improve newborn outcomes. abstract_id: PUBMED:31861584 Changes in Opioid Prescribing Behaviors among Family Physicians Who Participated in a Weekly Tele-Mentoring Program. A weekly tele-mentoring program was implemented in Ontario to help address the growing opioid crisis through teaching and mentoring family physicians on the management of chronic pain and opioid prescribing. This study assessed opioid prescribing behaviours among family physicians who attended the tele-mentoring program compared to two groups of Ontario family physicians who did not attend the program. We conducted a retrospective cohort study with two control groups: a matched cohort, and a random sample of 3000 family physicians in Ontario. Each physician was followed from one year before the program (the index date) to one year after. We examined the number and proportion of patients on any opioid, on high-dose opioids, and the average daily morphine equivalent doses prescribed to each patient. We included 24 physicians who participated in the program (2760 patients), 96 matched physicians (11,117 patients) and 3000 random family doctors (374,174 patients). We found that, at baseline, the tele-mentoring group had a similar number of patients on any opioid, but more patients on high-dose opioids, than both control groups. There was no change in the number of patients on any opioid before and after the index date, but there was a significant reduction in high-dose opioid prescriptions in the Extension for Community Healthcare Outcomes (ECHO) group, compared to a non-significant increase in the matched cohort and a non-significant reduction in the Ontario group during the same periods. Participation in the program was associated with a greater reduction in high-dose opioid prescribing. abstract_id: PUBMED:34660024 Patient Satisfaction With Remote Consultations in a Primary Care Setting. Introduction In recent years, the use of remote consultations has increased considerably. Many patient encounters in general practice are now conducted by phone or computer as opposed to traditional face-to-face appointments. The aim of this study was to measure patient satisfaction with remote consultations in a primary care setting. Aims To assess patient satisfaction with telephone consultations in a general practice setting and to explore patients' experiences and attitudes toward remote consultations in general practice, in order to formulate recommendations for potential telehealth improvements. Methods A total of 407 patients who had undergone primary care telephone consultations within the previous week were invited to provide feedback. Patient satisfaction was measured by a four-step questionnaire on patient experience, which was quantified on a Likert agreement scale, with the additional option of a comment section at the end of each questionnaire. The responses in the comment section were analysed according to the frequency of recurrent themes. Results The responses of 104 patients were included in the final analysis, and 44 patients used the comment section to provide additional information about their experience.
Overall, satisfaction with remote consultations was high, while the rate of technical failure and the need for in-person follow-up were both low: 60 patients (58%) either agreed or strongly agreed that remote consultations are a convenient way of receiving health care, and 26 patients (25%) would prefer remote consultations over face-to-face ones in the future, while 42 patients (40%) would prefer face-to-face consultations. Ninety-six patients (92%) reported no technical problems affecting the consultation quality. Of all 104 remote consultations, only 36 patients (35%) required in-person follow-up. Analyses of the comment section largely reflect the above findings but also highlight concerns from patients that remote consultations can generate additional anxiety, and that symptoms might not be communicated effectively or even missed completely. Discussion The high satisfaction levels, low rates of technical failure and low need for face-to-face follow-up show that, in a primary care setting, remote consultations are an effective complement to face-to-face appointments. Nonetheless, the requirement for face-to-face contact goes beyond the need for physical examination alone, with many patients preferring face-to-face contact when discussing complex and sensitive health-related topics and symptoms. Answer: The introduction of telemedicine in a district hospital in conflict-torn Somalia had a significant impact on the quality of pediatric care. A study involving real-time audio-visual exchange of information on pediatric cases between clinicians in Somalia and a pediatrician in Nairobi showed that out of 3920 pediatric admissions, 346 (9%) were referred for telemedicine. In 222 (64%) of these cases, a significant change was made to initial case management, and in 88 (25%), a life-threatening condition was detected that had been initially missed. There was also a progressive improvement in the capacity of clinicians to manage complicated cases, as evidenced by a significant linear decrease in changes to initial case management for various conditions. Moreover, adverse outcomes (deaths and lost to follow-up) fell from 7.6% in 2010 (without telemedicine) to 5.4% in 2011 with telemedicine, representing a 30% reduction. All seven clinicians involved with telemedicine rated it to be of high added value, indicating that telemedicine significantly improved the quality of pediatric care in a remote conflict setting (PUBMED:22845678).
Instruction: Does long-term experience of nonstandard employment increase the incidence of depression in the elderly? Abstracts: abstract_id: PUBMED:27108642 Does long-term experience of nonstandard employment increase the incidence of depression in the elderly? Objectives: Our prospective study aimed to elucidate the effect of long-term experience of nonstandard employment status on the incidence of depression in the elderly population using the Korean Longitudinal Study of Ageing (KLoSA) study. Methods: This study used the first- to fourth-wave cohorts of KLoSA. After the exclusion of the unemployed and participants who experienced a change in employment status during the follow-up periods, we analyzed a total of 1,817 participants. Employment contracts were assessed by self-reported questions: standard or nonstandard employment. The short form of the Center for Epidemiologic Studies Depression Scale (CES-D) served as the outcome measure. Hazard ratios (HRs) with 95% confidence intervals (CIs) were calculated using Cox proportional hazards models to evaluate the association between standard/nonstandard employment and development of depression. Results: The mean age of the participants was 53.90 (±7.21) years. We observed that nonstandard employment significantly increased the risk of depression. Compared with standard employees, nonstandard employees had a 1.5-fold elevated risk for depression after adjusting for age, gender, CES-D score at baseline, household income, occupation category, current marital status, number of living siblings, perceived health status, and chronic diseases [HR=1.461, 95% CI=(1.184, 1.805)]. Moreover, regardless of other individual characteristics, the elevated risk of depression was observed among all kinds of nonstandard workers, such as temporary and day workers, full-time and part-time workers, and directly employed and dispatched labor. Conclusions: The 6-year follow-up study revealed that long-term experience of nonstandard employment status increased the risk of depression in the elderly population in Korea. abstract_id: PUBMED:31354169 Nonstandard Work and Educational Differentials in Married Women's Employment in Japan: Patterns of Continuity and Change. The rapid expansion of nonstandard work has altered the nature of women's employment, but previous research on married women's employment trajectories in Japan has paid little attention to the role of nonstandard work. To fill this gap, we examine how patterns of employment in regular and nonstandard positions vary by married women's socioeconomic status using nationally representative longitudinal data. Results from discrete-time competing risks models of labor force transitions indicate that university graduates have the most stable labor force attachment in that they are the least likely to move from standard to nonstandard employment and to exit nonstandard jobs. In contrast, married women with a high school degree or less are more likely to reenter the labor force to take low-quality nonstandard jobs. These results are consistent with a scenario characterized by both continuity and change. Older patterns of labor force exit and reentry, combined with the rise in nonstandard employment, are most relevant for less educated women, while the emergence of more career employment opportunities is most relevant for highly educated women. Considering the role of women's income in shaping patterns of inequality, these findings have important implications for stratification in Japan.
abstract_id: PUBMED:27344567 Who is working while sick? Nonstandard employment and its association with absenteeism and presenteeism in South Korea. Objectives: This study sought to examine whether nonstandard employment is associated with presenteeism as well as absenteeism among full-time employees in South Korea. Methods: We analyzed a cross-sectional survey of 26,611 full-time employees from the third wave of the Korean Working Conditions Survey in 2011. Experience of absenteeism and presenteeism during the past 12 months was assessed through self-reports. Employment condition was classified into six categories based on two contract types (parent firm and subcontract) and three contract durations [permanent (≥1 year, no fixed term), long term (≥1 year, fixed term), and short term (<1 year, fixed term)]. Results: We found opposite trends between the association of nonstandard employment with absenteeism and presenteeism after adjusting for covariates. Compared to parent firm-permanent employment, which is often regarded as standard employment, absenteeism was not associated or negatively associated with all nonstandard employment conditions except parent firm-long term employment (OR 1.88; 95 % CI 1.57, 2.26). However, presenteeism was positively associated with parent firm-long term (OR 1.64; 95 % CI 1.42, 1.91), subcontract-long term (OR 1.61; 95 % CI 1.12, 2.32), and subcontract-short term (OR 1.26; 95 % CI 1.02, 1.56) employment. Conclusions: Our results indicate that most nonstandard employment may increase the risk of presenteeism, but not absenteeism. These results suggest that previous findings about the protective effects of nonstandard employment on absenteeism may be explained by nonstandard workers being forced to work when sick. abstract_id: PUBMED:34561801 Association of residential greenness with geriatric depression among the elderly covered by long-term care insurance in Shanghai. Residential greenness exposure has been linked to a number of physical and mental disorders. Nevertheless, evidence on the association between greenness and geriatric depression is limited and focused on developed countries. This study aimed to investigate whether a relationship between residential greenness exposure and geriatric depression exists among the elderly with long-term care insurance (LTCI) in Shanghai, China. In 2018, a total of 1066 LTCI elderly from a cross-sectional survey completed a questionnaire in Shanghai. Residential greenness indicators, including the normalized difference vegetation index (NDVI) and soil-adjusted vegetation index (SAVI), were calculated from Landsat 8 imagery data in different buffers (100-m, 300-m, and 500-m). Mediation analysis by perceived social support was conducted to explore potential mechanisms underlying the associations. In the fully adjusted model, one IQR increase of NDVI and SAVI in the 300-m buffer size was associated with an 11.9% (PR: 0.881, 95% CI: 0.795, 0.977) and 14.7% (PR: 0.853, 95% CI: 0.766, 0.949) lower prevalence of geriatric depression, respectively. A stronger association was observed in the elderly with a lower education level, living in a non-central area, and with lower family monthly income. Perceived social support significantly mediated 40.4% of the total effect for the NDVI 300-m buffer and 40.3% for the SAVI 300-m buffer in the greenness-depression association.
Our results indicate the importance of residential greenness exposure to geriatric depression, especially for the elderly with a lower education level, living in a non-central area, and with lower family monthly income. Perceived social support might mediate the association. Well-designed longitudinal studies are warranted to confirm our findings and investigate the underlying mechanisms. abstract_id: PUBMED:36159254 Long-term effects of left-behind experience on adult depression: Social trust as mediating factor. Background: Despite much attention paid to the mental health of left-behind children, there has not been sufficient research on whether and how left-behind experiences have long-term effects on adults among the general population. This paper aims to evaluate the long-term effects of left-behind experience on adult psychological depression. Methods: Using the China Labor-force Dynamics Survey in 2018 (CLDS 2018), we assessed depression with the Center for Epidemiologic Studies Depression Scale (CES-D) and used a cut-off score of 20 for detecting depression (Yes = 1, No = 0). Binomial logistic regression was used to compare the odds ratios across groups. We used the KHB method in the mediation analysis to measure the indirect effect of social trust on the relationship between left-behind experience and depression. Results: The rate of depression (χ2 = 17.94, p < 0.001) among children with left-behind experience (LBE) (10.87%) was higher than among children with non-left-behind experience (N-LBE) (6.37%). The rate of social trust (χ2 = 27.51, p < 0.001) of the LBE group (65.70%) was lower than that of the N-LBE group (75.05%). Compared with the other three groups, those whose left-behind experience occurred in preschool (OR = 2.07, p < 0.001, 95% CI = [1.45, 2.97]) were more likely to suffer from depression. The indirect effect of social trust (OR = 1.06, p < 0.01, 95% CI = [1.02, 1.10]) on the relationship between LBE and psychological depression was significant, and the total effect (OR = 1.71, p < 0.001, 95% CI = [1.27, 2.31]) and direct effect (OR = 1.62, p < 0.01, 95% CI = [1.20, 2.18]) were both significant. The indirect effect accounted for 10.69% of the total effect. Conclusion: The left-behind experience that occurred in childhood has a significantly negative effect on adult psychological depression, in which preschool left-behind experience played the most critical role. Social trust is the mediating factor associated with left-behind experience and psychological depression. To mitigate the long-term effects of the left-behind experience on psychological depression, parents need to be prudent about migration decisions during their children's preschool years, and subsequent policies should strengthen social work targeting vulnerable youth groups, especially those with left-behind experience at an early age, with respect to their psychological depression.
Countries with conditional cash benefits show job creation, and countries with unconditional economic benefits reveal the development of a grey care market with high participation of migrant labor. Migrant employment in developed countries affects the development of the labor market in the countries of origin. The employment created to care for dependent persons is generally precarious. In conclusion, global aging will increase long-term care worker demand, but the variations in policies can determine what kind of employment is created. abstract_id: PUBMED:28971340 Unemployment, Nonstandard Employment, and Fertility: Insights From Japan's "Lost 20 Years". In this study, we examine relationships of unemployment and nonstandard employment with fertility. We focus on Japan, a country characterized by a prolonged economic downturn, significant increases in both unemployment and nonstandard employment, a strong link between marriage and childbearing, and pronounced gender differences in economic roles and opportunities. Analyses of retrospective employment, marriage, and fertility data for the period 1990-2006 indicate that changing employment circumstances for men are associated with lower levels of marriage, while changes in women's employment are associated with higher levels of marital fertility. The latter association outweighs the former, and results of counterfactual standardization analyses indicate that Japan's total fertility rate would have been 10 % to 20 % lower than the observed rate after 1995 if aggregate- and individual-level employment conditions had remained unchanged from the 1980s. We discuss the implications of these results in light of ongoing policy efforts to promote family formation and research on temporal and regional variation in men's and women's roles within the family. abstract_id: PUBMED:36388378 Recruitment, retention and employment growth in the long-term care sector in England. This paper studies the relationship between turnover, hiring and employment growth in the long-term care (LTC) sector in England and sheds light on how challenges in both recruitment and retention affect the sector's ability to meet growing demand for care services. Using the Adult Social Care Workforce Data Set (ASC-WDS), a large longitudinal dataset of LTC establishments in England, and fixed effects estimation methods we: (a) quantify the relationship between the in/outflow of care workers and the expansion/contraction of employment within establishments, (b) establish the role of staff retention policy for workforce expansion, and (c) identify the role of recruitment frictions and its impact on hiring and employment contraction. Our analysis indicates that care worker turnover and employment growth are negatively related. A one percentage point increase in employment contraction is associated with a 0.71 percentage point rise in turnover, while a one percentage point increase in employment expansion is associated with a 0.23 percentage point fall in turnover. In contrast, we find that hiring rates and employment growth are positively related. A one percentage point increase in employment expansion is associated with a 0.76 percentage point rise in hiring, while a one percentage point increase in employment contraction is associated with a 0.26 percentage point decrease in hiring. We argue that the negative turnover-employment growth relationship within expanding establishments provides evidence that better staff retention is associated with higher employment growth. 
Using information on establishments' annual change in vacancies, and controlling for changes in new labor demand, we also find rising year-on-year vacancies amongst establishments with declining employment. This provides evidence that recruitment frictions drive the declining rate of replacement hiring amongst contracting establishments. Across sectors, we find that the employment growth-turnover and the employment decline-hiring relationships are relatively stronger in the private and voluntary sectors compared to the public sector, suggesting that the impact of staff retention and recruitment frictions on employment is more acute in these sectors. abstract_id: PUBMED:30473319 Transforming clients into experts-by-experience: A pilot in client participation in Dutch long-term elderly care homes inspectorate supervision. As experts-by-experience, clients are thought to give specific input for and legitimacy to regulatory work. In this paper we track a 2017 pilot by the Dutch Health and Youth Care Inspectorate that aimed to use experiential knowledge in risk regulation through engaging with clients of long-term elderly care homes. Through an ethnographic inquiry we evaluate the design of this pilot. We find how the pilot transforms selected clients into experts-by-experience through training and site visits. In this transformation, clients attempt, and fail, to bring to the fore their definitions of quality and safety, negating their potentially specific contributions. Paradoxically, in their attempts to expose valid new knowledge on the quality of care, the pilot constructs the experts-by-experience in such a way that this knowledge is unlikely to be opened up. Concurrently, we find that in their attempts to have their input seen as valid, experts-by-experience downplay the value of their experiential knowledge. Thus, we show how dominating, legitimated interpretations of (knowledge about) quality of care resonate in experimental regulatory practices that explicitly try to move beyond them, emphasizing the need for a pragmatic and reflexive engagement with clients in the supervision of long-term elderly care. abstract_id: PUBMED:25480270 Long-term care of the elderly in Uruguay. Objective: To explore the perception of care of the elderly in long-term centers. Design: Qualitative study through participant observation and 23 in-depth interviews with administrators, caregivers, and residents between January 2011 and January 2012 in nine long-term care centers in Uruguay. From analysis, three care-perception factors emerged: (a) meaning of care, (b) experience of care, and (c) care continuity. Conclusion: Care of the elderly in nursing homes is perceived as a duty. There is an exchange of feelings and recognition of the caregiver is needed. However, the need for technical or specialized training is not mentioned. Recognition of the task of care provision, duty as a moral determinant, the exchange of affection and solidarity between staff and the elderly allows care continuity despite the centers' lack of resources. Clinical Relevance: This study contributes information that may help improve the care conditions of elderly institutionalized people. Answer: Yes, long-term experience of nonstandard employment does increase the incidence of depression in the elderly. A 6-year follow-up study using data from the Korean Longitudinal Study of Ageing (KLoSA) found that nonstandard employment significantly increased the risk of depression in the elderly population in Korea. 
Compared with standard employees, nonstandard employees had a 1.5-fold elevated risk for depression after adjusting for various factors such as age, gender, baseline depression score, household income, occupation category, marital status, number of living siblings, perceived health status, and chronic diseases (PUBMED:27108642).
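(Illustrative note: the hazard ratio cited above comes from a Cox proportional hazards model fitted over the KLoSA follow-up waves. The sketch below shows how such an adjusted HR could be estimated with the lifelines library; the file name and column names are hypothetical stand-ins for the KLoSA variables, not the study's actual code or data.)

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: one row per participant, with follow-up time, an
# event indicator for incident depression (CES-D above cut-off), and
# baseline covariates mirroring the adjustments named in the abstract.
df = pd.read_csv("klosa_subset.csv")  # assumed file, not from the source

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "depression_event", "nonstandard",
        "age", "female", "cesd_baseline", "household_income"]],
    duration_col="followup_years",  # time to depression or censoring
    event_col="depression_event",   # 1 = developed depression in follow-up
)
cph.print_summary()
# exp(coef) for 'nonstandard' is the adjusted hazard ratio; the study
# reports HR = 1.461 (95% CI 1.184-1.805) for nonstandard employment.
```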
Instruction: Is routine preoperative 2-dimensional echocardiography necessary for infants with esophageal atresia, omphalocele, or anorectal malformations? Abstracts: abstract_id: PUBMED:20438917 Is routine preoperative 2-dimensional echocardiography necessary for infants with esophageal atresia, omphalocele, or anorectal malformations? Background: Infants with esophageal atresia (EA), omphalocele, and anorectal malformation (ARM) often have associated congenital heart disease. Recognition of significant cardiac defects, which compromise patient well-being in the perioperative period, is essential before going to the operating room. However, urgent echocardiography may be unavailable, and surgery may therefore be delayed in some cases. We wished to determine if routine echocardiography is necessary for neonates with these diagnoses, or if appropriate patients could be selected. Methods: Retrospective review of all infants admitted to the neonatal intensive care unit with EA, omphalocele, or ARM over a 5-year period (2003-2008). Clinically relevant findings in the cardiovascular examination (murmur, tachycardia, abnormal 4 limb blood pressure, cyanosis, shock), abnormalities in respiratory examination (intubation, tachypnea, desaturations), or abnormal chest x-ray (cardiomegaly, abnormal pulmonary vasculature) were documented. Cardiac defects were categorized according to their clinical impact as major or minor to differentiate those disorders which may influence timing of surgical intervention. Results: Eighty-six infants were identified (33 EA, 21 omphalocele, 32 ARM). Thirty-seven (42.9%) patients had congenital heart disease on echocardiography evaluation, of which 11 (12.7%) were classified as major and 26 (30.2%) were minor. The sensitivity, specificity, positive predictive value, and negative predictive value of abnormal clinical and radiologic combined assessment for a major cardiac defect were 100% (95% confidence interval [CI], 0.76-1), 64% (95% CI, 0.61-0.64), 28% (95% CI, 0.22-0.29), and 100% (95% CI, 0.94-1.00), respectively. Conclusions: Normal clinical and radiologic examination predicted absence of a significant cardiac abnormality on echocardiography in 100% of cases. We conclude that routine echocardiography before embarking on surgical intervention may not always be necessary but should be reserved for infants with abnormal clinical and/or radiologic findings. abstract_id: PUBMED:6804616 Screening for latent malformations: cost effectiveness in neonates with correctable anomalies. A screening program for latent malformations in infants born with surgically correctable anomalies was reviewed to determine its cost effectiveness. Two hundred and seventy-six infants with esophageal atresia, imperforate anus, omphalocele, gastroschisis, or diaphragmatic hernia were screened for latent congenital anomalies not detected by the routine history, physical examination, and roentgenograms. While additional malformations were detected, many congenital defects were missed only to become evident later in the infant's course. Routine screening for latent malformations is not cost effective in all infants with surgically correctable anomalies, but directed screening is indicated in selected neonates. Screening IVPs are indicated in patients with esophageal atresia, high pouch imperforate anus and possibly diaphragmatic hernia. Screening IVPs are not indicated in infants with gastroschisis, omphalocele, or females with low pouch imperforate anus who have normal sacral spine films.
abstract_id: PUBMED:9215761 The spectrum of congenital anomalies of the VATER association: an international study. The spectrum of the VATER association has been debated ever since its description more than two decades ago. To assess the spectrum of congenital anomalies associated with VATER while minimizing the distortions due to small samples and referral patterns typical of clinical series, we studied infants with VATER association reported to the combined registry of infants with multiple congenital anomalies from 17 birth defects registries worldwide that are part of the International Clearinghouse for Birth Defects Monitoring Systems (ICB-DMS). Among approximately 10 million infants born from 1983 through 1991, the ICB-DMS registered 2,295 infants with 3 or more of 25 unrelated major congenital anomalies of unknown cause. Of these infants, 286 had the VATER association, defined as at least three of the five VATER anomalies (vertebral defects, anal atresia, esophageal atresia, renal defects, and radial-ray limb deficiency), when we expected 219 (P<0.001). Of these 286 infants, 51 had at least four VATER anomalies, and 8 had all five anomalies. We found that preaxial but not other limb anomalies were significantly associated with any combination of the four nonlimb VATER anomalies (P<0.001). Of the 286 infants with VATER association, 214 (74.8%) had additional defects. Genital defects, cardiovascular anomalies, and small intestinal atresias were positively associated with VATER association (P<0.001). Infants with VATER association that included both renal anomalies and anorectal atresia were significantly more likely to have genital defects. Finally, a subset of infants with VATER association also had defects described in other associations, including diaphragmatic defects, oral clefts, bladder exstrophy, omphalocele, and neural tube defects. These results offer evidence for the specificity of the VATER association, suggest the existence of distinct subsets within the association, and raise the question of a common pathway for patterns of VATER and other types of defects in at least a subset of infants with multiple congenital anomalies. abstract_id: PUBMED:12015653 Gastrointestinal malformations in Funen county, Denmark--epidemiology, associated malformations, surgery and mortality. Aim: To report the epidemiology, associated malformations, morbidity and mortality for the first 5 years of life for infants with gastrointestinal malformations (GIM). Methods: Population-based study using data from a registry of congenital malformations (Eurocat) and follow-up data from hospital records. The study included livebirths, fetal deaths with a gestational age of 20 weeks and older and induced abortions after prenatal diagnosis of malformations born during the period 1980-1993. Results: A total of 109 infants/fetuses with 118 GIM were included in the study giving a prevalence of 15.3 (12.6-18.5) cases per 10 000 births. Anal atresia was present in seven of the 9 cases with more than one GIM. There were 38 cases (35 %) with associated malformations and/or karyotype anomalies. Thirty-two of the 90 live-born infants died during the first 5 years of life with the majority of deaths during the first week of life. Mortality was significantly increased for infants with associated malformations or karyotype anomalies compared to infants with isolated GIM (p < 0.01). An uneventful surgical course was reported for 74 % of the 58 survivors.
Conclusions: The prognosis for infants with GIM is highly dependent on the presence of associated malformations or karyotype anomalies. Surgery for GIM can be performed with low mortality. Morbidity is high for a small group of infants, but the majority of survivors have an uncomplicated surgical course. abstract_id: PUBMED:22250027 Maternal asthma medication use and the risk of selected birth defects. Objectives: Approximately 4% to 12% of pregnant women have asthma; few studies have examined the effects of maternal asthma medication use on birth defects. We examined whether maternal asthma medication use during early pregnancy increased the risk of selected birth defects. Methods: National Birth Defects Prevention Study data for 2853 infants with 1 or more selected birth defects (diaphragmatic hernia, esophageal atresia, small intestinal atresia, anorectal atresia, neural tube defects, omphalocele, or limb deficiencies) and 6726 unaffected control infants delivered from October 1997 through December 2005 were analyzed. Mothers of cases and controls provided telephone interviews of medication use and additional potential risk factors. Exposure was defined as maternal periconceptional (1 month prior through the third month of pregnancy) asthma medication use (bronchodilator or anti-inflammatory). Associations between maternal periconceptional asthma medication use and individual major birth defects were estimated by using adjusted odds ratios (aOR) and 95% confidence intervals (95%CI). Results: No statistically significant associations were observed for maternal periconceptional asthma medication use and most defects studied; however, positive associations were observed between maternal asthma medication use and isolated esophageal atresia (bronchodilator use: aOR = 2.39, 95%CI = 1.23, 4.66), isolated anorectal atresia (anti-inflammatory use: aOR = 2.12, 95%CI = 1.09, 4.12), and omphalocele (bronchodilator and anti-inflammatory use: aOR = 4.13, 95%CI = 1.43, 11.95). Conclusions: Positive associations were observed for anorectal atresia, esophageal atresia, and omphalocele and maternal periconceptional asthma medication use, but not for other defects studied. It is possible that observed associations may be chance findings or may be a result of maternal asthma severity and related hypoxia rather than medication use. abstract_id: PUBMED:22009544 What must the (abdominal) surgeon know about paediatric surgery - paediatric surgical aspects in general (abdominal) surgery Due to the advances in neonatal intensive care medicine, prenatal ultrasound-guided diagnostic measures and paediatric surgical options, conditions have been established to achieve long-term survival in newborns with severe diseases. In addition, this means that the "non-paediatric" physician can be increasingly confronted with patients who would not have survived childhood some decades ago. Therefore, the article summarises concisely selected diseases of premature infants and newborns, e. g., congenital abdominal wall defects, and outlines possible long-term consequences based on the surgical interventions and their basic diseases, respectively, which need to be adequately cared for in the case of a surgical disease of the former patient of paediatric surgery. The overview cannot be considered as a complete revision course; however, it might constitute a basic outline for thought-provoking impulses for personal professional skills and expertise in managing such patients in later age from a surgical perspective. 
abstract_id: PUBMED:13511214 Emergency operations in the newborn. With the present-day development and understanding of anesthetic methods, fluid and electrolyte therapy, antibiotic medications and pediatric care, many congenital anomalies once uniformly fatal are now being successfully treated by emergency operations in the neonatal period. The eight most common of these which demand emergency operation in the immediate postnatal period are esophageal atresia and tracheoesophageal fistula, diaphragmatic hernia with dislocation of the abdominal viscera into the chest, malrotation of the intestine with obstruction, intestinal atresia, meconium ileus, imperforate anus, omphalocele and myelomeningocele. Although infants born with any of these serious problems often are born prematurely and often have more than one congenital anomaly, survival rates in the surgical treatment of these conditions are steadily improving. Early diagnosis and prompt treatment are the most important factors in the continued improvement of these survival rates. abstract_id: PUBMED:19524742 Conflicts in wound classification of neonatal operations. Background/purpose: This study sought to determine the reliability of wound classification guidelines when applied to neonatal operations. Methods: This study is a cross-sectional web-based survey of pediatric surgeons. From a random sample of 22 neonatal operations, participants classified each operation as "clean," "clean-contaminated," "contaminated," or "dirty or infected," and specified duration of perioperative antibiotics as "none," "single preoperative," "24 hours," or ">24 hours." Unweighted kappa score was calculated to estimate interrater reliability. Results: Overall interrater reliability for wound classification was poor (kappa = 0.30). The following operations were classified as clean: pyloromyotomy, resection of sequestration, resection of sacrococcygeal teratoma, oophorectomy, and immediate repair of omphalocele; as clean-contaminated: Ladd procedure, bowel resection for midgut volvulus and meconium peritonitis, fistula ligation of tracheoesophageal fistula, primary esophageal anastomosis of esophageal atresia, thoracic lobectomy, staged closure of gastroschisis, delayed repair and primary closure of omphalocele, perineal anoplasty and diverting colostomy for imperforate anus, anal pull-through for Hirschsprung disease, and colostomy closure; and as dirty: perforated necrotizing enterocolitis. Conclusions: There is poor consensus on how neonatal operations are classified based on contamination. An improved classification system will provide more accurate risk assessment for development of surgical site infections and identify neonates who would benefit from antibiotic prophylaxis. abstract_id: PUBMED:8013895 Comparative epidemiology of selected midline congenital abnormalities. We present comparative epidemiologic characteristics of five congenital abnormalities that have been suggested to result from midline abnormal developmental disturbances: esophageal atresia with or without tracheoesophageal fistula (EA/TEF), imperforate anus with or without fistula (IA/F), omphalocele (OM), bladder exstrophy (BE), and diaphragmatic hernia (DH). The purpose was to assess the extent of epidemiologic similarities among these five defects. Data were collected as part of a population-based case-control study of infants with these defects born to mothers residing in Maryland, Washington, D.C., or Northern Virginia from 1980 through 1987.
The estimated annual birth prevalences (per 10,000 live births) and 95% confidence intervals (CI) of these five defects were 0.40 (0.26-0.61) for BE, 1.34 (1.08-1.67) for OM, 1.59 (1.29-1.95) for DH, 2.11 (1.76-2.53) for EA/TEF, and 2.97 (2.55-3.46) for IA/F. The birth prevalence of IA/F and DH increased between 1980 and 1987. In contrast to the other four defects, DH showed a significant male preponderance (rate ratio 1.57, 95% CI 1.03-2.47), a significant white excess (rate ratio white:other, 1.56, 95% CI 1.00-2.48), and a lower proportion of multiple associated defects (30% vs. 46-61%). We concluded from this study that the descriptive epidemiology of diaphragmatic hernia is different from that of the other four defects. This finding may imply differences in etiologic and pathogenetic mechanisms underlying DH. abstract_id: PUBMED:2748254 Selected midline defect associations: a population study. Using data from the population-based Metropolitan Atlanta Congenital Defects Program, the association of seven relatively common and easily ascertainable groups of midline defects was studied. These defects were neural tube defects (575 patients), oral clefts (633 patients), omphalocele (141 patients), esophageal atresia/tracheoesophageal fistula (88 patients), imperforate anus (151 patients), conotruncal heart defects (289 patients), and diaphragmatic hernia (75 patients). Known syndromes were excluded from the analysis. Of 1743 infants with at least one midline defect, 86 (4.9%) had at least a second midline defect, and 9 (0.5%) had two additional midline defects. Pairwise analysis of the seven defects shows that, although most midline defects tend to be statistically associated with other midline defects, specific combinations of midline defects are seen. For example, neural tube defects are more strongly associated with cleft lip with or without cleft palate than with cleft palate alone; imperforate anus is more strongly associated with spina bifida than with anencephaly or encephalocele. Moreover, some combinations of defects are not observed (eg, neural tube defect and conotruncal heart defect, clefts and diaphragmatic hernia, omphalocele and esophageal atresia/tracheoesophageal fistula). These data point to the need for further refinement in the study of the association of midline defects in terms of embryologic and pathogenetic mechanisms because most midline defects tend to occur as an isolated defect, some midline defects occur with nonmidline defects (such as limb defects), and specific associations among midline defects are observed. Answer: Routine preoperative 2-dimensional echocardiography may not always be necessary for infants with esophageal atresia (EA), omphalocele, or anorectal malformations (ARM). A study found that normal clinical and radiologic examination predicted the absence of a significant cardiac abnormality on echocardiography in 100% of cases. Therefore, it was concluded that routine echocardiography before surgical intervention may not be necessary but should be reserved for infants with abnormal clinical and/or radiologic findings (PUBMED:20438917). This suggests that appropriate patient selection based on clinical and radiological assessment can potentially avoid unnecessary echocardiography in this patient population.
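(Illustrative note: the screening statistics in this answer reduce to a 2x2 table. A minimal sketch of the computation; the cell counts are back-calculated from the abstract's percentages (11 major defects among 86 infants, no false negatives) and are an assumption for illustration, not figures reported by the study.)

```python
def screening_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from 2x2 screening counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts consistent with the abstract: all 11 major defects flagged by
# abnormal clinical/radiologic assessment, 27 false positives, 48 true
# negatives (back-calculated, not reported).
print(screening_stats(tp=11, fp=27, fn=0, tn=48))
# -> sensitivity 1.00, specificity 0.64, PPV ~0.29, NPV 1.00
```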
Instruction: Infections after fiducial marker implantation for prostate radiotherapy: are we underestimating the risks? Abstracts: abstract_id: PUBMED:25451675 Fiducial marker implantation in prostate radiation therapy: complication rates and technique. Purpose: This study aims to report the complication rate from the transrectal ultrasound-guided implantation of gold seed markers in prostate radiotherapy and to describe the technique used. Materials And Methods: Between May 2010 and December 2012, 169 patients with localized prostate cancer had an intraprostatic fiducial marker implantation under transrectal ultrasound guidance. The procedure included prophylactic antibiotic therapy and a Fleet enema, with implantation performed by trained radiation oncologists at our center prior to image-guided radiotherapy. Toxicity occurring between implantation and the subsequent radiotherapy start date was assessed. The following parameters were analyzed via medical chart review: antibiotic therapy, anticoagulant interruption, bleeding, pain, prostate volume, number of markers implanted, post-implantation complications and delay before starting radiotherapy. Results: Of the 169 men, 119 (70.4%) underwent insertion of 4 fiducial markers and the other 50 (29.6%) had 3. The procedure was well tolerated. There was no interruption of the implantation with regard to pain or hemorrhage. No grade 3 or 4 complications were observed. The seed migration rate was 0.32% (2 of 626 implanted markers). Mean prostate volume was 38 cm³ (range: 10-150 cm³). Two patients (1.18%) developed a urinary tract infection following the procedure: prostate volumes of 25 and 65 cm³, four gold seed markers implanted, urinary tract infection resistant to prophylactic antibiotherapy, and treated with antibiotics specific to their infection as determined on urine culture. Conclusion: Transrectal fiducial marker implantation for image-guided radiotherapy in prostate cancer is a well-tolerated procedure without major associated complications. abstract_id: PUBMED:37080858 Infection after prostatic transrectal fiducial marker implantation for image guided radiation therapy. Purpose: The aim of this retrospective study is to assess the risk of infection after transrectal ultrasound-guided fiducial marker insertion for image-guided radiotherapy of prostate cancer. Material And Methods: Between January 2016 and December 2020, 829 patients scheduled for intensity-modulated radiotherapy for prostate cancer had an intraprostatic fiducial marker transrectal implantation under ultrasound guidance by radiation oncologists specialized in brachytherapy. Patients received standard oral antibiotic prophylaxis with a quinolone. If Gram-negative bacteria resistant to quinolone were detected at the time of the prostate cancer biopsies, the antibioprophylaxis regimen was modified accordingly. The quinolone-resistance screening test was not repeated before fiducial marker insertion. Infectious complications were assessed with questionnaires at the time of CT planning and by medical record review. Toxicity was evaluated according to CTCAE v5.0. Results: The median time between fiducial marker implantation and evaluation was 10 days (range: 0-165 days). Four patients (0.48%) developed urinary tract infection related to the procedure, mostly with Gram-negative bacteria resistant to quinolone (75%). Three had a grade 2 infection, and one patient experienced a grade 3 urosepsis.
The quinolone-resistance status was known for two patients (one positive and one negative) and was unknown for the other two patients prior to fiducial marker implantation. Conclusion: Intraprostatic transrectal fiducial marker implantation for image-guided radiotherapy is well tolerated with a low rate of infection. With such a low rate of infection, there is no need to repeat the search for Gram-negative bacteria resistant to quinolone before fiducial marker implantation if it was done at the time of prostate biopsies. Optimal antibioprophylaxis should be adapted to the known status of Gram-negative bacteria resistant to quinolone. abstract_id: PUBMED:25890179 Infections after fiducial marker implantation for prostate radiotherapy: are we underestimating the risks? Background: The use of gold fiducial markers (FM) for prostate image-guided radiotherapy (IGRT) is standard practice. Published literature suggests low rates of serious infection (0-1.3%) following this procedure, but this may be an underestimate. We aim to report on the infection incidence and severity associated with the use of transrectally implanted intraprostatic gold FM. Methods: Three hundred and fifty-nine patients who underwent transrectal FM insertion between January 2012 and December 2013 were assessed retrospectively via a self-reported questionnaire. All had standard oral fluoroquinolone antibiotic prophylaxis. The patients were asked about infective symptoms and the treatment received, including antibiotics and/or related hospital admissions. Potential infective events were confirmed through medical records. Results: 285 patients (79.4%) completed the questionnaire. 77 (27.0%) patients experienced increased urinary frequency and dysuria, and 33 patients (11.6%) reported episodes of chills and fevers after the procedure. 22 patients (7.7%) reported receiving antibiotics for urinary infection and eight patients (2.8%) reported hospital admission for urosepsis related to the procedure. Conclusion: The overall rate of symptomatic infection with FM implantation in this study is 7.7%, with one third requiring hospital admission. This exceeds the reported rates in other FM implantation series, but is in keeping with the larger prostate biopsy literature. Given the higher than expected complication rate, a risk-adaptive approach may be helpful. Where higher accuracy is important, such as in stereotactic prostate radiotherapy, the benefits of FM may still outweigh the risks. For others, a non-invasive approach for prostate IGRT such as cone-beam CT could be considered. abstract_id: PUBMED:30361924 Fiducial markers implantation for prostate image-guided radiotherapy: a report on the transperineal approach. Introduction: In external beam radiation therapy for prostate cancer, daily gland displacement can lead to missing the target. Intra-prostatic gold fiducial markers for daily prostate position verification and correction before and during treatment delivery (image-guided radiotherapy, IGRT) are widely used in radiation therapy centers to accurately target the prostate. Usually, the fiducial markers are implanted through the rectum, with complications such as infections and rectal bleeding. We report our experience with prostate fiducial marker implantation through a transperineal approach. Patients And Methods: Between September 2011 and January 2018 at our center, 101 patients underwent gold seed fiducial marker transperineal ultrasound-guided implantation for prostate IGRT.
We retrospectively reviewed their features and outcomes. Twenty-two (21.8%) patients had previously undergone a transurethral prostate resection (TURP) for obstructive urinary symptoms because of benign prostatic hypertrophy. No antibiotic prophylaxis was used. Results: The procedure was well tolerated. In one patient, a single episode of self-limiting urinary bleeding occurred immediately after the procedure. No other complication was recorded. All the patients, at the evaluation before discharge, reported no pain or dysuria. No rectal bleeding, hematospermia, urinary obstruction or infection was reported in the following days. No marker loss or migration occurred. Discussion And Conclusion: According to our experience, prostate fiducial marker implantation through a transperineal approach is safe and should be recommended to limit the use of antibiotic therapy and patient morbidity. A previous TURP was not related to a higher risk of seed loss. abstract_id: PUBMED:30648061 Low Infection Rate After Transrectal Implantation of Gold Anchor™ Fiducial Markers in Prostate Cancer Patients After Non-broad-spectrum Antibiotic Prophylaxis. Background In 621 consecutive prostate cancer patients, the frequency of urinary tract infections (UTI) and marker loss was evaluated. They prophylactically received a single dose of non-broad-spectrum antibiotics and transrectal implantation of three thin-needle fiducial markers, Gold Anchor™ (GA). Methods The occurrence of UTIs, sepsis, hospitalization due to infection, and marker loss after implantation was assessed from the medical records containing notes from physicians and nurses from the day of implantation to the end of 29 fractions. Results UTIs occurred in two (0.3%) of the 621 patients. Neither sepsis nor hospitalization was noted. Loss/drop-out of three markers was noted among the 1,863 markers implanted. Conclusion The use of thin needles for the implantation of fiducials appears to reduce the rate of infection despite the use of a single dose of non-broad-spectrum antibiotics as prophylaxis. The marker construct appears to provide stability in the tissues. abstract_id: PUBMED:28154882 Transperineal gold marker implantation for image-guided external beam radiotherapy of prostate cancer: A single institution, prospective study. Purpose: To present the feasibility and complications of transperineal fiducial marker implantation in prostate cancer patients undergoing image-guided radiotherapy (IGRT). Methods And Materials: Between November 2011 and April 2016, three radiopaque, gold-plated markers were transperineally implanted into the prostate of 300 patients under transrectal ultrasound guidance and with local anaesthesia. A week after the procedure patients filled in a questionnaire regarding pain, dysuria, urinary frequency, nocturia, rectal bleeding, hematuria, hematospermia or fever symptoms caused by the implantation. Pain was scored on a 1-10 scale, where score 1 meant very weak and score 10 meant unbearable pain. The implanted gold markers were used for daily verification and online correction of patients' setup during IGRT. Results: Based on the questionnaires no patient experienced fever, infection, dysuria or rectal bleeding after implantation. Among the 300 patients, 12 (4%) had hematospermia and 43 (14%) hematuria, which lasted for an average of 3.4 and 1.8 days, respectively. The average pain score was 4.6 (range 0-9). Of the 300 patients, 87 (29%) felt pain after the intervention, which lasted an average of 1.5 days.
None of the patients needed analgesics after implantation. Overall, 105 patients (35%) reported less, 80 patients (27%) more, and 94 patients (31%) an equal amount of pain during marker implantation compared to biopsy. The 21 patients who had a biopsy performed under general anesthesia did not answer this question. Conclusion: Transperineal gold marker implantation under local anesthesia was well tolerated. Complications were limited; the rate and frequency of perioperative pain were comparable to the pain caused by biopsy. The method can be performed safely in clinical practice. abstract_id: PUBMED:26161566 A single dose of prophylactic antibiotic may be sufficient to prevent postprocedural infection in upper endosonography guided fiducial marker placement. Aim: Prophylactic antibiotic use after endoscopic ultrasound (EUS) guided fiducial marker placement is common practice to prevent infection. The optimal duration of antibiotic prophylaxis is unknown. The aim of this paper was to assess whether a one-time intraprocedural administration of a prophylactic antibiotic is sufficient to prevent infection after EUS-guided fiducial marker placement. Methods: A retrospective study was performed that included all adult patients who underwent EUS-guided fiducial marker placement over an 18-month period. Procedure-related infection was defined as any infection not directly attributable to any other cause within 30 days of the procedure. Patients followed up with the gastroenterology clinic at one week and with the radiation oncology clinic weekly after undergoing EUS-guided fiducial marker placement. Results: A total of 35 upper EUS-guided fiducial markers were placed during 20 procedures on 18 patients. The average age of patients was 59 years. There were 10 females and 8 males. All patients received one dose of cephalosporin, amoxicillin, clindamycin or levofloxacin. The fiducial markers were deployed in different organs. None of the patients developed any infections due to the procedure. Conclusion: This study suggests that one dose of intravenous antibiotic administered intraprocedurally is sufficient to prevent infection related to upper EUS-guided fiducial marker placement.
In addition, sex, age, body mass index (BMI), thoracic level, related pathology, and procedure-related complications were also recorded. Results: A total of 57 patients (24 females, 33 males) were included. Mean age was 58.6 ± 15.5 years. No complications during CT fluoroscopy-guided gold fiducial marker placement were recorded. Intraoperative localization was successful in all patients. Mean BMI was 32.98 kg/m² (range, 18.63-56.03 kg/m²), and 63% of patients were obese (>30 kg/m²). T7 (n = 11) was the most frequently marked vertebral body, followed by T10 (n = 10) and T6 (n = 7). The most cranial and most caudal levels marked were T2 and T12, respectively. Conclusion: Preoperative CT fluoroscopy-guided percutaneous gold fiducial marker placement is safe, feasible, and accurate. The resulting facilitated localization of the intended thoracic level of surgery can reduce the length of surgery and prevent wrong-level surgery. Further studies are needed to evaluate the effect on radiation exposure and to quantify the difference in operating room time. Level Of Evidence: 4. abstract_id: PUBMED:26054865 Hydrogel Spacer Prospective Multicenter Randomized Controlled Pivotal Trial: Dosimetric and Clinical Effects of Perirectal Spacer Application in Men Undergoing Prostate Image Guided Intensity Modulated Radiation Therapy. Purpose: Perirectal spacing, whereby biomaterials are placed between the prostate and rectum, shows promise in reducing rectal dose during prostate cancer radiation therapy. A prospective multicenter randomized controlled pivotal trial was performed to assess outcomes following absorbable spacer (SpaceOAR system) implantation. Methods And Materials: Overall, 222 patients with clinical stage T1 or T2 prostate cancer underwent computed tomography (CT) and magnetic resonance imaging (MRI) scans for treatment planning, followed by fiducial marker placement, and were randomized to receive spacer injection or no injection (control). Patients received postprocedure CT and MRI planning scans and underwent image guided intensity modulated radiation therapy (79.2 Gy in 1.8-Gy fractions). Spacer safety and impact on rectal irradiation, toxicity, and quality of life were assessed throughout 15 months. Results: Spacer application was rated as "easy" or "very easy" 98.7% of the time, with a 99% hydrogel placement success rate. Perirectal spaces were 12.6 ± 3.9 mm and 1.6 ± 2.0 mm in the spacer and control groups, respectively. There were no device-related adverse events, rectal perforations, serious bleeding, or infections within either group. Pre- to post-spacer plans showed a significant reduction in mean rectal V70 (12.4% to 3.3%, P<.0001). Overall acute rectal adverse event rates were similar between groups, with fewer spacer patients experiencing rectal pain (P=.02). A significant reduction in late (3-15 months) rectal toxicity severity in the spacer group was observed (P=.04), with a 2.0% and 7.0% late rectal toxicity incidence in the spacer and control groups, respectively. There was no late rectal toxicity greater than grade 1 in the spacer group. At 15 months, 11.6% and 21.4% of spacer and control patients, respectively, experienced 10-point declines in bowel quality of life. MRI scans at 12 months verified spacer absorption. Conclusions: Spacer application was well tolerated. Increased perirectal space reduced rectal irradiation, reduced rectal toxicity severity, and decreased rates of patients experiencing declines in bowel quality of life.
The spacer appears to be an effective tool, potentially enabling advanced prostate RT protocols. abstract_id: PUBMED:25260082 Use of gold radionuclide markers implanted into the prostate for image-guided radiotherapy in prostate cancer: side effects caused by the marker implantation. The purpose of the study was to introduce the use of gold radiopaque markers implanted into the prostate for image-guided radiotherapy of prostate cancer patients and to present the side effects caused by the marker implantation. Between November 2011 and November 2013, three radiopaque, gold-plated markers (Best Medical International, Springfield, VA, USA, 1.0 mm x 3.0 mm) were implanted transperineally into the prostate of 60 patients under transrectal ultrasound guidance. Local anaesthesia was performed in all patients. A week after the procedure the patients filled in a questionnaire regarding pain, dysuria, urinary frequency, nycturia, rectal bleeding, haematuria, haematospermia or fever symptoms caused by the implantation. The pain caused by the intervention was scored on a 1-10 scale, where 1 meant very weak and 10 unbearable pain. Ten days after the implantation a treatment planning CT was performed, and patients subsequently started intensity-modulated radiation therapy (IMRT) within one week. During the treatments, markers were used for daily verification and correction of the patient's setup. No patients experienced fever or infection. Based on the questionnaires, nobody experienced dysuria or rectal bleeding after implantation. Among the 60 patients studied, five (8%) had haematospermia and nine (15%) haematuria, which lasted an average of 3.4 and 1.8 days, respectively. The average pain score on the 1-10 scale was 4.2 (range: 0-9). After the marker implantation, 18 patients (30%) reported less, 10 patients (17%) more, and 27 patients (45%) an equal amount of pain compared to biopsy. Five patients, who had a biopsy performed under general anaesthesia, did not answer this question. None of the patients needed analgesics after implantation. Gold marker implantation for image-guided radiotherapy was well tolerated under local anaesthesia. Complications were limited; the rate and frequency of perioperative pain were comparable to the pain caused by biopsy. After implantation, the patients did not require analgesics. The method can be performed safely in clinical practice. Answer: The question of whether the risks of infection following fiducial marker implantation for prostate radiotherapy are underestimated is addressed in several studies. The studies present varying rates of infection and complications, suggesting that the risk may indeed be higher than some reports suggest. One study reported a symptomatic infection rate of 7.7%, with one-third of those cases requiring hospital admission for urosepsis; this exceeds the rates reported in other series of fiducial marker implantation (PUBMED:25890179). This suggests that the risk of infection may be underestimated in some reports. Another study found a very low rate of infection (0.48%) after transrectal ultrasound-guided fiducial marker insertion, with most infections caused by Gram-negative bacteria resistant to quinolone (PUBMED:37080858). This study suggests that the risk of infection is low and that there is no need to repeat the search for Gram-negative bacteria resistant to quinolone before fiducial marker implantation if it was done at the time of prostate biopsies.
A different study reported no major complications associated with transrectal fiducial marker implantation, with a seed migration rate of only 0.32% and a urinary tract infection rate of 1.18% following the procedure (PUBMED:25451675). This indicates that the procedure is generally well-tolerated with a low complication rate. In contrast, a study using a transperineal approach for fiducial marker implantation reported no complications such as infections or rectal bleeding, suggesting that this approach may be safer and should be recommended to limit the use of antibiotic therapy and patient morbidity (PUBMED:30361924). Another study evaluating the use of thin needles for the implantation of fiducials and a single dose of non-broad-spectrum antibiotics as prophylaxis found a very low rate of UTIs (0.3%) and no cases of sepsis or hospitalization, indicating that this method may reduce the rate of infection (PUBMED:30648061). Overall, the evidence suggests that while some studies report low rates of infection and complications, others indicate that the risks may be higher than previously thought. The use of different techniques, such as the transperineal approach, and the type of antibiotic prophylaxis may influence the rate of infection and complications.
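(Illustrative note: the infection rates compared in this answer come from cohorts of very different sizes, so interval estimates help put the contrast in context. A sketch using statsmodels to compute Wilson score intervals around the quoted rates; the event/denominator pairs are taken from the abstracts above, and this is an informal comparison, not a meta-analysis.)

```python
from statsmodels.stats.proportion import proportion_confint

# (infection events, cohort size) pairs from the abstracts cited above.
studies = {
    "PUBMED:25890179 self-reported, transrectal": (22, 285),
    "PUBMED:37080858 transrectal": (4, 829),
    "PUBMED:25451675 transrectal": (2, 169),
    "PUBMED:30648061 thin-needle transrectal": (2, 621),
}

for label, (events, n) in studies.items():
    low, high = proportion_confint(events, n, alpha=0.05, method="wilson")
    print(f"{label}: {events / n:.2%} (95% CI {low:.2%}-{high:.2%})")
```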
Instruction: Older adults: are they ready to adopt health-related ICT? Abstracts: abstract_id: PUBMED:21481631 Older adults: are they ready to adopt health-related ICT? Background: The proportion of older adults in the population is steadily increasing, causing healthcare costs to rise dramatically. This situation calls for the implementation of health-related information and communication technologies (ICT) to assist in providing more cost-effective healthcare to the elderly. In order for such a measure to succeed, older adults must be prepared to adopt these technologies. Prior research shows, however, that this population lags behind in ICT adoption, although some believe that this is a temporary phenomenon that will soon change. Objectives: To assess use by older adults of technology in general and ICT in particular, in order to evaluate their readiness to adopt health-related ICT. Method: We employed the questionnaire used by Selwyn et al. in 2000 in the UK, as well as a survey instrument used by Morris and Venkatesh, to examine the validity of the theory of planned behavior (TPB) in the context of computer use by older employees. 123 respondents answered the questions via face-to-face interviews, 63 from the US and 60 from Israel. SPSS 17.0 was used for the data analysis. Results: The results show that although there has been some increase in adoption of modern technologies, including ICT, most of the barriers found by Selwyn et al. are still valid. ICT use was determined by accessibility of computers and support and by age, marital status, education, and health. Health, however, was found to moderate the effect of age, healthier older people being far more likely to use computers than their unhealthy coevals. The TPB was only partially supported, since only perceived behavioral control (PBC) emerged as significantly affecting intention to use a computer, while age, contrary to the findings of Morris and Venkatesh, interacted differently for Americans and Israelis. The main reason for non-use was 'no interest' or 'no need', similar to findings from data collected in 2000. Conclusions: Adoption of technology by older adults is still limited, though it has increased as compared with results of the previous study. Modern technologies have been adopted (albeit selectively) by older users, who were presumably strongly motivated by perceived usefulness. Particularly worrying are the effects of health, PBC, and the fact that many older adults do not share the perception that ICT can significantly improve their quality of life. We therefore maintain that older adults are not yet ready to adopt health-related ICT. Health-related ICT for the elderly should be kept simple and demonstrate substantial benefits, and special attention should be paid to training and support and to specific personal and cultural characteristics. These are mandatory conditions for adoption by potential unhealthy and older consumers. abstract_id: PUBMED:36582385 Acceptance of digital health services among older adults: Findings on perceived usefulness, self-efficacy, privacy concerns, ICT knowledge, and support seeking. Background: Over the last decade, the rapid advancements in information and communication technologies (ICTs) have also driven the development of digital health services and applications. Older adults could particularly benefit from these technologies, but they still have less access to the Internet and less competence in using it. 
Based on the empirical literature on technology acceptance among older adults, this study examines the relations of perceived usefulness, self-efficacy, privacy concerns, ICT knowledge, and support seeking (family, informal, formal/institutional) with older adults' intention to adopt new digital health services. Methods: The study included 478 older adults who participated in an online or paper/pencil questionnaire (M = 70.1 years, SD = 7.8; 38% male). Sociodemographic characteristics, subjective health status, and variables related to technology acceptance were assessed. Results: Latent structural equation modeling revealed that higher perceived usefulness, higher self-efficacy regarding digital health technologies, and lower privacy concerns contributed to a higher intention to use digital health services among older adults. Contrary to our expectations, general ICT knowledge was not a significant predictor. Older adults who reported seeking more support regarding technology problems from family members and formal/institutional settings also reported higher usage intentions, whereas informal support was not as relevant. Furthermore, higher age was associated with higher perceived usefulness and lower self-efficacy. Discussion: Future studies should further explore mediating factors for intention and actual use of digital health services and develop educational programs including follow-up assessments. abstract_id: PUBMED:32951396 ICT as an instrument for social and emotional ageing. A qualitative study with older adults with cognitive impairments Inspired by theories from the field of social and emotional aging, we studied the use of ICTs by older adults with cognitive impairments. By means of qualitative interviews (N=30) with older adults with cognitive impairments and their relatives, we got a detailed picture of the role of ICTs in their daily lives.First, our data showed that older adults with cognitive impairments used ICTs to enhance their social and emotional wellbeing. This involved social interaction, feelings of belongingness, and engagement in hobbies and regular daily activities. Second, our research provided insight into the strategies applied when ICT use became too difficult, with a considerable role for the social network. When the network offered help upon request or proactively encouraged the older person, this increased the perception of control. This also applied to the indirect use of ICTs, when someone from the social network operated the devices. Denying the older person the use of ICTs undermined the perception of control.The findings provide insight into how the potential of ICT can be exploited for this target group. We end the paper with practical recommendations. abstract_id: PUBMED:36767082 Socialisation Agents' Use(fulness) for Older Consumers Learning ICT. This research investigates the socialisation agents older consumers use to learn about information and communication technologies (ICT). We surveyed 871 older consumers in Victoria, Australia, about whom they would most likely turn to for advice (i.e., their preferred socialisation agents) if they needed help using or fixing an ICT device. They were asked to identify the most and second most likely source of advice. Participants were also asked to assess the usefulness of the advice received from their preferred agents and to estimate their level of ICT knowledge. The findings reveal that older consumers tend to rely on younger family members. 
Still, the agency they receive from non-familial sources is essential when preparing for a digital consumer role. Surprisingly, ICT knowledge is determined by the socialisation agency received by older adults' second advice option, which is less likely to be their own adult children. This research expands current knowledge about how older consumers perceive various ICT socialisation agents. Consumer socialisation theory suggests that socialisation agents impact how consumers function in the marketplace. Although the first choice of socialisation agent may be perceived as beneficial for older adults, the advice given does not relate to marketplace functioning regarding improved ICT knowledge. abstract_id: PUBMED:37818293 Information communication technology accessibility and mental health for older adults during the coronavirus disease in South Korea. Introduction: As society ages and the digital economy continues to develop, accessibility to information and communication technology (ICT) has emerged as a critical factor influencing the mental health of older adults. Particularly, in the aftermath of the COVID-19 pandemic, the need for non-face-to-face communication has significantly increased older adults' reliance on ICT for accessibility. This transition from a self-motivated engagement to a more socially passive mode of interaction highlights the importance of creating a digitally inclusive aging society. Methods: This empirical study used pooled cross-sectional data from the Digital Gap Survey conducted in South Korea in 2018 and 2020. It aimed to analyze the association between ICT accessibility and the mental health of older adults during the COVID-19 pandemic. Results: A significant positive relationship was found between ICT and mental health among older adults in South Korea. However, this positive association weakened during the COVID-19 period. Furthermore, the analysis revealed heterogeneity among older adults by age, sex, and place of residence, with older females in their 70s living in rural areas experiencing the greatest weakening. Discussion: These results highlight the need for tailored interventions and support mechanisms for specific demographic groups of older adults. We recommend that the South Korean government implement various policies to facilitate the post-COVID-19 digital landscape. These include initiatives such as ICT-related education programs, development of user-friendly e-government systems, and creation of social media platforms designed to accommodate the needs and preferences of older adults. abstract_id: PUBMED:32069853 Elderly's Attitude towards the Selected Types of e-Health. This study sought to explore how older adults' adaptation of information and communication technology (ICT) devices was associated with their preference for e-Health services. A total of 224 Czech older adults aged 60+ were analyzed for the study. The sample comprised 21% male and 79% female. A self-reported survey questionnaire was employed to assess the prevalence of the use of ICT devices and the Internet and general preference for e-Health services. A series of t-tests were performed between and within two groups divided into e-Health supporters and non-supporters. The results indicated that nearly half of the respondents preferred to use the Internet for searching for health-related information. We found that older adults' use of ICT devices and educational level were significantly associated with the selection of the e-Health services.
However, gender, household type, and place of residence did not account for additional variance in the preferred e-Health services. For those who express willingness to receive the e-Health service, the preferred e-Health services should be implemented across relevant health domains. To do so, health professionals ought to provide the necessary equipment and educational programs that help older adults better access and adapt to e-Health services. abstract_id: PUBMED:36612373 Co-Creating ICT Risk Strategies with Older Australians: A Workshop Model. As digital inclusion becomes a growing indicator of wellbeing in later life, the ability to understand older adults' preferences for information and communication technologies (ICTs) and develop strategies to support their digital literacy is critical. The barriers older adults face include their perceived ICT risks and capacity to learn. Complexities, including ICT environmental stressors and societal norms, may require concerted engagement with older adults to achieve higher digital literacy competencies. This article describes the results of a series of co-design workshops to develop strategies for increased ICT competencies and reduced perceived risks among older adults. Engaging older Australians in three in-person workshops (each workshop consisting of 15 people), this study adapted the "Scenario Personarrative Method" to illustrate the experiences of people with technology and rich pictures of the strategies seniors employ. Through the enrichment of low-to-high-digital-literacy personas and mapping workshop participant responses to several scenarios, the workshops contextualized the different opportunities and barriers seniors may face, offering a useful approach toward collaborative strategy development. We argued that in using co-designed persona methods, scholars can develop more nuance in generating ICT risk strategies that are built with and for older adults. By allowing risks to be contextualized through this approach, we illustrated the novelty of adapting the Scenario Personarrative Method to provide insights into perceived barriers and to build skills, motivations, and strategies toward enhancing digital literacy. abstract_id: PUBMED:35010408 The Role of Information and Communication Technology (ICT) for Older Adults' Decision-Making Related to Health, and Health and Social Care Services in Daily Life-A Scoping Review. Information and communication technology (ICT) can potentially support older adults in making decisions and increase their involvement in decision-making processes. Although the range of technical products has expanded in various areas of society, knowledge is lacking on the influence that ICT has on older adults' decision-making in everyday situations. Based on the literature, we aimed to provide an overview of the role of ICT in home-dwelling older adults' decision-making in relation to health, and health and social care services. A scoping review of articles published between 2010 and 2020 was undertaken by searching five electronic databases. Finally, 12 articles using qualitative, quantitative, and mixed-method designs were included. The articles were published in journals representing biology and medicine, nursing, informatics, and computer science. A majority of the articles were published in the last five years, and most articles came from European countries.
The results are presented in three categories: (i) form and function of ICT for decision-making, (ii) perceived value and effect of ICT for decision-making, and (iii) factors influencing ICT use for decision-making. According to our findings, ICT for decision-making in relation to health, and health and social care services was more implicitly described than explicitly described, and we conclude that more research on this topic is needed. Future research should engage older adults and health professionals in developing technology based on their needs. Further, factors that influence older adults' use of ICT should be evaluated to ensure that it is successfully integrated into their daily lives. abstract_id: PUBMED:33183083 Addressing elderly loneliness with ICT Use: the role of ICT self-efficacy and health consciousness. With an increasing aging population worldwide, loneliness among elderly individuals has become a salient societal problem. Fortunately, the last decade has also witnessed an upsurge in information and communication technology (ICT), which is ubiquitously deployed and integrated into our daily lives, including the lives of elderly people. This research investigates the potential exploitation of well-developed ICT to mitigate loneliness among the elderly. Specifically, we examined the effects of two dimensions of ICT use: communication use and information use. Moreover, we examined the moderating effects of two relevant features in the elderly population, namely, ICT self-efficacy and health consciousness. We applied structural equation modeling (SEM) to evaluate survey data from mainland China comprising 436 effective responses from the elderly population. We find that ICT use has a positive effect on reducing loneliness among the elderly, and our results support and deepen this understanding, indicating that ICT self-efficacy and health consciousness can moderate the relationship between ICT use and loneliness. Our findings suggest that ICT use plays a significant role in mitigating elderly loneliness. Moreover, it is also suggested that the characteristics of ICT self-efficacy and health consciousness for the elderly can influence the relationship between their ICT use and loneliness. This gives a more accurate description, as compared with the main findings in prior literature, that ICT can help mitigate loneliness in the elderly. Finally, by adopting social cognitive theory, our research explains the moderating effect of ICT self-efficacy and health consciousness between the use of ICT by the elderly and their loneliness. abstract_id: PUBMED:35627939 ICT and Women's Health: An Examination of the Impact of ICT on Maternal Health in SADC States. Attainment of sexual and reproductive health is regarded as a human rights matter. Notwithstanding this, maternal mortality continues to be a major public health concern in low-income countries, especially those in sub-Saharan Africa. Maternal mortality remains high in Africa, yet there are information communication technologies (ICTs) (such as the internet, mobile communication, social media, and community radios) that have the potential to make a difference. Making effective use of all of these ICTs can considerably decrease preventable maternal deaths. ICTs, particularly mobile devices, offer a platform for access to health information and services that can bring change in areas where health infrastructure and resources are often limited.
However, for Southern Africa, maternal mortality remains high despite the presence of ICT tools that have transformative potential to improve maternal health. In light of this, this study sought to examine the impact of ICT on maternal health. The study was quantitative in nature, and it used panel data that covered the period from 2000 to 2018. The Mean Group and Pooled Mean Group cointegration techniques and a generalised method of moments panel technique were used for estimation purposes. Results showed that ICT has a negative effect on maternal mortality, indicating that ICT tools contribute positively to maternal health. The study gave a number of recommendations. The mobile gender gap should be closed (digital inclusion), mobile network connectivity boosted, and digital platforms created in order to enhance the transformative potential of ICT in improving health outcomes. Answer: The adoption of health-related information and communication technologies (ICT) by older adults is a complex issue influenced by various factors. While there has been some increase in the adoption of modern technologies by older adults, the overall readiness for adopting health-related ICT among this population is still limited (PUBMED:21481631). Factors such as perceived usefulness, self-efficacy, privacy concerns, ICT knowledge, and support seeking behavior play significant roles in older adults' intention to adopt new digital health services (PUBMED:36582385). Older adults with cognitive impairments have been found to use ICTs to enhance their social and emotional wellbeing, suggesting that with the right support, ICT can be beneficial for this group (PUBMED:32951396). However, the socialisation agents that older consumers rely on to learn about ICT, such as younger family members and non-familial sources, are crucial for their understanding and use of these technologies (PUBMED:36767082). Accessibility to ICT has emerged as a critical factor influencing the mental health of older adults, especially during the COVID-19 pandemic, when non-face-to-face communication became more prevalent (PUBMED:37818293). The attitude of the elderly towards e-Health services is also influenced by their adaptation of ICT devices and educational level, indicating that targeted educational programs could enhance their access to and adaptation of e-Health services (PUBMED:32069853). Co-creating ICT risk strategies with older adults through workshops has been suggested as a method to increase their competencies and reduce perceived risks associated with ICT use (PUBMED:36612373). Moreover, ICT can potentially support older adults in making decisions related to health and health and social care services, but more research is needed to understand how ICT influences their decision-making in everyday situations (PUBMED:35010408). Lastly, ICT self-efficacy and health consciousness have been identified as moderating factors that can influence the relationship between ICT use and loneliness among the elderly, suggesting that ICT use can play a significant role in mitigating loneliness if these factors are considered (PUBMED:33183083). In conclusion, while there is potential for older adults to adopt health-related ICT, their readiness is contingent upon addressing factors such as perceived usefulness, self-efficacy, and privacy concerns, and on providing adequate support and education.
Tailored interventions and support mechanisms are recommended to facilitate the adoption of ICT by older adults, taking into account their personal and cultural characteristics (PUBMED:21481631).
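The acceptance findings above (PUBMED:36582385) come from latent structural equation modeling of usage intention on perceived usefulness, self-efficacy, and privacy concerns. As a rough, non-authoritative analogue of that model, the sketch below fits an ordinary least-squares regression in Python on synthetic data; the variable names and effect sizes are invented for illustration, and only the sample size is borrowed from the abstract.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 478  # sample size borrowed from PUBMED:36582385; everything else is synthetic

usefulness = rng.normal(0.0, 1.0, n)
self_efficacy = rng.normal(0.0, 1.0, n)
privacy_concerns = rng.normal(0.0, 1.0, n)
# Invented effect directions matching the reported findings:
# usefulness and self-efficacy raise intention, privacy concerns lower it.
intention = (0.5 * usefulness + 0.4 * self_efficacy
             - 0.3 * privacy_concerns + rng.normal(0.0, 1.0, n))

X = sm.add_constant(np.column_stack([usefulness, self_efficacy, privacy_concerns]))
fit = sm.OLS(intention, X).fit()
print(fit.params)   # regression coefficients
print(fit.pvalues)  # corresponding p-values

A full SEM would additionally model the predictors as latent constructs measured by questionnaire items, but the regression captures the direction of the reported associations.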
Instruction: Does slowed processing speed account for executive deficits in multiple sclerosis? Abstracts: abstract_id: PUBMED:25133903 Does slowed processing speed account for executive deficits in multiple sclerosis? Evidence from neuropsychological performance and structural neuroimaging. Objective: Executive deficits and slow processing speed (PS) are observed in persons with multiple sclerosis (MS). The question of whether executive deficits can be explained by slow PS was examined with neuropsychological measures and a neurostructural measure (brain atrophy). Method: Fifty MS patients were compared with 28 healthy controls (HCs) on tasks of executive functioning with and without a PS element (e.g., Trail Making Test and Wisconsin Card Sorting Test). Results: The MS group performed worse than HCs on speeded tasks of executive function. However, after controlling for speed, group differences on executive tasks disappeared. There were also no group differences on executive tasks with no PS demands. The effect of disease progression on executive task performance was assessed in the MS group. Higher atrophy in MS participants was associated with greater deficits on speeded executive tasks, but this association disappeared when controlling for PS. There was no association between atrophy and performance on nonspeeded executive tasks. Conclusions: Our results support the notion that executive deficits in MS may be explained by slow PS. These findings highlight the role of slowed PS as a primary impairment underlying other cognitive functions. Disentangling the relative contribution of PS to executive function is an important step toward the development of appropriate rehabilitation strategies for persons with MS. abstract_id: PUBMED:26010017 Information processing speed and attention in multiple sclerosis: Reconsidering the Attention Network Test (ANT). Objective: The Attention Network Test (ANT) assesses attention in terms of discrepancies between response times to items that differ in the burden they place on some facet of attention. However, simple arithmetic difference scores commonly used to capture these discrepancies fail to provide adequate control for information processing speed, leading to distorted findings when patient and control groups differ markedly in the speed with which they process and respond to stimulus information. This study examined attention networks in patients with multiple sclerosis (MS) using simple difference scores, proportional scores, and residualized scores that control for processing speed through statistical regression. Method: Patients with relapsing-remitting (N = 20) or secondary progressive (N = 20) MS and healthy controls (N = 40) of similar age, education, and gender completed the ANT. Results: Substantial differences between patients and controls were found on all measures of processing speed. Patients exhibited difficulties in the executive control network, but only when difference scores were considered. When deficits in information processing speed were adequately controlled using proportional or residualized score, deficits in the alerting network emerged. The effect sizes for these deficits were notably smaller than those for overall information processing speed and were also limited to patients with secondary progressive MS. Conclusions: Deficits in processing speed are more prominent in MS than those involving attention, and when the former are properly accounted for, differences in the latter are confined to the alerting network. 
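The ANT study above (PUBMED:26010017) contrasts three ways of scoring an attentional cost so that overall slowing does not masquerade as an attention deficit. A minimal sketch of that arithmetic, using synthetic response times (all numbers hypothetical):

import numpy as np

rng = np.random.default_rng(1)
n = 40  # hypothetical number of participants

# Synthetic response times in milliseconds.
congruent = rng.normal(600.0, 80.0, n)
incongruent = congruent * 1.15 + rng.normal(0.0, 20.0, n)

# 1) Simple difference score: larger for anyone who is globally slower.
difference = incongruent - congruent

# 2) Proportional score: expresses the conflict cost relative to baseline speed.
proportional = difference / congruent

# 3) Residualized score: regress incongruent RT on congruent RT and keep the
#    residuals, so general slowing is statistically removed.
slope, intercept = np.polyfit(congruent, incongruent, 1)
residualized = incongruent - (intercept + slope * congruent)

print(difference.mean(), proportional.mean(), residualized.std())

Only the proportional and residualized variants separate the conflict cost from baseline speed, which is why the deficits they reveal in patients can differ from those implied by raw difference scores.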
abstract_id: PUBMED:14706226 Depressive symptoms account for deficient information processing speed but not for impaired working memory in early phase multiple sclerosis (MS). Depressive symptoms may influence neuropsychological functioning negatively. A substantial proportion of multiple sclerosis (MS) patients exhibit neuropsychological impairments, and depressive symptomatology is more common in MS as compared to healthy controls and to other neurological diseases. The objectives of the present study were to assess information processing speed, working memory and executive functions in early phase MS and to investigate whether the severity of depressive symptoms accounts for these aspects of cognition in MS. The patients show slowed information processing speed and impaired working memory, whereas executive functioning, as measured with the Wisconsin Card Sorting Test, is unaffected. Depressive symptoms account for slowed information processing speed, but not for impaired working memory. abstract_id: PUBMED:27264121 Parkinson's disease and the Stroop color word test: processing speed and interference algorithms. Objective: Processing speed alters the traditional Stroop calculations of interference. Consequently, alternative algorithms for calculating Stroop interference have been introduced to control for processing speed, and have done so in a multiple sclerosis sample. This study examined how these processing speed correction algorithms change interference scores for individuals with idiopathic Parkinson's disease (PD, n = 58) and non-PD peers (n = 68). Method: Linear regressions controlling for demographics predicted group (PD vs. non-PD) differences for Jensen's, Golden's, relative, ratio, and residualized interference scores. To examine convergent and divergent validity, interference scores were correlated with standardized measures of processing speed and executive function. Results: PD-non-PD differences were found for Jensen's interference score, but not Golden's score, or the relative, ratio, and residualized interference scores. Jensen's score correlated significantly with standardized processing speed but not executive function measures. Relative, ratio, and residualized scores correlated with executive function but not processing speed measures. Golden's score did not correlate with any other standardized measures. Conclusions: The relative, ratio, and residualized scores were comparable for measuring Stroop interference in processing speed-impaired populations. Overall, the ratio interference score may be the most useful calculation method to control for processing speed in this population. abstract_id: PUBMED:27144616 The mediating role of processing speed in the relationship between depressive symptoms and cognitive function in multiple sclerosis. Introduction: Although disorders of mood and cognition are frequently observed in multiple sclerosis, their relationship remains unclear. We aimed to investigate whether this mood-cognition relationship is mediated by inefficient processing speed, a deficit typically associated with mood symptomatology in the psychiatric literature and a common deficit observed in multiple sclerosis patients. Method: In this study, comprehensive cognitive data and self-reported mood data were retrospectively analyzed from 349 patients with relapsing remitting multiple sclerosis.
We performed a bootstrapping analysis to examine whether processing speed provided an indirect means by which depressive symptoms influenced cognitive functioning, specifically memory and executive function. Results: We observed that processing speed mediated the relationship between depressive symptoms and measures of memory and executive function. Interestingly, exploratory analyses revealed that this mediational role of processing speed was specific to MS patients who were younger, had a lower disability level, and had fewer years since MS diagnosis. Conclusions: Together, these findings have implications for mood and cognitive intervention with multiple sclerosis patients. abstract_id: PUBMED:33626431 The role of language ability in verbal fluency of individuals with multiple sclerosis. Background: While cognitive deficits in memory and processing speed have been well-documented in individuals with multiple sclerosis (MS), language is largely considered to be intact. Verbal fluency deficits observed in MS are often attributed to impaired processing speed and executive functions rather than language ability. The current study evaluates the contribution of various cognitive factors to verbal fluency including language ability, oral-motor speed, processing speed, and executive functions. Methods: We analyzed pre-existing data from seventy-four (74) individuals with MS who completed a battery of neuropsychological tests designed to assess individual ability for various cognitive factors. We conducted linear multiple regression analyses with letter and category verbal fluency as outcome variables and performance on other cognitive domains (e.g., processing speed, executive functioning) as predictors. Results: Both vocabulary and processing speed predicted letter fluency while only vocabulary predicted category fluency. These findings suggest that the observed verbal fluency deficits in MS may reflect both impaired language ability and processing speed. Conclusion: We propose that further research on language ability in MS is needed to determine if comprehensive neuropsychological test batteries for persons with MS should include tests of language ability to fully understand the cognitive profile of any given patient. Given the importance of language ability, it may be necessary to conduct a more thorough assessment of language in individuals with MS who experience a deficit in this domain. abstract_id: PUBMED:19395356 Examining the link between information processing speed and executive functioning in multiple sclerosis. Slowed information processing speed (IPS) is frequently reported in those with multiple sclerosis (MS), and at least 20% are compromised on some aspect of executive functioning also. However, any relationship between these two processes has not been examined. The Sternberg Memory Scanning Test, Processing Speed Index (WAIS-III), Delis Kaplan Executive Function System (D.KEFS), and Working Memory Index (WMS-III) were administered to 90 participants with MS. Their performance on the PSI was significantly below the normative scores but no deficits in memory scanning speed were evident. The initial response speed of the Sternberg and the PSI were more closely related to D.KEFS performance, particularly in timed tasks with a high cognitive demand (switching tasks). In contrast, memory scanning speed was related to working memory. This study reinforces the link between IPS and working memory in MS, and supports the suggestion that IPS is not a unitary construct. 
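The mediation analysis described above (PUBMED:27144616) tests whether depressive symptoms influence cognition indirectly through processing speed. A bare-bones percentile-bootstrap version of that logic is sketched below; the data and path coefficients are synthetic, with only the sample size taken from the abstract.

import numpy as np

rng = np.random.default_rng(2)
n = 349  # cohort size from PUBMED:27144616; the data below are synthetic

depressive = rng.normal(0.0, 1.0, n)
speed = -0.5 * depressive + rng.normal(0.0, 1.0, n)   # mediator: processing speed
memory = 0.6 * speed + rng.normal(0.0, 1.0, n)        # outcome: memory score

def indirect_effect(x, m, y):
    # Path a: x -> m; path b: m -> y controlling for x; indirect effect = a * b.
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)          # resample cases with replacement
    boot[i] = indirect_effect(depressive[idx], speed[idx], memory[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")

If the bootstrap interval excludes zero, the indirect path through the mediator is taken as significant, which is the criterion such mediation studies typically apply.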
abstract_id: PUBMED:31785491 Cognitive processing speed deficits in multiple sclerosis: Dissociating sensorial and motor processing changes from cognitive processing speed. Background: The assessment of cognitive information processing speed (IPS) is complicated in MS, with altered performance on tests such as the Symbol Digit Modalities Test (SDMT) potentially representing changes not only within cognitive networks but in the initial sensorial transmission of information to cognitive networks, and/or efferent transmission of the motor response. Objective: We aimed to isolate and characterise cognitive IPS deficits in MS using ocular motor tasks: a prosaccade task (used to assess and control for sensorial and motor IPS), which was then used to adjust performance on the Simon task (cognitive IPS). Methods: All participants (22 MS patients with early disease, 22 healthy controls) completed the ocular motor tasks and the SDMT. The Simon task assessed cognitive IPS by manipulating the relationship between a stimulus location and its associated response direction. Two trial types were interleaved: (1) congruent, where stimulus location = response direction; or (2) incongruent, where stimulus location ≠ response direction. Results: MS patients did not perform differently to controls on the SDMT. For OM tasks, when sensorial and motor IPS was controlled, MS patients had significantly slower cognitive IPS (incongruent trials only) and poorer conflict resolution. SDMT performance did not correlate with slower cognitive IPS in MS patients, highlighting the limitation of using SDMT performance to interpret cognitive IPS changes in patients with MS. Conclusion: Cognitive IPS deficits in MS patients are dissociable from changes in other processing stages, manifesting as impaired conflict resolution between automatic and non-automatic processes. Importantly, these results raise concerns about the SDMT as an accurate measure of cognitive IPS in MS. abstract_id: PUBMED:23777468 The relationship between executive functioning, processing speed, and white matter integrity in multiple sclerosis. The primary purpose of the current study was to examine the relationship between performance on executive tasks and white matter integrity, assessed by diffusion tensor imaging (DTI) in multiple sclerosis (MS). A second aim was to examine how processing speed affects the relationship between executive functioning and fractional anisotropy (FA). This relationship was examined in two executive tasks that rely heavily on processing speed: the Color-Word Interference Test and the Trail Making Test (Delis-Kaplan Executive Function System). It was hypothesized that reduced FA is related to poor performance on executive tasks in MS, but that this relationship would be affected by the statistical correction of processing speed from the executive tasks. A total of 15 healthy controls and 25 persons with MS participated. Regression analyses were used to examine the relationship between executive functioning and FA, both before and after processing speed was removed from the executive scores. Before processing speed was removed from the executive scores, reduced FA was associated with poor performance on the Color-Word Interference Test and Trail Making Test in a diffuse network including corpus callosum and superior longitudinal fasciculus.
However, once processing speed was removed, the relationship between executive functions and FA was no longer significant on the Trail Making Test, and significantly reduced and more localized on the Color-Word Interference Test. abstract_id: PUBMED:36781412 Prominent role of executive functioning on the Phonemic Fluency Test in people with multiple sclerosis. Objective: Executive functioning (EF) can be one of the earliest, though often under-detected, impaired cognitive domains in patients with multiple sclerosis (pwMS). However, the role of EF in verbal fluency tests is still not clear, given the presence of information processing speed (IPS) deficits in pwMS. Method: Performance of a group of 43 pwMS without IPS impairment as measured with the Symbol Digit Modalities Test (SDMT) and a group of 32 healthy controls (HC) was compared on the Phonemic and Semantic Fluency Tests. For each group, we scored the number of words generated (i) in the early time interval (i.e., first 15 sec, semi-automatic process) and (ii) in the late time interval (i.e., from 15 to 60 sec, controlled process). Results: Globally, pwMS produced significantly fewer words than HC on the Phonemic but not on the Semantic Fluency Test. Crucially, in the Phonemic Fluency Test pwMS generated significantly fewer words than HC in the late time interval, whereas no significant difference between the two groups emerged in the early time interval. Conclusions: These findings suggest that executive dysfunction is the core element on the Phonemic Fluency Test also in pwMS, and it deserves attention in both research and clinical practice. Answer: Yes, slowed processing speed (PS) appears to account for executive deficits in multiple sclerosis (MS). Studies have shown that when controlling for PS, the differences in executive task performance between MS patients and healthy controls diminish or disappear. For instance, in one study, after controlling for speed, group differences on speeded executive tasks were no longer present, and there were no group differences on executive tasks with no PS demands (PUBMED:25133903). Similarly, another study found that when deficits in information processing speed were adequately controlled using proportional or residualized scores, deficits in the alerting network of attention emerged, which were notably smaller than those for overall information processing speed (PUBMED:26010017). Moreover, depressive symptoms, which are common in MS, have been shown to account for slowed information processing speed but not for impaired working memory, suggesting that PS is a primary impairment underlying other cognitive functions in MS (PUBMED:14706226). Additionally, processing speed has been identified as a mediator in the relationship between depressive symptoms and cognitive functioning, specifically memory and executive function, in MS patients (PUBMED:27144616). Research also indicates that verbal fluency deficits observed in MS may reflect both impaired language ability and processing speed, further supporting the role of PS in cognitive deficits associated with MS (PUBMED:33626431). Furthermore, studies have demonstrated that slowed information processing speed is closely related to executive functioning performance, particularly in timed tasks with high cognitive demand (PUBMED:19395356).
Lastly, when examining the relationship between executive functioning and white matter integrity in MS, the relationship between executive functions and fractional anisotropy was significantly reduced and more localized once processing speed was removed from the executive scores, indicating that PS has a prominent role in executive functioning deficits in MS (PUBMED:23777468). Additionally, executive dysfunction has been identified as a core element in the Phonemic Fluency Test performance in MS patients, even in the absence of information processing speed impairment (PUBMED:36781412). In summary, the evidence suggests that slowed processing speed is a primary factor accounting for executive deficits in individuals with multiple sclerosis.
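The recurring analytic move in this answer is residualization: regress the executive score on processing speed and compare groups on what remains. A compact sketch of that procedure on synthetic data, borrowing only the group sizes (50 patients, 28 controls) from PUBMED:25133903:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic data; group sizes (50 MS, 28 controls) follow PUBMED:25133903.
speed_ms = rng.normal(-0.8, 1.0, 50)   # patients are slower on average
speed_hc = rng.normal(0.0, 1.0, 28)
exec_ms = 0.7 * speed_ms + rng.normal(0.0, 1.0, 50)
exec_hc = 0.7 * speed_hc + rng.normal(0.0, 1.0, 28)

# Raw comparison: the group gap in executive scores reflects slower speed.
print(stats.ttest_ind(exec_ms, exec_hc))

# Residualize executive scores on processing speed across the whole sample,
# then compare the groups on what remains.
speed = np.concatenate([speed_ms, speed_hc])
execf = np.concatenate([exec_ms, exec_hc])
group = np.array([1] * 50 + [0] * 28)
res = stats.linregress(speed, execf)
residuals = execf - (res.intercept + res.slope * speed)
print(stats.ttest_ind(residuals[group == 1], residuals[group == 0]))

In data generated this way, the raw group difference is driven entirely by speed, so the second t-test should be far from significance, mirroring the pattern the cited studies report.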
Instruction: Targets for parathyroid hormone in secondary hyperparathyroidism: is a "one-size-fits-all" approach appropriate? Abstracts: abstract_id: PUBMED:25123022 Targets for parathyroid hormone in secondary hyperparathyroidism: is a "one-size-fits-all" approach appropriate? A prospective incident cohort study. Background: Recommendations for secondary hyperparathyroidism (SHPT) consider that a "one-size-fits-all" target enables efficacy of care. In routine clinical practice, SHPT continues to pose diagnosis and treatment challenges. One hypothesis that could explain these difficulties is that the dialysis population with SHPT is not homogeneous. Methods: EPHEYL is a prospective, multicenter, pharmacoepidemiological study including chronic dialysis patients (≥ 3 months) with a new SHPT diagnosis, i.e. parathyroid hormone (PTH) ≥ 500 ng/L for the first time, or initiation of cinacalcet, or parathyroidectomy. Multiple correspondence analysis and ascendant hierarchical clustering on clinico-biological variables (symptoms, PTH, plasma phosphorus and alkaline phosphatase) and SHPT treatment (cinacalcet, vitamin D, calcium, or calcium-free phosphate binder) were performed to identify distinct phenotypes. Results: 305 patients (261 with incident PTH ≥ 500 ng/L; 44 with cinacalcet initiation) were included. Their mean age was 67 ± 15 years, and 60% were men, 92% on hemodialysis and 8% on peritoneal dialysis. Four subgroups of SHPT patients were identified: 1/ "intermediate" phenotype with hyperphosphatemia without hypocalcemia (n = 113); 2/ younger patients with severe comorbidities, hyperphosphatemia and hypocalcemia, despite multiple SHPT medical treatments, suggesting poor adherence (n = 73); 3/ elderly patients with few cardiovascular comorbidities, controlled phospho-calcium balance, higher PTH, and few treatments (n = 75); 4/ patients who initiated cinacalcet (n = 43). The quality criterion of the model had a cut-off of 14 (>2), suggesting a relevant classification. Conclusion: In real life, dialysis patients with newly diagnosed SHPT constitute a very heterogeneous population. A "one-size-fits-all" target approach is probably not appropriate. Therapeutic management needs to be adjusted to the 4 different phenotypes. abstract_id: PUBMED:35608699 Prevertebral cervical approach to posterior mediastinum parathyroid adenomas. Background: About 4 years ago, we described the pure endoscopic cervical approach to posterior mediastinum parathyroid adenomas, which we called the "prevertebral cervical approach". At that time, we had operated on three patients and did not have enough quality videos to demonstrate this approach. After broadening our experience, we present our results and show this technique through a video. Methods: From June 2015 to January 2021, information on patients undergoing the prevertebral cervical approach was obtained from a specific prospective database, including clinical presentation, biochemistry, preoperative imaging, surgical approach and patient outcomes. The step-by-step technique is described for both right- and left-sided adenomas, by means of a short video clip. Results: Ten patients were operated on using this technique. Seven adenomas were right-sided and three were left-sided. The mean surgical time was 33 ± 7 min. There were neither intraoperative nor major postoperative complications. Seven patients presented with a slight subcutaneous emphysema, which did not cause complaints.
All patients were discharged the day after surgery, except for one patient with a previous open neck removal of four glands due to secondary hyperparathyroidism, which required calcium replacement. Calcium and parathyroid hormone levels were normalised in the other nine patients after surgery. One patient experienced a transient recurrent laryngeal nerve injury which was spontaneously resolved within 1 month. No permanent recurrent laryngeal nerve injury was found. The postoperative cosmetic outcomes were excellent. Conclusion: In our experience, the pure cervical endoscopic approach has shown a high feasibility and short operation time, with excellent postoperative results regarding patient comfort, length of stay and disease cure. This approach also offers a very reasonable procedure cost, and may result in a less aggressive surgical option when compared with thoracic approaches. abstract_id: PUBMED:19955824 Response of secondary hyperparathyroidism to cinacalcet depends on parathyroid size. Background/aims: Several reports have indicated that the measurement of parathyroid gland size assists the management of patients with secondary hyperparathyroidism. This study examined whether parathyroid gland enlargement influenced the response of secondary hyperparathyroidism to cinacalcet. Methods: Clinically stable hemodialysis patients with secondary hyperparathyroidism that was resistant to conventional treatment received cinacalcet for 6 months. Based on the parathyroid gland size measured by ultrasonography, the patients were divided into group S (gland <500 mm³) and group L (gland ≥500 mm³). Serum levels of intact parathyroid hormone (intact PTH), bone-specific alkaline phosphatase, osteocalcin, and cross-linked N-terminal telopeptide of type 1 collagen were measured over time. Results: Twenty-four patients completed the study. In group S, all markers of bone metabolism and intact PTH were significantly decreased after 3 months of cinacalcet treatment. In contrast, there were no significant changes of these parameters, except for intact PTH, after 3 months in group L. After 6 months of cinacalcet treatment, however, all of the markers of bone metabolism were significantly decreased in both groups. Conclusions: The response to cinacalcet differed between groups S and L. Thus, the presence of parathyroid enlargement (nodular hyperplasia) may delay the response of secondary hyperparathyroidism to cinacalcet. abstract_id: PUBMED:26058796 Relationship between parathyroid mass and parathyroid hormone level in hemodialysis patients with secondary hyperparathyroidism. Background: To evaluate the influence of parathyroid mass on the regulation of parathyroid hormone (PTH) secretion, we investigated the relationship between the resected parathyroid gland in total parathyroidectomy and the parathyroid hormone level in hemodialysis patients with secondary hyperparathyroidism. Methods: From January 2009 to July 2014, 223 patients undergoing total parathyroidectomy were included. The size and the weight of parathyroid gland were measured during the operation. Results: 874 parathyroid glands were removed. A positive correlation was identified between the size and the weight of resected parathyroid glands. We found that both the preoperative PTH and the reduction of PTH were significantly correlated with the size and the weight of parathyroid glands in a positive manner. However, in the subgroup of patients with PTH < 1000 pg/ml, no significant correlation was found.
Conclusions: Larger parathyroid glands secrete more PTH, and a high serum PTH level usually indicates that surgical removal may be required. However, since PTH levels can be influenced by drug therapy, a large parathyroid gland size may be a more appropriate indication for surgical treatment, even when the parathyroid hormone level is less than 1000 pg/ml. abstract_id: PUBMED:36319824 Determinants of Secondary Hyperparathyroidism 1 Year After One-Anastomosis Gastric Bypass or Sleeve Gastrectomy. Purpose: Bariatric surgery alters the anatomic and physiological structure of the gastrointestinal tract, predisposing patients to the malabsorption of nutrients. The purpose of this study was to determine the prevalence and determinants of secondary hyperparathyroidism (SHPT) in the patients undergoing either one-anastomosis gastric bypass (OAGB) or sleeve gastrectomy (SG). Materials And Methods: A total of 517 patients (without SHPT at the baseline) who had undergone OAGB or SG were prospectively assessed 1 year after the surgery. Anthropometric parameters, calcium, intact parathyroid hormone (iPTH), and 25(OH)D levels were compared according to the surgery type before and 1 year after surgery. Multiple logistic regression models were used to evaluate possible SHPT predictors after bariatric surgery. Results: The overall prevalence of SHPT was 12.6% after surgery, significantly different between the OAGB and SG groups (17.1 vs. 9.9%, respectively). The serum levels of albumin-corrected calcium and 25(OH)D were not significantly different between the two groups. The patients undergoing OAGB had significantly higher serum levels of ALP (198.2 vs. 156.6) compared to the subjects undergoing SG. Higher iPTH levels preoperatively, lower 1-year excess weight loss%, and OAGB surgery seemed to be independent predictors for SHPT 1 year after surgery. Conclusion: Morbidly obese patients undergoing OAGB had a higher risk of SHPT than their counterparts undergoing SG, whereas 25(OH)D deficiency and calcium levels did not differ between the two groups. The OAGB procedure, preoperative iPTH levels, and 1-year weight loss were predictors of postoperative SHPT development. abstract_id: PUBMED:34794597 ACR Appropriateness Criteria® Parathyroid Adenoma. Hyperparathyroidism is defined as excessive parathyroid hormone production. The diagnosis is made through biochemical testing, in which imaging has no role. However, imaging is appropriate for preoperative parathyroid gland localization with the intent of surgical cure. Imaging is particularly useful in the setting of primary hyperparathyroidism whereby accurate localization of a single parathyroid adenoma can facilitate minimally invasive parathyroidectomy. Imaging can also be useful to localize ectopic or supernumerary parathyroid glands and detail anatomy, which may impact surgery. This document summarizes the literature and provides imaging recommendations for hyperparathyroidism including primary hyperparathyroidism, recurrent or persistent primary hyperparathyroidism after parathyroid surgery, secondary hyperparathyroidism, and tertiary hyperparathyroidism. Recommendations include ultrasound, CT neck without and with contrast, and nuclear medicine parathyroid scans. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel.
The guideline development and revision include an extensive analysis of current medical literature from peer reviewed journals and the application of well-established methodologies (RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development, and Evaluation or GRADE) to rate the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where evidence is lacking or equivocal, expert opinion may supplement the available evidence to recommend imaging or treatment. abstract_id: PUBMED:3444805 Raised parathyroid hormone levels in the milk alkali syndrome: an appropriate response? A case of the 'milk alkali syndrome' associated with grossly elevated levels of amino terminal parathyroid hormone is described. The hypercalcaemia (calcium 4.09 mmol/l) and hyperparathyroidism settled on conservative measures. Factors in the milk alkali syndrome which might stimulate the release of parathyroid hormone include parathyroid gland hyperplasia secondary to suppression of ionized calcium, alteration in sensitivity of calcium receptors on the cells of the parathyroid glands, the stimulation of an intermittent alkaline tide in the blood and the high intake of phosphate and bicarbonate. We suggest that high levels of parathyroid hormone in the milk alkali syndrome may be appropriate rather than paradoxical. abstract_id: PUBMED:16272632 Correlation of serum Bio-intact PTH (1-84) and parathyroid gland size in hemodialysed patients The Bio-intact parathyroid hormone (Bio-PTH) assay, which exclusively measures the intact PTH (1-84) molecule, provides a better assay for estimating parathyroid function in hemodialysis (HD) patients, whereas the intact PTH (I-PTH) assay cross-reacts with PTH (7-84) as well as PTH (1-84). We have found that PTH (7-84) accumulates in the serum of hemodialysis patients, probably due to its impaired excretion into urine. We have reported that parathyroid gland size is one of the major predictors of vitamin D responsiveness in secondary hyperparathyroidism. Therefore, we investigated whether serum Bio-PTH, in comparison with serum I-PTH, may provide a relevant assay to estimate parathyroid function, as evidenced by its correlation with parathyroid gland size on ultrasound examination. abstract_id: PUBMED:31764770 Total parathyroidectomy plus multi-point subcutaneous transplantation in the forearm may be a reliable surgical approach for patients with end-stage renal disease: A case report. Rationale: We studied the feasibility of total parathyroidectomy (tPTX) + multi-point transplantation in the forearm for the treatment of secondary hyperparathyroidism. Considering the controversial nature of the appropriate timing for and location of this type of surgery, relevant research is relatively rare. Our experience may be a relatively successful one. Patient Concerns: Our patient was a 28-year-old woman with end-stage renal disease (ESRD), who had been on dialysis for 7 years and had a 2-year history of progressively aggravated bone pain. She also had hypercalcemia and hyperphosphatemia. Diagnoses: Given the patient's history of long-term dialysis, bone pain, high levels of intact parathyroid hormone (i-PTH) and hypercalcemia, we performed ultrasonography, which showed solid nodules in the bilateral parathyroid glands. She was accordingly diagnosed with SHPT. Interventions: The patient underwent tPTX + multi-point subcutaneous transplantation in the forearm.
Outcomes: Her i-PTH level dropped to < 300 pg/mL, and the symptoms of bone pain were markedly reduced after surgery. Lessons: Total parathyroidectomy + multi-point subcutaneous transplantation in the forearm may be a reliable surgical approach for patients with ESRD. abstract_id: PUBMED:19958940 Tertiary hyperparathyroidism: is less than a subtotal resection ever appropriate? A study of long-term outcomes. Background: Our aim was to examine the outcomes of patients with tertiary hyperparathyroidism (3-HPT) who had limited resection of 1 or 2 parathyroids. Methods: We reviewed 140 patients with 3-HPT who underwent parathyroidectomy (PTX) at a single institution. Patients were analyzed according to their operation: limited PTX versus subtotal or total PTX. Results: The limited PTX group consisted of 29 patients who underwent resection of 1 (n = 12) or 2 (n = 17) parathyroids. The other 111 patients had subtotal (n = 104), total (n = 3), and/or reoperative PTX (n = 12). The mean ± SEM follow-up was 79 ± 5 months. Eucalcemia was achieved in 94% of the patients. All patients with persistent (n = 2) hypercalcemia underwent subtotal PTX (P = not significant [NS] vs limited PTX). In a logistic regression model, the extent of operation was not associated with the development of recurrent disease. Additionally, the incidence of permanent hypocalcemia was 7% after subtotal or total PTX versus 0% after limited resection (P = NS). Conclusion: Long-term outcomes in patients with 3-HPT appear to be similar after appropriate limited resection of 1 or 2 parathyroid glands compared to subtotal or total PTX. Therefore, a strategy of limited parathyroid resection seems appropriate for patients with 3-HPT when the disease is limited to 1 or 2 glands. Answer: The "one-size-fits-all" approach to targeting parathyroid hormone (PTH) levels in secondary hyperparathyroidism (SHPT) is likely not appropriate due to the heterogeneity observed in the dialysis population with newly diagnosed SHPT. The EPHEYL study identified four distinct phenotypes of SHPT patients, suggesting that therapeutic management should be adjusted to these different subgroups rather than applying a uniform target for all (PUBMED:25123022). This heterogeneity is further supported by findings that the response of SHPT to cinacalcet, a medication used to treat SHPT, depends on the size of the parathyroid glands, with larger glands (nodular hyperplasia) delaying the response to treatment (PUBMED:19955824). Additionally, the relationship between parathyroid mass and PTH levels in hemodialysis patients with SHPT indicates that larger parathyroid glands secrete more PTH, and high serum PTH levels often suggest that surgical removal might be required (PUBMED:26058796).
Instruction: Double-balloon endoscopy: who needs it? Abstracts: abstract_id: PUBMED:26950010 Diagnostic and Therapeutic Capability of Double-Balloon Enteroscopy in Clinical Practice. Advances in technology have facilitated the common use of small-bowel imaging. Intraoperative enteroscopy was the gold standard method for small-bowel imaging. However, noninvasive capsule endoscopy and invasive balloon enteroscopy are currently the main endoscopic procedures that are routinely used for small-bowel pathologies, and the indications for both techniques are similar. Although obstruction is a contraindication for capsule endoscopy, it is not considered to be problematic for double-balloon enteroscopy. The most important advantage of double-balloon enteroscopy is the applicability of therapeutic interventions during the procedure; however, double-balloon enteroscopy has certain advantages as well as disadvantages. abstract_id: PUBMED:27908511 Double-Balloon Enteroscopy. Since the introduction of double-balloon enteroscopy 15 years ago, flexible enteroscopy has become an established method in the diagnostic and therapeutic work-up of small bowel disorders. With appropriate patient selection, diagnostic and therapeutic yields of 70% to 85% can be expected. The complication rates with diagnostic and therapeutic DBE are estimated at approximately 1% and 3% to 4%, respectively. Appropriate patient selection and device selection, as well as skill, are the key issues for successful enteroscopy. However, technical developments and improvements mean that carrying out enteroscopy is likely to become easier. abstract_id: PUBMED:30456723 A case of gossypiboma diagnosed with transanal double-balloon enteroscopy. Gossypiboma is an iatrogenic granuloma caused by retained surgical gauze. A 48-year-old woman with a history of cesarean section was incidentally found to have a pelvic mass on preoperative computed tomography examination for pectus excavatum. Abdominal enhanced computed tomography showed a 40-mm mass containing air in the pelvis. The mass was suspected to be continuous with the ileum. Transanal double-balloon enteroscopy showed a small fistula that was likely caused by penetration of the ileum dozens of centimeters from the ileocecal valve. A yellow-brown, movable, and fibrous body was found in the fistula. A part of the fibrous body was extracted with forceps. Pathological examination revealed that it was gauze. This is the first reported case of an asymptomatic gossypiboma penetrating the ileum that was diagnosed with double-balloon enteroscopy. Our results suggest that double-balloon enteroscopy is useful for early diagnosis of pelvic mass penetrating intestine, including gossypiboma. abstract_id: PUBMED:26725164 Yield of double-balloon enteroscopy in the diagnosis and treatment of small bowel strictures. Background: Small bowel strictures are common in gastroenterology practice. We report diagnostic and therapeutic yield of double-balloon enteroscopy for small bowel strictures. Methods: Retrospective study of 71 consecutive patients who were found to have small bowel stricture at the time of double-balloon enteroscopy. Results: During double-balloon enteroscopy, stricture identification and tissue sampling were possible in all 71 cases. Surgical pathology reported aetiology as non-steroidal anti-inflammatory drugs (32%), non-specific (21%), Crohn's disease (21%), radiation-induced (9%), tumour (10%), anastomotic (4%), celiac disease (1%), and surgical adhesions (1%). 
Sixteen patients (23%) underwent balloon dilation. Sensitivity of abdominal computed tomography and video-capsule endoscopy for strictures, with double-balloon enteroscopy findings as the reference, was 61% and 43%, respectively. Conclusion: Double-balloon enteroscopy was safe and effective for accessing small bowel strictures with direct visualization and tissue sampling, or for therapeutic balloon dilation. Given the low sensitivity of conventional computed tomography and/or video-capsule endoscopy for small bowel strictures, double-balloon enteroscopy can be considered if clinical suspicion is high. abstract_id: PUBMED:23488827 Multicenter comparison of double-balloon enteroscopy and spiral enteroscopy. Background And Aim: Spiral enteroscopy is a novel technique for small bowel exploration. The aim of this study is to compare double-balloon and spiral enteroscopy in patients with suspected small bowel lesions. Methods: Patients with a suspected small bowel lesion diagnosed by capsule endoscopy were prospectively included between September 2009 and December 2010 in five tertiary-care academic medical centers. Results: After capsule endoscopy, 191 double-balloon enteroscopies and 50 spiral enteroscopies were performed. Indications were obscure gastrointestinal bleeding in 194 cases (80%). Lesions detected by capsule endoscopy were mainly angioectasia. Double-balloon and spiral enteroscopy resulted in finding one or more lesions in 70% and 75% of cases, respectively. The mean diagnostic procedure time and the average length of small bowel explored during double-balloon and spiral enteroscopy were, respectively, 60 min (45-80) and 55 min (45-80) (P=0.74), and 200 cm (150-300) and 220 cm (200-300) (P=0.13). Treatment during double-balloon and spiral enteroscopy was possible in 66% and 70% of cases, respectively. There was no significant major procedure-related complication. Conclusion: Spiral enteroscopy appears as safe as double-balloon enteroscopy for small bowel exploration, with a similar diagnostic and therapeutic yield. Comparison between the two procedures in terms of duration and length of small bowel explored slightly favors spiral enteroscopy, but not significantly. abstract_id: PUBMED:25641924 Tips and tricks of double-balloon endoscopic retrograde cholangiopancreatography (with video). Although endoscopic retrograde cholangiopancreatography (ERCP) is technically difficult in patients with an altered gastrointestinal tract, double-balloon endoscopy (DBE) allows endoscopic access to the pancreato-biliary system in such patients. Balloon dilation of biliary strictures, extraction of bile duct stones, and placement of biliary stents using DBE in patients with Roux-en-Y or Billroth-II reconstruction have been reported. However, two major technical parts are required for double-balloon ERCP (DB-ERCP). One is insertion of the DBE and the other is the ERCP-related procedure. The important point of DBE insertion is a sure approach to the afferent limb with Roux-en-Y reconstruction or Braun anastomosis. Short-type DBE with a working length of 152 cm is beneficial for DB-ERCP because it is short enough for most biliary accessory devices. In this paper, we introduce our tips and tricks for successful DB-ERCP. abstract_id: PUBMED:32451153 A novel double-balloon catheter for percutaneous balloon pulmonary valvuloplasty under echocardiographic guidance only. Background: Percutaneous balloon pulmonary valvuloplasty (PBPV) is the procedure of choice for uncomplicated severe or symptomatic pulmonary stenosis.
Echocardiography (echo)-guided PBPV can completely avoid the use of radiation and contrast agents compared to fluoroscopy-guided PBPV. Although we have confirmed that echo-guided PBPV is feasible in humans, the poor visibility of the traditional catheter under echo greatly limits the promotion of this new technology. Methods: We produced a novel double-balloon catheter that is easy to detect on echo by adding a guiding balloon at the distal end of the catheter. Echo-guided PBPV was performed on thirty healthy swine using either the novel catheter or a traditional catheter to evaluate the feasibility and safety of the novel double-balloon catheter. Feasibility was evaluated by the success rate of balloon inflation at the pulmonary valve annulus and the operating time. Safety was evaluated by the frequency of balloon slippage and the incidence of complications. Results: There were no significant between-group differences in terms of weight and the ratio of balloon diameter to pulmonary annulus diameter. The success rate was 93.3% and 60% in the novel and traditional groups, respectively. The novel group had a significantly (p < 0.05) lower mean procedure time (6.33 ± 6.86 min vs 24.8 ± 9.79 min) and lower frequency of balloon slippage (0.07 ± 0.26 vs 0.53 ± 0.52), arrhythmia (0.07 ± 0.26 vs 0.47 ± 0.52), and tricuspid regurgitation (6.7% vs 40%) than the traditional group. No myocardial hematoma or pericardial tamponade occurred in the novel catheter group. Conclusion: Although further studies and improvements are required, the study results indicate that the novel double-balloon catheter for echo-guided PBPV is feasible and safe. abstract_id: PUBMED:25245840 Cervical ripening: is there an advantage for a double-balloon device in labor induction? Objectives: To compare the efficiency of a double-balloon device with vaginal prostaglandins for cervical ripening in patients with an unfavourable cervix. Patients And Methods: Fifty patients induced with a double-balloon device were compared to 50 patients receiving vaginal prostaglandins. Matching criteria were age, parity, history of uterine scar, gestational age and Bishop score. The primary outcome was induction failure. Secondary outcomes included improvement in Bishop score, ripening-to-delivery interval, caesarean section rate, and maternal and neonatal morbidity. Results: Risk of failed induction (16% in the double-balloon group vs. 14% in the prostaglandins group) and caesarean section rate (28% vs. 36%) were similar in the two groups. The proportion of favourable cervix and the time to obtain a better Bishop score were similar with the two methods. Maximal pain score during cervical ripening was significantly lower in the double-balloon group (P < 0.001). The ripening-to-delivery interval (30.4 ± 15.6 h vs. 28.9 ± 20.5 h) was not different between the two groups. There was no difference in maternal and neonatal outcomes. Discussion And Conclusion: The double-balloon device was as efficient as vaginal prostaglandins. The ripening-to-delivery interval was not different between the two groups. The main advantage of this device could be better tolerance, favouring patient satisfaction. abstract_id: PUBMED:25120367 Double-balloon tamponade in the management of postpartum hemorrhage: a case series. Unlabelled: To show the efficacy of the double-balloon cervical ripening catheter in the management of postpartum hemorrhage originating from the lower segment of the uterus or the upper parts of the vagina.
Methods: Patients with intractable bleeding from the lower segment of the uterus and the upper parts of the vagina after Cesarean or vaginal deliveries were treated with a double-balloon cervical ripening catheter. Results: The double-balloon catheter was used in seven patients, and it was properly placed in all of them. No other intervention was needed to control bleeding. Two patients were delivered vaginally, and five patients were delivered by Cesarean section. Length of hospitalization was longer in the vaginal delivery patients (average hospitalization was 12 days in the vaginal delivery patients and 5 days in the Cesarean section patients). The need for transfusion of blood and blood products (an average of 30 U in the vaginal delivery patients and 6 U in the Cesarean patients) was also higher in the vaginal delivery patients. Conclusion: Although the double-balloon cervical ripening catheter is designed for the induction of labor, it can successfully control intractable bleeding from the lower segment of the uterus and the upper parts of the vagina. This procedure can save patients from undergoing more morbid procedures. abstract_id: PUBMED:29279517 The Role of Double-balloon Enteroscopy in the Diagnosis and Surgical Treatment of Metastatic Small Bowel Tumors. Objective The aim of the present study is to investigate the role of double-balloon enteroscopy in the diagnosis and surgical treatment of metastatic small bowel tumors. Patients We retrospectively reviewed the records of 13 patients with metastatic small bowel tumors among 376 patients who underwent double-balloon enteroscopy from June 2005 to March 2017 in our hospital. Results The primary lesion sites were the lung (n=9), kidney (n=2), stomach (n=1) and duodenum (n=1). The clinical presentations were anemia requiring blood transfusion (n=10), obstructive symptoms (n=2), and no symptoms (n=1). The locations of the metastatic small bowel tumors were the jejunum (n=7), ileum (n=1), and both sites (n=5). The histological diagnosis of the metastatic tumor was made from biopsy specimens taken with double-balloon enteroscopy in all 11 patients whose condition permitted a biopsy. In seven patients, the findings on double-balloon endoscopy were determinants of the kind and extent of surgical treatment performed. Four patients had multiple metastatic small bowel tumors, and all were able to be removed surgically with guidance from preoperative tattooing at double-balloon endoscopy. After the operation, blood transfusions were no longer needed in four of six patients who had required preoperative transfusions for the treatment of anemia, and one patient with intestinal obstructive symptoms was able to resume oral intake. Conclusion Double-balloon endoscopy was useful for making a histological diagnosis and directing surgical treatment in patients with metastatic small bowel tumors. Surgical treatment afforded palliation of symptoms in five patients. Answer: Double-balloon endoscopy (DBE) is a procedure that is primarily indicated for patients with small bowel pathologies. It is particularly useful when there is a need for both diagnostic and therapeutic interventions within the small intestine.
Patients who may require DBE include those with suspected small bowel lesions, such as tumors, strictures, or sources of obscure gastrointestinal bleeding, as well as those with conditions like Crohn's disease, non-steroidal anti-inflammatory drug (NSAID) induced lesions, radiation-induced damage, and other etiologies of small bowel strictures (PUBMED:26725164). DBE is also indicated for cases where noninvasive methods like capsule endoscopy are contraindicated, such as in the presence of an obstruction (PUBMED:26950010). It can be particularly beneficial for patients with altered gastrointestinal anatomy, such as those with Roux-en-Y or Billroth-II reconstruction, where it allows access to the pancreato-biliary system for procedures like ERCP (endoscopic retrograde cholangiopancreatography) (PUBMED:25641924). Additionally, DBE has been used in unique cases such as the diagnosis of gossypiboma, an iatrogenic condition caused by retained surgical gauze, where it helped identify a pelvic mass penetrating the intestine (PUBMED:30456723). Note that the double-balloon catheter employed in the management of postpartum hemorrhage originating from the lower segment of the uterus or the upper parts of the vagina (PUBMED:25120367) and for cervical ripening in labor induction (PUBMED:25245840) is a distinct obstetric device that merely shares the "double-balloon" name; those indications do not involve endoscopy. In summary, DBE is needed by patients who require detailed evaluation and treatment of small bowel diseases, those with gastrointestinal bleeding of unknown origin, individuals with strictures or obstructions in the small intestine, and patients with altered GI anatomy that complicates standard endoscopic procedures, as well as in the management of unusual complications such as gossypiboma; the obstetric and gynecological applications cited above belong to the similarly named catheter rather than to DBE itself.
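A point of reference for the sensitivity figures quoted from PUBMED:26725164: with double-balloon enteroscopy (DBE) taken as the reference standard, sensitivity is the fraction of DBE-confirmed strictures that the index test also detected. The abstract does not report the underlying counts, so the worked numbers below are hypothetical, chosen only to illustrate the calculation:

\[ \text{sensitivity} = \frac{TP}{TP + FN} \]

For instance, if computed tomography had detected 43 of the 71 DBE-confirmed strictures, its sensitivity would be 43/71 ≈ 61%; a test detecting roughly 30 of 71 would land near the 43% reported for video-capsule endoscopy.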
Instruction: Does living in crowded houses offer protection against the development of inflammatory bowel disease? Abstracts: abstract_id: PUBMED:23543446 Does living in crowded houses offer protection against the development of inflammatory bowel disease? Introduction: The credibility of the "Hygiene hypothesis" in patients with inflammatory bowel disease has been assessed. Objective: This survey is aimed at finding an answer to the question: "Does living in crowded or overcrowded houses protect against the development of inflammatory bowel disease?" Patients And Methods: Asian immigrants to the United Kingdom who attended inflammatory bowel disease clinics during the period of the study and who fulfilled the Lennard-Jones criteria were asked to complete a questionnaire. The participants were asked to respond to questions on age, sex, their birth rank, diagnosis, and number of brothers, sisters, sons and daughters. Results: 60% of the participants had four or more brothers and sisters. Forty per cent of the participants grew up in crowded houses (occupied the fourth birth rank). Conclusions: Our presented data do not support any role of the number of house inhabitants in the development of inflammatory bowel disease. abstract_id: PUBMED:35573236 Engineered Bacteria-Based Living Materials for Biotherapeutic Applications. Future advances in therapeutics demand the development of dynamic and intelligent living materials. Static, monofunctional materials of the past will be unable to meet the requirements of future medical development. The demand for precision medicine has also increased as human society progresses. Therefore, engineered living materials (ELMs) are vitally important for biotherapeutic applications. These ELMs can be cells, microbes, biofilms, and spores, representing a new platform for treating intractable diseases. Synthetic biology plays a crucial role in the engineering of these living entities. Hence, in this review, the role of synthetic biology in designing and creating genetically engineered novel living materials, particularly bacteria, has been briefly summarized for diagnostic and targeted delivery. The main focus is to provide knowledge about the recent advances in engineered bacteria-based therapies, especially in the treatment of cancer, inflammatory bowel diseases, and infection. Microorganisms, particularly probiotics, have been engineered for synthetic living therapies. Furthermore, these programmable bacteria are designed to sense input signals and respond to disease-changing environments with multipronged therapeutic outputs. These ELMs will open a new path for the synthesis of regenerative medicines as they release therapeutics that provide in situ drug delivery with lower systemic effects. Finally, the challenges being faced in this field and the future directions requiring breakthroughs are discussed. In conclusion, the intent is to present the recent advances in research and biomedical applications of engineered bacteria-based therapies during the last 5 years, as a novel treatment for uncontrollable diseases. abstract_id: PUBMED:37159332 Autonomously Assembled Living Capsules by Microbial Coculture for Enhanced Bacteriotherapy of Inflammatory Bowel Disease. Microorganism-mediated self-assembly of living formulations holds great promise for disease therapy. Here, we constructed a prebiotic-probiotic living capsule (PPLC) by coculturing probiotics (EcN) with Gluconacetobacter xylinus (G.
xylinus) in a prebiotic-containing fermentation broth. When the culture is shaken, G. xylinus secretes cellulose fibrils that spontaneously encapsulate EcN to form microcapsules under shear forces. Additionally, the prebiotic present in the fermentation broth is incorporated into the bacterial cellulose network through van der Waals forces and hydrogen bonding. Afterward, the microcapsules were transferred to a selective LB medium, which facilitated the colonization of dense probiotic colonies within them. The in vivo study demonstrated that PPLCs containing dense colonies of EcN can antagonize intestinal pathogens and restore microbiota homeostasis, showing excellent therapeutic performance in treating mice with enteritis. The in situ self-assembly of probiotic- and prebiotic-based living materials provides a promising platform for the treatment of inflammatory bowel disease. abstract_id: PUBMED:8708222 Living with ulcerative colitis: experiences of adolescents and young adults. The problems associated with ulcerative colitis and its treatment have effects on adolescents and young adults dissimilar from, as well as more profound than, those on older individuals. Adolescents are confronted with problems such as biological, psychological and social changes as well as role changes related to peers and family. This inductive study aimed to describe the adolescents' experiences of living with ulcerative colitis. A total of 28 subjects were asked about their experiences both at the present time and at the time their first symptoms appeared. Verbatim transcribed thematized interviews were analysed according to a method influenced by the constant comparative method for grounded theory. Eight categories were grounded in the data, forming a model which describes the process from onset of disease to the present time. The main variable identified was: reduced living space, a strategy to manage the new situation. Depending on the reactions received from significant others, the outcome for the adolescents hovered between feelings of self-confidence and lack of self-confidence. If the adolescents experienced support, the living space was expanded again. The results might be of great value when caring for and assisting young persons with a chronic disease in general, and in particular when taking care of adolescents with a recently diagnosed inflammatory bowel disease. abstract_id: PUBMED:18542023 A qualitative study of youth living with Crohn disease. Little is known about what it is like to live in adolescence with a chronic inflammatory bowel disease. This article reports the findings of a small qualitative study that explored the experience of four New Zealand youth aged between 16 and 21 years, who had been recently diagnosed with Crohn disease. Semistructured interviews focused on discovering the youths' thoughts, feelings, and perceptions of living with this condition. Analysis of the transcribed data is presented thematically. The findings reveal stress as integral to living with Crohn disease. They illuminate the paradoxical relationship between fear and hope and provide insight into what helps and what hinders young people's ability to cope with the disease and its treatments. Collectively, these three themes describe the ways in which the lives of young adults are drastically and almost irreparably changed by Crohn disease.
The findings contribute to the "promoting wellness" literature and will inform those who support the increasing number of young people living and coping with a chronic inflammatory bowel disease. abstract_id: PUBMED:17403143 Quality of life following organ transplantation. Organ transplantation is a procedure that can save and prolong the life of individuals with end-stage heart, lung, liver, kidney, pancreas and small bowel diseases. The goal of transplantation is not only to ensure their survival, but also to offer patients the sort of health they enjoyed before the disease, achieving a good balance between the functional efficacy of the graft and the patient's psychological and physical integrity. Quality of life (QoL) assessments are used to evaluate the physical, psychological and social domains of health, seen as distinct areas that are influenced by a person's experiences, beliefs, expectations and perceptions, and QoL is emerging as a new medical indicator in transplantation medicine too. This review considers changes in overall QoL after organ transplantation, paying special attention to living donor transplantation, pediatric transplantation and particular aspects of QoL after surgery, e.g. sexual function, pregnancy, schooling, sport and work. abstract_id: PUBMED:37218007 The gut microbiota as a booster for radiotherapy: novel insights into radio-protection and radiation injury. Approximately 60-80% of cancer patients treated with abdominopelvic radiotherapy suffer post-radiotherapy toxicities including radiation enteropathy and myelosuppression. Effective preventive and therapeutic strategies are lacking for such radiation injury. The gut microbiota holds high investigational value for deepening our understanding of the pathogenesis of radiation injury, especially radiation enteropathy, which resembles inflammatory bowel disease pathophysiology, and for facilitating personalized medicine by providing safer therapies tailored for cancer patients. Preclinical and clinical data consistently support that gut microbiota components including lactate-producers, SCFA-producers, indole compound-producers and Akkermansia impose intestinal and hematopoietic radio-protection. These features serve as potential predictive biomarkers for radiation injury, together with the microbial diversity which robustly predicts milder post-radiotherapy toxicities in multiple types of cancer. The accordingly developed manipulation strategies including selective microbiota transplantation, probiotics, purified functional metabolites and ligands to microbe-host interactive pathways are promising radio-protectors and radio-mitigators that merit extensive validation in clinical trials. With massive mechanistic investigations and pilot clinical trials reinforcing its translational value, the gut microbiota may boost the prediction, prevention and mitigation of radiation injury. In this review, we summarize the state-of-the-art landmark research related to radio-protection to provide illuminating insights for oncologists, gastroenterologists and laboratory scientists interested in this overlooked, complex disorder. abstract_id: PUBMED:34259559 Shigella-Specific Immune Profiles Induced after Parenteral Immunization or Oral Challenge with Either Shigella flexneri 2a or Shigella sonnei. Shigella spp. are a leading cause of diarrhea-associated global morbidity and mortality. Development and widespread implementation of an efficacious vaccine remain the best option to reduce Shigella-specific morbidity.
Unfortunately, the lack of a well-defined correlate of protection for shigellosis continues to hinder vaccine development efforts. Shigella controlled human infection models (CHIM) are often used in the early stages of vaccine development to provide preliminary estimates of vaccine efficacy; however, CHIMs also provide the opportunity to conduct in-depth immune response characterizations pre- and postvaccination or pre- and postinfection. In the current study, principal-component analyses were used to examine immune response data from two recent Shigella CHIMs in order to characterize immune response profiles associated with parenteral immunization, oral challenge with Shigella flexneri 2a, or oral challenge with Shigella sonnei. Although parenteral immunization induced an immune profile characterized by robust systemic antibody responses, it also included mucosal responses. Interestingly, oral challenge with S. flexneri 2a induced a distinctively different profile compared to S. sonnei, characterized by a relatively balanced systemic and mucosal response. In contrast, S. sonnei induced robust increases in mucosal antibodies with no differences in systemic responses across shigellosis outcomes postchallenge. Furthermore, S. flexneri 2a challenge induced significantly higher levels of intestinal inflammation compared to S. sonnei, suggesting that both serotypes may also differ in how they trigger induction and activation of innate immunity. These findings could have important implications for Shigella vaccine development as protective immune mechanisms may differ across Shigella serotypes. IMPORTANCE Although immune correlates of protection have yet to be defined for shigellosis, prior studies have demonstrated that Shigella infection provides protection against reinfection in a serotype-specific manner. Therefore, it is likely that subjects with moderate to severe disease post-oral challenge would be protected from a homologous rechallenge, and investigating immune responses in these subjects may help identify immune markers associated with the development of protective immunity. This is the first study to describe distinct innate and adaptive immune profiles post-oral challenge with two different Shigella serotypes. Analyses conducted here provide essential insights into the potential of different immune mechanisms required to elicit protective immunity, depending on the Shigella serotype. Such differences could have significant impacts on vaccine design and development within the Shigella field and should be further investigated across multiple Shigella serotypes. abstract_id: PUBMED:29545807 The Dynamics of Interleukin-10-Afforded Protection during Dextran Sulfate Sodium-Induced Colitis. Inflammatory bowel disease encompasses a group of chronic inflammatory conditions of the colon and small intestine. These conditions are characterized by exacerbated inflammation of the organ that greatly affects the quality of life of patients. Molecular mechanisms counteracting this hyperinflammatory status of the gut offer strategies for therapeutic intervention. Among these regulatory molecules is the anti-inflammatory cytokine interleukin (IL)-10, as shown in mice and humans. Indeed, IL-10 signaling, particularly in macrophages, is essential for intestinal homeostasis. We sought to investigate the temporal profile of IL-10-mediated protection during chemical colitis and the mechanisms underlying it.
Using a novel mouse model of inducible IL-10 overexpression (pMT-10), described here, we show that mice preconditioned with IL-10 for 8 days before dextran sulfate sodium (DSS) administration developed a milder colitic phenotype. In IL-10-induced colitic mice, Ly6C cells isolated from the lamina propria showed a decreased inflammatory profile. Because our mouse model leads to transcription of the IL-10 transgene in the bone marrow and an elevated serum IL-10 concentration, we investigated whether IL-10 could imprint immune cells in a long-lasting way, thus conferring sustained protection against colitis. We show that this was not the case, as IL-10-afforded protection was only observed if IL-10 induction immediately preceded DSS-mediated colitis. Thus, despite the protection afforded by IL-10 in colitis, novel strategies are required, specifically to achieve long-lasting protection. abstract_id: PUBMED:36094344 Intestinal Engineered Probiotics as Living Therapeutics: Chassis Selection, Colonization Enhancement, Gene Circuit Design, and Biocontainment. Intestinal probiotics are often used for the in situ treatment of diseases, such as metabolic disorders, tumors, and chronic inflammatory infections. Recently, there has been an increased emphasis on intelligent, customized treatments with a focus on long-term efficacy; however, traditional probiotic therapy has not kept up with this trend. The use of synthetic biology to construct gut-engineered probiotics as live therapeutics is a promising avenue in the treatment of specific diseases, such as phenylketonuria and inflammatory bowel disease. These studies generally involve a series of fundamental design issues: choosing an engineered chassis, improving the colonization ability of engineered probiotics, designing functional gene circuits, and ensuring the safety of engineered probiotics. In this review, we summarize the relevant past research and the progress of current research, and discuss the key issues that restrict the widespread application of intestinal engineered probiotic living therapeutics. Answer: The survey aimed at assessing the credibility of the "Hygiene hypothesis" in patients with inflammatory bowel disease (IBD) found that living in crowded or overcrowded houses does not protect against the development of IBD. The participants, who were Asian immigrants to the United Kingdom attending IBD clinics, were asked to complete a questionnaire regarding their family size and living conditions. The results showed that 60% of the participants had four or more siblings, and 40% grew up in crowded houses (occupied the fourth birth rank). However, the data presented did not support any role of the number of house inhabitants in the development of IBD (PUBMED:23543446).
Instruction: History taking by physiotherapists with low back pain patients: are illness perceptions addressed properly? Abstracts: abstract_id: PUBMED:26308888 History taking by physiotherapists with low back pain patients: are illness perceptions addressed properly? Purpose: Despite the increasing evidence that illness perceptions should be addressed in patients, there is a lack of studies evaluating whether physiotherapists question illness perceptions. This study, using a mixed-methods design, investigates the integration of illness perceptions during the first consultation of physiotherapists treating patients with low back pain (LBP). Methods: Thirty-four physiotherapists performed usual history taking in a patient with non-specific LBP. The interview was audiotaped, and illness perceptions were indexed using an observational instrument based on the domains of Leventhal's Common Sense Model. Patients were also asked to fill in the Illness Perception Questionnaire-Revised for LBP. Results: Physiotherapists assessed the illness identity; perceptions regarding the (physical) cause and controllability of LBP were also evaluated. Illness perceptions such as timeline, consequences, coherence and emotional representation were poorly assessed. Results of the questionnaire reveal that LBP patients report overuse, workload and bad posture as the primary cause. Patients held positive beliefs about controllability and had high illness coherence. Conclusion: Belgian physiotherapists mainly question bio-medically oriented illness perceptions, e.g. physical symptoms and causes, but do not sufficiently address psychosocially oriented illness perceptions as recommended in LBP guidelines. Implications For Rehabilitation: Belgian physiotherapists mainly question biomedically oriented illness perceptions (illness identity, provoking factors and treatment control) in patients with low back pain (LBP) during history taking (i.e. the first consultation). From a bio-psycho-social view, psychosocially oriented illness perceptions should be incorporated into physiotherapists' daily routine to comply with the bio-psycho-social treatment guidelines for LBP. Continuing education is mandatory in order to improve physiotherapists' knowledge regarding the use of all dimensions of illness perceptions in the assessment of patients with LBP. abstract_id: PUBMED:25878954 Quantity and quality of randomized controlled trials published by Indian physiotherapists. Background And Objectives: Randomized controlled trials (RCTs) are considered the gold-standard evidence for determining the efficacy of interventions. Physiotherapeutic interventions are essential in the management of various conditions. However, the quantity and quality of RCTs published by Indian physiotherapists are largely unknown. Therefore, the primary objective of this study was to review the RCTs published by Indian physiotherapists, analyzing publication trends and quality. Materials And Methods: The Medline database was searched for eligible RCTs published by Indian physiotherapists between the years 2000 and 2013. We performed a quantitative analysis of RCTs, including type of participants, area of focus in physiotherapy, clinical condition and geographical location of the first author's affiliation, and analyzed the methodological quality and reporting of RCTs using the Physiotherapy Evidence Database (PEDro) scale and the Consolidated Standards of Reporting Trials (CONSORT) key criteria, respectively.
Results: A total of 45 RCTs have been published by Indian physiotherapists. The common conditions investigated in the trials were low back pain (16.3%), followed by diabetes (6.7%) and chronic obstructive pulmonary disease (6.7%). The mean PEDro score was 5.5 (standard deviation: 1.2). Trial registration (3 [7%]) and sample size calculation (28.9%) were the most common CONSORT items not reported in the trials. Interpretation And Conclusions: The number of RCTs published by Indian physiotherapists is gradually increasing, and the methodological quality of the studies is fair. However, there is substantial scope for improvement in conducting and reporting trials. In the future, Indian physiotherapists should focus more on conditions such as stroke, asthma, and others, which have a larger burden of illness among the Indian population. abstract_id: PUBMED:32309777 Can the Pain Attitudes and Beliefs Scales be adapted for use in the context of osteoarthritis with general practitioners and physiotherapists? Background: Conservative, first-line treatments (exercise, education and weight-loss if appropriate) for hip and knee joint osteoarthritis are underused despite the known benefits. Clinicians' beliefs can affect the advice and education given to patients; in turn, this can influence the uptake of treatment. In New Zealand, most conservative OA management is prescribed by general practitioners (GPs; primary care physicians) and physiotherapists. Few questionnaires have been designed to measure GPs' and physiotherapists' osteoarthritis-related health, illness and treatment beliefs. This study aimed to identify whether a questionnaire about low back pain beliefs, the Pain Attitudes and Beliefs Scale for Physiotherapists (PABS-PT), can be adapted to assess GPs' and physiotherapists' beliefs about osteoarthritis. Methods: This study used a cross-sectional observational design. Data were collected anonymously from GPs and physiotherapists using an online survey. The survey included a study-specific demographic and occupational characteristics questionnaire and the PABS-PT questionnaire adapted for osteoarthritis. All data were analysed using descriptive statistics, and the PABS-PT data underwent principal factor analysis. Results: In total, 295 clinicians (87 GPs, 208 physiotherapists) participated in this study. The principal factor analysis identified two factors or subscales (categorised as biomedical and behavioural), with a Cronbach's alpha of 0.84 and 0.44, respectively. Conclusions: The biomedical subscale of the PABS-PT appears appropriate for adaptation for use in the context of osteoarthritis, but the low internal consistency of the behavioural subscale suggests this subscale is not currently suitable. Future research should consider the inclusion of additional items to the behavioural subscale to improve internal consistency or look to develop a new, osteoarthritis-specific questionnaire. Trial Registration: This trial was part of the primary author's PhD, which began in 2012, and therefore this study was not registered.
On the basis of these observations, RN Braun developed 82 diagnostic protocols for a structured recording of various complaints. Method: All consultations during the years 2001 to 2014, in which 1 author (WF) had used diagnostic protocols in her single-handed practice, were analyzed retrospectively regarding reasons for encounter, diagnostic classification, and long-term outcome. Results: During the period, a diagnostic protocol was used 1686 times. It was applied at a rate of approximately 5% of 2500 new complaints annually, most often (1366 times) for febrile conditions. In 320 consultations for other complaints, 43 different diagnostic protocols were applied. Among them, the "tabula diagnostica" for various undifferentiated symptoms was used most frequently (n = 54), followed by diagnostic protocols for headache (n = 45), dizziness (n = 36), precordial pain (n = 20), nonspecific abdominal pain (n = 15), low back pain (n = 14), hypertension (n = 12), diarrhea > 1 week (n = 12), epigastralgia (n = 11), depression (n = 10), polyarthralgia (n = 8), cough, and lower abdominal pain (each n = 7). A final diagnosis was established in less than 20% of cases. Conclusions: This observational study from routine practice gives insight into how diagnostic protocols helped to manage complex patient presentations. A broader use of diagnostic protocols could investigate the potential of this consultation tool to handle the complexity of primary health care. The use of a standardized diagnostic approach could stimulate research, in particular on managing common complaints/undifferentiated illness and their inherent diagnostic uncertainty. abstract_id: PUBMED:31502778 A travel-loving woman in her eighties with lower back pain and weight loss. Background: This case report presents one of the first documented incidents of chronic Q-fever (C. burnetii) in Norway. A comprehensive workup resulted in an unexpected finding. Case Presentation: A Norwegian woman in her eighties presented to a district general hospital with lower back pain, decreased general condition and weight loss. Computed tomography (CT) revealed a large thoracic aortic aneurysm presumed to be of mycotic origin, and later magnetic resonance imaging (MRI) scans revealed osteomyelitis in the surrounding vertebrae. Conventional diagnostic workup did not identify the causative agent. After more than 6 months of different examinations, surgery, exhausting invasive procedures and antimicrobial treatment, we were ultimately successful in determining the microbial cause of the chronic mycotic aneurysm and osteomyelitis to be C. burnetii (Q-fever) through serological and PCR analysis. Interpretation: An increasing proportion of the population in all age groups travel abroad, and clinicians should be aware of the increasing incidence of imported infectious diseases. Obtaining a thorough medical history is still an important tool in the diagnostic process.
The survey included questions to characterize these patients and their experience with hydrocodone-related SEs. A neuropathic pain subgroup was also examined. Results: Among 630 respondents, the average age was 50.1 years (14.25). Most (90.6 percent) were Caucasian and 72.5 percent were female. Back pain or low back pain was the most common (42.1 percent) type of pain. Almost three-fourths (73.3 percent) experienced at least one SE, and 67.3 percent reported being bothered. More than three-fourths (78.3 percent) reported being satisfied with hydrocodone relieving pain; however, fewer (74.8 percent) reported being satisfied with it overall. More than one-fourth (27.6 percent) reported taking hydrocodone less than instructed, with 41.4 percent of them reporting bothersome SEs as a reason. A greater percentage of the neuropathic pain subgroup (266 respondents) experienced at least one SE (80.8 percent) and were bothered by them (75.6 percent). Overall satisfaction was slightly lower (71.1 percent) among these respondents, and among the 24.8 percent taking less than instructed, more than half (54.5 percent) reported bothersome SEs as a reason. Conclusions: This study demonstrates an unmet need for better therapeutic options to manage pain, including neuropathic pain. Therapies that offer improved tolerability also may increase adherence, which could affect overall satisfaction and response to pain management. abstract_id: PUBMED:10068920 "INTERMED": a method to assess health service needs. II. Results on its validity and clinical use. The validity and clinical use of a recently developed instrument to assess the health care needs of patients with a physical illness, called INTERMED, are investigated. The INTERMED combines data reflecting patients' biological, psychological, and social characteristics with information on health care utilization characteristics. An example of a patient population in which such an integral assessment can contribute to the appropriateness of care is patients with low back pain of degenerative or unknown origin. It supports the validity and the clinical usefulness of the INTERMED when clinically relevant subgroups in this heterogeneous population can be identified and described based on their INTERMED scores. The INTERMED was utilized in a group of patients (N = 108) with low back pain who varied in the chronicity of complaints, functional status, and associated disability. All patients underwent a medical examination and responded to a battery of validated questionnaires assessing biological, psychological, and social aspects of their life. In addition, the patients were assessed by the INTERMED. It was studied whether it proved to be possible to form clinically meaningful groups of patients based on their INTERMED scores; for this, a hierarchical cluster analysis was performed. In order to clinically describe them, the groups of patients were compared with the data from the questionnaires. The cluster analysis on the INTERMED scores revealed three distinguishable groups of patients. Comparison with the questionnaires assessing biological, psychological, and social aspects of disease showed that one group can be characterized as complex patients with chronic complaints and reduced capacity to work who applied for disability compensation. The other groups differed explicitly with regard to chronicity, but also on other variables.
By means of the INTERMED, clinically relevant groups of patients can be identified, which supports its use in clinical practice and as a method to describe case mix for scientific or health care policy purposes. In addition, the INTERMED is easy to implement in daily clinical practice and can help ease the operationalization of the biopsychosocial model of disease. More information on its validity in different patient populations is necessary. abstract_id: PUBMED:23152089 Prognostic factors of sciatica in the Canon of Avicenna. Prognosis studies are fast-developing and very practical types of medical research. Sciatica is one of the common types of low back pain, and identifying prognostic factors of the illness can help physicians and patients to choose the best method of practice. The prognostic factors of sciatica are presented from the Canon, the medical encyclopedia written by Avicenna, one of the most famous physicians in the history of medicine. abstract_id: PUBMED:36764063 Negative language use of the physiotherapist in low back pain education impacts anxiety and illness beliefs: A randomised controlled trial in healthy respondents. Objective: This study aimed to determine the effect of physiotherapists' negative language use on the nocebo effects of state anxiety and illness beliefs. Methods: A web-based randomised controlled trial included adults without recent musculoskeletal pain. The intervention was a short educational video about low back pain using negative language (nocebo condition: n = 87) versus a video using neutral or positive language (control condition: n = 82). State anxiety was assessed using the State-Trait Anxiety Inventory. Illness beliefs were assessed using the Illness Perception Questionnaire. Results: Nocebo and control groups differed in outcome measures (MANOVA Pillai's trace = 0.22, F = 4.98, df = (9,159), p < 0.001). Post-hoc analyses showed a medium to large effect for the nocebo condition on anxiety (d = 0.71, 95% CI 0.4-1.0). The nocebo group also had higher scores in three illness beliefs: beliefs on timeline (d = 0.45, 95% CI 0.14-0.75), treatment control (d = 0.43, 95% CI 0.12-0.74) and concern (d = 0.47, 95% CI 0.16-0.78). Conclusion: Physiotherapists' use of negative language contributes directly to higher state anxiety and illness beliefs that can trigger nocebo effects in the recipient. Practice Implications: Negative language use should be avoided. abstract_id: PUBMED:20492018 Self-management and chronic low back pain: a qualitative study. Aims: This paper is a report of a study of the self-management strategies of people with chronic low back pain and how their healthcare professionals perceived their role in facilitating self-management. Background: Chronic low back pain is a complex disorder, challenging to treat, and associated with wide-ranging adverse consequences including physical disability, psychosocial disruption, and increased use of healthcare resources. Most clinical guidelines suggest that self-management strategies are the best treatment option. Design: A qualitative analysis was conducted of semi-structured interviews with 64 people identified as having chronic low back pain and 22 healthcare professionals nominated by those people. The interviews were conducted in 2008. The people with chronic low back pain were asked about their self-management strategies; healthcare professionals were asked about how they perceived their role in the person's self-management. Data were analysed using a content analysis.
Findings: The most common strategies used by participants to manage their chronic low back pain were medication, exercise and application of heat. The nominated healthcare professionals were predominantly physiotherapists and general practitioners. Physiotherapists described exercises, particularly those aimed at improving core strength, as the main strategy that they encouraged people to use. General practitioners regarded themselves as primarily having three roles: prescription of pain medication, dispensing of sickness certificates, and referral to specialists. Conclusion: People with chronic low back pain use self-management strategies that they have discovered to provide relief and to prevent exacerbation. The strategies reflect an active process of decision-making that combines personal experience with professional recommendations. Answer: The study "History taking by physiotherapists with low back pain patients: are illness perceptions addressed properly?" investigated whether physiotherapists adequately address illness perceptions during initial consultations with patients suffering from low back pain (LBP). The study used a mixed-methods design and involved 34 physiotherapists who conducted usual history taking with a patient presenting with non-specific LBP. The interactions were audiotaped and analyzed using an observational instrument based on the domains of Leventhal's Common Sense Model. Additionally, patients completed the Illness Perception Questionnaire-Revised for LBP. The results indicated that while physiotherapists did assess illness identity and perceptions regarding the physical cause and controllability of LBP, they poorly assessed other illness perceptions such as timeline, consequences, coherence, and emotional representation. The questionnaire results showed that patients primarily attributed LBP to overuse, workload, and bad posture, and they held positive beliefs about the controllability of the condition and had high illness coherence. The conclusion drawn from the study was that Belgian physiotherapists mainly focused on bio-medically oriented illness perceptions, such as physical symptoms and causes, but did not sufficiently address psychosocially oriented illness perceptions as recommended in LBP guidelines. The study suggests that there is a need for continuing education to improve physiotherapists' knowledge regarding the use of all dimensions of illness perceptions in the assessment of patients with LBP, in order to comply with the bio-psycho-social treatment guidelines for LBP (PUBMED:26308888).
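A brief note on the standardized effect sizes quoted in the nocebo trial above (PUBMED:36764063): the d values reported there are, by standard convention, Cohen's d, computed from the two group means and a pooled standard deviation. The formula below is the textbook definition, shown only for orientation; the trial's exact computation is not stated in the abstract:

\[ d = \frac{\bar{x}_1 - \bar{x}_2}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} \]

Against the usual benchmarks (0.2 small, 0.5 medium, 0.8 large), the reported d = 0.71 for state anxiety indeed falls between a medium and a large effect.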
Instruction: Could plasma cystatin C be useful as a marker of hemodialysis low molecular weight proteins removal? Abstracts: abstract_id: PUBMED:15528941 Could plasma cystatin C be useful as a marker of hemodialysis low molecular weight proteins removal? Background: Plasma cystatin (pCyst) is a well-assessed tool for measuring renal function, and it could also play a part in hemodialysis adequacy. Methods: pCyst and other uremic toxins (urea, creatinine, parathyroid hormone, prolactin) were assessed before and after a dialysis session in 18 hemodialysis patients: 7 on bicarbonate hemodialysis (BHD) and 11 on mixed convective dialysis (MCD; 6 standard hemodiafiltration and 5 acetate-free biofiltration). Plasma levels and reduction ratios (RR) were then compared between the BHD and MCD groups. Results: The mean pre-dialysis pCyst level is nearly the same in both groups (5.3 +/- 0.8 vs. 5.7 +/- 1 mg/l, p = ns), although a substantial decrease occurs after MCD only (mean 2.4 +/- 1 vs. 6.2 +/- 2.2 mg/l after BHD, p = 0.002). The mean pCyst RR (PCRR) of 55.5% after MCD is poorly related to prolactin and urea RR, fairly comparable to parathyroid hormone RR and very close to creatinine RR (58.4%). Conclusions: Only MCD removes pCyst, but the amount of removal is different for other low molecular weight proteins (prolactin and parathyroid hormone) and similar for creatinine, a classic 'little molecule'. In view of the discrepancy of these findings, the use of pCyst in hemodialysis still seems premature and needs further studies. abstract_id: PUBMED:7933817 Determinants of the serum concentrations of low molecular weight proteins in patients on maintenance hemodialysis. Factors influencing the serum concentrations of low molecular weight proteins (LMWP) during long-term hemodialysis were studied in 112 patients undergoing dialysis for an average of 61.1 months (range 1 to 243). These patients were treated with AN69, cellulose acetate, cuprophan or polysulfone membranes. The following proteins were measured in serum before and after a four hour dialysis session: cystatin C (CYST C), beta 2-microglobulin (beta 2 m), Clara cell protein (CC16) and retinol-binding protein (RBP). Predialysis levels of the four proteins were markedly elevated. In simple regression analysis, pre-dialysis serum concentrations of beta 2 m and CC16 weakly correlated with the duration of dialysis treatment, but these relations completely disappeared when a stepwise regression analysis was performed using as predictors age, sex, residual diuresis, body weight loss (BWL), duration of hemodialysis and the type or ultrafiltration coefficient (UFC) of the membranes. The only significant determinants which emerged from this analysis were the residual diuresis and age which negatively correlated with CYST C, beta 2m and CC16 (residual diuresis only), and sex which influenced CYST C. During the dialysis session, the microproteins underwent changes that were related to their molecular radius, the membrane UFC and the BWL. After adjustment for the latter, high flux membranes (UFC ≥ 15 ml/h.m2.mm Hg) allowed up to 50% of CYST C and 25% of beta 2m to be removed. No significant elimination of CC16 and RBP was evident.
On the basis of these results, we estimated the effective pore radius of high flux membranes to be between 1.5 and 1.7 nm and that of low flux membranes to be below 1.5 nm. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:3058174 Cystatin C: a new marker of biocompatibility or a good marker for the redistribution of LMW proteins during hemodialysis? The mechanism(s) behind the larger relative increase of Plasma beta 2 microglobulin (P-beta 2m) than that of Plasma albumin (P-alb) during Cuprophan hemodialysis is disputed. To elucidate this phenomenon, P-alb, P-beta 2m (MW 11,800) and Plasma cystatin (P-cC; MW 13,000), an inhibitor of cysteine proteinases, were determined before and after a Cuprophan or polysulphone hemodialysis (4-7 hr, QB 200 ml/min) in 30 stable regular dialysis treatment (RDT) patients. Body weight (BW) decreased by 2.5 +/- 1.4% (mean +/- SD). P-alb, P-beta 2m and P-cC increased by 11.4 +/- 14.8%, 15.4 +/- 11.5%, and 22.1 +/- 14.3%, respectively, during Cuprophan dialysis. The relative increase of P-cC was larger than that of P-beta 2m (P less than 0.05) and that of P-alb (P less than 0.02). During polysulphone dialysis, BW decreased by 4.1 +/- 1.8%. P-alb, P-beta 2m, and P-cC increased almost equally by 28.1 +/- 18, 26.5 +/- 19.2, and 26.8 +/- 14.4%, respectively. These results are hard to interpret. Is the increase in P-cC a new marker of biocompatibility, or does it reflect the true shift of low molecular weight (LMW) proteins between the interstitial and the plasma volume during hemodialysis better than P-beta 2m? In vitro studies indicate that small amounts of both Serum beta 2m (S-beta 2m) and Serum cystatin C (S-cC) are adsorbed to or sieved through the Cuprophan membrane, findings which render the kinetics of LMW proteins during hemodialysis still more complex. abstract_id: PUBMED:6199657 Low molecular weight plasma proteins in the cerebrospinal fluid of children with hematological malignancies. The concentration of beta-2-microglobulin (beta 2-m) and of post gamma globulin (P gamma G) was examined in serum and cerebrospinal fluid from children with acute lymphatic leukemia (ALL) and non-Hodgkin's lymphoma (NHL). Data were analysed in order to determine whether the concentration of beta 2-m or P gamma G during remission would be of value in predicting relapse or eventual outcome. Mean serum concentration of beta 2-m was similar in good and poor prognosis patients with ALL in remission and was not significantly altered in CNS or marrow relapse. Mean CSF concentration in NHL was also similar in both prognostic groups, and in poor prognosis patients was not significantly altered in relapse. The same pattern was seen when P gamma G was measured in CSF (serum concentration of this protein being too low for accurate determination). High within-patient variability of levels of beta 2-m and P gamma G appeared to relate to chemotherapy rather than the disease process. The concentration of P gamma G was persistently raised in three children with brain damage of differing etiologies. Levels of two other low molecular weight proteins, retinol binding protein and alpha 1-microglobulin, were also determined in order to establish that beta 2-m and P gamma G concentration was not influenced by alteration in permeability of the blood-brain barrier. The beta 2-m and P gamma G concentration, although higher than reported in healthy children [5], does not appear to be of value as a prognostic indicator in ALL and NHL in children.
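Several abstracts in this set report reduction ratios (RR) for solutes and low molecular weight proteins. For orientation, the standard calculation is:

\[ RR = \left(1 - \frac{C_{\text{post}}}{C_{\text{pre}}}\right) \times 100\% \]

Reading the MCD values from PUBMED:15528941 (pre-dialysis about 5.7 mg/l, post-dialysis 2.4 mg/l) gives (1 - 2.4/5.7) × 100% ≈ 58%, close to, but not identical with, the reported mean PCRR of 55.5% — the mean of per-patient ratios generally differs from the ratio of group means. For small proteins such as beta 2-microglobulin and cystatin C, RRs are often additionally corrected for the hemoconcentration caused by ultrafiltration, as in the dialyzer-comparison abstract below (PUBMED:18638309).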
abstract_id: PUBMED:18638309 A new synthetic dialyzer with advanced permselectivity for enhanced low-molecular weight protein removal. Optimizing solute removal at minimized albumin loss is a major goal of dialyzer engineering. In a prospective, randomized, crossover study on eight patients (age 63 +/- 14 years) on maintenance hemodialysis, the new Baxter Xenium 170 high-flux dialyzer (BX), which contains a 1.7-m(2) PUREMA H dialysis membrane, was compared with two widely used reference high-flux dialyzers currently available for hemodialysis in North America, the Fresenius Optiflux 180 NR (FO) and the Gambro Polyflux 170 H (GP). Solute removal and biocompatibility were assessed in hemodialysis for 240 min at blood and dialysate flow rates of 300 and 500 mL/min, respectively. Additional ex vivo experiments detecting interleukin-1beta (IL-1b) generation in recirculated donor blood were performed to demonstrate the pyrogen retention properties of the dialyzers. The instantaneous plasma clearances were similar for the three dialyzers except for cystatin c (cysc), for which a lower clearance was measured with FO as compared with BX and GP after 30 and 180 min of hemodialysis. The hemoconcentration-corrected reduction ratios (RRs) of beta(2)-microglobulin and cysc were lower with FO (44 +/- 9 and 35 +/- 9%, respectively) versus BX (62 +/- 6 and 59 +/- 7%, respectively) and GP (61 +/- 7 and 56 +/- 8%, respectively). The RRs of the cytokines tumor necrosis factor alpha and interleukin-6 were not different between the dialyzers. The albumin loss was <300 mg for all filters. No differences between the dialyzers were found in the biocompatibility parameters, which showed very low leukocyte and complement activation. The ex vivo recirculation experiments revealed a significantly higher IL-1b generation for GP (710 +/- 585 pg/mL) versus BX (317 +/- 211 pg/mL) and FO (151 +/- 38 pg/mL). BX is characterized by a steep solute sieving profile with high low-molecular weight protein removal at virtually no albumin loss and excellent biocompatibility. This improved performance may be regarded as a contribution to optimal dialysis therapy. abstract_id: PUBMED:23105882 Effect of hemodialysis on circulating cystatin c levels in patients with end stage renal disease. Plasma cystatin C is an emerging parameter to assess kidney function. Its utility in assessing the adequacy of hemodialysis in patients with end-stage renal disease has, however, not been established with certainty. This study was therefore carried out to assess the usefulness of serum cystatin C estimation in patients undergoing low flux membrane hemodialysis. Serum creatinine and cystatin C were estimated in 20 patients before and after undergoing hemodialysis. The mean serum creatinine decreased from a pre-dialysis value of 7.72 mg/dL to a post-dialysis value of 2.90 mg/dL. On the contrary, the mean serum cystatin C levels were found to increase from a pre-dialysis value of 5.97 mg/L to a post-dialysis value of 8.25 mg/L. Therefore, serum cystatin C cannot be used to monitor dialysis adequacy. It serves, however, as a surrogate marker of the inadequacy of low flux membrane bicarbonate hemodialysis in clearing low molecular weight proteins from the circulation. abstract_id: PUBMED:30951588 The Effect of Molecular Weight on Passage of Proteins Through the Blood-Aqueous Barrier.
Purpose: To determine the effect of molecular weight (MW) on the concentration of plasma-derived proteins in aqueous humor and to estimate the plasma-derived and eye-derived fractions for each protein. Methods: Aqueous humor and plasma samples were obtained during cataract surgery on an institutional review board-approved protocol. Protein concentrations were determined by ELISA and quantitative antibody microarrays. A total of 93 proteins were studied, with most proteins analyzed using 27 to 116 aqueous and 6 to 30 plasma samples. Results: Plasma proteins without evidence of intraocular expression by sequence tags were used to fit a logarithmic model relating aqueous-plasma ratio (AH:PL) to MW. The log(AH:PL) appears to be well predicted by the log(MW) (P < 0.0001), with smaller proteins such as cystatin C (13 kDa) having a higher AH:PL (1:6) than larger proteins such as albumin (66 kDa, 1:300) and complement component 5 (188 kDa, 1:2500). The logarithmic model was used to calculate the eye-derived intraocular fraction (IOF) for each protein. Based on the IOF, 66 proteins could be categorized as plasma-derived (IOF < 20), whereas 10 proteins were primarily derived from eye tissue (IOF > 80), and 17 proteins had contribution from both plasma and eye tissue (IOF 20-80). Conclusions: Protein concentration of plasma-derived proteins in aqueous is nonlinearly dependent on MW in favor of smaller proteins. Our study demonstrates that for proper interpretation of results, proteomic studies evaluating changes in aqueous humor protein levels should take into account the plasma and eye-derived fractions. abstract_id: PUBMED:24789553 Removal and rebound kinetics of cystatin C in high-flux hemodialysis and hemodiafiltration. Background And Objectives: Cystatin C is a 13.3 kD middle molecule of similar size to β2-microglobulin and a marker of GFR in CKD. This study aimed to determine cystatin C kinetics in hemodialysis to understand whether blood concentrations may predict residual renal function and middle-molecule clearance. Design, Setting, Participants, & Measurements: Cystatin C removal and rebound kinetics were studied in 24 patients on high-flux hemodialysis or hemodiafiltration. To determine whether cystatin C concentrations are predictable, an iterative two-pool mathematical model was applied. Results: Cystatin C was cleared effectively, although less than β2-microglobulin (reduction ratios ± SD are 39% ± 11 and 51% ± 11). Cystatin C rebounded to 95% ± 5% of predialysis concentration by 12 hours postdialysis. The two-pool kinetic model showed excellent goodness of fit. Modeled extracellular cystatin C pool volume is smaller than that predicted, comprising 25.5% ± 9.2 of total body water. Iterated parameters, including nonrenal clearance, showed wide interindividual variation. Modeled nonrenal clearance was substantially higher than renal clearance in this population at 25.1 ± 6.6 ml/min per 1.73 m(2) body surface area. Conclusions: Plasma cystatin C levels may be used to measure middle-molecule clearance. Levels rebound substantially postdialysis and plateau in the interdialytic period. At low GFR, nonrenal clearance predominates over renal clearance, and its interindividual variation will limit use of cystatin C to predict residual renal function in advanced kidney disease. abstract_id: PUBMED:19133017 Matching efficacy of online hemodiafiltration in simple hemodialysis mode.
PUREMA H (referred to as PES) is an innovative dialysis membrane for enhanced low-molecular-weight (LMW) protein removal. The purpose of the study was to prove whether its efficacy in hemodialysis (HD) matches that of online hemodiafiltration (HDF) with conventional high-flux membranes. In a prospective, randomized, cross-over study on eight maintenance dialysis patients, treatment efficacy of HD with PES was compared with online postdilution HDF with the two synthetic high-flux membranes polysulfone (referred to as PSU) and Polyamix (referred to as POX). Apart from the infusion of replacement fluid, which was set at 20% of the blood flow rate of 300 mL/min, operating conditions in HD and HDF were kept identical. Small solute and LMW protein plasma clearances as well as the reduction ratio (RR) of cystatin C and retinol-binding protein were not different between the therapies. HDF with POX resulted in a significantly lower myoglobin RR as compared with HD with PES, and HDF with PSU. A 4% higher beta(2)-microglobulin RR was determined in HDF with PSU (73 +/- 5%) as compared with PES in HD (69 +/- 5%). The albumin loss was below 1 g for all treatments. Despite the fact that simple HD did not fully exploit the characteristics of PES, it achieved essentially similar LMW protein removal and albumin loss as compared with online postdilution HDF with the conventional synthetic high-flux membranes PSU and POX. Therefore, HD with PES may have beneficial effects on the outcome of maintenance dialysis patients similar to high-efficiency HDF. abstract_id: PUBMED:32734191 Development and Validation of Residual Kidney Function Estimating Equations in Dialysis Patients. Rationale & Objective: Measurement of residual kidney function is recommended for the adjustment of the dialysis prescription, but timed urine collections are difficult and prone to errors. Equations to calculate residual kidney function from serum concentrations of endogenous filtration markers and demographic parameters would simplify monitoring of residual kidney function. However, few equations to estimate residual kidney function using serum concentrations of small solutes and low-molecular-weight proteins have been developed and externally validated. Study Design: Study of diagnostic test accuracy. Setting & Participants: 823 Chinese peritoneal dialysis (PD) patients (development cohort) and 826 PD and hemodialysis patients from the Netherlands NECOSAD study (validation cohort). Tests Compared: Equations to estimate residual kidney function (estimated clearance [eCl]) using serum creatinine, urea nitrogen, cystatin C, β2-microglobulin (B2M), β-trace protein (BTP), and combinations, as well as demographic variables (age, sex, height, and weight). Equations were developed using multivariable linear regression analysis in the development cohort and then tested in the validation cohort. Equations were compared with published validated equations. Outcomes: Residual kidney function measured as urinary clearance (mCl) of urea nitrogen (mClUN) and average of creatinine and urea nitrogen clearance (mClUN-cr). Results: In external validation, bias (difference between mCl and eCl) was within ± 1.0 unit for all equations. Accuracy (percent of differences within ± 2.0 units) was significantly better for eClBTP, eClB2M, and eClBTP-B2M than eClUN-cr for both mClUN (78%, 80%, and 81% vs 72%; P < 0.05 for all) and mClUN-cr (72%, 78%, and 79% vs 68%; P < 0.05 for all).
The area under the curve for predicting mClUN > 2.0 mL/min was highest for eClB2M (0.853) and eClBTP-B2M (0.848). Results were similar for other validated equations. Limitations: Development cohort only consisted of PD patients, no gold-standard method for residual kidney function measurement. Conclusions: These results confirm the validity and extend the generalizability of residual kidney function estimating equations from serum concentrations of low-molecular-weight proteins without urine collection. Answer: Plasma cystatin C (pCyst) has been considered as a potential marker for the removal of low molecular weight proteins (LMWP) during hemodialysis. Studies have shown that pCyst levels decrease significantly after mixed convective dialysis (MCD), such as standard hemodiafiltration and acetate-free biofiltration, with a mean reduction ratio (RR) of 55.5%, which is very close to the RR for creatinine (58.4%) (PUBMED:15528941). This suggests that MCD can effectively remove pCyst, indicating its potential as a marker for the removal of LMWPs during this type of dialysis. However, the removal of pCyst varies with different dialysis modalities and membrane types. For instance, high-flux membranes have been shown to allow up to 50% removal of cystatin C, while no significant elimination was evident with low-flux membranes (PUBMED:7933817). Additionally, during hemodialysis with certain membranes, such as Cuprophan, an increase in plasma cystatin C levels was observed, raising questions about whether this increase is a marker of biocompatibility or reflects the true shift of LMW proteins between compartments during hemodialysis (PUBMED:3058174). Moreover, the use of advanced dialyzers, such as the Baxter Xenium 170 high-flux dialyzer, has demonstrated enhanced removal of cystatin C with minimal albumin loss, suggesting improved performance in LMWP removal (PUBMED:18638309). However, in patients undergoing low flux membrane hemodialysis, serum cystatin C levels were found to increase post-dialysis, indicating that it may not be a suitable marker for dialysis adequacy in this setting (PUBMED:23105882). Kinetic studies have also shown that cystatin C is cleared effectively during high-flux hemodialysis or hemodiafiltration, but there is a substantial rebound to near predialysis concentration within 12 hours postdialysis (PUBMED:24789553). This rebound effect and the interindividual variation in nonrenal clearance of cystatin C may limit its use in predicting residual renal function in advanced kidney disease. In summary, while plasma cystatin C has the potential to be a useful marker for the removal of LMWPs during certain types of hemodialysis, its utility may be limited by the type of dialysis membrane used, the dialysis modality, and the individual patient's characteristics. The effectiveness of cystatin C removal does not always correlate with the removal of other LMWPs, such as prolactin and parathyroid hormone, which suggests that pCyst may not be a universal marker for all LMWPs (PUBMED:15528941).
Additionally, the molecular weight of proteins affects their passage through the blood-aqueous barrier, with smaller proteins like cystatin C having a higher ratio in aqueous humor compared to larger proteins. This molecular weight-dependent selectivity also plays a role in the removal efficiency during dialysis (PUBMED:30951588). Lastly, equations have been developed to estimate residual kidney function using serum concentrations of cystatin C and other LMWPs, which could simplify monitoring without the need for urine collection. These equations have been validated and show promise for clinical use, although they may not be applicable in all patient populations (PUBMED:32734191). In conclusion, plasma cystatin C could be useful as a marker of hemodialysis LMWP removal under certain conditions, particularly with the use of advanced dialysis membranes and modalities that enhance its clearance. However, its utility is influenced by various factors, including the type of hemodialysis, the rebound effect post-dialysis, and individual patient characteristics. Further studies are needed to establish pCyst as a reliable marker for hemodialysis adequacy across different settings.
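For reference, the reduction ratio (RR) used throughout the abstracts and answer above is the standard session-wise summary of how much of a solute one dialysis session removes; a minimal statement of the definition (standard usage, not drawn from any one abstract):

\[
\mathrm{RR} = \left(1 - \frac{C_{\text{post}}}{C_{\text{pre}}}\right) \times 100\%
\]

Thus the RR of 55.5% reported for cystatin C after mixed convective dialysis (PUBMED:15528941) corresponds to a post-dialysis concentration of 0.445 times the pre-dialysis value; the "corrected" RRs in PUBMED:18638309 additionally adjust the post-dialysis concentration for the hemoconcentration caused by fluid removal.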
Instruction: Are radiological joint space widths of normal hips asymmetrical? Abstracts: abstract_id: PUBMED:10364904 Are radiological joint space widths of normal hips asymmetrical? Background: To be certain that the joint space width is abnormal in the case of hip joint pain when compared with the contralateral hip requires knowledge of physiological dissymmetry. Aim Of The Study: To assess interindividual variability and dissymmetry in pelvic radiological joint space width. Methods: Pelvic radiographs of subjects without hip joint disease. Measurement with a 0.1 mm graduated magnifying glass and 0.5 mm graduated flat ruler at the hip superointermediate site (vertical going through the femoral head centre). After randomisation of the side to measure, analysis of nine groups of 19 plain films by one investigator blind for the result of the contralateral side. Results: The difference between the left and right hip was plotted against the corresponding mean for all 171 normal subjects. This shows the frequency and the limits of the asymmetry at each measurement site. The asymmetry is independent of interindividual variability of the joint space width and greater than the measurement error in most subjects. Conclusion: This study confirms the interindividual variability of hip joint space width, shows the frequency of hip joint space asymmetry and defines its limit. abstract_id: PUBMED:27693959 Long-term effects of lateral wedge orthotics on hip and ankle joint space widths. Background: Lateral wedge insoles have been used for the treatment of medial knee osteoarthritis (OA) and have been shown to reduce loading of the medial compartment of the knee. However, as the entire lower extremity acts as a single kinetic chain, altering the biomechanics of the knee may also have significant effects at the ankles or hips. We aimed to evaluate the effects of lateral wedge orthotics on ankle and hip joints, compared to neutral orthotics, by assessing the changes in joint space width (JSW) during 36 months of continuous use. Methods: We prospectively enrolled 109 subjects with symptomatic osteoarthritis of the medial knee according to the American College of Rheumatology criteria. The trial was double blind and patients were randomized to either wedged or neutral orthotic shoe inserts. Hip and ankle JSWs were quantified using plain radiographs at baseline and at 36-month follow-up. Findings: 45 patients completed the 36-month study. 31 of those who completed the study were using the lateral wedge and 14 the neutral orthotics. Two patients in the wedge group had missing radiographs and were not included in the JSW analyses. There were no significant differences between the wedge and the neutral orthotics groups in the magnitude of JSW change at either the hip or the ankles at 36 months. Interpretation: We found no significant adverse effects of the lateral wedges on ankles or hips. (ClinicalTrials.gov NCT00076453). abstract_id: PUBMED:28828181 Peri-talar re-alignment osteotomy for joint preservation in asymmetrical ankle osteoarthritis.
Various types of re-alignment surgery are used to preserve the ankle joint in cases of intermediate ankle arthritis with partial joint space narrowing. The short-term and mid-term results after re-alignment surgery are promising, with substantial post-operative pain relief and functional improvement that is reflected by high rates of patient satisfaction. In this context, re-alignment surgery can preserve the joint and reduce the pathological load that acts on the affected area. Good clinical and radiological outcomes can be achieved in asymmetrical ankle osteoarthritis by understanding the specific deformities and appropriate indications for different surgical techniques. Cite this article: EFORT Open Rev 2017;2:324-331. DOI: 10.1302/2058-5241.2.160021. abstract_id: PUBMED:19747582 Radiological joint space width in the clinically normal hips of a Korean population. Objective: The purpose of this paper was to investigate the association of the joint space width (JSW) of the hip with radiologically observed hip deformity, the anthropological features and aging in a clinically asymptomatic Korean population. Design: 428 consecutive patients who were without clinical evidence of hip osteoarthritis (OA) and who underwent supine anteroposterior (AP) pelvic radiography for hip contusion or a routine health check were analyzed for the relation of joint space narrowing to the center-edge (CE) angle, the acetabular depth, the head-neck ratio, the neck-shaft angle, the pelvic width, the height, the body mass index (BMI), gender and age. Results: The CE angle was inversely associated with the superomedial JSW and the superolateral JSW. The acetabular depth was positively associated with superomedial JSW. A decreased head-neck ratio and the neck-shaft angle were not associated with the superomedial or superolateral JSW. The height was positively associated with an increased superomedial JSW, but not with the superolateral JSW. The BMI and increased age were positively associated with the superolateral JSW, but not with the superomedial JSW. Conclusion: Our study showed that the CE angle was the single constant radiological parameter that was inversely related to the JSW of hip joints. Further, the height was positively related to the superomedial JSW while the BMI was positively related to the superolateral JSW. The normal aging process was not associated with joint space narrowing of the hip joint. abstract_id: PUBMED:36384277 Bisphosphonate use is associated with a decreased joint narrowing rate in the non-arthritic hip. Aims: The preventive effects of bisphosphonates on articular cartilage in non-arthritic joints are unclear. This study aimed to investigate the effects of oral bisphosphonates on the rate of joint space narrowing in the non-arthritic hip. Methods: We retrospectively reviewed standing whole-leg radiographs from patients who underwent knee arthroplasties from 2012 to 2020 at our institute. Patients with previous hip surgery, Kellgren-Lawrence grade ≥ II hip osteoarthritis, hip dysplasia, or rheumatoid arthritis were excluded. The rate of hip joint space narrowing was measured in 398 patients (796 hips), and the effects of the use of bisphosphonates were examined using the multivariate regression model and the propensity score matching (1:2) model. Results: A total of 45 of 398 (11.3%) eligible patients were taking an oral bisphosphonate at the time of knee surgery, with a mean age of 75.8 years (SD 6.2) in bisphosphonate users and 75.7 years (SD 6.8) in non-users.
The mean joint space narrowing rate was 0.04 mm/year (SD 0.11) in bisphosphonate users and 0.12 mm/year (SD 0.25) in non-users (p < 0.001). In the multivariate model, age (standardized coefficient = 0.0867, p = 0.016) and the use of a bisphosphonate (standardized coefficient = -0.182, p < 0.001) were associated with the joint space narrowing rate. After successfully matching 43 bisphosphonate users and 86 non-users, the joint narrowing rate was smaller in bisphosphonate users (p < 0.001). Conclusion: The use of bisphosphonates is associated with decreased joint degeneration in non-arthritic hips after knee arthroplasty. Bisphosphonates slow joint degeneration, thus maintaining the thickness of joint cartilage in the normal joint or during the early phase of osteoarthritis. Cite this article: Bone Joint Res 2022;11(11):826-834. abstract_id: PUBMED:22274624 Computational measurement of joint space width and structural parameters in normal hips. Introduction: Joint space width (JSW) of hip joints on radiographs in normal population may vary by related factors, but previous investigations were insufficient due to limitations of sources of radiographs, inclusion of subjects with osteoarthritis, and manual measurement techniques. We investigated influential factors on JSW using semiautomatic computational software on pelvic radiographs in asymptomatic subjects without radiological osteoarthritic findings. Methods: Global and local JSW at the medial, middle, and lateral compartments, and the hip structural parameters were measured in 150 asymptomatic, normal cases (300 hips), using customized computational software. Results: Reliability of measurement in global and local JSWs was high with intraobserver reproducibility (intraclass correlation coefficient) ranging from 0.957 to 0.993 and interobserver reproducibility ranging from 0.925 to 0.985. There were significant differences among three local JSWs, with the largest JSW at the lateral compartment. Global and medial local JSWs were significantly larger in the right hip, and global, medial and middle local JSWs were significantly smaller in women. Global and local JSWs were inversely correlated with CE angle and positively correlated with horizontal distance of the head center, but not correlated with body mass index in men and women. They were positively correlated with age and inversely correlated with vertical distance of the head center only in men. Conclusions: There were interindividual variations of JSW in normal population, depending on sites of the weight-bearing area, side, gender, age, and hip structural parameters. For accurate diagnosis and assessment of hip osteoarthritis, consideration of those influential factors other than degenerative change is important. abstract_id: PUBMED:33575169 Hip joint space width in an asymptomatic population: Computed tomography analysis according to femoroacetabular impingement morphologies. Background: Although the association between femoroacetabular impingement (FAI) syndrome and hip osteoarthritis (OA) is well established, not all hips exhibiting cam or pincer morphologies (i.e. imaging findings of FAI syndrome) are symptomatic or arthritic. It is difficult to detect which subgroup will wear out, or how the arthritic process starts radiographically.
Therefore, in a retrospective study based on computed tomography (CT) analysis, we measured the joint space width (JSW) according to a standard protocol and investigated its variation according to the presence of a cam and/or pincer morphology. We hypothesized that the radiological presence of cam and/or pincer hip morphologies, even in asymptomatic subjects, would affect JSW. Methods: Two hundred pelvic CT scans performed for non-orthopedic etiologies in asymptomatic patients were analyzed using 3D software. After excluding patients with hip OA or previous hip surgery, 194 pelvic CT scans (388 hips) were retained. We measured for each hip the presence of FAI syndrome imaging findings (cam and pincer morphologies) using the classical parameters of coxometry. In addition, we performed a measurement of articular joint space width according to a standard protocol. We then calculated the mean thickness of 3 defined regions along the femoroacetabular joint: anterior-superior, posterior-inferior, and posterior-superior. Lastly, we compared the JSW across 4 groups: hips with (1) no cam or pincer, (2) pincer, (3) cam, and (4) cam and pincer morphologies using a multivariate analysis. Additionally, a topographic heatmap of JSW was plotted allowing quantitative representation of JSW along the joint. Results: Increased JSW with peak difference of 0.9 mm (25.7%) was found in hips with cam and pincer morphologies when compared to normal ones (p = 0.002) and to hips with pincer or cam morphologies only. Conclusion: Positive variations in JSW were associated with the presence of cam and pincer morphologies. This significant increase in JSW could be one of the earliest measurable changes preceding later classical alterations. abstract_id: PUBMED:9010875 The effects of position on the radiographic joint space in osteoarthritis of the hip. The aim of the study was to assess whether radiographic hip joint space thickness was changed by weight-bearing (WB) compared with non weight-bearing (NWB) position, and to evaluate whether radiographs centered on the hip were more sensitive than pelvic X-rays to detect such a change. Anteroposterior radiographs of the pelvis were made in 30 patients with hip osteoarthritis (OA) (46 OA and 11 normal hips). Osteoarthritic as well as contralateral normal hips were analyzed. Radiographs centered on the OA hip were performed in 28 other patients. X-rays were made in WB and NWB positions using a standardized radiological procedure. Measurements of mean joint space width (MeanJSW), maximum joint space narrowing (MaxJSN), and joint space surface area (JSA) were made using a computerized image analysis system. The joint space width was unaffected by WB in normal joints but decreased with WB in OA joints. The decrease was significant only when considering MaxJSN in patients with a joint space thickness smaller than 2.5 mm. The difference between WB and NWB was larger in radiographs centered on the hip than on pelvic X-rays. MeanJSW and JSA were found to be less sensitive than MaxJSN. The decrease of joint space width was inversely correlated with joint space size in WB. These results suggest that WB radiographs of the hip should be used in preference to NWB in studies of hip OA. abstract_id: PUBMED:18622665 Radiographic joint space width in patients with Crowe Type-I dysplastic hips. Unlabelled: Radiographic evaluation of preoperative joint space width is believed important to predict the long-term results of osteotomy.
We asked whether joint space width differs in the supine and standing positions in patients with Crowe Type-1 osteoarthritis (OA) secondary to developmental dysplasia of the hip (DDH). Joint space width was measured in the supine and standing positions in 146 women and 16 men (231 hips) with OA. Subjects had a mean age of 46.7 years (range, 22-59 years). Differences were seen on radiographs in joint space width between supine (2.35 +/- 1.65 mm; range, 0.1-6.2 mm) and standing (2.04 +/- 1.78 mm; range, 0.0-5.9 mm). In 27 of 172 hips with greater than 1 mm joint space in the supine position, joint space width was decreased by greater than 1 mm in the standing position. To evaluate preoperative joint space width in patients scheduled for osteotomy, radiographs should be obtained with the patient in the standing position. Level Of Evidence: Level IV, diagnostic study. abstract_id: PUBMED:9597791 Three-dimensional pressure distribution of the human hip joint--comparison between normal hips and dysplastic hips. We developed a system for analyzing the three-dimensional contact pressure distribution in the human hip joint. Hip joint images obtained by computed tomography were loaded into a computer, and the articular surface was reconstructed as a three-dimensional polygonal model. The pressure distribution of this hip model was then analyzed using a rigid-body spring model. Using this system, the three-dimensional pressure distribution in dysplastic hips was compared with that in normal hips. Seven dysplastic hips and five normal hips were analyzed. In the dysplastic hips, the maximum peak pressure was 5.3 MPa, while it was 2.5 MPa in the normal hips. The peak pressure was located at the edge of the acetabulum in the dysplastic hips. This system allowed us to easily and accurately evaluate the mechanical status of the hip joint. For individual patients, the clinical course could be easily predicted and various treatment methods could be examined preoperatively. Answer: Yes, radiological joint space widths of normal hips can be asymmetrical. A study assessing interindividual variability and dissymmetry in pelvic radiological joint space width documented the frequency of hip joint space asymmetry and defined its limits. The asymmetry is independent of interindividual variability of the joint space width and greater than the measurement error in most subjects (PUBMED:10364904).
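The difference-versus-mean analysis described in PUBMED:10364904 (each subject's left-right difference plotted against the pair mean) is straightforward to reproduce; the following is a minimal illustrative sketch in Python, assuming paired width lists in millimetres and approximately normal differences. The function name and the 1.96-standard-deviation reference limits are generic choices, not taken from the paper.

import numpy as np

def asymmetry_limits(left, right):
    # Per-subject left-right differences and the pair means that form the
    # x-axis of a difference-vs-mean plot (the analysis in PUBMED:10364904).
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    diff = left - right
    pair_mean = (left + right) / 2.0
    d_bar = diff.mean()
    s_d = diff.std(ddof=1)  # sample standard deviation of the differences
    # Approximate 95% reference limits for physiological asymmetry,
    # assuming roughly normal differences.
    limits = (d_bar - 1.96 * s_d, d_bar + 1.96 * s_d)
    return d_bar, limits, pair_mean

# Example with invented widths (mm) for three subjects:
# asymmetry_limits([4.1, 3.8, 4.5], [3.9, 4.0, 4.3])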
Instruction: Comparison of abdominal aortic aneurysm diameter measurements obtained with ultrasound and computed tomography: Is there a difference? Abstracts: abstract_id: PUBMED:12947257 Comparison of abdominal aortic aneurysm diameter measurements obtained with ultrasound and computed tomography: Is there a difference? Objectives: Accurate diameter measurements of abdominal aortic aneurysm (AAA) with both computed tomography (CT) and ultrasound (US) are essential for screening, planning surgical intervention, and follow-up after endovascular repair. Often there is a discrepancy between measurements obtained with CT and US, and neither limit of agreement (LOA) nor correlation between the two imaging methods has been clearly established. The purpose of this study was to assess the paired differences in AAA diameter measurements obtained with CT and US in a large national endograft trial. Methods: CT and US measurements were obtained from an independent core laboratory established to assess imaging data in a national endograft trial (Ancure; Guidant, Menlo Park, Calif). The study included only baseline examinations in which both CT and US measurements were available. Axial CT images and transverse US images were assessed for maximal AAA diameter and recorded as CT(max) and US(max), respectively. Correlations and LOA were performed between all image diameters, and differences in their means were assessed with paired t test. Results: A total of 334 concurrent measurements were available at baseline after endovascular repair. CT(max) was greater than US(max) in 95% (n = 312), and mean CT(max) (5.69 +/- 0.89 cm) was significantly larger (P < .001) than mean US(max) (4.74 +/- 0.91 cm). The correlation coefficient between CT(max) and US(max) was 0.705, but the difference between the two was less than 1.0 cm in only 51%. There was less discrepancy between CT(max) and US(max) for small AAA (0.7 cm, 15.3%) compared with medium (0.9 cm, 17.9%) and large (1.46 cm, 20.3%) AAA; however, the difference was not statistically significant. LOA between CT(max) and US(max) (-0.45 to 2.36 cm) exceeded the limits of clinical acceptability (-0.5 to 0.5 cm). Poor LOA was also found in each subgroup based on AAA size. Conclusions: Maximal AAA diameter measured with CT is significantly and consistently larger than maximal AAA diameter measured with US. The clinical significance of this difference and its cause remain a subject for further investigation. abstract_id: PUBMED:27542700 A Systematic Review of Ultrasound or Magnetic Resonance Imaging Compared With Computed Tomography for Endoleak Detection and Aneurysm Diameter Measurement After Endovascular Aneurysm Repair. Purpose: To analyze the literature comparing ultrasound [duplex (DUS) or contrast-enhanced (CEUS)] or magnetic resonance imaging (MRI) with computed tomography angiography (CTA) for endoleak detection and aneurysm diameter measurement after endovascular aneurysm repair (EVAR). Methods: A systematic review identified 31 studies that included 3853 EVAR patients who had paired scans (DUS or CEUS vs CTA or MRI vs CTA) within a 1-month interval for identification of endoleaks during EVAR surveillance. The primary outcome was the number of patients with an endoleak detected by one test but undetected by another test. Results are presented for all endoleaks and for types I and III endoleaks only. Aneurysm diameter measurements between CTA and ultrasound were examined using meta-analysis. Results: Endoleaks were seen in 25.6% (985/3853) of patients after EVAR.
Fifteen studies compared DUS with CTA for the detection of all endoleak types. CTA had a significantly higher proportion of additional endoleaks detected (214/2346 vs 77/2346 for DUS). Of 19 studies comparing CEUS with CTA for the detection of all endoleak types, CEUS was more sensitive (138/1694) vs CTA (51/1694). MRI detected 42 additional endoleaks that were undetected by CTA during the paired scans, whereas CTA detected 2 additional endoleaks that MRI did not show. CTA had a similar proportion of additional types I and III endoleaks undetected by CEUS or MRI. Of 9 studies comparing ultrasound vs CTA for post-EVAR aneurysm diameter measurement, the aneurysm diameter measured by CTA was greater than that measured by ultrasound (mean difference -1.70 mm, 95% confidence interval -2.45 to -0.96, p < 0.001). Conclusion: This study demonstrated that CEUS and MRI are more accurate than CTA for the detection of post-EVAR endoleaks, but they are no better than CTA for detecting types I and III endoleaks specifically. Aneurysm diameter differences between CTA and ultrasound should be considered when evaluating the change in aneurysm diameter postoperatively. abstract_id: PUBMED:36960142 The correlation between different ultrasound planes and computed tomography measures of abdominal aortic aneurysms. Introduction: Ultrasound measurements of the aorta are typically taken in the axial plane, with the transducer perpendicular to the aorta, and diameter measurements are obtained by placing the callipers from the anterior to the posterior wall and, in the transverse plane, from the right to the left side of the aorta. While the 'conventional' anteroposterior walls in both sagittal and transverse planes may be suitable for aneurysms with less complicated geometry, there is controversy regarding the suitability of this approach for complicated, particularly tortuous aneurysms, as they may offer a more challenging situation. Previous work undertaken within our research group found that when training inexperienced users of ultrasound, they demonstrated more optimal calliper placement to the abdominal aorta when approached from a decubitus window to obtain a coronal image compared to the traditional ultrasound approach. Purpose: To observe the level of agreement in real-world reporting between computed tomography (CT) and ultrasound measurements in three standard planes: transverse AP, sagittal AP and coronal (left to right) infra-renal abdominal aortic aneurysm (AAA) diameter. Methodology: This is a retrospective review of the Otago Vascular Diagnostics database for AAA, where ultrasound and CT diameter data, available within 90 days of each other, were compared. In addition to patient demographics, the infrarenal aorta ultrasound diameter measurements in transverse AP and sagittal AP, along with a coronal decubitus image of the aorta, were collected. No transverse measurement was performed from the left to the right of the aorta. Results: Three hundred twenty-five participants (238 males, mean age 76.4 ± 7.5) were included. Mean ultrasound outer-to-outer wall transverse AP and sagittal AP diameters were 48.7 ± 10.5 mm and 48.9 ± 9.9 mm, respectively. The coronal diameter measurement of the aorta from left to right was 53.9 ± 12.8 mm in the left decubitus window. The mean ultrasound max was 54.3 ± 12.6 mm. The mean CT diameter measurement was 55.6 ± 12.7 mm.
Correlation between the CT max and ultrasound max was r2 = 0.90; between CT and the coronal measurement, r2 = 0.90; between CT and AP transverse, r2 = 0.80; and between CT and the AP sagittal measurement, r2 = 0.77. Conclusion: The decubitus ultrasound window of the abdominal aorta, with measurement of the coronal plane, is highly correlated and in agreement with CT scanning. This window may offer an alternative approach to measuring the infrarenal abdominal aortic aneurysm and should be considered when performing surveillance of all infra-renal AAA. abstract_id: PUBMED:15234697 The difference between ultrasound and computed tomography (CT) measurements of aortic diameter increases with aortic diameter: analysis of axial images of abdominal aortic and common iliac artery diameter in normal and aneurysmal aortas. The Tromsø Study, 1994-1995. Objective: To assess agreement between ultrasound and computed tomography (CT) measurements from axial images of normal and aneurysmatic aortic and common iliac artery diameter. Design: Part of a population health screening for abdominal aortic aneurysm conducted in 1994-1995. Materials And Methods: Three hundred and thirty-four subjects with and 221 subjects without ultrasound-detected aneurysm were scanned with CT. Three technicians and one radiologist measured ultrasonographic diameters and five radiologists measured CT diameters. The paired ultrasound-CT measurement differences were analyzed to assess agreement. Results: Compared to CT measurements, ultrasound slightly underestimated the diameter in normal aortas and tended to overestimate the diameter in aneurysmal aortas. In 555 ultrasound-CT pairs of measurements, the absolute differences for measurements of maximal aortic diameter were 2 mm or less in 62, 60 and 77% in anterior-posterior, transverse and maximum diameter in any plane, respectively. The corresponding figures for an absolute difference of 5 mm or more were 14, 18 and 8%, respectively. Variability increased with increasing diameter. Conclusions: Both ultrasound and CT measurements of abdominal aortic diameter are liable to variability and neither of these methods can be considered to be 'gold standard'. Both methods can be used, while taking variability into consideration when making clinical decisions. abstract_id: PUBMED:19631858 Abdominal aortic aneurysm diameter: a comparison of ultrasound measurements with those from standard and three-dimensional computed tomography reconstruction. Objective: Aortic aneurysm size is a critical determinant of the need for intervention, yet the maximal diameter will often vary depending on the modality and method of measurement. We aimed to define the relationship between commonly used computed tomography (CT) measurement techniques and those based on current reporting standards and to compare the values obtained with diameter measured using ultrasound (US). Methods: CT scans from patients with US-detected aneurysms were analyzed using three-dimensional reconstruction software. Maximal aortic diameter was recorded in the anteroposterior (CT-AP) plane, along the maximal ellipse (CT-ME), perpendicular to the maximal ellipse (CT-PME), or perpendicular to the centerline of flow (CT-PCLF). Diameter measurements were compared with each other and with maximal AP diameter according to US (US-AP). Analysis was performed according to the principles of Bland and Altman. Results are expressed as mean +/- standard deviation.
Results: CT and US scans from 109 patients (92 men, 17 women), with a mean age of 72 +/- 8 years, were included. The mean of each series of readings on CT was significantly larger than the mean US-AP measurement (P < .001), and they also differed significantly from each other (P < .001). The CT-PCLF diameter was larger than CT-AP and CT-PME by mean values of 3.0 +/- 6.6 and 5.9 +/- 6.0 mm, respectively. The CT-ME diameter was larger than CT-PCLF by a mean of 2.4 +/- 5 mm. The US-AP diameter was smaller than CT-AP diameter by 4.2 +/- 4.9 mm, CT-ME by 9.6 +/- 8.0 mm, CT-PME by 1.3 +/- 5 mm, and smaller than CT-PCLF by 7.3 +/- 7.0 mm. Aneurysm size did not significantly affect these differences. Seventy-eight percent of 120 pairs of intraobserver CT measurements and 65% of interobserver CT measurements differed by <2 mm. Conclusions: CT-based measurements of aneurysm size tend to be larger than the US-AP measurement. CT-PCLF diameters are consistently larger than CT-PME as well as CT-AP measurements. These differences should be considered when applying evidence from previous trials to clinical decisions. abstract_id: PUBMED:35903666 3D Ultrasound Measurements Are Highly Sensitive to Monitor Formation and Progression of Abdominal Aortic Aneurysms in Mouse Models. Background: Available mouse models for abdominal aortic aneurysms (AAAs) differ substantially in the applied triggers, associated pathomechanisms and rate of vessel expansion. While maximum aortic diameter (determined after aneurysm excision or by 2D ultrasound) is commonly applied to document aneurysm development, we evaluated the sensitivity and reproducibility of 3D ultrasound to monitor aneurysm growth in four distinct mouse models of AAA. Methods: The models included angiotensin-II infusion in ApoE deficient mice, topical elastase application on aortas in C57BL/6J mice (with or without oral administration of β-aminoproprionitrile) and intraluminal elastase perfusion in C57BL/6J mice. AAA development was monitored using semi-automated 3D ultrasound for aortic volume calculation over 12 mm length and assessment of maximum aortic diameter. Results: While the models differed substantially in the time course of aneurysm development, 3D ultrasound measurements (volume and diameter) proved highly reproducible with concordance correlation coefficients > 0.93 and variations below 9% between two independent observers. Except for the elastase perfusion model where aorta expansion was lowest and best detected by diameter increase, all other models showed high sensitivity of absolute volume and diameter measurements in monitoring AAA formation and progression by 3D ultrasound. When compared to standard 2D ultrasound, the 3D derived parameters generally reached the highest effect size. Conclusion: This study has yielded novel information on the robustness and limitations of semi-automated 3D ultrasound analysis and provided the first direct comparison of aortic volume increase over time in four widely applied mouse models of AAA. While 3D ultrasound generally proved highly sensitive in detecting early AAA formation, the 3D based volume analysis was found inferior to maximum diameter assessment in the elastase perfusion model where the extent of inflicted local injury is determined by individual anatomical features. abstract_id: PUBMED:15707804 Ultrasonographic measurement of aortic diameter by emergency physicians approximates results obtained by computed tomography.
To assess agreement between emergency physicians' measurements of abdominal aortic diameter using ultrasound in the Emergency Department (ED) and measurements obtained by computed tomography (CT), a double-blinded, prospective study was conducted. The study enrolled a convenience sample of patients over 50 years of age presenting to the ED and scheduled to undergo CT scan of the abdomen and pelvis. Before CT scan, each patient received an ultrasound from a resident or attending emergency physician measuring anterior-posterior aortic diameter transversely at the approximate level of the superior mesenteric artery (SMA), longitudinally midway between the SMA and the iliac bifurcation, and transversely approximately 1 cm above the iliac bifurcation. Two radiologists blinded to the ultrasound measurements then independently measured aortic diameters at the corresponding anatomical points as imaged by CT. The ultrasonographic measurements were then compared with an average of the two CT measurements. Forty physicians enrolled a total of 104 patients into the study. Ultrasonographic measurements of aortic diameter were slightly smaller than those obtained by CT scan, with a difference of means of -0.39 cm (95% CI -0.25 to -0.53) at the level of the SMA, -0.26 cm (95% CI -0.17 to -0.36) on longitudinal view, and -0.11 cm (95% CI -0.01 to 0.22) at the bifurcation. At the level of the SMA, the difference in measurements by ultrasound and CT would be expected to be less than 1.41 cm, 95% of the time. At the bifurcation, we expect 95% of the differences to be less than 1.05 cm. Agreement was closest on longitudinal view, with 95% of the differences expected to be less than 0.94 cm. Participating physicians estimated the time required to complete their ultrasound studies to be less than 5 min in a majority of cases. In conclusion, ultrasonographic measurement of aortic diameter by emergency physicians rapidly and effectively approximates measurements obtained by CT scan. abstract_id: PUBMED:34606959 Three-dimensional ultrasound volume and conventional ultrasound diameter changes are equally good markers of endoleak in follow-up after endovascular aneurysm repair. Introduction: The main disadvantages of computed tomography angiography (CTA) in follow-up after endovascular aneurysm repair are the risks of contrast-induced renal impairment and radiation-induced cancer. Three-dimensional ultrasound is a new technique for volume estimation of the aneurysm sac. Some studies have reported promising results. The aim of this study was to evaluate the accuracy and precision of three-dimensional ultrasound aneurysm sac-volume estimates, and to explore whether volume and/or diameter changes on ultrasound can be used as markers of endoleak. Methods: A single-center diagnostic accuracy study was performed. A total of 92 patients planned for endovascular aneurysm repair were prospectively and consecutively enrolled (2013-2016). Aneurysm sac diameter and volume were measured using CTA, conventional ultrasound, and three-dimensional ultrasound preoperatively and 1, 6, 12, and 24 months postoperatively. Three-dimensional ultrasound was performed with a commercially available electromechanical transducer. Patients with endoleak were observed 5 years after endovascular aneurysm repair. Results: A total of 79 men and 13 women were included. Mean age was 74 years (57-92 years). Median follow-up was 24 months. Endoleak cases were observed for up to 55 months.
Diameter measurements on conventional ultrasound correlated well with CT diameters (r = 0.9, P < .05, n = 347), and Bland-Altman analyses showed an upper limit of agreement of +0.5 cm and a lower limit of agreement of -0.8 cm. The mean difference was -0.13 cm ± 0.36 cm. Three-dimensional ultrasound volumes had a correlation with CTA diameters of r = 0.8 (P < .05, n = 347) and with three-dimensional CT volumes of r = 0.8 (P < .05, n = 155). Receiver operating characteristic analyses showed that the diameter and volume changes that led to reintervention were most accurate at 24-month follow-up, with area-under-the-curve percentage changes of 0.98 (two-dimensional ultrasound), 0.97 (three-dimensional ultrasound), and 0.97 (two-dimensional CT). Discussion: Both diameter and volume changes can be used as markers for endoleak with excellent areas under the curve on receiver operating characteristic analyses. However, three-dimensional ultrasound volumes did not add any further diagnostic information. Conventional 2D diameter measurements were as accurate as volume changes as markers of endoleak. Conclusions: Type II endoleaks can safely be followed up using a simple diameter measurement on conventional ultrasound. abstract_id: PUBMED:10194490 Accurate assessment of abdominal aortic aneurysm with intravascular ultrasound scanning: validation with computed tomographic angiography. Purpose: The purpose of this study was to assess the accuracy of intravascular ultrasound (IVUS) parameters of abdominal aortic aneurysm, used for endovascular grafting, in comparison with computed tomographic angiography (CTA). Methods: This study was designed as a descriptive study. Between March 1997 and March 1998, 16 patients with abdominal aortic aneurysms were studied with angiography, IVUS (12.5 MHz), and CTA. The length of the aneurysm and the length and lumen diameter of the proximal and distal neck obtained with IVUS were compared with the data obtained with CTA. The measurements with IVUS were repeated by a second observer to assess the reproducibility. Tomographic IVUS images were reconstructed into a longitudinal format. Results: IVUS results identified 31 of 32 renal arteries and four of five accessory renal arteries. A comparison of the length measurements of the aneurysm and the proximal and distal neck obtained with IVUS and CTA revealed a correlation of 0.99 (P < .001), with a coefficient of variation of 9%. IVUS results tended to underestimate the length as compared with the CTA results (0.48 +/- 0.52 cm; P < .001). A comparison of the lumen diameter measurements of the proximal and distal neck derived from IVUS and CTA showed a correlation of 0.93 (P < .001), with a coefficient of variation of 9%. IVUS results tended to underestimate aneurysm neck diameter as compared with CTA results (0.68 +/- 1.76 mm; P = .006). Interobserver agreement of IVUS length and diameter measurements showed a good correlation (r = 1.0; P < .001), with coefficients of variation of 3% and 2%, respectively, and no significant differences (0.0 +/- 0.16 cm and 0.06 +/- 0.36 mm, respectively). The longitudinal IVUS images displayed the important vascular structures and improved the spatial insight into aneurysmal anatomy. Conclusion: Intravascular ultrasound scanning results provided accurate and reproducible measurements of abdominal aortic aneurysm. The longitudinal reconstruction of IVUS images provided additional knowledge on the anatomy of the aneurysm and its proximal and distal neck.
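The Bland-Altman limits of agreement quoted for PUBMED:34606959 above can be checked against the reported summary statistics; a worked verification, assuming the quoted ± 0.36 cm is the standard deviation of the paired differences:

\[
\mathrm{LOA} = \bar{d} \pm 1.96\, s_d = -0.13 \pm 1.96 \times 0.36 \approx (-0.84,\ +0.58)\ \text{cm},
\]

which matches the reported lower and upper limits of -0.8 and +0.5 cm after rounding.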
abstract_id: PUBMED:33646456 The reproducibility of measuring maximum abdominal aortic aneurysm diameter from ultrasound images. Background: Accurate repeat assessment of the diameter of an abdominal aortic aneurysm (AAA) is important. This study investigated the reproducibility of different methods of measuring AAA diameter from ultrasound images. Methods: Fifty AAA patients were assessed by ultrasound. Maximum AAA diameter was measured independently by three trained observers on two separate occasions using a standardised protocol. Five diameters were measured from each scan, three in the anterior-posterior (AP) and two in the transverse (TV) plane, including inner-to-inner (ITI), outer-to-outer (OTO) and leading edge-to-leading edge (LETLE). Intra- and inter-observer reproducibility were reported as reproducibility coefficients. Statistical comparison of methods was performed using linear mixed effects models. Results: Intra-observer reproducibility coefficients (AP LETLE 2.2 mm; AP ITI 2.4 mm; AP OTO 2.6 mm) were smaller than inter-observer reproducibility coefficients (AP LETLE 4.6 mm; AP ITI 4.5 mm; AP OTO 4.8 mm). There was no statistically significant difference in intra-observer reproducibility of the three types of measurements performed in the AP plane. Measurements obtained in the TV plane had statistically significantly worse intra-observer reproducibility than those performed in the AP plane. Conclusions: This study suggests that the comparison of maximum AAA diameter between repeat images is most reproducibly performed by a single trained observer measuring diameters in the AP plane. Answer: Yes, there is a difference between abdominal aortic aneurysm (AAA) diameter measurements obtained with ultrasound (US) and computed tomography (CT). Studies have shown that maximal AAA diameter measured with CT is significantly and consistently larger than the diameter measured with US. For instance, one study found that CT measurements were greater than US measurements in 95% of cases, with a mean CT(max) of 5.69 cm significantly larger than the mean US(max) of 4.74 cm (P < .001) (PUBMED:12947257). The correlation coefficient between CT(max) and US(max) was 0.705, but the difference between the two was less than 1.0 cm in only 51% of cases. The limits of agreement (LOA) between CT(max) and US(max) (-0.45 to 2.36 cm) exceeded the limits of clinical acceptability (-0.5 to 0.5 cm), indicating poor agreement between the two methods (PUBMED:12947257).
Another study reported that CT-based measurements of aneurysm size tend to be larger than US-AP measurements, with CT-PCLF diameters consistently larger than CT-PME as well as CT-AP measurements (PUBMED:19631858).
Instruction: Incidental pancreatic cystic lesions: is there a relationship with the development of pancreatic adenocarcinoma and all-cause mortality? Abstracts: abstract_id: PUBMED:25117591 Incidental pancreatic cystic lesions: is there a relationship with the development of pancreatic adenocarcinoma and all-cause mortality? Purpose: To establish the effect of incidental pancreatic cysts found by using computed tomographic (CT) and magnetic resonance (MR) imaging on the incidence of pancreatic ductal adenocarcinoma and overall mortality in patients from an inner-city urban U.S. tertiary care medical center. Materials And Methods: Institutional review board granted approval for the study and waived the informed consent requirement. The study population comprised cyst and no-cyst cohorts drawn from all adults who underwent abdominal CT and/or MR November 1, 2001, to November 1, 2011. Cyst cohort included patients whose CT or MR imaging showed incidental pancreatic cysts; no-cyst cohort was three-to-one frequency matched by age decade, imaging modality, and year of initial study from the pool without reported incidental pancreatic cysts. Patients with pancreatic cancer diagnosed within 5 years before initial CT or MR were excluded. Demographics, study location (outpatient, inpatient, or emergency department), dates of pancreatic adenocarcinoma and death, and modified Charlson scores within 3 months before initial CT or MR examination were extracted from the hospital database. Cox hazard models were constructed; incident pancreatic adenocarcinoma and mortality were outcome events. Adenocarcinomas diagnosed 6 months or longer after initial CT or MR examination were considered incident. Results: There were 2034 patients in cyst cohort (1326 women [65.2%]) and 6018 in no-cyst cohort (3,563 [59.2%] women); respective mean ages were 69.9 years ± 15.1(standard deviation) and 69.3 years ± 15.2, respectively (P = .129). The relationship between mortality and incidental pancreatic cysts varied by age: hazard ratios were 1.40 (95% confidence interval [ CI confidence interval ]: 1.13, 1.73) for patients younger than 65 years and 0.97 (95% CI confidence interval : 0.88, 1.07), adjusted for sex, race, imaging modality, study location, and modified Charlson scores. Incidental pancreatic cysts had a hazard ratio of 3.0 (95% CI confidence interval : 1.32, 6.89) for adenocarcinoma, adjusted for age, sex, and race. Conclusion: Incidental pancreatic cysts found by using CT or MR imaging are associated with increased mortality for patients younger than 65 years and an overall increased risk of pancreatic adenocarcinoma. abstract_id: PUBMED:29191271 Incidental Intraductal Papillary Mucinous Neoplasm, Cystic or Premalignant Lesions of the Pancreas: The Case for Aggressive Management. Incidental cystic intrapancreatic lesions are daily findings in abdominal radiology. The discovery of incidental pancreatic lesions is increasingly common with technologic diagnostic advancements. This article provides a perspective and guideline on the clinical management of incidental intraductal papillary mucinous neoplasms and cystic or premalignant lesions of the pancreas. abstract_id: PUBMED:20850905 Incidence and characteristics of pancreatic cystic neoplasms Introduction: Cystic neoplasms (CN) of the pancreas represent 10% of cystic lesions and 1% of pancreatic tumors. 
Mucinous cystic neoplasm (MCN), serous cystadenoma (SC) and intraductal papillary mucinous neoplasm (IPMN) are cystic neoplasms and represent more than 90% of these types of lesion. Few series have been published on these lesions, especially in Spain. Aim: To evaluate the incidence, characteristics and survival of patients with cystic neoplasms attended in our hospital in the last 12 years. Patients And Method: A retrospective analysis was carried out in all patients diagnosed with CN between January 1997 and December 2008. Diagnosis was made by abdominal computed tomography, pancreatic-magnetic resonance imaging and/or endoscopic ultrasonography. Sex, age, year of diagnosis, symptoms, tumoral location and size, type of surgery, pathology, and survival were evaluated. Results: A total of 117 patients were analyzed. The mean age was 63±14 years and 56% were women. Eighty-eight patients had IPMN, 21 had SC and eight had MCN. Fifty-six per cent were diagnosed in the last 4 years, 42.7% were diagnosed as an incidental finding and 19% had a history of acute pancreatitis. The most frequent location was the pancreatic head (53%). The mean imaging size was 32mm. Surgical resection was performed in 69.2% of the patients. Twenty-three percent of the tumors were malignant, 30% were carcinoma in situ and 70% were invasive. Thirteen percent of the patients died; of these 93.3% had invasive carcinoma. Five-year survival was 94.7% in SC, 76% in IPMN and 60% in MCN. Conclusions: CN were mainly identified as incidental findings, although acute pancreatitis is another possible cause. The most frequent tumor in our environment is IPMN. Surgical treatment of IPMN and MCN, at the right moment, may be useful to prevent the development of pancreatic carcinoma. abstract_id: PUBMED:37245934 Pancreatic Cystic Lesions: Next Generation of Radiologic Assessment. Pancreatic cystic lesions are frequently identified on cross-sectional imaging. As many of these are presumed branch-duct intraductal papillary mucinous neoplasms, these lesions generate much anxiety for the patients and clinicians, often necessitating long-term follow-up imaging and even unnecessary surgical resections. However, the incidence of pancreatic cancer is overall low for patients with incidental pancreatic cystic lesions. Radiomics and deep learning are advanced tools of imaging analysis that have attracted much attention in addressing this unmet need, however, current publications on this topic show limited success and large-scale research is needed. abstract_id: PUBMED:28831506 Cystic pancreatic tumors: diagnostics and new biomarkers Mortality due to pancreatic ductal adenocarcinoma (PDAC) will increase in the near future. The only curative treatment for PDAC is radical resection; however, even small carcinomas exhibit micrometastases leading to early relapse. Accordingly, detection of premalignant precursor lesions is important. In essence, PDAC develops from three precursor lesions: pancreatic intraepithelial lesions (PanIN), intraductal papillary-mucinous neoplasia (IPMN) and mucinous-cystic neoplasia (MCN). Together with serous cystic neoplasia (SCN) and solid pseudopapillary neoplasia (SPN), these cystic lesions constitute the most common cystic neoplasms in the pancreas. In the case of IPMN, main and branch duct IPMN have to be differentiated because of a markedly different malignancy potential. 
While main duct IPMN and MCN have a high malignancy transformation rate, branch duct IPMNs are more variable with respect to malignant transformation. This shows that differential diagnosis of cystic lesions is important; however, this is often very difficult to accomplish using conventional imaging. Novel biomarkers and diagnostic tools based on the molecular differences of cystic pancreatic lesions could be helpful to differentiate these lesions and facilitate early diagnosis. The aim is to distinguish premalignant cysts from strictly benign cystic lesions and to detect malignant transformation in a timely manner. This article provides an overview of the molecular characteristics of cystic pancreatic lesions as a basis for improved diagnostics and the development of new biomarkers. abstract_id: PUBMED:30931195 A Rare Case of Pancreatic Tail Hydatid Cyst with Incidental Adenocarcinoma of the Pancreatic Body. Pancreatic hydatid cyst is a rare disease found mostly in endemic regions. Having no specific clinical signs, it may present with tension-related abdominal pain, dyspepsia, a palpable mass, and signs of external pressure on the surrounding organs in accordance with localization of the lesion. Pancreatic carcinoma, a neoplastic pathology with poor prognosis, can have various clinical presentations depending on the localization of the tumor, which sometimes has cystic components. Due to the distinct nature of these pathologies, the surgical approach can be fairly different. In this report, we present the case of a 70-year-old patient who had an isolated hydatid cyst in the tail of the pancreas with an incidental pancreatic carcinoma in the corpus of the pancreas. The patient was treated with a subtotal pancreatectomy and had no problems in the postoperative period, leading to uncomplicated discharge. abstract_id: PUBMED:37523124 Acinar cystic transformation in the pancreatic tail. Pancreatic acinar cystic transformation (ACT) is a rare non-neoplastic cystic lesion that is predominantly located at the pancreatic head in females. Preoperative definitive diagnosis of ACT remains challenging despite advances in radiologic imaging methods. A 25-year-old male patient presented with abdominal discomfort and a 50-mm cystic lesion in the pancreatic tail. The patient underwent laparoscopic distal pancreatectomy because branch duct intraductal papillary mucinous neoplasm could not be ruled out and abdominal symptoms were present. The resected specimen revealed a collection of small and large cysts lined by a single cuboidal epithelium layer with scattered pancreatic tissue exhibiting fibrosis in the septal wall. The cystic lesion was epithelial, trypsin-positive, B cell lymphoma 10-positive, cytokeratin 19-positive, mucin 1-positive, and MUC6-negative, with a differentiated lobular central conduit leading to an adeno-cystic cell pattern, thereby supporting the ACT diagnosis. Distinguishing ACT from other pancreatic cystic tumors remains a diagnostic challenge despite improvements in radiologic imaging methods. Although ACT is a non-neoplastic lesion and cases of malignant transformation have never been reported to date, surgical resection may be justified when other cystic neoplasms cannot be excluded because of its heterogeneous nature.
Intraductal papillary mucinous neoplasms (IPMNs) and mucinous cystic neoplasms (MCNs) are considered "pancreatic cystic neoplasms (PCNs)" and show a varying risk of developing into pancreatic ductal adenocarcinoma (PDAC). These lesions display different molecular characteristics, mutations, and clinical manifestations. A lack of detailed understanding of PCN subtype characteristics and their molecular mechanisms limits the development of efficient diagnostic tools and therapeutic strategies for these lesions. Proper in vivo mouse models that mimic human PCNs are also needed to study the molecular mechanisms and for therapeutic testing. A comprehensive understanding of the current status of PCN biology, mechanisms, current diagnostic methods, and therapies will help in the early detection and proper management of patients with these lesions and PDAC. This review describes all these aspects of PCNs, specifically IPMNs, together with future perspectives. abstract_id: PUBMED:29899320 Pancreatic Cystic Lesions: Pathogenesis and Malignant Potential. Pancreatic cancer remains one of the most lethal cancers despite extensive research. Further understanding of precursor lesions may enhance the ability to treat and prevent pancreatic cancer. Pancreatic cystic lesions (PCLs) with malignant potential include: mucinous PCLs (intraductal papillary mucinous neoplasms and mucinous cystic neoplasms), solid pseudopapillary tumors and cystic neuroendocrine tumors. This review summarizes the latest literature describing what is known about the pathogenesis and malignant potential of these PCLs, including unique epidemiological, radiological, histological, genetic and molecular characteristics. abstract_id: PUBMED:21713322 Symptomatic and incidental thromboembolism are both associated with mortality in pancreatic cancer. Pancreatic cancer is known to be associated with VTE, but contemporary rates of incidental and symptomatic VTE events and their association with mortality are incompletely understood. We conducted a retrospective cohort study of consecutive pancreatic adenocarcinoma patients at the University of Rochester from 2006-2009. Data were analysed using a Cox model with time-dependent covariates. A total of 1,151 radiologic exams of 135 patients were included. Forty-seven patients (34.8%) experienced VTE, including 12 pulmonary emboli (PE), 28 deep-vein thromboses (DVTs) and 47 visceral vein events. Incidental events comprised 33.3% of PEs, 21.4% of DVTs and 100% of visceral VTE. Median (95% CI) conditional survival beyond three months was 233 (162-322) more days for those without VTE, which was significantly greater than 12 (3-60) days for those with DVT as first event (p < 0.0001) and 87 (14-322) days with visceral first events (p = 0.022). In multivariate analysis, DVT (HR 25, 95% CI 10-63, p < 0.0001), PE (HR 8.9, 95% CI 2.5-31.7, p = 0.007) and incidental visceral events (HR 2.6, 95% CI 1.6-4.2, p = 0.0001) were all associated with mortality, though anticoagulants reduced these risks by 70% (26-88%, p = 0.009). In conclusion, VTE occurs in over one-third of contemporary pancreatic cancer patients and, whether symptomatic or incidental, is strongly associated with worsened mortality. The role of anticoagulation in treating incidental or visceral VTE warrants further study. Answer: Yes, there is a relationship between incidental pancreatic cystic lesions and the development of pancreatic adenocarcinoma as well as all-cause mortality.
A study found that incidental pancreatic cysts identified through CT or MR imaging are associated with an increased risk of pancreatic adenocarcinoma, with a hazard ratio of 3.0, adjusted for age, sex, and race (PUBMED:25117591). Additionally, the same study reported increased mortality for patients younger than 65 years who had incidental pancreatic cysts (PUBMED:25117591). Other research supports the notion that incidental cystic intrapancreatic lesions, such as intraductal papillary mucinous neoplasms (IPMNs) and mucinous cystic neoplasms (MCNs), have a varying risk of developing into pancreatic ductal adenocarcinoma (PDAC) (PUBMED:36535512). It is also noted that the proper management of incidental intraductal papillary mucinous neoplasms and cystic or premalignant lesions of the pancreas is crucial (PUBMED:29191271). Furthermore, cystic neoplasms of the pancreas, which are often identified as incidental findings, can be precursors to pancreatic carcinoma, and surgical treatment at the right moment may be useful to prevent the development of pancreatic carcinoma (PUBMED:20850905). The importance of distinguishing premalignant cysts from strictly benign cystic lesions and timely detection of malignant transformation is emphasized, as this can facilitate early diagnosis and improve patient outcomes (PUBMED:28831506). In summary, incidental pancreatic cystic lesions are associated with an increased risk of developing pancreatic adenocarcinoma and can affect all-cause mortality, particularly in younger patients. The management of these lesions is critical to potentially prevent the progression to pancreatic carcinoma.
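The hazard ratios quoted in this answer come from Cox proportional hazards models. Below is a minimal sketch of how such an adjusted hazard ratio is produced, assuming Python with the open-source lifelines package; the bundled Rossi dataset merely stands in for a real cohort table, and in the cited study the exposure would be cyst status with age, sex, race, and modified Charlson score as covariates.

```python
# Minimal sketch of a Cox proportional hazards fit (assumes the lifelines package).
# The Rossi recidivism dataset ships with lifelines and stands in for a cohort table;
# in the cited study, the exposure would be cyst status and the covariates would be
# age, sex, race, and modified Charlson score.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # columns: week (duration), arrest (event flag), plus covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()  # the exp(coef) column is the adjusted hazard ratio per covariate
```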
Instruction: Can Increasing the Manufacturer's Recommended Shortest Curing Time of High-intensity Light-emitting Diodes Adequately Cure Sealants? Abstracts: abstract_id: PUBMED:26314592 Can Increasing the Manufacturer's Recommended Shortest Curing Time of High-intensity Light-emitting Diodes Adequately Cure Sealants? Purpose: To investigate sealant depth of cure after increasing the curing times of high-intensity light-emitting diode units (LEDs). Methods: Three sealants (opaque-unfilled, opaque-filled, and clear-filled) were light cured in a covered-slot mold with: (a) three LEDs (VALO, SmartLite, Fusion) for six to 15 seconds; and (b) a quartz-tungsten halogen (QTH) light for 40 seconds as a control (N=10). Twenty-four hours after light curing, microhardness was measured at the sealant surface and through the depth at 0.5 mm increments. Results were analyzed via analysis of variance followed by the Student-Newman-Keuls test (significance level 0.05). Results: The opaque-filled and clear-filled sealants cured with VALO for six or nine seconds had hardness values that were statistically equivalent to or better than the QTH to a depth of 1.5 mm. Using Fusion for 10 seconds (exposure limit) did not adequately cure the three sealants beyond one mm. SmartLite at 15 seconds (maximum exposure period without overheating) did not adequately cure the sealants beyond 0.5 mm. Conclusions: Among the tested high-intensity LEDs, only VALO at double or triple the manufacturers' shortest curing time (six or nine seconds) provided adequate curing of opaque-filled and clear-filled sealants at 1.5 mm depth compared to the 40-second QTH light. abstract_id: PUBMED:38159193 Evaluation of the effect of high-intensity light-curing device on micro-leakage of pits and fissure sealants. Reducing treatment time is one of the most important trends in modern dentistry. This study aimed to compare the micro-leakage around resin sealants when using high- and conventional-intensity light-curing systems. The study sample consisted of 30 extracted human maxillary premolar teeth that were divided into two equal groups according to the light-curing system used: Group 1, High-Intensity Light-Curing System and Group 2, Conventional Light-Curing System. Light curing was performed with a Woodpecker I-LED device at two intensities (high and conventional). All teeth were subjected to 500 cycles of thermocycling. Then, a methylene blue dye microleakage test was performed, and the teeth were sectioned longitudinally and studied under a stereo microscope. The mean micro-leakage in the high-intensity group (1.33 ± 1.29) was less than in the conventional-intensity group (1.63 ± 1.29), without a statistically significant difference (p = 0.320). The high-intensity light-curing mode may be a good and acceptable alternative to the conventional-intensity mode in polymerization of pits and fissure sealants. abstract_id: PUBMED:24628863 Depth of cure of sealants polymerized with high-power light emitting diode curing lights. Objective: To determine whether recommended short curing times of three high-power light emitting diode (LED) curing lights are sufficient to polymerize sealant materials. Methods: Opaque-unfilled sealant (Delton LC Opaque), opaque-filled sealant (UltraSeal XT plus), and clear-filled sealant (FluroShield) were light cured in a covered slot-mold using the manufacturers' shortest recommended curing times with three high-power LED lights (3-s VALO, 5-s Fusion, 10-s Smartlite).
A 40-s cure with a quartz-tungsten halogen (QTH) light was used as control. Vickers hardness was measured 24 h after curing at the sealant surface and through the depth (0.5 mm increments) (N = 10). Results were analyzed with two-way ANOVA (pair-wise multiple comparisons, significance level 0.05). Results: The high-power LEDs did not cure the sealants as deep as the QTH. Delton LC Opaque showed the least depth of cure, as hardness values beyond a depth of 0.5 mm were not measurable regardless of the curing light. Even for UltraSeal XT plus, when surface hardness was about the same with all lights, hardness decreased more rapidly with depth for the LEDs. FluroShield showed the slowest decline in hardness through the depth for all lights. Conclusions: Manufacturers' recommendations for shortest possible curing time with high-power LEDs were not sufficient for adequate polymerization of the tested sealants. abstract_id: PUBMED:32727961 Effect of high-irradiance light curing on exposure times and pulpal temperature of adequately polymerized composite. This study investigated the effect of high-irradiance light curing on exposure time and pulpal temperature of adequately cured composite. Composite placed in a molar preparation was cured using high-irradiance light-curing units (Flashmax P3, Valo, S.P.E.C. 3 LED, Cybird XD) and tested for hardness occlusal-gingivally. The first group had exposure times set according to manufacturer settings (recommended), the second group to yield 80% of maximum hardness at the 2 mm depth (experimental), and the third group was set at 20 s (extended). The exposure time necessary to adequately polymerize the composite at 2 mm depth was 9 s for the Cybird XD and Valo and 12 s for the S.P.E.C. 3 LED and Flashmax P3. None of the high-irradiance light-curing units adequately polymerized the composite at the manufacturer-recommended minimum exposure times of 1-3 s. Exposure times necessary to adequately polymerize composite at 2 mm resulted in a maximum pulpal-temperature increase well below the temperature associated with possible pulpal necrosis. abstract_id: PUBMED:22041113 Effect of light curing methods on microleakage and microhardness of different resin sealants. Purpose: This study's purpose was to evaluate the effect of light curing methods on the microleakage and microhardness of sealants. Methods: The Elipar Free Light 2 light emitting diode (LED) with 10- and 20-second curing times, and the Elipar 2500 halogen light with a 20-second curing time were compared. Four different sealants were used: (1) Delton Clear; (2) Delton Opaque; (3) UltraSeal XT Clear; and (4) UltraSeal XT Opaque. Specimens were fabricated in a silicone mold (2-mm thick) and cured. Knoop hardness was measured at the bottom and top surfaces. For the microleakage evaluation, 120 human molars were divided into 12 groups and sealed with the sealants and curing methods, as stated previously. The teeth were thermocycled and immersed in 2% methylene blue for 24 hours. Each tooth was sectioned and examined for dye penetration. Results: There were no statistically significant differences in the microleakage of sealants polymerized by either the halogen or LED curing methods. The microhardness of sealants varied according to the type of material and curing method. Conclusions: A 10-second polymerization time with light emitting diodes was not sufficient to cure the 2-mm-thick opaque or high-filler-loaded sealants. Decreasing the curing time, however, had no effect on the microleakage of the sealants.
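Several of the abstracts above judge depth of cure by comparing microhardness at the bottom of a specimen with microhardness at the top. A bottom-to-top hardness ratio of roughly 0.80 is a commonly used adequacy threshold in the dental materials literature; the sketch below shows that calculation with hypothetical Knoop readings, not data from these studies.

```python
# Bottom-to-top (B/T) microhardness ratio as a depth-of-cure criterion.
# The Knoop readings below are hypothetical, not data from the abstracts;
# B/T >= 0.80 is a commonly used rule of thumb for adequate polymerization.
def bt_ratio(bottom_khn: float, top_khn: float) -> float:
    """Return the bottom/top Knoop hardness ratio of a specimen."""
    return bottom_khn / top_khn

readings = {"QTH, 40 s": (38.5, 44.0), "LED, 10 s": (22.1, 43.2)}
for unit, (bottom, top) in readings.items():
    r = bt_ratio(bottom, top)
    verdict = "adequate" if r >= 0.80 else "inadequate"
    print(f"{unit}: B/T = {r:.2f} -> {verdict} cure at the bottom surface")
```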
abstract_id: PUBMED:19255178 The depth of cure of clear versus opaque sealants as influenced by curing regimens. Background: The authors conducted a study to test the hypothesis that light-curing regimens affect depth of cure of clear versus opaque sealants. Methods: The authors light-cured samples of one clear and two opaque sealants at 20 seconds, 0 millimeters; 40 seconds, 0 mm; and 40 seconds, 2.2 mm (n = 5 each). They assessed the depth of cure with Knoop hardness at 0.5-mm increments five minutes and one hour after curing. The authors used analysis of variance. Results: Curing regimens and sealant types affected the depth of cure. The clear sealant maintained a greater hardness than did the opaque sealants through a depth of 3 mm (P < .001). A 20-second duration reduced the depth of cure for all sealants (P < .001). The distance from the light source did not affect the cure depth of the clear sealant (P = .34), but it reduced the cure depth of the opaque sealants (P < .05). Sealant hardness increased significantly one hour after light curing (P < .001). Conclusions: A clear sealant cured deeper than did opaque sealants. Curing duration is crucial to achieve an adequate depth of cure. A 20-second duration may not suffice. Light source distance affected the depth of cure for the opaque sealants, but not for the clear sealant with sufficient curing duration. Clinical Implications: The authors advocate a curing duration of longer than 20 seconds to ensure thorough polymerization at the interface between the sealant and tooth. Insufficient curing could contribute to failure of the sealants, especially the opaque sealants, under clinical conditions that restrict the light tip position. abstract_id: PUBMED:24157603 Effects of exposure time and exposure distance on the degree of cure in light-activated pit and fissure sealants. Objectives: The study aims to measure and compare the effect of different exposure times and exposure distances on the degree of cure (DC) of light-hardening, resin-based pit and fissure sealants. Methods: A representative selection of 13 commercial sealant brands was chosen. DC of each material (n = 6) was measured in real time by Fourier transform infrared spectroscopy (FTIR) at three clinically relevant exposure times (10, 20, 40 s) and two fixed exposure distances (4 mm and 7 mm) between sample and light source. Data were analyzed by a multivariate analysis and the partial eta-squared statistic. Results: The factors "material", "exposure time" and "exposure distance" had a significant influence on the DC across all materials (ηp² = 0.927, 0.774 and 0.266, respectively), with "material" and "exposure time" showing the strongest effect (significance level α ≤ 0.05). In general, an increased exposure time and reduced exposure distance between sample and light source led to increased DC for all the materials. Conclusions: Degree of cure is influenced significantly by the brand of sealant and by exposure time. In some cases it is found that DC is also affected significantly by the exposure distance. Clinical Significance: On the basis of this study, an exposure time of at least 20 s and a maximum exposure distance of 4 mm between curing unit and material surface is recommended. abstract_id: PUBMED:16382600 Curing of pit & fissure sealants using Light Emitting Diode curing units. Light Emitting Diode (LED) curing units are attractive to clinicians, because most are cordless and should create less heat within tooth structure.
However, questions about polymerization efficacy have surrounded this technology. This research evaluated the adequacy of the depth of cure of pit & fissure sealants provided by LED curing units. Optilux (OP) and Elipar Highlight (HL) high-intensity halogen and Astralis 5 (A5) conventional halogen lights were used for comparison. The Light Emitting Diode (LED) curing units were Allegro (AL), LE Demetron I (DM), FreeLight (FL), UltraLume 2 (UL), UltraLume 5 (UL5) and VersaLux (VX). Sealants used in the study were UltraSeal XT plus Clear (Uclr), Opaque (Uopq) and Teethmate F-1 Natural (Kclr) and Opaque (Kopq). Specimens were fabricated in a brass mold (2 mm thick × 6 mm diameter) and placed between two glass slides (n = 5). Each specimen was cured from the top surface only. One hour after curing, four Knoop hardness readings were made for each top and bottom surface at least 1 mm from the edge. The bottom-to-top (B/T) KHN ratio was calculated. Groups were fabricated with 20- and 40-second exposure times. In addition, a group using a 1-mm-thick mold was fabricated using an exposure time of 20 seconds. Differences between lights for each material at each testing condition were determined using one-way ANOVA and the Student-Newman-Keuls post-hoc test (alpha = 0.05). There was no statistical difference between light-curing units for Uclr cured in a 1-mm thickness for 20 seconds or cured in a 2-mm thickness for 40 seconds. All other materials and conditions showed differences between light-curing units. Both opaque materials showed significant variations in B/T KHN ratios dependent upon the light-curing unit. abstract_id: PUBMED:27896210 Comparison of the bonding strengths of second- and third-generation light-emitting diode light-curing units. Objective: With the introduction of third-generation light-emitting diodes (LEDs) in dental practice, it is necessary to compare their bracket-bonding effects, safety, and efficacy with those of the second-generation units. Methods: In this study, 80 extracted human premolars were randomly divided into eight groups of 10 samples each. Metal or polycrystalline ceramic brackets were bonded on the teeth using second- or third-generation LED light-curing units (LCUs), according to the manufacturers' instructions. The shear bond strengths were measured using the universal testing machine, and the adhesive remnant index (ARI) was scored by assessing the residual resin on the surfaces of debonded teeth using a scanning electron microscope. In addition, curing times were also measured. Results: The shear bond strengths in all experimental groups were higher than the acceptable clinical shear bond strengths, regardless of the curing unit used. In both LED LCU groups, all ceramic bracket groups showed significantly higher shear bond strengths than did the metal bracket groups, except the plasma emulation group, which showed no significant difference. When comparing units within the same bracket type, no differences in shear bond strength were observed between the second- and third-generation unit groups. Additionally, no significant differences were observed among the groups for the ARI. Conclusions: The bracket-bonding effects and ARIs of second- and third-generation LED LCUs showed few differences, and most were without statistical significance; however, the curing time was shorter for the second-generation unit.
Background: Surface sealants have been successfully used in the prevention of erosive tooth wear. However, when multiple tooth surfaces should be sealed, the light-curing procedure is very time-consuming. Therefore, the aim of this study was to investigate whether reduced light-curing time (while maintaining similar energy density) has an influence on resin-based surface sealant cytotoxicity. Methods: Bovine dentine discs were treated as follows: group 1: untreated; groups 2-5: Seal&Protect; and groups 6-9: experimental sealer. Groups 2 and 6 were light-cured (VALO LED light-curing device) for 40 s (1000 mW/cm2), groups 3 and 7 for 10 s (1000 mW/cm2), groups 4 and 8 for 7 s (1400 mW/cm2) and groups 5 and 9 for 3 s (3200 mW/cm2). Later, materials were extracted in culture medium for 24 h, and released lactate dehydrogenase (LDH) activity as a measure of cytotoxicity was determined photometrically after cells (dental pulp cells and gingival fibroblasts) were exposed to the extracts for 24 h. Three independent experiments, for both sample preparation and cytotoxicity testing, were performed. Results: Overall, the lowest cytotoxicity was observed for the unsealed control group. No significant influence of light-curing settings on the cytotoxicity was observed (p = 0.537 and 0.838 for pulp cells and gingival fibroblasts, respectively). No significant difference in the cytotoxicity of the two sealants was observed after light-curing with the same light-curing settings (group 2 vs. 6, 3 vs. 7, 4 vs. 8 and 5 vs. 9: p > 0.05, respectively). Conclusions: Shortening the light-curing time, while maintaining constant energy density, resulted in no higher cytotoxicity of the investigated sealants. Answer: Yes, increasing the manufacturer's recommended shortest curing time of high-intensity light-emitting diodes (LEDs) can adequately cure sealants. A study found that when using high-intensity LEDs, only the VALO unit at double or triple the manufacturer's shortest recommended curing time (six or nine seconds) provided adequate curing of opaque-filled and clear-filled sealants at a depth of 1.5 mm compared to a 40-second cure with a quartz-tungsten halogen (QTH) light (PUBMED:26314592). This suggests that with appropriate adjustments to the curing time, high-intensity LEDs can be effective for curing dental sealants.
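The cytotoxicity study cited above (PUBMED:24885810) shortens curing time while holding radiant exposure (energy density) roughly constant; energy density is simply irradiance multiplied by time. A quick back-of-envelope check of the reported settings, using only the numbers given in the abstract:

```python
# Radiant exposure (J/cm^2) = irradiance (W/cm^2) x time (s).
# Settings are the four light-curing protocols reported in the abstract above.
settings = [("40 s control", 1000, 40), ("10 s", 1000, 10), ("7 s", 1400, 7), ("3 s", 3200, 3)]
for label, irradiance_mw, seconds in settings:
    energy = irradiance_mw / 1000 * seconds  # mW/cm^2 -> W/cm^2, times seconds = J/cm^2
    print(f"{label}: {irradiance_mw} mW/cm^2 x {seconds} s = {energy:.1f} J/cm^2")
# The three shortened protocols each deliver roughly 10 J/cm^2, versus 40 J/cm^2 for
# the 40 s control; the shortened groups are the "similar energy density" comparison.
```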
Instruction: Are depression and poor sexual health neglected comorbidities? Abstracts: abstract_id: PUBMED:27009148 Are depression and poor sexual health neglected comorbidities? Evidence from a population sample. Objective: To examine associations between sexual behaviour, sexual function and sexual health service use of individuals with depression in the British general population, to inform primary care and specialist services. Setting: British general population. Participants: 15,162 men and women aged 16-74 years were interviewed for the third National Survey of Sexual Attitudes and Lifestyles (Natsal-3), undertaken in 2010-2012. Using age-adjusted ORs (aAOR), relative to a comparator group reporting no treatment or symptoms, we compared the sexual health of those reporting treatment for depression in the past year. Outcome Measures: Sexual risk behaviour, sexual function, sexual satisfaction and sexual health service use. Results: 1331 participants reported treatment for depression (5.2% men; 11.8% women). Relative to the comparator group, treatment for depression was associated with reporting 2 or more sexual partners without condoms (men aAOR 2.07 (95% CI 1.38 to 3.10); women 2.22 (1.68 to 2.92)), and concurrent partnerships (men 1.80 (1.18 to 2.76); women 2.06 (1.48 to 2.88)), in the past year. Those reporting depression treatment were more likely to be dissatisfied with their sex lives (men 2.32 (1.74 to 3.11); women 2.30 (1.89 to 2.79)), and to score in the lowest quintile on the Natsal sexual function measure. They were also more likely to report a recent chlamydia test (men 1.92 (1.15 to 3.20); women 1.27 (1.01 to 1.60)), and to have sought help regarding their sex life from a healthcare professional (men 2.92 (1.98 to 4.30); women 2.36 (1.83 to 3.04)), most commonly from a family doctor. Women only were more likely to report attending a sexual health clinic (1.91 (1.42 to 2.58)) and use of emergency contraception (1.98 (1.23 to 3.19)). Associations were broadly similar for individuals with depressive symptoms but not reporting treatment. Conclusions: Depression, measured by reported treatment, was strongly associated with sexual risk behaviours, reduced sexual function and increased use of sexual health services, with many people reporting seeking help from a family doctor. The sexual health of depressed people needs consideration in primary care, and mental health assessment might benefit people attending sexual health services. abstract_id: PUBMED:33354005 Depression, Sexual Dysfunction, and Medical Comorbidities in Young Adults Having Nicotine Dependence. Background: Nicotine dependence, depression, diabetes mellitus, hypertension, and hypothyroidism are risk factors for sexual dysfunction. Aims And Objectives: The present study aims to find the prevalence of sexual dysfunction and the various sexual response cycle domains in individuals with nicotine dependence with and without comorbidities. Materials And Methods: A total of 52 individuals attending the tobacco cessation clinic were included in the study. To assess the primary outcome, the Fagerstrom Test for Nicotine Dependence, the Arizona Sexual Experiences Scale, and the Hamilton Depression Rating Scale-17 were administered after validation in the local vernacular. Results: In the sample, 32 (61.5%) were male and 20 (38.5%) were female.
Seventeen participants (32.7%) met the criteria for low nicotine dependence, 5 (9.6%) for low to moderate dependence, 11 (21.2%) for moderate dependence, and 19 (36.5%) for high nicotine dependence. Conclusions: Nicotine dependence is directly related to sexual dysfunction, and it affects various stages of the sexual response cycle. One-quarter of individuals with nicotine dependence also met the threshold criteria for depression. Interventions such as primary and primordial prevention, with awareness building and health education, may be cost-effective measures to prevent tobacco-related deaths. abstract_id: PUBMED:30178126 Association between comorbidities and female sexual dysfunction: findings from the third National Survey of Sexual Attitudes and Lifestyles (Natsal-3). Introduction And Hypothesis: Although medical comorbidities are widely recognized to be associated with erectile dysfunction, less research has been done on their association with female sexual dysfunction (FSD). The purpose of this study was to assess whether FSD is associated with comorbidities; we hypothesized that there is an association. Methods: This is a secondary analysis of the third National Survey of Sexual Attitudes and Lifestyles (Natsal-3), a prospective stratified probability sample of individuals aged 16-74. We assessed for association between sexual function scores and heart attack, heart disease, hypertension, stroke, diabetes, chronic lung disease, depression, other mental health condition, other neurologic conditions, and incontinence, as well as menopause and smoking status. Correlation between comorbidities and specific domains of sexual function was also assessed. Results: A total of 6777 women, with an average age of 35.4 (14.1), responded to the survey and reported sexual activity in the past year. There was an association between sexual function score and age, menopause, hysterectomy, heart disease, hypertension, diabetes, obesity, smoking, depression, other mental health condition, stroke, other neurological condition, and homosexual attraction (p < 0.05). On multivariate analysis, age, sexual attraction, smoking status, depression, and other mental health conditions remained significantly correlated with sexual function (p < 0.05). Comorbidities were found to be correlated with specific domains. Conclusions: Comorbidities were associated with FSD, and specific comorbidities were associated with dysfunction in specific domains. Urogynecologists and urologists must assess for comorbidities, as women presenting with sexual dysfunction may provide an opportunity for early diagnosis of life-threatening conditions.
A multivariate logistic regression analysis was used to quantify the magnitude of association between severity of depression and demographic characteristics. A p-value of <0.05 was considered statistically significant. Results: Among 7826 participants included, 426 (5.4%) were identified as a sexual minority. Moderately severe to severe depression was observed among 9.3% of sexual minorities, with women having higher rates (64.2%) than men. Similarly, sexual minorities were two times more likely to have moderately severe to severe depression, two and a half times more likely to see a mental health professional, and one and a half times more likely to have genital herpes and be a user of illicit drugs than heterosexuals. In addition, they were less likely to be married and more likely to have been born in the United States, be a U.S. citizen, and earn less than USD 25,000 (p < 0.05). Conclusions: Sexual minorities are affected by a range of social, structural, and behavioral issues impacting their health. Individuals with depression who are sexual minorities (especially females), illicit drug users, poor, or aged over 39 years may benefit from screening and early intervention efforts. abstract_id: PUBMED:29631956 Comparison of Correlated Comorbidities in Male and Female Sexual Dysfunction: Findings From the Third National Survey of Sexual Attitudes and Lifestyles (Natsal-3). Background: Many of the same mechanisms involved in the sexual arousal-response system in men exist in women and can be affected by underlying general medical conditions. Aim: To assess whether sexual function in men and women is correlated with similar comorbidities. Methods: This study was a secondary analysis of the 3rd National Survey of Sexual Attitudes and Lifestyles (Natsal-3), a prospective stratified probability sample of British individuals 16 to 74 years old interviewed from 2010 to 2012. We assessed for an association between sexual function and the following comorbidities: heart attack, heart disease, hypertension, stroke, diabetes, chronic lung disease, depression, other mental health conditions, other neurologic conditions, obesity, menopause, incontinence, smoking status, and age. Outcome: An association was found between multiple medical comorbidities and sexual dysfunction in women and in men. Results: 6,711 women and 4,872 men responded to the survey, were in a relationship, and reported sexual activity in the past year. The average age of the women was 35.4 ± 14.1 and that of the men was 36.8 ± 15.6. There was an association between sexual function and all variables assessed except for chronic lung disease, heart attack, and incontinence in women, compared with stroke, other neurologic conditions, incontinence, and smoking status in men. Comorbidities associated with erectile dysfunction included depression, diabetes, and other heart disease, whereas comorbidities associated with difficulty with lubrication included depression and other heart disease. Menopause was predictive of sexual dysfunction. Male sexual function appeared to decline after 45.5 years of age. Clinical Implications: Physicians should be aware of the correlation between medical comorbidities and sexual dysfunction in women and men and should ask patients about specific symptoms that might be associated with underlying medical conditions. Strengths And Limitations: Use of a stratified probability sample compared with a convenience sample results in capturing of associations representative of the population.
Inclusion of multiple comorbidities in the multivariate analysis allows us to understand the effects of several variables on sexual function. Although this study shows only an association, further research could determine whether there is a causal relation between comorbidities and sexual dysfunction in women. Conclusion: Multiple medical comorbidities are associated with sexual dysfunction not only in men but also in women. Polland A, Davis M, Zeymo A, et al. Comparison of Correlated Comorbidities in Male and Female Sexual Dysfunction: Findings From the Third National Survey of Sexual Attitudes and Lifestyles (Natsal-3). J Sex Med 2018;15:678-686. abstract_id: PUBMED:25317293 Depression, poor sleep, and sexual dysfunction in migraineurs women. Background: Migraine is a chronic disorder affecting women more than men. Sexual dysfunction is one of the complaints of women with migraine, but it is not given the attention it should be. The goal of this study was to determine sexual dysfunction in women with migraine, and possible effects of depression and sleep quality on their sexual function. Methods: One hundred married women with migraine were enrolled. All participants were asked to fill out valid and reliable Persian versions of the Pittsburgh Sleep Quality Index (PSQI), the Female Sexual Function Index (FSFI) and the Beck Depression Inventory (BDI). Results: Mean BDI, PSQI, and FSFI scores were 15.1 ± 9.1, 7.6 ± 4, and 21.6 ± 8.8 in all patients, respectively. Sexual dysfunction was found in 68%, and 79% were poor sleepers. Mean BDI and PSQI scores were significantly higher in women with sexual dysfunction (FSFI < 26.55). There was a significant negative correlation between BDI score and FSFI (r = -0.1, P = 0.001) as well as a significant positive correlation between BDI and PSQI (r = 0.42, P < 0.001). Multiple linear regression analysis showed that BDI and age were independent predictors of FSFI score. Conclusions: Physicians should consider sexual dysfunction in women with migraine along with depression and poor sleep in such cases. abstract_id: PUBMED:29404648 Psychotherapy of depressive disorders: Evidence in chronic depression and comorbidities. Background: Psychotherapy has been shown to be an effective treatment option for depressive disorders; however, its effectiveness varies depending on patient and therapist characteristics and the individual form of the depressive disorder. Objectives: The aim of this article is to present the current evidence for psychotherapeutic antidepressive treatments for patients with chronic and treatment-resistant depression as well as for patients with mental and somatic comorbidities. Material And Methods: During the revision of the currently valid German S3 and National Disease Management Guideline (NDMG) on unipolar depression published in 2015, a comprehensive and systematic evidence search including psychotherapy for specific patient groups was conducted. The results of this search along with a systematic update are summarized. Results: Psychotherapy has been shown to be effective in reducing depressive symptoms in patients suffering from chronic and treatment-resistant depression and in patients with mental and somatic comorbidities. The evidence is insufficient particularly for patients with mental comorbidities. Conclusion: Based on the current evidence and clinical expertise, the NDMG recommends psychotherapy alone or in combination with pharmacotherapy to treat most of these depressive patient groups.
Evidence gaps were identified, which highlight the need for further research. abstract_id: PUBMED:25530046 Poor mental health in severely obese patients is not explained by the presence of comorbidities. The prevalence of obesity, especially severe obesity where body mass index (BMI) exceeds 40 kg/m² and where the physical risks are greatest, is increasing. However, little is known about the impact of severe obesity on psychological well-being and self-rated health (SRH). We aimed to investigate this relationship in patients attending an Irish weight management clinic. SRH was measured with a single-item inventory (excellent = 1, poor = 5). Well-being was measured with the validated World Health Organization-Five Well-being Index (WHO-5), in which scores <13 indicate poor well-being. Previous studies of the Irish population have reported mean SRH = 2.56 (males) and 2.53 (females) and mean well-being = 16.96. One hundred eighty-two (46.8%) completed questionnaires were returned. The sample was representative of the clinic population, with a mean age of 47.1 years, a mean baseline BMI of 51.9 kg/m² and 64.3% females. Mean SRH was 3.73 in males and 3.30 in females; mean well-being was 10.27 in males and 10.52 in females. In the final multivariable models, number of medications, depression and obstructive sleep apnoea, WHO-5 and current BMI were significant predictors of SRH, and secondary-level education, social support and mindfulness scores were significant predictors of psychological well-being; number of medications was not a significant predictor of well-being. The results suggest that the poor psychological well-being seen is not explained by the presence of comorbidities and that social support and mindfulness may be important targets for improving psychological well-being. Improving psychological well-being in addition to weight loss and effective management of comorbidities may be important for improving SRH. abstract_id: PUBMED:21772908 Associates of poor physical and mental health-related quality of life in beta thalassemia-major/intermedia. Background: Using two logistic regression models, we determined the associates of poor physical and mental health-related quality of life (HRQoL) among beta thalassemia patients. Methods: In this cross-sectional study, conducted during 2006 and 2007 in the outpatient adult thalassemia clinic of the Blood Transfusion Organization, Tehran, Iran, the Short Form 36 (SF-36) was used for measuring HRQoL in 179 patients with beta thalassemia (major/intermedia). Scores above the third quartiles of the obtained PCS and MCS scores were taken as the cutoff points for good HRQoL; poor HRQoL was defined as scores below the first quartiles of the obtained PCS and MCS scores. Two distinct logistic regression models were used to derive associated variables including demographic, clinical, and psychological factors. Results: The regression models suggested that poor physical HRQoL was positively associated with somatic comorbidities (OR = 1.472, CI = 1.021-2.197, p = 0.048) and depression score (OR = 8.568, CI = 2.325-31.573, p = 0.001). The variables that were associated with poor mental HRQoL were anxiety score (OR = 9.409, CI = 1.022-89.194, p = 0.049) and depression score (OR = 20.813, CI = 4.320-100.266, p < 0.001). Conclusions: Depression is associated with both poor physical and poor mental HRQoL among patients with major/intermedia beta thalassemia; however, somatic comorbidities and anxiety are associated with poor physical and poor mental HRQoL, respectively.
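The thalassemia study above illustrates a common recipe: define a binary outcome from score quartiles, then report odds ratios from a logistic regression. A minimal sketch of that recipe with simulated data follows, assuming Python with numpy and statsmodels; the variable names are illustrative, not the study's.

```python
# Quartile-based outcome definition plus logistic regression (simulated data).
# Assumes numpy and statsmodels; names are illustrative, not the study's variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 179
depression = rng.normal(15.0, 9.0, n)                  # depression-scale scores
pcs = 50.0 - 0.6 * depression + rng.normal(0, 8, n)    # physical HRQoL scores

poor_pcs = (pcs < np.quantile(pcs, 0.25)).astype(int)  # lowest quartile = "poor"

X = sm.add_constant(depression)
fit = sm.Logit(poor_pcs, X).fit(disp=False)
print(f"OR per unit of depression score: {np.exp(fit.params[1]):.2f}")
```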
abstract_id: PUBMED:37861944 A cross-sectional study of the relationship between depression status, health care coverage, and sexual orientation. Health care coverage is an important factor in receipt of behavioral healthcare. This study uses data from the New York City Community Health Survey to examine how sexual minority status impacts the relationship between depression status and having health care coverage. Approximately 10% of the sample (n = 9571; 47% 45+ years old; 35% White Non-Hispanic; 7% sexual minority) reported probable depression and low health care coverage. Compared to heterosexual participants, a greater proportion of sexual minority participants had low health care coverage (17% vs. 9%) and probable depression (19% vs. 9%). Logistic regression examining the association between probable depression status and health care coverage showed that those with probable depression had odds of low health care coverage that were 3.08 times those of participants without probable depression; this relationship was not modified by sexual orientation. Continued research to understand the interplay of health care coverage, mental health, and sexual orientation is needed. Answer: Yes, depression and poor sexual health appear to be neglected comorbidities. Evidence from a population sample of the British general population indicates that treatment for depression was strongly associated with sexual risk behaviors, reduced sexual function, and increased use of sexual health services (PUBMED:27009148). This suggests that the sexual health of depressed people requires consideration in primary care, and mental health assessment might benefit people attending sexual health services. Furthermore, studies have found that nicotine dependence, which is associated with depression, is directly related to sexual dysfunction, affecting various stages of the sexual response cycle (PUBMED:33354005). Additionally, comorbidities such as heart disease, hypertension, diabetes, obesity, smoking, depression, and other mental health conditions have been associated with female sexual dysfunction (FSD), indicating that urogynecologists and urologists must assess for comorbidities in women presenting with sexual dysfunction (PUBMED:30178126). Sexual minorities are more likely to have moderately severe to severe depression and face a range of social, structural, and behavioral issues impacting their health, including sexual health (PUBMED:36278585). Moreover, multiple medical comorbidities are associated with sexual dysfunction not only in men but also in women, with depression being a common factor (PUBMED:29631956). In cases of migraine in women, sexual dysfunction has been found to be associated with depression and poor sleep, suggesting that these factors should be considered in the management of sexual dysfunction in women with migraine (PUBMED:25317293). Psychotherapy has also been shown to be effective in reducing depressive symptoms in patients suffering from chronic and treatment-resistant depression and in patients with mental and somatic comorbidities, highlighting the importance of addressing mental health in the treatment of comorbid conditions (PUBMED:29404648).
Lastly, severely obese patients have been found to have poor psychological well-being and self-rated health that are not explained by the presence of comorbidities, suggesting that improving psychological well-being may be important for improving overall health in this group (PUBMED:25530046). Depression has also been associated with both poor physical and mental health-related quality of life among patients with major/intermedia beta thalassemia, further emphasizing the need to address mental health in the presence of other health conditions (PUBMED:21772908).
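Several of the studies summarized in this answer report age-adjusted odds ratios (aAOR). The adjustment means only that age enters the same logistic model as the exposure; in standard textbook notation (not taken from any of the papers above):

\[ \log\frac{p}{1-p} = \beta_0 + \beta_1\,\mathrm{depression} + \beta_2\,\mathrm{age}, \qquad \mathrm{aAOR}_{\mathrm{depression}} = e^{\beta_1} \]

so an aAOR of 2.07, for example, means roughly doubled odds of the outcome at any given age.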
Instruction: Is [18F] fluorodeoxyglucose uptake by the primary tumor a prognostic factor in breast cancer? Abstracts: abstract_id: PUBMED:38396455 Correlation between Histopathological Prognostic Tumor Characteristics and [18F]FDG Uptake in Corresponding Metastases in Newly Diagnosed Metastatic Breast Cancer. Background: In metastatic breast cancer (MBC), [18F]fluorodeoxyglucose positron emission tomography/computed tomography ([18F]FDG-PET/CT) can be used for staging. We evaluated the correlation between BC histopathological characteristics and [18F]FDG uptake in corresponding metastases. Patients And Methods: Patients with non-rapidly progressive MBC of all subtypes prospectively underwent a baseline histological metastasis biopsy and [18F]FDG-PET. Biopsies were assessed for estrogen, progesterone, and human epidermal growth factor receptor 2 (ER, PR, HER2); Ki-67; and histological subtype. [18F]FDG uptake was expressed as maximum standardized uptake value (SUVmax) and results were expressed as geometric means. Results: Of 200 patients, 188 had evaluable metastasis biopsies, and 182 of these contained tumor. HER2 positivity and Ki-67 ≥ 20% were correlated with higher [18F]FDG uptake (estimated geometric mean SUVmax 10.0 and 8.8, respectively; p = 0.0064 and p = 0.014). [18F]FDG uptake was lowest in ER-positive/HER2-negative BC and highest in HER2-positive BC (geometric mean SUVmax 6.8 and 10.0, respectively; p = 0.0058). Although [18F]FDG uptake was lower in invasive lobular carcinoma (n = 31) than invasive carcinoma NST (n = 146) (estimated geometric mean SUVmax 5.8 versus 7.8; p = 0.014), the metastasis detection rate was similar. Conclusions: [18F]FDG-PET is a powerful tool to detect metastases, including invasive lobular carcinoma. Although BC histopathological characteristics are related to [18F]FDG uptake, [18F]FDG-PET and biopsy remain complementary in MBC staging (NCT01957332). abstract_id: PUBMED:19160365 Association between [18F]fluorodeoxyglucose uptake and prognostic parameters in breast cancer. Background: This study analysed the correlation between [(18)F]fluorodeoxyglucose (FDG) uptake assessed by positron emission tomography (PET) in breast tumours, and histopathological and immunohistochemical prognostic factors. Methods: FDG-PET was performed before surgery in 275 women with primary breast cancer. The standardized uptake value (SUV) was compared with histopathological findings after surgery. Results: A positive relationship was found between the SUV and tumour size (r = 0.46, P < 0.001), axillary lymph node status (P < 0.001), histological type (P < 0.001), histological grade (P < 0.001), oestrogen receptor status (P < 0.001), p53 (P < 0.001) and Ki-67 (P < 0.001) expression. Multivariable linear regression showed that tumour size, histological grade, Ki-67 expression, oestrogen receptor status and histological type were significantly related to the SUV. Conclusion: The SUV is a preoperative and non-invasive metabolic factor that relates to some prognostic factors in breast cancer. abstract_id: PUBMED:35165015 Increased cardiac uptake of (18F)-fluorodeoxyglucose incidentally detected on positron emission tomography after left breast irradiation: How to interpret? Radiation-induced heart disease is a complication that occurs years after thoracic irradiation. Recent studies suggest that radiation-induced heart disease could be an earlier complication and that subclinical cardiac injury can be detected.
The present case described an increased uptake of (18F)-fluorodeoxyglucose incidentally detected on positron emission tomography after left breast irradiation, with a slightly reversible perfusion defect on (99mTc)-tetrofosmin single photon emission computed tomography. The cardiac clinical examination was unremarkable, and the patient had a normal angiography, suggesting a radiation-induced hibernating myocardium. The relevant question is: how far should an incidental (18F)-fluorodeoxyglucose uptake be explored? abstract_id: PUBMED:22704459 Is [18F] fluorodeoxyglucose uptake by the primary tumor a prognostic factor in breast cancer? Background: We retrospectively investigated (18)F-FDG uptake by the primary breast tumor as a predictor for relapse and survival. Patients And Methods: We studied 203 patients with cT1-T3N0 breast cancer. The maximum standardized uptake value (SUVmax) was measured on the primary tumor. After a median follow-up of 68 months (range 22-80), the relation between SUVmax and tumor factors, disease-free survival (DFS) and overall survival (OS) was investigated. Results: In the PET-positive patients, the median FDG uptake by the tumor was 4.7. FDG uptake was significantly related to tumor size, number of involved axillary nodes, grade, negative ER, high Ki-67 and HER2 overexpression. No distant metastases or deaths occurred in the PET-negative group. Five-year DFS was 97% and 83%, respectively, in the PET-negative and PET-positive groups (P = 0.096). At univariate analysis, DFS was significantly lower in patients with SUVmax >4.7 compared to the patients with negative PET (P = 0.042), but not to the patients with SUVmax ≤4.7 (P = 0.106). At multivariable analysis, among PET-positive patients, SUVmax was not an independent prognostic factor for DFS (HR >4.7 vs ≤4.7: 1.02; 95% CI 0.45-2.31). Five-year OS was 100% and 93%, respectively, in the PET-negative and PET-positive groups (P = 0.126). Conclusion: FDG uptake by the primary lesion was significantly associated with several prognostic variables, but it was not an independent prognostic factor. abstract_id: PUBMED:27994334 Dual-Time 18F-FDG PET/CT Imaging in Initial Locoregional Staging of Breast Carcinoma: Comparison with Conventional Imaging and Pathological Prognostic Factors. The aims of this retrospective study were to consider the diagnostic role of dual-time 18F-fluorodeoxyglucose positron emission tomography and computed tomography (18F-FDG PET/CT) in detection of breast carcinoma and axillary lymph node (ALN) status and to evaluate the primary tumor 18F-FDG uptake pattern. Preoperative staging was performed by 18F-FDG PET/CT in 78 female patients with breast carcinoma. Conventional imaging results were evaluated by breast magnetic resonance imaging (MRI) of 79 lesions in 78 patients, bilateral mammography (MMG) of 40 lesions in 40 patients, and breast ultrasonography (USG) of 47 lesions in 46 patients. The primary tumor detection rate using 18F-FDG PET/CT was higher than those using MRI, USG, and MMG. The sensitivity and specificity of 18F-FDG PET/CT scans for detecting multifocality were higher than those of MRI. The specificity of ALN metastasis detection with MRI was higher than that with 18F-FDG PET/CT, but 18F-FDG PET/CT had higher sensitivity. Higher 18F-FDG uptake levels were detected in patients with ALN metastasis, histologic grade 3, estrogen-progesterone-negative receptor status, lymphatic invasion, and moderate to poor prognostic groups.
There was no statistical difference for the retention index in categorical pathological parameters except for progesterone-negative status. In conclusion, 18F-FDG PET/CT scans may be a valuable imaging technique for evaluating primary tumor and axillary status in staging breast carcinoma, and 18F-FDG uptake may be a prognostic factor that indicates aggressive tumor biology and poor prognosis. Dual-time imaging in breast carcinoma staging may not be used for predicting pathological criteria and the aggressiveness of primary lesions. abstract_id: PUBMED:35552460 Association between tumor 18F-fluorodeoxyglucose metabolism and survival in women with estrogen receptor-positive, HER2-negative breast cancer. We examined whether 18F-fluorodeoxyglucose metabolism is associated with distant relapse-free survival (DRFS) and overall survival (OS) in women with estrogen receptor (ER)-positive, HER2-negative breast cancer. This was a cohort study examining risk factors, present at the start of the study, for subsequent survival. A cohort from Asan Medical Center, Korea, recruited between November 2007 and December 2014, was included. Patients received anthracycline-based neoadjuvant chemotherapy. The maximum standardized uptake value (SUV) of 18F-fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) was measured. The analysis included 466 women. The median (interquartile range) follow-up period without distant metastasis or death was 6.2 (5.3-7.6) years. Multivariable analysis of hazard ratio (95% confidence interval [CI]) showed that the middle and high tertiles of SUV were prognostic for DRFS (2.93, 95% CI 1.62-5.30; P < 0.001) and OS (4.87, 95% CI 1.94-12.26; P < 0.001). The 8-year DRFS rates were 90.7% (95% CI 85.5-96.1%) for those in the low tertile of maximum SUV vs. 73.7% (95% CI 68.0-79.8%) for those in the middle and high tertiles of maximum SUV. 18F-fluorodeoxyglucose PET/CT may assess the risk of distant metastasis and death in ER-positive, HER2-negative patients. abstract_id: PUBMED:34082513 Oncological Follow-up with 2-[18F]-FDG PET/CT in Li-Fraumeni Syndrome. Li-Fraumeni syndrome is a rare disorder caused by abnormalities of the tumor-suppressor protein P53 gene. We present the case of a 26-year-old female diagnosed with bilateral ductal carcinoma. The genetic panel was negative for breast cancer gene 1 (BRCA1) and BRCA2 mutations but positive for a heterozygous germline tumor protein P53 gene mutation, consistent with Li-Fraumeni syndrome. A 2-[18F]-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) scan was used for postsurgical staging and showed a hypermetabolic nodule in the right lung. A lobectomy was performed, and histopathology reported pulmonary adenocarcinoma. A year later, oncological follow-up was conducted with 2-[18F]-FDG PET/CT without evidence of abnormalities. abstract_id: PUBMED:32545312 Prognostic Value of Dual-Time-Point 18F-Fluorodeoxyglucose PET/CT in Metastatic Breast Cancer: An Exploratory Study of Quantitative Measures. This study aimed to compare the prognostic value of quantitative measures of [18F]-fluorodeoxyglucose positron emission tomography with integrated computed tomography (FDG-PET/CT) for the response monitoring of patients with metastatic breast cancer (MBC). In this prospective study, 22 patients with biopsy-verified MBC diagnosed between 2011 and 2014 at Odense University Hospital (Denmark) were followed up until 2019.
A dual-time-point FDG-PET/CT scan protocol (1 and 3 h) was applied at baseline, when MBC was diagnosed. Baseline characteristics and quantitative measures of maximum standardized uptake value (SUVmax), mean standardized uptake value (SUVmean), corrected SUVmean (cSUVmean), metabolic tumor volume (MTV), total lesion glycolysis (TLG), and corrected TLG (cTLG) were collected. Survival time was analyzed using the Kaplan-Meier method and was regressed on MTV, TLG, and cTLG while adjusting for clinicopathological characteristics. Among the 22 patients included (median age: 59.5 years), 21 patients (95%) died within the follow-up period. Median survival time was 29.13 months (95% Confidence interval: 20.4-40 months). Multivariable Cox proportional hazards regression analyses of survival time showed no influence from the SUVmean, cSUVmean, or SUVmax, while increased values of MTV, TLG, and cTLG were significantly associated with slightly higher risk, with hazard ratios ranging between 1.0003 and 1.004 (p = 0.007 to p = 0.026). Changes from 1 to 3 h were insignificant for all PET measures in the regression model. In conclusion, MTV and TLG are potential prognostic markers for overall survival in MBC patients. abstract_id: PUBMED:35158904 Prognostic Value of Metabolic, Volumetric and Textural Parameters of Baseline [18F]FDG PET/CT in Early Triple-Negative Breast Cancer. (1) Background: triple-negative breast cancer (TNBC) remains a clinical and therapeutic challenge primarily affecting young women with poor prognosis. TNBC is currently treated as a single entity but presents a very diverse profile in terms of prognosis and response to treatment. Positron emission tomography/computed tomography (PET/CT) with 18F-fluorodeoxyglucose ([18F]FDG) is gaining importance for the staging of breast cancers. TNBCs often show high [18F]FDG uptake and some studies have suggested a prognostic value for metabolic and volumetric parameters, but no study to our knowledge has examined textural features in TNBC. The objective of this study was to evaluate the association between metabolic, volumetric and textural parameters measured at the initial [18F]FDG PET/CT and disease-free survival (DFS) and overall survival (OS) in patients with nonmetastatic TBNC. (2) Methods: all consecutive nonmetastatic TNBC patients who underwent a [18F]FDG PET/CT examination upon diagnosis between 2012 and 2018 were retrospectively included. The metabolic and volumetric parameters (SUVmax, SUVmean, SUVpeak, MTV, and TLG) and the textural features (entropy, homogeneity, SRE, LRE, LGZE, and HGZE) of the primary tumor were collected. (3) Results: 111 patients were enrolled (median follow-up: 53.6 months). In the univariate analysis, high TLG, MTV and entropy values of the primary tumor were associated with lower DFS (p = 0.008, p = 0.006 and p = 0.025, respectively) and lower OS (p = 0.002, p = 0.001 and p = 0.046, respectively). The discriminating thresholds for two-year DFS were calculated as 7.5 for MTV, 55.8 for TLG and 2.6 for entropy. The discriminating thresholds for two-year OS were calculated as 9.3 for MTV, 57.4 for TLG and 2.67 for entropy. In the multivariate analysis, lymph node involvement in PET/CT was associated with lower DFS (p = 0.036), and the high MTV of the primary tumor was correlated with lower OS (p = 0.014). (4) Conclusions: textural features associated with metabolic and volumetric parameters of baseline [18F]FDG PET/CT have a prognostic value for identifying high-relapse-risk groups in early TNBC patients. 
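The volumetric PET measures used in the two studies above are tied together by simple definitions: SUV normalizes measured tissue activity to injected dose per body weight, MTV is the volume of tumor voxels above a segmentation threshold, and TLG is the mean SUV inside that volume multiplied by MTV. A minimal numpy sketch over a toy SUV map follows; the 40%-of-SUVmax threshold is one common segmentation choice and is illustrative only, as studies differ.

```python
# SUVmax, MTV, and TLG from a toy SUV volume (assumes numpy).
import numpy as np

suv = np.array([[[0.5, 1.0], [2.0, 8.0]],
                [[7.0, 6.5], [1.2, 0.8]]])  # hypothetical SUV per voxel
voxel_ml = 0.064                             # 4 mm isotropic voxels -> 0.064 mL each

suv_max = suv.max()
mask = suv >= 0.4 * suv_max                  # 40%-of-SUVmax segmentation (illustrative)
mtv = mask.sum() * voxel_ml                  # metabolic tumor volume, in mL
suv_mean = suv[mask].mean()                  # mean SUV inside the segmented volume
tlg = suv_mean * mtv                         # total lesion glycolysis

print(f"SUVmax={suv_max:.1f}, MTV={mtv:.3f} mL, SUVmean={suv_mean:.2f}, TLG={tlg:.3f}")
```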
abstract_id: PUBMED:34300339 Relationship between Prognostic Stage in Breast Cancer and Fluorine-18 Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography. This retrospective study examined the relationship between the standardized uptake value max (SUVmax) of fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) and the prognostic stage of breast cancer. We examined 358 breast cancers in 334 patients who underwent 18F-FDG PET/CT for initial staging between January 2016 and December 2019. We extracted data including SUVmax of 18F-FDG PET and pathological biomarkers, including estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and nuclear grade. Anatomical and prognostic stages were determined per the American Joint Committee on Cancer (eighth edition). We examined whether there were statistical differences in SUVmax between each prognostic stage. The mean SUVmax values for clinical prognostic stages were as follows: stage 0, 2.2 ± 1.4; stage IA, 2.6 ± 2.1; stage IB, 4.2 ± 3.5; stage IIA, 5.2 ± 2.8; stage IIB, 7.7 ± 6.7; and stage III + IV, 7.0 ± 4.5. The SUVmax values for pathological prognostic stages were as follows: stage 0, 2.2 ± 1.4; stage IA, 2.8 ± 2.2; stage IB, 5.4 ± 3.6; stage IIA, 6.3 ± 3.1; stage IIB, 9.2 ± 7.5; and stage III + IV, 6.2 ± 5.2. There were significant differences in mean SUVmax between clinical prognostic stage 0 and ≥II (p < 0.001) and I and ≥II (p < 0.001). There were also significant differences in mean SUVmax between pathological prognostic stage 0 and ≥II (p < 0.001) and I and ≥II (p < 0.001). In conclusion, mean SUVmax increased with all stages up to prognostic stage IIB, and there were significant differences between several stages. The SUVmax of 18F-FDG PET/CT may contribute to prognostic stage stratification, particularly in early cases of breast cancers. Answer: Yes, [18F] fluorodeoxyglucose (FDG) uptake by the primary tumor is considered a prognostic factor in breast cancer. Several studies have found a correlation between FDG uptake and various prognostic parameters in breast cancer. For instance, one study found that FDG uptake was significantly related to tumor size, number of involved axillary nodes, grade, negative estrogen receptor (ER) status, high Ki-67, and HER2 overexpression, which are all important prognostic factors (PUBMED:22704459). Another study reported that higher FDG uptake levels were detected in patients with axillary lymph node (ALN) metastasis, histologic grade 3, estrogen-progesterone-negative receptor status, lymphatic invasion, and moderate to poor prognostic groups, suggesting that FDG uptake may indicate aggressive tumor biology and poor prognosis (PUBMED:27994334). Furthermore, a study examining women with ER-positive, HER2-negative breast cancer found that higher tertiles of maximum standardized uptake value (SUV) were prognostic for distant relapse-free survival (DRFS) and overall survival (OS), indicating that FDG PET/CT may assess the risk of distant metastasis and death in this patient group (PUBMED:35552460). Additionally, metabolic tumor volume (MTV) and total lesion glycolysis (TLG) have been identified as potential prognostic markers for overall survival in patients with metastatic breast cancer (PUBMED:32545312).
In the context of triple-negative breast cancer (TNBC), metabolic, volumetric, and textural parameters measured at the initial FDG PET/CT were associated with disease-free survival (DFS) and overall survival (OS), with high values of TLG, MTV, and entropy of the primary tumor being associated with lower DFS and OS (PUBMED:35158904). Lastly, a retrospective study found that the SUVmax of FDG PET/CT may contribute to prognostic stage stratification, particularly in early cases of breast cancers (PUBMED:34300339). In summary, the uptake of [18F]FDG by the primary tumor is associated with several prognostic factors in breast cancer and can be used as a non-invasive metabolic factor to predict outcomes such as relapse and survival.
Instruction: Do fibrinolytic proteins of human bile derive exclusively from gall bladder? Abstracts: abstract_id: PUBMED:12211732 Do fibrinolytic proteins of human bile derive exclusively from gall bladder? Background: In this study we addressed the issue of whether fibrinolytic proteins are present in gall bladder bile only or in choledochus bile as well. Material And Methods: Gall bladder bile was obtained from 20 patients (Group I) undergoing laparoscopic cholecystectomy. Bile from the common bile duct was aspirated after insertion of a Kehr drain in 9 patients (Group II). The concentrations of t-PA, u-PA, PAI-1 and PAI-2 were measured by ELISA. Results: We have shown that in cholecystectomized patients fibrinolytic proteins can be detected in bile both from the gall bladder and from the choledochus. Mean concentrations of t-PA, u-PA, PAI-1 were lower in Group II (5.69 ng/ml vs 15.7; 0.46 ng/ml vs 0.7; 16.82 ng/ml vs 26.16 ng/ml) or nearly equal for PAI-2 (343.53 ng/ml vs 341.02). All differences were insignificant (p > 0.05). Conclusions: Based on these results we concluded that the entire biliary tree produces the fibrinolytic proteins; thus this production is not restricted to the gall bladder as was earlier reported [1]. abstract_id: PUBMED:25336405 Characterization of the bile and gall bladder microbiota of healthy pigs. Bile is a biological fluid synthesized in the liver, stored and concentrated in the gall bladder (interdigestive), and released into the duodenum after food intake. The microbial populations of different parts of the mammalian gastrointestinal tract (stomach, small and large intestine) have been extensively studied; however, the characterization of the bile microbiota had not been tackled until now. We have studied, by culture-dependent techniques and a 16S rRNA gene-based analysis, the microbiota present in the bile, gall bladder mucus, and biopsies of healthy sows. Also, we have identified the most abundant bacterial proteins in the bile samples. Our data show that the gall bladder ecosystem is mainly populated by members of the phyla Proteobacteria, Firmicutes, and Bacteroidetes. Furthermore, fluorescent in situ hybridization (FISH) and transmission electron microscopy (TEM) allowed us to visualize the presence of individual bacteria of different morphological types, in close association with either the epithelium or the erythrocytes, or inside the epithelial cells. Our work has generated new knowledge of bile microbial profiles and functions and might provide the basis for future studies on the relationship between bile microbiota, gut microbiota, and health. abstract_id: PUBMED:6871384 Proteins of guinea-pig bile: selective resorption in the gall bladder. We have examined and compared the proteins present in guinea-pig bile as collected either from the common hepatic duct or from the gall bladder. Guinea-pig bile, collected from the common bile duct, has a rather low concentration of protein. Detailed examination shows that the concentrations of actively transported proteins such as immunoglobulin A and haptoglobin:haemoglobin complexes are markedly lower than in rats although the concentrations of proteins which, like albumin, leak non-specifically into bile are similar in the two species. We also find that the protein composition of guinea-pig bile is extensively and selectively modified by resorption of protein in the gall bladder. abstract_id: PUBMED:27059701 Synchronous malignancies of the gall bladder and common bile duct: a case report.
Background: Synchronous malignancies of the gall bladder and common bile duct are a rare entity. Much of our knowledge on this topic comes from the Japanese literature. Most of the synchronous carcinomas described in the Japanese literature are associated with the presence of an anomalous pancreatic-bile duct junction (APBDJ). Case Presentation: We report a case of synchronous malignancy of the extrahepatic biliary tree involving the fundus of the gall bladder and the intrapancreatic portion of the common bile duct (CBD). A 50-year-old female patient presented to us with clinical features of obstructive jaundice and on radiological evaluation was diagnosed to have a periampullary carcinoma; the patient underwent a pancreaticoduodenectomy, and histopathological examination revealed adenocarcinoma of the gall bladder and the intrapancreatic portion of the CBD. Conclusions: Synchronous malignancies have been rarely reported from the Indian subcontinent; therefore, it is essential for the clinician as well as the pathologist to maintain a high index of suspicion while evaluating such lesions and to look for the presence of an anomalous pancreatic-bile duct junction whenever indicated. abstract_id: PUBMED:31641494 Giant intracholecystic papillary tubular adenoma of the gall bladder with gall stones in an elderly woman; case report. Gall bladder polyps occur in 0.4% of patients undergoing cholecystectomy; the majority of gall bladder polyps are benign, and they are classified into 3 types: epithelial or adenomatous polyps, mesenchymal polyps, and pseudopolyps. Gall bladder polyps mostly affect females and those more than 50 years of age. Ultrasound is a very sensitive tool in the diagnosis. An 88-year-old woman presented with epigastric pain and right hypochondrial pain, fever, and vomiting for 1 week. Clinical examination showed jaundice and tenderness at the right hypochondrial region. Investigations showed an elevated WBC count, bilirubin level, and alkaline phosphatase. MRCP showed multiple gall stones with a large irregular polyp in the fundus of the gall bladder, and a dilated common bile duct with multiple stones in its lumen. Cholecystectomy was done with exploration of the common bile duct and extraction of stones; a T-tube was placed inside the CBD. On the 14th day, T-tube cholangiography was done, which showed passage of the dye to the duodenum; the tube was extracted and the patient was discharged home with no postoperative complications. The histopathology showed intracholecystic papillary tubular adenoma of the gall bladder with no evidence of malignancy. The general indications for surgery for gall bladder polyps include size of more than 10 mm (especially if solitary), the presence of associated gall stones, age of more than 60 years, and polyps causing symptoms. In this patient, the large size of the polyp and obstructive jaundice were the two indications for surgery. abstract_id: PUBMED:24830319 Effects of thienorphine on contraction of the guinea pig sphincter of Oddi, choledochus and gall bladder. Opioid analgesics are widely believed to cause spasm of the bile duct sphincter and so impede bile flow. Thienorphine is a partial opioid agonist that is a good candidate for the treatment of opioid dependence; however, to date, no studies have reported the effects of thienorphine on the function of the biliary tract. This study examined the effects of thienorphine on the guinea pig isolated sphincter of Oddi, choledochus and gall bladder, and on bile flow in vivo.
The area under the curve (AUC) of the isolated sphincter of Oddi was not influenced by thienorphine or buprenorphine, whereas morphine increased the AUC of the isolated sphincter of Oddi in a concentration-dependent manner. Thienorphine and buprenorphine concentration-dependently decreased the AUC of the isolated choledochus, while morphine increased the AUC of the isolated choledochus. Thienorphine had no effect on the contractile amplitude or basal tension of isolated gall bladder muscle strips. In contrast, buprenorphine and morphine increased the contractile basal tension of isolated gall bladder muscle strips in a concentration-dependent manner. Thienorphine (0.01-1.0mg/kg) had no significant inhibitory effect on bile flow. However, morphine (1.0-10mg/kg) and buprenorphine (1.0mg/kg) significantly inhibited bile flow. The maximum inhibition of bile flow by buprenorphine was 63.9±12.9% and by morphine was 74.1±11.3%. In summary, thienorphine has little influence on the guinea pig isolated sphincter of Oddi, choledochus and gall bladder or on bile flow, which may result in a lack of adverse biliary colic effects. abstract_id: PUBMED:28663675 Synchronous Gall Bladder and Bile Duct Cancer: A Short Series of Seven Cases and a Brief Review of Literature. Background: Simultaneous presence of cancer in the gall bladder and in the biliary tree could be due to local spread, metastases, de novo multifocal origin, or as part of a field change. In the past, such an association has been described in patients with anomalous pancreatico-biliary ductal junction. Aims: We studied seven consecutive patients with simultaneous gall bladder and bile duct malignancy with a view to identifying the best way to treat them, and if possible to hypothesize the etiopathogenesis. Methods: Over a period of 24 months, there were seven cases with synchronous gall bladder and extra-hepatic bile duct cancer. Results: None of our patients had an anomalous pancreatico-biliary ductal junction. Three patients were found to have inoperable disease, three others underwent curative resection, and one patient had a complete response to chemotherapy. Herein, we describe these patients with synchronous bile duct and gall bladder cancer and the lessons learnt from them. Of the seven patients, we were able to complete a curative resection in three, while three were found to have inoperable disease. One patient had an excellent response to chemotherapy. Conclusion: Thus aggressive therapy in such patients with gall bladder cancer may be warranted in select cases. Also, the gall bladder specimens in patients undergoing surgery for cholangiocarcinoma should be analyzed in detail to identify foci of dysplasia or change in the epithelium. The pathogenesis may be due to a common field change in the biliary epithelium. abstract_id: PUBMED:29737313 Laparoscopic management of a case of accessory gall bladder with review of literature. Gall bladder duplication is a rare congenital anomaly. True duplication is still rarer. Pre-operative detection helps in avoiding complications or missing the gall bladder during surgery. Ultrasonography (USG) and magnetic resonance cholangiography are the investigations of choice. Laparoscopic cholecystectomy is the preferred modality for management of double gall bladder. We present a case diagnosed as cholelithiasis on USG. During laparoscopic surgery, 2 gall bladders were found. She had a normal gall bladder that was lying in the supraduodenal area.
It had a cystic duct that joined the common bile duct. There was an accessory gall bladder attached to the anterior free margin of the liver. This gallbladder was occluded with a big solitary calculus occupying the whole of the gall bladder cavity and had a small feeding vessel, whereas its duct had fibrosed. abstract_id: PUBMED:8504975 Reduced cholesterol metastability of hepatic bile and its further decline in gall bladder bile in patients with cholesterol gall stones. The reduced metastability of biliary cholesterol in the gall bladder bile of patients with cholesterol gall stones has been well shown. The purpose of this study was to examine the hypothesis that such a difference in metastability already exists in hepatic bile. Paired hepatic and gall bladder bile samples were collected from 10 patients with cholesterol gall stones and six patients without gall stones. Cholesterol nucleation time, biliary lipid concentration, vesicular cholesterol distribution, and biliary protein concentration were measured and compared. The nucleation time in the hepatic bile of patients with cholesterol gall stones was significantly shorter than in the gall stone free patients (8.2 (7.2) v 15.7 (5.8) days, p < 0.05), and was associated with a greater concentration of biliary lipid despite the lack of a difference in the cholesterol saturation index (CSI) and total protein concentration. During the storage of bile in the gall bladder, the nucleation time became quicker in the patients with cholesterol gall stones (2.9 (1.7) days) while it was similar in the gall stone free patients (17.3 (5.7) days) compared with that of the corresponding hepatic bile. These differences were associated with a higher CSI (1.44 (0.33) v 1.13 (0.14), p < 0.05) and a greater vesicular cholesterol distribution (19.7 (11.9) v 4.4 (1.4)%, p < 0.01) in the patients with cholesterol gall stones than the gall stone free patients. The concentrations of total lipid and protein in gall bladder bile were not significantly different between the two groups. In conclusion, patients with cholesterol gall stones produce less metastable hepatic bile, as evidenced by a shorter nucleation time. During the storage of the bile in the gall bladder, the metastability is reduced further only in the cholesterol gall stone patients but not in the gall stone free patients. abstract_id: PUBMED:4064373 Gall bladder: the predominant source of bile IgA in man? The sedimentation profiles of IgA and Secretory Component (SC) and the concentrations of IgA, IgG, IgM, SC and albumin were evaluated after an overnight fast in gall bladder bile of six adult subjects without hepatobiliary disease. The sedimentation profiles differed from those previously obtained in hepatic bile in three ways: gall bladder bile contained a greater percentage of free-SC, a greater percentage of polymeric-IgA (p-IgA), and a major peak of 14 to 19 S p-IgA associated to SC. In contrast to hepatic bile in which IgG is the predominant Ig, IgA clearly was the predominant Ig in gall-bladder bile, its concentration averaging 92 micrograms/ml. Relative-to-albumin coefficients of excretion of proteins in gall bladder bile averaged 0.99 for IgG, 8.6 for monomeric IgA, 196 for p-IgA and 31 for IgM, indicating that there was a selective excretion of IgA and IgM into gall bladder bile. As compared to hepatic bile, the enrichment of gall bladder bile with IgA and IgM was respectively 6.5 and 11.5 times greater than with IgG.
These results suggest that quite a significant amount of p-IgA could have been added to bile during its storage in the gall bladder, which should therefore be regarded as the predominant source of bile IgA in humans. Answer: No, fibrinolytic proteins of human bile do not derive exclusively from the gall bladder. A study that measured the concentrations of tissue plasminogen activator (t-PA), urokinase-type plasminogen activator (u-PA), plasminogen activator inhibitor-1 (PAI-1), and plasminogen activator inhibitor-2 (PAI-2) in bile from both the gall bladder and the common bile duct (choledochus) found that fibrinolytic proteins could be detected in bile from both sources. The mean concentrations of t-PA, u-PA, and PAI-1 were lower in bile from the common bile duct compared to the gall bladder, while the concentration of PAI-2 was nearly equal in both. However, all differences were statistically insignificant, leading to the conclusion that the entire biliary tree produces fibrinolytic proteins, and this production is not restricted to the gall bladder (PUBMED:12211732).
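For readers parsing the IgA abstract above, the relative-to-albumin coefficient of excretion is conventionally the bile-to-serum concentration ratio of a protein normalized by the same ratio for albumin; the formula below states that usual definition, which is an assumption here since the abstract does not spell it out:

CE_{\mathrm{rel}}(P) = \frac{[P]_{\mathrm{bile}} / [P]_{\mathrm{serum}}}{[\mathrm{Alb}]_{\mathrm{bile}} / [\mathrm{Alb}]_{\mathrm{serum}}}

On this reading, values near 1 (0.99 for IgG) indicate passive leakage in proportion to albumin, while values far above 1 (196 for polymeric IgA, 31 for IgM) indicate selective, transporter-mediated secretion, which is what the authors mean by selective excretion.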
Instruction: Does hypocortisolism predict a poor response to cognitive behavioural therapy in chronic fatigue syndrome? Abstracts: abstract_id: PUBMED:19607750 Does hypocortisolism predict a poor response to cognitive behavioural therapy in chronic fatigue syndrome? Background: There is evidence that patients with chronic fatigue syndrome (CFS) have mild hypocortisolism. The clinical significance of this is unclear. We aimed to determine whether hypocortisolism exerted any effect on the response of CFS to cognitive behavioural therapy (CBT). Method: We measured 24-h urinary free cortisol (UFC) in 84 patients with Centers for Disease Control and Prevention (CDC)-defined CFS (of whom 64 were free from psychotropic medication) who then received CBT in a specialist, tertiary out-patient clinic as part of their usual clinical care. We also measured salivary cortisol output from 0800 to 2000 h in a subsample of 56 psychotropic medication-free patients. Results: Overall, 39% of patients responded to CBT after 6 months of treatment. Lower 24-h UFC output was associated with a poorer response to CBT but only in psychotropic medication-free patients. A flattened diurnal profile of salivary cortisol was also associated with a poor response to CBT. Conclusions: Low cortisol is of clinical relevance in CFS, as it is associated with a poorer response to CBT. Hypocortisolism could be one of several maintaining factors that interact in the persistence of CFS. abstract_id: PUBMED:18937978 Salivary cortisol output before and after cognitive behavioural therapy for chronic fatigue syndrome. Background: There is evidence that patients with chronic fatigue syndrome (CFS) have mild hypocortisolism. One theory about the aetiology of this hypocortisolism is that it occurs late in the course of CFS via factors such as inactivity, sleep disturbance, chronic stress and deconditioning. We aimed to determine whether therapy aimed at reversing these factors, cognitive behavioural therapy for CFS, could increase cortisol output in CFS. Methods: We measured diurnal salivary cortisol output between 0800 and 2000 h before and after 15 sessions (or 6 months) of CBT in 41 patients with CDC-defined CFS attending a specialist, tertiary outpatient clinic. Results: There was a significant clinical response to CBT, and a significant rise in salivary cortisol output after CBT. Limitations: We were unable to control for the passage of time using a non-treated CFS group. Conclusions: Hypocortisolism in CFS is potentially reversible by CBT. Given previous suggestions that lowered cortisol may be a maintaining factor in CFS, CBT offers a potential way to address this. abstract_id: PUBMED:25260861 Cortisol output in adolescents with chronic fatigue syndrome: pilot study on the comparison with healthy adolescents and change after cognitive behavioural guided self-help treatment. Objective: This study examined cortisol in adolescents with chronic fatigue syndrome (CFS) compared to healthy adolescents and changes in cortisol after cognitive behavioural guided self-help treatment. Exploratory analyses investigated the association between cortisol output and psychological variables. Methods: Salivary cortisol was measured upon awakening, at 15, 30, 45 and 60 min afterwards and at 12 noon, 4:00 p.m. and 8:00 p.m., in adolescents with CFS and healthy controls (HC). Groups were matched for age, gender, menarche status, menstrual cycle and awakening time. Twenty-four adolescents with CFS provided saliva samples six months after treatment.
The main outcome measure was total salivary output over the day, calculated by area under the curve (AUC; a computational sketch of this calculation follows this question's answer). The salivary awakening response was also assessed. Results: Cortisol output over the day was significantly lower in the CFS group (n=46) than in healthy controls (n=33). Within the CFS group, lower daily cortisol output was associated with higher self-reported perfectionist striving and prosocial behaviour. There were no significant group differences in the awakening response (n=47 CFS versus n=34 HC). After treatment, adolescents with CFS (n=21) showed a significant increase in daily cortisol output, up to normal levels. Conclusion: The reduced daily cortisol output in adolescents with CFS is in line with adult findings. Associations between reduced cortisol output and two psychological variables, perfectionism and prosocial behaviour, are consistent with cognitive behavioural models of chronic fatigue syndrome. The mild hypocortisolism is reversible; cortisol output had returned to healthy adolescent levels by six months after cognitive behavioural guided self-help treatment. abstract_id: PUBMED:23312650 Chronic fatigue syndrome. Chronic fatigue syndrome (CFS) is an illness characterized by disabling fatigue of at least 6 months. The aetiology of the condition has been hotly debated. In this chapter the evidence for CFS as a post-viral condition and/or a neurological condition is reviewed. Although there is evidence that CFS is triggered by certain viruses in some patients and that neurobiological changes such as hypocortisolism are associated with the syndrome, neither mechanism is sufficient to explain the extent of the symptoms or disability experienced by patients. It is unlikely that CFS can be understood through one aetiological mechanism. Rather it is a complex illness which is best explained in terms of a multifactorial cognitive behavioural model. This model proposes that CFS is precipitated by life events and/or viral illness in vulnerable individuals, such as those who are genetically predisposed, prone to distress, high achievement, and over- or under-activity. A self-perpetuating cycle then develops, in which physiological changes, illness beliefs, reduced and inconsistent activity, sleep disturbance, medical uncertainty and lack of guidance interact to maintain symptoms. Treatments based on this model, including cognitive behavioural therapy and graded exercise therapy, are effective at significantly reducing fatigue and disability in CFS. This chapter provides a description of these approaches and details of the trials conducted in the area. abstract_id: PUBMED:11453960 Plasma leptin in chronic fatigue syndrome and a placebo-controlled study of the effects of low-dose hydrocortisone on leptin secretion. Objective: Previous studies have suggested that chronic fatigue syndrome (CFS) is associated with changes in appetite and weight, and also with mild hypocortisolism. Because both of these features may be related to leptin metabolism, we undertook a study of leptin in CFS. Design: (i) A comparison of morning leptin concentration in patients with CFS and controls and (ii) a randomized, placebo-controlled crossover study of the effects of hydrocortisone on leptin levels in CFS. Patients: Thirty-two medication-free patients with CFS but not comorbid depression or anxiety. Thirty-two age-, gender-, weight-, body mass index- and menstrual cycle-matched volunteer subjects acted as controls. Measurements: We measured basal 0900 h plasma leptin levels in patients and controls.
All 32 patients were taking part in a randomized, placebo-controlled crossover trial of low dose (5 or 10 mg) hydrocortisone as a potential therapy for CFS. We measured plasma leptin after 28 days treatment with hydrocortisone and after 28 days treatment with placebo. Results: At baseline, there was no significant difference in plasma leptin between patients [mean 13.8, median 7.4, interquartile range (IQR) 18.0 ng/ml] and controls (mean 10.2, median 5.5, IQR 11.3 ng/ml). Hydrocortisone treatment, for both doses combined, caused a significant increase in leptin levels compared to placebo. When the two doses were analysed separately, only 10 mg was associated with a significant effect on leptin levels. We also compared the hydrocortisone induced increase in leptin between those who were deemed treatment-responders and those deemed nonresponders. Responders showed a significantly greater hydrocortisone-induced rise in leptin than nonresponders. This association between a clinical response to hydrocortisone and a greater rise in leptin levels may indicate a greater biological effect of hydrocortisone in these subjects, perhaps due to increased glucocorticoid receptor sensitivity, which may be present in some patients with CFS. Conclusions: We conclude that, while we found no evidence of alterations in leptin levels in CFS, low dose hydrocortisone therapy caused increases in plasma leptin levels, with this biological response being more marked in those CFS subjects who showed a positive therapeutic response to hydrocortisone therapy. Increases in plasma leptin levels following low dose hydrocortisone therapy may be a marker of pretreatment physiological hypocortisolism and of response to therapy. abstract_id: PUBMED:24636516 The role of hypocortisolism in chronic fatigue syndrome. Background: There is accumulating evidence of hypothalamic-pituitary-adrenal (HPA) axis hypofunction in chronic fatigue syndrome (CFS). However, knowledge of this hypofunction has so far come exclusively from research in adulthood, and its clinical significance remains unclear. The objective of the current study was to assess the role of the HPA-axis in adolescent CFS and recovery from adolescent CFS. Method: Before treatment, we compared the salivary cortisol awakening response of 108 diagnosed adolescent CFS patients with that of a reference group of 38 healthy peers. Salivary cortisol awakening response was measured again after 6 months of treatment in CFS patients. Results: Pre-treatment salivary cortisol levels were significantly lower in CFS-patients than in healthy controls. After treatment recovered patients had a significant rise in salivary cortisol output attaining normalization, whereas non-recovered patients improved slightly, but not significantly. The hypocortisolism found in CFS-patients was significantly correlated to the amount of sleep. Logistic regression analysis showed that an increase of one standard deviation in the difference between pre- and post-treatment salivary cortisol awakening response was associated with a 93% higher odds of recovery (adjusted OR 1.93 (1.18 to 3.17), p=0.009). Pre-treatment salivary cortisol did not predict recovery. Conclusions: Hypocortisolism is associated with adolescent CFS. It is not pre-treatment cortisol but its change to normalization that is associated with treatment success. We suggest that this finding may have clinical implications regarding the adaptation of future treatment strategies. 
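The "93% higher odds of recovery" in the adolescent study above is simply its adjusted odds ratio restated as a percentage; assuming the standard logistic-regression reading of an odds ratio per one-standard-deviation increase in the predictor, the arithmetic is:

\mathrm{OR}_{\text{per SD}} = e^{\beta} = 1.93, \qquad \Delta\ \text{odds} = (\mathrm{OR} - 1) \times 100\% = 93\%

The 95% confidence interval of 1.18 to 3.17 excludes 1, which is what makes the association significant at p = 0.009.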
abstract_id: PUBMED:12618533 Assessment of cortisol response with low-dose and high-dose ACTH in patients with chronic fatigue syndrome and healthy comparison subjects. A reduced secretion of cortisol has been proposed as a possible explanation of the symptoms in chronic fatigue syndrome. However, the evidence of hypocortisolism in chronic fatigue syndrome is conflicting. In order to simultaneously assess possible alterations in adrenocortical sensitivity and secretory adrenal reserve, the authors administered both low-dose and high-dose ACTH to a group of 18 chronic fatigue syndrome patients and 18 age- and gender-matched healthy comparison subjects. No response differences for salivary and plasma cortisol were detectable after administration of either low-dose or high-dose ACTH, indicating that primary adrenal insufficiency is unlikely to play a significant role in the etiology of chronic fatigue syndrome. abstract_id: PUBMED:11502777 Hypothalamo-pituitary-adrenal axis dysfunction in chronic fatigue syndrome, and the effects of low-dose hydrocortisone therapy. These neuroendocrine studies were part of a series of studies testing the hypotheses that 1) there may be reduced activity of the hypothalamic-pituitary-adrenal axis in chronic fatigue syndrome and 2) low-dose augmentation with hydrocortisone therapy would improve the core symptoms. We measured ACTH and cortisol responses to human CRH, the insulin stress test, and D-fenfluramine in 37 medication-free patients with CDC-defined chronic fatigue syndrome but no comorbid psychiatric disorders and 28 healthy controls. We also measured 24-h urinary free cortisol in both groups. All patients (n = 37) had a pituitary challenge test (human CRH) and a hypothalamic challenge test [either the insulin stress test (n = 16) or D-fenfluramine (n = 21)]. Baseline cortisol concentrations were significantly raised in the chronic fatigue syndrome group for the human CRH test only. Baseline ACTH concentrations did not differ between groups for any test. ACTH responses to human CRH, the insulin stress test, and D-fenfluramine were similar for patient and control groups. Cortisol responses to the insulin stress test did not differ between groups, but there was a trend for cortisol responses both to human CRH and D-fenfluramine to be lower in the chronic fatigue syndrome group. These differences were significant when ACTH responses were controlled. Urinary free cortisol levels were lower in the chronic fatigue syndrome group compared with the healthy group. These results indicate that ACTH responses to pituitary and hypothalamic challenges are intact in chronic fatigue syndrome and do not support previous findings of reduced central responses in hypothalamic-pituitary-adrenal axis function or the hypothesis of abnormal CRH secretion in chronic fatigue syndrome. These data further suggest that the hypocortisolism found in chronic fatigue syndrome may be secondary to reduced adrenal gland output. Thirty-two patients were treated with a low-dose hydrocortisone regime in a double-blind, placebo-controlled cross-over design, with 28 days on each treatment. They underwent repeated 24-h urinary free cortisol collections, a human CRH test, and an insulin stress test after both active and placebo arms of treatment. Looking at all subjects, 24-h urinary free cortisol was higher after active compared with placebo treatments, but 0900-h cortisol levels and the ACTH and cortisol responses to human CRH and the insulin stress test did not differ.
However, a differential effect was seen in those patients who responded to active treatment (defined as a reduction in fatigue score to the median population level or less). In this group, there was a significant increase in the cortisol response to human CRH, which reversed the previously observed blunted responses seen in these patients. We conclude that the improvement in fatigue seen in some patients with chronic fatigue syndrome during hydrocortisone treatment is accompanied by a reversal of the blunted cortisol responses to human CRH. abstract_id: PUBMED:21946893 Hypothalamic-pituitary-adrenal axis dysfunction in chronic fatigue syndrome. The weight of current evidence supports the presence of the following factors related to hypothalamic-pituitary-adrenal (HPA) axis dysfunction in patients with chronic fatigue syndrome (CFS): mild hypocortisolism; attenuated diurnal variation of cortisol; enhanced negative feedback to the HPA axis; and blunted HPA axis responsiveness. Furthermore, HPA axis changes seem clinically relevant, as they are associated with worse symptoms and/or disability and with poorer outcomes to standard treatments for CFS. Regarding etiology, women with CFS are more likely to have reduced cortisol levels. Studies published in the past 8 years provide further support for a multifactorial model in which several factors interact to moderate HPA axis changes. In particular, low activity levels, depression and early-life stress appear to reduce cortisol levels, whereas the use of psychotropic medication can increase cortisol. Addressing these factors, for example with cognitive behavioral therapy, can increase cortisol levels and is probably the first-line approach for correcting HPA axis dysfunction at present, as steroid replacement is not recommended. Given what is now a fairly consistent pattern of findings for the type of HPA axis changes found in CFS, we recommend that future work focuses on improving our understanding of the cause and relevance of these observed changes. abstract_id: PUBMED:15036250 The HPA axis and the genesis of chronic fatigue syndrome. Many studies of patients with long-standing chronic fatigue syndrome (CFS) have found alterations to the hypothalamo-pituitary-adrenal (HPA) axis, including mild hypocortisolism, heightened negative feedback and blunted responses to challenge. However, recent prospective studies of high-risk cohorts suggest that there are no HPA axis changes present during the early stages of the genesis of fatiguing illnesses. Moreover, HPA axis changes can be reversed by modifying behavioural features of the illness, such as inactivity, deconditioning and sleep disturbance. Nevertheless, raising levels of cortisol pharmacologically can temporarily alleviate symptoms of fatigue. This article presents the case that there is no specific change to the HPA axis in CFS and that the observed changes are of multifactorial aetiology, with some factors occurring as a consequence of the illness. Nevertheless, the HPA axis might play a role in exacerbating or perpetuating symptoms late on in the course of the illness. Answer: Hypocortisolism has been associated with a poorer response to cognitive behavioural therapy (CBT) in chronic fatigue syndrome (CFS). Specifically, lower 24-hour urinary free cortisol (UFC) output was linked to a less favorable response to CBT, but this association was only significant in patients who were not taking psychotropic medication.
Additionally, a flattened diurnal profile of salivary cortisol was also correlated with a poor response to CBT (PUBMED:19607750). However, it is important to note that while hypocortisolism may be a factor in the persistence of CFS, it is not the sole determining factor for the success of CBT, as CFS is a complex illness best explained by a multifactorial cognitive behavioural model (PUBMED:23312650). Moreover, studies have shown that hypocortisolism in CFS can potentially be reversed by CBT, as evidenced by a significant rise in salivary cortisol output after CBT treatment (PUBMED:18937978). This suggests that while hypocortisolism may predict a poorer initial response to CBT, the therapy itself may help to correct the hypocortisolism, potentially leading to improved outcomes over time. In adolescents with CFS, reduced daily cortisol output was observed, and this hypocortisolism was reversible; cortisol output returned to levels comparable to healthy adolescents after cognitive behavioural guided self-help treatment (PUBMED:25260861). This further supports the notion that while hypocortisolism may be present in CFS patients, it is not a permanent state and can be modified through appropriate interventions such as CBT. In summary, while hypocortisolism may predict a poorer response to CBT in CFS, particularly in patients not on psychotropic medication, it is not an absolute predictor of therapy outcomes. CBT itself may help to normalize cortisol levels, which could, in turn, contribute to the overall effectiveness of the treatment for CFS.
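Several of the cortisol studies in this answer summarize diurnal output as an area under the curve over the sampling grid listed in PUBMED:25260861 (awakening, 15 to 60 min after, noon, 4:00 p.m., 8:00 p.m.). Below is a minimal sketch of that computation, assuming the usual trapezoidal rule and a 7:00 a.m. awakening to place the clock-time samples on one axis; the cortisol values themselves are hypothetical, since the abstracts report no per-time-point numbers.

import numpy as np

# Sampling times converted to hours after awakening (awakening assumed at 07:00,
# so noon = 5 h, 4:00 p.m. = 9 h, 8:00 p.m. = 13 h).
times_h = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 5.0, 9.0, 13.0])

# Hypothetical salivary cortisol concentrations (nmol/L) for one participant.
cortisol = np.array([12.0, 16.0, 18.0, 15.0, 13.0, 7.0, 4.0, 2.5])

# Total daily output as the area under the concentration-time curve,
# computed with the trapezoidal rule written out explicitly.
auc_g = float(np.sum((times_h[1:] - times_h[:-1]) * (cortisol[1:] + cortisol[:-1]) / 2.0))
print(f"daily cortisol output (AUCg): {auc_g:.1f} nmol/L*h")

Group contrasts such as "lower daily cortisol output in the CFS group" then reduce to comparing these per-participant areas, and the reported post-treatment normalization is a rise in this single number.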
Instruction: Do women want disclosure of fetal gender during prenatal ultrasound scan? Abstracts: abstract_id: PUBMED:20418643 Do women want disclosure of fetal gender during prenatal ultrasound scan? Background/objectives: It is possible that not all women would want the disclosure of fetal gender by the sonologist during a prenatal scan. The objectives of this study were to determine the proportion of women who do not want fetal gender disclosure at the time of prenatal ultrasonography and document their reasons. Method: A cross-sectional survey of women who were 20 weeks or more pregnant and had prenatal ultrasound at a private health facility in January 2006. The sonologist asked each of the women during the procedure whether they wanted to know fetal sex or not. Those that consented had disclosure of fetal sex while those that declined gave their reasons, which were documented. Results: Two hundred and one (201) women were studied within the study period. Most of the women (82%) were of the Hausa/Fulani ethnic group and were predominantly of the Islamic faith (90%). One hundred and ninety women (94.5%) consented to disclosure of fetal gender, while eleven (5.5%) declined. The main reason for not wanting to know fetal sex was: 'Satisfied with any one that comes'. Conclusion: Most of the pregnant women (94%) would want disclosure of fetal gender at prenatal ultrasound scan. Only 5.5% of the women would not want fetal sex disclosure because they were satisfied with whichever was there. It is advisable for the sonologist to be discreet about what to say during the procedure especially as it relates to fetal sex so as not to hurt those that do not want disclosure. abstract_id: PUBMED:25792816 Desire for prenatal gender disclosure among primigravidae in Enugu, Nigeria. Background: Prenatal gender disclosure is a nonmedical fetal ultrasonography view, which is considered ethically unjustified but has continued to grow in demand due to pregnant women's requests. Objective: The aim of this study was to determine the proportion of primigravidae who want prenatal gender disclosure and the reasons for it. Methods: This was a descriptive cross-sectional study of randomly selected primigravidae seen at Enugu Scan Centre. The women were randomly selected using a table of random numbers. Results: Ninety percent (225/250) of 250 primigravidae who fulfilled the criteria for inclusion in this study wanted to know the gender of their unborn baby, while 10% (25/250) declined gender disclosure. Furthermore, 62% (155/250) of primigravidae had a preference for male children. There was a statistically significant desire for male gender (P=0.0001). A statistically significant number of primigravidae who wanted gender disclosure did so to plan for the new baby (P=0.0001), and a statistically significant number of those who declined gender disclosure did so to "leave it to the will of GOD" (P=0.014). Conclusion: Ninety percent of primigravidae wanted gender disclosure because of plans for the new baby, personal curiosity, partner and in-laws' curiosity; moreover, some women wanted to test the accuracy of the findings at delivery, and 62% of primigravidae had a preference for male children. In view of these results, gender disclosure could be beneficial in this environment. abstract_id: PUBMED:24363561 Reasons for disclosure of gender to pregnant women during prenatal ultrasonography. Background: The objective of this study was to determine the proportion of women who want to know fetal gender on antenatal ultrasonography and the reasons behind this.
Methods: A descriptive, cross-sectional study was carried out between March 10, 2012 and September 10, 2012 at two tertiary care hospitals (Dow University Hospital, Ojha Campus, and Lady Dufferin Hospital) in Karachi. In total, 223 pregnant women who attended the antenatal clinic and gave their consent were included in the study. Information was collected on a predesigned questionnaire. Results: Of the 223 pregnant women, 109 (49.1%) were younger than 25 years. The majority (216, 96.9%) were Muslim, 164 (73.4%) were educated to different levels, 121 (54.3%) spoke Urdu, and 66 (29.6%) were primigravidas. Thirty-four (15.2%) women had a preference for a male child, 24 (10.8%) had a female preference, and 165 (74%) had no preference. Seventy (31.4%) women were interested to know the fetal gender. The association between education and gender preference was found to be statistically significant (P = 0.004), as was that between age and gender preference (P = 0.05), but no relationship was found between gender preference and gender of previous babies (P = 0.317 for males and P = 0.451 for females). The association of ethnicity was also not statistically significant (P = 0.102). Conclusion: This study revealed that 31.4% of women were interested in disclosure of gender on prenatal ultrasonography and only 15.2% of women had a preference for a male child. abstract_id: PUBMED:28191222 Accuracy of sonographic fetal gender determination: predictions made by sonographers during routine obstetric ultrasound scans. Objectives: The purpose of this study was to determine the accuracy of sonographer predictions of fetal gender during routine ultrasounds. Primarily, the study sought to investigate the accuracy of predictions made in the first trimester, as requests from parents wanting to know the gender of their fetus at this early scan are becoming increasingly common. Second and third trimester fetuses were included in the study to confirm the accuracy of later predictions. In addition, the mother's decision to know the gender was recorded to determine the prevalence of women wanting prenatal predictions. Methods: A prospective, cross-sectional study was conducted in a specialist private obstetric practice in the Illawarra, NSW. A total of 640 fetuses across three trimesters were examined collectively by seven sonographers. Fetal gender was predicted using the sagittal plane only in the first trimester and either the sagittal or transverse plane in later trimesters. Phenotypic gender confirmation was obtained from hospital records or direct telephone contact with women postnatally. Results: Results confirmed 100% accuracy in predictions made after 14 weeks gestation. The overall success rate in the first trimester group (11-14 weeks) was 75%. When excluding those scans where a prediction could not be made, success rates increased to 91%. Results were less accurate for fetuses younger than 12 weeks, with an overall success rate of 54%. Male fetuses under 13 weeks were more likely to have gender incorrectly assigned or not assigned at all. After 13 weeks, success rates for correctly predicting males exceeded those for female fetuses. Statistical differences were noted in the success rates of individual sonographers. Sixty-seven percent of women were in favour of knowing fetal gender from ultrasound. Publicly insured women were more likely to request gender disclosure than privately insured women. Conclusions: Sonographic gender determination provides high success rates in the first trimester.
Results vary depending on sonographer experience, fetal age and fetal gender. Practice guidelines regarding gender disclosure should be developed. Predictions prior to 12 weeks should be discouraged. abstract_id: PUBMED:29915761 The ultrasound identification of fetal gender at the gestational age of 11-12 weeks. Introduction: The early prenatal identification of fetal gender is of great importance. Accurate prenatal identification is currently only possible through invasive procedures. The present study was conducted to determine the accuracy and sensitivity of ultrasound fetal gender identification. Materials And Methods: The present cross-sectional study was conducted on 150 women in their 11th and 12th weeks of pregnancy in Hamadan in 2014. Ultrasound imaging performed in the 11th and 12th weeks of pregnancy for fetal gender identification identified the fetus either as a girl, a boy, or as "gender not assigned." Frequency, sensitivity, specificity, positive and negative predictive values, and accuracy of the gender identification were assessed using SPSS version 20. The significance level was 0.05 in all analyses. Results: Of the total of 150 women, the gender was identified as female in 32 (21.3%), as male in 65 (43.3%), and not assigned in 53 (35.3%); overall, gender identification was made in 64.6% of the cases. A total of 57 male fetuses were correctly identified as boys, and 8 female fetuses were wrongly identified as boys. As for the female fetuses, 31 were correctly identified as girls, and 1 male fetus was wrongly identified as a girl. The positive predictive value for the ultrasound imaging gender identification was 87.6% for the male fetuses and 96.8% for the female fetuses. Conclusion: The present study had a much higher gender identification accuracy compared to other studies. The final success of fetal gender identification was about 91% in the 11th and 12th weeks of pregnancy. abstract_id: PUBMED:24761233 Perception of male gender preference among pregnant Igbo women. Background: Male gender preference is a dominant feature of Igbo culture and could be the reason behind women seeking fetal gender at ultrasound. Aim: The aim of this study is to investigate the perception of male gender preference among prenatal ultrasound patients in a patriarchal and gender-sensitive society. Subjects And Methods: The study was a cross-sectional survey, which targeted pregnant women who presented for prenatal ultrasound at four selected hospitals in Anambra State. A convenience sample size of 790 pregnant women constituted the respondents. The data collection instrument was a 13-item semi-structured self-completion questionnaire designed in line with the purpose of the study. Descriptive and inferential statistical analyses were carried out with statistical significance being considered at P < 0.05. Results: Most of the women (88.4%, 698/790) were aware that fetal gender can be determined during the prenatal ultrasound while just over half of them (61.0%, 482/790) wanted fetal gender disclosed to them during prenatal ultrasound. More than half (58.6%, 463/790) of the women desired to have male babies in their present pregnancies while 20.1% (159/790) desired female babies and 21.3% (168/790) did not care if the baby was male or female. Some of the women (22.2%, 175/790) wanted to have male babies in their present pregnancies for various reasons, the predominant one being protecting their marriages and cementing their places in their husbands' hearts. Male gender preference was strongly perceived.
There was considerable anxiety associated with prenatal gender determination and moderate loss of interest in the pregnancy associated with disclosure of undesired fetal gender. Socio-demographic factors had a significant influence on the perception of male gender preference. Conclusion: Male gender preference is strongly perceived among Igbo women and its perception is significantly influenced by socio-demographic factors. Male gender preference may be responsible for Igbo women seeking fetal gender at ultrasound. abstract_id: PUBMED:16801957 Seeing baby: women's experience of prenatal ultrasound examination and unexpected fetal diagnosis. Objective: Although prenatal ultrasound (US) is a common clinical undertaking today, little information is available about women's experience of the procedure from the perspective of women themselves. The objective of this study was to explore women's experience of undergoing a routine prenatal US examination associated with an unexpected fetal diagnosis. Study Design: Qualitative methods were used to explore the prenatal US experience of 13 women. Five women were given unexpected news of multiple pregnancy and eight women were given unexpected news of congenital fetal abnormality. One in-depth audio-taped interview was conducted with each woman. Content analysis of interview data identified themes common to women's experience of US. Results: Identified themes of women's experience of routine prenatal US examination associated with an unexpected fetal diagnosis are: experiencing the setting, sensing information, feeling connected/disconnected, the power of the image, and communication rules. Conclusions: Women's experience of prenatal US examination is influenced by physical and environmental factors and by the behaviors of the US examiner. Behaviors of the examiner contribute to a woman's labeling of the US experience as positive or negative. Women identify being objectified by the examination and experience poor communication patterns after a fetal US diagnosis. Women's description of the US screen image as a baby suggests it is a powerful influence on subsequent clinical and ethical decision-making about the pregnancy. abstract_id: PUBMED:16435312 Why women want prenatal ultrasound in normal pregnancy. Objectives: To investigate women's reasons for requesting prenatal ultrasound in the absence of clinical indications. Methods: A postal questionnaire was completed by 370 pregnant women with no apparent obstetric risk factors, who had expressed a desire to have ultrasound scanning in their current pregnancy. The women were asked to indicate, from a list of 12 items, their three most important reasons for wanting scanning. Ninety per cent of the women were in the first trimester of pregnancy, and 10% in the second trimester. Results: The items most frequently identified as important reasons for ultrasound were to check for fetal abnormalities (60% of women), to see that all was normal (55%) and for own reassurance (44%). Lower income was related to wanting to see the baby (P = 0.028) and wanting an ultrasound picture (P = 0.017); higher income was related to checking that all was normal (P = 0.003) and for own reassurance (P = 0.015). Women in their first pregnancy were more likely to want themselves and the father to see the baby (P = 0.001); women who had given birth previously were more likely to want reassurance (P = 0.002), as were women with a previous miscarriage or induced abortion.
Women who believed that the presence of fetal trisomy justifies abortion or who would vote for free abortion were more likely to want to know about abnormalities (P < 0.001 and P < 0.004, respectively). Women in the second trimester were more likely to want to check for abnormalities (P = 0.041) and appropriate fetal growth (P = 0.047) than those in the first trimester. Conclusions: It would appear that women in normal pregnancy have specific reasons for wanting prenatal ultrasound that are influenced by sociodemographic, obstetric and attitudinal factors. abstract_id: PUBMED:30568808 Fetal Kidneys Ultrasound Appearance in the First Trimester - Clinical Significance and Limits of Counseling. Objective: The objective of this study was to determine the visualization rate of fetal kidneys at various gestational ages in the late first trimester (FT) and to establish the clinical significance of their two-dimensional ultrasound (2DUS) appearance in the FT. Methods: In a prospective cross-sectional study, 1456 women from an unselected population underwent a detailed assessment of fetal anatomy at 11+0 to 13+4 weeks of gestation with the use of transabdominal sonography. Information on the ultrasound findings, antenatal course and perinatal outcome was obtained in 1331 cases. Results: 44 cases in which a congenital kidney disease was detected by ultrasound in the prenatal period were identified. The renal pathology was suspected in the FT in 8 cases, and confirmed by a standard test (postmortem autopsy or second-trimester scan) in 4 cases. The standard detailed second-trimester scan at 18-22 weeks diagnosed another 23 cases but refuted suspicion in 4 FT-positive cases. The third trimester added another 17, all confirmed by the postpartum scan. For FT presence or absence of congenital renal anomalies, sensitivity, specificity, +LRs and -LRs of 2DUS were 9.09%, 99.69%, 29.25, and 0.91 (these figures are checked in a worked example after this question's answer). Conclusion: FT visualization of the fetal kidneys is critically dependent on the gestational age. FT diagnosis holds uncertainty. An early diagnosis carries a risk of providing a false-positive or a false-negative result, because the differentiation of the renal system is delayed or the diagnosis is not yet amenable to prenatal ultrasound. No FT finding can obviate the mid-trimester follow-up ultrasound scan. Second and third trimester scans are relevant for congenital kidney diseases. abstract_id: PUBMED:36352278 Perceived Value of Prenatal Ultrasound Screening: A Survey of Pregnant Women. Introduction: Routine prenatal screening ultrasounds primarily serve to diagnose major fetal anomalies which may prompt further testing and inform clinical decision-making, including possible pregnancy termination. Meanwhile, expectant parents may view the ultrasound experience and information gained differently from their clinicians. In this setting, how to best counsel patients, especially regarding the increasing findings of indeterminate clinical significance, is unclear. Greater understanding of women's views before undergoing their ultrasound may help to guide anticipatory counseling about the purpose of screening and interpretation of results. Methods: We surveyed 289 patients presenting for scheduled prenatal ultrasounds at an academic tertiary care center. Discrete and open-ended questions assessed views surrounding the receipt of abnormal results and management of the pregnancy once fetal anomalies are detected. Qualitative responses were analyzed using thematic analysis.
Results: Most (95%) desired information about abnormal sonographic findings, although only half would consider pregnancy termination for anomalies. Reasons for wanting return of abnormal results included preparedness, valuing knowledge, and to a lesser extent, informing decision-making. When considering potential termination as a result of ultrasound findings, participants' rationales demonstrated deontological (seeing termination as inherently impermissible or permissible), relational (duties arising from the role of being a mother), and consequentialist (weighing harms and benefits) reasoning. Conclusion: This study highlights women's perceptions of prenatal ultrasounds as an inherently valuable source of information and preparedness, beyond their role in informing clinical decision-making. Identifying the ethical constructs underpinning patients' perspectives may help direct development of counseling tools responsive to individual needs and values regarding prenatal ultrasound findings. Answer: The desire for disclosure of fetal gender during prenatal ultrasound scans varies among women. Studies have shown that a significant majority of women do want to know the fetal gender. For instance, in a study conducted in Nigeria, 94.5% of the women consented to disclosure of fetal gender, with the main reason for not wanting to know being satisfaction with any gender that comes (PUBMED:20418643). Similarly, another study in Enugu, Nigeria, found that 90% of primigravidae wanted to know the gender of their unborn baby, mainly to plan for the new baby or due to personal curiosity (PUBMED:25792816). However, the desire for gender disclosure is not universal. A study in Karachi reported that only 31.4% of women were interested in knowing the fetal gender on prenatal ultrasonography, and only 15.2% had a preference for a male child (PUBMED:24363561). In contrast, a study among Igbo women in Nigeria found that more than half (58.6%) of the women desired male babies in their current pregnancies, indicating a strong perception of male gender preference which could influence the desire for fetal gender disclosure (PUBMED:24761233). The reasons for wanting to know the fetal gender can include planning for the baby, personal or partner's curiosity, testing the accuracy of the findings at delivery, and cultural or societal preferences for a particular gender (PUBMED:25792816, PUBMED:24761233). On the other hand, some women decline gender disclosure, preferring to leave it to the will of God or not having a preference for either gender (PUBMED:20418643, PUBMED:25792816). In summary, while a substantial proportion of women do want to know the fetal gender during prenatal ultrasound scans, there is a minority who either do not want to know or are indifferent to the information. The reasons for these preferences are varied and can be influenced by cultural, personal, and religious factors.
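Two of the screening abstracts above report derived accuracy figures that can be verified from the stated counts using the standard definitions; the only assumption below is that the "gender not assigned" cases are excluded from the predictive values, which the reported percentages imply:

\mathrm{PPV}_{\text{boy}} = \frac{57}{57+8} \approx 87.7\%, \qquad \mathrm{PPV}_{\text{girl}} = \frac{31}{31+1} \approx 96.9\%

LR^{+} = \frac{\text{sensitivity}}{1-\text{specificity}} = \frac{0.0909}{1-0.9969} \approx 29.3, \qquad LR^{-} = \frac{1-\text{sensitivity}}{\text{specificity}} = \frac{0.9091}{0.9969} \approx 0.91

These reproduce, to rounding, the 87.6% and 96.8% predictive values in PUBMED:29915761 and the 29.25 and 0.91 likelihood ratios in PUBMED:30568808; the latter pair also shows why a first-trimester kidney scan is far better at ruling anomalies in than ruling them out, since LR- stays close to 1.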
Instruction: Obstetric care and payment source: do low-risk Medicaid women get less care? Abstracts: abstract_id: PUBMED:12795306 Medicare program; change in methodology for determining payment for extraordinarily high-cost cases (cost outliers) under the acute care hospital inpatient and long-term care hospital prospective payment systems. Final rule. In this final rule, we are revising the methodology for determining payments for extraordinarily high-cost cases (cost outliers) made to Medicare-participating hospitals under the acute care hospital inpatient prospective payment system (IPPS). Under the existing outlier methodology, the cost-to-charge ratios from hospitals' latest settled cost reports are used in determining a fixed-loss amount cost outlier threshold. We have become aware that, in some cases, hospitals' recent rate-of-charge increases greatly exceed their rate-of-cost increases. Because there is a time lag between the cost-to-charge ratios from the latest settled cost report and current charges, this disparity between the rates of increase for charges and costs results in cost-to-charge ratios that are too high, which in turn results in an overestimation of hospitals' current costs per case (a toy illustration of this computation appears after this record). Therefore, we are revising our outlier payment methodology to ensure that outlier payments are made only for truly expensive cases. We also are revising the methodology used to determine payment for high-cost outlier and short-stay outlier cases that are made to Medicare-participating long-term care hospitals (LTCHs) under the long-term care hospital prospective payment system (LTCH PPS). The policies for determining outlier payment under the LTCH PPS are modeled after the outlier payment policies under the IPPS. abstract_id: PUBMED:16696154 Medicare program; prospective payment system for long-term care hospitals RY 2007: annual payment rate updates, policy changes, and clarification. Final rule. This final rule updates the annual payment rates for the Medicare prospective payment system (PPS) for inpatient hospital services provided by long-term care hospitals (LTCHs). The payment amounts and factors used to determine the updated Federal rates that are described in this final rule have been determined for the LTCH PPS rate year July 1, 2006 through June 30, 2007. The annual update of the long-term care diagnosis-related group (LTC-DRG) classifications and relative weights remains linked to the annual adjustments of the acute care hospital inpatient diagnosis-related group system, and will continue to be effective each October 1. The outlier threshold for July 1, 2006, through June 30, 2007, is also derived from the LTCH PPS rate year calculations. We are also finalizing policy changes and making clarifications. abstract_id: PUBMED:24926415 Medicare post-acute care episodes and payment bundling. Background: The purpose of this paper is to examine service use in an episode of acute and post-acute care (PAC) under alternative episode definitions and to look at geographic differences in episode payments. Data And Methods: The data source for these analyses was a Medicare claims file for 30 percent of beneficiaries with an acute hospital-initiated episode in 2008 (N = 1,705,794, of which 38.7 percent went on to use PAC). Fixed length episodes of 30, 60, and 90 days were examined. Analyses examined differences in definitions allowing any claim within the fixed length period to be part of the episode versus prorating a claim extending past the episode endpoint.
Readmissions were also examined as an episode endpoint. Payments were standardized to allow for comparison of episode payments per acute hospital discharge or PAC user across states. Results: The results of these analyses provide information on the composition of service use under different episode definitions and highlight considerations for providers and payers testing different alternatives for bundled payment. abstract_id: PUBMED:15132146 Medicare program; prospective payment system for long-term care hospitals: annual payment rate updates and policy changes. Final rule. This final rule updates the annual payment rates for the Medicare prospective payment system (PPS) for inpatient hospital services provided by long-term care hospitals (LTCHs). The payment amounts and factors used to determine the updated Federal rates that are described in this final rule have been determined based on the LTCH PPS rate year. The annual update of the long-term care diagnosis-related group (LTC-DRG) classifications and relative weights remains linked to the annual adjustments of the acute care hospital inpatient diagnosis-related group system, and will continue to be effective each October 1. The outlier threshold for July 1, 2004 through June 30, 2005 is also derived from the LTCH PPS rate year calculations. In this final rule, we also are making clarifications to the existing policy regarding the designation of a satellite of a LTCH as an independent LTCH. In addition, we are expanding the existing interrupted stay policy and changing the procedure for counting days in the average length of stay calculation for Medicare patients for hospitals qualifying as LTCHs. abstract_id: PUBMED:15880887 Medicare program; prospective payment system for long-term care hospitals: annual payment rate updates, policy changes, and clarification. Final rule. This final rule updates the annual payment rates for the Medicare prospective payment system (PPS) for inpatient hospital services provided by long-term care hospitals (LTCHs). The payment amounts and factors used to determine the updated Federal rates that are described in this final rule have been determined based on the LTCH PPS rate year July 1, 2005 through June 30, 2006. The annual update of the long-term care diagnosis-related group (LTC-DRG) classifications and relative weights remains linked to the annual adjustments of the acute care hospital inpatient diagnosis-related group system, and will continue to be effective each October 1. The outlier threshold for July 1, 2005 through June 30, 2006 is also derived from the LTCH PPS rate year calculations. We are adopting new labor market area definitions for the purpose of geographic classification and the wage index. We are also making policy changes and clarifications. abstract_id: PUBMED:12793455 Medicare program; prospective payment system for long-term care hospitals: annual payment rate updates and policy changes. Final rule. This final rule establishes the annual update of the payment rates for the Medicare prospective payment system (PPS) for inpatient hospital services provided by long-term care hospitals (LTCHs). It also changes the annual period for which the rates are effective. The rates will be effective from July 1 to June 30 instead of from October 1 through September 30, establishing a "long-term care hospital rate year" (LTCH PPS rate year). We also change the publication schedule for these updates to allow for an effective date of July 1. 
The payment amounts and factors used to determine the updated Federal rates that are described in this final rule have been determined based on this revised LTCH PPS rate year. The annual update of the long-term care diagnosis-related groups (LTC-DRG) classifications and relative weights remains linked to the annual adjustments of the acute care hospital inpatient diagnosis-related group system, and will continue to be effective each October 1. The outlier threshold for July 1, 2003, through June 30, 2004, is also derived from the LTCH PPS rate year calculations. In addition, we are making an adjustment to the short-stay outlier policy for certain LTCHs and a policy change eliminating bed-number restrictions for pre-1997 LTCHs that have established satellite facilities and elect to be paid 100 percent of the Federal rate or when the LTCH is fully phased-in to 100 percent of the Federal prospective rate after the transition period. abstract_id: PUBMED:28893815 Risk Stratification Methods and Provision of Care Management Services in Comprehensive Primary Care Initiative Practices. Purpose: Risk-stratified care management is essential to improving population health in primary care settings, but evidence is limited on the type of risk stratification method and its association with care management services. Methods: We describe risk stratification patterns and association with care management services for primary care practices in the Comprehensive Primary Care (CPC) initiative. We undertook a qualitative approach to categorize risk stratification methods being used by CPC practices and tested whether these stratification methods were associated with delivery of care management services. Results: CPC practices reported using 4 primary methods to stratify risk for their patient populations: a practice-developed algorithm (n = 215), the American Academy of Family Physicians' clinical algorithm (n = 155), payer claims and electronic health records (n = 62), and clinical intuition (n = 52). CPC practices using practice-developed algorithm identified the most number of high-risk patients per primary care physician (282 patients, P = .006). CPC practices using clinical intuition had the most high-risk patients in care management and a greater proportion of high-risk patients receiving care management per primary care physician (91 patients and 48%, P =.036 and P =.128, respectively). Conclusions: CPC practices used 4 primary methods to identify high-risk patients. Although practices that developed their own algorithm identified the greatest number of high-risk patients, practices that used clinical intuition connected the greatest proportion of patients to care management services. abstract_id: PUBMED:26606762 Medicare Program; Comprehensive Care for Joint Replacement Payment Model for Acute Care Hospitals Furnishing Lower Extremity Joint Replacement Services. Final rule. This final rule implements a new Medicare Part A and B payment model under section 1115A of the Social Security Act, called the Comprehensive Care for Joint Replacement (CJR) model, in which acute care hospitals in certain selected geographic areas will receive retrospective bundled payments for episodes of care for lower extremity joint replacement (LEJR) or reattachment of a lower extremity. All related care within 90 days of hospital discharge from the joint replacement procedure will be included in the episode of care. 
We believe this model will further our goals in improving the efficiency and quality of care for Medicare beneficiaries with these common medical procedures. abstract_id: PUBMED:30198683 Medicare Program; Certain Changes to the Low-Volume Hospital Payment Adjustment Under the Hospital Inpatient Prospective Payment Systems (IPPS) for Acute Care Hospitals for Fiscal Years 2011 Through 2017. Application of a payment adjustment. This document announces changes to the payment adjustment for low-volume hospitals under the hospital inpatient prospective payment systems (IPPS) for acute care hospitals for fiscal years (FYs) 2011 through 2017 in accordance with section 429 of the Consolidated Appropriations Act, 2018. abstract_id: PUBMED:28574240 Medicare Program; Advancing Care Coordination Through Episode Payment Models (EPMs); Cardiac Rehabilitation Incentive Payment Model; and Changes to the Comprehensive Care for Joint Replacement Model (CJR); Delay of Effective Date. Final rule; delay of effective date. This final rule finalizes May 20, 2017 as the effective date of the final rule titled "Advancing Care Coordination Through Episode Payment Models (EPMs); Cardiac Rehabilitation Incentive Payment Model; and Changes to the Comprehensive Care for Joint Replacement Model (CJR)" originally published in the January 3, 2017 Federal Register. This final rule also finalizes a delay of the applicability date of the regulations at 42 CFR part 512 from July 1, 2017 to January 1, 2018 and delays the effective date of the specific CJR regulations listed in the DATES section from July 1, 2017 to January 1, 2018. Answer: The provided abstracts do not contain information directly related to obstetric care and payment sources, or the comparison of care provided to low-risk Medicaid women versus women with other types of insurance. The abstracts focus on various aspects of Medicare payment systems, including changes to payment methodologies for high-cost cases (PUBMED:12795306), updates to payment rates for long-term care hospitals (PUBMED:16696154, PUBMED:15132146, PUBMED:15880887, PUBMED:12793455), examination of service use and payment bundling in post-acute care episodes (PUBMED:24926415), risk stratification methods in primary care practices (PUBMED:28893815), implementation of a new payment model for joint replacement services (PUBMED:26606762), changes to the low-volume hospital payment adjustment (PUBMED:30198683), and delays in the effective date of certain payment models (PUBMED:28574240). None of these abstracts discuss obstetric care or compare the level of care provided to Medicaid patients versus patients with other types of insurance. To answer the question about obstetric care and payment source for low-risk Medicaid women, one would need to look for studies or reports that specifically address the quality and quantity of obstetric care provided to Medicaid patients compared to those with private insurance or other payment sources. Such studies would likely examine factors such as access to prenatal care, interventions during childbirth, and postnatal support, and how these may differ based on the source of payment or insurance coverage.
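The outlier mechanism that PUBMED:12795306 revises rests on one computation: a case's cost is estimated as billed charges times the hospital's cost-to-charge ratio (CCR), and an outlier payment is made only on estimated cost above the DRG payment plus a fixed-loss threshold. A toy sketch of that logic under stated assumptions — the threshold, DRG payment, and 80% marginal factor here are illustrative stand-ins, not actual CMS parameters:

```python
# Toy illustration of the IPPS cost-outlier logic; all dollar figures
# and the marginal cost factor are hypothetical, not actual CMS values.
def outlier_payment(charges: float, ccr: float, drg_payment: float,
                    fixed_loss_threshold: float = 33_000.0,
                    marginal_cost_factor: float = 0.80) -> float:
    """Outlier payment for one case: estimated cost = charges * CCR;
    only cost above (DRG payment + threshold) is paid, at the margin."""
    estimated_cost = charges * ccr
    cutoff = drg_payment + fixed_loss_threshold
    return max(0.0, (estimated_cost - cutoff) * marginal_cost_factor)

# The distortion the rule targets: if charges outpace costs, a stale
# (too-high) CCR inflates estimated cost and triggers outlier payments
# for cases that are not truly extraordinary.
print(outlier_payment(charges=200_000, ccr=0.50, drg_payment=20_000))  # 37600.0
print(outlier_payment(charges=200_000, ccr=0.35, drg_payment=20_000))  # 13600.0
```

The second call shows the intended effect of the revised methodology: using a more current, lower CCR sharply reduces the estimated cost and hence the outlier payment.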
Instruction: Is the success rate of endoscopic third ventriculostomy age-dependent? Abstracts: abstract_id: PUBMED:27863276 Simplest radiological measurement related to clinical success in endoscopic third ventriculostomy. Objective: Radiologic criteria for a successful endoscopic third ventriculostomy are not clearly defined and there is an ongoing need to determine the simplest and strongest radiological criteria for this purpose. This paper aims to determine the easiest radiological parameter related to surgical outcome. Methods: Between January 2012 and December 2015 all patients receiving endoscopic third ventriculostomy with various indications were reviewed and 29 patients whose preoperative and early postoperative 3D-CISS images were available were studied. There were 13 males and 16 females, and there were 11 pediatric cases (mean age: 9.90±5.2; range: 2-18). The mean age of the entire population was 26.58±18.32 (range: 2-68 years). Measurements were performed using the ruler tool of freely distributed medical imaging software. Simple ruler measurements of ventricular floor depression, lamina terminalis bowing, anterior commissure to tuber cinereum distance, mamillary body to lamina terminalis distance, third ventricular width, frontal horn width and occipital horn width were recorded and compared between successful and failed interventions. Results: Of the ventriculostomies, 22 (75.9%) were considered successful and 7 (24.1%) failed at the last follow-up visit. Of the measurements performed, only those related to the third ventricle itself were significantly higher in the failed group. There was no association with lateral ventricular measurements. Conclusion: Simple ruler measurements of the suggested distances significantly correlate with clinical success. After validating our results with a higher number of patients, complex measurements and calculations to determine the link between clinical success and radiological success of ventriculostomy procedures may not be needed. abstract_id: PUBMED:34560294 Transependymal Edema as a Predictor of Endoscopic Third Ventriculostomy Success in Pediatric Hydrocephalus. Background: The Endoscopic Third Ventriculostomy Success Score (ETVSS) is based on the clinical features of hydrocephalus except for radiological findings. A previous study suggested that transependymal edema (TEE) as a radiological finding may be a reliable predictor of endoscopic third ventriculostomy (ETV) success in patients of all ages. We aimed to investigate whether TEE on preoperative magnetic resonance imaging can predict ETV success in pediatric patients. Methods: Medical and radiological records of all pediatric patients with an initial ETV in our hospital between 2013 and 2019 were retrospectively reviewed. Results: This study included 32 patients with hydrocephalus. The median age at surgery was 10.0 years (interquartile range: 5.6-12.9 years). There were 20 patients in the high ETVSS (90-80) group and 12 patients in the moderate ETVSS (70-50) group. The median follow-up period was 29.0 months (interquartile range: 12.9-46.2 months). The ETV success rate at the final follow-up was 81%. Preoperative brain magnetic resonance imaging revealed TEE in 20 patients and third ventricle floor ballooning in 25 patients, of whom 19 (95%) and 22 (88%), respectively, achieved successful ETV. Patients with TEE had a significantly better outcome than patients without TEE (95% vs. 58%, P = 0.018).
Multivariate analysis demonstrated that the presence of TEE (odds ratio 13.6, 95% confidence interval 1.3-137.5, P = 0.027) is a significant predictor of ETV success (the OR and its CI are reconstructed in a sketch after this record). Conclusions: In our cohort with a high or moderate ETVSS, the ETV success rate in patients with TEE was significantly higher than in patients without TEE, suggesting that TEE may be a useful predictor of ETV success in pediatric hydrocephalus. abstract_id: PUBMED:38423458 COMBINED PREDICTIVE MODEL FOR ENDOSCOPIC THIRD VENTRICULOSTOMY SUCCESS IN ADULTS AND CHILDREN. Background: The selection of patients in whom endoscopic third ventriculostomy (ETV) can be effective remains poorly defined. The endoscopic third ventriculostomy success score (ETVSS) and the presence of bowing of the third ventricle have been identified as independent factors for predicting success, each with limitations. The objective of this study is to elaborate a combined predictive model to predict ETV success in a mixed cohort of patients. Methods: Demographic, intraoperative, postoperative, and radiological variables were analyzed in all ventriculostomies performed consecutively at a single institution from December 2004 to December 2022. Qualitative and quantitative measurements of preoperative, immediate and late postoperative MRI were conducted. Univariate analysis and logistic regression models were performed. Results: 118 ETV were performed in the selected period. 106 procedures met inclusion criteria. The overall success rate was 71.7%, with a median follow-up of 3.64 years [1.06;5.62]. The median age was 36.1 years [11.7;53.5]. 35.84% were children (median=7.81 years). Among the 80 patients with third ventricle bowing, the success rate was 88.8% (p < 0.001). Larger third ventricle dimensions on preoperative mid-sagittal MRI were associated with increased ETV success. The model with the best receiver operating characteristic (ROC) curves, with an area under the curve (AUC) of 0.918 (95% CI 0.856;0.979), includes sex, ETVSS, presence of complications and third ventricle bowing. Conclusions: The presence of bowing of the third ventricle is strongly associated with a higher ETV success rate. However, a combined predictive model that integrates it with the ETVSS is the most appropriate approach for selecting patients for ETV. abstract_id: PUBMED:12420119 Is the success rate of endoscopic third ventriculostomy age-dependent? An analysis of the results of endoscopic third ventriculostomy in young children. Introduction: Different opinions exist in the literature about the effectiveness of endoscopic third ventriculostomy (ETV) in the treatment of hydrocephalus in young children. Therefore we made a retrospective evaluation of our own success rates of performing ETVs in children less than 2 years of age. Materials And Methods: In a series of 275 ETVs, 66 procedures were performed in children less than 2 years of age. Results: The overall success rate in this young age group was 53%, lower than the success rates of ETVs reported in the literature (72-92%). But further analysis of these results and a comparison of the results in subgroups with different etiologies of the hydrocephalus showed that the success rates varied between 20 and 88%. Conclusion: We conclude that the success of ETV depends mainly on the etiology of the hydrocephalus and not on the age of the patient alone. abstract_id: PUBMED:38017131 Factors affecting endoscopic third ventriculostomy success in adults.
Background: Endoscopic third ventriculostomy (ETV) is a standard treatment in hydrocephalus of certain aetiologies. The most widely used predictive model is the ETV success score. This is frequently used to predict outcomes following ETV in adult patients; however, this was a model developed in paediatric patients with often distinct aetiologies of hydrocephalus. The aim of this study was to assess the predictive value of the model and to identify factors that influence ETV outcomes in adults. Methods: A retrospective study design was used to analyse consecutive patients who underwent ETV at a tertiary neurosurgical centre between 2012 and 2020. Observed ETV outcomes at 6 months were compared to pre-operative predicted ETV success scores. A multivariable Bayesian logistic regression analysis was used to determine the factors that best predicted ETV success and those factors that were redundant. Results: A total of 136 patients were analysed during the 9-year study. Thirty-one patients underwent further cerebrospinal fluid diversion within 6 months. The overall ETV success rate was 77%. Observed ETV outcomes corresponded well with predicted outcomes using the ETV success score for the higher scores, but less well for lower scores. Location of obstruction at the aqueduct, irrespective of aetiology, was the best predictor of success, with odds of success of 1.65. Elective procedures were also associated with higher success compared to urgent ones, whereas age under 70, nature and location of obstructive lesion (other than aqueductal) did not influence ETV success. Conclusion: ETV was successful in three-quarters of adult patients with hydrocephalus within 6 months. Obstruction at the level of the aqueduct of any aetiology was a good predictor of ETV success. Clinicians should bear in mind that adult hydrocephalus responds differently to ETV compared to paediatric hydrocephalus, and more research is required to develop and validate an adult-specific predictive tool. abstract_id: PUBMED:35751962 Prediction of 6 months endoscopic third ventriculostomy success rate in patients with hydrocephalus using a multi-layer perceptron network. Objective: Discrimination between patients most likely to benefit from endoscopic third ventriculostomy (ETV) and those at higher risk of failure is challenging. Compared to other standard models, we have tried to develop a prognostic multi-layer perceptron model based on potentially high-impact new variables for predicting the ETV success score (ETVSS). Methods: Clinical and radiological data of 128 patients have been collected, and ETV outcomes were evaluated. The success of ETV was defined as remission of symptoms and not requiring VPS for six months after surgery. Several clinical and radiological features have been used to construct the model. Then the Binary Gravitational Search algorithm was applied to extract the best set of features. Finally, two models were created based on these features, multi-layer perceptron, and logistic regression. Results: Eight variables have been selected (age, callosal angle, bifrontal angle, bicaudate index, subdural hygroma, temporal horn width, third ventricle width, frontal horn width). The neural network model was constructed upon the selected features. The result was AUC: 0.913 and accuracy: 0.859. Then the BGSA algorithm removed half of the features, and the remaining (age, temporal horn width, bifrontal angle, frontal horn width) were applied to construct models.
The ANN could reach an accuracy of 0.84, AUC: 0.858 and positive predictive value (PPV): 0.92, which was higher than the logistic regression model (accuracy: 0.80, AUC: 0.819, PPV: 0.89) (a hedged sketch of this model comparison appears after this record). Conclusion: The research findings have shown that the MLP model is more effective than the classic logistic regression tools in predicting the ETV success rate. In this model, two newly added features, the width of the lateral ventricle's temporal horn and the lateral ventricle's frontal horn, yield a relatively high inter-observer reliability. abstract_id: PUBMED:24403957 Success rate of endoscopic third ventriculostomy in infants below six months of age with congenital obstructive hydrocephalus (a preliminary study of eight cases). Aim: In this study, we assessed the outcome of Endoscopic Third Ventriculostomy (ETV) in infants below six months of age in cases of congenital obstructive hydrocephalus. Materials And Methods: The study was done prospectively on eight cases of obstructive hydrocephalus in infants younger than six months of age to assess the success rate of ETV as a primary treatment for hydrocephalus in this age group; in cases of evident failure, a ventriculo-peritoneal (VP) shunt was applied. Results: Despite eliminating the factors suggested as causes of ETV failure in infants below six months (the type, as with communicating hydrocephalus; the thickness of the third ventricular floor; and a history of previous intracranial hemorrhage or central nervous system infection), the success rate still did not exceed 12.5%. Conclusions: The complication rate following ETV was low in comparison to the high frequency (20-80%) and seriousness of the possible postoperative complications following VP shunt, which significantly decrease the quality of patients' lives. Hence, decision-making and parental counselling amounted to an attempt to estimate the likelihood of ETV success or the need to perform a shunt in the treatment of obstructive hydrocephalus. abstract_id: PUBMED:31691874 Prediction of endoscopic third ventriculostomy (ETV) success with preoperative third ventricle floor bowing (TVFB): a supplement to ETV success score. Preoperative judgement of which children are likely to benefit from endoscopic third ventriculostomy (ETV) is still the most difficult challenge. This study aimed to compare the efficiency of third ventricular floor bowing (TVFB) and ETV success score (ETVSS) in selecting ETV candidates and achieve a better preoperative patient selection method for ETV based on our institutional experience. Children (≤16 years old) with newly diagnosed hydrocephalus treated with ETV between January 2013 and June 2018 were included in this prospective study. Patients with TVFB receive the ETV procedure in the pediatric subgroup of our department. ETVSS was calculated in every patient. The ETVSS-predicted ETV success rate and the actual ETV success rate in our institution were compared and further analyzed. One hundred twenty-nine children with TVFB were enrolled in our study. The mean age at ETV was 5.84 ± 5.17 years (range, 0.04-16). Brain tumors, aqueductal stenosis, and inflammation are the most common hydrocephalus etiologies. The most common complication was noninfectious fever (3.1%). During the average follow-up of 19.5 ± 14.95 months, twenty-five patients showed ETV failure. The actual ETV success rate (81%) in our study was higher than the success rate (69%) predicted by ETVSS. TVFB is a pragmatic, efficient, and simple model to predict the ETV outcome.
We suggest that for hydrocephalic patients with preoperative third ventricular floor bowing, ETV should be the first treatment choice regardless of the ETV success score. For patients without this sign, ETVSS should be applied to select ETV candidates. abstract_id: PUBMED:31158842 Role of Secondary Endoscopic Third Ventriculostomy in Children: Review of an Institutional Experience. Background: Endoscopic third ventriculostomy (ETV) has become a standard and safe procedure for obstructive hydrocephalus. ETV can also play an important role in children presenting with shunt malfunction with an added advantage of shunt independence. Secondary ETV can be defined as either a redo endoscopic ventriculostomy done after primary ETV stoma closure or that done in cases presenting with shunt malfunction. Objective: The aim of our study was to evaluate the role of secondary ETV in pediatric patients. Methods: This is a retrospective analysis of 36 children (<18 years) who underwent ETV after shunt malfunction and 4 children with ETV done after previous ETV stoma closure from 2004 until 2018. In all patients, the obstructive pattern suggesting aqueduct outflow obstruction was observed on MRI. Patients were followed up for a mean period of 4.25 years. Results: ETV was considered successful if the patient avoided a shunt insertion later on in their life. Considering this definition, a success rate of 72% was observed with secondary ETV for shunt malfunction whereas a success rate of 75% was observed after primary ETV failure without any major side effects in any of the patients. Conclusion: ETV can be considered a primary treatment modality in children with shunt malfunction and has a good success rate in cases presenting with closure of previously performed ETV stoma. abstract_id: PUBMED:35733459 Failure of Endoscopic Third Ventriculostomy. Endoscopic third ventriculostomy (ETV) is an alternative to cerebrospinal fluid (CSF) shunting in the treatment of hydrocephalus. Careful patient selection is critical as patient age, etiology of hydrocephalus, and previous shunting have been shown to influence ETV success rates. Intraoperatively, patient anatomy and medical stability may prevent or limit the completion of the ventriculostomy procedure, and findings such as a patulous third ventricular floor or cisternal scarring may portend a lower chance of successful hydrocephalus treatment. Patients in whom a ventriculostomy is completed may still experience continued symptoms of hydrocephalus or CSF leak, representing an early ETV failure. In other patients, the ETV may prove a durable treatment of hydrocephalus for several months or even years before recurrence of hydrocephalus symptoms. The failure pattern for ETV is different from that of shunting, with a higher early failure rate but improved long-term failure-free survival rates. The risk factors for failure, along with the presentation and management of failure, deserve review. Answer: The success rate of endoscopic third ventriculostomy (ETV) appears to be influenced by age, but it is not solely dependent on it. In a study analyzing the results of ETV in young children, it was found that the overall success rate in children less than 2 years of age was 53%, which is lower than the success rates reported in the literature for older age groups (72-92%).
However, the success rates varied significantly between different etiologies of hydrocephalus, suggesting that the etiology of hydrocephalus is a more critical factor than age alone (PUBMED:12420119). Another study assessing the outcome of ETV in infants below six months of age with congenital obstructive hydrocephalus reported a success rate of only 12.5%, despite eliminating factors suggested as causes of ETV failure in this age group (PUBMED:24403957). In contrast, a study that included a mixed cohort of adults and children found that the presence of bowing of the third ventricle was strongly associated with a higher ETV success rate, and a combined predictive model that integrates it with the ETV success score (ETVSS) was the most appropriate approach for selecting patients for ETV (PUBMED:38423458). Another study in adults found that the location of obstruction at the aqueduct, irrespective of etiology, was the best predictor of success, and age under 70 did not influence ETV success (PUBMED:38017131). Overall, while age may be a factor in the success rate of ETV, it is not the sole determinant. Other factors such as the etiology of hydrocephalus, the presence of third ventricle bowing, and the location of the obstruction are also important predictors of ETV success. Therefore, careful patient selection considering multiple factors is critical for predicting the outcome of ETV (PUBMED:35751962, PUBMED:35733459).
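The TEE result discussed above (PUBMED:34560294) reports an odds ratio of 13.6 (95% CI 1.3-137.5). Both figures can be recovered from the counts the abstract implies — 19/20 ETV successes with TEE versus roughly 7/12 without (58%) — using the standard Woolf log-OR interval; the 7/12 split is our inference, and the match between this crude OR and the reported multivariate estimate is a coincidence of the small sample:

```python
import math

# 2x2 table inferred from PUBMED:34560294:
#   TEE present: 19 ETV successes, 1 failure  (95% of 20)
#   TEE absent:   7 ETV successes, 5 failures (~58% of 12)
a, b = 19, 1   # successes / failures with TEE
c, d = 7, 5    # successes / failures without TEE

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
# OR = 13.6, 95% CI 1.3-137.5
```

The extremely wide interval reflects the single failure in the TEE group: with one event in a cell, the point estimate is unstable even when the P value is nominally significant.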
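The multi-layer perceptron study above (PUBMED:35751962) compares an MLP against logistic regression on four selected predictors (age, temporal horn width, bifrontal angle, frontal horn width). A minimal scikit-learn sketch of that comparison, with synthetic placeholder data standing in for the study's 128 patients and illustrative hyperparameters that are not those of the paper (the BGSA feature-selection step is omitted):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder data: 128 "patients" x 4 BGSA-selected features; replace
# with real measurements. The outcome is synthetic, for illustration.
X = rng.normal(size=(128, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=128) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(16,),
                                       max_iter=2000, random_state=0)),
    "LogReg": make_pipeline(StandardScaler(), LogisticRegression()),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]  # predicted P(ETV success)
    print(name,
          f"AUC={roc_auc_score(y_te, prob):.3f}",
          f"acc={accuracy_score(y_te, prob > 0.5):.3f}")
```

With only 128 patients, the reported MLP-versus-logistic-regression gap (AUC 0.858 vs. 0.819) is modest, which is why a pipeline like this is usually evaluated with cross-validation rather than a single split.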
Instruction: Acute coronary syndromes complicated by symptomatic and asymptomatic heart failure: does current treatment comply with guidelines? Abstracts: abstract_id: PUBMED:15131543 Acute coronary syndromes complicated by symptomatic and asymptomatic heart failure: does current treatment comply with guidelines? Background: Patients with acute coronary syndromes (ACS) complicated by heart failure (HF) are at increased risk of death. Treatment with angiotensin-converting enzyme inhibitors (ACEI), beta-blockers, and early invasive risk stratification are recommended for these patients. Aim: The purpose of the current study was to assess adherence to treatment guidelines of patients with ACS complicated by HF in Europe and the Mediterranean region. Methods And Results: Of the 10,484 patients who participated in the Euro-Heart ACS survey, 9587 had known HF status and were without cardiogenic shock; 7058 (74%) did not have symptomatic HF and 2529 (26%) presented with or developed symptomatic HF during hospitalization. HF patients were older and had more cardiovascular risk factors. ACEI were more commonly used in HF patients (75% vs 56%, P < .01), whereas beta-blockers were less frequently used (75% vs 82%, P < .01). Coronary angiography and in-hospital revascularization rates were lower among HF patients (42% vs 57% for coronary angiography, P < .01, and 32% vs 42% for revascularization, P < .01). Similar trends were noticed among patients with left ventricular dysfunction (symptomatic and asymptomatic). Adjusted in-hospital mortality risk was higher among patients with ACS complicated by symptomatic HF regardless of electrocardiographic type of ACS (ST-elevation ACS, OR 2.5, 95% CI 1.6-3.9; non-ST-elevation ACS, OR 8.9, 95% CI 4.5-17.7; undetermined-ECG ACS, OR 9.3, 95% CI 2.5-34). Conclusions: Patients with ACS complicated by HF were at increased risk of dying. A relatively high percentage of HF patients were treated with ACEI and beta-blockers in accordance with current recommendations. Rates of coronary angiography and revascularization were significantly lower in ACS patients with HF versus those without HF, which potentially contributed to their worse mortality [corrected]. abstract_id: PUBMED:18939910 Acute myocardial infarction complicated by cardiogenic shock: role of mechanical circulatory support. Acute myocardial infarction complicated by cardiogenic shock (AMI-CS) is the leading cause of in-hospital death for patients admitted with acute coronary syndromes. Expert guidelines for the care of AMI-CS patients recommend early revascularization with intra-aortic balloon pump support. Ventricular assist devices (VADs) offer the advantages of providing greater and longer-term cardiac support than an intra-aortic balloon pump and may improve outcomes when inserted early after heart failure symptoms begin. Pulsatile VADs are versatile and can provide biventricular support but are associated with a higher incidence of serious complications. The newer percutaneous VADs can normalize cardiac index and can be implanted without surgery. Therefore, early implementation of percutaneous VADs and early revascularization may reduce the high mortality of AMI-CS. However, access to revascularization and VAD support, including percutaneous VADs, is currently limited and must improve to more effectively treat AMI-CS patients. abstract_id: PUBMED:29421687 Baseline Blood Pressure, the 2017 ACC/AHA High Blood Pressure Guidelines, and Long-Term Cardiovascular Risk in SPRINT.
Background: The 2017 American College of Cardiology (ACC)/American Heart Association (AHA) guidelines include lower thresholds to define hypertension than previous guidelines. Little is known about the impact of these guideline changes in patients with or at high risk for cardiovascular disease. Methods: In this exploratory analysis using baseline blood pressure assessments in the Systolic Blood Pressure Intervention Trial (SPRINT), we evaluated the prevalence and associated cardiovascular prognosis of patients newly reclassified with hypertension based on the 2017 ACC/AHA (systolic blood pressure ≥130 mm Hg or diastolic blood pressure ≥80 mm Hg) compared with the Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation and Treatment of High Blood Pressure (JNC 7) guidelines (systolic blood pressure ≥140 mm Hg or diastolic blood pressure ≥90 mm Hg). The primary endpoint was the composite of myocardial infarction, other acute coronary syndromes, stroke, heart failure, or cardiovascular death. Results: In 4683 patients assigned to the standard treatment arm of SPRINT, 2328 (49.7%) met hypertension thresholds by JNC 7 guidelines, and another 1424 (30.4%) were newly reclassified as having hypertension based on the 2017 ACC/AHA guidelines. Over a 3.3-year median follow-up, 319 patients experienced the primary endpoint (87 of whom were newly reclassified with hypertension based on the revised guidelines). Patients with hypertension based on prior guidelines compared with those newly identified with hypertension based on the new guidelines had similar risk of the primary endpoint (2.3 [95% confidence interval {CI}, 2.0-2.7] vs 2.0 [95% CI, 1.6-2.4] events per 100 patient-years; adjusted HR, 1.10 [95% CI, 0.84-1.44]; P = .48) (the per-100-patient-years rate computation is sketched after this record). Conclusions: The 2017 ACC/AHA high blood pressure guidelines are expected to significantly increase the prevalence of patients with hypertension (perhaps to a greater extent in higher-risk patient cohorts compared with the general population) and identify greater numbers of patients who will ultimately experience adverse cardiovascular events.
In contrast, echocardiography was performed in 376 (68.4%) CS patients, 1,812 (71.6%) patients with mild to moderate HF, and 4,484 (63.5%) patients without HF. Conclusions: In contrast to the unequivocal recommendations to use RHC for hemodynamically unstable ACS patients, RHC is infrequently used in current clinical practice. Perhaps it has been supplanted by echocardiography, a noninvasive and readily available modality. Given the reserved use of RHC in ACS patients, and the reported complications associated with RHC, the guidelines regarding its use should be reconsidered. abstract_id: PUBMED:11346067 New recommendations from the 1999 American College of Cardiology/American Heart Association acute myocardial infarction guidelines. Objective: To review literature relating to significant changes in drug therapy recommendations in the 1999 American College of Cardiology (ACC)/American Heart Association (AHA) guidelines for treating patients with acute myocardial infarction (AMI). Data Sources: 1999 ACC/AHA AMI guidelines, English-language clinical trials, reviews, and editorials researching the role of drug therapy and primary angioplasty for AMI that were referenced in the guidelines were included. Additional data published in 2000 or unpublished were also included if relevant to interpretation of the guidelines. Study Selection: The articles selected influence AMI treatment recommendations. Data Synthesis: Many clinicians and health systems use the ACC/AHA AMI guidelines to develop treatment plans for AMI patients. This review highlights important changes in AMI drug therapy recommendations by reviewing the results of recent clinical trials. Insights into evolving drug therapy strategies that may impact future guideline development are also described. Conclusions: Several changes in drug therapy recommendations were included in the 1999 AMI ACC/AHA guidelines. There is emphasis on administering fibrin-specific thrombolytics secondary to enhanced efficacy. Selection between fibrin-specific agents is unclear at this time. Low response rates to thrombolytics have been noted in the elderly, women, patients with heart failure, and those showing left bundle-branch block on the electrocardiogram. These patient groups should be targeted for improved utilization programs. The use of glycoprotein (GP) IIb/IIIa receptor inhibitors in non-ST-segment elevation MI was emphasized. Small trials combining reduced doses of thrombolytics with GP IIb/IIIa receptor inhibitors have shown promise by increasing reperfusion rates without increasing bleeding risk, but firm conclusions cannot be made until the results of larger trials are known. Primary percutaneous coronary intervention (PCI) trials suggest lower mortality rates for primary PCI when compared with thrombolysis alone. However, primary PCI, including coronary angioplasty, is only available at approximately 13% of US hospitals, making thrombolysis the preferred strategy for most patients. Clopidogrel has supplanted ticlopidine as the recommended antiplatelet agent for patients with aspirin allergy or intolerance following reports of a better safety profile. The recommended dose of unfractionated heparin is lower than previously recommended, necessitating a separate nomogram for patients with acute coronary syndromes. Routine use of warfarin, either alone or in combination with aspirin, is not supported by clinical trials; however, warfarin remains a choice for antithrombotic therapy in patients intolerant to aspirin. 
Beta-adrenergic receptor blockers continue to be recommended, and emphasis is placed on improving rates of early administration (during hospitalization), even in patients with moderate left ventricular dysfunction. New recommendations for drug treatment of post-AMI patients with low high-density lipoprotein cholesterol and/or elevated triglycerides are included, with either niacin or gemfibrozil recommended as an option. Supplementary antioxidants are not recommended for either primary or secondary prevention of AMI, with new data demonstrating lack of efficacy of vitamin E in primary prevention. Estrogen replacement therapy or hormonal replacement therapy should not be initiated solely for prevention of cardiovascular disease, but can be continued in cardiovascular patients already taking long-term therapy for other reasons. Bupropion has been added as a new treatment option for smoking cessation. As drug therapy continues to evolve in treating AMI, more frequent updates of therapy guidelines will be necessary. abstract_id: PUBMED:18855716 The role of trimetazidine after acute myocardial infarction. "Metabolic treatment" involves the use of drugs to improve cardiomyocyte function. Trimetazidine is the most investigated drug in this group. The ESC 2006 guidelines on the management of patients with stable angina mention the efficacy of metabolic treatment in improving physical efficiency and decreasing the recurrence of pain. The available data suggest that combined therapy of trimetazidine and haemodynamic drugs is an effective antianginal treatment that reduces the risk of pain recurrence (in as many as 64% of patients). The most recent studies also suggest that trimetazidine might be effective in patients with acute coronary syndromes, ischemic cardiomyopathy and heart failure. However, while trimetazidine has shown beneficial effects on surrogate endpoints in several small trials, its effect on cardiovascular events is uncertain. Further large randomized studies are needed before its effects on cardiovascular events can be evaluated. abstract_id: PUBMED:38298411 Adherence to the Clinical Practice Guidelines for the hospital management of patients with decompensated heart failure in a Coronary Care Unit in Colombia. Objective: To assess adherence to the recommendations for the diagnosis and management of hospitalized patients with Decompensated Heart Failure issued by the European Society of Cardiology in 2021 at a Coronary Care Unit at a fourth-level hospital in the city of Bogotá. Materials And Methods: A descriptive cross-sectional study was conducted, including hospitalized patients in the Coronary Care Unit at Hospital San José in Bogotá, with a primary diagnosis of Decompensated Heart Failure, from September 2021 to January 2023. Patient data were collected from medical records. Adherence to the Decompensated Heart Failure guidelines was described in the study. Results: High adherence was observed for laboratory tests and medication prescriptions recommended by the 2021 European Society of Cardiology guidelines. However, there was low adherence to the request for thyroid function tests, troponin, and iron studies. The cause of heart failure and decompensation was adequately recorded. The most common cause of decompensation was acute coronary syndrome. Regarding the hemodynamic profile on admission, the majority presented as Stevenson B.
Pharmacological adherence to Class I recommendations showed high compliance in prescribing beta-blockers, angiotensin-converting enzyme inhibitors, angiotensin II receptor blockers, and angiotensin receptor-neprilysin inhibitors. However, lower adherence was observed for sodium-glucose co-transporter 2 inhibitors and mineralocorticoid receptor antagonists. Conclusions: Variable adherence rates were recorded, emphasizing satisfactory compliance with class I recommendations for certain medications and laboratory tests. It is necessary to improve adherence in ordering paraclinical tests, especially thyroid function tests and ferrokinetic profiles. abstract_id: PUBMED:23590761 Management of coronary atherosclerosis and acute coronary syndromes in patients with chronic kidney disease. Atherosclerosis of the coronary arteries is common, extensive, and more unstable among patients with chronic renal impairment or chronic kidney disease (CKD). The initial presentation of coronary disease is often acute coronary syndrome (ACS) that tends to be more complicated and has a higher risk of death in this population. Medical treatment of ACS includes antianginal agents, antiplatelet therapy, anticoagulants, and pharmacotherapies that modify the natural history of ventricular remodeling after injury. Revascularization, primarily with percutaneous coronary intervention and stenting, is critical for optimal outcomes in those at moderate and high risk for reinfarction, the development of heart failure, and death in predialysis patients with CKD. The benefit of revascularization in ACS may not extend to those with end-stage renal disease because of competing sources of all-cause mortality. In stable patients with CKD and multivessel coronary artery disease, observational studies have found that bypass surgery is associated with a reduced mortality as compared with percutaneous coronary intervention when patients are followed for several years. This article will review the guidelines-recommended therapeutic armamentarium for the treatment of stable coronary atherosclerosis and ACS and give specific guidance on benefits, hazards, dose adjustments, and caveats concerning patients with baseline CKD.
Systematic processes are effective approaches to ensuring appropriate evidence-based therapies are prescribed in most patients. abstract_id: PUBMED:27166210 Revascularization Trends in Patients With Diabetes Mellitus and Multivessel Coronary Artery Disease Presenting With Non-ST Elevation Myocardial Infarction: Insights From the National Cardiovascular Data Registry Acute Coronary Treatment and Intervention Outcomes Network Registry-Get with the Guidelines (NCDR ACTION Registry-GWTG). Background: Current guidelines recommend surgical revascularization (coronary artery bypass graft [CABG]) over percutaneous coronary intervention (PCI) in patients with diabetes mellitus and multivessel coronary artery disease. Few data are available describing revascularization patterns among these patients in the setting of non-ST-segment-elevation myocardial infarction. Methods And Results: Using the Acute Coronary Treatment and Intervention Outcomes Network Registry-Get with the Guidelines (ACTION Registry-GWTG), we compared the in-hospital use of different revascularization strategies (PCI versus CABG versus no revascularization) in diabetes mellitus patients with non-ST-segment-elevation myocardial infarction who had angiography demonstrating multivessel coronary artery disease between July 2008 and December 2014. Factors associated with use of CABG versus PCI were identified using logistic multivariable regression analyses. A total of 29 769 patients from 539 hospitals were included in the study, of which 10 852 (36.4%) were treated with CABG, 13 760 (46.2%) were treated with PCI, and 5157 (17.3%) were treated without revascularization. The overall use of revascularization increased over the study period with an increase in the proportion undergoing PCI (45% to 48.9%; Ptrend=0.0002) and no change in the proportion undergoing CABG (36.1% to 34.7%; Ptrend=0.88). There was significant variability between participating hospitals in the use of PCI and CABG (range: 22%-100%; 0%-78%, respectively; P value <0.0001 for both). Patient-level, but not hospital-level, characteristics were statistically associated with the use of PCI versus CABG, including anatomic severity of the disease, early treatment with adenosine diphosphate receptor antagonists at presentation, older age, female sex, and history of heart failure. Conclusions: Among patients with diabetes mellitus and multivessel coronary artery disease presenting with non-ST-segment-elevation myocardial infarction, only one third undergo CABG during the index admission. Furthermore, the use of PCI, but not CABG, increased modestly over the past 6 years.
In the case of acute myocardial infarction complicated by cardiogenic shock (AMI-CS), expert guidelines recommend early revascularization with intra-aortic balloon pump support, and ventricular assist devices (VADs) may improve outcomes when inserted early after heart failure symptoms begin (PUBMED:18939910). However, access to revascularization and VAD support, including percutaneous VADs, is currently limited, which may affect the treatment of AMI-CS patients. The Euro Heart ACS Survey also found that balloon-floating right heart catheter monitoring (RHC) is infrequently used in current clinical practice for ACS patients, despite guidelines recommending its use for hemodynamically unstable patients. This suggests a discordance between guidelines and reality, potentially due to the availability of noninvasive modalities like echocardiography (PUBMED:16155391). In summary, while there is adherence to certain pharmacological treatments for ACS complicated by HF, as per guidelines, there appears to be less compliance with recommendations for invasive procedures such as coronary angiography, revascularization, and the use of RHC. This indicates that current treatment does not fully comply with guidelines, and there may be a need for improved implementation of these recommendations to enhance patient outcomes.
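The SPRINT analysis in this record (PUBMED:29421687) expresses risk as events per 100 patient-years. A small sketch of that computation with an exact (Garwood) Poisson interval; the person-years below are only approximated from the 3.3-year median follow-up, so the output will not exactly reproduce the published 2.0 [95% CI, 1.6-2.4]:

```python
from scipy.stats import chi2

def rate_per_100py(events: int, person_years: float, alpha: float = 0.05):
    """Incidence rate per 100 patient-years with an exact Poisson CI."""
    rate = 100 * events / person_years
    lo = 0.0 if events == 0 else 100 * chi2.ppf(alpha / 2, 2 * events) / (2 * person_years)
    hi = 100 * chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * person_years)
    return rate, lo, hi

# 87 primary-endpoint events among the 1424 newly reclassified patients;
# person-years are approximated as 1424 * 3.3 (median follow-up), which
# is our assumption, not a figure reported by the trial.
rate, lo, hi = rate_per_100py(events=87, person_years=1424 * 3.3)
print(f"{rate:.1f} events/100 py (95% CI {lo:.1f}-{hi:.1f})")
```

The gap between this approximation and the published figure illustrates why trials report rates from exact per-patient follow-up time rather than a cohort-wide median.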
Instruction: The fibromyalgia diagnosis: hardly helpful for the patients? Abstracts: abstract_id: PUBMED:26051578 Characteristics of acupuncture users among internal medicine patients in Germany. Objectives: To identify socio-demographic and health-related factors associated with (a) acupuncture use and (b) the rated helpfulness of acupuncture among internal medicine patients. Methods: Data from a larger cross-sectional trial were reanalyzed. Patients who had used acupuncture for managing their primary medical complaint were compared to patients who had not. Predictors for (a) acupuncture use and (b) rated helpfulness were determined using logistic regression analyses. Results: Of 2486 included patients, 51.49% reported acupuncture use and 39.22% reported no prior use. The use of acupuncture was associated with higher age, i.e. those aged 50-64 were more likely to have used acupuncture, while those younger than 30 were less likely. Patients with spinal pain, fibromyalgia, or headache were more likely to be acupuncture users; while IBS patients were less likely. Patients with good to excellent health status, high external-social health locus of control and current smokers were less likely to have used acupuncture. Among those who had used acupuncture, 42.34% perceived the treatment as helpful, while 35.94% did not. Rated helpfulness was associated with female gender, full-time employment, high health satisfaction, and high internal health locus of control. Those with a diagnosis of osteoarthritis or inflammatory bowel disease were more likely to find acupuncture helpful; those with headache or other types of chronic pain were less likely to find acupuncture helpful. Conclusion: Acupuncture was used by more than half of internal medicine patients. Prevalence and rated helpfulness of acupuncture use was associated with the patients' medical condition, sociodemography, and health locus of control. abstract_id: PUBMED:18041660 The fibromyalgia diagnosis: hardly helpful for the patients? A qualitative focus group study. Objective: To explore experiences and consequences of the process of being diagnosed with fibromyalgia. Design: Qualitative focus-group study. Setting: Two local self-help groups. Subjects: Eleven women diagnosed with fibromyalgia. Main Outcome Measures: Descriptions of experiences and consequences of the process of being diagnosed with fibromyalgia. Results: Many participants had been suffering for years, and initial response of relief was common. For some, the diagnosis legitimized the symptoms as a disease, for others it felt better to suffer from fibromyalgia rather than more serious conditions. Nevertheless sadness and despair emerged when they discovered limitations in treatment options, respect, and understanding. Some patients keep the diagnosis to themselves since people seem to pay no attention to the name, or blatantly regard them as too cheerful or healthy looking. The initial blessing of the fibromyalgia diagnosis seems to be limited in the long run. The process of adapting to this diagnosis can be lonely and strenuous. Conclusion: A diagnosis may be significant when it provides the road to relief or legitimizes the patient's problems. The social and medical meaning of the fibromyalgia diagnosis appears to be more complex. Our findings propose that the diagnosis was hardly helpful for these patients. abstract_id: PUBMED:10478772 Stress: the chiropractic patients' self-perceptions. 
Background: Psychosocial stress pervades modern life and is known to have an impact on health. Pain, especially chronic back pain, is influenced by stress. Various strategies have been shown to successfully reduce stress and its consequences. Objectives: This study explores stress as a potential disease trigger among chiropractic patients. Method: A descriptive study was undertaken to ascertain the stress perceptions of chiropractic patients. Purposive sampling of chiropractic practices and convenience sampling of patients were undertaken. Patients were allocated to 1 of 4 groups according to their presentation: acute, chronic biomechanical, fibromyalgia, or maintenance care. Participating patients were requested to complete a questionnaire. Results: Of the 138 patients attending 1 of 10 participating chiropractic clinics, more than 30% regarded themselves as moderately to severely stressed, and over 50% felt that stress had a moderate or greater effect on their current problem. Some 71% of patients felt it would be helpful if their chiropractic care included strategies to help them cope with stress, and 44% were interested in taking a self-development program to enhance their stress management skills. Conclusion: Patient perceptions are known to be important in health care. A number of chiropractic patients perceive they are moderately or severely stressed. Interventions that reduce stress, or even the patient's perception of being stressed, may be construed as valid, non-specific clinical interventions. It may be timely for chiropractors to actively contemplate including stress management routinely in their clinical care protocols.

abstract_id: PUBMED:23457682 Patients' and professionals' views on managing fibromyalgia. Background: Managing fibromyalgia is a challenge for both health care systems and the professionals caring for these patients, due, in part, to the fact that the etiology of this disease is unknown, its symptoms are not specific and there is no standardized treatment. Objective: The present study examines three aspects of fibromyalgia management, namely diagnostic approach, therapeutic management and the health professional-patient relationship, to explore specific areas of the health care process that professionals and patients may consider unsatisfactory. Methods: A qualitative study involving semistructured interviews with 12 fibromyalgia patients and nine health professionals was performed. Results: The most commonly recurring theme was the dissatisfaction of both patients and professionals with the management process as a whole. Both groups expressed dissatisfaction with the delay in reaching a diagnosis and obtaining effective treatment. Patients reported the need for greater moral support from professionals, whereas the latter often felt frustrated and of little help to patients. Patients and professionals agreed on one point: the uncertainty surrounding the management of fibromyalgia and, especially, its etiology. Conclusion: The present study contributes to a better understanding regarding why current management of fibromyalgia is neither effective nor satisfactory. It also provides insight into how health professionals can support fibromyalgia patients to achieve beneficial results. Health care services should offer greater support for these patients in the form of specific resources such as fibromyalgia clinics and health professionals with increased awareness of the disease.
abstract_id: PUBMED:27989274 The social construction of fibromyalgia as a health problem from the perspective of policies, professionals, and patients. This article is a review of the PhD thesis written by Erica Briones-Vozmediano, entitled, 'The social construction of fibromyalgia as a health problem from the perspective of policies, professionals, and patients'. The findings show that in Spain, fibromyalgia (FM) still lacks recognition: in policies, in the clinical and professional fields, and in the patients' social circle. These three spheres influence how this disease is constructed on a social level. International health policy has not yet taken steps to reflect the emergence of this recently diagnosed disease. The care for patients suffering from FM, who are mainly women, leads to frustration among the healthcare professionals and desperation among the patients themselves, as no effective treatment for the disease exists. Patients show resistance to assuming the sick role. They want to carry on undertaking their daily activities, both in the public sphere and in the private one. Roles involving the gendered division of labour were found to follow a rigid pattern, both prior to and subsequent to the disease, as the causes of frustration for men or women differ according to the activities that are socially assigned to them. In practice, FM is conceived exclusively as a women's health problem, which may result in gender-biased patient healthcare attention. The implementation of specific policies for FM is recommended to resolve this evident shortcoming. Drawing attention on a social level to illnesses considered to be attributed to women, such as FM, is of utmost importance in order to allow the patients to be socially recognised as suffering a real and disabling disease.

abstract_id: PUBMED:19077065 Psychological characteristics of FMS patients. This research is a pilot study that explores the psychological profiles of fibromyalgia (FMS) patients. Data were collected from 29 subjects. The variables investigated were attachment style, sense of coherence (SOC), attribution style and depression. The prevalence of secure attachment amongst the group was 51.7%. Significant differences were found between the secure and insecure groups with relation to SOC, depression and five subscales of attribution style. The small sample size and cross-sectional nature of the study limit the strength of the conclusions drawn, but the results question the existence of a single discrete FMS-prone psychological profile.

abstract_id: PUBMED:38226027 Low-dose naltrexone for treatment of pain in patients with fibromyalgia: a randomized, double-blind, placebo-controlled, crossover study. Introduction: Fibromyalgia (FM) is a chronic fluctuating, nociplastic pain condition. Naltrexone is a µ-opioid-receptor antagonist; preliminary studies have indicated a pain-relieving effect of low-dose naltrexone (LDN) in patients with FM. The impetus for studying LDN is the assumption of analgesic efficacy and thus reduction of adverse effects seen from conventional pharmacotherapy. Objectives: First, to examine if LDN is associated with analgesic efficacy compared with control in the treatment of patients with FM. Second, to ascertain the analgesic efficacy of LDN in an experimental pain model in patients with FM evaluating the competence of the descending inhibitory pathways compared with controls.
Third, to examine the pharmacokinetics of LDN. Methods: The study used a randomized, double-blind, placebo-controlled, crossover design and had a 3-phase setup. The first phase included baseline assessment and a treatment period (days -3 to 21), the second phase a washout period (days 22-32), and the third phase a baseline assessment followed by a treatment period (days 33-56). Treatment was with either LDN 4.5 mg or an inactive placebo given orally once daily. The primary outcomes were Fibromyalgia Impact Questionnaire revised (FIQR) scores and summed pain intensity ratings (SPIR). Results: Fifty-eight patients with FM were randomized. The median difference (IQR) for FIQR scores between LDN and placebo treatment was -1.65 (18.55; effect size = 0.15; P = 0.3). The median difference for SPIR scores was -0.33 (6.33; effect size = 0.13; P = 0.4). Conclusion: Outcome data did not indicate any clinically relevant analgesic efficacy of the LDN treatment in patients with FM.

abstract_id: PUBMED:1509466 What symptoms and complaints result in sick-listing? ICPC-coding of patients' own opinion in general practice. The aim of this study was to describe which complaints patients themselves regard as the cause of sickness certification. During one week in April 1986, 1,379 patients in Buskerud county, Norway, filled in a form after receiving an initial certificate of illness or a continuation certificate from a general practitioner. International Classification of Primary Care (ICPC) no. 1-29 was used to classify the patients' symptoms and complaints. More than half of the patients (53%) considered pain in the musculoskeletal system, particularly back pain (22%), as the reason for their sickness certification. Physical work load was assumed to be a contributory cause of the complaints by 66% of the patients certified sick because of back pain, 58% of those certified sick due to cervical spine and shoulder symptoms, and 72% of those with myalgia or fibrositis (ICPC no. L18, L19). 29% of the patients believed that their health problems could have been prevented, and 15% reported that they could have continued work if their jobs had been adjusted because of their condition.

abstract_id: PUBMED:25073746 Fibromyalgia patients' perceptions of the impact of the disease in the workplace. Objective: To explore the perceptions of patients with fibromyalgia (FM) on the problems they experience in the workplace, to discuss how they face and adapt to the limitations imposed by the symptoms of this disease. Design: An exploratory qualitative study conducted in 2009. Location: Associations of patients with FM from Valencia (Spain). Participants: Sixteen patients (13 women and 3 men) diagnosed with FM by a rheumatologist, of different ages and occupations, selected from key informants and the snowball technique. Method: Pragmatic sample. Semi-structured interviews until saturation of information, when no new information emerged. Qualitative content analysis using the software Atlas.ti-5 to generate and assign codes, forming categories and identifying a latent theme. Results: We identified four categories: difficulties in meeting the work demands, need for social support in the workplace, strategies adopted to continue working, and resistance to leaving employment. A theme which crosscut these categories emerged: FM patients' motivation to continue in the labour market. Conclusions: Addressing the specific needs of patients is essential in order to help them stay in the labour market, according to their capabilities.
Awareness programs about the consequences of FM in the workplace are needed to achieve the collaboration of managers, entrepreneurs, occupational health professionals and primary care physicians and nurses.

abstract_id: PUBMED:13241738 Fibropathic syndromes in geriatric patients. N/A

Answer: The diagnosis of fibromyalgia appears to have complex implications for patients. While some patients initially feel relief at having their symptoms legitimized as a disease, this initial blessing seems to be limited in the long run. Patients often experience sadness and despair when they realize the limitations in treatment options, respect, and understanding from others. The social and medical meaning of the fibromyalgia diagnosis is more complex, and many patients find the diagnosis hardly helpful as they adapt to it in a lonely and strenuous process (PUBMED:18041660). Furthermore, managing fibromyalgia is a challenge for both health care systems and professionals, partly due to the unknown etiology of the disease, non-specific symptoms, and lack of standardized treatment. Both patients and professionals express dissatisfaction with the management process, including delays in diagnosis and effective treatment. Patients desire more moral support from professionals, who often feel frustrated and of little help. The uncertainty surrounding the management of fibromyalgia and its etiology is a point of agreement between patients and professionals (PUBMED:23457682). Additionally, the social construction of fibromyalgia as a health problem is influenced by the lack of recognition in policies, clinical and professional fields, and patients' social circles. The care for fibromyalgia patients, who are mainly women, leads to frustration among healthcare professionals and desperation among patients due to the absence of an effective treatment. The disease is often conceived exclusively as a women's health problem, which may result in gender-biased patient healthcare attention (PUBMED:27989274). In summary, the diagnosis of fibromyalgia can be a double-edged sword for patients, providing initial relief but also leading to long-term challenges in treatment, social acceptance, and support from the healthcare system.
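One quantitative method in this record (PUBMED:26051578) is logistic regression of rated acupuncture helpfulness on patient characteristics. The sketch below illustrates that kind of analysis only; the data are simulated and every variable name is hypothetical, not taken from the study's dataset.

```python
# Illustrative only: simulated data, not the PUBMED:26051578 dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),        # 1 = female
    "full_time": rng.integers(0, 2, n),     # 1 = full-time employment
    "internal_hloc": rng.normal(0, 1, n),   # internal health locus of control (z-score)
})
# Simulate "rated acupuncture as helpful" with positive effects, as reported.
lin = -0.5 + 0.6 * df["female"] + 0.5 * df["full_time"] + 0.7 * df["internal_hloc"]
df["helpful"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["female", "full_time", "internal_hloc"]])
fit = sm.Logit(df["helpful"], X).fit(disp=0)
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios per predictor
```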
Instruction: 1,25 Dihydroxyvitamin D(3) receptor expression in superficial transitional cell carcinoma of the bladder: a possible prognostic factor?

Abstracts:

abstract_id: PUBMED:27366313 Evaluating the Prevalence of the Epidermal Growth Factor Receptor in Transitional Cell Carcinoma of Bladder and its Relationship With Other Prognostic Factors. Background: Bladder cancer is the most common malignancy of the urinary system, and its predominant histologic subtype is transitional cell carcinoma (TCC). Many molecular risk factors have been related to poor prognosis. One of these factors is expression of the epidermal growth factor receptor (EGFR). Objectives: The aim of this study was to evaluate the prevalence of the epidermal growth factor receptor in transitional cell carcinoma of the bladder and its relationship with other prognostic factors. Patients And Methods: This descriptive analytic study was performed on 61 patients with TCC of the bladder after radical cystectomy, who were hospitalized in Labbafinejad hospital in Tehran, Iran between 2007 and 2010. Chi-square and t-tests were used to analyze the data. Results: Records of 61 patients were studied. Fifty-three of the total samples were positive for EGFR expression (86.9%). Fifty of these fifty-three belonged to men and the other three were women's samples (P = 0.46). Among the group with EGFR expression the results were as follows: 25 patients (47.2%) were 60 years old or less and 28 patients (52.8%) were older than 60 (P = 0.023); 16 patients (30.2%) had invasion to lamina propria, and the rest of them had invasion to deeper layers (P = 0.56). For most patients we could not determine the invasion of tumoral cells into the lymph nodes (Nx) (P = 0.067). Thirty-four patients (64.2%) had no lymphovascular invasion (P = 0.44), and in forty-three patients (81.1%), perineural invasion was not seen (P = 0.23). Finally, 36 patients (67.9%) were grade 3 (P = 0.27). Conclusions: In this study we concluded that most patients had positive EGFR expression. Also, except for age, there was no significant relation between expression of EGFR and the other prognostic factors such as gender, invasion of the tumor into the layers, lymph node involvement, lymphovascular or perineural invasion, and grading.

abstract_id: PUBMED:20414420 Expression of fibroblast growth factor receptor 3 in the recurrence of non-muscle-invasive urothelial carcinoma of the bladder. Purpose: The fibroblast growth factor receptor 3 (FGFR3) gene is known to be frequently mutated in noninvasive urothelial carcinomas of the bladder. In this study, we investigated the expression of FGFR3, Ki-67, and p53 in bladder cancers and the effects of expression on tumor recurrence. Materials And Methods: Fifty-five cases of primary bladder cancer were examined by immunohistochemistry. The relationship of these markers with various clinicopathological factors, including recurrence, was assessed. Results: Positivity for cytoplasmic FGFR3 (FGFR3-c) was associated with a lower cancer grade (p=0.022) and stage (p=0.011). Recurrence was more frequent in patients with a higher stage, negative FGFR3-c, and high Ki-67 expression. According to univariate analysis, predictors of recurrence-free survival included the following: age, stage, FGFR3-c, Ki-67, and p53. However, none of these was independent from the other parameters in multivariate studies.
Conclusions: The immunohistochemical expression of FGFR3 is not only one of the characteristic features of lower-grade and lower-stage urothelial carcinoma but also a possible marker in predicting disease recurrence.

abstract_id: PUBMED:32847703 Hyperphosphatemia Secondary to the Selective Fibroblast Growth Factor Receptor 1-3 Inhibitor Infigratinib (BGJ398) Is Associated with Antitumor Efficacy in Fibroblast Growth Factor Receptor 3-altered Advanced/Metastatic Urothelial Carcinoma. Background: Infigratinib (BGJ398) is a potent, selective fibroblast growth factor receptor (FGFR) 1-3 inhibitor with significant activity in metastatic urothelial carcinoma (mUC) bearing FGFR3 alterations. It can cause hyperphosphatemia due to the "on-target" class effect of FGFR1 inhibition. Objective: To investigate the relationship between hyperphosphatemia and treatment response in patients with mUC. Intervention: Oral infigratinib 125 mg/d for 21 d every 28 d. Design, Setting, And Participants: Data from patients treated with infigratinib in a phase I trial with platinum-refractory mUC and activating FGFR3 alterations were retrospectively analyzed for clinical efficacy in relation to serum hyperphosphatemia. The relationship between plasma infigratinib concentration and phosphorous levels was also assessed. Outcome Measurements And Statistical Analysis: Clinical outcomes were compared in groups with/without hyperphosphatemia. Results And Limitations: Of the 67 patients enrolled, 48 (71.6%) had hyperphosphatemia on one or more laboratory tests. Findings in patients with versus without hyperphosphatemia were the following: overall response rate 33.3% (95% confidence interval [CI] 20.4-48.4) versus 5.3% (95% CI 0.1-26.0); disease control rate 75.0% (95% CI 60.4-86.4) versus 36.8% (95% CI 16.3-61.6). This trend was maintained in a 1-mo landmark analysis. Pharmacokinetic/pharmacodynamic analysis showed that serum phosphorus levels and physiologic infigratinib concentrations were positively correlated. Key limitations include retrospective design, lack of comparator, and limited sample size. Conclusions: This is the first published study to suggest that hyperphosphatemia caused by FGFR inhibitors, such as infigratinib, can be a surrogate biomarker for treatment response. These findings are consistent with other reported observations and will need to be validated further in a larger prospective trial. Patient Summary: Targeted therapy is a new paradigm in treating bladder cancer. In a study using infigratinib, a drug that targets mutations in a gene called fibroblast growth factor receptor 3 (FGFR3), we found that elevated levels of phosphorous were associated with greater clinical benefit. In the future, these data may help inform treatment strategies.

abstract_id: PUBMED:15582249 1,25 Dihydroxyvitamin D(3) receptor expression in superficial transitional cell carcinoma of the bladder: a possible prognostic factor? Objective: Vitamin D receptors (VDR) have been detected in normal tissues and in a number of cancer types. This study was undertaken to determine the VDR expression status and to elucidate the prognostic significance of VDRs in superficial transitional cell carcinoma (TCC) of the human bladder. Methods: VDR expression was investigated in the tumour tissue blocks which were obtained by transurethral resection from 105 patients with superficial TCC without concomitant carcinoma in situ and in 30 control subjects. Median follow-up of the patients was 40 months.
The expression of nuclear VDR was evaluated immunohistochemically using the avidin-biotin-peroxidase method and a monoclonal VDR antibody. VDR staining intensity in samples was assessed semi-quantitatively and graded as [-] if VDR was lacking, [+] if <33% of cells were stained, [++] if 33-66% of cells were stained, and [+++] if >66% were stained. Staining characteristics were compared with the clinico-pathologic results. Results: VDRs were detected in 85.7% of the patients with superficial TCC and in 66.6% of the controls (p = 0.02). No correlation was found between VDR expression and pathological stage and grade (p = 0.05 and p = 0.09, respectively). Progression in pathologic stage was significantly higher in VDR[+++] tumours (p = 0.001). Also, disease-free survival was significantly lower and tumour size was significantly greater in VDR[+++] tumours than in [-], [+] and [++] ones (p = 0.02, p = 0.008 and 0.007, respectively). No significant difference was found between patient age, sex, and tumour multiplicity in terms of VDR expression. Survival was not affected by VDR expression. In multivariate analysis VDR expression was not found to be an independent prognostic factor. Conclusion: Superficial TCCs of the bladder express VDRs. The association of increased VDR expression and higher disease progression may be useful in discriminating less differentiated superficial TCCs with poor outcome.

abstract_id: PUBMED:19675076 Estrogen receptor 1 mRNA is a prognostic factor in ovarian carcinoma: determination by kinetic PCR in formalin-fixed paraffin-embedded tissue. Epidemiological and cell culture studies indicate that ovarian carcinoma growth is dependent on estrogen stimulation. However, possibly due to the lack of a reliable biomarker that helps to select patients according to prognostically relevant estrogen receptor (ER) levels, clinical trials using anti-estrogenic therapeutics in ovarian carcinoma have had inconsistent results. Therefore, we tested if ER expression analysis by a quantitative method might be useful in this regard in formalin-fixed paraffin-embedded (FFPE) tissue. In a study group of 114 primary ovarian carcinomas, expression of estrogen receptor 1 (ESR1) mRNA was analyzed using a new method for RNA extraction from FFPE tissue that is based on magnetic beads, followed by kinetic PCR. The prognostic impact of ESR1 mRNA expression was investigated and compared to ERalpha protein expression as determined by immunohistochemistry. In univariate survival analysis the expression level of ESR1 mRNA was a significant positive prognostic factor for patient survival (hazard ratio (HR) 0.230 (confidence interval (CI) 0.102-0.516), P=0.002). ERalpha protein expression was correlated to ESR1 mRNA expression (P=0.0001); however, ERalpha protein expression did not provide statistically significant prognostic information. In multivariate analysis, ESR1 mRNA expression emerged as a prognostic factor, independent of stage, grade, residual tumor mass, age, and ERalpha protein expression (HR 0.227 (CI 0.078-0.656), P=0.006). Our results indicate that the determination of ESR1 levels by kinetic PCR may be superior to immunohistochemical methods in assessment of biologically relevant levels of ER expression in ovarian carcinoma, and is feasible in routinely used FFPE tissue.

abstract_id: PUBMED:26942140 Human epidermal growth factor receptor 2/neu overexpression in urothelial carcinoma of the bladder and its prognostic significance: Is it worth hype?
Aims: In urothelial tumors of the urinary bladder, human epidermal growth factor receptor 2 (HER-2)/neu expression has been reported on for over 10 years, but there is no clear correlation between prognosis and recurrence rate. The present study evaluates the prognostic implication of HER-2/neu expression. Subjects And Methods: In this study, 100 formalin-fixed paraffin-embedded specimens of primary transitional cell carcinoma of the bladder were processed. A HER-2/neu monoclonal antibody immunohistochemistry staining procedure was used for the study. Results: A total of 70 (70%) patients were positive for overexpression of HER-2/neu. HER-2/neu was positive in 42 patients (70%) with superficial tumors, 28 (70%) with muscle-invasive tumors, 41 (75.9%) with high-grade tumors, 29 (63%) with low-grade tumors, 31 (68.9%) with recurrent tumors, and 6 (66.6%) with positive lymph nodes. Conclusions: HER-2/neu overexpression was not correlated with tumor stage, lymph node metastasis or recurrence of the disease. HER-2/neu overexpression was statistically insignificantly correlated with the differentiation grade (P < 0.161) as compared to previous studies. Future studies on HER-2 expression with chemo-sensitivity and efficacy of HER-2-targeted therapies in urothelial carcinomas are needed.

abstract_id: PUBMED:9079740 Evaluation of epidermal growth factor receptor, transforming growth factor alpha, epidermal growth factor and c-erbB2 in the progression of invasive bladder cancer. Introduction: Determination of the risk of invasive bladder tumors progressing is still imprecise due to the heterogeneous biological behavior of this neoplasm. The goals of this study were to evaluate the patterns of expression of the epidermal growth factor (EGF) system in invasive bladder cancer and to assess its prognostic value. Methods: This immunohistochemical study was performed using fresh frozen tumor samples and a panel of monoclonal antibodies on a series of 43 invasive bladder cancers treated by cystectomy. Results: EGF was detected in 45% of the tumors and did not correlate with survival from bladder cancer. Transforming growth factor alpha (TGF alpha) was expressed by 60% of the tumors and correlated strongly with death from bladder cancer. Epidermal growth factor receptor (EGF-R) expression was seen in 86% of cases and had no prognostic significance. c-erbB2 was expressed in 50% of cases and was inversely related to a poor prognosis. When EGF and TGF alpha were both expressed, there was little or no expression of c-erbB2. Conclusion: The accumulation of several growth factors and the relevant receptor are necessary for the progression of invasive bladder cancers. They could be used as indicators of tumor aggressiveness.

abstract_id: PUBMED:25048477 Prognostic value of sex-hormone receptor expression in non-muscle-invasive bladder cancer. Purpose: We investigated sex-hormone receptor expression as a predictive factor of recurrence and progression in patients with non-muscle-invasive bladder cancer. Materials And Methods: We retrospectively evaluated tumor specimens from patients treated for transitional cell carcinoma of the bladder at our institution between January 2006 and January 2011. Performing immunohistochemistry using a monoclonal androgen receptor antibody and a monoclonal estrogen receptor-beta antibody on paraffin-embedded tissue sections, we assessed the relationship of immunohistochemistry results and prognostic factors such as recurrence and progression.
Results: A total of 169 patients with bladder cancer were evaluated in this study. Sixty-three patients expressed androgen receptors and 52 patients expressed estrogen receptor beta. On univariable analysis, androgen receptor expression was associated with significantly lower recurrence rates (p=0.001), and estrogen receptor beta expression with significantly higher progression rates (p=0.004). On multivariable analysis, a significant association was found between androgen receptor expression and lower recurrence rates (hazard ratio=0.500; 95% confidence interval, 0.294 to 0.852; p=0.011), but estrogen receptor beta expression was not significantly associated with progression rates. Conclusion: We concluded that the possibility of recurrence was low when the androgen receptor was expressed in the bladder cancer specimen and that it could be a predictive factor for stage, number of tumors, carcinoma in situ lesions and recurrence.

abstract_id: PUBMED:29111177 Association of Androgen Receptor Expression on Tumor Cells and PD-L1 Expression in Muscle-Invasive and Metastatic Urothelial Carcinoma: Insights for Clinical Research. Background: Limited information is available regarding the use of androgen receptor (AR) immunohistochemical expression in muscle-invasive or metastatic urothelial carcinoma. We aimed to evaluate the frequency of AR expression by tumor cells (TC), its prognostic role, and its relationship with programmed cell-death ligand 1 (PD-L1) expression in these patients. Patients And Methods: From September 2015 to January 2017, we collected tissue from patients who received platinum-based chemotherapy at our center. Immunohistochemistry for AR was performed (1% cutoff of TC). PD-L1 coexpression, by TC or immune cells (1% cutoff), was also analyzed. Molecular analysis of the AR gene was performed by sequencing of exons 5 to 8 and by fluorescence in-situ hybridization analysis. Cox models for overall survival (OS), adjusted for stage, visceral metastases, and platinum type, were fitted. Results: A total of 110 patients had tumor samples stained. Overall, 48 (43.6%) had AR-expressing TC: 19 (17.3%) had 1%-5% expression, 15 (13.6%) 5%-25% expression, and 14 (12.7%) >25% expression. Among the latter, 7 had molecularly evaluated tumor tissue: no AR gene mutations or amplifications were found, but polysomy of the Xq chromosome was seen. PD-L1 expression by TC and immunohistochemistry concordantly decreased with increasing levels of AR expression by TC. In Cox analyses, AR expression was not associated with OS, both on univariable (P = .477) and multivariable (P = .505) analyses. Conclusion: AR is frequently expressed in patients with muscle-invasive and advanced urothelial carcinoma, and it does not seem to be prognostic for OS. The AR pathway is worthy of clinical studies to assess its synergistic action with anti-PD-L1 therapy.

abstract_id: PUBMED:15245814 Hypoxia-inducible factor 1 alpha expression correlates with angiogenesis and unfavorable prognosis in bladder cancer. Introduction And Objectives: Hypoxia-inducible factor 1 alpha (HIF-1 alpha) is a critical regulatory protein of cellular response to hypoxia and is closely related to the triggering of the angiogenic process. We examined the relationship between hypoxia and angiogenesis, as well as their prognostic impact in patients with urothelial bladder cancer. Methods: The immunohistochemical expression of HIF-1 alpha was evaluated in 93 formalin-fixed paraffin-embedded primary transitional cell carcinoma tissue samples.
HIF-1 alpha was recognized through nuclear staining of positive cells. The angiogenic profile was individually assessed immunohistochemically using a monoclonal antibody to vascular endothelial growth factor (VEGF), and microvessel density (MVD) was calculated with immunohistochemical staining of the adhesion molecule CD31 of the endothelial cells. Results: A significant positive association between HIF-1 alpha immunoreactivity and histological grade (p=0.009) was found. VEGF and MVD were closely related to tumor grade (p=0.06 and p<0.001) and clinical stage (p=0.04 and p<0.01, respectively). HIF-1 alpha was significantly correlated with VEGF expression (p=0.01) and MVD (p<0.001). Patients characterized by HIF-1 alpha overexpression had significantly worse overall (p=0.009) and disease-free survival (p=0.03). When HIF-1 alpha, histologic grade and stage were included in multivariate Cox regression analysis, HIF-1 alpha emerged as an independent prognostic factor (p=0.02) along with grade and stage, but lost its independent prognostic value after the inclusion of angiogenic factors in the multivariate model. In the subgroup of patients with T1 disease, HIF-1 alpha emerged as a significant negative predictor of the time to first recurrence. Conclusions: HIF-1 alpha and angiogenesis markers may play an important predictive and prognostic role in patients with bladder cancer. HIF-1 alpha may be of biologic and clinical value as its overexpression is related to up-regulation of VEGF, the stimulation of angiogenesis and worse prognosis.

Answer: The study identified in abstract PUBMED:15582249 investigated the expression of the 1,25 Dihydroxyvitamin D3 receptor (VDR) in superficial transitional cell carcinoma (TCC) of the bladder to determine its prognostic significance. The researchers found that VDRs were detected in 85.7% of patients with superficial TCC and in 66.6% of the controls, indicating a higher expression in TCC patients (p = 0.02). However, no correlation was found between VDR expression and pathological stage and grade (p = 0.05 and p = 0.09, respectively). The study did observe that progression in pathologic stage was significantly higher in tumors with the highest level of VDR expression (+++), and these tumors also had significantly lower disease-free survival and greater tumor size compared to tumors with lower levels of VDR expression (p = 0.001, p = 0.02, p = 0.008 and 0.007, respectively). Despite these findings, VDR expression was not found to be an independent prognostic factor in multivariate analysis. The study concluded that while superficial TCC of the bladder expresses VDRs, and increased VDR expression is associated with higher disease progression, it may be useful in discriminating less differentiated superficial TCCs with poor outcomes. However, VDR expression alone was not sufficient to affect patient survival as an independent prognostic factor.
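The VDR abstract (PUBMED:15582249) defines an explicit semi-quantitative grading rule for staining intensity. A minimal sketch of that rule as code follows; the function name is illustrative, but the thresholds come directly from the abstract.

```python
# Semi-quantitative VDR grading rule described in PUBMED:15582249.
def vdr_grade(pct_stained: float) -> str:
    """Map the percentage of VDR-positive cells to the study's grade."""
    if pct_stained == 0:
        return "-"      # VDR lacking
    elif pct_stained < 33:
        return "+"      # <33% of cells stained
    elif pct_stained <= 66:
        return "++"     # 33-66% of cells stained
    else:
        return "+++"    # >66% of cells stained

print([vdr_grade(p) for p in (0, 20, 50, 80)])  # ['-', '+', '++', '+++']
```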
Instruction: Noninvasive markers to diagnose cirrhosis in patients with HBeAg positive chronic hepatitis: Do new biomarkers improve the accuracy?

Abstracts:

abstract_id: PUBMED:20433821 Noninvasive markers to diagnose cirrhosis in patients with HBeAg positive chronic hepatitis: Do new biomarkers improve the accuracy? Objectives: The goal of the study was to clarify whether new biomarkers independently contribute to the diagnosis of cirrhosis. Design And Methods: A total of 142 consecutive patients with HBeAg positive chronic hepatitis who underwent liver biopsy were recruited. The Cirrhosis Score (CS)-1 was derived from routine laboratory data only. The CS-2 was calculated using all correlates obtained from both routine laboratory data and 7 new biomarkers. Results: A comparison of the area under the receiver operating characteristic (ROC) curve between CS-1 [0.84 (95% CI, 0.74 to 0.94)] and CS-2 [0.86 (0.78 to 0.95)] showed no superior diagnostic accuracy of CS-2 over CS-1 (p=0.24). Conclusions: None of the new biomarkers had value in addition to readily available laboratory data for differentiating cirrhosis from HBeAg positive chronic hepatitis B.

abstract_id: PUBMED:20827410 A comparison of hepatitis B viral markers of patients in different clinical stages of chronic infection. Purpose: Hepatitis B viral markers may be useful for predicting outcomes such as liver-related deaths or development of hepatocellular carcinoma. We determined the frequency of these markers in different clinical stages of chronic hepatitis B infection. Methods: We compared baseline hepatitis B viral markers in 317 patients who were enrolled in a prospective study and identified the frequency of these tests in immune-tolerant (IT) patients, in inactive carriers, and in patients with either hepatitis B e antigen (HBeAg)-positive or HBeAg-negative chronic hepatitis or cirrhosis. Results: IT patients were youngest (median age 27 years) and HBeAg-negative patients with cirrhosis were oldest (median age 58 years) (p = 0.03 to <0.0001). The male to female ratio was similar both in IT patients and in inactive carriers, but there was a male preponderance both in patients with chronic hepatitis and in patients with cirrhosis (p < 0.0001). The A1896 precore mutants were most prevalent in inactive carriers (36.4%) and HBeAg-negative patients with chronic hepatitis (38.8%; p < 0.0001), and the T1762/A1764 basal core promoter mutants were most often detected in HBeAg-negative patients with cirrhosis (65.1%; p = 0.02). Genotype A was detected only in 5.3% of IT patients, and genotype B was least often detected in both HBeAg-positive patients with chronic hepatitis and cirrhosis (p = 0.03). The hepatitis B viral DNA levels were lowest in inactive carriers (2.69 log(10) IU/mL) and highest in IT patients (6.80 log(10) IU/mL; p = 0.02 to <0.0001). At follow-up, HBeAg-positive and HBeAg-negative patients with cirrhosis accounted for 57 of 64 (89.1%) liver-related deaths (p < 0.0001). Conclusion: Differences in baseline hepatitis B viral markers were detected in patients in various clinical stages of hepatitis B virus infection. HBeAg-positive and HBeAg-negative patients with cirrhosis accounted for the majority of the liver-related fatalities.

abstract_id: PUBMED:37005546 Circulating MicroRNAs: Diagnostic Value as Biomarkers in the Detection of Non-alcoholic Fatty Liver Diseases and Hepatocellular Carcinoma.
Non-alcoholic fatty liver disease (NAFLD), a metabolic-related disorder, is the most common cause of chronic liver disease which, if left untreated, can progress from simple steatosis to advanced fibrosis and eventually cirrhosis or hepatocellular carcinoma, the leading cause of hepatic damage globally. Currently available diagnostic modalities for NAFLD and hepatocellular carcinoma are mostly invasive and of limited precision. A liver biopsy is the most widely used diagnostic tool for hepatic disease. However, due to its invasive nature, it is not practicable for mass screening. Thus, noninvasive biomarkers are needed to diagnose NAFLD and HCC, monitor disease progression, and determine treatment response. Various studies indicated that serum miRNAs could serve as noninvasive biomarkers for both NAFLD and HCC diagnosis because of their association with different histological features of the disease. Although microRNAs are promising and clinically useful biomarkers for hepatic diseases, larger standardization procedures and studies are still required.

abstract_id: PUBMED:19175871 Impact of adefovir dipivoxil on liver fibrosis and activity assessed with biochemical markers (FibroTest-ActiTest) in patients infected by hepatitis B virus. Summary: The aim was to assess the utility of FibroTest-ActiTest (FT-AT) as noninvasive markers of histological changes in patients with chronic hepatitis. Patients with chronic hepatitis B (HBeAg+ and HBeAg-) randomized in two trials of adefovir (ADV) vs placebo, with available paired liver biopsies and FT-AT at baseline and after 48 weeks of treatment were included. The predictive value of FT-AT was assessed using the area under the receiver operating characteristics curves (AUROCs) for the diagnosis of bridging fibrosis, cirrhosis and moderate-severe necroinflammatory activity. The impact of treatment with ADV vs placebo was assessed on liver injury according to baseline stage and virological response at 48 weeks. The analysis of 924 estimates for the diagnosis of bridging fibrosis, cirrhosis and moderate or severe necroinflammatory activity yielded FT-AT AUROCs: 0.76 ± 0.02 (standardized 0.81 ± 0.02), 0.81 ± 0.02 and 0.80 ± 0.01, respectively. Similar impacts of ADV on liver fibrosis and activity were observed both with paired biopsy (fibrosis stage from 1.6 to 1.4, activity grade from 2.5 to 1.3) and paired biomarkers (FT from 0.44 to 0.40, AT from 0.62 to 0.25) (P < 0.0001). FibroTest-ActiTest provides a quantitative estimate of liver fibrosis and necroinflammatory activity in patients with chronic hepatitis B and may be an alternative to reduce the need for liver biopsy.

abstract_id: PUBMED:28539035 The Performance of Serum Biomarkers for Predicting Fibrosis in Patients with Chronic Viral Hepatitis. Background/aims: The invasiveness of a liver biopsy and its inconsistent results have prompted efforts to develop noninvasive tools to evaluate the severity of chronic hepatitis. This study was intended to assess the performance of serum biomarkers for predicting liver fibrosis in patients with chronic viral hepatitis. Methods: A total of 302 patients with chronic hepatitis B or C, who had undergone liver biopsy, were retrospectively enrolled. We investigated the diagnostic accuracy of several clinical factors for predicting advanced fibrosis (F≥3). Results: The study population included 227 patients with chronic hepatitis B, 73 patients with chronic hepatitis C, and 2 patients with co-infection (hepatitis B and C).
Histological cirrhosis was identified in 16.2% of the study population. The grade of porto-periportal activity was more correlated with the stage of chronic hepatitis compared with that of lobular activity (r=0.640 vs. r=0.171). Fibrosis stage was correlated with platelet count (r=-0.520), aspartate aminotransferase to platelet ratio index (APRI) (r=0.390), prothrombin time (r=0.376), and albumin (r=-0.357). For the diagnosis of advanced fibrosis, platelet count and APRI were the most predictive variables (AUROC = 0.752 and 0.713, respectively). Conclusions: In a hepatitis B endemic region, platelet count and APRI could be considered as reliable non-invasive markers for predicting fibrosis of chronic viral hepatitis. However, it is necessary to validate the diagnostic accuracy of these markers in another population.

abstract_id: PUBMED:20850886 Diagnostic accuracy of FibroScan and comparison to liver fibrosis biomarkers in chronic viral hepatitis: a multicenter prospective study (the FIBROSTIC study). Background & Aims: The diagnostic accuracy of non-invasive liver fibrosis tests that may replace liver biopsy in patients with chronic hepatitis remains controversial. We assessed and compared the accuracy of FibroScan® and that of the main biomarkers used for predicting cirrhosis and significant fibrosis (METAVIR ≥ F2) in patients with chronic viral hepatitis. Methods: A multicenter prospective cross-sectional diagnostic accuracy study was conducted in the Hepatology departments of 23 French university hospitals. Index tests and reference standard (METAVIR fibrosis score on liver biopsy) were measured on the same day and interpreted blindly. Consecutive patients with chronic viral hepatitis (hepatitis B or C virus, including possible Human Immunodeficiency Virus co-infection) requiring liver biopsy were recruited in the study. Results: The analysis was first conducted on the total population (1839 patients), and after excluding 532 protocol deviations, on 1307 patients (non-compliant FibroScan® examinations). The overall accuracy of FibroScan® was high (AUROC 0.89 and 0.90, respectively) and significantly higher than that of biomarkers in predicting cirrhosis (AUROC 0.77-0.86). All non-invasive methods had a moderate accuracy in predicting significant fibrosis (AUROC 0.72-0.78). Based on multilevel likelihood ratios, non-invasive tests provided a relevant gain in the likelihood of diagnosis in 0-60% of patients (cirrhosis) and 9-30% of patients (significant fibrosis). Conclusions: The diagnostic accuracy of non-invasive tests was high for cirrhosis, but poor for significant fibrosis. A clinically relevant gain in the likelihood of diagnosis was achieved in a low proportion of patients. Although the diagnosis of cirrhosis may rely on non-invasive tests, liver biopsy is warranted to diagnose intermediate stages of fibrosis.

abstract_id: PUBMED:33327640 miRNAs as Potential Biomarkers for Viral Hepatitis B and C. Around 257 million people are living with hepatitis B virus (HBV) chronic infection and 71 million with hepatitis C virus (HCV) chronic infection. Both HBV and HCV infections can lead to liver complications such as cirrhosis and hepatocellular carcinoma (HCC). To take care of these chronically infected patients, one strategy is to diagnose the early stage of fibrosis in order to treat them as soon as possible to decrease the risk of HCC development. microRNAs (or miRNAs) are small non-coding RNAs which regulate many cellular processes in metazoans.
Their expressions were frequently modulated by up- or down-regulation during fibrosis progression. In the serum of patients with HBV chronic infection (CHB), miR-122 and miR-185 expressions are increased, while miR-29, -143, -21 and miR-223 expressions are decreased during fibrosis progression. In the serum of patients with HCV chronic infection (CHC), miR-143 and miR-223 expressions are increased, while miR-122 expression is decreased during fibrosis progression. This review aims to summarize current knowledge of principal miRNAs modulation involved in fibrosis progression during chronic hepatitis B/C infections. Furthermore, we also discuss the potential use of miRNAs as non-invasive biomarkers to diagnose fibrosis with the intention of prioritizing patients with advanced fibrosis for treatment and surveillance.

abstract_id: PUBMED:16969327 Noninvasive assessment of liver fibrosis in patients with chronic hepatitis virus C. Development of liver fibrosis, which leads to cirrhosis, is the principal complication of all chronic liver diseases, regardless of their cause. Knowledge of the existence and severity of fibrosis is important from diagnostic and prognostic viewpoints. Its assessment plays an essential role in the treatment decision and makes it possible to assess the risk of progression to cirrhosis and the onset of its complications. Histologic examination of the liver remains the reference examination for assessing the extent of fibrosis during chronic liver disease. Nonetheless, the number of patients needing assessment, the risks of the punch-biopsy and the cost of this invasive examination have led many to propose other tools to assess fibrosis. Some standard indicators (transaminases, platelets, prothrombin time) have long been recognized as indirect markers of extensive fibrosis. More recently, progress in our knowledge of the mechanisms of liver fibrogenesis has made it possible to identify different peripheral blood components that may be of clinical interest. Thus serum assays of elements of the extracellular matrix, their decay products, or enzymes involved in their metabolism have been proposed as noninvasive indicators. Among these, hyaluronic acid appears the most interesting. For several years, scores have been calculated with algorithms that combine several indicators determined simultaneously to assess fibrosis in patients with hepatitis C and sometimes other chronic liver diseases. The Fibrotest is the best validated and most widely used of these. Finally, Fibroscan is a device for the diagnosis and quantification of hepatic fibrosis, based on the technique of transient elastography. The relative roles of these noninvasive markers and the value of their combinations must still be determined.

abstract_id: PUBMED:7041836 Correlation between HBeAg in the blood and HBcAg in hepatocyte nuclei in hepatitis B. The authors have studied the correlation between serum HBeAg and the presence of hepatitis B "core" and "surface" antigens in liver tissue. The histological pictures of the bioptic specimens of patients affected by HBV were classified as: chronic persisting hepatitis, chronic active hepatitis, cirrhosis and minimal aspecific changes. HBcAg was detected in liver cell nuclei in all the patients who were positive for serum HBeAg. In some of these subjects HBsAg was found to be localized on the hepatocyte membranes, and this seems to be related in some way to the presence of HBcAg in liver nuclei.
No presence of hepatitis B associated antigens was observed in biopsies from subjects who were negative for serum HBsAg.

abstract_id: PUBMED:3881331 Immunosuppressive treatment of HBsAg-positive chronic liver disease: significance of HBeAg. In a randomized clinical trial of azathioprine vs. prednisone treatment of chronic aggressive hepatitis and/or nonalcoholic cirrhosis in 148 patients, 20 were HBsAg positive on entry. In this subgroup sequential serum samples were investigated for HBs and HBe markers by radioimmunoassay. At the time of evaluation, 13 patients were still alive; their median age was 53 years (25 to 72) and median follow-up time was 46 months (23 to 82). Of 16 patients with cirrhosis, 5 of 7 with persistence of HBeAg died, compared with 2 of 9 with anti-HBe. In three patients with anti-HBe, HBeAg reappeared several times with a simultaneous rise in transaminase values. The overall survival was 65% after 5 years. The prognosis of HBsAg-positive chronic liver disease seemed to depend on the presence of cirrhosis and HBeAg rather than on improvement in biochemical activity during immunosuppressive treatment.

Answer: The study aimed to determine if new biomarkers could improve the accuracy of diagnosing cirrhosis in patients with HBeAg positive chronic hepatitis. The Cirrhosis Score (CS)-1 was derived from routine laboratory data, while CS-2 included both routine laboratory data and 7 new biomarkers. The results showed that the area under the receiver operating characteristic (ROC) curve for CS-1 was 0.84 (95% CI, 0.74 to 0.94) and for CS-2 was 0.86 (0.78 to 0.95), with no significant difference in diagnostic accuracy between the two (p=0.24). Therefore, the new biomarkers did not provide additional value in differentiating cirrhosis from HBeAg positive chronic hepatitis B beyond what was available from routine laboratory data (PUBMED:20433821).
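Several abstracts in this record evaluate noninvasive markers by their AUROC against biopsy. The sketch below illustrates that workflow with the APRI marker named in PUBMED:28539035; the data are synthetic, but the APRI formula itself is the standard one: (AST / AST upper limit of normal) × 100 / platelet count (10⁹/L).

```python
# Synthetic illustration of scoring a routine-laboratory marker (APRI) and
# measuring its diagnostic accuracy by AUROC, as in PUBMED:28539035.
import numpy as np
from sklearn.metrics import roc_auc_score

def apri(ast_iu_l: float, ast_uln: float, platelets_10e9_l: float) -> float:
    """Aspartate aminotransferase to platelet ratio index."""
    return (ast_iu_l / ast_uln) * 100.0 / platelets_10e9_l

rng = np.random.default_rng(1)
n = 150
fibrosis = rng.integers(0, 2, n)                    # 1 = advanced fibrosis (F>=3)
ast = rng.normal(40 + 30 * fibrosis, 10, n)         # AST tends to rise with fibrosis
plt_count = rng.normal(220 - 60 * fibrosis, 30, n)  # platelets tend to fall
scores = [apri(a, 40.0, p) for a, p in zip(ast, plt_count)]
print(f"AUROC of APRI vs. biopsy label: {roc_auc_score(fibrosis, scores):.2f}")
```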
Instruction: Does endograft support alter the rate of aneurysm sac shrinkage after endovascular repair?

Abstracts:

abstract_id: PUBMED:31004872 TREO Aortic Endograft Demonstrates Significant Aneurysmal Sac Shrinkage. Background: Aneurysmal sac shrinkage is associated with successful aneurysm repair after endovascular aortic repair (EVAR). There are a variety of approved aortic endografts, with a recent study demonstrating increased sac shrinkage in certain endografts compared with others. The TREO endograft (Terumo Aortic Ltd, Renfrewshire, Scotland, UK) is being evaluated for use in EVAR, with preliminary data demonstrating high rates of success. The objective of this study is to evaluate sac shrinkage of the TREO endograft. Methods: This is a retrospective analysis of EVARs at a single institution by a high-volume surgeon over a 1-year period in which the TREO graft was used. The change in sac size and rate of sac shrinkage (mm/mo) were evaluated between TREO and non-TREO grafts. All TREO grafts were included in the analysis. Non-TREO grafts were matched a priori to the TREO indications-for-use anatomic specifications. Non-TREO grafts were also excluded for traumatic or emergent cases. The primary outcome was sac shrinkage, and secondary outcomes were the composite complication profile within 30 d of operation. Results: Six TREO grafts and 16 non-TREO grafts were included for analysis. The groups were similar in age, gender, and race. The groups were also similar in aortic anatomy before EVAR. The aneurysm sac shrinkage rate (mm/mo) was significantly greater in the TREO group than in the non-TREO group (0.484 ± 0.107 versus 0.018 ± 0.112, P = 0.033). The total average size of sac shrinkage was also greater for the TREO group (-0.688 ± 2.262 versus 12.00 ± 2.78, P < 0.001). The composite complication profile of stroke, myocardial infarction, death, and respiratory complications was not different between groups. Conclusions: TREO aortic endografts for aneurysm repair are being used in Europe. However, their application in the United States is limited. Our data demonstrate the significant advantage the TREO graft has with increased sac shrinkage and minimal complications, compared with other grafts. This study adds to the growing body of literature supporting TREO graft use for EVAR.
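Two definitions recur across the abstracts in this record: a sac-size change of ≥5 mm as the shrinkage/expansion threshold (PUBMED:35683617) and the shrinkage rate in mm/month (PUBMED:31004872). A minimal sketch of both metrics follows; the function names are illustrative.

```python
# Sac-size metrics used in these abstracts; example values are made up.
def sac_change_mm(baseline_mm: float, followup_mm: float) -> float:
    """Signed change in maximum sac diameter (negative = shrinkage)."""
    return followup_mm - baseline_mm

def classify_sac(change_mm: float, threshold_mm: float = 5.0) -> str:
    """Apply the >=5 mm shrinkage/expansion definition."""
    if change_mm <= -threshold_mm:
        return "shrinkage"
    elif change_mm >= threshold_mm:
        return "expansion"
    return "stable"

def shrinkage_rate_mm_per_month(change_mm: float, months: float) -> float:
    """Positive value = shrinking sac, as in the mm/mo rate of PUBMED:31004872."""
    return -change_mm / months

change = sac_change_mm(55.0, 48.0)  # e.g. 55 mm at EVAR, 48 mm at 12 months
print(classify_sac(change), round(shrinkage_rate_mm_per_month(change, 12.0), 3))
# -> shrinkage 0.583
```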
abstract_id: PUBMED:35683617 Predictors and Consequences of Sac Shrinkage after Endovascular Infrarenal Aortic Aneurysm Repair. Background: Aneurysm shrinkage has been proposed as a marker of successful endovascular aneurysm repair (EVAR). We evaluated the impact of sac shrinkage on secondary interventions, on survival and its association with endoleaks, and on compliance with instructions for use (IFU). Methods: This observational retrospective study was conducted on all consecutive patients receiving EVAR for an infrarenal abdominal aortic aneurysm (AAA) using exclusively the Endurant II/IIs endograft from 2014 to 2018. Sixty patients were entered in the study. Aneurysm sac shrinkage was defined as a decrease ≥5 mm of the maximum aortic diameter. Univariate methods and Kaplan-Meier plots assessed the potential impact of shrinkage. Results: Twenty-six patients (43.3%) experienced shrinkage at one year, and thirty-four (56.7%) had no shrinkage. Shrinkage was not significantly associated with any demographics or morbidity, except hypertension (p = 0.01). No aneurysm characteristics were associated with shrinkage. Non-compliance with instructions for use (IFU) in 13 patients (21.6%) was not associated with shrinkage. Three years after EVAR, freedom from secondary intervention was 85 ± 2% for the entire series, 92.3 ± 5.0% for the shrinkage group and 83.3 ± 9% for the no-shrinkage group (Logrank: p = 0.49). Survival at 3 years was not significantly different between the two groups (85.9 ± 7.0% vs. 79.0 ± 9.0%, Logrank: p = 0.59). Strict compliance with IFU was associated with fewer reinterventions at 3 years (92.1 ± 5.9% vs. 73.8 ± 15%, Logrank: p = 0.03). Similarly, survival at 3 years did not significantly differ between strict compliance with IFU and non-compliance (81.8 ± 7.0% vs. 78.6 ± 13.0%, Logrank: p = 0.32). Conclusion: This study suggests that shrinkage ≥5 mm at 1 year is not significantly associated with a better survival rate or a lower risk of secondary intervention than no-shrinkage. In this series, the risk of secondary intervention regardless of shrinkage seems to be linked more to non-compliance with IFU. Considering the small number of patients, these results must be confirmed by extensive prospective studies.

abstract_id: PUBMED:12932149 Does endograft support alter the rate of aneurysm sac shrinkage after endovascular repair? Purpose: To test the hypothesis that stent-graft support influences sac shrinkage independent of endoleak rates after endovascular repair of abdominal aortic aneurysms (AAA). Methods: Ninety AAA patients underwent treatment with bifurcated endoluminal devices at our institution between October 1996 and February 1999. Fifty-two patients were treated using a nonsupported (NS) Ancure endograft and 38 using a fully supported (FS) Excluder endograft. Computed tomographic (CT) scans were obtained during the first postoperative month and at 6, 12, and 24-month intervals. Aneurysm diameter was measured as the minor axis of the largest AAA axial slice on the CT scan. Six, 12, and 24-month sac sizes were compared to the first postoperative evaluation. Results: Successful endoluminal graft placement was accomplished in all patients. The two groups were matched for age, anatomical criteria, and comorbidities except for baseline AAA size: the mean diameter was 5.4 cm in the NS group and 5.0 cm for the FS group (p<0.01). Endoleak rates were 25% (13/52) in the NS group and 18% (7/38) in the FS group (p<0.05) at 1 month. All endoleaks that did not resolve spontaneously at 6 months were treated. Initial endoleak status did not affect the sac shrinkage rates at the 12 and 24-month evaluations. At 2 years, the NS group had greater shrinkage of the sac (1.2 cm) versus the FS cohort (0.3 cm, p<0.05). In addition, more patients in the NS group had sac shrinkage ≥5 mm (83% versus 18%, p<0.05). Conclusions: Despite a higher endoleak rate, the nonsupported Dacron Ancure endografts were associated with greater sac shrinkage at up to 24 months following implantation.
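PUBMED:35683617 compares time-to-event outcomes between the shrinkage and no-shrinkage groups with Kaplan-Meier estimates and log-rank tests. The sketch below shows that comparison with the lifelines library on simulated data; the group sizes echo the study (26 vs. 34), but every number is invented.

```python
# Simulated Kaplan-Meier / log-rank comparison in the spirit of PUBMED:35683617.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
t_shrink = rng.exponential(80, 26)   # months to reintervention, shrinkage group
t_none = rng.exponential(50, 34)     # no-shrinkage group
e_shrink = t_shrink < 36             # event observed within the 3-year window
e_none = t_none < 36

kmf = KaplanMeierFitter()
kmf.fit(np.minimum(t_shrink, 36), e_shrink, label="shrinkage")
print(kmf.survival_function_.tail(1))  # freedom from reintervention at 3 years

res = logrank_test(np.minimum(t_shrink, 36), np.minimum(t_none, 36),
                   event_observed_A=e_shrink, event_observed_B=e_none)
print(f"log-rank p = {res.p_value:.2f}")
```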
abstract_id: PUBMED:38188976 Factors Influencing on the Aneurysm Sac Shrinkage after Endovascular Abdominal Aortic Aneurysm Repair by the Analysis of the Patients with the Aneurysm Sac Shrinkage and Expansion. Objectives: Aneurysmal sac shrinkage has been reported as a strong predictor of favorable long-term outcomes after endovascular aneurysm repair (EVAR). We evaluated the effects of perioperative and intraoperative factors on aneurysm sac shrinkage. Methods: EVAR was performed for 296 patients during August 2009-December 2021. Nine patients with type Ia, Ib, or III endoleaks; 69 patients with a sac diameter change of less than 5 mm; and five patients with sac re-expansion after shrinking more than 5 mm were excluded. Thus, patients with sac shrinkage of 5 mm or more (79 patients, shrinkage group) and with sac expansion of 5 mm or more (18 patients) were included in this study. Antifibrinolytic therapy with tranexamic acid (TXA) 1500 mg/day for 6 months after EVAR was introduced in March 2013, and patent aortic side branches have been coil-embolized during EVAR since July 2015. Patients' background and patent aortic side branches at the end of EVAR were evaluated. Results: Univariate analysis comparing patients with sac shrinkage and sac expansion revealed that male sex (82.3% vs. 55.6%, p = 0.021), absence of antiplatelet therapy (40.5% vs. 66.7%, p = 0.044), and TXA use (79.8% vs. 38.9%, p < 0.001) were significantly associated with sac shrinkage. By multivariate analysis, the odds ratio of sac shrinkage was 11.7 for males, 0.1 for patients on antiplatelet therapy, and 6.5 for patients who received TXA. Patients with a patent inferior mesenteric artery (IMA) were fewer in the shrinkage group (20.3% vs. 77.8%, p < 0.001), and patients with two or fewer patent lumbar arteries (LAs) were more frequent in the shrinkage group (82.3% vs. 33.3%, p < 0.001). The odds ratio of sac shrinkage was 7.8 for an occluded IMA and 3.9 for two or fewer patent LAs. Conclusion: The possibility of sac shrinkage would be high for patients with an occluded IMA and two or fewer patent LAs at the end of EVAR who received TXA after EVAR. (This is a translation of Jpn J Vasc Surg 2022; 31: 291-297.)

abstract_id: PUBMED:32589118 Prognostic Significance of Aneurysm Sac Shrinkage After Endovascular Aneurysm Repair. Purpose: To investigate whether patients who develop aneurysm sac shrinkage following endovascular aneurysm repair (EVAR) have better outcomes than patients with a stable or increased aneurysm sac. Materials and Methods: The Healthcare Databases Advanced Search interface developed by the National Institute for Health and Care Excellence was used to interrogate MEDLINE and EMBASE. Thesaurus headings were adapted accordingly. Case-control studies were identified comparing outcomes in patients demonstrating aneurysm sac shrinkage after EVAR with those of patients with a stable or expanded aneurysm sac. Pooled estimates of dichotomous outcome data were calculated using the odds ratio (OR) and 95% confidence interval (CI). Meta-analysis of time-to-event data was conducted using the inverse-variance method; the results are reported as a summary hazard ratio (HR) and 95% CI. Summary outcome estimates were calculated using random-effects models. Results: Eight studies were included in quantitative synthesis reporting a total of 17,096 patients (8518 patients with sac shrinkage and 8578 patients without sac shrinkage). The pooled incidence of sac shrinkage at 12 months was 48% (95% CI 40% to 56%). Patients with aneurysm sac shrinkage had a significantly lower hazard of death (HR 0.73, 95% CI 0.60 to 0.87), secondary interventions (HR 0.42, 95% CI 0.29 to 0.62), and late complications (HR 0.37, 95% CI 0.24 to 0.56) than patients with a stable or increased aneurysm sac. Furthermore, their odds of rupture were significantly lower than those in patients without shrinkage (OR 0.09, 95% CI 0.02 to 0.36). Conclusion: Sac regression is correlated with improved survival and a reduced rate of secondary interventions and EVAR-related complications. The prognostic significance of sac regression should be considered in surveillance strategies. Intensified surveillance should be applied in patients who fail to achieve sac regression following EVAR.
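PUBMED:32589118 pools time-to-event estimates with the inverse-variance method. The sketch below shows the fixed-effect version of that calculation on hypothetical study-level hazard ratios; the actual study used random-effects models, which add a between-study heterogeneity term this simplified version omits.

```python
# Fixed-effect inverse-variance pooling of log hazard ratios; inputs are
# hypothetical, not the eight studies of PUBMED:32589118.
import numpy as np

def pool_hr(hrs, ci_los, ci_his):
    log_hr = np.log(hrs)
    # Recover each study's standard error from the width of its 95% CI.
    se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)
    w = 1 / se**2                                  # inverse-variance weights
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp([pooled, lo, hi])                # HR and 95% CI

print(pool_hr([0.70, 0.80, 0.65], [0.55, 0.60, 0.45], [0.89, 1.07, 0.94]))
```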
The prognostic significance of sac regression should be considered in surveillance strategies. Intensified surveillance should be applied in patients who fail to achieve sac regression following EVAR. abstract_id: PUBMED:35314300 Effect of abdominal aortic aneurysm sac shrinkage after endovascular repair on long-term outcomes between favorable and hostile neck anatomy. Objective: The aim of the present study was to analyze the influence of abdominal aortic aneurysm sac shrinkage on the long-term outcomes after endovascular aneurysm repair (EVAR) between patients with favorable and hostile neck anatomy. Methods: In the present study, we retrospectively analyzed data from 268 patients with fusiform aneurysms whose sac behavior had been evaluated for ≥1 year after EVAR. Hostile neck anatomy was defined as a proximal aneurysmal neck length of <10 mm or proximal neck angle of ≥60°. The primary end point was sac shrinkage, and the secondary end points included reintervention and a composite of rupture, type Ia endoleak, and late open conversion. Results: No differences were found in sac shrinkage between the patients with favorable and hostile neck anatomy (P = .47). Multivariate analysis revealed that an occluded inferior mesenteric artery (P = .04), the presence of posterior thrombus (P < .01), and no antiplatelet therapy (P = .01) were positive factors for sac shrinkage. The reintervention-free survival rate was better for patients with sac shrinkage compared with those without sac shrinkage regardless of the proximal neck anatomy (P < .01). The event-free survival rate of the composite end point at 5 and 10 years was 97.5% and 83.5% for patients with favorable neck anatomy and 86.8% and 81.0% for those with hostile neck anatomy, respectively (P = .02). In the subgroup with sac shrinkage, the event-free survival rates at 5 and 10 years were 98.7% and 98.7% for those with favorable neck anatomy and 92.7% and 82.4% for those with hostile neck anatomy, respectively (P = .02). In contrast, the event-free survival for patients without sac shrinkage did not differ between those with favorable and hostile neck anatomy (P = .08). Multivariate analysis showed that a hostile neck anatomy (hazard ratio, 3.32; 95% confidence interval, 1.26-8.80; P = .02) and no sac shrinkage (hazard ratio, 3.88; 95% confidence interval, 1.25-12.0; P = .02) were significant risk factors for the composite end point of rupture, type Ia endoleak, and late open conversion. Conclusions: Proximal neck anatomy did not affect sac shrinkage after EVAR. Sac shrinkage has been a good surrogate marker of better long-term outcomes after EVAR for patients with favorable neck anatomy. In contrast, critical events such as rupture and type Ia endoleak can occur even after sac shrinkage has been achieved in patients with hostile neck anatomy. abstract_id: PUBMED:37496654 Active aortic aneurysm sac treatment with shape memory polymer during endovascular aneurysm repair. Preprocedural image analysis and intraprocedural techniques to fully treat infrarenal abdominal aortic aneurysm sacs outside of the endograft with shape memory polymer (SMP) devices during endovascular aneurysm repair were developed. Prospective, multicenter, single-arm studies were performed. SMP is a porous, self-expanding polyurethane polymer material. Target lumen volumes (aortic flow lumen volume minus endograft volume) were estimated from the preprocedural imaging studies and endograft dimensions.
SMP was delivered immediately after endograft deployment via a 6F sheath jailed in a bowed position in the sac. Technical success was achieved in all cases, defined as implanting enough fully expanded SMP volume to treat the actual target lumen volume. abstract_id: PUBMED:37437582 Aneurysm Sac Shrinkage After EVAR Can Lead to Complications: A Case Report of Complete Endograft Thrombosis Due to Kinking. Background: Bilateral limb occlusion after endovascular repair of abdominal aortic aneurysms (EVAR) is an uncommon entity. The relationship between graft kinking and unilateral limb occlusion is widely described in the literature. Our aim is to report a case of complete endograft thrombosis due to bilateral limb kinking secondary to aneurysm sac shrinkage, treated by endovascular means. Case Report: A 67-year-old male with a history of EVAR with an Incraft® endograft (Cordis, Bridgewater, NJ, USA) four years before presented at the emergency department with disabling claudication of the right lower extremity and a better tolerated 10-month left extremity claudication. Complete endograft thrombosis with bilateral limb kinking and a remarkable reduction of the aneurysm sac was observed on computed tomography angiography. An endovascular repair was performed through bilateral open femoral access and with angiographic control through percutaneous left brachial access. Bilateral recanalization was achieved and the endograft was re-lined with two 10 × 150 mm Viabahn (WL Gore & Ass., Flagstaff, AZ, USA). Both sides were extended with an 11 × 50 mm Viabahn (WL Gore & Ass., Flagstaff, AZ, USA). The final angiographic control showed bilateral patency with no residual stenosis and the patient recovered distal pulses. Follow-up showed complete patency and no complications at 17 months. Conclusions: Bilateral limb occlusion is a rare complication with technically challenging treatment options. Aneurysm sac shrinkage can affect the endograft configuration, leading to limb distortion and occasionally to bilateral limb occlusion after EVAR. Special attention should be paid to imaging follow-up to detect these complications before occlusion occurs. abstract_id: PUBMED:38301871 Midterm Outcomes and Aneurysm Sac Dynamics Following Fenestrated Endovascular Aneurysm Repair after Previous Endovascular Aneurysm Repair. Objective: Fenestrated endovascular aneurysm repair (FEVAR) is a feasible option for aortic repair after endovascular aneurysm repair (EVAR), due to improved peri-operative outcomes compared with open conversion. However, little is known regarding the durability of FEVAR as a treatment for failed EVAR. Since aneurysm sac evolution is an important marker for success after aneurysm repair, the aim of the study was to examine midterm outcomes and aneurysm sac dynamics of FEVAR after prior EVAR. Methods: Patients undergoing FEVAR for complex abdominal aortic aneurysms from 2008 to 2021 at two hospitals in The Netherlands were included. Patients were categorised into primary FEVAR and FEVAR after EVAR. Outcomes included five year mortality rate, one year aneurysm sac dynamics (regression, stable, expansion), sac dynamics over time, and five year aortic-related procedures. Analyses were done using Kaplan-Meier methods, multivariable Cox regression analysis, chi-squared tests, and linear mixed effect models. Results: One hundred and ninety-six patients with FEVAR were identified, of whom 27% (n = 53) had had a prior EVAR. Patients with prior EVAR were significantly older (78 ± 6.7 years vs.
73 ± 5.9 years, p < .001). There were no significant differences in mortality rate. FEVAR after EVAR was associated with a higher risk of aortic-related procedures within five years (hazard ratio [HR] 2.6; 95% confidence interval [CI] 1.1-6.5, p = .037). Sac dynamics were assessed in 154 patients with available imaging. Patients with a prior EVAR showed lower rates of sac regression and higher rates of sac expansion at one year compared with primary FEVAR (sac expansion 48%, n = 21/44, vs. 8%, n = 9/110, p < .001). Sac dynamics over time showed similar results: sac growth for FEVAR after EVAR, and sac shrinkage for primary FEVAR (p < .001). Conclusion: There were high rates of sac expansion and a need for more secondary procedures in FEVAR after EVAR than in primary FEVAR patients, although this did not affect midterm survival. Future studies will have to assess whether FEVAR after EVAR is a valid intervention, and the underlying process that drives aneurysm sac growth following successful FEVAR after EVAR. abstract_id: PUBMED:37118936 Five-Year Outcomes of Endovascular Aortic Repair With the TREO Abdominal Endograft. Purpose: Newer generation abdominal endografts, including Treo (Terumo Aortic, Sunrise, Florida), have shown optimal safety and effectiveness in treating abdominal aortic aneurysms (AAAs), even with hostile anatomy, over the short- and mid-term. The durability of such results, however, is still a controversial issue, due to the paucity of long-term data. Our aim is to show the long-term outcomes of endovascular aortic repair of both standard and hostile AAAs with the Treo endograft in a cohort of patients treated between 2016 and 2017. Methods: We analyzed the postoperative follow-up of 37 consecutive patients who underwent endovascular aortic repair (EVAR) with the Treo endograft between 2016 and 2017, whose baseline clinical conditions, operative data, and short-term outcomes had been published in 2018. All patients were followed up by computed tomography angiography (CTA) at 6 and 12 months and 5 years postoperatively. Primary endpoints were aortic-related mortality, type I-III endoleak (EL), and reintervention rate. Secondary endpoints were the rates of type II ELs and aneurysm sac regression. Results: Of 37 patients, 27 had at least one criterion of anatomic hostility and 11 procedures were performed outside the device-specific instructions for use (IFU). In the perioperative period, we observed 100% technical success, with no perioperative mortality. Over a mean follow-up of 5.5 years (66 months), 3 patients (8.1%) were lost to follow-up and 3 (8.1%) died of non-aortic causes (overall survival: 91.9%). One type IA EL of an AAA with a hostile neck (but within the IFU) and a type III EL of an AAA with standard anatomy were observed and treated by endovascular relining (overall reintervention rate: 5.5%). Four type II ELs were associated with aneurysm sac stability over time and are still under surveillance. Mean aneurysm shrinkage was 11.25 ± 8.30 mm. Conclusion: The optimal results of the Treo endograft in terms of complication and reintervention rates reported over the mid-term by the current literature (ITA-ENDOBOOT registry) are maintained over the long term, in cases of both hostile and friendly aortic anatomy, with a satisfactory shrinkage rate of the aneurysm sac. Clinical Impact: The innovative characteristics of Treo and its short-term results are well-known and reported.
The present case series contributes to the scientific validation of a new-generation abdominal aortic endograft over the long term, focusing especially on its performance in treating AAAs with hostile anatomy. Its 5-year outcomes confirm the optimal results already reported over the short- and mid-term. Answer: Yes, endograft support can alter the rate of aneurysm sac shrinkage after endovascular repair. A study comparing nonsupported (NS) Ancure endografts with fully supported (FS) Excluder endografts found that despite a higher endoleak rate, the nonsupported Dacron Ancure endografts were associated with greater sac shrinkage at up to 24 months following implantation. More patients in the NS group experienced sac shrinkage of ≥5 mm compared to the FS group (83% versus 18%, p<0.05) (PUBMED:12932149). This suggests that the type of endograft support can influence the rate of aneurysm sac shrinkage after endovascular repair.
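A note on the arithmetic behind figures like the one just cited: several abstracts in this document compress a 2×2 comparison into an odds ratio with a Wald 95% confidence interval. The Python sketch below applies that standard recipe to the NS-versus-FS sac-shrinkage comparison; the cell counts are back-calculated from the reported percentages and group sizes (83% of 52, 18% of 38), so they are assumptions for illustration, not figures published in the abstract.

    import math

    # Approximate 2x2 table reconstructed from PUBMED:12932149 (assumed counts):
    # rows = endograft type, columns = sac shrinkage >=5 mm at 2 years (yes/no).
    a, b = 43, 52 - 43   # nonsupported (NS) group: ~83% of 52 with shrinkage
    c, d = 7, 38 - 7     # fully supported (FS) group: ~18% of 38 with shrinkage

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Wald SE of log(OR)
    low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.1f}, 95% CI {low:.1f} to {high:.1f}")

The same bookkeeping underlies the odds ratios quoted elsewhere in these abstracts, except that those are usually adjusted estimates from regression models rather than raw 2×2 tables.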
Instruction: Does modifying electrode placement of the 12 lead ECG matter in healthy subjects? Abstracts: abstract_id: PUBMED:20701990 Does modifying electrode placement of the 12 lead ECG matter in healthy subjects? Background: Limb electrodes for the 12 lead ECG are routinely placed on the torso during exercise stress testing or when limbs are clinically inaccessible. It is unclear whether such electrode modification produces ECG changes in healthy male or female subjects that are clinically important according to the 2009 AHA, ACCF, HRS guidelines. We therefore measured whether ECG modification produced clinically important or false positive ECG changes, e.g., appearance of Q waves in leads V1-V3, ST changes greater than 0.1 mV, T wave changes greater than 0.5 mV (frontal plane) or 1 mV (transverse plane), QRS axis shifts, or alterations to QTc/P-R/QRS intervals. Methods: The 12 lead ECG was measured in 18 healthy and semi-recumbent subjects using the standard and Takuma modified limb placements. Results: In the frontal plane we demonstrate that the modification of limb electrode placement produces small Q, R and T wave amplitude and QRS axis changes that are statistically but not clinically significant. In the transverse plane it produces no statistically or clinically significant changes in the ECG or in ST segment morphology, P-R, QRS or QTc intervals. Conclusions: We provide better and more robust evidence that routine modification of limb electrode placement produces only minor changes to the ECG waveform in healthy subjects. These are not clinically significant according to the 2009 guidelines and thus have no effect on the clinical specificity of the 12 lead ECG. abstract_id: PUBMED:27092502 Graphite Based Electrode for ECG Monitoring: Evaluation under Freshwater and Saltwater Conditions. We proposed new electrodes that are applicable for electrocardiogram (ECG) monitoring under freshwater- and saltwater-immersion conditions. Our proposed electrodes are made of graphite pencil lead (GPL) from a general-purpose writing pencil. We have fabricated two types of electrode: a pencil lead solid type (PLS) electrode and a pencil lead powder type (PLP) electrode. In order to assess the qualities of the PLS and PLP electrodes, we compared their performance with that of a commercial Ag/AgCl electrode under a total of seven different conditions: dry, freshwater immersion with/without movement, post-freshwater wet condition, saltwater immersion with/without movement, and post-saltwater wet condition. In both dry and post-freshwater wet conditions, all ECG-recorded PQRST waves were clearly discernible with all types of electrodes, Ag/AgCl, PLS, and PLP. On the other hand, under the freshwater- and saltwater-immersion conditions with/without movement, as well as post-saltwater wet conditions, we found that the proposed PLS and PLP electrodes provided better ECG waveform quality, with significant statistical differences compared with the quality provided by Ag/AgCl electrodes. abstract_id: PUBMED:28576322 Accuracy in precordial ECG lead placement: Improving performance through a peer-led educational intervention. Background And Objectives: Inaccurate electrocardiography (ECG) lead placement may lead to erroneous diagnoses, such as poor R wave progression. We sought to assess the accuracy of precordial ECG lead placement amongst hospital staff members, and to re-evaluate performance after an educational intervention.
Methods And Results: 100 randomly selected eligible staff members placed sticker dots on a mannequin; their positions were recorded on a radar plot and compared to the correct precordial lead positions. The commonest errors were placing V1 and V2 leads too superiorly, and V5 and V6 leads too medially. Following an educational intervention with the aid of moderated poster presentations and volunteer patients, the study was repeated six months later. 60 subjects correctly placed all leads, compared to 10 in the pre-intervention cohort (P < 0.0001), with the proportion achieving correct placement of any lead rising from 0.34 to 0.83 (p < 0.0001 for all leads). Conclusion: Incorrect ECG lead placement is common. This may be addressed through regular training incorporated into annual induction processes for relevant health care professionals. abstract_id: PUBMED:36436474 The ΔWaveECG: The differences to the normal 12-lead ECG amplitudes. Background: The QRS, ST segment, and T-wave waveforms of the electrocardiogram are difficult to interpret, especially for non-ECG expert readers, like general practitioners. As the ECG waveforms are influenced by many factors, like body build, age, sex, and electrode placement, the waveform is difficult to interpret even for experienced ECG readers. In this research we have created a novel method to distinguish normal from abnormal ECG waveforms for an individual ECG based on the ECG amplitude distribution derived from normal standard 12-lead ECG recordings. Aim: Creation of a normal ECG amplitude distribution to enable the distinction by non-ECG experts of normal from abnormal waveforms of the standard 12-lead ECG. Methods: The ECGs of healthy normal controls in the PTB-XL database were used to construct a normal amplitude distribution of the 12 lead ECG for males and females. All ECGs were resampled to have the same number of samples to enable the classification of an individual ECG as either normal or abnormal, i.e., within the normal amplitude distribution or outside it, the ΔWaveECG. Results: From the same PTB-XL database six ECGs were selected: normal, left and right bundle branch block, and three with a myocardial infarction. The normal ECG was obviously within the normal distribution, and all other five showed clearly abnormal ECG amplitudes outside the normal distribution in at least one of the ECG segments (QRS, ST segment and remaining STT segment). Conclusion: The ΔWaveECG can distinguish abnormal from normal ECG waveform segments, making the ECG easier to classify as normal or abnormal. Conduction disorders, ST changes due to ischemia, and abnormal T-waves are easy to detect, even by non-ECG expert readers, thus improving the early detection of cardiac patients. abstract_id: PUBMED:31266252 Simulating Arbitrary Electrode Reversals in Standard 12-lead ECG. Electrode reversal errors in standard 12-lead electrocardiograms (ECG) can produce significant ECG changes and, in turn, misleading diagnoses. Their detection is important but mostly limited to the design of criteria using ECG databases with simulated reversals, without Wilson's central terminal (WCT) potential change. This is, to the best of our knowledge, the first study that presents an algebraic transformation for simulation of all possible ECG cable reversals, including those with displaced WCT, where most of the leads appear with distorted morphology. The simulation model of ECG electrode swaps and the resultant WCT potential change is derived in the standard 12-lead ECG setup.
The transformation formulas are theoretically compared to known limb lead reversals and experimentally proven for unknown limb-chest electrode swaps using a 12-lead ECG database from 25 healthy volunteers (recordings without electrode swaps and with five unicolor pair swaps: red (right arm-C1), yellow (left arm-C2), green (left leg (LL)-C3), and black (right leg (RL)-C5)). Two applications of the transformation are shown to be feasible: 'Forward' (simulation of reordered leads from correct leads) and 'Inverse' (reconstruction of correct leads from an ECG recorded with known electrode reversals); the arm-swap special case of the 'Forward' direction is sketched after this question's answer. Deficiencies are found only when the ground RL electrode is swapped, as this case requires guessing the unknown RL electrode potential. We suggest assuming that potential to be equal to that of the LL electrode. The 'Forward' transformation is important for comprehensive training platforms of humans and machines to reliably recognize simulated electrode swaps using the available resources of correctly recorded ECG databases. The 'Inverse' transformation can save time and costs for repeated ECG recordings by reconstructing the correct lead set if a lead swap is detected after the end of the recording. In cases when the electrode reversal is unknown but a prior correct ECG recording of the same patient is available, the 'Inverse' transformation is tested to detect the exact swapping of the electrodes with an accuracy of 96% to 100%.
A higher true negative rate is achieved, with Sp > 99% (standard 12-lead ECG), 81.9% (V4R-V3R), 91.4% (V8-V9), and 100% (V4R-V9, V4R-V8, V3R-V9, V3R-V8), which is reasonable considering the low prevalence of lead swaps in the clinical environment. Conclusions: Inter-lead correlation analysis is able to provide robust detection of cable reversals in standard 12-lead ECG, effectively extended to 16-lead ECG applications that have not previously been addressed. abstract_id: PUBMED:30716527 Common source of miscalculation and misclassification of P-wave negativity and P-wave terminal force in lead V1. Background: P-wave terminal force (PTF) > 4000 ms·μV and deep terminal negativity (DTN) are ECG markers of left atrial abnormality associated with both atrial fibrillation and stroke. When the precordial lead V1 is placed higher than the correct position in the fourth intercostal space, it may cause increased PTF and DTN. Several studies have documented that electrode misplacement, especially high placement, is common. The influence of electrode misplacement on these novel ECG markers has not previously been quantified. Objective: The objective was to assess the influence of electrode misplacement on PTF and DTN. Method: 12-lead ECGs were recorded in 29 healthy volunteers from the Department of Cardiology at the Copenhagen University Hospital of Bispebjerg. The precordial electrode V1 was placed in the fourth, third and second intercostal space, giving a total of 3 ECGs per subject. Continuous variables were compared using Dunnett's post-hoc test and categorical variables were compared using Fisher's exact test. Results: High placement of V1 electrodes resulted in a more than three-fold increase of PTF (IC4 = 2267 ms·μV, IC2 = 7996 ms·μV, p-value < 0.001). There was a similar increase of DTN (IC4 = 0%, IC2 = 28%, p-value < 0.001). P-wave area and amplitude of the negative deflection increased, and P-wave area and amplitude of the positive deflection decreased. The P-wave shape changed from being predominantly positive or biphasic in IC4 to 90% negative in IC2. The PR-duration and P-wave duration were not altered by electrode placement. Conclusion: High electrode placement results in significant alteration of PTF and DTN in lead V1. abstract_id: PUBMED:34854951 Effect of the recording condition on the quality of a single-lead electrocardiogram. Although many wearable single-lead electrocardiogram (ECG) monitoring devices have been developed, information regarding their ECG quality is limited. This study aimed to evaluate the quality of single-lead ECG in healthy subjects under various conditions (body positions and motions) and in patients with arrhythmias, to estimate requirements for automatic analysis, and to identify a way to improve ECG quality by changing the type and placement of electrodes. A single-lead ECG transmitter was placed on the sternum with a pair of electrodes, and ECG was simultaneously recorded with a conventional Holter ECG in 12 healthy subjects under various conditions and 35 patients with arrhythmias. Subjects with arrhythmias were divided into sinus rhythm (SR) and atrial fibrillation (AF) groups. ECG quality was assessed by calculating the sensitivity and positive predictive value (PPV) of the visual detection of QRS complexes (vQRS), automatic detection of QRS complexes (aQRS), and visual detection of P waves (vP). Accuracy was defined as 100% sensitivity and PPV.
We also measured the amplitude of the baseline, P wave, and QRS complex, and calculated the signal-to-noise ratio (SNR). We then focused on aQRS and estimated thresholds to obtain an accurate aQRS in more than 95% of the data. Finally, we sought to improve ECG quality by changing electrode placement using offset-type electrodes in 10 healthy subjects. The single-lead ECG provided 100% accuracy for vQRS, 87% for aQRS, and 74% for vP in healthy subjects under various conditions. Failure of accurate detection occurred in several motions in which the baseline amplitude was increased or in subjects with low QRS or P amplitude, resulting in low SNR. The single-lead ECG provided 97% accuracy for vQRS, 80% for aQRS in patients with arrhythmias, and 95% accuracy for vP in the SR group. The AF group showed higher baseline amplitude than the SR group (0.08 mV vs. 0.02 mV, P < 0.01) but no significant difference in accuracy for aQRS (79% vs. 81%, P = 1.00). The thresholds to obtain an accurate aQRS were a QRS amplitude > 0.42 mV and a baseline amplitude < 0.20 mV. The QRS amplitude was significantly influenced by electrode placement and body position (P < 0.01 for both, two-way analysis of variance), and the maximum reduction by changing body position was estimated as 30% compared to the sitting posture. The QRS amplitude significantly increased when the inter-electrode distance was extended vertically (1.51 mV for vertical extension vs. 0.93 mV for control, P < 0.01). The single-lead ECG provided at least 97% accuracy for vQRS, 80% for aQRS, and 74% for vP. To obtain a stable aQRS in any body position, a QRS amplitude > 0.60 mV and a baseline amplitude < 0.20 mV were required in the sitting posture, considering the reduction induced by changing body position. Vertical extension of the inter-electrode distance increased the QRS amplitude. abstract_id: PUBMED:33488027 The Reconstruction of a 12-Lead Electrocardiogram from a Reduced Lead Set Using a Focus Time-Delay Neural Network. Background: The 12-lead electrocardiogram (ECG) is the gold-standard ECG method used by cardiologists. However, accurate electrode placement is difficult and time-consuming, and can lead to incorrect interpretation.
Objectives: The objective of this study was to accurately reconstruct a full 12-lead ECG from a reduced lead set. Methods: Five-electrode placement was used to generate leads I, II, III, aVL, aVR, aVF and V2. These seven leads served as inputs to the focus time-delay neural network (FTDNN), which derived the remaining five precordial leads (V1, V3-V6). An online archived medical database containing 549 cases of ECG recordings was used to train, validate and test the FTDNN. Results: After removing outliers, the reconstructed leads exhibited correlation values between 0.8609 and 0.9678 as well as low root mean square error values between 123 μV and 245 μV across all cases, for both healthy controls and cardiovascular disease subgroups except the bundle branch block disease subgroup. The results of the FTDNN method compared favourably to those of prior lead reconstruction methods. Conclusions: A standard 12-lead ECG was successfully reconstructed with high quantitative correlations from a reduced lead set using only five electrodes, of which four were placed on the limbs. Less reliance on precordial leads will aid in the reduction of electrode placement errors, ultimately improving ECG lead accuracy and reducing the number of cases that are incorrectly diagnosed. Answer: Modifying electrode placement of the 12 lead ECG in healthy subjects produces only minor changes to the ECG waveform that are not clinically significant according to the 2009 guidelines. This suggests that such modifications have no effect on the clinical specificity of the 12 lead ECG (PUBMED:20701990).
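The sketch referenced in the PUBMED:31266252 abstract above. Two abstracts in this set are explicitly algorithmic: PUBMED:31266252 derives 'Forward' transformations that turn correctly recorded leads into the leads a cable swap would produce, and PUBMED:27480730 flags swaps through inter-lead correlations. The Python below is not either paper's published formula set; it is a minimal illustration of the best-known special case, the left arm/right arm (LA/RA) reversal, which follows directly from the Einthoven/Goldberger lead definitions, plus a crude correlation screen whose -0.8 threshold is an assumption for illustration only.

    import numpy as np

    def simulate_la_ra_swap(leads):
        # 'Forward' simulation of a left arm/right arm cable swap.
        # With I = LA-RA, II = LL-RA, III = LL-LA, aVR = RA-(LA+LL)/2,
        # aVL = LA-(RA+LL)/2, and aVF = LL-(RA+LA)/2, exchanging LA and RA
        # gives I -> -I, II <-> III, aVR <-> aVL, and leaves aVF unchanged.
        # The chest leads keep their morphology because Wilson's central
        # terminal, (RA+LA+LL)/3, is invariant under any permutation of
        # the three limb electrodes it averages.
        swapped = dict(leads)  # leads: dict of lead name -> 1-D signal array
        swapped["I"] = -leads["I"]
        swapped["II"], swapped["III"] = leads["III"], leads["II"]
        swapped["aVR"], swapped["aVL"] = leads["aVL"], leads["aVR"]
        return swapped

    def suspect_la_ra_swap(leads, threshold=-0.8):
        # Crude screen in the spirit (not the letter) of inter-lead
        # correlation analysis: lead I normally correlates positively with
        # the lateral chest lead V6, so strong anticorrelation is suspicious.
        r = np.corrcoef(leads["I"], leads["V6"])[0, 1]
        return r < threshold

Reversals that displace Wilson's central terminal (e.g., limb-to-chest swaps) distort most leads at once, which is why the full transformations derived in the paper, rather than this special case, are needed for arbitrary reversals.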
Instruction: Is Patch Testing with Food Additives Useful in Children with Atopic Eczema? Abstracts: abstract_id: PUBMED:25873103 Is Patch Testing with Food Additives Useful in Children with Atopic Eczema? Background: Atopy patch testing is a useful way to determine delayed-type hypersensitivity reactions to foods and aeroallergens. Although food additives have been accused of worsening atopic eczema symptoms, according to recent studies the role of food additives in atopic eczema remains unclear. The purpose of our study was to investigate food additive hypersensitivity in a group of children with atopic eczema by using standardized atopy patch testing and to determine the role of food additive hypersensitivity in atopic eczema. Methods: Thirty-four children with atopic eczema and 33 healthy children were enrolled in the study. Children who consumed foods containing additives and did not use either antihistamines or local or systemic corticosteroids for at least 7 days prior to admission were enrolled in the study. All children were subjected to atopy patch testing, and after 48 and 72 hours their skin reactions were evaluated using standard guidelines. Results: Positive atopy patch test results were significantly higher in the atopic eczema group. Forty-one percent of the atopic eczema group (n = 14) and 15.2% (n = 5) of the control group had positive atopy patch test results with food additives (p = 0.036) (estimated relative risk 1.68, case odds 0.7, control odds 0.17). Carmine hypersensitivity and the consumption of foods containing carmine, such as gumdrops, salami, and sausage, were significantly higher in the children with atopic eczema. Conclusion: This is the first study investigating hypersensitivity to food additives in children with atopic eczema. Our results indicate that carmine may play a role in atopic eczema. abstract_id: PUBMED:32792881 Evaluation of contact sensitivity to food additives in children with atopic dermatitis. Introduction: Atopic dermatitis (AD) is a chronic inflammatory disease caused by the complex interaction of genetic, immune and environmental factors such as food and airborne allergens. The atopy patch test (APT) is a useful way to determine delayed-type hypersensitivity reactions to food and aeroallergens. Many studies have also suggested that food additives are associated with dermatologic adverse reactions and the aggravation of pre-existing atopic dermatitis symptoms. Aim: To elucidate the contact sensitivity to food additives in children suffering from AD by using standardized atopy patch testing. Material And Methods: A total of 45 children with AD and 20 healthy children were enrolled. All the children regularly consumed food containing additives and were subjected to atopy patch tests. Results: In total, 28 (62%) children with AD and 4 (20%) healthy children had positive patch test reactions to ≥1 allergen. There was a significant difference (p = 0.04) between the groups in the patch test positivity rate, and the most common allergen that elicited positive patch test results in the AD group was azorubine (n = 11, 24.4%, p = 0.014). Conclusions: In our study, contact sensitivity was detected more frequently in AD patients. Food additives may play a role in the development and exacerbation of AD. Atopy patch testing with food additives can be useful in the treatment and follow-up of children with AD.
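The group comparison just reported (28 of 45 AD children versus 4 of 20 controls with at least one positive reaction) is an ordinary 2×2 contingency problem. The abstract does not state which test produced its p-value, so the snippet below, which runs Fisher's exact test on those counts with SciPy, illustrates the calculation rather than reproduces the authors' analysis, and its p-value need not match the published 0.04.

    from scipy.stats import fisher_exact

    # Rows: AD group, control group; columns: positive, negative patch test.
    table = [[28, 45 - 28],
             [4, 20 - 4]]
    odds_ratio, p_value = fisher_exact(table)  # two-sided by default
    print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")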
abstract_id: PUBMED:32792880 Can allergy patch tests with food additives help to diagnose the cause in childhood chronic spontaneous urticaria? Introduction: Chronic spontaneous urticaria (CSU) is characterized by the onset of symptoms which are not induced by specific triggers, but are rather spontaneous. A considerable number of patients report that foods or food additives might be responsible for their chronic urticaria. Aim: To determine the prevalence of sensitization to food additives in children with CSU using atopy patch tests (APT). Material And Methods: Atopy patch tests for 23 different food additives were applied to 120 children with CSU and 61 healthy controls. Results: Seventeen (14.1%) children with CSU were sensitized to food additives. None of the control group had a positive APT. Azorubine and cochineal red were the food additives detected with the highest sensitization rates (5.8% (n = 7) and 6.7% (n = 8), respectively). Conclusions: There can be an association between food additives and CSU. APTs may be a helpful tool in the assessment and management of CSU so that easier-to-follow diets and effective treatments can be offered to families. abstract_id: PUBMED:9418765 Patch testing in children and adolescents: five years' experience and follow-up. Background: Allergic contact dermatitis in children is a significant clinical problem. Little information is available concerning the value of patch testing and the outcome in these children. Objective: Our purpose was to assess the value of patch testing in children and the outcome of allergic contact dermatitis in childhood. Methods: Clinical data on 83 children patch tested during a 5-year period were assessed. Clinical follow-up on 68 subjects was performed. Results: Overall, 41 children had one or more allergic reactions (49%). Reactions to metals, topical preparations, and food additives were common. The clinical outcome at 6 months was significantly better for the 36 children with a relevant allergen on patch testing than for the 32 with no allergen or no relevant allergen (p = 0.006). Conclusion: Patch testing is useful in the management of children suspected of having an allergic contact dermatitis. Patch testing and subsequent allergen avoidance may improve the prognosis in children with a relevant contact allergen. abstract_id: PUBMED:12838776 Patch tests in the diagnosis of food allergies in the nursing infant. The atopy patch-test has been shown to be useful in the diagnosis of delayed reactions in infants with atopic dermatitis or digestive symptoms. The combination of skin prick testing and patch testing can significantly enhance the accuracy in diagnosis of specific food allergy in infants with atopic dermatitis or digestive symptoms. abstract_id: PUBMED:21504435 Patch testing is a useful investigation in children with eczema. Background: Allergic contact dermatitis in children is less recognized than in adults. However, recently, allergic contact dermatitis has started to attract more interest as a cause of or contributor to eczema in children, and patch testing has been gaining in recognition as a useful diagnostic tool in this group. Objectives: The aim of this analysis was to investigate the results of patch testing of selected children with eczema of various types (mostly atopic dermatitis) attending the Sheffield Children's Hospital, and to assess potential allergens that might elicit allergic contact dermatitis.
Patients And Methods: We analysed retrospectively the patch test results in 110 children aged between 2 and 18 years, referred to a contact dermatitis clinic between April 2002 and December 2008. We looked at the percentages of relevant positive reactions in boys and girls, by age group, and recorded the outcome of treatment following patch testing. Results: One or more positive allergic reactions of current or past relevance were found in 48/110 children (44%; 29 females and 19 males). There were 94 allergy-positive patch test reactions in 110 patients: 81 had a reaction of current or past relevance, 12 had a reaction of unknown relevance, and 1 had a reaction that was a cross-reaction. The commonest allergens with present or past relevance were medicaments, plant allergens, house dust mite, nickel, Amerchol® L101 (a lanolin derivative), and 2-bromo-2-nitropropane-1,3-diol. However, finding a positive allergen was not associated with a better clinical outcome. Conclusions: We have shown that patch testing can identify relevant allergens in 44% of children with eczema. The commonest relevant allergens were medicament allergens, plant allergens, house dust mite, nickel, Amerchol® L101, and 2-bromo-2-nitropropane-1,3-diol. Patch testing can be performed in children as young as 2 years with the proper preparation. abstract_id: PUBMED:22359874 Atopy patch test--when is it useful? The aim of the article is to introduce the atopy patch test (APT) as a model of cellular immunity reaction. APT is an epicutaneous test performed with food and aeroallergens, and represents a good model for T lymphocyte hypersensitivity. It is compared with the skin prick test (SPT). Its value is supported by the fact that atopic dermatitis is the result of complex immune interactions and involves both Coombs and Gell type IV and type I reactions. In this review, we briefly discuss the etiopathogenesis of atopic dermatitis, the distinction between extrinsic and intrinsic forms, and compare the value of APT with SPT and IgE determination. APT includes epicutaneous application of type I allergens known to elicit IgE-mediated reactions, followed by evaluation of the eczematous skin reaction after 48 and 72 hours. The limitations of APT include the lack of test standardization, but there are also comparative advantages over SPT and specific IgE determination. We also briefly discuss the most important food and aeroallergens. APT has been recognized as a diagnostic tool in the evaluation of food allergy and aeroallergens such as house dust mite, pollen and animal dander. APT is a useful diagnostic procedure in patients with atopic dermatitis allergic to inhalant allergens and in children with food allergy younger than 2 years. The sensitivity and specificity of the test greatly depend on the allergen tested and patient age. abstract_id: PUBMED:12209104 Importance of chamber size for the outcome of atopy patch testing in children with atopic dermatitis and food allergy. Because the small backs of young children offer little space for atopy patch testing, it would be helpful to use smaller chambers. We therefore compared 6-mm chambers with the 12-mm chambers used in previous studies. We performed 55 double-blind, placebo-controlled food challenges in 30 children (17 boys, 13 girls) aged 3 to 58 months (median, 13 months). Sensitivity, specificity, positive predictive value, negative predictive value, and efficiency results show that the 12-mm chamber size yields much better results than the 6-mm chamber size (these metrics are unpacked in a short worked example after this question's answer).
Therefore, 12-mm cups should be used for atopy patch tests with food, even in infants and small children. abstract_id: PUBMED:21854421 Sole dermatitis in children: patch testing revisited. Although dermatoses affecting the soles of the feet in children are regularly encountered in dermatology clinics, the relationship with allergic contact dermatitis affecting this part of the foot is not well established. The aim of this study was to evaluate the relevance of patch testing children with sole dermatoses. We reviewed the results of all patch tests performed in children (<18 years) presenting with dermatoses involving the soles between 1997 and 2009 from our departmental patch test database. Forty-one children were identified: 27 children with an inflammatory dermatitis affecting the sole and 14 children with juvenile plantar dermatosis (JPD). Seventeen (41%) children had at least one clinically relevant positive patch test reaction. Rubber additives and potassium dichromate were the most frequent allergens identified. Forty-eight percent of children with inflammatory dermatitis affecting the sole and 29% of children with JPD had at least one relevant reaction. Of the children with relevant reactions, 76% had a personal or family history of atopy. Our results demonstrate the importance of patch testing children with dermatoses affecting the soles and indicate that a history of atopy or a diagnosis of JPD should not deter investigation. abstract_id: PUBMED:24102181 Allergic contact dermatitis in Danish children referred for patch testing - a nationwide multicentre study. Background: Although contact allergy among children was previously considered to be rare, data from the past decade have shown that it is common among children and that the prevalence may be increasing. Objectives: To describe the demographics of all children referred for patch testing in Denmark during 2003-2011, to examine the frequency and relevance of positive patch test reactions, and to assess the most common allergens. Methods: A retrospective analysis of the patch test data from the Danish National Database of Contact Allergy was performed. Results: Of 2594 children and adolescents aged 1-17 years, 25.1% had one or more positive patch test reactions. The associated relevance was 66.4%. The most common sensitizers were metals, fragrances, and hair dyes. The frequency of positive patch test reactions and allergic contact dermatitis was significantly higher among girls. Conclusions: Allergic contact dermatitis in children is a significant clinical problem. Contact allergy should always be considered when children with recalcitrant eczema are encountered, and special attention should be paid to girls. Patch testing is important, and children may be tested with the same patch test concentrations as adults. Answer: Patch testing with food additives appears to be useful in children with atopic eczema. Studies have shown that children with atopic eczema may have a higher rate of positive atopy patch test results with food additives compared to healthy children. For instance, one study found that 41% of the atopic eczema group had positive atopy patch test results with food additives, significantly higher than the 15.2% observed in the control group, suggesting that food additives like carmine may play a role in atopic eczema (PUBMED:25873103).
Another study reported that 62% of children with atopic dermatitis (AD) had positive patch test reactions to food additives, with azorubine being the most common allergen eliciting positive results, indicating that food additives may contribute to the development and exacerbation of AD (PUBMED:32792881). Moreover, patch testing has been shown to be a useful diagnostic tool in children with eczema, identifying relevant allergens in 44% of cases (PUBMED:21504435). The atopy patch test (APT) is also highlighted as a model for T lymphocyte hypersensitivity and is useful in evaluating food allergy and sensitization to aeroallergens in patients with atopic dermatitis, especially in children with food allergy younger than 2 years (PUBMED:22359874). Overall, the evidence suggests that patch testing with food additives can be a valuable part of the diagnostic process for children with atopic eczema, helping to identify potential triggers and contributing factors to the condition, and thereby aiding in the management and treatment of the disease.
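The worked example referenced in the chamber-size abstract above (PUBMED:12209104): sensitivity, specificity, positive and negative predictive value, and efficiency all fall out of a single 2×2 table counted against a reference standard (there, the double-blind, placebo-controlled food challenge). The counts below are invented for illustration, since the abstract does not report its underlying table.

    def diagnostic_metrics(tp, fp, fn, tn):
        # tp, fp, fn, tn = true/false positives and false/true negatives,
        # counted against the reference standard.
        total = tp + fp + fn + tn
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "efficiency": (tp + tn) / total,  # overall accuracy
        }

    # Invented counts for illustration only:
    print(diagnostic_metrics(tp=20, fp=3, fn=5, tn=27))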
Instruction: Should laparoscopic paraesophageal hernia repair be abandoned in favor of the open approach? Abstracts: abstract_id: PUBMED:15531968 Should laparoscopic paraesophageal hernia repair be abandoned in favor of the open approach? Background: The most appropriate approach to the repair of large paraesophageal hernias remains controversial. Despite early results of excellent outcomes after laparoscopic repair, recent reports of high recurrence require that this approach be reevaluated. Methods: For this study, 60 primary paraesophageal hernias consecutively repaired at one institution from 1990 to 2002 were reviewed. These 25 open transabdominal and 35 laparoscopic repairs were compared for operative, short-, and long-term outcomes on the basis of quality-of-life questionnaires and radiographs. Results: No difference in patient characteristics was detected. Laparoscopic repair resulted in lower blood loss, fewer intraoperative complications, and a shorter length of hospital stay. No difference in general or disease-specific quality-of-life was documented. Radiographic follow-up was available for 78% of open and 91% of laparoscopic repairs, showing anatomic recurrence rates of 44% and 23%, respectively (p = 0.11). Conclusions: Laparoscopic repair should remain in the forefront for the management of paraesophageal hernias. However, there is considerable room for improvement in reducing the incidence of recurrence. abstract_id: PUBMED:31564396 Laparoscopic Approach to Paraesophageal Hernia Repair. The introduction of minimally invasive techniques to the field of foregut surgery has revolutionized the surgical approach to giant paraesophageal hernia repair. Laparoscopy has become the standard approach in patients with giant paraesophageal hernia because it has been shown to be safe and is associated with lower morbidity and mortality when compared with various open approaches. Specifically, it has been associated with decreased intraoperative blood loss, decreased complications, and reduced hospital length of stay. This is despite a rise in comorbid conditions associated with this patient population. This article describes our operative approach to laparoscopic giant paraesophageal hernia repair. abstract_id: PUBMED:32500419 Outcomes of Paraesophageal Hernia Repair: Analysis of the Veterans Affairs Surgical Quality Improvement Program Database. Background: While there have been many outcome studies on paraesophageal hernia repair in the civilian population, recent data on the veteran population are sparse. This study analyzes the mortality and morbidities of veterans who underwent paraesophageal hernia repair in the Veterans Affairs Surgical Quality Improvement Program database. Methods: Veterans who underwent paraesophageal hernia repair from 2010 to 2017 were identified using Current Procedural Terminology codes. Multivariable analysis was used to compare the laparoscopic and open (abdominal and thoracic) groups. The outcomes were postoperative complications and mortality. Results: There were 1607 patients in the laparoscopic group and 366 in the open group, with 84.1% men and a mean age of 61 years. Gender and body mass index did not influence the type of surgical approach. The mortality rates at 30 and 180 days were 0.5% and 0.7%, respectively. Postoperative complications, including reintubation (2.2%), pneumonia (2.0%), intubation > 48 h (2.0%), and sepsis (2.0%), were higher in the open group (15.9% versus 7.2%, p < 0.001).
The laparoscopic group had a significantly shorter length of stay (4.3 versus 9.6 days, p < 0.001) and a lower percentage of return to surgery within 30 days (3.9% versus 8.2%, p < 0.001) than the open group. The ratio of open versus laparoscopic paraesophageal hernia repairs varied significantly by Veterans Integrated Services Network region. Conclusions: Veterans undergoing laparoscopic paraesophageal hernia repair experience outcomes similar to those of patients in the private sector. Veterans who underwent laparoscopic paraesophageal hernia repair had significantly fewer complications than those treated with an open approach, even after adjusting for patient comorbidities and demographics. The difference in open versus laparoscopic practices between various regions requires further investigation. abstract_id: PUBMED:31564393 Surgical Techniques for Robotically-Assisted Laparoscopic Paraesophageal Hernia Repair. The surgical approach to giant paraesophageal hernia repair has evolved considerably, from an open approach to minimally invasive approaches. Laparoscopic and robotic-assisted approaches to giant paraesophageal hernia have been considered safe and are associated with lower morbidity and mortality. Limited data exist comparing the efficacy of laparoscopic and robotic-assisted giant paraesophageal hernia repairs, but the benefits of robotic surgery include superior optics and freedom of motion, thus allowing surgeons to accomplish the key points in a successful repair without compromising patient outcomes. abstract_id: PUBMED:21528087 Laparoscopic repair of hiatal hernia with mesenterioaxial volvulus of the stomach. Although mesenterioaxial gastric volvulus is an uncommon entity characterized by rotation at the transverse axis of the stomach, laparoscopic repair procedures remain controversial. We report a case of mesenterioaxial intrathoracic gastric volvulus, which was successfully treated with laparoscopic repair of the diaphragmatic hiatal defect using a polytetrafluoroethylene mesh associated with Toupet fundoplication. A 70-year-old Japanese woman was admitted to our hospital because of sudden onset of upper abdominal pain. An upper gastrointestinal series revealed an incarcerated intrathoracic mesenterioaxial volvulus of the distal portion of the stomach and the duodenum. The complete laparoscopic approach was used to repair the volvulus. The laparoscopic procedures involved the repair of the hiatal hernia using polytetrafluoroethylene mesh and Toupet fundoplication. This case highlights the feasibility and effectiveness of the laparoscopic procedure, and laparoscopic repair of the hiatal defect using a polytetrafluoroethylene mesh associated with Toupet fundoplication may be useful for preventing postoperative recurrence of hiatal hernia, volvulus, and gastroesophageal reflux. abstract_id: PUBMED:23743369 Open versus laparoscopic hiatal hernia repair. Background: The literature reports the efficacy of the laparoscopic approach to paraesophageal hiatal hernia repair. However, its adoption as the preferred surgical approach and the risks associated with paraesophageal hiatal hernia repair have not been reviewed in a large database.
Method: The Nationwide Inpatient Sample dataset was queried from 1998 to 2005 for patients who underwent repair of a complicated (the entire stomach moves into the chest cavity) versus uncomplicated (only the upper part of the stomach protrudes into the chest) paraesophageal hiatal hernia via the laparoscopic, open abdominal, or open thoracic approach. A multivariate analysis was performed controlling for demographics and comorbidities while looking for independent risk factors for mortality. Results: In total, 23,514 patients met the inclusion criteria. By surgical approach, 55% of patients underwent open abdominal, 35% laparoscopic, and 10% open thoracic repairs. Length of stay was significantly reduced for all patients after laparoscopic repair (P < .001). Age ≥60 years and nonwhite ethnicity were associated with significantly higher odds of death. Laparoscopic repair and obesity were associated with lower odds of death in the uncomplicated group. Conclusion: Laparoscopic repair of paraesophageal hiatal hernia is associated with a lower mortality in the uncomplicated group. However, older age and Hispanic ethnicity increased the odds of death. abstract_id: PUBMED:29974871 Techniques and pitfalls of laparoscopic paraesophageal hernia repair in severe kyphoscoliosis patients. Background: Increasing evidence suggests that kyphoscoliosis may play a role in the pathophysiology of paraesophageal hernia development. The presence of severe kyphoscoliosis not only increases the incidence of paraesophageal hernia but also increases the risk of hiatal hernia (HH) repair. Moreover, the technical skills and the pitfalls of laparoscopic repair of HH in this special condition have not yet been described. Methods: The technical skills, experience and pitfalls of laparoscopic paraesophageal hernia repair in severe kyphoscoliosis patients were described. These include perioperative care of patients' pulmonary function, patients' operating position and trocar placement, and the key steps and risks of laparoscopic HH repair in this special condition. Results: Paraesophageal HHs were successfully laparoscopically repaired, and prolonged hospital stay was due to post-operative pulmonary complications. Conclusion: These techniques are essential to minimise the perioperative complications of laparoscopic paraesophageal hernia repair in severe kyphoscoliosis patients, and great pulmonary care is required in these patients. abstract_id: PUBMED:23362434 Strangulation of the stomach and the transverse colon following laparoscopic esophageal hiatal hernia repair. The authors present a 32-year-old male patient with incarceration of a recurrent esophageal hiatal hernia after laparoscopic repair. A life-threatening strangulation of the stomach and the transverse colon occurred within a few days after the operation. Relapse of hiatal hernia accounts for almost half of the early complications characteristic of the laparoscopic approach. General recommendations regarding surgical technique as well as perioperative care have been proposed in order to decrease the risk of relapse. Also, routine contrast radiology on the first or second day following the laparoscopic operation facilitates early diagnosis of relapse of hiatal hernia with emergent reoperation. This may result in decreased morbidity and improved overall outcome of the treatment. abstract_id: PUBMED:22127087 Utilization and outcomes of laparoscopic versus open paraesophageal hernia repair.
The optimal operative approach for repair of diaphragmatic hernia remains debated. The aim of this study was to examine the utilization of laparoscopy and compare the outcomes of laparoscopic versus open paraesophageal hernia repair performed at academic centers. Data were obtained from the University HealthSystem Consortium database on 2726 patients who underwent a laparoscopic (n = 2069) or open (n = 657) paraesophageal hernia repair between 2007 and 2010. The data were reviewed for demographics, length of stay, 30-day readmission, morbidity, in-hospital mortality, and costs. For elective procedures, utilization of laparoscopic repair was 81 per cent and was associated with a shorter hospital stay (3.7 vs 8.3 days, P < 0.01), less need for intensive care unit care (13% vs 35%, P < 0.01), and lower overall complications (2.7% vs 8.4%, P < 0.01), 30-day readmissions (1.4% vs 3.4%, P < 0.01), and costs ($15,227 vs $24,263, P < 0.01). The in-hospital mortality was 0.4 per cent for laparoscopic repair versus 0.0 per cent for open repair. In patients presenting with obstruction or gangrene, utilization of laparoscopic repair was 57 per cent and was similarly associated with improved outcomes compared with open repair. Within the context of academic centers, the current practice of paraesophageal hernia repair is mostly laparoscopy. Compared with open repair, laparoscopic repair was associated with superior perioperative outcomes even in cases presenting with obstruction or gangrene. abstract_id: PUBMED:21435915 Laparoscopic versus open repair of paraesophageal hernia: the second decade. Background: A decade ago we reported that laparoscopic repair of paraesophageal hernia (PEH) had an objective recurrence rate of 42% compared with 15% after open repair. Since that report we have modified our laparoscopic technique. The aim of this study was to determine if these modifications have reduced the rate of objective hernia recurrence. Study Design: We retrospectively identified all patients that had primary repair of a PEH with ≥ 50% of the stomach in the chest from May 1998 to January 2010 with objective follow-up by videoesophagram. The finding of any size of hernia was considered to be recurrence. Results: There were 73 laparoscopic and 73 open PEH repairs that met the study criteria. There were no significant differences in gender, body mass index, or prevalence of a comorbid condition between groups. The median follow-up was similar (12 months laparoscopic versus 16 months open; p = 0.11). In the laparoscopic group, 84% of patients had absorbable mesh reinforcement of the crural closure and 40% had a Collis gastroplasty, compared with 32% and 26%, respectively, in the open group. A recurrent hernia was identified in 27 patients (18%), 9 after laparoscopic repair and 18 after open repair (p = 0.09). The median size of a recurrent hernia was 3 cm, and the incidence of recurrence increased yearly in those with serial follow-up with no early peak or late plateau. Conclusions: In our first decade of laparoscopic PEH repair, no mesh crural reinforcement was used, and no patient had a Collis gastroplasty. Evolution in the technique of laparoscopic PEH repair during the subsequent decade has reduced the hernia recurrence rate to that seen with an open approach. Reduced morbidity and shorter hospital stay make laparoscopy the preferred approach, but continued efforts to reduce hernia recurrence are warranted.
Answer: Based on the provided abstracts, laparoscopic paraesophageal hernia repair should not be abandoned in favor of the open approach. The laparoscopic approach has been shown to be safe and is associated with lower morbidity and mortality when compared with various open approaches (PUBMED:31564396). It has been associated with decreased intraoperative blood loss, decreased complications, reduced hospital length of stay (PUBMED:31564396), and similar or better outcomes in terms of postoperative complications and mortality rates compared to the open approach (PUBMED:32500419). Additionally, laparoscopic repair has been associated with a lower mortality in the uncomplicated group (PUBMED:23743369). Although there have been concerns about high recurrence rates with laparoscopic repair, recent studies have shown that with modifications in the laparoscopic technique, such as the use of absorbable mesh reinforcement of the crural closure and Collis gastroplasty, the recurrence rate has been reduced to that seen with an open approach (PUBMED:21435915). Furthermore, laparoscopic repair resulted in lower blood loss, fewer intraoperative complications, and a shorter length of hospital stay, with no difference in general or disease-specific quality-of-life when compared to open repair (PUBMED:15531968). The robotic-assisted laparoscopic approach also offers benefits such as superior optics and freedom of motion, which can help accomplish the key points in a successful repair without compromising patient outcomes (PUBMED:31564393). Even in patients with severe kyphoscoliosis, laparoscopic repair has been successfully performed, although great pulmonary care is required (PUBMED:29974871). In conclusion, the evidence suggests that laparoscopic paraesophageal hernia repair remains a valuable and effective approach and should not be abandoned. Instead, efforts should be made to continue improving the technique to reduce the incidence of recurrence and optimize patient outcomes (PUBMED:15531968, PUBMED:21435915).
Instruction: Is needle examination always necessary in evaluation of carpal tunnel syndrome? Abstracts: abstract_id: PUBMED:9596390 Is needle examination always necessary in evaluation of carpal tunnel syndrome? Objective: To investigate whether needle evaluation added any important clinical information to normal nerve conduction studies in the evaluation of carpal tunnel syndromes. Design: Retrospective review of electromyography (EMG) done with the referring diagnosis of possible carpal tunnel syndrome. Setting: Outpatients seen for EMG evaluation at one university hospital by a single electromyographer. Patients: Consecutive sample of possible carpal tunnel syndrome patients. Interventions: None. Main Outcome Measure: We determined whether needle examination was abnormal when nerve conduction studies were normal. Results: In patients in whom only carpal tunnel syndrome was suspected, normal nerve conduction studies predicted that EMG would be normal 89.8% of the time (p = .0494). Testing based on a larger sample size might increase the predictive value. Conclusions: There may be a subpopulation of patients referred for carpal tunnel syndrome who may be adequately evaluated by nerve conduction studies alone. Additional studies will help evaluate whether this is so. abstract_id: PUBMED:35141492 When is needle examination of thenar muscle necessary in the evaluation of mild and moderate carpal tunnel syndrome? Objectives: This study aims to evaluate the predictors of standard nerve conduction study (NCS) parameters in determining the presence of axonal loss by means of spontaneous activity in patients with mild and moderate carpal tunnel syndrome (CTS). Patients And Methods: Between May 2015 and April 2018, a total of 118 patients (11 males, 107 females; mean age: 52.3±10.6 years; range, 27 to 79 years) who underwent electrophysiological studies and were diagnosed with CTS were included. Demographic data of the patients including age, sex, and symptom duration were recorded. Electrodiagnostic studies were performed in all patients. All the needle electromyography (EMG) findings were recorded, but only the presence or absence of spontaneous EMG activities was used as the indicator of axonal injury. Results: In 37 (31.4%) of the patients, spontaneous activity was detected at the thenar muscle needle EMG. No spontaneous activity was observed in any of the 43 (36.4%) patients with normal distal motor latency (DML). There were significant differences in DMLs, compound muscle action potential (CMAP) amplitudes, sensory nerve action potential amplitudes, and sensory nerve conduction velocities between the groups with and without spontaneous activity (p < 0.05). The multiple logistic regression analysis revealed that DML was a significant independent risk variable in determining the presence of spontaneous activity. The optimal cut-off value for median DML was calculated as 4.9 ms. If the median DML was >4.9 ms, the relative risk of finding spontaneous activity on thenar muscle needle EMG was 13.5 (95% CI: 3.6-51.2). Conclusion: Distal motor latency is the main parameter for predicting the presence of spontaneous activity in mild and moderate CTS patients with normal CMAP. Performing needle EMG of the thenar muscle in CTS patients with a DML of >4.9 ms may be beneficial to detect axonal degeneration in early stages. abstract_id: PUBMED:7717817 Relation between needle electromyography and nerve conduction studies in patients with carpal tunnel syndrome.
Four hundred eighty cases of electrodiagnostically confirmed carpal tunnel syndrome were reviewed to determine if the findings on nerve conduction studies could predict the presence or absence of fibrillation potentials or motor unit changes on the needle examination of the abductor pollicis brevis (APB). The needle examination is more uncomfortable and the ability to predict the findings in this setting from standard nerve conduction studies (NCS) would make the test more acceptable to patients. All patients had median and ulnar nerves (both sensory and motor) tested, as well as the needle evaluation of the APB. Two hundred thirty-one patients had an abnormal needle evaluation as defined by presence of one of the following conditions: abnormal spontaneous activity, increased motor unit action potential (MUAP) amplitude, or increased MUAP polyphasia. One hundred five patients had fibrillation potentials. The mean median motor and sensory amplitudes and latencies, as well as age, did differ in the normal and abnormal needle examination groups, but the sensitivity for predicting an abnormality ranged from 57% to 68%. The ratio of the median to the ulnar amplitudes did not improve the sensitivity of predicting the abnormal needle findings. Motor and sensory evoked potential latencies were the most important predictors of an abnormal needle examination. abstract_id: PUBMED:12056340 Needle electromyography in carpal tunnel syndrome. The role of needle electromyography (EMG) in the routine evaluation of carpal tunnel syndrome (CTS) is not clear. The aim of this study was to determine if needle EMG examination of the thenar muscles could provide useful information in addition to the nerve conduction (NC) studies. Electrophysiologic procedures performed on 84 patients (103 hands) consistent with CTS were reviewed. The median thenar motor NC data were matched with the needle EMG findings in the abductor pollicis brevis (APB) muscle. The severity of the needle EMG findings in the APB muscle correlated well with the severity of the motor NC data. As the thenar compound muscle action potential amplitude decreased and the degree of nerve conduction slowing and block across the wrist increased, there was a corresponding increase in the number of enlarged motor units and decrease in the recruitment pattern in the needle EMG findings. Needle EMG examination confined to the thenar muscles in CTS does not seem to provide any further information when the NC data had already established this diagnosis, and it should not be performed routinely. abstract_id: PUBMED:8155238 Assessment of basic physical examination skills of internal medicine residents. Background: Internal medicine faculty at the Mayo Clinic designed a clinical evaluation exercise that separates assessment of physical examination skills from that of medical interviewing and reasoning skills. This report summarizes the first year's experience with assessment of basic physical examination skills. Method: A core faculty of five general internists and three internist subspecialists designed a 45-item general examination checklist (e.g., measure blood pressure, examine mouth, palpate liver, drape to ensure privacy). In addition, the core faculty generated a menu of 27 focused examination skills (e.g., examine for carpal tunnel syndrome) from which the faculty examiner would select five items for the resident to perform. Each checklist item was scored 0, 1, or 2 for a maximum possible score of 100. 
The core faculty selected a criterion-based scoring reference and established a passing score of 90 based on practice examinations with residents and faculty. The core faculty made an instructional videotape of a model examination that was available to all residents. In 1991-92, prior to examination, the checklist was distributed to all first-year categorical (43), preliminary (25), and newly appointed second-year residents (eight). Results: Of the 76 residents examined, 11 (14%) failed and 65 (86%) passed. All failing scores were 86 or lower. The absence of scores 87, 88, and 89 suggested that faculty upgraded borderline performances. All 11 residents who initially failed retook the examination and passed. The five most commonly missed items were (1) inspect the skin, (2) complete examination in logical sequence, (3) palpate aorta, (4) auscultate anterior breath sounds, and (5) palpate axillary and inguinal nodes. Other important observed errors were failure to measure vital signs, confusion of liver and spleen, failure to use bell on stethoscope, and inadequate breast examination. Twenty-eight residents completed an optional feedback form. Reviews were mixed but generally favorable. Conclusion: Assessment of the basic physical examination skills of the internal medicine residents was useful, and such skills could be assessed separately from physical diagnosis skills and interviewing skills. Direct observation of basic physical examination skills revealed important deficiencies, which provided an opportunity for remediation. abstract_id: PUBMED:38020668 Case report: Ultrasound-guided needle knife technique for carpal ligament release in carpal tunnel syndrome treatment. Carpal tunnel syndrome (CTS) is a common peripheral neuropathy of the hand, mainly manifesting as sensory disturbances, motor dysfunctions, and pain in the fingers and hand. The pathogenesis of the disease is associated with fibrosis of the transverse carpal ligament in the carpal tunnel, which compresses the median nerve. In our case, we demonstrate an ultrasound-guided needle knife technique to treat CTS. We guided the patient to a supine position on the examination table. The skin of the wrist area was sterilized for the procedure. After the skin was dry, we positioned sterile drapes, located the median nerve and compression, and marked the compression point. Local anesthesia was administered. An ultrasound-guided needle knife was inserted. The needle knife was advanced under ultrasound guidance. The carpal ligament was incised. A gradual release of pressure on the median nerve was observed on the ultrasound monitor. After treatment, the patient's finger sensation and motor function improved significantly and pain symptoms were markedly reduced; this case demonstrates that small needle-knife treatment can serve as a safe and effective minimally invasive therapeutic method. abstract_id: PUBMED:7784802 Examination of sensory nerve fibers by needle recording in the carpal tunnel syndrome; use of the orthodromic method with special attention to the paresthetic forms. The authors examined the motor and sensory fibers of the median nerve clinically, by EMG, and by electrostimulation in 15 control hands (group A), 35 hands with the paresthetic form of CTS (group B), and 33 hands with CTS and pathologic DML (group C). The examination of the sensory fibers was performed on the first (thumb) to 4th digits separately by the orthodromic technique, while monitoring the NAP with needle electrodes at the wrist.
Two hundred fifty-six responses were averaged, and four stimulation values were always followed on the sensory fibers. The highest percentage of pathologic values by DSL in group B was on the first digit (thumb: 37%), in group C on the 3rd digit (93%). By DSCV the highest number of pathologic values in groups B and C was on the thumb (43 and 90 per cent respectively), for NAP duration in groups B and C on the third digit (26 and 60 per cent respectively). In the controls the mean amplitude of NAP fluctuated between 19 and 50 µV. The DSL and DSCV, and to a lesser extent the NAP duration, are considered the best parameters. In the paresthetic form of CTS pathologic values of sensory parameters ranged from 8.6 to 42.8 per cent, in the group with pathologic DML in the range of 24.2-93.3 per cent. If an examination with at least one pathologic sensory parameter on at least one digit was considered pathologic, group B yielded 77 per cent, and group C 100 per cent of pathological results. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:27118633 Neurological assessment. Neurological system assessment is an important skill for the orthopaedic nurse because the nervous system has such an overlap with the musculoskeletal system. Nurses whose scope of practice includes such advanced evaluation, e.g. nurse practitioners, may conduct the examination described here but the information will also be useful for nurses caring for patients who have abnormal neurological assessment findings. Within the context of orthopaedic physical assessment, possible neurological findings are evaluated as they complement the patient's history and the examiner's findings. Specific neurological assessment is integral to diagnosis of some orthopaedic conditions such as carpal tunnel syndrome. In other situations such as crushing injury to the extremities, there is high risk of associated neurological or neurovascular injury. These patients need anticipatory examination and monitoring to prevent complications. This article describes a basic neurological assessment; emphasis is on sensory and motor findings that may overlap with an orthopaedic presentation. The orthopaedic nurse may incorporate all the testing covered here or choose those parts that further elucidate specific diagnostic questions suggested by the patient's history, general evaluation and focused musculoskeletal examination. Abnormal findings help to suggest further testing, consultation with colleagues or referral to a specialist. abstract_id: PUBMED:37064174 Comparison efficacy of ultrasound-guided needle release plus corticosteroid injection and mini-open surgery in patients with carpal tunnel syndrome. This retrospective study aimed to compare clinical outcomes of ultrasound-guided needle release with corticosteroid injection vs. mini-open surgery in patients with carpal tunnel syndrome (CTS). From January 2021 to December 2021, 40 patients (40 wrists) with CTS were analyzed in this study. The diagnosis was based on clinical symptoms, electrophysiological imaging, and ultrasound imaging. A total of 20 wrists were treated with ultrasound-guided needle release plus corticosteroid injection (Group A), and the other 20 wrists were treated with mini-open surgery (Group B).
We evaluated the Boston carpal tunnel questionnaire, electrophysiological parameters (distal motor latency, sensory conduction velocity, and sensory nerve action potential of the median nerve), and ultrasound parameters (cross-sectional area, flattening ratio, and the thickness of the transverse carpal ligament) both before and 3 months after treatment. Total treatment cost, duration of treatment, healing time, and complications were also recorded for the two groups. The Boston carpal tunnel questionnaire and the electrophysiological and ultrasound outcomes preoperatively and at 3 months postoperatively differed significantly within each group (each with P < 0.05). There were no complications such as infection, hemorrhage, vascular, nerve, or tendon injuries in either group. Ultrasound-guided needle release and mini-open surgery are both effective measures in treating CTS patients. Ultrasound-guided needle release plus corticosteroid injection provides a smaller incision, lower cost, shorter treatment time, and faster recovery compared with mini-open surgery. Ultrasound-guided needle release plus corticosteroid injection is better for clinical application. abstract_id: PUBMED:29787509 Use of Moulage to Evaluate Student Assessment of Skin in an Objective Structured Clinical Examination. Purpose: This study investigated the physician assistant (PA) student evaluation of skin and the use of moulage in an objective structured clinical examination (OSCE). Methods: Clinical-year PA students participated in an OSCE, taking a history, completing a physical examination, and writing a focused clinical note. The standardized patient's history simulated a primary diagnosis of right-sided carpal tunnel syndrome and an incidental skin lesion consistent with melanoma, only if the student made an inquiry in this replication study. Faculty members' assessments of students' physical examinations were standardized using an agreed-upon rubric. Students completed an anonymous survey on the OSCE experience immediately following the activity. Results: All students identified the primary diagnosis of carpal tunnel. Sixty-three percent of students verbalized examination of the skin; however, only 38% identified an atypical skin lesion in their clinical note documentation. Of those, only 33% of students referred the patient for biopsy. Most students believed that the use of moulage increased the realism of the standardized patient encounter. Sixty percent of students reported that they noticed a skin lesion but did not believe that it was part of the simulated experience. Conclusions: Moulage increased the realism of the OSCE. This study found that the ability of a student to verbalize the evaluation of a particular body system did not necessarily mean that they adequately or correctly assessed that system upon physical examination. Implementation of an OSCE and use of moulage provided an opportunity to explicitly assess student performance related to evaluation and assessment of skin. It provided a mechanism for meaningful, formative feedback of students' performance. Answer: No, needle examination is not always necessary in the evaluation of carpal tunnel syndrome (CTS). According to a retrospective review, normal nerve conduction studies predicted that electromyography (EMG) would be normal 89.8% of the time in patients in whom only carpal tunnel syndrome was suspected, suggesting that there may be a subpopulation of patients who could be adequately evaluated by nerve conduction studies alone (PUBMED:9596390).
Additionally, a study found that distal motor latency (DML) is a significant independent risk variable in determining the presence of spontaneous activity, which is an indicator of axonal injury. It was suggested that performing needle EMG of the thenar muscle in CTS patients with a DML of >4.9 ms may be beneficial to detect axonal degeneration in early stages, implying that needle examination may not be necessary in all cases, particularly when DML is normal (PUBMED:35141492). Furthermore, another study indicated that needle EMG examination confined to the thenar muscles in CTS does not seem to provide any further information when the nerve conduction data had already established the diagnosis, and it should not be performed routinely (PUBMED:12056340). These findings collectively suggest that while needle examination can be useful in certain cases, particularly when specific nerve conduction study parameters indicate potential axonal loss, it is not a mandatory component of the evaluation for every patient with suspected CTS.
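Editor's note: the predictive-value and relative-risk figures quoted in this answer reduce to simple 2x2-table arithmetic. The short Python sketch below illustrates those calculations; the counts are hypothetical, chosen only so that the formulas reproduce the reported summary figures (89.8% and 13.5), and are not taken from the cited studies.

def predictive_value(true_negatives, false_negatives):
    # Fraction of normal index-test results (e.g., normal nerve conduction
    # studies) that are confirmed normal by the reference test (needle EMG).
    return true_negatives / (true_negatives + false_negatives)

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    # Risk of the outcome in the exposed group divided by the risk in the
    # unexposed group (e.g., spontaneous activity with DML > 4.9 ms vs. below).
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# Hypothetical counts: 44 of 49 patients with normal NCS also had a normal EMG.
print(f"predictive value = {predictive_value(44, 5):.1%}")   # -> 89.8%
# Hypothetical counts: 27/45 events with DML > 4.9 ms vs. 2/45 with DML <= 4.9 ms.
print(f"relative risk = {relative_risk(27, 45, 2, 45):.1f}")  # -> 13.5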
Instruction: Does valerian improve sleepiness and symptom severity in people with restless legs syndrome? Abstracts: abstract_id: PUBMED:19284179 Does valerian improve sleepiness and symptom severity in people with restless legs syndrome? Objective: To compare the effects of 800 mg of valerian with a placebo on sleep quality and symptom severity in people with restless legs syndrome (RLS). Methods: A prospective, triple-blinded, randomized, placebo-controlled, parallel design was used to compare the efficacy of valerian with placebo on sleep quality and symptom severity in patients with RLS. Thirty-seven participants were randomly assigned to receive 800 mg of valerian or placebo for 8 weeks. The primary sleep outcome was sleep quality, with secondary outcomes including sleepiness and RLS symptom severity. Results: Data were collected at baseline and 8 weeks, comparing the use of valerian and placebo on sleep disturbances (Pittsburgh Sleep Quality Index and Epworth Sleepiness Scale) and severity of RLS symptoms (International RLS Symptom Severity Scale) from 37 participants aged 36 to 65 years. Both groups reported improvement in RLS symptom severity and sleep. In a nested analysis comparing sleepy vs nonsleepy participants who received 800 mg of valerian (n=17), significant differences before and after treatment were found in sleepiness (P=.01) and RLS symptoms (P=.02). A strong positive association between changes in sleepiness and RLS symptom severity was found (P=.006). Conclusions: The results of this study suggest that the use of 800 mg of valerian for 8 weeks improves symptoms of RLS and decreases daytime sleepiness in patients who report an Epworth Sleepiness Scale (ESS) score of 10 or greater. Valerian may be an alternative treatment for the symptom management of RLS with positive health outcomes and improved quality of life. abstract_id: PUBMED:31770614 Seasonality of restless legs syndrome: symptom variability in winter and summer times. Introduction: Restless legs syndrome (RLS) is a common sensorimotor neurological disorder, with symptoms that might cause sleep fragmentation leading to excessive daytime sleepiness. A seasonality of RLS symptoms has been suggested; however, to date, no study has focused on this aspect. In order to detect a possible seasonality of RLS manifestations, we evaluated RLS symptom severity and excessive daytime sleepiness in winter and summer in RLS patients. Methods: RLS patients who performed two follow-up visits in summer and winter were included in this retrospective bicentric analysis. RLS severity, measured with the International RLS Study Group rating scale (IRLS), and daytime sleepiness, measured with the Epworth Sleepiness Scale (ESS), were recorded in both seasons in the Innsbruck and Rome Sleep Medicine Centers. Results: In total, 64 RLS patients were included. In the overall sample, IRLS in summer was higher than in winter (p = 0.008). After gender stratification, this held true only in men (p = 0.008). When stratifying for centers, the seasonal variation in RLS severity was present exclusively in Rome (p < 0.001). Moreover, 20 RLS patients completed the ESS in both seasonal periods, and scores in summer were higher than in winter (p < 0.001). Conclusion: This retrospective observational study showed an increase of RLS severity during summer compared to winter, supporting the hypothesis that RLS symptoms are more troublesome when temperatures are higher.
Changes in microvascular regulation, sweating, and serum iron levels may support this difference in RLS symptoms across the year. The documented seasonal variation in RLS severity with worsening in the warmer months needs to be investigated further in prospective studies. abstract_id: PUBMED:23340851 Psychosomatic symptom profiles in patients with restless legs syndrome. Purpose: It has been reported that restless legs syndrome (RLS) might be associated with multiple psychosomatic symptoms. We aimed to identify which psychosomatic symptom is most strongly associated with RLS compared with healthy controls. We also attempted to determine the relation between psychosomatic comorbidity and RLS severity regardless of sleep-related symptoms. Methods: One hundred two newly diagnosed patients with RLS and 37 healthy control subjects participated in the present study. The RLS patients were categorized as mild or severe based on the International RLS Study Group rating scale. Data on demographics were collected. All participants completed the Pittsburgh Sleep Quality Index, Athens Insomnia Scale, and Epworth Sleepiness Scale as sleep-related questionnaires. All participants completed the Symptom Checklist-90-Revision (SCL-90-R). Results: RLS patients were found to have pervasive comorbid psychosomatic symptoms. Somatization was found to be the most significant contributing factor (OR 1.145, 95% CI 1.061-1.234, p < 0.001) for psychosomatic comorbidity in RLS. Severe RLS patients were found to have poorer sleep quality than mild RLS patients. Furthermore, severe RLS patients had higher scores for most psychosomatic symptom domains in the SCL-90-R. Anxiety was found to be the most independent contributing factor for psychosomatic comorbidity according to RLS severity (OR 1.145, 95% CI 1.043-1.257, p = 0.005). Conclusions: Our study demonstrates that comorbid psychosomatic distress is considerable in patients with RLS. Furthermore, most psychosomatic comorbidity increases with RLS severity in association with poorer sleep quality. abstract_id: PUBMED:23929523 A single-blind randomized controlled trial to evaluate the effect of 6 months of progressive aerobic exercise training in patients with uraemic restless legs syndrome. Background: Uraemic restless legs syndrome (RLS) affects a significant proportion of patients receiving haemodialysis (HD) therapy. Exercise training has been shown to improve RLS symptoms in uraemic RLS patients; however, the mechanism of exercise-induced changes in RLS severity is still unknown. The aim of the current randomized controlled exercise trial was to investigate whether the reduction of RLS severity often seen after training is due to expected systemic exercise adaptations or mainly to the relief that leg movements confer during exercise training on a cycle ergometer. This is the first randomized controlled exercise study in uraemic RLS patients. Methods: Twenty-four RLS HD patients were randomly assigned to two groups: the progressive exercise training group (n = 12) and the control exercise with no resistance group (n = 12). The exercise session in both groups included intradialytic cycling for 45 min at 50 rpm. However, only in the progressive exercise training group was resistance applied, at 60-65% of maximum exercise capacity, which was reassessed every 4 weeks to account for the patients' improvement.
The severity of RLS symptoms was evaluated using the IRLSSG severity scale, and functional capacity by a battery of tests, while sleep quality, depression levels and daily sleepiness status were assessed via validated questionnaires, before and after the intervention period. Results: All patients completed the exercise programme with no adverse effects. RLS symptom severity declined by 58% (P = 0.003) in the progressive exercise training group, while no statistically significant decline was observed in the control group (17% change, P = 0.124). Exercise training was also effective in terms of improving functional capacity (P = 0.04), sleep quality (P = 0.038) and depression score (P = 0.000) in HD patients, while no significant changes were observed in the control group. After 6 months of the intervention, RLS severity (P = 0.017), depression score (P = 0.002) and daily sleepiness status (P = 0.05) appeared to be significantly better in the progressive exercise group compared with the control group. Conclusion: A 6-month intradialytic progressive exercise training programme appears to be a safe and effective approach in reducing RLS symptom severity in HD patients. It seems that exercise-induced adaptations to the whole body are mostly responsible for the reduction in RLS severity score, since the exercise with no applied resistance protocol failed to improve the RLS severity status of the patients. abstract_id: PUBMED:36603295 Effect of exergaming in people with restless legs syndrome with multiple sclerosis: A single-blind randomized controlled trial. Background: Restless legs syndrome (RLS) is a sensory-motor disorder characterized by an uncomfortable sensation in the lower extremity, triggered by sitting and lying positions and relieved by movement. There is strong evidence that RLS prevalence is higher in persons with multiple sclerosis (MS, pwMS) than in the general population. Current literature has shown that exergaming as a non-pharmacological therapy may be an effective method for outcomes such as balance, walking, fatigue, and cognitive function in pwMS, but the effects on RLS are not known. Therefore, the study's main aim is to investigate the effects of exergaming in pwMS with RLS. Methods: Thirty-one pwMS with RLS and 34 pwMS without RLS were randomly divided into an exergaming group and a control group. The outcome measures were the International RLS Study Group Rating Scale, Modified Fatigue Impact Scale, MS Walking Scale, Timed 25-Foot Walk Test, Hospital Anxiety and Depression Scale, Godin-Shephard Leisure-Time Physical Activity Questionnaire, Pittsburgh Sleep Quality Index, Epworth Sleepiness Scale, 6 min Walk Test, Timed Up and Go, MS International Quality of Life questionnaire, and MS-Related Symptom Checklist. Results: 26 pwMS with RLS (11 exergaming group, 15 control group) and 27 pwMS without RLS (12 exergaming group, 15 control group) were included in 8-week post-treatment analyses. After an 8-week long-term follow-up, 16 pwMS with and without RLS completed the protocol. RLS severity (p = 0.004), anxiety level (p = 0.024), sleep quality (p = 0.005), walking (p = 0.004), and balance functions (p = 0.041) were improved in the exergaming group of pwMS with RLS, while RLS severity increased in the control group (p = 0.004). At 8-week follow-up, the effect of exergaming on RLS severity, quality of life, sleep quality, and walking capacity was preserved. There was significant improvement in gait and balance functions in the exergaming group of pwMS without RLS, while there were no significant differences in the control group.
At the 8-week follow-up, the improvement obtained in the exergaming group of pwMS without RLS was not preserved. Conclusions: This study suggests that exergaming training could be an effective method for managing RLS severity, anxiety, sleep quality, gait, balance, and quality of life in pwMS with RLS. abstract_id: PUBMED:17915343 Symptoms of restless legs syndrome in older adults: outcomes on sleep quality, sleepiness, fatigue, depression, and quality of life. Objectives: To compare differences in sleep quality, sleepiness, fatigue, depression, and quality of life according to severity of symptoms of restless legs syndrome (RLS) in older adults. Design: Descriptive, comparative study; cross-sectional design. Setting: Penn Sleep Center at the University of Pennsylvania and RLS support groups in Philadelphia. Participants: Thirty-nine adults, aged 65 and older, diagnosed with RLS with symptoms at least 3 nights per week. Participants were stratified according to symptom severity based on scores from the RLS Symptom Severity Scale. Exclusion criteria were dementia, cognitive impairments, and sleep disorders other than RLS. Measurements: Sleep quality, measured using the Pittsburgh Sleep Quality Index (PSQI), was the primary outcome. Secondary outcomes were sleepiness, fatigue, depression, and quality of life measured using the Epworth Sleepiness Scale (ESS), Fatigue Severity Scale (FSS), Center for Epidemiological Studies-Depression Scale (CES-D), and RLS Quality of Life Instrument (RLS-QLI), respectively. Results: Significant differences were found in subjective sleep quality (P=.007) and sleep duration (P=.04), as well as in PSQI global score (P=.007). RLS-QLI sleep quality (beta=-0.12, 95% confidence interval (CI)=-0.18 to -0.06, P<.001) and sleepiness (beta=0.35, 95% CI=0.09-0.61, P=.01) were significantly related to PSQI global score. Subjects with severe symptoms were five times as likely to use medication to treat RLS (OR=5.3, 95% CI=1.2-22.2). Conclusion: The severity of RLS symptoms in older adults affects not only sleep quality but also many aspects of quality of life, including social functioning, daily functioning, and emotional well-being. abstract_id: PUBMED:26847984 Increased frequency and severity of restless legs syndrome in patients with neuromyelitis optica spectrum disorder. Objectives: To investigate the comorbidity of restless legs syndrome (RLS) and neuromyelitis optica spectrum disorder (NMOSD). Methods: This study enrolled 159 NMOSD patients and 153 age- and gender-matched healthy controls. All participants completed a questionnaire based on the updated International Restless Legs Syndrome Study Group consensus criteria, the International RLS Severity scale, Epworth Sleepiness Scale, Fatigue Severity Scale, and Pittsburgh Sleep Quality Index, and were subsequently interviewed by a neurologist. The frequency and features of RLS were compared between NMOSD patients and healthy controls. The clinical and radiological characteristics of the NMOSD patients with and without RLS were also compared. Results: The frequency and severity of RLS were significantly higher in NMOSD patients than in healthy controls (p = 0.015 for both), and NMOSD patients with RLS had a longer disease duration and more severe disability than those without RLS. Conclusions: This study indicated the importance of considering RLS in NMOSD patients. abstract_id: PUBMED:31338581 Evaluation of potential cardiovascular risk protein biomarkers in high severity restless legs syndrome.
Restless legs syndrome (RLS) is a common sensorimotor disorder that, in case of severe symptoms, can be very distressing and negatively interfere with quality of life. Moreover, increasing evidence associates RLS with a higher risk of cerebrovascular and cardiovascular disease (CVD). The purpose of this study was to quantify two proteins, previously identified by proteomics and potentially linked with CVD risk, namely kininogen-1 (KNG1) and alpha-1-antitrypsin (A1AT), in primary RLS patients at high severity grade (HS-RLS) in comparison to healthy control subjects. Proteins were quantified through enzyme-linked immunosorbent assay (ELISA) in plasma samples from 14 HS-RLS patients and 15 control individuals. The two groups were closely matched for age and gender. The expression level of KNG1 was significantly higher (p < 0.001), while A1AT was significantly decreased (p < 0.05), in HS-RLS patients compared to controls, confirming the relationship between these proteins and the disease severity. Furthermore, in the patient group the association between the protein concentrations and the following parameters was further evaluated: age, disease onset and diagnosis, scores obtained from the RLS rating scales (Epworth Sleepiness Scale, Pittsburgh Sleep Quality Index, Beck Depression Inventory) and smoking habit. All the considered variables were independent of protein levels, so the disease can reasonably be considered the main cause of the protein changes. As emerged from the literature, high levels of KNG1 and low amounts of A1AT seem to be associated with a higher probability of developing CVD. Consequently, these proteins may be reliable candidate biomarkers of CVD risk in patients with RLS at high severity grade. abstract_id: PUBMED:21205038 Daytime symptom patterns in insomnia sufferers: is there evidence for subtyping insomnia? The type and severity of daytime symptoms reported by insomnia sufferers may vary markedly. Whether distinctive daytime symptom profiles are related to different insomnia diagnoses has not been studied previously. Using profile analysis via multidimensional scaling, we investigated the concurrent validity of ICSD-2 insomnia diagnoses by analysing the relationship of prototypical profiles of daytime symptoms with a subset of ICSD-2 diagnoses, such as insomnia associated to a mental disorder, psychophysiological insomnia, paradoxical insomnia, inadequate sleep hygiene, idiopathic insomnia, obstructive sleep apnea and restless legs syndrome. In a sample of 332 individuals meeting research diagnostic criteria for insomnia (221 women, mean age = 46 years), the profile analysis identified four prototypical patterns of daytime features. Pearson correlation coefficients indicated that the diagnoses of insomnia associated to a mental disorder and idiopathic insomnia were associated with a daytime profile characterized by mood disturbance and low sleepiness; whereas the diagnoses of psychophysiological insomnia and inadequate sleep hygiene were related to a profile marked by poor sleep hygiene, daytime tension and low fatigue. Furthermore, whereas paradoxical insomnia was consistently associated to lower daytime impairment, insomnia associated to a mental disorder appeared as the most severe daytime form of insomnia. This classification of insomnia sufferers along multiple defining dimensions provides initial validation for two basic insomnia subtypes, with a presumably distinct aetiology: insomnia characterized mainly by an 'internal' component, and a 'learned' insomnia.
Research to determine which dimensions are critical for inclusion or differential weighting for defining a general typological system for insomnia sufferers is warranted. abstract_id: PUBMED:18996743 Increased frequency of restless legs syndrome in chronic obstructive pulmonary disease patients. Background: Despite complaints of poor sleep being very common in people with chronic obstructive pulmonary disease (COPD), restless legs syndrome (RLS) symptoms have not been extensively investigated in these patients. Objective: To assess the prevalence and severity of RLS in patients with COPD and to investigate the factors potentially associated with RLS. Methods: A total of 87 patients with COPD and 110 controls, matched for age and sex, were evaluated regarding the presence and severity of RLS symptoms. A diagnosis of RLS was made according to the criteria of the International RLS Study Group (IRLSSG), and severity was assessed by the IRLSSG severity scale. Excessive daytime somnolence was assessed using the Epworth sleepiness scale (ESS). Results: RLS was significantly more frequent in COPD patients than in controls (36.8% vs. 11%; p < 0.001). Compared to controls, COPD patients with RLS showed higher disease severity (mean IRLSSG severity scale score: 20.5 ± 2.8 for COPD, and 18 ± 3.5 for controls; p = 0.016) and more pronounced daytime somnolence (mean ESS score: 11.8 ± 1.1 for COPD, and 8.6 ± 3.6 for controls; p = 0.009). Moreover, compared to those without RLS, COPD patients with RLS showed increased daytime sleepiness (mean ESS score: 11.8 ± 1.1 for COPD/RLS, and 7.3 ± 4 for COPD/non-RLS; p < 0.001) and longer disease duration (11.9 ± 7 years for COPD/RLS, and 8.7 ± 6.9 years for COPD/non-RLS; p = 0.045). Multivariate analysis showed that ESS score was the only factor significantly associated with RLS in COPD patients. Conclusions: RLS is a frequent cause of disabling sleep disturbance in patients with COPD and should be specifically investigated in these patients. Answer: Yes, valerian appears to improve sleepiness and symptom severity in people with restless legs syndrome (RLS). A study comparing the effects of 800 mg of valerian with a placebo on sleep quality and symptom severity in people with RLS found that both groups reported improvement in RLS symptom severity and sleep. However, in a nested analysis of participants who received 800 mg of valerian and reported an Epworth Sleepiness Scale (ESS) score of 10 or greater, significant differences before and after treatment were found in sleepiness and RLS symptoms. There was also a strong positive association between changes in sleepiness and RLS symptom severity. The study concluded that the use of 800 mg of valerian for 8 weeks improves symptoms of RLS and decreases daytime sleepiness in patients with higher sleepiness scores (PUBMED:19284179).
Instruction: Is intracameral lidocaine really effective in cataract surgery? Abstracts: abstract_id: PUBMED:26378751 Intracameral epinephrine without the addition of intracameral lidocaine in the management of tamsulosin associated intraoperative floppy iris syndrome. Objective: Intracameral mydriatics using epinephrine associated with lidocaine have been reported as efficacious in reducing intraoperative floppy iris syndrome (IFIS) complications during cataract surgery. The aim of this study was to verify the efficacy of intracameral epinephrine without intracameral lidocaine as prophylaxis against IFIS in patients on tamsulosin. Materials And Methods: This was a retrospective study on the results of cataract surgery in 18 patients on therapy with tamsulosin. Patients had undergone routine phacoemulsification in one eye. Subsequently, they underwent phacoemulsification in the fellow eye using non-preserved intracameral epinephrine 1:4000 diluted with BSS. Intraoperative complications during cataract surgery had been documented and IFIS was graded based on iris billowing, miosis or iris prolapse. Follow-up was 3 months. Results: Thirty-six eyes of 18 patients were included in the evaluation. The incidence of IFIS was significantly higher in the eyes where routine phacoemulsification had been performed (100%) with respect to eyes where phacoemulsification was carried out using intracameral epinephrine (33%) (chi-square test = 15.12, p < 0.001). In routine phacoemulsification 16 eyes showed iris billowing, 14 eyes had some extent of miosis and 14 eyes had a tendency to iris prolapse. In phacoemulsification with the use of intracameral epinephrine 5 eyes showed iris billowing, 4 eyes presented some extent of miosis and 2 eyes had a tendency to iris prolapse. There were no serious intraoperative complications. Conclusions: Intracameral epinephrine without the addition of lidocaine was efficacious in the management of IFIS in patients on tamsulosin. abstract_id: PUBMED:17534812 Is intracameral lidocaine really effective in cataract surgery? Purpose: To evaluate the usefulness of intracameral lidocaine in cataract surgery under topical anesthesia, especially if the patient wanted intravenous sedation preoperatively. Methods: In this prospective study 96 patients were randomly assigned to receive 0.5 cc of balanced salt solution (Group 1) or 1% unpreserved lidocaine (Group 2). Patients who wanted sedation received intravenous midazolam hydrochloride. All surgery was done by one surgeon using a clear corneal technique. Results: Mean pain scores were 0.73 (of a maximum 3) in Group 1 and 0.54 in Group 2; the difference between groups was not statistically significant. Forty patients in Group 1 (83%) and 44 patients in Group 2 (92%) reported no discomfort or only mild discomfort. The two study groups were comparable in need for intravenous midazolam. Logistic regression analysis showed a significant relationship between pain scores and intravenous sedation (p=0.02) but not with intracameral lidocaine or other tested variables. However, the odds ratio for pain increased to 5.1 (95% CI: 1.29-20.41) in participants without intravenous sedation compared to those with sedation. Conclusions: The results of the present study suggest that intravenous sedation preoperatively seems to be an important determinant to relieve the sensation of discomfort/pain during small incision cataract surgery, but intracameral lidocaine was shown not to have a clinically useful role.
abstract_id: PUBMED:27239591 Transient complete visual loss after intracameral anesthetic injection in cataract surgery. Purpose: We describe a case of transient visual loss following cataract surgery with unpreserved intracameral lidocaine. Method: A 50-year-old man with posterior polar cataract underwent phacoemulsification. Following capsulorhexis and hydrodelineation with 0.5 cc of unpreserved lidocaine 1%, a portion of fluid reached behind the crystalline lens and caused posterior capsule rupture. Cataract extraction and anterior vitrectomy were performed. Anesthetic administration was repeated to relieve the discomfort felt by the patient. A three-piece hydrophobic acrylic intraocular lens was implanted in the ciliary sulcus. Results: On the first postoperative morning, the patient's vision was recorded as no light perception. The relative afferent pupillary defect (RAPD) was found to be 4+. The retina and optic nerve head appeared normal. In the afternoon, the visual acuity (VA) improved to counting fingers at 3 m. On the second postoperative morning, the patient's VA improved to 4/10. On the third postoperative day, his VA returned to normal at 20/20 without RAPD. Conclusion: In the event of posterior capsular rupture, to reduce retinal toxicity risks, intracameral lidocaine should not be repeated. abstract_id: PUBMED:29988901 Transient complete visual loss and subsequent cystoid macular edema after intracameral lidocaine injection following uneventful cataract surgery. Purpose: To report a case of transient visual loss following uncomplicated cataract surgery with unpreserved intracameral lidocaine. Methods: A 61-year-old woman with nuclear sclerosis cataract underwent uncomplicated phacoemulsification and in-the-bag intraocular lens (IOL) implantation. Results: After opening the eye patch on the first postoperative day, the patient complained of complete blindness. Her vision was no light perception (NLP) and the Marcus-Gunn was found to be 4+. Eight hours later, the patient's visual acuity improved to count fingers at 1 m. After two days, the vision improved surprisingly to 20/20 without any Marcus-Gunn. After 4 weeks, the vision decreased surprisingly to 20/80 without any Marcus-Gunn. On this day, macular optical coherence tomography (OCT) was performed, and cystoid macular edema was detected. Conclusion: Transient visual loss after intracameral lidocaine has been reported after violation of the posterior capsule during cataract surgery, and here, we report a case of transient visual loss despite uncomplicated cataract surgery. abstract_id: PUBMED:31118559 Systemic exposure to intracameral vs topical mydriatic agents: in cataract surgery. Objective: The objective of this study was to compare systemic exposure to tropicamide/phenylephrine following intracameral or topical administration before cataract surgery. Patients And Methods: Exposure to mydriatics was calculated in patients randomized to an intracameral fixed combination of mydriatics and anesthetic ([ICMA]: tropicamide 0.02%, phenylephrine 0.31%, and lidocaine 1%, N=271) or mydriatic eye drops ([EDs]: tropicamide 0.5% and phenylephrine 10%, N=283). Additional doses were permitted if required. Mydriatic plasma levels were determined by mass spectrometric HPLC in 15 patients per group before and after administration. Results: Most ICMA patients (73.6%) received a single dose (200 µL) representing an exposure to tropicamide of 0.04 mg and phenylephrine of 0.62 mg. None of these patients received additional mydriatics.
In the control group (three administrations), the exposure was 0.45 mg of tropicamide (11.3-fold higher than ICMA) and 10.2 mg of phenylephrine (16.5-fold higher). When additional ED was used in this group (9.2% of patients), it was 37.5-fold higher for tropicamide (10 drops, 1.5 mg) and 54.8-fold higher for phenylephrine (10 drops, 34 mg) than the recommended ICMA dose. Tropicamide plasma levels were not detectable at any time point in ICMA patients, while they were detectable in all ED patients at 12 and 30 minutes. Phenylephrine was detectable at at least one time point in 14.3% of ICMA patients compared with all ED patients. More ED patients experienced a meaningful increase in blood pressure and/or heart rate (11.2% vs 6.0% of ICMA patients; P=0.03). Conclusion: Systemic exposure to tropicamide/phenylephrine was lower and cardiovascular (CV) effects were less frequent with ICMA. This could be of particular significance in patients at CV risk. abstract_id: PUBMED:32174572 Comparative clinical trial of intracameral ropivacaine vs. lignocaine in subjects undergoing phacoemulsification under augmented topical anesthesia. Purpose: To compare intracameral ropivacaine to lignocaine during phacoemulsification under augmented topical anesthesia, in terms of efficacy and safety. Methods: This prospective, randomized, double-masked clinical trial included subjects planned for phacoemulsification with posterior chamber intraocular lens implantation for visually significant uncomplicated senile cataract, under augmented topical anesthesia. Cases were randomized into two groups, Group A (ropivacaine 0.1%) or Group B (lignocaine 1.0%). The pain experienced by the patients during the surgery, mydriasis, post-op inflammation and endothelial cell change at six weeks after the procedure were evaluated. Surgeon's feedback was recorded to evaluate the cooperation of the patient during surgery. Results: A total of 210 subjects were screened and 184 were randomized to have 92 subjects in each group. There was no statistically significant difference seen on comparing Groups A and B with respect to age (P = 0.05), painful surgical steps (P = 0.85), visual analog scale scores (P = 0.65), surgeon's score (P = 0.11), postoperative inflammation (P = 0.90) and average ultrasound time during phacoemulsification (P = 0.10). Subjects in Group A fared better when compared to Group B with respect to endothelial cell loss (P = 0.0008) and augmentation in mydriasis (P < 0.001). Conclusion: Intracameral ropivacaine and lignocaine are both equally effective in providing analgesia during phacoemulsification. However, intracameral ropivacaine is superior to lignocaine with regard to corneal endothelial cell safety and augmentation of mydriasis. abstract_id: PUBMED:37498980 Pain experience in patients undergoing topical anesthesia alone versus topical plus intracameral anesthesia during cataract surgery. Purpose: To evaluate and compare the pain experience and discomfort during cataract surgery and over the 24 hours after surgery in patients undergoing either topical anesthesia alone or topical anesthesia plus intracameral anesthesia, provided by using a standard topical anesthesia regimen and a 0.2-mL dose of Mydrane®. Methods: Prospective study involving 100 patients who underwent cataract surgery receiving either topical anesthesia alone (group 1, n = 50) or topical anesthesia plus intracameral anesthesia (group 2, n = 50) between January 2021 and March 2022.
The pain experienced by patients during and after surgery was assessed using a pain scale and a questionnaire. One hour after surgery, patients were asked to rate the intensity of discomfort they experienced throughout the procedure by pointing to a 0-100 Visual Analogue Scale (VAS). Results: According to VAS measurements, patients who underwent surgery under topical anesthesia reported more significant pain than those who underwent surgery under topical anesthesia plus intracameral anesthesia during and over the 24 hours after surgery (p = 0.02 and p = 0.01, respectively). Patients undergoing topical anesthesia had 2.34-fold greater odds of having pain during surgery [95% Confidence Interval (CI): 1.58-5.25, p = 0.03]. Conclusions: Topical anesthesia plus intracameral anesthesia lowers intraoperative and postoperative pain levels, improving patient cooperation and representing a useful analgesic delivery method in cataract surgery. abstract_id: PUBMED:36308110 Evaluation of efficacy of intracameral lidocaine and tropicamide injection in manual small-incision cataract surgery: A prospective clinical study. Purpose: The study was conducted to evaluate the efficacy of intracameral lidocaine hydrochloride 1% and tropicamide injection 0.02% for anaesthesia and mydriasis in manual small-incision cataract surgery (MSICS) and to report any adverse drug reaction. Methods: This was a randomized, prospective, observational study on 32 participants that took place from October 2021 to March 2022 (6 months). Patients aged 40-75 years with nuclear sclerosis cataract and pupil diameter >6 mm in preoperative evaluation were included in the study. Patients with pseudoexfoliation, rigid pupil, senile miosis, a history of uveitis, ocular trauma, or recent ocular infection, known allergy to tropicamide, or any type of glaucoma were excluded from the study. Results: Thirty-two eyes with nuclear sclerosis cataract that underwent MSICS were studied. A fixed-dose combination of 2 ml of phenylephrine (0.31%), tropicamide (0.02%), and lidocaine (1%) intracamerally was used for mydriasis and analgesia. More than 7 mm pupillary dilatation was achieved within 20 seconds of injection in 29 cases (90.6%). Mild pain and discomfort were noted in 12 cases (37.5%). Postoperative day 1 unaided visual acuity was in the range of 6/18-6/12 for all patients, and grade 1 iritis, which was self-limiting, was seen in 7 cases (21.8%). No adverse events such as corneal decompensation or TASS were noted. Conclusion: Thus, intracameral injection of mydriatics provides rapid and sustainable mydriasis and analgesia for manual SICS. abstract_id: PUBMED:27382247 A comparison of patient pain and visual outcome using topical anesthesia versus regional anesthesia during cataract surgery. Purpose: The purpose of this study was to compare the level of patient pain during the phacoemulsification and implantation of foldable intraocular lenses while under topical, intracameral, or sub-Tenon lidocaine. Patients And Methods: This was a retrospective study. Three hundred and one eyes subjected to cataract surgery were included in this study. All eyes underwent phacoemulsification surgery and intraocular lens implantation using topical, sub-Tenon, or intracameral anesthesia. The topical group received 4% lidocaine drops, and the intracameral group received a 0.1-0.2 cc infusion of 1% preservative-free lidocaine into the anterior chamber through the side port combined with topical drops of lidocaine. The sub-Tenon group received 2% lidocaine.
Best-corrected visual acuity, corneal endothelial cell loss, and intraoperative pain level were evaluated. Pain level was assessed on a visual analog scale (range 0-2). Results: There were no significant differences in visual outcome and corneal endothelial cell loss between the three groups. The mean pain score in the sub-Tenon group was significantly lower than that in the topical and intracameral groups (P=0.0009 and P=0.0055, respectively). In 250 eyes without high myopia (< -6 D), there were no significant differences in mean pain score between the sub-Tenon and intracameral groups (P=0.1417). No additional anesthesia was required in any group. Conclusion: Intracameral lidocaine provides sufficient pain suppressive effects in eyes without high myopia, while sub-Tenon anesthesia is better for cataract surgery in eyes with high myopia. abstract_id: PUBMED:10221186 Diffusion of lidocaine after intracameral injection Purpose: To determine the lidocaine diffusion space, we compared lidocaine aqueous humor concentrations in topical anesthesia with 1% lidocaine intracameral injection and in peribulbar anesthesia with 2% lidocaine prior to phacoemulsification. Material And Method: A gas chromatography technique analyzing 100 microliters of aqueous humor was used to detect the presence of lidocaine prior to phacoemulsification cataract surgery in two groups of patients: group A: after peribulbar anesthesia with 10 ml 2% lidocaine, group B: after 1% tetracaine topical anesthesia and 0.5 ml intracameral injection of 1% preservative-free lidocaine. The intracameral volume was estimated mathematically in group B. Endothelial cell loss was analyzed in both groups with noncontact specular microscopy. Results: Lidocaine was detected in aqueous humor with good reliability. The mean concentration after intracameral injection was 6,300 micrograms/ml and was higher than after peribulbar injection. This concentration was close to the theoretical intracameral rate, suggesting that there was no diffusion into the posterior segment. There was no significant difference between the two groups in endothelial cell loss. Conclusion: Intracameral injection of lidocaine is an effective technique to anesthetize intracameral structures without diffusion into the posterior segment prior to phacoemulsification. Answer: The effectiveness of intracameral lidocaine in cataract surgery has been evaluated in various studies with mixed results. One study suggested that intracameral lidocaine did not have a clinically useful role in reducing discomfort or pain during small incision cataract surgery when compared to balanced salt solution, and that intravenous sedation preoperatively was a more important determinant for relieving discomfort/pain (PUBMED:17534812). However, other studies have shown different outcomes. For instance, a study evaluating the efficacy of intracameral lidocaine and tropicamide injection in manual small-incision cataract surgery found that it provided rapid and sustainable mydriasis and analgesia, with no adverse events like corneal decompensation or toxic anterior segment syndrome (TASS) noted (PUBMED:36308110). Another study comparing patient pain and visual outcome using topical anesthesia versus regional anesthesia during cataract surgery found that intracameral lidocaine provided sufficient pain suppression in eyes without high myopia, suggesting its effectiveness in certain patient groups (PUBMED:27382247).
Additionally, a study on the systemic exposure to intracameral mydriatic agents versus topical mydriatic agents in cataract surgery found that systemic exposure to tropicamide/phenylephrine was lower and cardiovascular effects were less frequent with intracameral administration, which included lidocaine as part of the intracameral fixed combination (PUBMED:31118559). This could indicate that intracameral lidocaine is not only effective but also potentially safer for patients at cardiovascular risk. However, there have been reports of transient complete visual loss after intracameral anesthetic injection in cataract surgery, which raises concerns about the safety of intracameral lidocaine in certain situations, such as when there is a violation of the posterior capsule (PUBMED:27239591, PUBMED:29988901). In conclusion, while some studies question the clinical utility of intracameral lidocaine in reducing pain during cataract surgery, others demonstrate its efficacy in providing analgesia and mydriasis, as well as its safety profile in terms of systemic exposure and cardiovascular effects. The effectiveness of intracameral lidocaine may depend on the specific surgical technique, patient characteristics, and whether additional sedation is used.
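The odds ratios quoted above are the kind of quantity that falls out of a simple 2x2 contingency table. As a minimal illustration of that arithmetic, using hypothetical counts chosen only to land near the reported 2.34-fold figure (none of the cited abstracts publish their raw tables), with a standard Wald interval on the log scale:

```python
import math

# Hypothetical 2x2 table (illustrative only; not the published data):
# rows = anesthesia group, columns = pain / no pain during surgery.
pain_topical, no_pain_topical = 30, 45      # topical anesthesia alone
pain_combined, no_pain_combined = 14, 49    # topical + intracameral

odds_topical = pain_topical / no_pain_topical
odds_combined = pain_combined / no_pain_combined
odds_ratio = odds_topical / odds_combined

# Wald 95% CI computed on the log-odds-ratio scale
se = math.sqrt(1 / pain_topical + 1 / no_pain_topical
               + 1 / pain_combined + 1 / no_pain_combined)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR ~ 2.33 for these counts
```

The published studies may have used logistic regression rather than this raw-table calculation, but the interpretation of the resulting odds ratio is the same.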
Instruction: Plasma disappearance of indocyanine green: a marker for excretory liver function? Abstracts: abstract_id: PUBMED:2641459 Assessment of hepatic excretory function in chronic liver disease by hepatobiliary scintigraphy. Hepatobiliary scintigraphy was performed in 23 normal subjects and 47 patients with chronic liver disease (chronic hepatitis, n = 27; liver cirrhosis, n = 20) to evaluate its utility as a test of liver function. After intravenous administration of Tc-99m N-pyridoxyl-5-methyl-tryptophan, the data were acquired for 60 min and the time-activity curves of ROIs (the heart and liver) were generated. In a two-compartment model simulation, the early blood clearance rate (kl), late blood clearance rate (km), hepatic uptake rate (ku), hepatic excretion rate (ke), and hepatic excretion T 1/2 were calculated. There was no significant difference in these four k values between normal subjects and patients with chronic hepatitis. However, in liver cirrhosis each of them, except km, was lower than in normal subjects. The kl value correlated closely with the indocyanine green plasma clearance test, whereas the ke and T 1/2 values were closely correlated with the level of serum bilirubin. Only hepatobiliary scintigraphy showed the excretory function of the liver quantitatively, and the ke value was helpful in detecting hepatic excretory dysfunction early in chronic liver disease, before serum bilirubin increased. abstract_id: PUBMED:30428438 Hepatectomy in a case of hepatocellular carcinoma with constitutional indocyanine green excretory defect. Introduction: Constitutional indocyanine green (ICG) excretory defect is extremely rare. The indocyanine green retention rate at 15 min (ICGR15) is important for estimating hepatic functional reserve and selecting the appropriate surgical procedure before hepatectomy is performed. Because of the rarity of constitutional ICG excretory defect, its clinical features are not well understood. We report here the evaluation and treatment of a patient with such a disorder. Presentation Of Case: An 83-year-old man was admitted to hospital with a diagnosis of resectable hepatocellular carcinoma. The preoperative indocyanine green (ICG) retention rate at 15 min was greater than 76.2%. Despite this finding, the Child-Pugh classification and 99mTc-galactosyl human serum albumin (GSA) liver scintigraphy showed no abnormal findings, and there was no background disease. Therefore, we diagnosed him with constitutional ICG excretory defect and performed partial hepatectomy. For patients with this disease who require hepatectomy, the indications and surgical procedure should be considered on the basis of liver function tests such as GSA liver scintigraphy. Conclusions: Constitutional ICG excretory defect is an extremely rare disorder. At present, the indications for surgery for this condition should be comprehensively considered. Findings of liver function tests, such as a general liver function test and GSA liver scintigraphy, are important for treating this disorder. abstract_id: PUBMED:27589984 Central bisectionectomy for hepatocellular carcinoma in a patient with indocyanine green excretory defect associated with reduced expression of the liver transporter. Background: Indocyanine green (ICG) excretory defect is a dye excretory disorder characterized by the selective impairment of plasma ICG clearance with normal liver histology. The pathophysiology involves selective loss of active transporters for ICG in the hepatic cell membrane.
Several cases of hepatectomy in patients with ICG excretory defect have been reported, but the expression of hepatic transporters involved in ICG excretory defect has not been examined in these cases. Case Presentation: An 81-year-old man who was hepatitis B and C virus negative was admitted to our hospital with a diagnosis of HCC. Abdominal computed tomography revealed an 8-cm-diameter tumor in hepatic segments 4 and 8. The retention rate of ICG at 15 min (ICGR15), which has been used to evaluate hepatic functional reserve, was markedly elevated (79.1%), whereas other liver function test results were normal. Therefore, we diagnosed the patient with HCC with an ICG excretory defect and considered major hepatectomy. Central bisectionectomy was performed, and the postoperative course was uneventful. Microscopic examination of the resected specimen showed moderately differentiated HCC. Immunohistochemical staining and polymerase chain reaction analysis of a non-neoplastic site of the resected specimen showed very little expression of the organic anion-transporting polypeptide 1B3 (OATP1B3), which is usually expressed on the basolateral membrane of human hepatocytes and mediates the uptake of ICG. Conclusions: We present a case of hepatectomy for HCC in a patient with an ICG excretory defect, which may be attributable to a congenital disorder of OATP1B3 expression; however, the ICG excretory defect did not seem to have any effect on the short-term prognosis after hepatectomy. abstract_id: PUBMED:695200 Ujoviridin method of studying the absorptive and excretory function of the liver and its blood flow in mechanical jaundice. The blood flow and absorption-excretory function of the liver were studied in 5 normal individuals and in 50 cases of mechanical jaundice. In 26 of these, mechanical jaundice was due to tumors localized in the hepatoduodenal zone, and in 24 it was a complication of cholelithiasis. It was found that obstructive jaundice cases develop severe hepatocellular and hemodynamic disorders, the degree of which could be determined by the half-absorption of ujoviridin, its clearance, and the relative percentage of the clearance. abstract_id: PUBMED:22534730 How to assess liver function? Purpose Of Review: The liver comprises a multitude of parenchymal and nonparenchymal cells with diverse metabolic, hemodynamic and immune functions. Available monitoring options consist of 'static' laboratory parameters, quantitative tests of liver function based on clearance, elimination or metabolite formation, and scores, most notably the 'model for end-stage liver disease'. This review aims at balancing conventional markers against 'dynamic' tests in the critically ill. Recent Findings: There is emerging evidence that conventional laboratory markers, most notably bilirubin, and the composite model for end-stage liver disease are superior for assessing cirrhosis and its acute decompensation, while dynamic tests provide information in the absence of preexisting liver disease. Bilirubin and the plasma disappearance rate of indocyanine green, reflecting static and dynamic indicators of excretory dysfunction, prognosticate unfavorable outcome better than other functional measures or indicators of injury, both in the absence and in the presence of chronic liver disease. Although dye excretion is superior to conventional static parameters in the critically ill, it still underestimates impaired canalicular transport, an increasingly recognized facet of excretory dysfunction.
Summary: Progress has been made in the last year in weighing static and dynamic tests to monitor parenchymal liver functions, whereas biomarkers to assess nonparenchymal functions remain largely obscure. abstract_id: PUBMED:16231068 Plasma disappearance of indocyanine green: a marker for excretory liver function? Objective: To investigate whether the plasma disappearance rate of indocyanine green (ICG) assessed using a commercially available bedside monitor provides an accurate estimation of cumulative biliary ICG excretion in a clinically relevant model of long-term, hyperdynamic porcine endotoxemia. Design And Setting: Prospective experimental study in the animal laboratory of a university hospital. Subjects: Fifteen domestic pigs. Interventions: Pigs were anesthetized, mechanically ventilated, and instrumented. Intravenous endotoxin was continuously infused over 12 h concomitant with fluid resuscitation. Measurements were performed before and 12 h after the start of endotoxin infusion. Measurements And Results: All animals developed hyperdynamic circulation characterized by a sustained increase in cardiac output. Despite well-maintained portal venous and, consequently, total liver blood flow, endotoxemia decreased hepatic lactate uptake, which was accompanied by a significant fall in portal and hepatic venous pH. Both the cumulative bile flow and the biliary ICG and bicarbonate excretion measured during 1 h after an intravenous bolus of 25 mg ICG fell significantly. By contrast, neither the plasma disappearance rate of ICG nor the rate corrected for liver blood flow exhibited any changes over time. Conclusions: In hyperdynamic porcine endotoxemia the plasma disappearance rate of ICG failed to accurately substitute for direct short-term measures of biliary ICG excretion. Hence normal values of the plasma disappearance rate of ICG should be interpreted with caution in early, acute inflammatory conditions. abstract_id: PUBMED:19630098 Could quantitative liver function tests gain wide acceptance among hepatologists? It has been emphasized that the assessment of residual liver function is of paramount importance to determine the following: severity of acute or chronic liver diseases independent of etiology; long-term prognosis; step-by-step disease progression; surgical risk; and efficacy of antiviral treatment. The most frequently used tools are the galactose elimination capacity to assess hepatocyte cytosol activity, plasma clearance of indocyanine green to assess excretory function, and antipyrine clearance to estimate microsomal activity. However, a widely accepted liver test (not necessarily a laboratory one) to assess quantitative functional hepatic reserve still needs to be established, although there have been various proposals. Furthermore, who are the operators that should order these tests? Advances in analytic methods are expected to allow quantitative liver function tests to be used in clinical practice. abstract_id: PUBMED:33845602 Hepatocellular carcinoma with indocyanine green excretory defect: a case report and review of the literature. Constitutional indocyanine green (ICG) excretory defect is rare. However, ICG excretory defect concomitant with hepatocellular carcinoma (HCC) is extremely rare, and only six reports of hepatectomy in patients with constitutional ICG excretory defect have been published in the English-language literature through 2020. In this study, we report a case of combined HCC and ICG excretory defect and discuss its clinicopathological features and outcomes.
The case featured a 68-year-old man who was admitted to the hospital with a diagnosis of resectable HCC. The preoperative ICG retention rate at 15 minutes was 82.9%. Despite this finding, the Child-Pugh assessment and hepatobiliary-specific magnetic resonance imaging (MRI) did not reveal any abnormal findings. Therefore, we diagnosed the patient with constitutional ICG excretory defect and performed partial hepatectomy. For patients requiring hepatectomy, the indications and surgical procedure should be determined on the basis of liver function tests such as gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced MRI. abstract_id: PUBMED:376974 Clearance tests of liver function. Measurement of the clearance of substances removed from the circulation primarily by the liver, which depends on the uptake, storage, and excretory capacity of the liver, provides the most sensitive, specific, and reliable test for hepatic function. Since serum enzymes and other tests of liver function do not permit detection of early phase injury or provide information on progress, and since the use of sulfobromophthalein has been discontinued because of occasional fatal reactions, substitutes are necessary. The clearances of indocyanine green, bile acid, and exogenous metabolites such as galactose, protein, and aminopyrine are reviewed. abstract_id: PUBMED:23301644 Liver function assessment in oncology practice. The overall condition and prognosis of a patient can be affected by impaired liver function. This applies to anticancer pharmacotherapy, liver surgery and radiological interventions. The liver condition is usually assessed by common laboratory tests and clinical examination in daily practice. Liver tests consist of the aminotransferases (alanine transaminase and aspartate transaminase), bilirubin, alkaline phosphatase, gamma glutamyl transpeptidase, lactate dehydrogenase, albumin and prothrombin time, and less frequently prealbumin and cholinesterase. Alkaline phosphatase and aspartate transaminase are markers of liver damage, while alkaline phosphatase and gamma glutamyl transpeptidase are most useful as markers of cholestatic liver injury. Albumin, prealbumin, cholinesterase and prothrombin time are markers of synthetic liver function. Bilirubin and bile acids are related to the liver's transport and excretory capacity. The Child-Pugh score is used to assess the prognosis of chronic liver disease, mainly cirrhosis. The examination of liver function using indocyanine green helps to determine the extent of possible liver resection. A mathematical analysis of dynamic cholescintigraphy and a calculation of the hepatic extraction fraction enable quantification of liver function. Other liver function tests are of little use in oncology. Answer: The plasma disappearance rate of indocyanine green (ICG) has been investigated as a potential marker for excretory liver function. In a study involving a porcine model of long-term, hyperdynamic endotoxemia, it was found that despite well-maintained portal venous and total liver blood flow, endotoxemia decreased hepatic lactate uptake and significantly reduced cumulative bile flow and biliary ICG and bicarbonate excretion. However, the plasma disappearance rate of ICG did not exhibit any changes over time, suggesting that in hyperdynamic conditions, the plasma disappearance rate of ICG may not accurately reflect direct short-term measures of biliary ICG excretion (PUBMED:16231068).
In clinical practice, the plasma disappearance rate of ICG is used to assess liver function, particularly its excretory capacity. It is considered a dynamic test that can provide information in the absence of preexisting liver disease. Bilirubin and the plasma disappearance rate of ICG are both indicators of excretory dysfunction and can prognosticate unfavorable outcomes in patients with or without chronic liver disease (PUBMED:22534730). However, the plasma disappearance rate of ICG may still underestimate impaired canalicular transport, which is an increasingly recognized aspect of excretory dysfunction (PUBMED:22534730). Additionally, the clearance of ICG and other exogenous metabolites provides sensitive, specific, and reliable tests for hepatic function, particularly when early phase injury detection or progress information is needed (PUBMED:376974). In summary, while the plasma disappearance rate of ICG is used as a marker for excretory liver function, its accuracy may be limited in certain conditions, such as hyperdynamic endotoxemia. It is still considered a valuable dynamic test for assessing liver excretory capacity, especially when conventional static parameters are not sufficient (PUBMED:22534730; PUBMED:16231068).
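For context on how the two recurring ICG indices relate to each other: the plasma disappearance rate (PDR) and the 15-minute retention rate (ICGR15) are both conventionally derived from the same mono-exponential decay model of dye concentration after a bolus. A minimal sketch of that calculation follows; the sampling times and concentrations are illustrative values, not data from any study cited above:

```python
import numpy as np

# Illustrative ICG plasma concentrations after a bolus (not data from the cited studies)
t = np.array([3.0, 6.0, 9.0, 12.0, 15.0])   # minutes after injection
c = np.array([4.1, 3.0, 2.2, 1.6, 1.2])     # concentration, mg/L

# Fit ln(c) = ln(c0) - k*t, i.e., mono-exponential elimination c(t) = c0 * exp(-k*t)
slope, intercept = np.polyfit(t, np.log(c), 1)
k = -slope                                   # elimination rate constant, 1/min

pdr = 100.0 * k                              # plasma disappearance rate, %/min
icgr15 = 100.0 * np.exp(-k * 15.0)           # % of back-extrapolated c0 remaining at 15 min
print(f"PDR = {pdr:.1f} %/min, ICGR15 = {icgr15:.1f} %")
```

Under this model, the commonly quoted normal values (PDR of roughly 18-25 %/min, ICGR15 under 10%) make clear why the retention rates of 76-83% in the case reports above stood out so sharply against otherwise normal liver function tests.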
Instruction: CT versus plain radiographs for evaluation of c-spine injury in young children: do benefits outweigh risks? Abstracts: abstract_id: PUBMED:18368400 CT versus plain radiographs for evaluation of c-spine injury in young children: do benefits outweigh risks? Background: Various reports support the use of cervical spine (c-spine) CT over conventional radiography in screening for c-spine injury. Interest now exists in diagnostic radiation-induced morbidity. Objective: To estimate the excess relative risk of developing cancer from c-spine high-resolution CT radiation exposure. Materials And Methods: We conducted a retrospective review of children evaluated for c-spine injury using CT. The study population was divided into three age groups: 0-4 years (group 1), 5-8 years (group 2), and older than 8 years (group 3). Anthropomorphic 1-year-old and 5-year-old phantoms were used to measure radiation at the thyroid during radiography and CT. Excess relative risk for thyroid cancer was estimated using these measurements. Results: A total of 557 patients were evaluated with CT. The radiographic method most commonly used was head CT/c-spine CT, in 363 (65%). Only 179 children (32%) had any type of prior radiography. The use of c-spine CT exposes the thyroid to 90-200 times more radiation than multiple conventional radiographs. The mean excess relative risk for thyroid cancer after CT was 2.0 for group 1 and 0.6 for group 2. There were no comparison data for group 3. Conclusion: C-spine CT is associated with significant exposure to ionizing radiation and increases the excess relative risk for thyroid cancer in young children. abstract_id: PUBMED:4037477 Emergency evaluation of cervical spine injuries: CT versus plain radiographs. The recognition and appropriate initial management of the patient with an acute cervical spine injury in the ED is important because of the devastating and catastrophic effects of spinal cord injury. The use of computed tomography (CT) compared with initial plain radiographs in the detection of acute blunt traumatic cervical spine injury was evaluated in 20 patients. There was a disparity between the plain film and the CT scan as read by an attending radiologist in 12 patients (60%). In five patients (25%) the plain radiograph suggested a fracture or dislocation that was confirmed by CT scan. In eight patients (40%) the cervical spine film was read as a fracture, dislocation, or soft tissue widening between the cervical spine vertebrae, but a CT scan done later, after admission, was normal. In the remaining seven patients the plain film was read as "normal." CT scan, however, was normal in only three, and in four of these seven patients there was a discrepancy between the plain radiograph and the CT. Thus, in four of 20 patients (20%) in our study, the plain film was read as "normal" while the CT scan showed a fracture. CT was superior to plain films in diagnosing cervical spine trauma, and it eliminated the false-positive (40%) and false-negative (20%) results obtained by relying on plain radiographs alone. abstract_id: PUBMED:36061007 Diagnostic accuracy of deep learning for evaluation of C-spine injury from lateral neck radiographs. Background: Traumatic spinal cord injury (TSI) is a leading cause of morbidity and mortality worldwide, with the cervical spine being the most affected. Delayed diagnosis carries a risk of morbidity and mortality. However, cervical spine CT scans are time-consuming, costly, and not always available in general care.
In this study, deep learning was used to assess and improve the detection of cervical spine injuries on lateral radiographs, the most widely used screening method, to help physicians triage patients quickly and avoid unnecessary CT scans. Materials And Methods: Lateral neck or lateral cervical spine radiographs were obtained for patients who underwent CT scans of the cervical spine. Ground truth was determined based on CT reports. CiRA CORE, a codeless deep learning program, was used as the training and testing platform. YOLO network models, including V2, V3, and V4, were trained to detect cervical spine injury. The diagnostic accuracy, sensitivity, and specificity of the model were calculated. Results: A total of 229 radiographs (129 negative and 100 positive) were selected for inclusion in our study from a list of 625 patients with cervical spine CT scans, 181 (28.9%) of whom had cervical spine injury. The YOLO V4 model performed better than the V2 or V3 (AUC = 0.743), with sensitivity, specificity, and accuracy of 80%, 72%, and 75%, respectively. Conclusion: Deep learning can improve the accuracy of lateral c-spine or neck radiographs. We anticipate that this will assist clinicians in quickly triaging patients and help to minimize the number of unnecessary CT scans. abstract_id: PUBMED:27321014 Sensitivity of plain radiography for pediatric cervical spine injury. Pediatric patients with suspected cervical spine injuries (CSI) often receive a computed tomography (CT) scan as an initial diagnostic imaging test. While sensitive, CT of the cervical spine carries significant radiation and a risk of lethal malignant transformation later in life. Plain radiographs carry significantly less radiation and could serve as the preferred screening tool, provided they have a high functional sensitivity in detecting pediatric patients with CSI. We hypothesize that plain cervical spine radiographs can reliably detect pediatric patients with CSI and seek to quantify the functional sensitivity of plain radiography as compared to CT. We analyzed data from the NEXUS cervical spine study to assess the sensitivity of plain radiographs in the evaluation of CSI. We identified all pediatric patients who underwent plain radiographic imaging, and all pediatric patients found to have CSI. We then determined the sensitivity of plain radiographs in detecting pediatric patients with CSI. We identified 44 pediatric patients with CSI in the dataset, with ages ranging from 2 to 18 years. Thirty-two of the 44 pediatric patients received cervical spine plain films as a part of their workup. Plain films were able to identify all 32 pediatric patients with CSI, yielding a sensitivity of 100% in detecting injury victims (95% confidence interval: 89.1-100.0%). Plain radiography was highly sensitive for the identification of CSI in our cohort of pediatric patients and is useful as a screening tool in the evaluation of pediatric CSI. abstract_id: PUBMED:24342907 Are plain radiographs sufficient to exclude cervical spine injuries in low-risk adults? Background: The routine use of clinical decision rules and three-view plain radiography to clear the cervical spine in blunt trauma patients has been recently called into question. Clinical Question: In low-risk adult blunt trauma patients, can plain radiographs adequately exclude cervical spine injury when clinical prediction rules cannot?
Evidence Review: Four observational studies investigating the performance of plain radiographs in detecting cervical spine injury in low-risk adult blunt trauma patients were reviewed. Conclusion: The consistently poor performance of plain radiographs in ruling out cervical spine injury in adult blunt trauma victims is concerning. Large, rigorously performed prospective trials focusing on low- or low/moderate-risk patients will be needed to truly define the utility of plain radiographs of the cervical spine in blunt trauma. abstract_id: PUBMED:24017957 "Artifactual fracture-subluxation" of cervical spine in computed tomography screening sans plain radiographs. Background Context: Computed tomography (CT) has become the sole screening modality for cervical injury in polytrauma because of its high sensitivity, speed, and convenience, thereby eliminating the need for plain radiographs. Purpose: We report two cases of misleading artifactual fracture-subluxation of the cervical spine on CT, which could have resulted in needless treatment, and describe their characteristics. Study Design: Case report and review. Methods: Two patients who were initially diagnosed with fracture-subluxation on screening cervical spine CT were later noted to have motion artifacts and were reviewed. Results: The artifactual nature of the supposed fracture-subluxation was unmasked by the soft-tissue findings of obscuration in sagittal reconstruction and duplication in axial images, along with the presence of double bony margins. Conclusions: Motion artifact in cervical CT screening can lead to a misdiagnosis of fracture-subluxation. Duplication of soft tissue is highly suggestive of this motion artifact, and an additional single lateral plain radiograph may avert this pitfall. abstract_id: PUBMED:25085950 Unstable C-spine injury with normal C-spine radiographs. There is some controversy surrounding the optimal mode of imaging in trauma patients with suspected cervical (C) spine injury. Various rules (most notably the Canadian C-spine rules and the NEXUS rules) have been designed to help reduce the need for imaging, given the poor yield. Some authorities advocate CT for almost all cases, whereas others advocate three-view radiographs unless the patient is at high risk, in which case CT is the preferred choice. One meta-analysis showed a sensitivity of 58% (39-76%) for plain radiographs and 98% for CT in the identification of C-spine injuries following blunt trauma. This case report illustrates how very unstable C-spine injuries may not be apparent on plain radiographs and a degree of clinical suspicion may be required for further imaging. abstract_id: PUBMED:17563660 Prospective evaluation of multislice computed tomography versus plain radiographic cervical spine clearance in trauma patients. Background: The objective of this study was to compare the utility of plain radiographs to multislice computed tomography (MCT) for cervical spine (c-spine) evaluation. We hypothesized that plain radiographs add no clinically relevant diagnostic information to MCT in the screening evaluation of the c-spine of trauma patients. Methods: This was a prospective, unblinded, consecutive series of injured patients requiring c-spine evaluation who were imaged with three-view plain films and MCT (occiput to T1 with 3-dimensional reconstruction). The final discharge diagnosis based on all prospectively collected clinical data, MCT, and plain films was utilized as the gold standard for the sensitivity calculation.
Results: From October 2004 to February 2005, 667 trauma patients requiring c-spine evaluation were enrolled. The average age was 35.4 years, and 70% were male. The mechanism of injury was blunt in 99%, and 48.7% of injuries occurred as a result of motor vehicle collisions. Sixty of 667 (9%) sustained acute c-spine injuries. MCT had a sensitivity of 100% and a specificity of 99.5%. Plain films had a sensitivity of 45% and a specificity of 97.4%. Plain radiography missed 15 of 27 (55.5%) clinically significant c-spine injuries. Conclusion: MCT outperformed plain radiography as a screening modality for the identification of acute c-spine injury in trauma patients. All clinically significant injuries were detected by MCT. Plain films failed to identify 55.5% of clinically significant fractures identified by MCT and added no clinically relevant information. abstract_id: PUBMED:15664088 Cervical spine evaluation in urban trauma centers: lowering institutional costs and complications through helical CT scan. Background: In the evaluation of the cervical spine (c-spine), helical CT has higher sensitivity and specificity than plain radiographs in the moderate- and high-risk trauma population, but is more costly. We hypothesize that the institutional costs associated with missed injuries make helical CT the least costly approach. Study Design: A cost-minimization study was performed using decision analysis examining helical CT versus radiographic evaluation of the c-spine. Parameter estimates were obtained from the literature for the probability of c-spine injury, the probability of paralysis after missed injury, plain film sensitivity and specificity, CT sensitivity and specificity, and the settlement cost of missed injuries resulting in paralysis. Institutional costs of CT and plain radiography were used. Sensitivity analyses tested the robustness of strategy preference, accounted for parameter variability, and determined threshold values of individual parameters for strategy preference. Results: C-spine evaluation with helical CT has an expected cost of US $554 per patient compared with US $2,142 for plain films. CT is the least costly alternative if threshold values exceed US $58,180 for institutional settlement costs, 0.9% for the probability of c-spine fracture, and 1.7% for the probability of paralysis. Plain films are least costly if CT costs surpass US $1,918 or plain film sensitivity exceeds 90%. Conclusions: Helical CT is the preferred initial screening test for the detection of cervical spine fractures among moderate- to high-risk patients seen in urban trauma centers, reducing both the incidence of paralysis resulting from false-negative imaging studies and institutional costs, when settlement costs are taken into account. abstract_id: PUBMED:19412022 Comparison of computed tomography and plain radiography in assessing traumatic spinal deformity. Study Design: An imaging study assessing agreement between computed tomographic (CT) scans and plain radiographs when measuring acute thoracolumbar spinal deformity. Objective: To compare the ability of screening CT scans to measure spinal deformity in the thoracolumbar spine with that of portable plain radiographs. Summary Of Background Data: At the time of arrival at many trauma centers, patients undergo screening whole-body CT as the initial imaging evaluation for trauma. The thoracolumbar spine is well visualized on both coronal and sagittal reconstructions.
Methods: Images of patients who underwent screening CT and portable supine plain radiography of thoracolumbar fractures were reviewed. Four observers, at 2 separate times, measured pathologic kyphosis and scoliosis on each image by using the Cobb method. Data were analyzed for statistical differences. Results: In the sagittal plane, screening CT scans and plain radiographs showed a mean difference of -1.13 +/- 3.76 (SD) degrees. In the coronal plane, CT scans and radiographs showed a mean difference of 0.10 +/- 2.52 (SD) degrees. The interobserver correlation coefficients among the 4 observers were 0.913 in the sagittal plane and 0.953 in the lateral plane, indicating excellent interobserver correlation. Conclusions: Screening CT demonstrates excellent correlation with plain radiography for measuring thoracolumbar spinal deformity in the coronal and sagittal planes for patients with trauma. Answer: The evaluation of cervical spine (c-spine) injury in young children using CT versus plain radiographs presents a complex risk-benefit analysis. On one hand, CT scans have been shown to be superior to plain films in diagnosing cervical spine trauma, as they eliminate the false-positive and false-negative results obtained by relying on plain radiographs alone (PUBMED:4037477). Additionally, CT scans have a sensitivity of 100% and a specificity of 99.5% for detecting acute c-spine injury, whereas plain films have a much lower sensitivity of 45% (PUBMED:17563660). This suggests that CT is a more reliable method for identifying c-spine injuries. However, the use of c-spine CT exposes the thyroid to significantly more radiation than multiple conventional radiographs (90-200 times more), which increases the excess relative risk for thyroid cancer in young children (PUBMED:18368400). This is particularly concerning given the vulnerability of children to radiation-induced morbidity. The mean excess relative risk for thyroid cancer after CT was estimated to be 2.0 for children aged 0-4 years and 0.6 for children aged 5-8 years (PUBMED:18368400). Deep learning models have been developed to improve the accuracy of lateral c-spine or neck radiographs, which could assist clinicians in quickly triaging patients and help to minimize the number of unnecessary CT scans (PUBMED:36061007). This suggests that there may be alternative methods to reduce radiation exposure while still maintaining diagnostic accuracy. In conclusion, while CT scans offer a more accurate diagnosis of c-spine injuries in young children, the associated risks of radiation exposure and the potential for increased cancer risk must be carefully weighed. The benefits of CT scans in terms of diagnostic accuracy need to be balanced against the risks of radiation, especially in young children who are more susceptible to its harmful effects. It is crucial to consider individual patient factors, the availability of alternative diagnostic methods such as deep learning-enhanced radiographs, and the clinical context when deciding between CT scans and plain radiographs for evaluating c-spine injury in young children.
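The headline sensitivity figures in this item are simple confusion-matrix arithmetic. As a check on one of them, the sketch below reproduces the pediatric plain-film result from the NEXUS analysis (32 of 32 injuries detected, 95% CI 89.1-100.0%), assuming the interval was computed with the standard Clopper-Pearson exact method:

```python
from scipy.stats import beta

# Pediatric CSI patients imaged with plain films who were correctly identified (PUBMED:27321014)
detected, total = 32, 32
sensitivity = detected / total

# Clopper-Pearson exact 95% CI for a binomial proportion
alpha = 0.05
lower = beta.ppf(alpha / 2, detected, total - detected + 1) if detected > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, detected + 1, total - detected) if detected < total else 1.0

print(f"sensitivity = {sensitivity:.0%}, 95% CI {lower:.1%}-{upper:.1%}")
# prints: sensitivity = 100%, 95% CI 89.1%-100.0%
```

The lower bound matching the published 89.1% suggests the exact method was indeed used; note that with only 32 injured patients, even a perfect detection record cannot exclude a true sensitivity as low as about 89%.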
Instruction: Do the malnutrition universal screening tool (MUST) and Birmingham nutrition risk (BNR) score predict mortality in older hospitalised patients? Abstracts: abstract_id: PUBMED:18847458 Do the malnutrition universal screening tool (MUST) and Birmingham nutrition risk (BNR) score predict mortality in older hospitalised patients? Background: Undernutrition is common in older hospitalised patients, and routine screening is advocated. It is unclear whether screening tools such as the Birmingham Nutrition Risk (BNR) score and the Malnutrition Universal Screening Tool (MUST) can successfully predict outcome in this patient group. Methods: Consecutive admissions to Medicine for the Elderly assessment wards in Dundee were assessed between mid-October 2003 and mid-January 2004. Body Mass Index (BMI), MUST and BNR scores were prospectively collected. Time to death was obtained from the Scottish Death Register and compared across strata of risk. Results: 115 patients were analysed, mean age 82.1 years; 39/115 (34%) were male. 20 patients were identified as high risk by both methods of screening. A further 10 were categorised as high risk only with the Birmingham classification and 12 only with MUST. 80/115 (67%) patients had died at the time of accessing death records. MUST category significantly predicted death (log rank test, p = 0.022). Neither BMI (log rank p = 0.37) nor the Birmingham nutrition score (log rank p = 0.35) predicted death. Conclusion: The MUST score, but not the BNR, is able to predict increased mortality in older hospitalised patients. abstract_id: PUBMED:35791649 Association of nutrition risk screening 2002 and Malnutrition Universal Screening Tool with COVID-19 severity in hospitalized patients in Iran. Background: Malnutrition affects normal body function and is associated with disease severity and mortality. Due to the high prevalence of malnutrition reported in patients with coronavirus disease 2019 (COVID-19), the current study examined the association between malnutrition and disease severity in hospitalized adult patients with COVID-19 in Iran. Methods: In this prospective observational study, 203 adult patients with COVID-19, verified by real-time polymerase chain reaction test and chest computed tomography, were recruited from those admitted to a university hospital in Iran. To determine COVID-19 intensity, patients were categorized into four groups. Malnutrition assessment was based on the Malnutrition Universal Screening Tool (MUST) and the Nutrition Risk Screening score (NRS-2002). An ordinal regression model was run to assess the association between malnutrition and disease severity. Results: In the studied sample of Iranian patients with COVID-19, 38.3% of patients had severe COVID-19. According to NRS-2002, 12.9% of patients were malnourished. Based on MUST, 2% of patients were at medium risk and 13.4% were at high risk of malnutrition. Malnutrition was associated with higher odds of extremely severe COVID-19 according to NRS-2002 (odds ratio, 1.38; 95% confidence interval, 0.21-2.56; P=0.021). Conclusions: Malnutrition was not prevalent in the studied sample of Iranian patients with COVID-19; however, it was associated with higher odds of extremely severe COVID-19. abstract_id: PUBMED:15573502 Simplified malnutrition screening tool: Malnutrition Universal Screening Tool (MUST). MUST (Malnutrition Universal Screening Tool) is a nutritional screening tool that is easy to use by any trained caregiver and valid for any adult patient.
It considers body mass index, weight change and acute disease effect equally and determines a malnutrition risk score. If necessary, anthropometric measures may be simplified by alternative methods. MUST is reliable across different healthcare settings and promotes detection and management of malnutrition during the patient's medical course. abstract_id: PUBMED:27528452 Mini-Nutritional Assessment, Malnutrition Universal Screening Tool, and Nutrition Risk Screening Tool for the Nutritional Evaluation of Older Nursing Home Residents. Introduction: Malnutrition plays a major role in clinical and functional impairment in older adults. The use of validated, user-friendly and rapid screening tools for malnutrition in the elderly may improve the diagnosis and, possibly, the prognosis. The aim of this study was to assess the agreement between the Mini-Nutritional Assessment (MNA), considered the reference tool, the MNA short form (MNA-SF), the Malnutrition Universal Screening Tool (MUST), and the Nutrition Risk Screening (NRS-2002) in elderly institutionalized participants. Methods: Participants were enrolled among nursing home residents and underwent a multidimensional evaluation. Predictive value and survival analyses were performed to compare the nutritional classifications obtained from the different tools. Results: A total of 246 participants (164 women, age: 82.3 ± 9 years, and 82 men, age: 76.5 ± 11 years) were enrolled. Based on MNA, 22.6% of females and 17% of males were classified as malnourished; 56.7% of women and 61% of men were at risk of malnutrition. Agreement between MNA and MUST or NRS-2002 was classified as "fair" (k = 0.270 and 0.291, respectively; P < .001), whereas the agreement between MNA and MNA-SF was classified as "moderate" (k = 0.588; P < .001). Because of the high percentage of false negative participants, MUST and NRS-2002 presented a low overall predictive value compared with MNA and MNA-SF. Clinical parameters in participants classified as false negative by MUST or NRS-2002 differed significantly from those of true negative and true positive individuals according to the reference tool. For all screening tools, there was a significant association between malnutrition and mortality. MNA showed the best predictive value for survival among well-nourished participants. Conclusions: Functional, psychological, and cognitive parameters, not considered in the MUST and NRS-2002 tools, are probably more important risk factors for malnutrition than acute illness in geriatric long-term care inpatient settings and may account for the low predictive value of these tests. MNA-SF seems to combine the predictive capacity of the full version of the MNA with a sufficiently short time of administration. abstract_id: PUBMED:30058522 'Self-screening' for malnutrition with an electronic version of the Malnutrition Universal Screening Tool ('MUST') in hospital outpatients: concurrent validity, preference and ease of use. Self-screening using an electronic version of the Malnutrition Universal Screening Tool ('MUST') has been developed, but its implementation requires investigation. A total of 100 outpatients (mean age 50 (sd 16) years; 57% male) self-screened with an electronic version of 'MUST' and were then screened by a healthcare professional (HCP) to assess concurrent validity. Ease of use, time to self-screen and prevalence of malnutrition were also assessed. A further twenty outpatients (mean age 54 (sd 15) years; 55% male) examined preference between self-screening with paper and electronic versions of 'MUST'.
For the three-category classification of 'MUST' (low, medium and high risk), agreement between electronic self-screening and HCP screening was 94% (κ=0.74, se 0.092; P<0.001). For the two-category classification (low risk; medium+high risk), agreement was 96% (κ=0.82, se 0.085; P<0.001), comparable with the previously reported paper-based self-screening. In all, 15% of patients categorised themselves 'at risk' of malnutrition (5% medium, 10% high). Electronic self-screening took 3 min (sd 1.2 min), 40% faster than previously reported for the paper-based version. Patients found the tool easy or very easy to understand (99%) and complete (98%). Patients who assessed both tools found the electronic tool easier to complete (65%) and preferred it (55%) to the paper version. Electronic self-screening using 'MUST' in a heterogeneous group of hospital outpatients is acceptable, user-friendly and has 'substantial to almost-perfect' agreement with HCP screening. The electronic format appears to be as agreeable as, and often preferred to, the validated paper-based 'MUST' self-screening tool. abstract_id: PUBMED:28199797 Nutritional Risk Screening 2002, Short Nutritional Assessment Questionnaire, Malnutrition Screening Tool, and Malnutrition Universal Screening Tool Are Good Predictors of Nutrition Risk in an Emergency Service. Background: There is an international consensus that nutrition screening be performed at the hospital; however, there is no "best tool" for screening of malnutrition risk in hospitalized patients. Objective: To evaluate (1) the accuracy of the MUST (Malnutrition Universal Screening Tool), MST (Malnutrition Screening Tool), and SNAQ (Short Nutritional Assessment Questionnaire) in comparison with the NRS-2002 (Nutritional Risk Screening 2002) in identifying patients at risk of malnutrition and (2) the ability of these nutrition screening tools to predict morbidity and mortality. Methods: A specific questionnaire was administered to complete the 4 screening tools. Outcome measures included length of hospital stay, transfer to the intensive care unit, presence of infection, and incidence of death. Results: A total of 752 patients were included. The nutrition risk was 29.3%, 37.1%, 33.6%, and 31.3% according to the NRS-2002, MUST, MST, and SNAQ, respectively. All screening tools showed satisfactory performance in identifying patients at nutrition risk (area under the receiver operating characteristic curve between 0.765 and 0.808). Patients at nutrition risk showed a higher risk of very long hospital stay compared with those not at nutrition risk, independent of the tool applied (relative risk, 1.35-1.78). An increased risk of mortality (2.34 times) was detected by the MUST. Conclusion: The MUST, MST, and SNAQ have accuracy similar to that of the NRS-2002 in identifying risk of malnutrition, and all instruments were positively associated with very long hospital stay. In clinical practice, all 4 tools could be applied, and the choice among them should be made according to the particularities of the service. abstract_id: PUBMED:34945154 A Comparison of the Malnutrition Universal Screening Tool (MUST) and the Mini Nutritional Assessment-Short Form (MNA-SF) Tool for Older Patients Undergoing General Surgery. The optimal malnutrition screening tool in geriatric surgery has yet to be determined. Herein, we compare two main tools in older patients undergoing general surgery operations.
Older patients (>65 years old) who underwent general surgery operations between 2012 and 2017 in a tertiary centre were included. The Malnutrition Universal Screening Tool (MUST) and the Mini Nutritional Assessment Short Form (MNA-SF) were used for nutritional risk assessment. Preoperative variables as well as postoperative outcomes were recorded prospectively. Agreement between tools was determined with the weighted kappa (κ) statistic. Multiple regression analysis was used to assess the association of the screening tools with postoperative outcomes. A total of 302 patients (median age 74 years, range: 65-92) were included. A similar number of patients were classified as medium/high risk for malnutrition with the MNA-SF and MUST (26% vs. 36%, p = 0.126). Agreement between the two tools was moderate (weighted κ: 0.474; 95% CI: 0.381-0.568). In the multivariate analysis, MNA-SF was associated significantly with postoperative mortality (p = 0.038) and with postoperative length of stay (p = 0.001). MUST was associated with postoperative length of stay (p = 0.048). The MNA-SF seems to be more consistently associated with postoperative outcomes in elderly patients undergoing general surgery compared with the MUST tool. abstract_id: PUBMED:35529305 Nutritional Risk Screening in Hospitalized Adults Using the Malnutrition Universal Screening Tool at a Tertiary Care Hospital in South India. Background and objectives Malnutrition is still widely prevalent in India. Various nutritional screening tools have been developed to screen for nutritional risk status, but no one tool is considered the best. The Malnutrition Universal Screening Tool (MUST) is accepted by the European Society for Clinical Nutrition and Metabolism and validated for use in hospitalized adults. Hence, it was used in this study to estimate the prevalence of malnutrition in hospitalized adults and its association with socioeconomic inequality. Methods A sample of 358 randomly selected ambulatory hospitalized patients above 18 years of age was used in the study. Data pertaining to demography, socioeconomic status, medical history, and MUST were collected using a structured questionnaire. The height and weight of the patients were measured, and their BMI was determined. The patients were classified into five socioeconomic classes and their MUST scores were determined. Results A statistically significant (P < 0.05) increasing trend was observed in the height, weight, and BMI of patients with increasing socioeconomic status. Diabetes mellitus (39%) followed by hypertension (30%) were the predominant comorbid conditions. According to MUST, the overall prevalence of medium and high risk of malnutrition was 11% and 24%, respectively, and the socioeconomic class that was most impacted was Class 4 (1,130-2,259 INR per capita monthly income). Interpretation and conclusions Socioeconomic status influences the prevalence of malnutrition, comorbid conditions, and the anthropometric measurements of admitted patients. The prevalence of nutritional risk status irrespective of sex was found to be 34.91% (24.3% in men and 10.61% in women) in the study. abstract_id: PUBMED:33992514 Nutritional risk screening in noninvasively mechanically ventilated critically ill adult patients: A feasibility trial. Background: Malnutrition rates for critically ill patients being admitted to the intensive care unit (ICU) are reported to range from 38% to 78%.
Malnutrition in the ICU is associated with increased mortality, morbidity, length of hospital admission, and ICU readmission rates. The high volume of ICU admissions means that efficient screening processes to identify patients at nutritional or malnutrition risk are imperative to appropriately prioritise nutrition intervention. As the proportion of noninvasively mechanically ventilated patients in the ICU increases, the feasibility of using nutrition risk screening tools in this population needs to be established. Objectives: The aim of this study was to compare the feasibility of using the Malnutrition Universal Screening Tool (MUST) with the modified NUtriTion Risk In the Critically ill (mNUTRIC) score for identifying patients at nutritional or malnutrition risk in this population. Methods: A single-centre, prospective, descriptive, feasibility study was conducted. The MUST and mNUTRIC tool were completed within 24 h of ICU admission in a convenience sample of noninvasively mechanically ventilated adult patients (≥18 years) by a trained allied health assistant. The number (n) of eligible patients screened, time to complete screening (minutes), and barriers to completion were documented. Data are presented as mean (standard deviation), and the independent samples t-test was used for comparisons between tools. Results: Twenty patients were included (60% men; aged 65.3 [13.9] years). Screening using the MUST took a significantly shorter time to complete than screening using the mNUTRIC tool (8.1 [2.8] vs 22.1 [5.6] minutes; p = 0.001). Barriers to completion included obtaining an accurate weight history for the MUST, and the time taken for collection of information and the overall training requirements to perform mNUTRIC. Conclusions: The MUST took less time and had fewer barriers to completion than mNUTRIC. The MUST may be the more feasible nutrition risk screening tool for use in noninvasively mechanically ventilated critically ill adults. abstract_id: PUBMED:37450959 Outpatient screening with the Royal Free Hospital-Nutrition Prioritizing Tool for patients with cirrhosis at risk of malnutrition. Objectives: Malnutrition is common among inpatients with cirrhosis. However, data on the prevalence of malnutrition among stable ambulatory patients with cirrhosis are lacking. We sought to investigate the prevalence of patients at risk of malnutrition (ARMN) among ambulatory patients with cirrhosis using the Royal Free Hospital-Nutrition Prioritizing Tool (RFH-NPT) and the Malnutrition Universal Screening Tool (MUST) and to compare their correlation with clinical outcomes. Methods: Patients attending an outpatient liver cirrhosis clinic at a tertiary hospital were screened for ARMN using both the RFH-NPT and the MUST (defined by a score of ≥2 for either tool). Differences in clinical outcomes after 6 months were compared. Results: There were 134 patients recruited. The RFH-NPT identified more ARMN patients compared with the MUST (32.8% versus 8.2%; P < 0.01; Cohen κ, 0.27 [95% CI, 0.12-0.42]; P < 0.001). Fluid overload at recruitment was the only independent predictor of disagreement between the RFH-NPT and MUST (odds ratio [OR], 43.14; 95% CI, 8.70-214.00; P < 0.001). There was a trend toward an increased risk of mortality for ARMN patients by the RFH-NPT (hazard ratio, 3.58; 95% CI, 0.81-15.83; P = 0.06) but not by the MUST (P = 0.62).
The incidence of hospital admissions was higher in ARMN patients identified by the RFH-NPT, with an incidence rate ratio of 13.27 (95% CI, 5.11-43.70; P < 0.001), but not in ARMN patients identified by the MUST (P = 0.85). Being ARMN by the RFH-NPT was the only independent predictor of hospital admissions (OR, 15.08; 95% CI, 2.47-91.98; P = 0.003). Conclusions: The RFH-NPT identified more ARMN patients when compared with the MUST, especially among patients with fluid overload. Patients at risk of malnutrition were at an increased risk of hospital admissions and possibly death. Answer: The Malnutrition Universal Screening Tool (MUST) has been shown to predict increased mortality in older hospitalized patients. A study analyzing consecutive admissions to Medicine for the Elderly assessment wards in Dundee found that the MUST category significantly predicted death (log rank test, p = 0.022), while neither BMI (log rank p = 0.37) nor the Birmingham Nutrition Risk (BNR) score (log rank p = 0.35) predicted death (PUBMED:18847458).
Instruction: Short tone burst-evoked myogenic potentials on the sternocleidomastoid muscle: are these potentials also of vestibular origin? Abstracts: abstract_id: PUBMED:10367923 Short tone burst-evoked myogenic potentials on the sternocleidomastoid muscle: are these potentials also of vestibular origin? Objectives: To show that short tone bursts (STBs) evoke myogenic potentials from the sternocleidomastoid muscle (SCM) that are of vestibular origin. Design: Evoked potential activity was recorded from the SCMs of normal volunteers and from patients with vestibulocochlear disorders. Setting: This outpatient study was conducted at the Department of Otolaryngology, University of Tokyo, Tokyo, Japan. Subjects: Nine normal volunteers and 30 patients (34 affected ears) with vestibulocochlear disorders were examined. Intervention: Diagnostic. Outcome Measures: Sound-evoked myogenic potentials in response to STBs were recorded with surface electrodes over each SCM. Responses evoked by STBs in patients were compared with responses evoked by clicks. Results: In all normal subjects, STBs (0.5, 1, and 2 kHz) evoked biphasic responses on the SCM ipsilateral to the stimulated ear; the same was true for clicks. Short tone bursts of 0.5 kHz evoked the largest responses, while STBs of 2 kHz evoked the smallest. In patients with vestibulocochlear disorders, responses to STBs of 0.5 kHz were similar to responses evoked by clicks. Thirty (88%) of the 34 affected ears demonstrated the same results with 0.5-kHz STBs and with clicks. Responses were present in patients with total or near-total hearing loss and intact vestibular function. Conversely, patients with preserved hearing but with absent or severely decreased vestibular function had absent or significantly decreased myogenic potentials evoked by STBs. Conclusions: Short tone bursts as well as clicks can evoke myogenic potentials from the SCM. Myogenic potentials evoked by STBs are also probably of vestibular origin. abstract_id: PUBMED:20955634 Comparison of vestibular evoked myogenic potentials elicited by click and short duration tone burst stimuli. Introduction: Vestibular evoked myogenic potentials are short latency electrical impulses that are produced in response to higher level acoustic stimuli. They are used clinically to diagnose sacculocollic pathway dysfunction. Aim: This study aimed to compare the vestibular evoked myogenic potential responses elicited by click stimuli and short duration tone burst stimuli, in normal hearing individuals. Method: Seventeen subjects participated. In all subjects, we assessed vestibular evoked myogenic potentials elicited by click and short duration tone burst stimuli. Results And Conclusion: The latency of the vestibular evoked myogenic potential responses (i.e. the p13 and n23 peaks) was longer for tone burst stimuli compared with click stimuli. The amplitude of the p13-n23 waveform was greater for tone burst stimuli than click stimuli. Thus, the click stimulus may be preferable for clinical assessment and identification of abnormalities as this stimulus has less variability, while a low frequency tone burst stimulus may be preferable when assessing the presence or absence of vestibular evoked myogenic potential responses. abstract_id: PUBMED:11698798 Characteristics of tone burst-evoked myogenic potentials in the sternocleidomastoid muscles. Hypothesis: Optimum stimulus parameters for tone burst-evoked myogenic responses can be defined. 
These optimized responses will be similar to those evoked by clicks in the same subjects. Background: Loud tones give rise to myogenic responses in the anterior neck muscles, similar to click-evoked potentials, and are likely to be saccular in origin. Methods: Tone burst-evoked and click-evoked myogenic potentials were measured from the sternocleidomastoid muscles of 12 normal subjects (6 men, 6 women) during tonic activation. The effects of tone burst frequency and duration were systematically investigated. Thresholds were measured and compared with click thresholds for the same subjects. Patients with specific lesions were studied using both stimuli. Results: Tone burst-evoked responses showed frequency tuning, with the largest reflex amplitudes at either 500 Hz or 1 kHz. As the stimulus duration was increased, using a constant repetition rate, there was an increase in the reflex amplitudes followed by a decline. The overall optimum stimulus duration was 7 milliseconds. The mean tone burst threshold was 114.4-dB sound pressure level. Stimulus thresholds for click-evoked and tone burst-evoked responses were significantly correlated. Tone burst-evoked and click-evoked responses were present after stimulation of the affected ears of subjects with profound sensorineural hearing loss. Four subjects who had previously undergone vestibular neurectomy had an absence of click and tone burst-evoked responses on the side of the lesion, confirming their vestibular dependence. Conclusion: Tone burst-evoked myogenic responses are similar to click-evoked responses but require lower absolute stimulus intensities. To be certain of an optimum response, a stimulus duration of 7 milliseconds, an adequate intensity, and frequencies of both 500 Hz and 1 kHz should be used. abstract_id: PUBMED:26223715 Comparison of Tone Burst, Click and Chirp Stimulation in Vestibular Evoked Myogenic Potential Testing in Healthy People. Objective: Vestibular evoked myogenic potential (VEMP) is a clinical test used in the diagnosis of vestibular diseases. VEMP uses several stimulants to stimulate the vestibular system and measure myogenic potentials. The aim of this study was to compare the effects of tone burst, click, and chirp stimulation in VEMP on the latency and amplitude of myogenic potentials. Materials And Methods: We compared the results of 78 ears from 39 volunteers. We measured the sternocleidomastoid muscle potential of each ear following a 500-Hz tone burst, click, and chirp stimulation while in a sitting position and evaluated the latency and amplitude. Results: The tone burst stimulus resulted in waves with longer latency (15.8±1.9 ms) but higher amplitude (35.9±17.1 µV) compared with the other stimuli, and the chirp stimulus resulted in waves with shorter latency (9.9±2.4 ms) but lower amplitude (33±18.6 µV) (p&lt;0.001). The VEMP asymmetry ratio did not significantly differ. Onclusion: Because the amplitudes and latencies of different stimuli significantly differ, further studies including more patients and stimulus types are needed to obtain standardized VEMP protocols. abstract_id: PUBMED:36975085 Comparison of Compressed High-Intensity Radar Pulse and Tone Burst Stimulation in Vestibular Evoked Myogenic Potentials in Acute Peripheral Vestibular System Pathologies. Background: It is ascertained that the compressed high-intensity radar pulse (CHIRP) is an effective stimulus in auditory electrophysiology. 
This study aims to investigate whether Narrow Band Level Specific Claus Elberling Compressed High-Intensity Radar Pulse (NB LS CE-CHIRP) stimulus is an effective stimulus in the vestibular evoked myogenic potentials test. Methods: A case-control study was designed. Fifty-four healthy participants with no vertigo complaints and 50 patients diagnosed with acute peripheral vestibular pathology were enrolled in this study. Cervical and ocular vestibular evoked myogenic potential tests (cervical vestibular evoked myogenic potentials and ocular vestibular evoked myogenic potentials) with 500 Hz tone burst and 500 Hz Narrow Band Level Specific CE-CHIRP stimulations were performed on all participants. In addition, cervical vestibular evoked myogenic potentials and ocular vestibular evoked myogenic potentials tests with 1000 Hz tone burst and 1000 Hz Narrow Band Level Specific CE-CHIRP were performed on 24 Meniere's disease patients. P1 latency, N1 latency, amplitude, threshold, and the asymmetry ratio of responses were recorded. Results: In healthy participants, with CHIRP stimulus, shorter P1 latency (P < .001), shorter N1 latency (P < .001), and lower threshold (P = .003) were obtained in the cervical vestibular evoked myogenic potentials test; shorter P1 latency (P < .001), shorter N1 latency (P < .001), higher amplitude (P < .001), and lower threshold (P < .001) were obtained in ocular vestibular evoked myogenic potentials test. In symptomatic ears of patients, with CHIRP stimulus, shorter P1 latency (P < .001), shorter N1 latency (P < .001), and lower threshold (P = .013 in cervical vestibular evoked myogenic potentials; P = .015 in ocular vestibular evoked myogenic potentials) were obtained in cervical vestibular evoked myogenic potentials and ocular vestibular evoked myogenic potentials tests. In asymptomatic ears of patients, with CHIRP stimulus, shorter P1 latency (P < .001) and shorter N1 latency (P < .001) were obtained in the cervical vestibular evoked myogenic potentials test; shorter P1 latency (P < .001), shorter N1 latency (P < .001), higher amplitude (P < .001), and lower threshold (P = .006) were obtained in ocular vestibular evoked myogenic potentials test. Conclusion: Our results suggest that due to higher response rates, shorter latencies, higher amplitude, and lower threshold values, the Narrow Band Level Specific CE-CHIRP stimulus is an effective stimulus for both cervical vestibular evoked myogenic potentials and ocular vestibular evoked myogenic potentials tests. abstract_id: PUBMED:33303285 Differences in bone conduction ocular vestibular evoked myogenic potentials to 500 Hz narrow band chirp stimulus and 500 Hz tone burst. Objective: This study aims to investigate the differences of N1 latency, P1 latency and N1P1 amplitude in response to bone conducted 500 Hz tone burst and narrowband CE chirp stimulus in ocular vestibular evoked myogenic potentials (oVEMPs). Methods: Forty-two healthy volunteers were included in this prospective study. Subjects with abnormal otological examinations and otological diseases were excluded. oVEMPs were randomly recorded in response to BC 500 Hz narrowband (NB) chirp stimulus and BC 500 Hz tone burst. The stimulus intensity was 50 dB nHL for both 500 Hz tone burst and 500 Hz NB CE chirp stimulus. P1 latency, N1 latency, and N1P1 amplitude were measured, and these measurements were compared between these two types of stimuli. Results: Both types of stimuli elicited oVEMP in all subjects.
N1 latency and P1 latency were significantly shorter (6.41 ms vs 10.84 ms; 10.64 ms vs 15.56 ms, respectively) for chirp stimulus (p < 0.05). N1P1 amplitude was significantly higher (11.64 vs 7.18 μV) for NB chirp stimulus (p < 0.05). Conclusion: It is reasonable to conclude that the NB CE chirp stimulus is effective to elicit robust BC oVEMP in healthy subjects. abstract_id: PUBMED:30957614 Effects of stimulus conditions on vestibular evoked myogenic potentials in healthy subjects. Background: Characteristics of vestibular evoked myogenic potentials (VEMPs) depend on stimulus conditions. Objective: To determine the optimal stimulus conditions for cervical and ocular VEMPs. Methods: Participants were 23 healthy subjects. We compared air-conducted cervical and ocular VEMPs elicited by various tone-burst conditions (frequencies 500-1,000 Hz, rise/fall times 1-2 ms, and plateau times 0-6 ms) with an intensity of 105 dB normal hearing level. Effects of simultaneous contralateral masking noise on VEMPs were also evaluated. Results: The largest cervical VEMP amplitudes were elicited by 500-750 Hz tone bursts with 2-6 ms plateau times, and the largest ocular VEMP amplitudes by 750 Hz tone bursts with 2-4 ms plateau times. Repeatability of the latency was better at 1 ms than at 2 ms rise/fall time in both VEMPs. In both VEMPs, masking noise reduced amplitude, and in ocular VEMP, amplitudes were significantly larger at the left ear stimulation than the right. Conclusion: Optimal tone-burst stimulation for both VEMPs seemed to be 500-750 Hz frequency and 1/2/1 ms rise/plateau/fall time without contralateral masking noise. Ocular VEMP amplitudes from left ear stimulation were originally larger than those from right ear stimulation. abstract_id: PUBMED:22183275 Vestibular evoked myogenic potentials using low frequency stimuli. Unlabelled: Vestibular evoked myogenic potentials are vestibulocervical reflexes resulting from sacculus stimulation with high-intensity sounds. Normative parameters are needed for young normal individuals using low frequency stimuli, which correspond to the most sensitive region of this sensory organ. Aim: To establish vestibular evoked myogenic potential standards for low frequency stimulation. Material And Method: Vestibular evoked myogenic potential was captured from 160 ears, in the ipsilateral sternocleidomastoid muscle, using 200 averaged tone-burst stimuli, at 250 Hz, with an intensity of 95 dB NAn. Case Study: Clinical observational cross-sectional. Results: Neither the Student's t-test nor the Mann-Whitney test showed a significant difference in latency or vestibular evoked myogenic potential amplitudes at p < 0.05. Irrespective of gender, we found p13 and n23 latencies and p13-n23 interpeak intervals of 13.84 ms (± 1.41), 23.81 ms (± 1.99) and 10.62 ms (± 6.56), respectively. Observed values for amplitude asymmetry between the ears were equal to 13.48% for females and 3.81% for males. Conclusion: Low frequency stimuli generate vestibular evoked myogenic potentials, with adequate morphology and amplitude, thereby enabling the establishment of standard values for normal individuals at this frequency. abstract_id: PUBMED:14708838 The effects of click and tone-burst stimulus parameters on the vestibular evoked myogenic potential (VEMP).
Vestibular evoked myogenic potentials (VEMP) are short latency electromyograms (EMG) evoked by high-level acoustic stimuli and recorded from surface electrodes over the tonically contracted sternocleidomastoid (SCM) muscle and are presumed to originate in the saccule. The present experiments examined the effects of click and tone-burst level and stimulus frequency on the latency, amplitude, and threshold of the VEMP in subjects with normal hearing sensitivity and no history of vestibular disease. VEMPs were recorded in all subjects using 100 dB nHL click stimuli. Most subjects had VEMPs present at 500, 750, and 1000 Hz, and few subjects had VEMPs present at 2000 Hz. The response amplitude of the VEMP increased with click and tone-burst level, whereas VEMP latency was not influenced by the stimulus level. The largest tone-burst-evoked VEMPs and lowest thresholds were obtained at 500 and 750 Hz. VEMP latency was independent of stimulus frequency when tone-burst duration was held constant. abstract_id: PUBMED:26478874 Vestibular evoked myogenic potentials in patients with rheumatoid arthritis. Background: Rheumatoid arthritis (RA) is an autoimmune systemic disease. Most common autoimmune diseases are multisystem disorders that may also present with otological manifestations, and autoimmune inner ear disease may be accompanied by vestibular dysfunction. This study aimed to compare vestibular function between RA patients and normal subjects using cervical vestibular evoked myogenic potentials (cVEMPs). Methods: In this cross-sectional study, 25 patients with RA (19 female and 6 male: mean (±SD) age, 40.00 (±7.92) years) and 20 healthy subjects (15 female and 5 male: mean (±SD) age, 35.35 (±10.48) years) underwent cVEMPs, using 500 Hz tone bursts at 95 dB nHL intensity level. Data were analyzed using the independent-samples t-test in SPSS software v. 16. Results: The mean peak latency of p13 was significantly higher in RA patients (p<0.001). The mean peak latency of n23 was significantly higher in patients in the left ear (p=0.03). Vestibular evoked myogenic potential (VEMP) responses were present in all (100%) of the participants. There were no significant differences in mean peak to peak amplitude and amplitude ratio between the two groups. Conclusion: According to the prolonged latency of VEMP responses in RA patients, lesions in the retrolabyrinthine region, especially in the vestibulospinal tract, are suspected. Answer: Yes, short tone burst-evoked myogenic potentials on the sternocleidomastoid muscle (SCM) are also of vestibular origin. This is supported by studies that have recorded evoked potential activity from the SCMs of normal volunteers and patients with vestibulocochlear disorders. The findings indicate that short tone bursts (STBs) as well as clicks can evoke myogenic potentials from the SCM, and these responses are present in patients with total or near-total hearing loss and intact vestibular function. Conversely, patients with preserved hearing but with absent or severely decreased vestibular function had absent or significantly decreased myogenic potentials evoked by STBs, suggesting a vestibular origin for these potentials (PUBMED:10367923).
Additional studies have compared vestibular evoked myogenic potential (VEMP) responses elicited by click stimuli and short duration tone burst stimuli, finding that the latency of the VEMP responses was longer for tone burst stimuli compared with click stimuli, but the amplitude of the p13-n23 waveform was greater for tone burst stimuli than click stimuli. This suggests that while click stimuli may be preferable for clinical assessment due to less variability, low frequency tone burst stimuli may be preferable when assessing the presence or absence of VEMP responses, further supporting the vestibular origin of these potentials (PUBMED:20955634). Moreover, characteristics of tone burst-evoked myogenic potentials in the SCM have been studied, showing that these responses are similar to click-evoked responses but require lower absolute stimulus intensities. The presence of tone burst-evoked and click-evoked responses after stimulation of the affected ears of subjects with profound sensorineural hearing loss, and the absence of these responses in subjects who had previously undergone vestibular neurectomy, confirm their vestibular dependence (PUBMED:11698798).
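The inter-ear asymmetry ratio mentioned in the chirp comparison above (PUBMED:26223715) is a simple percentage computed from the p13-n23 peak-to-peak amplitudes of the two sides. A minimal Python sketch of that calculation is given below; the amplitude values are hypothetical, chosen only to illustrate the arithmetic, and the 30-40% abnormality cut-off in the comment is a commonly quoted laboratory convention rather than a figure from the cited studies.

def vemp_asymmetry_ratio(amp_left_uv, amp_right_uv):
    """Inter-ear VEMP asymmetry ratio (%) from p13-n23 amplitudes.

    Standard form: 100 * |L - R| / (L + R). Ratios above roughly
    30-40% are often treated as abnormal, but cut-offs are lab-specific.
    """
    return 100.0 * abs(amp_left_uv - amp_right_uv) / (amp_left_uv + amp_right_uv)

# Hypothetical amplitudes in microvolts, for illustration only.
left, right = 38.2, 31.5
print(f"Asymmetry ratio: {vemp_asymmetry_ratio(left, right):.1f}%")  # -> 9.6%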
Instruction: Does Intraoral Miniplate Fixation Have Good Postoperative Stability After Sagittal Splitting Ramus Osteotomy? Abstracts: abstract_id: PUBMED:26117377 Does Intraoral Miniplate Fixation Have Good Postoperative Stability After Sagittal Splitting Ramus Osteotomy? Comparison With Intraoral Bicortical Screw Fixation. Purpose: Bicortical screw fixation systems and miniplate with monocortical screw fixation systems have been reported mainly in bilateral sagittal split ramus osteotomy (BSSO). This study compared postoperative stability between these 2 fixation systems by an intraoral approach. Materials And Methods: This was a retrospective cohort study. The study sample was composed of patients treated by BSSO at the authors' institute from January 2006 through December 2012. All cases had facial symmetry and were treated by setback surgery. The predictor variable was treatment group (intraoral screw fixation [SG] vs intraoral miniplate fixation [MG]), and the primary outcome variable was stability defined as the change in the position of point B. Other outcome variables were stability defined as the change in the position of the menton, blood loss, incidence of postoperative temporomandibular joint disorder, and nerve injury. Descriptive and bivariate statistics were computed and the P value was set at .05. Results: Seventy-five patients (35 men and 40 women; mean age, 25.8 yr) were divided into 2 groups (39 SG cases and 36 MG cases). Postoperative changes at point B and the menton in the 2 fixation groups were not statistically different. Lingual nerve injury occurred only in SG cases. Moreover, total blood loss was greater in SG cases. Conclusion: An intraoral miniplate with monocortical screw fixation system is recommended over intraoral bicortical screw fixation for bone segments in setback BSSO in patients without facial asymmetry. abstract_id: PUBMED:34768470 Skeletal Stability after Mandibular Setback via Sagittal Split Ramus Osteotomy Verse Intraoral Vertical Ramus Osteotomy: A Systematic Review. Purpose: The purpose of the present study was to review the literature regarding postoperative skeletal stability in the treatment of mandibular prognathism after isolated sagittal split ramus osteotomy (SSRO) or intraoral vertical ramus osteotomy (IVRO). Materials And Methods: Articles published from 1980 to 2020 were selected from English-language databases (PubMed, Web of Science and Cochrane Library). Articles meeting the search strategy were evaluated against the eligibility criteria, notably a minimum of 30 patients. Results: Based on the eligibility criteria, 9 articles (5 on SSRO and 4 on IVRO) were examined. The amounts of mandibular setback (B point, Pog, and Me) ranged from 5.53-9.07 mm in SSRO and 6.7-12.4 mm in IVRO, respectively. At 1-year follow-up, SSRO showed relapse (anterior displacement: 0.2 to 2.26 mm). By contrast, IVRO revealed posterior drift (posterior displacement: 0.1 to 1.2 mm). At 2-year follow-up, both SSRO and IVRO presented relapse, ranging from 0.9 to 1.63 mm and 1 to 1.3 mm, respectively. Conclusion: At 1-year follow-up, SSRO presented relapse (anterior displacement) and IVRO posterior drift (posterior displacement). At 2-year follow-up, both SSRO and IVRO showed similar relapse distances. abstract_id: PUBMED:32760640 A Comparative Review of Mandibular Orthognathic Surgeries with a Focus on Intraoral Vertico-sagittal Ramus Osteotomy.
Severe dentofacial deformities require both orthodontic and surgical management. Modern mandibular orthognathic surgery commonly uses sagittal split ramus osteotomy (SSRO) and intraoral vertical ramus osteotomy (IVRO) methods to treat patients. However, complications like neurosensory disturbances and temporomandibular joint disorders are common following both procedures. In 1992, Choung introduced the intraoral vertico-sagittal ramus osteotomy (IVSRO), which led to a decrease in postoperative complications. The 'straight' IVSRO or Choung's type II osteotomy has a 'condylotomy' effect that reduces iatrogenic temporomandibular joint symptoms and treats preoperative temporomandibular joint symptoms. This osteotomy type is especially applicable for prognathism with excessive flaring of the ramus and with temporomandibular joint dysfunction. The 'L-shaped' IVSRO or Choung's type I osteotomy is indicated for patients with condylar hyperplasia and high condylar process fractures. abstract_id: PUBMED:32068168 Evaluation of the effect of mandibular length and height on the sagittal split ramus osteotomy rigid internal fixation techniques: A finite element analysis. Purpose: A major concern after mandibular advancement with sagittal split ramus osteotomy surgery is postoperative stability and relapse. Currently, there is no consensus on the ideal fixation technique, or how prognosis is affected by mandibular height and length. The aim of the present study was to assess stress distribution on the fixation units and the bone after sagittal split ramus osteotomy and determine the contributions of different mandibular body heights and lengths. Materials And Methods: Sagittal split ramus osteotomy and mandibular advancement were simulated in different height/length models prior to fixation using a miniplate, hybrid, or inverted L system using finite element analysis. The greatest and least amounts of stress were generated by the miniplate and inverted L systems, respectively. Results: The highest tension and compression in the bone were measured in the miniplate system. While the inverted L system generated less stress in the fixation units than the hybrid system, the hybrid system caused less stress in the bone and lower displacement values compared to other systems. An increase in length and a decrease in height both promoted stress; however, the effect was greatest for the former. Conclusion: Based on our results, when sagittal split ramus osteotomy is planned for a rather long or thin mandible, using the hybrid system for fixation is recommended. abstract_id: PUBMED:29843949 Comparison of osseous healing after sagittal split ramus osteotomy and intraoral vertical ramus osteotomy. The sagittal split ramus osteotomy (SSRO) is generally associated with greater postoperative stability than the intraoral vertical ramus osteotomy (IVRO); however, it entails a risk of inferior alveolar nerve damage. In contrast, IVRO has the disadvantages of slow postoperative osseous healing and projection of the antegonial notch, but inferior alveolar nerve damage is believed to be less likely. The purposes of this study were to compare the osseous healing processes associated with SSRO and IVRO and to investigate changes in mandibular width after IVRO in 29 patients undergoing mandibular setback. On computed tomography images, osseous healing was similar in patients undergoing SSRO and IVRO at 1 year after surgery.
Projection of the antegonial notch occurred after IVRO, but returned to the preoperative state within 1 year. The results of the study indicate that IVRO is equivalent to SSRO with regard to both bone healing and morphological recovery of the mandible. abstract_id: PUBMED:25247146 Comparative analysis of the amount of postoperative drainage after intraoral vertical ramus osteotomy and sagittal split ramus osteotomy. Objectives: The purpose of this retrospective study was to compare the amount of postoperative drainage via a closed suction drainage system after intraoral vertical ramus osteotomy (IVRO) and sagittal split ramus osteotomy (SSRO). Materials And Methods: We planned a retrospective cohort study of 40 patients selected from a larger group who underwent orthognathic surgery from 2007 to 2013. Mean age (range) was 23.95 (16 to 35) years. Patients who underwent bilateral IVRO or SSRO were categorized into group I or group II, respectively, and each group consisted of 20 patients. A closed suction drainage system was inserted in the mandibular osteotomy sites to decrease swelling and dead space, and records of drainage amount were collected. The data were compared and analyzed with the independent t-test. Results: The closed suction drainage system was removed at 32 hours postoperatively, and the amount of drainage was recorded every 8 hours. In group I, the mean amount of drainage was 79.42 mL in total, with 31.20 mL, 19.90 mL, 13.90 mL, 9.47 mL, and 4.95 mL measured at 0, 8, 16, 24, and 32 hours postoperatively, respectively. In group II, the mean total amount of drainage was 90.11 mL, with 30.25 mL, 25.75 mL, 19.70 mL, 8.50 mL, and 5.91 mL measured at 0, 8, 16, 24, and 32 hours postoperatively, respectively. The total amount of drainage from group I was less than that from group II, but there was no statistically significant difference between the two groups (P=0.338). There was a significant difference in drainage between group I and group II only at 16 hours postoperatively (P=0.029). Conclusion: IVRO and SSRO have different osteotomy designs and different extents of medullary exposure; however, our results reveal that there is no remarkable difference in postoperative drainage of blood and exudate. abstract_id: PUBMED:35303119 Three-dimensional analysis of mandible ramus morphology and transverse stability after intraoral vertical ramus osteotomy. Objectives: The purpose of this study was to investigate short- and long-term postoperative changes in both morphology and transverse stability of the mandibular ramus after intraoral vertical ramus osteotomy (IVRO) in patients with jaw deformity, using three-dimensional (3D) orthognathic surgery planning software for measurement of distances and angles. Study Design: This retrospective study included consecutive patients with skeletal Class III malocclusion who had undergone intraoral vertical ramus osteotomy and had computed tomography images taken before (T0), immediately after (T1), and 1 year after (T2) surgery. Reference points, reference lines and evaluation items were designated on the reconstructed 3D surface models to measure distances, angles and volume. The average values at T0, T1, T2 and time-dependent changes in variables were obtained. Results: After surgery, the condylar length, ramal height, mandibular body length and mandibular ramus volume were significantly decreased (P < 0.01), while clinically insignificant change was observed from T1 to T2.
The angular length was increased immediately after surgery (P < 0.05), but it was decreased 1 year after surgery (P < 0.05). Lateral ramal inclination showed a significant increase after surgery (P < 0.05) and was maintained at T2. Conclusion: Changes in the morphology of the mandibular ramus caused by IVRO do not have an obvious negative effect on facial appearance. Furthermore, although the position and angle of the mandibular ramus changed after IVRO, good transverse stability was observed postoperatively. Therefore, the IVRO technique can be safely used without compromising esthetic results. abstract_id: PUBMED:28663018 Intraoral vertico-sagittal ramus osteotomy: modification of the L-shaped osteotomy. The sagittal split ramus osteotomy and intraoral vertical ramus osteotomy carry the potential risk of postoperative nerve paralysis, bleeding, and fracture and dislocation of the condyle. In 1992, Choung first described the intraoral vertico-sagittal ramus osteotomy for the purpose of avoiding postoperative dislocation of the condyle. However, there is still potential for damaging the inferior alveolar nerve and maxillary artery with this technique. The authors have developed a modified technique to minimize these risks. An evaluation of surgical experience and patient outcomes with the use of this technique is presented herein. One hundred twenty-two sides in 97 Japanese patients diagnosed with a jaw deformity were analyzed. This technique includes a horizontal osteotomy that is performed at a higher position than in the original Choung procedure. Intraoperatively, there was no unexpected bleeding from the operative site. Proximal segment dislocation from the glenoid fossa was observed on one side (0.82%). Non-union of the osteotomy was not observed in any patient. Intraoperative fracture of the coronoid process occurred in 2.46%, but none necessitated treatment of the fracture. Nerve dysfunction was found in 2.46% at the 12-month postoperative follow-up. The modified technique presented herein was developed to reduce postoperative nerve dysfunction and intraoperative hemorrhage. abstract_id: PUBMED:30320697 Safety and Stability of Postponed Maxillomandibular Fixation After Intraoral Vertical Ramus Osteotomy. The purpose of this study was to evaluate the postoperative safety and long-term stability of bimaxillary orthognathic patients with postponed maxillomandibular fixation (MMF) after intraoral vertical ramus osteotomy. A total of 61 patients (21 male and 40 female patients; average age [SD], 21.7 [4.7]) were enrolled. All patients underwent maxillary LeFort I osteotomy and bilateral intraoral vertical ramus osteotomy for mandibular prognathism. During the hospital stay, patients were observed for postoperative airway compromise and underwent MMF with wire on the second postoperative day. Stability was evaluated by measuring the position at each period: preoperative (T0), 2-day postoperative (T1), and 1-year postoperative (T2). Postoperative dyspnea and respiratory distress were absent in all patients. The mean number of refixations in physiotherapy was 0.62 (0.86) and the mean duration of physiotherapy was 11.6 (5.5) days. The mean amount of mandibular setback was 12.56 (5.76) mm and menton movement 0.98 (2.36) mm superiorly (T1). The mean mandibular relapse at Pog was 0.87 (1.96) mm anteriorly.
Menton showed 1.11 (1.41) mm of superior movement at 1 year postoperatively (T2). Despite its many advantages, intraoral vertical ramus osteotomy requires a period of MMF, which can lead to early discomfort and airway-related emergencies. In this study, the physiotherapy procedure and postoperative long-term stability in the postponed MMF group were not different from those of an immediate MMF group studied previously. It therefore constitutes a viable option for oral breathers and other compromised patients. abstract_id: PUBMED:33168369 Surgical morbidities of sagittal split ramus osteotomy versus intraoral vertical ramus osteotomy for the correction of mandibular prognathism: a randomized clinical trial. The sagittal split ramus osteotomy (SSRO) and intraoral vertical ramus osteotomy (IVRO) are two common orthognathic procedures for the treatment of mandibular prognathism. This randomized clinical trial compared the surgical morbidities between SSRO and IVRO for patients with mandibular prognathism over the first 2 years postoperative. Ninety-eight patients (40 male, 58 female) with a mean age of 24.4±3.5 years underwent bilateral SSRO (98 sides) or IVRO (98 sides) as part or all of their orthognathic surgery. IVRO presented less short-term and long-term surgical morbidity in general. The SSRO group had a greater incidence of inferior alveolar nerve deficit at all follow-up time points (P < 0.01). There was more TMJ pain at 6 weeks (P = 0.047) and 3 months (P = 0.001) postoperative in the SSRO group. The SSRO group also presented more minor complications, which were related to titanium plate exposure and infection. There were no major complications for either technique in this study. Despite the need for intermaxillary fixation, IVRO appears to be associated with less surgical morbidity than SSRO when performed as a mandibular setback procedure to treat mandibular prognathism. Answer: Yes, intraoral miniplate fixation has good postoperative stability after sagittal split ramus osteotomy (SSRO). A retrospective cohort study compared postoperative stability between intraoral miniplate fixation and intraoral bicortical screw fixation systems after SSRO. The results showed that postoperative changes at point B and the menton in the two fixation groups were not statistically different, indicating that both systems provided similar stability. However, the study recommended the intraoral miniplate with monocortical screw fixation system over intraoral bicortical screw fixation for bone segments in setback BSSO in patients without facial asymmetry, as lingual nerve injury occurred only in screw fixation cases and total blood loss was greater in screw fixation cases (PUBMED:26117377).
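Several of the stability studies above quantify relapse as the change in position of a cephalometric landmark (point B, pogonion, or menton) between postoperative time points. The Python sketch below shows that measurement; the coordinates are hypothetical, and the two-dimensional lateral-cephalogram convention (x increasing anteriorly, y vertical) is an assumption for illustration, since the abstracts do not state their coordinate systems.

import math

def landmark_change(p_t1, p_t2):
    """Displacement of a landmark between two time points.

    Returns (dx, dy, Euclidean distance). With x increasing anteriorly,
    dx > 0 reads as anterior movement (relapse after setback) and
    dx < 0 as posterior drift.
    """
    dx = p_t2[0] - p_t1[0]
    dy = p_t2[1] - p_t1[1]
    return dx, dy, math.hypot(dx, dy)

# Hypothetical point B coordinates in mm at T1 (immediately postoperative)
# and T2 (one year), for illustration only.
b_t1, b_t2 = (62.0, 88.5), (63.1, 88.2)
dx, dy, total = landmark_change(b_t1, b_t2)
print(f"dx={dx:+.1f} mm, dy={dy:+.1f} mm, total={total:.1f} mm")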
Instruction: Is there an association between histopathological changes in the lower ureter and renal functions? Abstracts: abstract_id: PUBMED:25790557 Is there an association between histopathological changes in the lower ureter and renal functions? Evaluation of patients who underwent ureteroneocystostomy for ureterovesical obstruction or vesicoureteral reflux. Background/aim: We aimed to assess the relationship between the histological changes of the ureterovesical junction (UVJ) and renal functions. Therefore, we evaluated histological changes of the lower ureter and renal scintigraphy findings of patients for whom ureteroneocystostomy was performed because of vesicoureteral reflux (VUR) or ureterovesical junction obstruction (UVO). Materials And Methods: UVJ specimens were obtained from 18 children. We investigated the changes in neuronal innervation, muscular morphology, extracellular matrix, and apoptosis rate with renal scintigraphy findings. Results: Seven UVO and 11 VUR patients were treated. Alpha-actin expression in smooth muscle cells was found to be lower (P < 0.001) while neuronal defect was more prominent in the UVO group (P = 0.002). The renal functions decreased as the smooth muscle structural defect increased in the VUR group (P < 0.05). Conclusion: Neuronal tissue and muscle tissue were more defective in the UVO group. The decrease in neuronal fibers and muscle cells explains the pathogenesis of the obstructive group, but no difference was observed regarding the accumulation of collagen type 3 and cellular apoptosis between the VUR and UVO groups. In the VUR group, renal functions decreased while the smooth muscle defect at the distal end of the ureter increased. abstract_id: PUBMED:32963383 Late metastasis of right breast cancer to renal pelvis and right ureter. A case report. Background: Breast metastases to the ureter are extremely rare. Most are asymptomatic. Case Report: A 67-year-old woman with a 5-year history of right breast cancer, stage IA, luminal A. She presented with recurrent urinary tract infections, total macroscopic hematuria and pain in the right renal fossa. Computed tomography showed dilation of the upper calyx and a filling defect in the renal pelvis. A right laparoscopic radical nephroureterectomy with a bladder cuff was performed; the histopathological report was metastasis of infiltrating carcinoma without a specific pattern, involving the renal pelvis and ureter. Conclusions: Late metastases of breast cancer to the ureter and renal pelvis are rare. abstract_id: PUBMED:33776704 A Case of Renal Pelvic Cancer with a Complete Duplication of the Renal Pelvis and Ureter. This paper describes a case of renal pelvic cancer with a complete duplication of the renal pelvis and ureter, which is exceedingly rare. A 76-year-old man was referred to the hospital because of gross hematuria for 2 years.
A tumor was detected in the upper right kidney using enhanced computed tomography and magnetic resonance imaging scan, and the downstream ureter was suspected to open into the prostate. Retrograde ureteroscopy via the ectopic ureter orifice showed a hemorrhagic papillary tumor consistent with the imaging findings. Laparoscopic radical nephroureterectomy was performed and the prostate was preserved because the tumor was only in the renal pelvis. Histopathological examination showed the tumor to be a high-grade urothelial carcinoma. There was no sign of recurrence at one and a half years after the operation. Ureteroscopy was effective in detecting an upper urinary tract tumor, even via an ectopic ureter orifice, and preserving the prostate was possible. abstract_id: PUBMED:31666787 A Cadaveric Report on a Giant Ureteric Stone Led Right Hydro Ureter and Severe Hydronephrosis. Background: The ureter shows natural constrictions in its course, and these are the potential sites for the impaction of a renal calculus. Giant ureteral stones are associated with insidious growth and late presentation, often leading to renal failure. Case Presentation: In the present case, we observed a huge ureteric stone obstructing the right ureterovesical junction in a 58-year-old male cadaver. We also found hydroureter distal to the impaction of the calculus, renal damage and severe hydronephrosis on the right side. Histopathological analysis showed conditions of arterio-nephro-sclerosis and an eroded ureter secondary to the calculus. Ureteric stone obstruction may result in hydroureter, hydronephrosis and progressive renal damage leading to irreversible loss of renal function. The present case provides valuable information regarding the gross and histopathological alterations in ureteric calculi. Conclusion: It further enables clinicians to be armed with the knowledge of preventive approaches to educate patients with previous calculi, or those who may develop them in the future. abstract_id: PUBMED:32871769 Evaluation of Renal Function in Obstructed Ureter Model Using 99mTc-DMSA. Background/aim: Urinary obstruction is a condition of impaired urinary drainage, which may result in progressive renal deterioration. This study applied 99mTc-labeled dimercaptosuccinic acid (99mTc-DMSA) renal scintigraphy to a rabbit model of right ureter obstruction and evaluated its utility in studying obstructive renal diseases. Materials And Methods: Complete unilateral ureter obstruction in rabbits was generated by complete ligation of the right ureter. Renal function was investigated during a 4-week post-obstruction period by obtaining planar images of 99mTc-DMSA activity following ear vein injection. Renal blood perfusion was evaluated by non-invasive scintigraphy in conjunction with parallel histological and hematological examinations. Results: Renal perfusion was remarkably and rapidly reduced in the ureter-obstructed kidneys. During the experimental period, the size of the left kidney appeared normal in the scintigraphic images, but the ureter-obstructed right kidney progressively became larger. Histopathological examination showed flattening and atrophy of tubules, enlargement of interstitial areas, accumulation of extracellular matrices and infiltration of inflammatory cells in the obstructed kidney. Conclusion: 99mTc-DMSA scintigraphy is a sensitive, non-invasive method to assess renal function in unilateral kidney diseases.
Congenital anomalies of the kidney and the urinary tract such as renal agenesis and ectopic ureter have complex development. These anomalies have variable presentations and associations. In this report, we highlight the case of a young man with congenital renal agenesis presenting with a urinary tract infection. Abdominal and pelvic computed tomography imaging revealed the rare association of renal agenesis with contralateral ectopic ureter and subsequent hydroureteronephrosis. A urinary tract infection can be the presenting complication of such an association, and long follow-up is needed to anticipate the management. abstract_id: PUBMED:3705352 Surgical treatment of compression of the parapelvic portion of the ureter by the lower polar renal vessels in children. Intraoperative observations of 73 patients have shown that lower polar renal vessels are often associated with other abnormalities (congenital fibrosis and segmentary hypoplasia of the parapelvic portion of the ureter resulting in the development of hydronephrosis, chronic urethritis, atrophy of the muscle layer and secondary stenosis of the ureter). The operation of choice is resection of the pyeloureteral segment and part of the pelvis followed by antevasal anastomosis. abstract_id: PUBMED:30178459 Right circumcaval ureter and double right renal vein in the Brazilian shorthair cat (Felis catus): two case reports. Variations of the renal veins are well described in the literature, although variations concerning the ureter are considered a rare finding in cats. The circumcaval ureter is one of the rarest variations of the ureter and is characterised by a loop of the ureter posterior to the caudal vena cava. This variant is also known as preureteral vena cava and retrocaval ureter. It is thought to be caused by a deviation during embryonic development of the aforementioned vein. Due to its rarity, there are scarce reports of the circumcaval ureter in cats, and its association with two renal veins makes it rarer still. These variations should be preoperatively identified in order to avoid complications in kidney transplants, ureteral surgeries and cystoscopies, for instance. The present work aims to report two cases of a circumcaval ureter with two renal veins in two different Brazilian shorthair cats (Felis catus). abstract_id: PUBMED:25931845 Biochemical and histopathological changes of intra-abdominal hypertension on the kidneys: Experimental study in rats. Objective: This study aimed to evaluate the effects of experimentally induced intra-abdominal hypertension on renal functions, using a combination of biochemical and histopathological assessments. Material And Methods: Thirty male Wistar albino rats were used in this experimental study. Rats were divided into four groups. Group 1 (control group, n=6) only received anesthesia. After the induction of anesthesia, a 20 G catheter was introduced intraperitoneally to Group 2 (sham group, n=8), Group 3 (n=8) and Group 4 (n=8). The intra-abdominal pressure was not increased in Group 2. We applied 20 mmHg intra-peritoneal pressure to Group 3 and 30 mmHg to Group 4 for 3 hours. After withdrawing 3 mL of intracardiac blood from all groups, the kidneys were removed for histopathological examination. Serum urea and creatinine levels were measured in all groups. Results: Biochemical examination showed that blood urea and creatinine levels were statistically different among all groups (p<0.05).
Serum creatinine levels in Group 3 and serum urea and creatinine levels in Group 4 were significantly increased. In the histopathological examination, the kidneys in Group 1 and Group 2 were classified as normal. In Group 3, areas with congestion were detected in the glomeruli and interstitial regions. In addition to these findings seen in Group 3, dilatation of the pelvi-caliceal structures and proximal ureters was noticed in Group 4. Conclusion: Serum urea and creatinine levels are elevated when intra-abdominal pressure is increased, owing to kidney damage. Foci of hemorrhage in the interstitial area and dilatations in the proximal ureter, renal pelvis, and lymphatics were the pathologic findings seen in the kidneys under such circumstances. abstract_id: PUBMED:7733696 Renal agenesis, cranial blind ureter and ureterocele. Report of a case. Objectives: We report a case of associated malformations in a young male with recurrent epididymitis. Methods: In our study we used the following techniques: ultrasound, intravenous urography, retrograde pyelography, computed tomography and cystoscopy. Results: We observed three coexisting congenital malformations: renal agenesis, cranial blind-ending ureter and ureterocele. Conclusions: Different urologic malformations are usually found in association. In our view only symptomatic ureteroceles require treatment. Answer: Yes, there is an association between histopathological changes in the lower ureter and renal functions. A study aimed to assess this relationship by evaluating histological changes of the lower ureter and renal scintigraphy findings in patients who underwent ureteroneocystostomy due to vesicoureteral reflux (VUR) or ureterovesical junction obstruction (UVO). The results indicated that alpha-actin expression in smooth muscle cells was lower and neuronal defect was more prominent in the UVO group. Furthermore, in the VUR group, renal functions decreased as the smooth muscle structural defect increased. This suggests that defects in neuronal tissue and muscle tissue are more pronounced in UVO, and the decrease in neuronal fibers and muscle cells may explain the pathogenesis of the obstructive group. However, no significant difference was observed regarding the accumulation of collagen type 3 and cellular apoptosis between the VUR and UVO groups (PUBMED:25790557).
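The rat study cited above (PUBMED:25931845) compares serum urea and creatinine between groups with standard two-sample tests. A minimal sketch of such a comparison is shown below; the creatinine values are hypothetical, chosen only to mimic the reported direction of change (higher values under 30 mmHg intra-abdominal pressure), and are not the study's data.

from scipy import stats

# Hypothetical serum creatinine (mg/dL): control group vs 30 mmHg group.
control = [0.42, 0.45, 0.39, 0.44, 0.41, 0.43]
high_iap = [0.61, 0.58, 0.66, 0.63, 0.59, 0.64, 0.60, 0.62]

# Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(control, high_iap, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")  # p falls far below 0.05 here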
Instruction: Do histopathology reports of primary cutaneous melanoma contain enough essential information? Abstracts: abstract_id: PUBMED:8675728 Do histopathology reports of primary cutaneous melanoma contain enough essential information? Aims: To audit the content of primary cutaneous malignant melanoma histopathology reports with special reference to Breslow thickness and lateral excision margins. Methods: The Trent Regional Cancer Registry was asked to provide details of primary cutaneous malignant melanomas for the most recent year available (1990). Histopathology departments were then requested to provide copies of the relevant reports, which were then analysed. Results: In total, 178 reports were obtained from 16 departments. Breslow thickness was present in 87.1% (155/178) and a comment had been made on lateral excision in 85.4% (152/178). A specific clearance measurement was recorded in 5.6% (10/178), and in 9.6% (17/178) tumour was stated to extend to the margin. In 4.5% (8/178) neither thickness nor a comment on excision was recorded. Clinical advice on excision was offered in 12.4% (22/178). A macroscopic description was absent in 6.7% (12/178). Conclusions: Deficiencies were identified in the quality of malignant melanoma histopathology reports in Trent Region. There is no reason to believe that significant improvements have occurred since 1990 or that other regions are performing differently. A national standard for reporting primary cutaneous malignant melanoma is recommended. As a minimum, all reports should include Breslow thickness and a specific measurement of lateral clearance. This will facilitate prognostic evaluation, clinical management and audit. This standard would not exclude the reporting of other information, depending on local policy. As with all standards, continual review must be undertaken and consideration given as to whether other more recent parameters, such as growth phase, also warrant future inclusion. abstract_id: PUBMED:32468449 Distant metastasis from oral cavity-correlation between histopathology results and primary site. Objectives: Oral cancer is the eighth most common type of cancer worldwide and a significant contributor to the global burden caused by this disease. The principal parameters considered to influence prognosis, and thus treatment selection, are size and location of the primary tumor, as well as assessment of the presence and extent of lymph node and distant metastasis (DM). However, no known report regarding the relationship between the primary site and DM has been presented. For effective treatment selection and good prognosis, the correlation of DM with anatomic site and histopathology results of the primary malignancy is important. In the present study, we performed a systematic review of published reports in an effort to determine the relationship between the anatomic site of various types of oral cavity cancer and DM. Methods: A systematic review of articles published until the end of 2018 was performed using PubMed/MEDLINE. Results: A total of 150 studies were selected for this review. The percentage of all cases reported with DM was 6.3%, ranging from 0.6% to 33.1% in the individual studies. The incidence rate for tongue primaries was 9.3%. A frequent DM site was the lungs, with adenoid cystic carcinoma being the most commonly involved histopathological type. Malignant melanoma was the most frequent (43.4%) of all histopathology findings, whereas there were no cases of acinic cell carcinoma or cystadenocarcinoma.
Conclusions: We found that both the occurrence and the incidence rate of DM from the primary site depended on histopathological factors. abstract_id: PUBMED:11345839 Lack of relevant information for tumor staging in pathology reports of primary cutaneous melanoma. For the T classification of primary cutaneous melanoma, the current American Joint Committee on Cancer (AJCC) staging system relies on tumor thickness and level of invasion. A new T classification has been proposed based on thickness and ulceration. The slides and reports of 135 departmental pathology consultations of patients referred to a major cancer center with a diagnosis of primary cutaneous invasive malignant melanoma were examined. Whether the outside pathology reports contained information on tumor thickness, level of invasion, and ulceration was recorded. Dermatopathologists had issued 76.3% of the reports and general surgical pathologists, 24.3%. Information provided was as follows: tumor thickness, 97.8%; Clark level, 71.9%; and presence or absence of ulceration, 28.1%. Of the 97 melanomas with no comment on ulceration, 17 were indeed ulcerated. Thus, the lack of a comment on ulceration cannot be equated with the absence of ulceration. The present study documents that many pathology reports on melanomas lack sufficient information for AJCC staging. Therefore, review of outside pathology material is necessary not only to confirm or revise the tumor diagnosis but also to provide clinicians with histologic parameters required for AJCC staging. abstract_id: PUBMED:36921726 Update on nail unit histopathology. Histopathologic evaluation of the nail unit is an essential component in the diagnosis of nail unit disorders. This review highlights recent updates in nail unit histopathology and discusses literature covering a wide range of nail disorders including melanoma/melanocytic lesions, squamous cell carcinoma, onychomatricoma, onychopapilloma, onychomycosis, lichen planus, and other inflammatory conditions. Herein we also discuss recent literature on nail clipping histopathology, a useful and noninvasive diagnostic tool that continues to grow in popularity and importance to both dermatologists and dermatopathologists. abstract_id: PUBMED:37763207 The Implications of a Dermatopathologist's Report on Melanoma Diagnosis and Treatment. An accurate and comprehensive histopathology report is essential for cutaneous melanoma management, providing critical information for accurate staging and risk estimation and determining the optimal surgical approach. In many institutions, a review of melanoma biopsy specimens by expert dermatopathologists is considered a necessary step. This study examined these reviews to determine the range of primary histopathology Breslow scores at which a histopathology review would be most beneficial. Histopathology reports of patients referred to our institute between January 2011 and September 2019 were compared with our in-house review conducted by an expert dermatopathologist. The review focused on assessing fundamental histologic and clinical prognostic features. A total of 177 specimens underwent histopathology review. Significant changes in the Breslow index were identified in 103 cases (58.2%). Notably, in many of these cases (73.2%), the revised Breslow was higher than the initially reported score. Consequently, the T-stage was modified in 51 lesions (28.8%). Substantial discordance rates were observed in Tis (57%), T1b (59%), T3a (67%) and T4a (50%) classifications.
The revised histopathology reports resulted in alterations to the surgical plan in 15.3% of the cases. These findings emphasize the importance of having all routine pathologies of pigmented lesions referred to a dedicated cancer center and reviewed by an experienced dermatopathologist. This recommendation is particularly crucial in instances where the histopathology review can potentially alter the diagnosis and treatment plan, such as in melanoma in situ and thinner melanomas measuring 0.6-2.2 mm in thickness. Our study highlights the significant impact of histopathology reviews in cutaneous melanoma cases. The observed changes in Breslow scores and subsequent modifications in T-stage classification underline the need for thorough evaluation by an expert dermatopathologist, especially in cases of melanoma in situ and thin melanomas. Incorporating such reviews into routine practice within dedicated cancer centers can improve diagnostic accuracy and guide appropriate treatment decisions, ultimately leading to better patient outcomes. abstract_id: PUBMED:25735220 Population-based method for investigating adherence to international recommendations for pathology reporting of primary cutaneous melanoma: Results of a EUROCARE-5 high resolution study. Aim: Our study aim was to investigate the degree of adherence to international recommendations for cutaneous melanoma pathology reports at the population level by a EUROCARE high resolution study. Methods: The availability of nine characteristics - predominant cell type, tumour-infiltrating lymphocytes, mitotic index, histological subtype, growth phase, Clark level, Breslow thickness, ulceration, and sentinel-node biopsy - was examined on pathology reports of a random sample of 636 cases diagnosed in 2003-2005 in seven Italian cancer registries: Biella, Ferrara, Firenze, Latina, Ragusa, Reggio Emilia, Romagna. The odds of having (versus not having) information for all four core characteristics (the last four listed above) were estimated. Results: Sentinel node biopsy was available most often, followed by Clark level, Breslow thickness, histological subtype and ulceration. Information on all nine characteristics was more often available in Biella and Ferrara (northern Italy) than elsewhere. Information on all four core items was available for 78% of cases. Odds of four-core-item availability were higher (than the mean) in Biella and lower in Latina (centre) and Ragusa (south). Conclusions: The availability of information important for staging and management was good overall on pathology reports, but varied with geography. It is likely to be improved by wider dissemination of reporting guidelines and adoption of a standardised synoptic reporting system. abstract_id: PUBMED:18832807 Completeness of histopathology reporting of melanoma in a high-incidence geographical region. Background: Appropriate histopathology reporting helps to ensure effective therapy and accurate prognostication. Objective: To examine compliance with clinical practice guidelines for histopathology reports of melanomas. Methods: A sample of melanoma histopathology reports in Queensland was audited for inclusion of recommended information. A measure of documentation quality was constructed, and multivariate analysis was used to determine factors affecting the quality of reporting practices.
Results: Documentation of the most important features of melanoma was high: clear diagnosis (99.8%; 95% CI 98.6-100), thickness (99.8%; 95% CI 98.6-100), comment on adequacy of excision (87.9%; 95% CI 84.9-91.0) and measurement of margins (91.9%; 95% CI 88.8-91.4). Overall reporting of ulceration and regression was of lesser completeness (83.0 and 77.8%, respectively) and these features were more likely to be reported by high-volume laboratories (p < 0.001 and p = 0.037, respectively). This trend was not apparent for other features. Fewer than 50% of reports documented mitotic rate per square millimetre, predominant cell type, microsatellites, growth phase and desmoplasia. Conclusion: Awareness of current reporting practices and identification of areas in which insufficiencies exist enable the revision of systems and potential improvements to the transfer of information to treating clinicians. abstract_id: PUBMED:25071058 Skin cancer excision performance in Scottish primary and secondary care: a retrospective analysis. Background: In contrast with most published evidence, studies from north-east Scotland suggest that GPs may be as good at treating skin cancers in primary care as secondary care specialists. Aim: To compare the quality of skin cancer excisions of GPs and secondary care skin specialists in east and south-east Scotland. Design And Setting: A retrospective analysis of reports from GPs in Lothian, Fife, and Tayside regions. Method: Skin cancer histopathology reports from GPs in Lothian, Fife, and Tayside regions in 2010 were compared with reports from skin specialists in November 2010. The histopathology reports were rated for completeness and adequacy of excision. Results: A total of 944 histopathology reports were analysed. In 1 year, GPs biopsied or excised 380 skin cancers. In 1 month, dermatologists biopsied or excised 385 skin cancers, and plastic surgeons 179 skin cancers. 'High risk' basal cell carcinomas (BCC) comprised 63.0% of BCC excised by GPs. For all skin cancer types, GPs excised smaller lesions, and had a lower rate of complete excisions compared with skin specialists. A statistical difference was demonstrated for BCC excisions only. Conclusion: GPs in east and south-east Scotland excise a number of skin cancers including malignant melanoma (MM), squamous cell carcinoma (SCC) and high-risk BCC. Despite removing smaller lesions, less commonly on difficult surgical sites of the head and neck, GP excision rates are lower for all skin cancers, and statistically inferior for BCC, compared with secondary care, supporting the development of guidelines in Scotland similar to those in other UK regions. Poorer GP excision rates may have serious consequences for patients with high-risk lesions. abstract_id: PUBMED:35814437 Primary Malignant Melanoma of the Cervix: An Integrated Analysis of Case Reports and Series. Melanoma, also known as malignant melanoma, is a type of malignant tumour that originates from melanocytes in the basal layer of the epidermis. Primary malignant melanomas of the female genital tract are rare. Similarly, primary malignant melanoma of the cervix, which originates from cervical melanocytes, is an extremely rare disease and the second most common type of melanoma in women aged between 15 and 44 years worldwide. To date, primary malignant melanoma of the cervix is characterized by poor patient prognosis, and little consensus exists regarding the best treatment. The situation is worsened by a lack of clinical studies with large samples.
Notably, surgery remains the preferred treatment option for patients with primary malignant melanomas of the cervix. Current treatments are based on International Federation of Gynecology and Obstetrics (2018) staging with reference to National Comprehensive Cancer Network guidelines. This study aimed to identify a more suitable treatment modality for primary malignant melanoma of the cervix. Therefore, we first conducted an integrated analysis of case reports and series to assess the impact of various factors on the prognosis of such patients. In summary, this is the first pooled analysis including 149 cases of primary cervical melanoma. We found that patients who underwent radical hysterectomy-based surgery, those with non-metastatic lymph nodes and those who underwent lymphadenectomy had significantly higher survival rates. In patients who had RH-based surgery, survival rates at the 24-month time point were higher in those who did not receive additional treatments than in those who did, but for those who had total hysterectomy-based surgery, the addition of other treatments to prolong median survival may be considered. In the overall analysis, age and lymphadenectomy were associated with increased and reduced risk of death in these patients, respectively. Although the differences were not statistically significant, stage III & IV disease, TAH and lymphatic metastases increased the risk of death, whereas radical hysterectomy was associated with a reduced risk of death. In the subgroup analysis, for patients who had undergone radical hysterectomy-based surgery, lymphadenectomy reduced the risk of death, while lymphatic metastases and complementary treatments increased the risk of death. For patients who had undergone total hysterectomy-based surgery, complementary treatment reduced the risk of death. In conclusion, based on a summary of previous reports, the recommended treatment for PMMC is radical hysterectomy with lymphadenectomy. The addition of other treatment options for patients undergoing RH-based surgery needs further study. abstract_id: PUBMED:38200620 Concordance between reflectance confocal microscopy and histopathology for the diagnosis of acral lentiginous melanoma. Background: Acral lentiginous melanoma (ALM) is a highly malignant and invasive type of melanoma with unique locations of onset. Its incidence is increasing and early diagnosis is challenging. Reflectance confocal microscopy (RCM) is a non-invasive technique that provides an accurate image of tissue pathology. There are few reports on the use of RCM for the assessment of ALM. Materials And Methods: In this retrospective study, data from 31 patients with a clinical diagnosis of ALM were collected. RCM image features were compared with histopathological findings to determine the concordance between the two methods. The sensitivity, specificity, positive predictive value, and negative predictive value of RCM for the diagnosis of ALM were evaluated. Results: RCM and histopathology findings were concordant in 29 of 31 patients (93.5%). There were no false-negative results, although there were two false positives in RCM diagnosis. The sensitivity of RCM for diagnosing ALM was 100%, specificity was 50%, positive predictive value was 93.1%, and negative predictive value was 100%. Conclusions: RCM showed substantial concordance with histopathology in the diagnosis of ALM. It is a reliable and valuable non-invasive diagnostic tool that holds promise for the early diagnosis of ALM.
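The diagnostic figures reported in the RCM study above (PUBMED:38200620) fully determine the underlying two-by-two table: with 31 patients, no false negatives, two false positives, and 50% specificity, the counts must be TP = 27, FP = 2, FN = 0 and TN = 2. The sketch below recomputes the published metrics from those counts as a consistency check; it illustrates the standard formulas and is not code from the study.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy metrics from a 2x2 confusion matrix."""
    return {
        "concordance": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts implied by the reported results (31 patients, 0 FN, 2 FP, 50% specificity).
for name, value in diagnostic_metrics(tp=27, fp=2, fn=0, tn=2).items():
    print(f"{name}: {value:.1%}")
# concordance 93.5%, sensitivity 100.0%, specificity 50.0%, ppv 93.1%, npv 100.0%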
Answer: Histopathology reports of primary cutaneous melanoma do not always contain enough essential information. An audit of the content of primary cutaneous malignant melanoma histopathology reports in the Trent Region revealed deficiencies in the quality of reports, with Breslow thickness present in 87.1% of reports and a comment on lateral excision in 85.4%. However, a specific clearance measurement was only recorded in 5.6% of reports, and in 4.5% of cases, neither thickness nor a comment on excision was recorded (PUBMED:8675728). Similarly, a study examining pathology reports of primary cutaneous melanoma found that many reports lacked sufficient information for American Joint Committee on Cancer (AJCC) staging, with only 28.1% of reports commenting on the presence or absence of ulceration (PUBMED:11345839). A population-based study investigating adherence to international recommendations for cutaneous melanoma pathology reports found that while the availability of information important for staging and management was good overall, it varied with geography (PUBMED:25735220). Another study in Queensland showed high documentation of important features such as clear diagnosis and thickness, but less completeness in reporting ulceration and regression, and fewer than 50% of reports documented mitotic rate per square millimeter, predominant cell type, microsatellites, growth phase, and desmoplasia (PUBMED:18832807). Moreover, a study examining the implications of a dermatopathologist's report on melanoma diagnosis and treatment found that significant changes in the Breslow index were identified in 58.2% of cases after review by an expert dermatopathologist, leading to modifications in the T-stage in 28.8% of lesions and alterations to the surgical plan in 15.3% of the cases (PUBMED:37763207). This underscores the importance of thorough evaluation by an expert dermatopathologist, especially in cases of melanoma in situ and thin melanomas. In conclusion, while histopathology reports of primary cutaneous melanoma often contain critical information, there are notable deficiencies and variations in the completeness and quality of reporting, which can impact staging, management, and patient outcomes. Therefore, adherence to reporting guidelines and standardization of reports are necessary to ensure that all essential information is consistently provided.
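As an aside on the statistics quoted throughout this answer, the completeness figures with 95% confidence intervals (e.g., 99.8%; 95% CI 98.6-100) are binomial proportions. A minimal Python sketch of how such an interval can be computed; the counts below are hypothetical, not the denominators of the cited audits:

from statsmodels.stats.proportion import proportion_confint

# Hypothetical example: 439 of 528 reports document ulceration.
count, nobs = 439, 528
rate = count / nobs
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"completeness {rate:.1%} (95% CI {low:.1%}-{high:.1%})")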
Instruction: Can we consider standard microsurgical anastomosis on the posterior tibial perforator network? Abstracts: abstract_id: PUBMED:24482060 Can we consider standard microsurgical anastomosis on the posterior tibial perforator network? An anatomical study. Purpose: The main vessels in an injured leg can be spared with perforator-to-perforator anastomosis. However, supermicrosurgery is not a routine procedure for all plastic surgeons. Our objective was to establish if the diameter of the perforators of the leg could allow anastomosis with standard microsurgical procedures. Methods: Twenty lower legs harvested from ten fresh cadavers were dissected. Arterial and venous vessels were injected with colored latex. The limbs were then dissected in a suprafascial plane. All the perforating arteries of a diameter >0.8 mm were located, and their external diameter and the number and external diameter of the venae comitantes were recorded. Results: We found at least three posterior tibial artery perforators with diameters >0.8 mm per leg, with a mean external diameter of 1.1 mm and one vena comitans in almost all cases (96%). The vena comitans was usually bigger than the perforating artery, with a mean diameter of 1.6 mm. After statistical analysis, we were able to locate two main perforator clusters: at the junctions of the upper two-thirds of the leg and of the lower two-thirds of the leg. Conclusion: The low-morbidity concept of perforator-to-perforator anastomosis can apply to posterior tibial artery perforators without using supermicrosurgical techniques. This is of high interest for open leg fractures where main vessels could be injured. We hope that the results of our study will encourage surgeons to consider sparing of main vessels for coverage of open leg fractures whether surgical teams master supermicrosurgery or not. abstract_id: PUBMED:37770196 Clinical application of posterior tibial artery or peroneal artery perforator flap in curing plate exposure after ankle fracture fixation. The study aims to evaluate the clinical application of the posterior tibial artery or peroneal artery perforator flap in the treatment of plate exposure after ankle fracture fixation. A posterior tibial artery or peroneal artery perforator flap was used on 16 patients with plate exposure after ankle fracture fixation in our hospital between July 2018 and July 2021. The time required to harvest the flap, the amount of intraoperative blood loss, the duration of postoperative drainage tube placement, the outcome of the flap and the healing observed at the donor site are reported. The sizes of the flaps were 2.5-7.0 cm × 5.0-18.0 cm and averaged 4.0 cm × 12.0 cm. The time required to harvest the posterior tibial artery or peroneal artery perforator flap ranged from 35 to 55 min and averaged 45 min. The amount of intraoperative blood loss ranged from 20 to 50 mL and averaged 35 mL. The duration of postoperative drainage tube placement ranged from 3 to 5 days and averaged 4 days. A total of 15 flaps survived; one flap had partial necrosis and survived after conservative treatment. All donor area defects were directly sutured without complications. There are multiple advantages of the posterior tibial artery or peroneal artery perforator flap, including a simple preparation technique and reliable repair of the defects without the need for microvascular anastomosis. It can be safely used to treat plate exposure after ankle fracture fixation and is worth popularizing in grassroots hospitals.
abstract_id: PUBMED:24228506 Anatomic study and clinical application of thinned posterior tibial artery perforator flap Objective: To explore the feasibility and therapeutic effect of the thinned posterior tibial artery free perforator flap for the reconstruction of soft tissue defects at the dorsum of the hands. Methods: Six fresh adult lower-limb specimens were injected with red latex via arterial cannula and dissected. The number, distribution, branches, and outer diameter of the posterior tibial artery perforators were observed. Based on the anatomic study, perforator flaps were designed to reconstruct soft tissue defects at the dorsum of the hands and wrists. The redundant fat on the flaps was removed, preserving the nutrient vascular system. Eleven flaps were used, ranging in size from 2 cm × 5 cm to 10 cm × 14 cm. Results: Forty-three skin perforators of the posterior tibial artery were observed in the six lower limbs; 29 perforators had an outer diameter greater than 0.5 mm where they pierced the deep fascia, an average of 4.8 per side. The mean outer diameter of the perforating arteries was 1.8 ± 0.5 mm, and the mean length was 44 ± 15 mm. Six perforators, suitable for anastomosis because of their larger diameters, were found in both the second and fifth zones. All flaps survived completely without any complication at the donor sites. Seven cases were followed up for 3-12 months. Both satisfactory functional and cosmetic results were achieved, with a soft and thinned appearance. Conclusions: The thinned posterior tibial artery free perforator flap has a reliable blood supply and good appearance. It is very suitable for the reconstruction of small or medium-sized defects at the dorsum of the hands and wrists. abstract_id: PUBMED:29664183 Posterior tibial perforators relationship with superficial nerves and veins: A cadaver study. Background: Most authors have evaluated the location of lower leg arterial perforators, but little is still known about the relationship between the arterial network and the great saphenous vein (GSV) and saphenous nerve (SN). The aim of this study is to evaluate the relationship between the arterial network of the posterior tibial artery perforators, the cutaneous nerves, and the superficial venous system in the lower one third of the leg. Methods: Eighteen lower limbs from cadavers were used for this study. The arterial and venous compartments were selectively injected with a mixture of barium sulfate and epoxy. The specimens were CT scanned, and the superficial veins, nerves, and arterial perforators were dissected. Results: A large perforator of the posterior tibial artery was found at a mean distance of 6.23 cm ± 0.88, with a 95% CI: 5.79-6.67, from the medial malleolus. The average diameter was 0.9 mm ± 0.17, with a 95% CI: 0.81-0.99. In 67% of cases the venae comitantes connected to the superficial venous system via the GSV; in the remaining cases, via Leonardo's vein. Both dissection and imaging studies showed perineural interperforator connections along the branches of the SN in all the specimens examined. Conclusions: The distribution pattern of posterior tibial artery perforators followed the superficial nerves in this region. There is an interperforator anastomotic network along the SN. The various patterns of the venous drainage system, in relationship to the distribution of the branches of the posterior tibial artery perforators, have been clarified.
abstract_id: PUBMED:31086144 Posterior Tibial Artery Perforator Based Propeller Flap for Lower Leg and Ankle Defect Coverage: A Prospective Observational Study. Reconstruction of lower leg and ankle defects with exposed bone or tendon is a challenging task for a plastic surgeon. There are various options; among them, the perforator-based propeller flap is a very good option because, although it is a microsurgical procedure, it requires no microvascular anastomosis. This study was designed to assess the clinical results of the posterior tibial artery perforator-based propeller flap for lower leg and ankle defect coverage. This prospective observational study was conducted in the Department of Burn and Plastic Surgery, Mymensingh Medical College Hospital, Mymensingh, Bangladesh, from July 2017 to June 2018. The sample size was 9, selected purposively. The postoperative follow-up period was up to 6 weeks. Among the 9 cases, 8 flaps survived completely; 1 case developed marginal necrosis, which healed secondarily. There were two complications in total, transient venous congestion and superficial epidermonecrolysis, which resolved spontaneously. Regarding the cause of the defect, most cases were post-traumatic wounds (66.7%); the others were post-infective, post-malignancy-excision and post-electric-burn wounds. Defect size was 2 cm × 2 cm to 7 cm × 5 cm. The maximum flap dimension was 19 cm × 6 cm and the minimum was 7 cm × 3 cm. The posterior tibial artery perforator was located 4 cm to 9 cm from the lowest level of the medial malleolus (mean 6.2 ± 1.6 cm). Rotation of the flap was 145°-180° (mean 163° ± 1.39°). In all cases the donor site was covered with a split-thickness skin graft. Operation time was 120 to 180 minutes; the mean operative time was 143.3 ± 2.38 minutes. Postoperative hospital stay was 10 to 21 days (mean 11.44 ± 3.64 days). Thus, the posterior tibial artery perforator-based propeller flap is a very good option for lower leg and ankle defect coverage. abstract_id: PUBMED:17235618 Endoscopic assisted posterior tibial tendon reconstruction for stage 2 posterior tibial tendon insufficiency. Posterior tibial tendon insufficiency is the commonest cause of adult-onset flatfoot deformity. The treatment of stage 2 posterior tibial tendon insufficiency is still controversial. Different combinations of open procedures (tendon transfer, calcaneal osteotomy and hindfoot arthrodesis) have been described. We describe an endoscopic approach to posterior tibial tendon reconstruction. By means of anterior and posterior tibial tendon tendoscopies, the medial half of the anterior tibial tendon is transferred to the posterior tibial tendon. The construct is then augmented by side-to-side anastomosis with the flexor digitorum longus tendon. This is supplemented with subtalar arthroereisis using a bioresorbable arthroereisis implant. abstract_id: PUBMED:23102914 Transtibial amputation salvage with a cutaneous flap based on posterior tibial perforators When performing an amputation of the lower limb, the preservation of the knee joint is important to obtain an optimal functional outcome. Many reconstruction procedures are available to cover the amputation defect in order to preserve a sufficient length of the stump, so that a prosthesis can be fitted with the best functional results. Local musculocutaneous flaps or free flaps are conventionally described, with their advantages and disadvantages.
In this report, we describe our experience with a transtibial amputation and stump coverage using a fasciocutaneous flap based on posterior tibial perforators. There was extensive tibial bone exposure, with only the posterior skin remaining viable. This is an efficient and reliable solution for covering the tibial stump without microvascular anastomosis. abstract_id: PUBMED:33503755 The perforator-centralizing technique for super-thin anterolateral thigh perforator flaps: Minimizing the partial necrosis rate. Background: Despite the wide demand for thin flaps for various types of extremity reconstruction, the thin elevation technique for anterolateral thigh (ALT) flaps is not very popular because of its technical difficulty and safety concerns. This study proposes a novel perforator-centralizing technique for super-thin ALT flaps and analyzes its effects in comparison with a skewed-perforator group. Methods: From June 2018 to January 2020, 41 patients who required coverage of various types of defects with a single perforator-based super-thin ALT free flap were enrolled. The incidence of partial necrosis and the proportion of the necrotic area were analyzed on postoperative day 20 according to the location of superficial penetrating perforators along the flap. The centralized-perforator group was defined as having a perforator anchored to the middle third of the x- and y-axes of the flap, while the skewed-perforator group was defined as having a perforator anchored outside of the middle third of the x- and y-axes of the flap. Results: No statistically significant difference in flap thickness or dimensions was found between the two groups. The arterial and venous anastomosis patterns of patients in both groups were not significantly different. Only the mean partial necrotic area showed a statistically significant difference between the two groups (centralized-perforator group, 3.4% ± 2.2%; skewed-perforator group, 15.8% ± 8.6%; P = 0.022). Conclusions: The present study demonstrated that super-thin ALT perforator flaps can be elevated safely, with minimal partial necrosis, using the perforator-centralizing technique. abstract_id: PUBMED:21818949 Repair of soft tissue defects of lower extremity by using cross-bridge contralateral distally based posterior tibial artery perforator flaps or peroneal artery perforator flaps Objective: To discuss the feasibility of repairing soft tissue defects of the lower extremity with a distally based posterior tibial artery perforator cross-bridge flap or a distally based peroneal artery perforator cross-bridge flap. Methods: Between August 2007 and February 2010, 15 patients with soft tissue defects of the legs or feet were treated. There were 14 males and 1 female, with a mean age of 33.9 years (range, 25-48 years). The injury causes included traffic accident in 8 cases, crush injury by machine in 4 cases, and crush injury by heavy weights in 3 cases. There was a scar (22 cm × 8 cm) left on the ankle after skin grafting in 1 patient (35 months after a traffic accident). In the other 14 patients, the defect locations were the ankle in 1 case, the upper part of the lower leg in 1 case, and the lower part of the lower leg in 12 cases; the defect sizes ranged from 8 cm × 6 cm to 26 cm × 15 cm; the mean interval from injury to admission was 14.8 days (range, 4-28 days).
Defects were repaired with distally based posterior tibial artery perforator cross-bridge flaps in 9 cases and distally based peroneal artery perforator cross-bridge flaps in 6 cases, and the flap sizes ranged from 10 cm × 8 cm to 28 cm × 17 cm. The donor sites were sutured directly, but a split-thickness skin graft was used in the middle part. The pedicles of all flaps were cut at 5-6 weeks postoperatively. Results: Mild distal congestion and partial necrosis at the edge of the flap occurred in 2 cases and resolved after dressing changes; the other flaps survived. After cutting the pedicles, all flaps survived, and the wounds of the recipient sites healed by first intention. Incisions of the donor sites healed by first intention, and the skin grafts survived. Fifteen patients were followed up for 7-35 months (average, 19.5 months). The color and texture of the flaps were similar to those of the recipient site. According to the American Orthopaedic Foot and Ankle Society (AOFAS) ankle-hindfoot scoring system, the mean score was 87.3 (range, 81-92). Conclusion: A distally based posterior tibial artery perforator cross-bridge flap or a distally based peroneal artery perforator cross-bridge flap is an optimal alternative for the reconstruction of serious tissue defects of the contralateral leg or foot, because no microvascular anastomosis is necessary, the risk of vascular crisis is low, and the survival rate is high. abstract_id: PUBMED:24746385 Posterior interosseous artery perforator-free flap: treating intermediate-size hand and foot defects. Introduction: Ambiguous defects on the hand and foot, especially on the fingers and toes, are still challenging to treat despite achievements in reconstruction. Aim: The purpose of this study was to evaluate the use of the posterior interosseous artery perforator flap for resurfacing intermediate-sized defects and providing adequate coverage over tendons and bones. Method: Between October 2008 and March 2013, a total of 19 patients with soft-tissue defects on the hand or foot were treated. Flap elevation, anatomy, and clinical progress were evaluated. Result: All flaps survived and covered the defects, which ranged in area from 12 to 45 cm². The freestyle approach was used to harvest the flaps. The average length of the pedicle was 2.5 cm, and the pedicle was harvested without affecting the source vessel. The average diameter of the artery was 0.8 mm, and the average thickness of the flap was 3.5 mm. Anastomosis was performed either end-to-end on the perforator or end-to-side on deep vessels. No subsequent thinning or surgical flap correction was necessary. Ambulation was allowed at 3 days after surgery. The donor site was closed primarily, leaving an acceptable result. Discussion: A posterior interosseous artery perforator-free flap is a suitable choice for intermediate-size defects that are too large to cover using a local flap or too small for a first-line perforator flap. Up to 45 cm² of adequate coverage can be provided using a thin posterior interosseous artery perforator-free flap that does not require additional debulking. The disadvantages of a short pedicle can be overcome using perforator-to-perforator supermicrosurgery. Answer: Yes, standard microsurgical anastomosis can be considered on the posterior tibial perforator network.
An anatomical study found that at least three posterior tibial artery perforators with diameters greater than 0.8 mm could be located per leg, with a mean external diameter of 1.1 mm, which would allow anastomosis with standard microsurgical procedures without using supermicrosurgical techniques. This is particularly relevant for open leg fractures where main vessels could be injured (PUBMED:24482060). Additionally, posterior tibial artery or peroneal artery perforator flaps have been used successfully in the treatment of plate exposure after ankle fracture fixation, indicating that these flaps can be harvested and applied without the need for performing microvascular anastomosis, thus supporting their use in standard microsurgical practice (PUBMED:37770196). Moreover, the posterior tibial artery perforator-based propeller flap has been used for lower leg and ankle defect coverage, demonstrating that these flaps can be a good option for reconstruction, again without the need for microvascular anastomosis (PUBMED:31086144). Therefore, the evidence suggests that the posterior tibial perforator network is suitable for standard microsurgical anastomosis, providing a viable option for reconstructive surgery in cases where main vessels are compromised or in the coverage of defects following trauma or surgical interventions.
Instruction: Do lifestyle interventions affect dietary diversity score in the general population? Abstracts: abstract_id: PUBMED:19232153 Do lifestyle interventions affect dietary diversity score in the general population? Objective: The dietary diversity score (DDS) is a good indicator of diet quality as well as of diet-disease relationships; therefore, the present study was undertaken to reveal the effect of a lifestyle intervention on this index. Design: A baseline and three evaluation studies were conducted in two intervention districts (Isfahan and Najaf-Abad) and a reference area (Arak), all located in central Iran. The Isfahan Healthy Heart Programme (IHHP) targeted the entire population of nearly 2 million in urban and rural areas of the intervention communities. One of the main strategies of the lifestyle intervention phase in the IHHP was healthy nutrition. Usual dietary intake was assessed using a forty-nine-item FFQ. A diversity score for each food group was calculated, and the DDS was taken as the sum of the diversity scores of the food groups. Results: There were significant increases in DDS in both intervention areas (P = 0.0001) after controlling for confounding factors. There was a significant interaction between area and evaluation stage with regard to DDS (P = 0.0001). The effect of the intervention on the diversity scores of all food groups was also significant (P = 0.0001 for all) after adjusting for socio-economic status. Conclusion: The community-based lifestyle intervention in the IHHP was successful in improving DDS, which might reflect an increase in the diet quality of the population and in turn might decrease the risks of chronic diseases. abstract_id: PUBMED:33148898 Dietary diversity and characteristics of lifestyle and awareness of health in Japanese workers: a cross-sectional study. The aim of this study was to clarify the characteristics of lifestyle and health awareness according to dietary diversity in a Japanese worksite population. The participants were 1,312 men and women aged 20 to 63 years who were living in Tokushima Prefecture, Japan during the period 2012-2013. We obtained anthropometric data and information on lifestyle characteristics using a self-administered questionnaire. Dietary intake was assessed using a food frequency questionnaire, and dietary diversity was determined using the Quantitative Index for Dietary Diversity (QUANTIDD). The characteristics of lifestyle and health awareness according to quartiles of the QUANTIDD score were assessed using the chi-square test and a general linear model. The higher the QUANTIDD score, the larger the proportions of participants who knew the appropriate amount of dietary intake and who referred to nutritional component information when choosing and/or buying food. Among participants with higher QUANTIDD scores, the proportion who considered their current diet good was high in women, whereas the proportion who wanted to improve their diet in the future was high in men. These results indicate that higher dietary diversity was related to better characteristics of lifestyle and awareness of health. abstract_id: PUBMED:35433799 Association of Dietary and Lifestyle Inflammation Score With Cardiorespiratory Fitness.
Objective: We aimed to assess the potential association of the dietary inflammation score (DIS) and lifestyle inflammation score (LIS), and their joint association (DLIS), with cardiorespiratory fitness (CRF) in Tehranian adults. Design: The study was cross-sectional. Participants: A total of 265 males and females aged 18-70 years (mean ± SD: 36.9 ± 13.3) were enrolled. Eligible participants were healthy men and women who were free of medications and had no acute or chronic infection or inflammatory disease. Measures: The DIS was calculated using data from 18 anti- and pro-inflammatory dietary components, and the LIS from three non-dietary components (physical activity, smoking status, and general adiposity), with higher scores indicating a more pro-inflammatory diet and lifestyle, respectively. The DLIS was calculated by summing the DIS and LIS. CRF was assessed by the Bruce protocol, and VO2 max was measured as the main variable of CRF. The odds ratio (OR) and 95% confidence interval (CI) of CRF across tertiles of the DIS, LIS, and DLIS were estimated by logistic regression analysis, with age, gender, energy intake, marital and educational status, and occupation considered as confounders. Results: The DLIS ranged from -2.10 to 0.38 (mean ± SD: -1.25 ± 0.64). In the model that controlled for all variables, the ORs of CRF for the second and third tertiles of the DLIS as compared to the first tertile were 0.42 (95% CI: 0.20, 0.90) and 0.12 (95% CI: 0.05, 0.32), respectively (P-trend < 0.001). There was a strong inverse association between the LIS and CRF (OR for the third vs. first tertile: 0.12, 95% CI: 0.05, 0.32). There was no association between the DIS and CRF. Conclusion: The present study examined the joint association of inflammation-related dietary and lifestyle behaviors with CRF and found a strong inverse association between a pro-inflammatory lifestyle and CRF. We did not find any association between dietary inflammatory properties and CRF. Future studies should address the relationship between the inflammatory potential of the diet and CRF. abstract_id: PUBMED:37970372 Empirical dietary inflammatory index and lifestyle inflammation score relationship with obesity: A population-based cross-sectional study. The present study aimed to investigate the associations of the empirical dietary inflammatory index (EDII) and lifestyle inflammation score (LIS) with general and abdominal obesity in Iranian adults, using data from the Yazd Health Study (YaHS). This cross-sectional study was conducted using the information of participants of the YaHS study. The dietary assessment was conducted using a validated food frequency questionnaire (FFQ), and anthropometric measurements were assessed by standard protocols. The inflammatory potential of diet and lifestyle was calculated using the EDII and LIS scores. We also created a combinational index of EDII and LIS as an EDII-LIS score. General and abdominal obesity were defined based on body mass index (BMI), waist circumference (WC), and waist-to-hip ratio (WHR) cut points, respectively. The odds ratio (OR) and 95% confidence interval (CI) of general and abdominal obesity across tertiles of EDII and LIS were estimated using logistic regression analyses, adjusted for potential confounders. A significant association was found between a higher EDII score and general obesity (OR: 1.21, 95% CI: 1.04-1.41, p trend: .016); however, there was no significant association between the EDII and either definition of abdominal obesity.
Participants in the highest versus lowest tertile of the LIS had higher odds of abdominal obesity (OR for WC: 37.0, 95% CI: 28.8-47.5, p trend < .001; OR for WHR: 3.30, 95% CI: 2.65-4.11, p trend < .001). In addition, there was a direct relationship between a higher EDII-LIS score and an increased likelihood of abdominal obesity (OR for WC: 15.0, 95% CI: 12.3-18.3, p trend < .001; OR for WHR: 2.68, 95% CI: 2.18-3.29, p trend < .001). Greater adherence to the EDII score was associated with higher odds of general obesity, but not abdominal obesity. Also, individuals with higher LIS and EDII-LIS scores were more prone to abdominal obesity. abstract_id: PUBMED:35215465 The Impact of Dietary Diversity, Lifestyle, and Blood Lipids on Carotid Atherosclerosis: A Cross-Sectional Study. Carotid atherosclerosis is a common arterial wall lesion that causes narrowing and occlusion of the arteries and is the basis of cardiovascular events. Dietary habits, lifestyle, and lipid metabolism should be considered integrally in the context of carotid atherosclerosis (CAS). However, this area has been investigated less often in China. To understand the prevalence of CAS in China and the impact of dietary diversity and habits, lifestyle, and lipid metabolism on CAS, as well as its predictive factors, a cross-sectional study was performed in two northern and southern Chinese tertiary hospitals from 2017 to 2019. Included participants underwent carotid artery color Doppler ultrasonography, blood lipid examination and dietary evaluation. In total, 11,601 CAS patients and 27,041 individuals without carotid artery lesions were included. The prevalence of CAS was 30.0% in this group. High BMI (OR: 1.685, 95% CI [1.315-2.160]), current (1.148 [1.077-1.224]) or ex-smoking (1.349 [1.190-1.529]), abstinence from alcohol (1.223 [1.026-1.459]), social engagement (1.122 [1.050-1.198]), hypertension (1.828 [1.718-1.945]), and total cholesterol (1.438 [1.298-1.594]) were risk factors for CAS, while higher dietary diversity according to DDS-2 (0.891 [0.805-0.989]), HDL-C (0.558 [0.487-0.639]), sugar-sweetened beverages (0.734 [0.696-0.774]), and no midnight snack consumption (0.846 [0.792-0.903]) were protective factors. This study demonstrated that higher dietary diversity was a protective factor against CAS in a healthy population. In addition, current recommendations of healthy lifestyle and dietary habits for preventing CAS should be strengthened. Moreover, dietary diversity should concentrate on food attributes and dietary balance rather than increased quantities. abstract_id: PUBMED:36169334 Dietary diversity score and the incidence of chronic kidney disease in an agricultural Moroccan adults population. Background: A healthy diet plays an important role in the management of chronic kidney disease (CKD) and in the prevention of related comorbidities. The dietary diversity score (DDS) is well recognized as an indicator for assessing diet quality and food security. However, its association with CKD has not been investigated. Objective: The aim of this study was to estimate the prevalence of CKD and to evaluate its association with the DDS among Moroccan adults from Sidi Bennour province. Materials And Methods: A cross-sectional study was conducted among 210 individuals. General information, among other data, was collected. Weight, height and waist circumference were measured, and body mass index (BMI) was calculated. Blood samples were collected and the serum creatinine was determined.
The estimated glomerular filtration rate (eGFR) was then calculated with the Modification of Diet in Renal Disease (MDRD) formula, and chronic kidney disease was defined as an eGFR < 60 ml/min/1.73 m². Dietary intake was assessed using a 24-hour dietary recall, and the DDS was computed according to the FAO guidelines. Results: The participants' mean age was 54.18 ± 13.45 years, the sex ratio was 0.38, and the prevalence of chronic kidney disease was 4.4%. The dietary diversity score was lower than 3 (lowest DDS) in 14.4% of the subjects, between 4 and 5 (medium DDS) in 72.5%, and higher than 6 (high DDS) in 13.1% of the subjects. Subjects with a higher DDS consistently had higher eGFR levels than those with a lower DDS, although the DDS was not associated with the incidence of CKD in the present study. Conclusion: Although no statistically significant association was found between CKD and dietary diversity, participants with higher dietary diversity had higher eGFR levels. abstract_id: PUBMED:33098384 Associations of Major Dietary Patterns and Dietary Diversity Score with Semen Parameters: A Cross-Sectional Study in Iranian Infertile Men. Background: This cross-sectional study aimed to assess the relationships of major dietary patterns and the dietary diversity score with semen parameters in infertile Iranian males. Materials And Methods: In this cross-sectional study, 260 infertile men (18-55 years old) who met the inclusion criteria entered the study. Four semen parameters, namely sperm concentration (SC), total sperm movement (TSM), normal sperm morphology (NSM) and sperm volume, were considered according to the spermogram. A 168-item food frequency questionnaire (FFQ) was used to collect dietary intakes and calculate the dietary diversity score. Factor analysis was used to extract dietary patterns. Results: The following four factors were extracted: "traditional pattern", "prudent pattern", "vegetable-based pattern" and "mixed pattern". After adjusting for potential confounders, those in the highest quartile of the traditional pattern had 83% lower odds of abnormal concentration compared with the first quartile (OR = 0.17, 95% CI: 0.04-0.73); however, subjects in the highest quartile of this pattern had 2.69-fold higher odds of abnormal sperm volume compared with those in the first quartile (95% CI: 1.06-6.82). Men in the second quartile of the prudent pattern had 4.36-fold higher odds of an abnormal sperm volume in comparison to the reference category (95% CI: 1.75-10.86), after considering potential confounders. With regard to the mixed pattern, men in the second, third and fourth quartiles had, respectively, 85% (95% CI: 0.03-0.76), 86% (95% CI: 0.02-0.75) and 83% (95% CI: 0.034-0.9) lower odds of abnormal concentration compared with the first quartile. Additionally, no significant association was found between the dietary diversity score and sperm quality parameters. Conclusion: Higher intake of the traditional diet was linked to lower odds of abnormal sperm concentration but poorer sperm volume. Also, the mixed diet was associated with a reduced prevalence of abnormal sperm concentration. abstract_id: PUBMED:38433254 The effects of dietary diversity on health status among the older adults: an empirical study from China. Background: Dietary diversity is an indicator of nutrient intake among the elderly.
Previous research has primarily examined dietary diversity and the risks of chronic and infectious diseases and cognitive impairment; limited evidence addresses the association between dietary diversity and the overall health status of specific populations with a heterogeneity analysis. This study aimed to probe the effects of dietary diversity on health status among Chinese older adults. Methods: A total of 5740 participants aged 65 and above were selected from the Chinese Longitudinal Healthy Longevity Survey: 3334 from the 2018 wave and 2406 from the 2011 wave. Dietary diversity was assessed by a Dietary Diversity Score ranging from 0 to 9; the higher the score, the better the dietary diversity. Health status was classified as healthy, impaired or dysfunctional using three indicators: Activities of Daily Living, Instrumental Activities of Daily Living and the Mini-Mental State Examination. Multinomial logistic regression was employed to assess the effects of dietary diversity on health status among the elderly. Heterogeneity across age groups was further examined. Results: Older adults with better dietary diversity were in better health: the mean dietary diversity score for the healthy group was higher than those of the impaired and dysfunctional groups (in the 2018 wave, the scores were 6.54, 6.26 and 5.92, respectively; in the 2011 wave, they were 6.38, 5.93 and 5.71, respectively). Heterogeneity analysis showed that the younger groups tended to have more diversified diets and better health status. Dietary diversity was more significantly associated with health status among the younger elderly (OR, 1.22; 95% CI, 1.04-1.44; p < 0.05) than the older elderly (OR, 1.01; 95% CI, 0.37-2.78; p > 0.05) in the 2018 wave; and in the 2011 wave, dietary diversity was more significantly related to health status among the younger elderly (OR, 1.62; 95% CI, 1.26-2.08; p < 0.001) than the older elderly (OR, 0.08; 95% CI, 0.31-1.94; p > 0.05). Conclusions: Better dietary diversity has positive effects on health status and is more significantly related to the younger elderly than the older elderly. Therefore, interventions including accessible dietary diversity assessment, a variety of dietary assistance services in daily life, and preservation of nutrient digestion and absorption capacity in advanced age might help secure the benefits of dietary diversity for health status among older adults, especially in maintaining intrinsic capacity and physical function. In addition, a healthy lifestyle should also be recommended. abstract_id: PUBMED:34164120 Only one in four lactating mothers met the minimum dietary diversity score in the pastoral community, Afar region, Ethiopia: a community-based cross-sectional study. Maternal dietary feeding practice is one of the proxy indicators of maternal nutrient adequacy, and it improves outcomes for both mothers and their offspring. The minimum dietary diversity score for lactating women is met when the mother has eaten at least four of nine food groups in the 24 h preceding the survey, regardless of portion size. Therefore, the present study aimed to determine the minimum dietary diversity score (MDDS) and its predictors among lactating mothers in a pastoralist community in Ethiopia. A community-based cross-sectional study was conducted among 360 lactating mothers using a multi-stage sampling technique from 5 January 2020 to 10 February 2020. Data were collected using questionnaires and anthropometric measurements.
Data were entered using EpiData 4.6.02 and exported into SPSS version 25. Statistical significance was declared at a P-value < 0.05 in multivariable logistic regression. Only one in four lactating mothers met the MDDS. The majority of them had consumed cereals in the 24 h preceding data collection. The most important predictors were maternal meal frequency (adjusted odds ratio (AOR) 6.26; 95% confidence interval (CI) 3.51, 11.15), antenatal care (ANC) follow-up one to three times and four or more times (AOR 2.58; 95% CI 1.24, 5.36 and AOR 4.77; 95% CI 1.90, 11.95, respectively) and secondary paternal education (AOR 2.97; 95% CI 1.44, 6.11). The MDDS among lactating mothers was low. Paternal education, maternal meal frequency and ANC follow-up were the significant predictors. Therefore, to improve the maternal dietary diversity score, emphasis should be given to these predictors. abstract_id: PUBMED:37937364 Development and validation of an occurrence-based healthy dietary diversity (ORCHID) score easy to operationalise in dietary prevention interventions in older adults: a French study. A healthy diet and dietary diversity have been associated with healthy ageing. Several scores have been developed to assess dietary diversity or healthy diets in epidemiological studies, but they are not adapted for use in the context of preventive nutrition interventions. This study aimed to develop an occurrence-based healthy dietary diversity (ORCHID) score easy to implement in the field and to validate it using dietary data from older participants in the latest French food consumption survey (INCA3). The ORCHID score was made up of several components representing the consumption occurrences of twenty food groups, in line with French dietary guidelines. The score was then validated using dietary data (namely three 24-h recalls and a food propensity questionnaire) from 696 participants aged 60 years and over in the INCA3 survey. Score validity was evaluated by describing the association of the score with its components, as well as with energy intakes, solid energy density (SED) and the probability of adequate nutrient intakes (assessed by the PANDiet). Higher scores were associated with more points in healthy components such as 'fruits' and 'vegetables' (r = 0.51 and r = 0.54, respectively). The score was positively associated with the PANDiet (r = 0.43) and inversely associated with SED (r = -0.37), while no significant association was found with energy intakes. The ORCHID score was validated as a good proxy of the nutritional quality of French older adults' diets. It could therefore be a useful tool for both public health research and nutrition interventions. Answer: Yes, lifestyle interventions do affect dietary diversity score (DDS) in the general population. The study conducted as part of the Isfahan Healthy Heart Programme (IHHP) in central Iran found that a community-based lifestyle intervention was successful in improving DDS, which could be related to an increase in diet quality and could potentially decrease the risks of chronic diseases (PUBMED:19232153). Similarly, in a Japanese worksite population, it was observed that higher dietary diversity was related to better characteristics of lifestyle and awareness of health (PUBMED:33148898). Moreover, dietary diversity has been associated with various health outcomes, such as a protective effect against carotid atherosclerosis (PUBMED:35215465), and a positive impact on health status among older adults (PUBMED:38433254).
However, it is important to note that while some studies have found a positive association between dietary diversity and health outcomes, not all studies have found a direct association between dietary diversity and specific health conditions. For instance, a study in a Moroccan adult population did not find a statistically significant association between CKD and dietary diversity, although participants with higher dietary diversity had higher eGFR levels (PUBMED:36169334). Additionally, a study on Iranian infertile men found no significant association between dietary diversity score and sperm quality parameters (PUBMED:33098384). In the context of maternal health, only one in four lactating mothers in a pastoral community in Ethiopia met the minimum dietary diversity score, with maternal meal frequency, antenatal care follow-up, and paternal education being significant predictors of dietary diversity (PUBMED:34164120). Overall, the evidence suggests that lifestyle interventions can positively influence dietary diversity in the general population, which in turn can have beneficial effects on various health outcomes. However, the relationship between dietary diversity and specific health conditions may vary and is influenced by multiple factors.
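To make the scoring mechanics above concrete: several of the cited studies use count-based scores in which each food group consumed over a reference period contributes one point, and a threshold (e.g., at least four of nine groups for the MDDS) defines adequacy. A minimal Python sketch under those assumptions; the food-group list and the example recall are illustrative, not the exact instruments used in the cited studies:

# Hypothetical nine food groups; real instruments define these precisely.
FOOD_GROUPS = ["grains", "legumes", "nuts_seeds", "dairy", "flesh_foods",
               "eggs", "dark_green_leafy", "other_vitamin_a_rich",
               "other_fruits_vegetables"]

def dietary_diversity_score(consumed):
    # One point per food group eaten in the previous 24 h, regardless of amount.
    return sum(group in consumed for group in FOOD_GROUPS)

recall = {"grains", "legumes", "dairy"}  # hypothetical 24-h recall
score = dietary_diversity_score(recall)
print(score, score >= 4)  # 3 False -> below the minimum dietary diversity threshold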
Instruction: Can prepregnancy care of diabetic women reduce the risk of abnormal babies? Abstracts: abstract_id: PUBMED:2249069 Can prepregnancy care of diabetic women reduce the risk of abnormal babies? Objective: To see whether a prepregnancy clinic for diabetic women can achieve tight glycaemic control in early pregnancy and so reduce the high incidence of major congenital malformation that occurs in the infants of these women. Design: An analysis of diabetic control in early pregnancy including a record of severe hypoglycaemic episodes in relation to the occurrence of major congenital malformation among the infants. Setting: A diabetic clinic and a combined diabetic and antenatal clinic of a teaching hospital. Patients: 143 insulin-dependent women attending a prepregnancy clinic and 96 insulin-dependent women managed over the same period who had not received specific prepregnancy care. Main Outcome Measure: The incidence of major congenital malformation. Results: Compared with the women who were not given specific prepregnancy care, the group who attended the prepregnancy clinic had a lower haemoglobin A1 concentration in the first trimester (8.4% v 10.5%), a higher incidence of hypoglycaemia in early pregnancy (38/143 women v 8/96), and fewer infants with congenital abnormalities (2/143 v 10/96; relative risk among women not given specific prepregnancy care 7.4 (95% confidence interval 1.7 to 33.2)). Conclusion: Tight control of the maternal blood glucose concentration in the early weeks of pregnancy can be achieved by the prepregnancy clinic approach and is associated with a highly significant reduction in the risk of serious congenital abnormalities in the offspring. Hypoglycaemic episodes do not seem to lead to fetal malformation even when they occur during the period of organogenesis. abstract_id: PUBMED:37062367 Association between Prepregnancy Weight Change and Risk of Gestational Diabetes Mellitus in Chinese Pregnant Women. Background: Evidence regarding prepregnancy weight change and gestational diabetes mellitus (GDM) is lacking among East Asian women. Objectives: Our study aimed to investigate the association between weight change from age 18 y to pregnancy and GDM in Chinese pregnant women. Methods: Our analyses included 6972 pregnant women from the Tongji-Shuangliu Birth Cohort. Body weights were recalled for age 18 y and the time point immediately before pregnancy, whereas height was measured during early pregnancy. Prepregnancy weight change was calculated as the difference between weight immediately before pregnancy and weight at age 18 y. GDM outcomes were ascertained by a 75-g oral-glucose-tolerance test. Multivariable logistic regression models were used to examine the association between prepregnancy weight change and risk of GDM. Results: In total, 501 (7.2%) developed GDM in the cohort. After multivariable adjustments, prepregnancy weight change was linearly associated with a higher risk of GDM (P < 0.001). Compared with participants with stable weight (weight change within 5.0 kg) before pregnancy, multivariable-adjusted odds ratios and 95% confidence intervals were 1.55 (1.22, 1.98) and 2.24 (1.78, 2.83) for participants with moderate (5-9.9 kg) and high (≥10 kg) weight gain, respectively. In addition, overweight/obesity immediately before pregnancy mediated 17.6% and 31.7% of the associations of moderate and high weight gain with GDM risk, whereas weekly weight gain during pregnancy mediated 21.1% and 22.7% of the associations.
Conclusions: Weight gain from age 18 y to pregnancy was significantly associated with a higher risk of GDM. Maintaining weight stability, and especially preventing excessive weight gain from early adulthood to pregnancy, could be a potential strategy to reduce GDM risk. abstract_id: PUBMED:28385076 Higher prepregnancy body mass index is a risk factor for developing preeclampsia in Maya-Mestizo women: a cohort study. Aim: Preeclampsia and obesity are two closely related syndromes. A high maternal prepregnancy body mass index (BMI) is a risk factor for preeclampsia, independently of the ethnic background of the studied population. The aim of this study was to analyse, in a prospective cohort study, the relation between prepregnancy BMI and the development of preeclampsia in Maya-Mestizo women. Design: This is a prospective cohort study of 642 pregnant women who were included in the first trimester of pregnancy (gestational age ≤12 weeks at the first antenatal visit); all were of Maya-Mestizo ethnic origin from the state of Yucatán, México. We assessed the potential risk factors for preeclampsia and documented the prepregnancy BMI (kg/m²), based on measured height and maternal self-report of prepregnancy weight at the initial visit. We also documented during the antenatal visits whether the pregnant women developed preeclampsia. Results: Of the 642 pregnant Maya-Mestizo women, 49 developed preeclampsia, with an incidence of 7.6% (44.9% had severe and 55% mild). The prepregnancy BMI was higher in women who developed preeclampsia than in those with normal pregnancies. Compared with normal weight, women with overweight or obesity presented RR = 2.82 (95% CI: 1.32-6.03; P = 0.008) and RR = 4.22 (95% CI: 2.07-8.61; P = 0.001), respectively. Conclusions: Our findings expand on previous studies to show that a higher prepregnancy BMI is a strong, independent risk factor for preeclampsia. abstract_id: PUBMED:33929654 Veteran-Reported Receipt of Prepregnancy Care: Data from the Examining Contraceptive Use and Unmet Need (ECUUN) Study. Objectives: To identify the prevalence of women Veterans reporting receipt of counseling about health optimization prior to pregnancy, the topics most frequently discussed, and factors associated with receipt of this care. Methods: We analyzed data from a nationally representative, cross-sectional telephone survey of women Veterans (n = 2302) ages 18-45 who used VA for primary care in the previous year. Our sample included women who were (1) currently pregnant or trying to become pregnant, (2) not currently trying but planning for pregnancy in the future, or (3) unsure of pregnancy intention. Multivariable logistic regression was used to examine adjusted associations of patient- and provider-level factors with receipt of any counseling about health optimization prior to pregnancy (prepregnancy counseling) and with counseling on specific topics. Results: Among 512 women who were considering or unsure about pregnancy, fewer than half (49%) reported receiving any prepregnancy counseling from a VA provider in the past year. For those who did, the most frequently discussed topics included healthy weight (29%), medication safety (27%), smoking (27%), and folic acid use before pregnancy (27%).
Factors positively associated with receipt of prepregnancy counseling included a history of mental health conditions (aOR = 1.96, 95% CI: 1.28, 3.00) and receipt of primary care within a dedicated women's health clinic (aOR = 2.07, 95% CI: 1.35, 3.18), whereas negatively associated factors included far-future and unsure pregnancy intentions (aOR = 0.35, 95% CI: 0.17, 0.71 and aOR = 0.33, 95% CI: 0.16, 0.70, respectively). Conclusions For Practice: Routine assessment of pregnancy preferences in primary care could identify individuals to whom counseling about health optimization prior to pregnancy can be offered to promote patient-centered family planning care. abstract_id: PUBMED:30284940 Prepregnancy Factors Are Associated with Development of Hypertension Later in Life in Women with Pre-Eclampsia. Background: The aim of our study was to investigate the prepregnancy characteristics that are risk factors for the development of hypertension (HTN), and to identify prepregnancy factors for the development of HTN, in women affected by pre-eclampsia in their first pregnancy. Methods: We enrolled 1910 women who had undergone a National Health Screening Examination through the National Health Insurance Corporation between 2002 and 2003, and who had their first delivery affected by pre-eclampsia in 2004. Women were classified as having HTN if they were newly diagnosed with HTN from 2005 through 2012. Results: After 8 years of follow-up, 7.7% (148/1910) of pre-eclamptic women developed HTN. Using the Cox proportional hazards model, older age (hazard ratio [HR] 3.92, 95% confidence interval [CI] 2.47-6.23), a family history of HTN (HR 2.28, 95% CI 1.46-3.58), prepregnancy obesity (HR 3.74, 95% CI 2.50-5.59), and high blood pressure (BP) (HR 2.78, 95% CI 1.85-4.19) were independently associated with the development of HTN. Conclusions: The results show that the development of HTN in pre-eclamptic women is related to prepregnancy factors. Recognizing which pre-eclamptic women with these prepregnancy factors subsequently develop HTN postpartum could lead to early identification and lifestyle interventions, which could reduce the burden of cardiovascular disease. abstract_id: PUBMED:28092059 The Effects of Race and Ethnicity on the Risk of Large-for-Gestational-Age Newborns in Women Without Gestational Diabetes by Prepregnancy Body Mass Index Categories. Objectives: Children born large for gestational age (LGA) are at risk of numerous adverse outcomes. While the racial/ethnic disparity in LGA risk has been studied among women with gestational diabetes mellitus (GDM), the independent effect of race on LGA risk by maternal prepregnancy BMI is still unclear among women without GDM. Therefore, the objective of this study was to assess the association between maternal race/ethnicity and LGA among women without GDM. Methods: This was a population-based cohort study of 2,842,278 singleton births using 2012 U.S. Natality data. We conducted bivariate and multivariate logistic regression analyses to assess the association between race and LGA. Due to effect modification by maternal prepregnancy BMI, we stratified our analysis by four BMI subgroups. Results: The prevalence of LGA was similar across the different racial/ethnic groups at about 9%, but non-Hispanic Asian Americans had a slightly higher prevalence of 11%. After controlling for potential confounders, minority women had higher odds of birthing LGA babies compared to non-Hispanic white women.
Non-Hispanic Asian Americans had the highest odds of LGA babies across all BMI categories: underweight (aOR = 2.67; 95% CI: 2.24, 3.05); normal weight (aOR = 2.53; 2.43, 2.62); overweight (aOR = 2.45; 2.32, 2.60) and obese (aOR = 2.05; 1.91, 2.20). Conclusions for practice: Racial/ethnic disparities exist in LGA odds, particularly among women with underweight or normal prepregnancy BMI. Most minorities had higher LGA odds than non-Hispanic white women regardless of prepregnancy BMI category. These racial/ethnic disparities should inform public health policies and interventions to address this problem. abstract_id: PUBMED:24147927 Prepregnancy obesity and the risk of birth defects: an update. The growing number of obese women worldwide has many implications for the reproductive health outcomes of mothers and their children. Specifically, prepregnancy obesity has been associated with certain major birth defects. Provided here is a summary of the most recent and comprehensive meta-analysis of reports of associations between prepregnancy obesity and birth defects, along with an update that includes a brief overview of reports of similar associations published since that meta-analysis. The possible reasons for the observed association between prepregnancy obesity and birth defects are explored, and knowledge gaps that suggest possible avenues for future research are highlighted. abstract_id: PUBMED:28236376 Spontaneous and indicated preterm delivery risk is increased among overweight and obese women without prepregnancy chronic disease. Objective: To investigate the independent impact of prepregnancy obesity on preterm delivery among women without chronic diseases by gestational age, preterm category and parity. Design: A retrospective cohort study. Setting: Data from the Consortium on Safe Labor (CSL) in the USA (2002-08). Population: Singleton deliveries at ≥23 weeks of gestation in the CSL (43 200 nulliparas and 63 129 multiparas) with a prepregnancy body mass index (BMI) ≥18.5 kg/m² and without chronic diseases. Methods: The association of prepregnancy BMI with the risk of preterm delivery was examined using Poisson regression with normal weight as the reference. Main Outcome Measures: Preterm deliveries were categorised by gestational age (extremely, very, moderate to late) and category (spontaneous, indicated, no recorded indication). Results: The relative risk of spontaneous preterm delivery was increased for extremely preterm delivery among overweight and obese nulliparas (1.26, 95% CI: 0.94-1.70 for overweight; 1.88, 95% CI: 1.30-2.71 for obese class I; 1.99, 95% CI: 1.32-3.01 for obese class II/III) and decreased for moderate to late preterm delivery among overweight and obese multiparas (0.90, 95% CI: 0.83-0.97 for overweight; 0.87, 95% CI: 0.78-0.97 for obese class I; 0.79, 95% CI: 0.69-0.90 for obese class II/III). Indicated preterm delivery risk increased with prepregnancy BMI in a dose-response manner for extremely preterm and moderate to late preterm delivery among nulliparas, as it did for moderate to late preterm delivery among multiparas. Conclusions: Prepregnancy BMI was associated with an increased risk of preterm delivery even in the absence of chronic diseases, but the association was heterogeneous by preterm category, gestational age and parity. Tweetable Abstract: Obese nulliparas without chronic disease had higher risk for spontaneous delivery <28 weeks of gestation.
Aims: To determine the extent of provision of preconception care among women with prepregnancy diabetes or women who develop gestational diabetes, compared with women without diabetes, and to examine the association between preconception care receipt and diabetes status, adjusting for maternal characteristics. Methods: Data were collected from women who completed the Pregnancy Risk Assessment Monitoring System questionnaire in 10 U.S. states (Hawaii, Maryland, Maine, Michigan, Minnesota, New Jersey, Ohio, Tennessee, Utah and West Virginia) in the period 2009 to 2010. Weighted, self-reported receipt of preconception care by diabetes status was examined. Multivariate logistic regression was used to identify the association between preconception care receipt and diabetes status. Results: Overall, 31% of women reported receiving preconception care. Women with prepregnancy diabetes (53%) reported the highest prevalence of preconception care, while women with gestational diabetes and women without diabetes reported a lower prevalence (32% and 31%, respectively). In the adjusted model, there was no difference in reported preconception care receipt between women with gestational diabetes and women without diabetes (odds ratio 1.1, 95% CI 0.9, 1.3), while women with prepregnancy diabetes were significantly more likely to report receipt of preconception care (odds ratio 2.2, 95% CI 1.5, 3.3) than women without diabetes. Conclusions: Although all women of reproductive age should receive preconception care, it is vital that women with known risk factors, such as those with prepregnancy diabetes and those with risk factors for gestational diabetes, are counselled before pregnancy to optimize maternal and infant health outcomes. It is encouraging that women with prepregnancy diabetes report receiving preconception care more often than women on average, but preconception care is still not reaching all women at high risk. abstract_id: PUBMED:30156498 Prepregnancy Insurance and Timely Prenatal Care for Medicaid Births: Before and After the Affordable Care Act in Ohio. Background/Objective: Persistent instability in insurance coverage before and after pregnancy among low-income mothers in the United States contributes to delayed prenatal care and poor infant outcomes. States that expand Medicaid under the Affordable Care Act (ACA) make public insurance free for many low-income women regardless of parental or pregnancy status. Our objective is to analyze the effects of expanding Medicaid in Ohio on the enrollment of pregnant women and their receipt of recommended prenatal care. A key objective in the state is to address infant mortality, as Ohio ranks above the national average and racial disparities persist. Materials and Methods: We used linked enrollment/claims/birth certificate data for women with Medicaid-paid deliveries/births, aged 19-44 years, with month of last menstrual period (LMP) in calendar years 2011-2015 (N = 290,091). We used interrupted time-series analysis of prepregnancy enrollment and receipt of guideline-concordant screenings (anemia, asymptomatic bacteriuria, chlamydia, human immunodeficiency virus [HIV], and TORCH) and prenatal vitamins after versus before the expansion. We stratified by parity since first-time mothers would be impacted more. Results: We found almost a 12 percentage point increase in prepregnancy enrollment among first-time mothers compared with almost a 6 percentage point increase for parous women. We found significant increases in all screens and vitamins for both groups.
TORCH screening increased 8 percentage points and receipt of prenatal vitamins almost 14 percentage points, by the end of 2015 for first-time mothers, compared with 5 and 4 percentage points, respectively, for parous women. Conclusions: Early enrollment and prenatal care for low-income women in Ohio could erode if the state's Medicaid expansion is altered. Answer: Yes, prepregnancy care of diabetic women can reduce the risk of abnormal babies. A study analyzing diabetic control in early pregnancy showed that women who attended a prepregnancy clinic had a lower incidence of major congenital malformation in their infants compared to those who did not receive specific prepregnancy care (PUBMED:2249069). The women who attended the clinic had a lower haemoglobin AI concentration in the first trimester and a higher incidence of hypoglycaemia in early pregnancy, but despite the increased hypoglycaemia, there were fewer infants with congenital abnormalities. This suggests that tight control of the maternal blood glucose concentration in the early weeks of pregnancy, as achieved by the prepregnancy clinic approach, is associated with a significant reduction in the risk of serious congenital abnormalities in the offspring. Hypoglycaemic episodes did not seem to lead to fetal malformation even when they occurred during the period of organogenesis (PUBMED:2249069).
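The Ohio Medicaid analysis above (PUBMED:30156498) rests on interrupted time-series regression. The following sketch is purely illustrative; the toy data, effect sizes, and variable names are assumptions rather than figures from the study. In Python, a minimal segmented regression separates the post-expansion level shift from the pre-existing trend:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy monthly series, 2011-2015 (60 months), with the expansion at month 36.
    # All numbers are invented; the jump of ~0.12 merely echoes the reported
    # ~12-percentage-point rise in prepregnancy enrollment for first-time mothers.
    rng = np.random.default_rng(0)
    time = np.arange(60)
    post = (time >= 36).astype(int)                # 1 after the expansion
    time_post = np.where(post == 1, time - 36, 0)  # months elapsed since expansion
    enrolled = 0.30 + 0.001 * time + 0.12 * post + rng.normal(0, 0.01, 60)
    df = pd.DataFrame({"enrolled": enrolled, "time": time,
                       "post": post, "time_post": time_post})

    # Segmented regression: 'post' captures the level change at the expansion,
    # 'time_post' any change in slope relative to the baseline trend.
    fit = smf.ols("enrolled ~ time + post + time_post", data=df).fit()
    print(fit.params)

The coefficient on post is then read as the immediate change attributable to the policy, under the usual interrupted time-series assumptions (no concurrent shocks and a correctly specified baseline trend).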
Instruction: Do race-specific models explain disparities in treatments after acute myocardial infarction? Abstracts: abstract_id: PUBMED:17452154 Do race-specific models explain disparities in treatments after acute myocardial infarction? Background: Racial differences in healthcare are well known, although some have challenged previous research where risk-adjustment assumed covariates affect whites and blacks equally. If incorrect, this assumption may misestimate disparities. We sought to determine whether clinical factors affect treatment decisions for blacks and whites equally. Methods: We used data from the Cardiovascular Cooperative Project for 130,709 white and 8286 black patients admitted with an acute myocardial infarction. We examined the rates of receipt of 6 treatments using conventional common-effects models, where covariates affect whites and blacks equally, and race-specific models, where the effect of each covariate can vary by race. Results: The common-effects models showed that blacks were less likely to receive 5 of the 6 treatments (odds ratios 0.64-1.10). The race-specific models displayed nearly identical treatment disparities (odds ratios 0.65-1.07). We found no interaction effect that systematically suggested the presence of race-specific effects. Conclusions: Race-specific models yield nearly identical estimates of racial disparities to those obtained from conventional models. This suggests that clinical variables, such as hypertension or diabetes, seem to affect treatment decisions equally for whites and blacks. Previously described racial disparities in care are unlikely to be an artifact of misspecified models. abstract_id: PUBMED:30666634 Measuring hospital-specific disparities by dual eligibility and race to reduce health inequities. Objective: To propose and evaluate a metric for quantifying hospital-specific disparities in health outcomes that can be used by patients and hospitals. Data Sources/study Setting: Inpatient admissions for Medicare patients with acute myocardial infarction, heart failure, or pneumonia to all non-federal, short-term, acute care hospitals during 2012-2015. Study Design: Building on the current Centers for Medicare and Medicaid Services methodology for calculating risk-standardized readmission rates, we developed models that include a hospital-specific random coefficient for either patient dual eligibility status or African American race. These coefficients quantify the difference in risk-standardized outcomes by dual eligibility and race at a given hospital after accounting for the hospital's patient case mix and proportion of dual eligible or African American patients. We demonstrate this approach and report variation and performance in hospital-specific disparities. Principal Findings: Dual eligibility and African American race were associated with higher readmission rates within hospitals for all three conditions. However, this disparity effect varied substantially across hospitals. Conclusion: Our models isolate a hospital-specific disparity effect and demonstrate variation in quality of care for different groups of patients across conditions and hospitals. Illuminating within-hospital disparities can incentivize hospitals to reduce inequities in health care quality. abstract_id: PUBMED:27468142 Race and Sex Differences in Management and Outcomes of Patients After ST-Elevation and Non-ST-Elevation Myocardial Infarct: Results From the NCDR.
Background: Race and sex have been shown to affect management of myocardial infarction (MI); however, it is unclear if such disparities exist in contemporary care of ST-segment elevation myocardial infarction (STEMI) and non-ST-segment elevation myocardial infarction (NSTEMI). Hypothesis: Disparities in care will be less prevalent in more heavily protocol-driven management of STEMI than the less algorithmic care of NSTEMI. Methods: Data were collected from the ACTION Registry-GWTG database to assess care differences related to race and sex of patients presenting with NSTEMI or STEMI. For key treatments and outcomes, adjustments were made including patient demographics, baseline comorbidities, and markers of socioeconomic status. Results: Key demographic variables demonstrate significant differences in baseline comorbidities; black patients had higher incidences of hypertension and diabetes, and women more frequently had diabetes. With few exceptions, rates of acute and discharge medical therapy were similar by race in any sex category in both STEMI and NSTEMI populations. Rates of catheterization were similar by race for STEMI but not for NSTEMI, where both black men and women had lower rates of invasive therapy. Rates of revascularization were significantly lower for black patients in both the STEMI and NSTEMI groups regardless of sex. Rates of adverse events differed by sex, with disparities for death and major bleeding; after adjustment, rates were similar by race within sex comparisons. Conclusions: In this contemporary cohort, although there are differences by race in presentation and management of MI, heavily protocol-driven processes seem to show fewer racial disparities. abstract_id: PUBMED:26703665 Harnessing Data to Assess Equity of Care by Race, Ethnicity and Language. Objective: Determine any disparities in care based on race, ethnicity and language (REaL) by utilizing inpatient (IP) core measures at Texas Health Resources, a large, faith-based, non-profit health care delivery system located in a large, ethnically diverse metropolitan area in Texas. These measures, which were established by the U.S. Centers for Medicare and Medicaid Services (CMS) and The Joint Commission (TJC), help to ensure better accountability for patient outcomes throughout the U.S. health care system. Methods: Sample analysis to understand the architecture of race, ethnicity and language (REaL) variables within the Texas Health clinical database, followed by development of the logic, method and framework for isolating populations and evaluating disparities by race (non-Hispanic White, non-Hispanic Black, Native American/Native Hawaiian/Pacific Islander, Asian and Other); ethnicity (Hispanic and non-Hispanic); and preferred language (English and Spanish). The study is based on use of existing clinical data for four inpatient (IP) core measures: Acute Myocardial Infarction (AMI), Congestive Heart Failure (CHF), Pneumonia (PN) and Surgical Care (SCIP), representing 100% of the sample population. These comprise a high number of cases presenting in our acute care facilities. Findings are based on a sample of clinical data (N = 19,873 cases) for the four inpatient (IP) core measures derived from 13 of Texas Health's wholly-owned facilities, formulating a set of baseline data. 
Results: Based on the applied method, Texas Health facilities consistently scored high with no discernable race, ethnicity and language (REaL) disparities as evidenced by a low percentage difference to the reference point (non-Hispanic White) on IP core measures, including: AMI (0.3%-1.2%), CHF (0.7%-3.0%), PN (0.5%-3.7%), and SCIP (0-0.7%). abstract_id: PUBMED:23269575 Racial and ethnic disparities in the surgical treatment of acute myocardial infarction: the role of hospital and physician effects. Many studies document disparities between Blacks and Whites in the treatment of acute myocardial infarction on controlling for patient demographic factors and comorbid conditions. Other studies provide evidence of disparities between Hispanics and Whites in cardiac care. Such disparities may be explained by differences in the hospitals where minority and nonminority patients obtain treatment and by differences in the traits of physicians who treat minority and nonminority patients. We used 1997-2005 Florida hospital inpatient discharge data to estimate models of cardiac catheterization, percutaneous transluminal coronary angioplasty, and coronary artery bypass grafting in Medicare fee-for-service patients 65 years and older. Controlling for hospital fixed effects does not explain Black-White disparities in cardiac treatment but largely explains Hispanic-White disparities. Controlling for physician fixed effects accounts for some extent of the racial disparities in treatment and entirely explains the ethnic disparities in treatment. abstract_id: PUBMED:33918132 Racial Disparities in the Utilization and Outcomes of Temporary Mechanical Circulatory Support for Acute Myocardial Infarction-Cardiogenic Shock. Racial disparities in utilization and outcomes of mechanical circulatory support (MCS) in patients with acute myocardial infarction-cardiogenic shock (AMI-CS) are infrequently studied. This study sought to evaluate racial disparities in the outcomes of MCS in AMI-CS. The National Inpatient Sample (2012-2017) was used to identify adult AMI-CS admissions receiving MCS support. MCS devices were classified as intra-aortic balloon pump (IABP), percutaneous left ventricular assist device (pLVAD) or extracorporeal membrane oxygenation (ECMO). Self-reported race was classified as white, black and others. Outcomes included in-hospital mortality, hospital length of stay and discharge disposition. During this period, 90,071 admissions were included with white, black and other races constituting 73.6%, 8.3% and 18.1%, respectively. Compared to white and other races, black race admissions were on average younger, more often female, and had greater comorbidities and non-cardiac organ failure (all p < 0.001). Compared to the white race (31.3%), in-hospital mortality was comparable in black admissions (31.4%; adjusted odds ratio (aOR) 0.98 (95% confidence interval (CI) 0.93-1.05); p = 0.60) and other races (30.2%; aOR 0.96 (95% CI 0.92-1.01); p = 0.10). Higher in-hospital mortality was noted in non-white races with concomitant cardiac arrest, and those receiving ECMO support. Black admissions had longer lengths of hospital stay (12.1 ± 14.2, 10.3 ± 11.2, 10.9 ± 1.2 days) and were transferred less often (12.6%, 14.2%, 13.9%) compared to white and other races (both p < 0.001). In conclusion, this study of AMI-CS admissions receiving MCS devices did not identify racial disparities in in-hospital mortality. Black admissions had longer hospital stay and were transferred less often.
Further evaluation with granular data including angiographic and hemodynamic parameters is essential to rule out racial differences. abstract_id: PUBMED:37306075 Race differences in cardiac testing rates for patients with chest pain in a multisite cohort. Background: Identifying and eliminating racial health care disparities is a public health priority. However, data evaluating race differences in emergency department (ED) chest pain care are limited. Methods: We conducted a secondary analysis of the High-Sensitivity Cardiac Troponin T to Optimize Chest Pain Risk Stratification (STOP-CP) cohort, which prospectively enrolled adults with symptoms suggestive of acute coronary syndrome without ST-elevation from eight EDs in the United States from 2017 to 2018. Race was self-reported by patients and abstracted from health records. Rates of 30-day noninvasive testing (NIT), cardiac catheterization, revascularization, and adjudicated cardiac death or myocardial infarction (MI) were determined. Logistic regression was used to evaluate the association between race and 30-day outcomes with and without adjustment for potential confounders. Results: Among 1454 participants, 42.3% (615/1454) were non-White. At 30 days NIT occurred in 31.4% (457/1454), cardiac catheterization in 13.5% (197/1454), revascularization in 6.0% (87/1454), and cardiac death or MI in 13.1% (190/1454). Among Whites versus non-Whites, NIT occurred in 33.8% (284/839) versus 28.1% (173/615; odds ratio [OR] 0.76, 95% confidence interval [CI] 0.61-0.96) and catheterization in 15.9% (133/839) versus 10.4% (64/615; OR 0.62, 95% CI 0.45-0.84). After covariates were adjusted for, non-White race remained associated with decreased 30-day NIT (adjusted OR [aOR] 0.71, 95% CI 0.56-0.90) and cardiac catheterization (aOR 0.62, 95% CI 0.43-0.88). Revascularization occurred in 6.9% (58/839) of Whites versus 4.7% (29/615) of non-Whites (OR 0.67, 95% CI 0.42-1.04). Cardiac death or MI at 30 days occurred in 14.2% of Whites (119/839) versus 11.5% (71/615) of non-Whites (OR 0.79 95% CI 0.57-1.08). After adjustment there was still no association between race and 30-day revascularization (aOR 0.74, 95% CI 0.45-1.20) or cardiac death or MI (aOR 0.74, 95% CI 0.50-1.09). Conclusions: In this U.S. cohort, non-White patients were less likely to receive NIT and cardiac catheterization compared to Whites but had similar rates of revascularization and cardiac death or MI. abstract_id: PUBMED:35699168 Association of Race and Ethnicity on the Management of Acute Non-ST-Segment Elevation Myocardial Infarction. Background Prior studies have reported disparities by race in the management of acute myocardial infarction (MI), with many studies having limited covariates or now dated. We examined racial and ethnic differences in the management of MI, specifically non-ST-segment-elevation MI (NSTEMI), in a large, socially diverse cohort of insured patients. We hypothesized that the racial and ethnic disparities in the receipt of coronary angiography or percutaneous coronary intervention would persist in contemporary data. Methods and Results We identified individuals presenting with incident, type I NSTEMI from 2017 to 2019 captured by a health claims database. Race and ethnicity were categorized by the database as Asian, Black, Hispanic, or White. 
Covariates included demographics (age, sex, race, and ethnicity); Elixhauser variables, including cardiovascular risk factors and other comorbid conditions; and social factors of estimated annual household income and educational attainment. We examined rates of coronary angiography and percutaneous coronary intervention by race and ethnicity and income categories and in multivariable-adjusted models. We identified 87 094 individuals (age 73.8±11.6 years; 55.6% male; 2.6% Asian, 13.4% Black, 11.2% Hispanic, 72.7% White) with incident NSTEMI events from 2017 to 2019. Individuals of Black race were less likely to undergo coronary angiography (odds ratio [OR], 0.93; [95% CI, 0.89-0.98]) and percutaneous coronary intervention (OR, 0.86; [95% CI, 0.81-0.90]) than those of White race. Hispanic individuals were less likely (OR, 0.88; [95% CI, 0.84-0.93]) to undergo coronary angiography and percutaneous coronary intervention (OR, 0.85; [95% CI, 0.81-0.89]) than those of White race. Higher annual household income attenuated differences in the receipt of coronary angiography across all racial and ethnic groups. Conclusions We identified significant racial and ethnic differences in the management of individuals presenting with NSTEMI that were marginally attenuated by higher household income. Our findings suggest continued evidence of health inequities in contemporary NSTEMI treatment. abstract_id: PUBMED:32700216 Identifying Racial/Ethnic Disparities in Interhospital Transfer: an Observational Study. Background: Interhospital transfer (IHT) is often performed to provide patients with specialized care. Racial/ethnic disparities in IHT have been suggested but are not well-characterized. Objective: To evaluate the association between race/ethnicity and IHT. Design: Cross-sectional analysis of 2016 National Inpatient Sample data. Patients: Patients aged ≥ 18 years old with common medical diagnoses at transfer, including acute myocardial infarction, congestive heart failure, arrhythmia, stroke, sepsis, pneumonia, and gastrointestinal bleed. Main Measures: We performed a series of logistic regression models to estimate adjusted odds of transfer by race/ethnicity controlling for patient demographics, clinical variables, and hospital characteristics and to identify potential mediators. In secondary analyses, we estimated adjusted odds of transfer among patients at community hospitals (those more likely to transfer patients) and performed subgroup analyses by region and primary medical diagnosis. Key Results: Of 5,774,175 weighted hospital admissions, 199,015 (4.5%) underwent IHT, including 4.7% of White patients, compared with 3.9% of Black patients and 3.8% of Hispanic patients. Black (OR 0.83, 95% CI 0.78-0.89) and Hispanic (OR 0.81, 95% CI 0.75-0.87) patients had lower crude odds of transfer compared with White patients, but this became non-significant after adjusting for hospital-level characteristics. In secondary analyses among patients hospitalized at community hospitals, Hispanic patients had lower adjusted odds of transfer (aOR 0.89, 95% CI 0.79-0.98). Disparities in IHT by race/ethnicity varied by region and medical diagnosis. Conclusions: Black and Hispanic patients had lower odds of IHT, largely explained by a higher likelihood of being hospitalized at urban teaching hospitals. Racial/ethnic disparities in transfer were demonstrated at community hospitals, in certain geographic regions and among patients with specific diseases. 
abstract_id: PUBMED:26353998 Young Hispanic Women Experience Higher In-Hospital Mortality Following an Acute Myocardial Infarction. Background: Although mortality rates for acute myocardial infarction (AMI) have declined for men and women, prior studies have reported a sex gap in mortality such that younger women were most likely to die after an AMI. Methods And Results: We sought to explore the impact of race and ethnicity on the sex gap in AMI patterns of care and mortality for younger women in a contemporary patient cohort. We constructed multivariable hierarchical logistic regression models to examine trends in AMI hospitalizations, procedures, and in-hospital mortality by sex, age (<65 and ≥65 years), and race/ethnicity (white, black, and Hispanic). Analyses were derived from 194 071 patients who were hospitalized for an AMI with available race and ethnicity data from the 2009-2010 National Inpatient Sample. Hospitalization rates, procedures (coronary angiography, percutaneous coronary interventions, and cardiac bypass surgery), and inpatient mortality were analyzed across age, sex, and race/ethnic groups. There was significant variation in hospitalization rates by age and race/ethnicity. All racial/ethnic groups were less likely to undergo invasive procedures compared with white men (P<0.001). After adjustment for comorbidities, younger Hispanic women experienced higher in-hospital mortality compared with younger white men, with an odds ratio of 1.5 (95% CI 1.2 to 1.9), adjusted for age and comorbidities. Conclusion: We found significant racial and sex disparities in AMI hospitalizations, care patterns, and mortality, with higher in-hospital mortality experienced by younger Hispanic women. Future studies are necessary to explore determinants of these significant racial and sex disparities in outcomes for AMI. Answer: Race-specific models do not appear to significantly alter the estimation of disparities in treatments after acute myocardial infarction. A study using data from the Cardiovascular Cooperative Project found that race-specific models yielded nearly identical estimates of racial disparities to those obtained from conventional models, suggesting that clinical variables such as hypertension or diabetes seem to affect treatment decisions equally for whites and blacks. This indicates that previously described racial disparities in care are unlikely to be an artifact of misspecified models (PUBMED:17452154). Additional studies have found that racial disparities in healthcare persist, with evidence of differences in treatment and outcomes by race. For example, research has shown that dual eligibility and African American race were associated with higher readmission rates within hospitals for certain conditions, and this disparity effect varied substantially across hospitals (PUBMED:30666634). Another study found that while rates of acute and discharge medical therapy were similar by race in any sex category for both STEMI and NSTEMI populations, rates of catheterization were similar by race for STEMI but not for NSTEMI, where black men and women had lower rates of invasive therapy (PUBMED:27468142). Moreover, disparities in the utilization and outcomes of mechanical circulatory support in patients with acute myocardial infarction-cardiogenic shock were observed, with black admissions having longer hospital stays and being transferred less often, although in-hospital mortality did not show racial disparities (PUBMED:33918132).
Another study reported that non-White patients were less likely to receive noninvasive testing and cardiac catheterization compared to Whites but had similar rates of revascularization and cardiac death or myocardial infarction (PUBMED:37306075). In summary, while race-specific models do not seem to significantly change the estimates of racial disparities in treatment after acute myocardial infarction, disparities in care based on race do exist, as evidenced by differences in treatment rates, readmission rates, and outcomes across various studies.
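The contrast between common-effects and race-specific models in PUBMED:17452154 amounts to fitting a logistic regression without versus with a full set of race-by-covariate interactions. The sketch below uses invented toy data and variable names purely to illustrate the two specifications; it is not the study's code or data:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy data: receipt of a treatment vs. race and two clinical covariates.
    rng = np.random.default_rng(1)
    n = 5000
    race = rng.choice(["white", "black"], size=n, p=[0.94, 0.06])
    diabetes = rng.integers(0, 2, n)
    hypertension = rng.integers(0, 2, n)
    logit = -0.5 - 0.4 * (race == "black") - 0.3 * diabetes - 0.2 * hypertension
    treated = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    df = pd.DataFrame({"treated": treated, "race": race,
                       "diabetes": diabetes, "hypertension": hypertension})

    # Common-effects model: each covariate is constrained to act equally in both groups.
    common = smf.logit("treated ~ race + diabetes + hypertension", data=df).fit(disp=0)

    # Race-specific model: race-by-covariate interactions let every effect differ by race.
    specific = smf.logit("treated ~ race * (diabetes + hypertension)", data=df).fit(disp=0)

    print(np.exp(common.params))    # odds ratios under shared covariate effects
    print(np.exp(specific.params))  # main effects plus interaction terms

Interaction terms near zero (odds ratios near 1) correspond to the study's finding that the two specifications yield nearly identical disparity estimates.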
Instruction: Indicators for Universal Health Coverage: can Kenya comply with the proposed post-2015 monitoring recommendations? Abstracts: abstract_id: PUBMED:25532714 Indicators for Universal Health Coverage: can Kenya comply with the proposed post-2015 monitoring recommendations? Introduction: Universal Health Coverage (UHC), referring to access to healthcare without financial burden, has received renewed attention in global health spheres. UHC is a potential goal in the post-2015 development agenda. Monitoring of progress towards achieving UHC is thus critical at both country and global level, and a monitoring framework for UHC was proposed by a joint WHO/World Bank discussion paper in December 2013. The aim of this study was to determine the feasibility of the framework proposed by WHO/World Bank for global UHC monitoring in Kenya. Methods: The study utilised three documents: the joint WHO/World Bank UHC monitoring framework, its update, and the Bellagio meeting report sponsored by WHO and the Rockefeller Foundation. These documents informed the list of potential indicators that were used to determine the feasibility of the framework. A purposive literature search was undertaken to identify key government policy documents and relevant scholarly articles. A desk review of the literature was undertaken to answer the research objectives of this study. Results: Kenya has yet to establish an official policy on UHC that provides a clear mandate on the goals, targets and monitoring and evaluation of performance. However, a significant majority of Kenyans continue to have limited access to health services as well as limited financial risk protection. The country has the capacity to reasonably report on five out of the seven proposed UHC indicators. However, there was very limited capacity to report on the two service coverage indicators for the chronic condition and injuries (CCIs) interventions. Out of the potential tracer indicators (n = 27) for aggregate CCI-related measures, four tracer indicators were available. Moreover, the country experiences some wider challenges that may impact on the implementation and feasibility of the WHO/World Bank framework. Conclusion: The proposed global framework for monitoring UHC will only be feasible in Kenya if systemic challenges are addressed. While the infrastructure for reporting the MDG-related indicators is in place, Kenya will require continued international investment to extend its capacity to meet the data requirements of the proposed UHC monitoring framework, particularly for the CCI-related indicators. abstract_id: PUBMED:26077857 Universal Health Coverage and the Right to Health: From Legal Principle to Post-2015 Indicators. Universal Health Coverage (UHC) is widely considered one of the key components for the post-2015 health goal. The idea of UHC is rooted in the right to health, set out in the International Covenant on Economic, Social, and Cultural Rights. Based on the Covenant and the General Comment of the Committee on Economic, Social, and Cultural Rights, which is responsible for interpreting and monitoring the Covenant, we identify 6 key legal principles that should underpin UHC based on the right to health: minimum core obligation, progressive realization, cost-effectiveness, shared responsibility, participatory decision making, and prioritizing vulnerable or marginalized groups.
Yet, although these principles are widely accepted, they are criticized for not being specific enough to operationalize as post-2015 indicators for reaching the target of UHC. In this article, we propose measurable and achievable indicators for UHC based on the right to health that can be used to inform the ongoing negotiations on Sustainable Development Goals. However, we identify 3 major challenges that face any exercise in setting indicators post-2015: data availability as an essential criterion, the universality of targets, and the adaptation of global goals to local populations. abstract_id: PUBMED:29541562 Measuring Progress Toward Universal Health Coverage: Does the Monitoring Framework of Bangladesh Need Further Improvement? This review aimed to compare Bangladesh's Universal Health Coverage (UHC) monitoring framework with the global-level recommendations and to identify the existing gaps in Bangladesh's UHC monitoring framework compared with the global recommendations. In order to reach the aims of the review, we systematically searched two electronic databases - PubMed and Google Scholar - by using appropriate keywords to select articles that describe issues related to UHC and the monitoring framework of UHC applied globally and particularly in Bangladesh. Four relevant documents were found and synthesized. The review found that Bangladesh incorporated all of the recommendations suggested by the global monitoring framework regarding monitoring the financial risk protection and equity perspective. However, a significant gap in the monitoring framework related to service coverage was observed. Although Bangladesh has a significant burden of mental illnesses, cataract, and neglected tropical diseases, indicators related to these issues were absent in Bangladesh's UHC framework. Moreover, palliative-care-related indicators were completely missing in the framework. The results of this review suggest that Bangladesh should incorporate these indicators in their UHC monitoring framework in order to track the progress of the country toward UHC more efficiently and in a robust way. abstract_id: PUBMED:28480749 Seeking consensus on universal health coverage indicators in the sustainable development goals. There is optimism that the inclusion of universal health coverage in the Sustainable Development Goals advances its prominence in global and national health policy. However, formulating indicators for Target 3.8 through the Inter-Agency Expert Group on Sustainable Development Indicators has been challenging. Achieving consensus on the conceptual and methodological aspects of universal health coverage is likely to take some time in multi-stakeholder fora compared with national efforts to select indicators. abstract_id: PUBMED:27706437 Availability of indicators for monitoring the achievement of "Universal Health" in Latin America and the Caribbean. Objective: The objective of this study was to identify the availability of health indicators for validly measuring advances in the attainment of "universal health" in Latin America and the Caribbean (LAC). Methods: A systematic search was undertaken for scientific evidence and available technical and scientific documents on assessing health system performance and advances in universal health in the following phases: phase 1, mapping of indicators; phase 2, classification of indicators; and phase 3, mapping the availability of selected indicators in LAC.
Results: Sixty-three (63) national sources of information and eight international sources were identified. A total of 749 indicators were selected from the different databases and studies evaluated, 619 of which were related to the attainment of universal health and 130 to the burden of disease. The following indicators were identified: financial protection, 42 (6%); coverage of service delivery, 415 (55.4%); population coverage, 6 (0.8%); health determinants, 101 (14%); assessment of inequalities in health, 55 (7.3%); and estimation of burden of disease, 130 (17.3%). Finally, the availability of 141 indicators was mapped for each LAC country. Conclusions: The results of this study will help establish a framework for measuring the achievements, obstacles, and rate of progress toward universal health in LAC. abstract_id: PUBMED:24569977 Integrating social determinants of health in the universal health coverage monitoring framework. Underpinning the global commitment to universal health coverage (UHC) is the fundamental role of health for well-being and sustainable development. UHC is proposed as an umbrella health goal in the post-2015 sustainable development agenda because it implies universal and equitable effective delivery of comprehensive health services by a strong health system, aligned with multiple sectors around the shared goal of better health. In this paper, we argue that social determinants of health (SDH) are central to both the equitable pursuit of healthy lives and the provision of health services for all and, therefore, should be expressly incorporated into the framework for monitoring UHC. This can be done by: (a) disaggregating UHC indicators by different measures of socioeconomic position to reflect the social gradient and the complexity of social stratification; and (b) connecting health indicators, both outcomes and coverage, with SDH and policies within and outside of the health sector. Not locating UHC in the context of action on SDH increases the risk of going down a narrow route that limits the right to health to coverage of services and financial protection. abstract_id: PUBMED:37120527 Indicators of integrating oral health care within universal health coverage and general health care in low-, middle-, and high-income countries: a scoping review. Background: The World Health Organization (WHO) has recently devoted special attention to oral health and oral health care, recommending that the latter become part of universal health coverage (UHC) so as to reduce oral health inequalities across the globe. In this context, as countries consider acting on this recommendation, it is essential to develop a monitoring framework to measure the progress of integrating oral health/health care into UHC. This study aimed to identify existing measures in the literature that could be used to indicate oral health/health care integration within UHC across a range of low-, middle- and high-income countries. Methods: A scoping review was conducted by searching MEDLINE via Ovid, CINAHL, and Ovid Global Health databases. There were no quality or publication date restrictions in the search strategy. An initial search by an academic librarian was followed by the independent reviewing of all identified articles by two authors for inclusion or exclusion based on the relevance of the work in the articles to the review topic. The included articles were all published in English.
Articles on which the reviewers disagreed about inclusion or exclusion were reviewed by a third author, and subsequent discussion resolved which articles would be included. The included articles were reviewed to identify relevant indicators and the results were descriptively mapped using a simple frequency count of the indicators. Results: The 83 included articles covered work from 32 countries and were published between 1995 and 2021. The review identified 54 indicators divided into 15 categories. The most frequently reported indicators were in the following categories: dental service utilization, oral health status, cost/service/population coverage, finances, health facility access, and workforce and human resources. This study was limited by the databases searched and the use of English-language publications only. Conclusions: This scoping review identified 54 indicators across 15 categories that have the potential to be used to evaluate the integration of oral health/health care into UHC across a wide range of countries. abstract_id: PUBMED:32745081 Monitoring Universal Health Coverage reforms in primary health care facilities: Creating a framework, selecting and field-testing indicators in Kerala, India. In line with the Sustainable Development Goals (SDGs) and the target for achieving Universal Health Coverage (UHC), state level initiatives to promote health with "no-one left behind" are underway in India. In Kerala, reforms under the flagship Aardram mission include upgradation of Primary Health Centres (PHCs) to Family Health Centres (FHCs, similar to the national model of health and wellness centres (HWCs)), with the proactive provision of a package of primary care services for the population in an administrative area. We report on a component of Aardram's monitoring and evaluation framework for primary health care, where tracer input, output, and outcome indicators were selected using a modified Delphi process and field tested. A conceptual framework and indicator inventory were developed drawing upon literature review and stakeholder consultations, followed by mapping of manual registers currently used in PHCs to identify sources of data and processes of monitoring. The indicator inventory was reduced to a list using a modified Delphi method, followed by facility-level field testing across three districts. The modified Delphi comprised 25 participants in two rounds, who brought the list down to 23 approved and 12 recommended indicators. Three types of challenges in monitoring indicators were identified: appropriateness of indicators relative to local use, lack of clarity or procedural differences among those doing the reporting, and validity of data. Further field-testing of indicators, as well as the revision or removal of some, may be required to support ongoing health systems reform, learning, monitoring and evaluation. abstract_id: PUBMED:27994283 Summary indices for monitoring universal coverage in maternal and child health care. Objective: To compare two summary indicators for monitoring universal coverage of reproductive, maternal, newborn and child health care.
Methods: Using our experience of the Countdown to 2015 initiative, we describe the characteristics of the composite coverage index (a weighted average of eight preventive and curative interventions along the continuum of care) and co-coverage index (a cumulative count of eight preventive interventions that should be received by all mothers and children). For in-depth analysis and comparisons, we extracted data from 49 demographic and health surveys. We calculated percentage coverage for the two summary indices, and correlated these with each other and with outcome indicators of mortality and undernutrition. We also stratified the summary indicators by wealth quintiles for a subset of nine countries. Findings: Data on the component indicators in the required age range were less often available for co-coverage than for the composite coverage index. The composite coverage index and co-coverage with 6+ indicators were strongly correlated (Pearson r = 0.73, P < 0.001). The composite coverage index was more strongly correlated with under-five mortality, neonatal mortality and prevalence of stunting (r = -0.57, -0.68 and -0.46 respectively) than was co-coverage (r = -0.49, -0.43 and -0.33 respectively). Both summary indices provided useful summaries of the degrees of inequality in the countries' coverage. Adding more indicators did not substantially affect the composite coverage index. Conclusion: The composite coverage index, based on the average value of separate coverage indicators, is easy to calculate and could be useful for monitoring progress and inequalities in universal health coverage. abstract_id: PUBMED:24347731 Monitoring service delivery for universal health coverage: the Service Availability and Readiness Assessment. Objective: To describe the Service Availability and Readiness Assessment (SARA) and the results of its implementation in six countries across three continents. Methods: The SARA is a comprehensive approach for assessing and monitoring health service availability and the readiness of facilities to deliver health-care interventions, with a standardized set of indicators that cover all main programmes. Standardized data-collection instruments are used to gather information on a defined set of selected tracer items from public and private health facilities through a facility sample survey or census. Results from assessments in six countries are shown. Findings: The results highlight important gaps in service delivery that are obstacles to universal access to health services. Considerable variation was found within and across countries in the distribution of health facility infrastructure and workforce and in the types of services offered. Weaknesses in laboratory diagnostic capacities and gaps in essential medicines and commodities were common across all countries. Conclusion: The SARA fills an important information gap in monitoring health system performance and universal health coverage by providing objective and regular information on all major health programmes that feeds into country planning cycles. Answer: Based on the abstract provided (PUBMED:25532714), Kenya faces challenges in complying with the proposed post-2015 monitoring recommendations for Universal Health Coverage (UHC). While the country has the capacity to report on five out of the seven proposed UHC indicators, it has very limited capacity to report on the two service coverage indicators for chronic condition and injuries (CCIs) interventions.
Out of the potential tracer indicators for aggregate CCI-related measures, only four tracer indicators were available. Additionally, systemic challenges may impact the implementation and feasibility of the WHO/World Bank framework for monitoring UHC in Kenya. The country would require continued international investment to extend its capacity to meet the data requirements of the proposed UHC monitoring framework, particularly for the CCI-related indicators. Therefore, while the infrastructure for reporting the Millennium Development Goal (MDG) related indicators is in place, Kenya still needs to address several issues to fully comply with the proposed post-2015 UHC monitoring recommendations.
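The two summary indices compared in PUBMED:27994283 are straightforward to compute once the component indicators are available. The sketch below is illustrative only: the equal weights are a placeholder (the published composite coverage index weights indicators by stage along the continuum of care rather than uniformly), and all coverage values are invented:

    import numpy as np

    # Eight intervention coverage proportions for one survey (invented values).
    coverage = np.array([0.62, 0.71, 0.55, 0.80, 0.68, 0.74, 0.49, 0.66])

    # Composite coverage index: a weighted average of the eight indicators.
    weights = np.full(8, 1 / 8)          # placeholder; not the published weights
    cci = float(weights @ coverage) * 100
    print(f"composite coverage index: {cci:.1f}%")

    # Co-coverage: count, per mother-child pair, how many of the eight preventive
    # interventions were received, then report the share receiving 6 or more.
    received = np.random.default_rng(2).random((1000, 8)) < coverage
    co_coverage_6plus = (received.sum(axis=1) >= 6).mean() * 100
    print(f"co-coverage (6+ interventions): {co_coverage_6plus:.1f}%")

This also makes the practical difference between the indices visible: the composite index needs only aggregate coverage estimates, whereas co-coverage needs all eight indicators measured on the same mother-child pairs, which is why the data were less often available for it.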
Instruction: Does a bidirectional Glenn shunt improve the oxygenation of right ventricle-dependent coronary circulation in pulmonary atresia with intact ventricular septum? Abstracts: abstract_id: PUBMED:16214519 Does a bidirectional Glenn shunt improve the oxygenation of right ventricle-dependent coronary circulation in pulmonary atresia with intact ventricular septum? Objective: There is a risk of myocardial ischemia in patients with pulmonary atresia and intact ventricular septum associated with the right ventricle-dependent coronary circulation. In this patient group, the oxygen delivery to the myocardium depends on the oxygen saturation of the right ventricular cavity. We hypothesized that bidirectional Glenn shunt would improve the oxygenation of right ventricle-dependent coronary circulation relative to a systemic-pulmonary artery shunt. The reduction of systemic venous return to the right atrium due to a bidirectional Glenn shunt could increase the oxygen saturation of the right ventricle in the clinical setting, when the mixture of systemic and pulmonary venous blood is unchanged at the atrial level. Methods: Patients with right ventricle-dependent coronary circulation were defined as those with right ventricle-coronary artery fistulas plus stenoses of the right or left coronary arteries. For 7 patients with right ventricle-dependent coronary circulation before and after bidirectional Glenn shunt, cardiac catheterization was performed and the oxygen saturation of the right ventricular cavity was measured. Results: For all 7 patients, the bidirectional Glenn shunt was performed at a mean age of 18 months. Ischemic changes in the electrocardiogram before the bidirectional Glenn shunt improved after the procedure in 2 patients. The oxygen saturation of the right ventricular cavity before the bidirectional Glenn shunt was 54.6 +/- 8.8%, and that after the BGS significantly increased to 75.6 +/- 5.8% (P < .01). All 7 patients have subsequently undergone the Fontan procedure with excellent results. Conclusion: Early bidirectional Glenn shunt could prevent progression of myocardial ischemia in pulmonary atresia with intact ventricular septum with right ventricle-dependent coronary circulation. abstract_id: PUBMED:35105393 Very preterm and very low birthweight infant with pulmonary atresia intact ventricular septum, right ventricle-dependent coronary circulation, and discontinuous pulmonary arteries. Prematurity and low birthweight are associated with increased mortality in infants undergoing cardiac surgery. Pulmonary atresia with intact ventricular septum and right ventricle-dependent coronary circulation carries one of the highest risks of mortality. We present a patient who was born at 28 weeks of gestation at 1.2 kg, with pulmonary atresia intact ventricular septum, right ventricle-dependent coronary circulation, coronary artery atresia, and discontinuous pulmonary arteries, who successfully underwent palliation with a modified Blalock-Taussig shunt, pulmonary arterioplasty, and subsequently a bidirectional Glenn. abstract_id: PUBMED:36935831 Absent left main coronary artery in a case of pulmonary atresia-intact ventricular septum and right ventricle-dependent coronary circulation. Right ventricle-dependent coronary circulation coexisting with left main coronary atresia in the setting of pulmonary atresia-intact ventricular septum is rare.
In the case described, the left coronary artery (LCA) origin from the aorta could not be found on conventional angiography or cardiac magnetic resonance imaging. During surgery, multiple LCA branches originating from the finger-like continuum of the primitive right ventricular sinusoidal network were observed. A Damus-Kaye-Stansel anastomosis and an aortopulmonary shunt operation were performed. Shunt takedown and a bidirectional Glenn anastomosis followed at 3 months of age. At 18 months follow-up, the child is thriving with stable hemodynamics and a saturation of 85%. Awareness of this rare coronary artery anomaly is necessary to prevent catastrophic consequences. The challenges, complications, and lessons learned while treating this rare variant are discussed. abstract_id: PUBMED:37006129 Perfusion Strategy to Prevent Right Ventricular Decompression on Cardiopulmonary Bypass During Extracardiac Fontan for Right Ventricle-Dependent Coronary Circulation. Early and long-term outcomes in patients with pulmonary atresia-intact ventricular septum undergoing staged univentricular palliations have been known to be adversely affected by the presence of right ventricle-dependent coronary circulation. We describe a surgical technique to circumvent the coronary insufficiency caused by acute decompression of the right heart. abstract_id: PUBMED:30056521 Neonatal Myocardial Perfusion in Right Ventricle Dependent Coronary Circulation: Clinical Surrogates and Role of Troponin-I in Postoperative Management Following Systemic-to-Pulmonary Shunt Physiology. Right ventricle dependent coronary circulation (RVDCC) in pulmonary atresia with intact ventricular septum (PA/IVS) is associated with significant mortality risk in the immediate post-operative period following the initial stage of surgical palliation. Prognosis remains guarded during the interstage period towards conversion to the superior cavopulmonary shunt physiology. Current literature is scarce regarding this specific patient population. Cardiac troponin-I is widely used as a marker of coronary ischemia in adults, but its use for routine monitoring of neonatal myocardial tissue injury due to supply/demand perfusion mismatch is yet to be determined. We sought to evaluate the clinical correlation of cTnI perioperative use in a PA/IVS RVDCC case and assess its interplay with established clinical, echocardiographic, and laboratory variables in guiding a real-time (dynamic) management strategy following systemic-to-pulmonary shunt palliation. abstract_id: PUBMED:16731162 Natural history of pulmonary atresia with intact ventricular septum and right-ventricle-dependent coronary circulation managed by the single-ventricle approach. Background: Long-term outcome of patients with pulmonary valvar atresia and intact ventricular septum with right-ventricle-dependent coronary circulation (PA/IVS-RVDCC) managed by staged palliation directed toward Fontan circulation is unknown, but should serve as a basis for comparison with management protocols that include initial systemic-to-pulmonary artery shunting followed by listing for cardiac transplantation. Methods: Retrospective review of patients admitted to our institution with the diagnosis of PA/IVS-RVDCC from 1989 to 2004. All angiographic imaging studies, operative reports, and follow-up information were reviewed.
Right-ventricle-dependent coronary circulation was defined as situations in which ventriculocoronary fistulae with proximal coronary stenosis or atresia were present, putting significant left ventricle myocardium at risk for ischemia with right ventricle decompression. Results: Thirty-two patients were identified with PA/IVS-RVDCC. All underwent initial palliation with modified Blalock-Taussig shunt (BTS). Median tricuspid valve z-score was -3.62 (-2.42 to -5.15), and all had moderate (n = 13) or severe (n = 19) right ventricular hypoplasia. Median follow-up was 5.1 years (9 months to 14.8 years). Overall mortality was 18.8% (6 of 32), with all deaths occurring within 3 months of BTS. Aortocoronary atresia was associated with 100% mortality (3 of 3). Of the survivors (n = 26), 19 have undergone Fontan operation whereas 7, having undergone bidirectional Glenn shunt, currently await Fontan. Actuarial survival by the Kaplan-Meier method for all patients was 81.3% at 5, 10, and 15 years, whereas mean survival was 12.1 years (95% confidence interval: 10.04 to 14.05). No late mortality occurred among those surviving beyond 3 months of age. Conclusions: In patients with PA/IVS-RVDCC, early mortality appears related to coronary ischemia at the time of BTS. Single-ventricle palliation yields excellent long-term survival and should be the preferred management strategy for these patients. Those with aortocoronary atresia have a particularly poor prognosis and should undergo cardiac transplantation. abstract_id: PUBMED:23332812 Pulmonary atresia with intact ventricular septum and right ventricular dependent coronary circulation through the "vessels of Wearn". We present an autopsy case of a male baby born at 35 weeks of gestation with pulmonary atresia with intact ventricular septum (PAIVS), who had coronary blood flow that was dependent on outflow from the right ventricle through the vessels described by Wearn. At 7 weeks of age, he underwent single ventricle palliation consisting of ligation of a patent ductus arteriosus and placement of a modified Blalock-Taussig shunt. The patient experienced a perioperative myocardial infarction, requiring extracorporeal membrane oxygenation. Progressive hemodynamic decline resulted in death 8 days after surgery. Autopsy revealed acute and remote infarctions in both ventricles and fibromuscular dysplasia of the subepicardial and intramural coronary arteries. In 1926, Grant first reported the association between PAIVS and secondary dysplasia of the heart vasculature and hypothesized that the high pressure resulted in dilation of the myocardial sinusoids. Confusion secondary to the unmeritorious dismissal of the myocardial sinusoids has obscured the pathogenesis of PAIVS and led to several publications suggesting second heart field abnormalities as a disease model for PAIVS. We discuss the pathogenesis of PAIVS, the ventriculocoronary arterial connections and the sinusoidal relationship to the vessels described by Wearn, and we attempt to correct the solecism plaguing the nomenclature of myocardial vasculature. abstract_id: PUBMED:15960071 Right ventricle-dependent coronary circulation in pulmonary atresia with intact ventricular septum: a case report. Pulmonary atresia with intact ventricular septum (PAIVS) is a morphologically heterogeneous lesion and accounts for 1-3% of critically ill infants with congenital heart disease. Numerous surgical approaches have been attempted with varying degrees of success, but the mortality rate is still high in most series. 
The optimal surgical procedure depends on the size and morphology of the tricuspid valve and right ventricle and the presence or absence of right ventricle-dependent coronary circulation. Therefore, it is pivotal to define the precise morphologic and hemodynamic characteristics, especially coronary artery anatomy. In this report, we describe a full-term female neonate with cyanosis soon after birth. Two-dimensional and color Doppler echocardiography corroborated the diagnosis of PAIVS and showed a small right ventricle. Cardiac catheterization indicated PAIVS and further revealed right ventricle-dependent coronary circulation. A systemic-to-pulmonary artery shunt was constructed with a positive immediate result. abstract_id: PUBMED:37231590 Percutaneous transient occlusion of the transtricuspid flow: a new method to evaluate the right ventricle-dependent coronary circulation in pulmonary atresia with intact ventricular septum. Pulmonary atresia with an intact ventricular septum is characterised by heterogeneity in right ventricle morphology and coronary anatomy. In some cases, the presence of ventriculocoronary connections may promote coronary artery stenosis or interruption, and aortic diastolic pressure may not be sufficient to drive coronary blood flow. This requires careful evaluation (currently done by angiography), on which the decision to offer decompression of the right ventricle depends. To date, there is no objective method to do so, so we designed a percutaneous, transitory technique with the purpose of occluding the transtricuspid anterograde flow. The manoeuvre was performed in a 25-day-old female with pulmonary atresia with intact ventricular septum and a right ventricle at suprasystemic pressure, in whom selective coronarography had been inconclusive: the left anterior descending artery showed stenosis in its middle third and was thinner beyond that point, with to-and-fro flow. Occlusion was performed with a balloon catheter. On re-evaluation during occlusion, coronary flow in the left anterior descending artery had normalised. We hope that with this new method we can give a more accurate diagnosis and identify the cases in which the coronary circulation is truly not right ventricle dependent, so that a greater number of patients can be offered biventricular or 1.5-ventricle repair and thereby improved quality of life and survival. Patients who prove to be right ventricle dependent could be referred early for cardiac transplantation or, where transplantation is unavailable, considered for univentricular palliation in the knowledge that this probably would not reduce the risk of ischaemia and/or death over time. abstract_id: PUBMED:16095245 Temporary decompression of the right ventricle to assess the right ventricle-dependent coronary circulation in pulmonary atresia with intact ventricular septum. We repaired a case of pulmonary atresia with intact ventricular septum in which the blood supply to the left anterior descending coronary artery depended on the right ventricle. At the time of a bidirectional Glenn operation, total cardiopulmonary bypass with venous drainage from the right atrium was performed in order to evaluate the safety of right ventricular decompression required for a planned Fontan operation. We confirmed the dependence of the coronary perfusion on the right ventricle by demonstrating transient depression of the ST segment in the epicardial electrocardiogram during temporary decompression of the right ventricle.
To prevent ischemic myocardial damage, we then performed an extracardiac Fontan operation with a temporary venous shunt and without cardiopulmonary bypass. Answer: Yes, a bidirectional Glenn shunt does improve the oxygenation of right ventricle-dependent coronary circulation in pulmonary atresia with intact ventricular septum. This is supported by a study where patients with right ventricle-dependent coronary circulation, defined as those with right ventricle-coronary artery fistulas plus stenoses of the right or left coronary arteries, underwent bidirectional Glenn shunt. The oxygen saturation of the right ventricular cavity before the bidirectional Glenn shunt was measured at 54.6 +/- 8.8%, and after the procedure, it significantly increased to 75.6 +/- 5.8% (P < .01). Additionally, ischemic changes in the electrocardiogram before the bidirectional Glenn shunt improved after the procedure in 2 patients. All patients in the study subsequently underwent the Fontan procedure with excellent results, suggesting that early bidirectional Glenn shunt could prevent the progression of myocardial ischemia in this patient group (PUBMED:16214519).
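The mechanism invoked in PUBMED:16214519, namely that redirecting superior caval return to the lungs raises the saturation of the blood that reaches the right ventricle, reduces to flow-weighted mixing arithmetic. The flows and saturations below are assumed round numbers chosen only to show the direction of the effect; they are not measurements from the study:

    # Flow-weighted mixing of saturations at the atrial level (illustrative values).
    def mixed_saturation(streams):
        """streams: list of (flow in L/min, oxygen saturation in %)."""
        total_flow = sum(flow for flow, _ in streams)
        return sum(flow * sat for flow, sat in streams) / total_flow

    # Before the bidirectional Glenn: SVC and IVC return mix with pulmonary
    # venous blood at the atrial level before filling the right ventricle.
    before = mixed_saturation([(1.2, 50), (1.0, 55), (1.0, 95)])  # SVC, IVC, PV

    # After: SVC flow is routed directly to the pulmonary arteries and no
    # longer dilutes the atrial mixture.
    after = mixed_saturation([(1.0, 55), (1.0, 95)])              # IVC, PV

    print(f"RV cavity saturation: ~{before:.0f}% before vs ~{after:.0f}% after")

The size of the gain depends on how large a fraction of total systemic venous return the superior vena cava carries, which is greatest in infancy.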
Instruction: Providing information on metered dose inhaler technique: is multimedia as effective as print? Abstracts: abstract_id: PUBMED:25496601 Understanding pressurized metered dose inhaler performance. Introduction: Deepening the current understanding of the factors governing the performance of the pressurized metered dose inhaler (pMDI) has the potential to benefit patients by providing improved drugs for current indications as well as by enabling new areas of therapy. Although a great deal of work has been conducted to this end, our knowledge of the physical mechanisms that drive pMDI performance remains incomplete. Areas Covered: This review focuses on research into the influence of device and formulation variables on pMDI performance metrics. Literature in the areas of dose metering, atomization and aerosol evolution and deposition is covered, with an emphasis on studies of a more fundamental nature. Simple models which may be of use to those developing pMDI products are summarized. Expert Opinion: Although researchers have had good success utilizing an empirically developed knowledge base to predict pMDI performance, such knowledge may not be applicable when pursuing innovations in device or formulation technology. Developing a better understanding of the underlying mechanisms is a worthwhile investment for those working to enable the next generation of pMDI products. abstract_id: PUBMED:31478744 Outcome of illustrated information leaflet on correct usage of asthma-metered dose inhaler. Background: Research globally has shown that metered dose inhaler (MDI) technique is poor, with patient education and regular demonstration critical in maintaining correct use of inhalers. Patient information containing pictorial aids improves understanding of medicine usage; however, manufacturer leaflets illustrating MDI use may not be easily understood by low-literacy asthma patients. Aim: To develop and evaluate the outcome of a tailored, simplified leaflet on correct MDI technique in asthma patients with limited literacy skills. Setting: A rural primary health care clinic in the Eastern Cape, South Africa. Methods: Pictograms illustrating MDI steps were designed to ensure cultural relevance. The design process of the leaflet was iterative and consultative, involving a range of health care professionals as well as patients. Fifty-five rural asthma patients were recruited for the pre-post design educational intervention study. Metered dose inhaler technique was assessed using a checklist, and patients were then educated using the study leaflet. The principal researcher then demonstrated correct MDI technique. This process was repeated at follow-up 4 weeks later. Results: The number of correct steps increased significantly post intervention from 4.6 ± 2.2 at baseline to 7.9 ± 2.7 at follow-up (p < 0.05). Statistically significant improvement of correct technique was established for 10 of the 12 steps. Patients liked the pictograms and preferred the study leaflet over the manufacturer leaflet. Conclusion: The tailored, simple, illustrated study leaflet accompanied by a demonstration of MDI technique significantly increased correct MDI technique in low-literacy patients. Patients approved of the illustrated, simple text leaflet, and noted its usefulness in helping them improve their MDI technique. abstract_id: PUBMED:14507797 Providing information on metered dose inhaler technique: is multimedia as effective as print? Background: Metered dose inhalers (MDIs) are not easy to use well.
Every MDI user receives a manufacturer's patient information leaflet (PIL). However, not everyone is able or willing to read written information. Multimedia offers an alternative method for teaching or reinforcing correct inhaler technique. Objective: The aim of this study was to compare the effects of brief exposure to the same key information, given by PIL and multimedia touchscreen computer (MTS). Methods: A single-blind randomized trial was conducted in 105 fluent English speakers (53% female; 93% White) aged 12-87 years in London general practices. All patients had had at least one repeat prescription for a bronchodilator MDI in the last 6 months. Inhaler technique was videotaped before and after viewing information from a PIL (n = 48) or MTS (n = 57). Key steps were rated blind using a checklist and videotape timings. The main outcome measures were a change in (i) global technique; (ii) co-ordination of inspiration and inhaler actuation; (iii) breathing-in time; and (iv) information acceptability. Results: Initially, over a third of both groups had poor technique. After information, 44% (MTS) and 19% (PIL) were rated as improved. Co-ordination improved significantly after viewing information via MTS, but not after PIL. Breathing-in time increased significantly in both groups. Half the subjects said they had learned 'something new'. The MTS group were more likely to mention co-ordination and breathing. Conclusions: Short-term, multimedia is at least as effective as a good leaflet, and may have advantages for steps involving movement. MTS was acceptable to all age groups. The method could be used more widely in primary care. abstract_id: PUBMED:28811727 Metered-dose inhaler technique among healthcare providers practising in Oman. Objective: To evaluate the correctness of metered-dose inhaler (MDI) technique in a sample of healthcare providers practising in Oman, considering that poor inhaler technique is a common problem both in asthma patients and healthcare providers, which contributes to poor asthma control. Method: A total of 150 healthcare providers (107 physicians, 33 nurses and 10 pharmacists) who were participants in symposia on asthma management conducted in five regions of Oman, volunteered for the study. After the participants answered a questionnaire aimed at identifying their involvement in MDI prescribing and counselling, a trained observer assessed their MDI technique using a checklist of nine steps. Results: Of the 150 participants, 148 (99%) were involved in teaching inhaler techniques to patients, and 103 of 107 physicians (96%) had prescribed inhaled medications. However, only 22 participants (15%) performed all steps correctly. Physicians performed significantly better than non-physicians (20% vs. 2%, p < 0.05). Among the physicians, internists performed better (26%) than general practitioners (5%) and accident and emergency doctors (9%). Conclusion: The majority of health-care providers responsible for instructing patients on the correct MDI technique were unable to perform this technique correctly, indicating the need for regular formal training programmes on inhaler techniques. abstract_id: PUBMED:30642378 Status of metered dose inhaler technique among patients with asthma and its effect on asthma control in Northwest Ethiopia. Objective: In asthma management, poor handling of inhalation devices and wrong inhalation technique are associated with decreased medication delivery and poor disease control.
The aim of this study was to assess the status of metered dose inhaler technique, associated factors and its impact on asthma control among adult patients with asthma. Results: The mean duration of asthma was 15 ± 13 years. Asthma was uncontrolled in 70.4% of the participants and poor asthma inhaler device technique was observed in 71.4% of the patients. Lack of health education on metered dose inhaler technique [AOR = 4.96; 95% CI (1.08-22.89)] and uncontrolled asthma [AOR = 3.67; 95% CI (1.85-7.23)] were independently associated with poor metered dose inhaler technique. abstract_id: PUBMED:28526598 Breathe easy: 5 steps to better breathing with your metered-dose inhaler (MDI). Proper use of the metered-dose inhaler (MDI) is essential for medications to prevent and treat acute asthma exacerbations. This training video teaches children and clinicians correct technique for MDI use. abstract_id: PUBMED:31907655 Is "Slow Inhalation" Always Suitable for Pressurized Metered Dose Inhaler? To achieve adequate inhalation therapy, a proper inhalation technique is needed in clinical practice. However, there is limited information on proper inhalation flow patterns of commercial inhalers. Here, we quantitatively estimated airway deposition of two commercial pressurized metered dose inhalers (pMDIs) to determine their optimal inhalation patterns. Sultanol® inhaler (drug particles suspended in a propellant, suspension-pMDI) and QVAR™ (drug dissolved in a propellant with ethanol, solution-pMDI) were used as model pMDIs. Aerodynamic properties of the two pMDIs were determined using an Andersen cascade impactor with a human inhalation flow simulator developed by our laboratory. As indices of peripheral-airway drug deposition, fine particle fractions (FPFPA) at different inhalation flow rates were calculated. The time-dependent particle diameters of sprayed drug particles were determined by laser diffraction. On aerodynamic testing, the FPFPA of the suspension-pMDI decreased significantly with increasing inhalation flow rate, while the solution-pMDI achieved higher and constant FPFPA across the range of tested inhalation flow rates. The particle diameter of the solution-pMDI markedly decreased from 5 to 3 μm in a time-dependent manner. Conversely, that of the suspension-pMDI remained at 4 μm during the spraying time. Although "slow inhalation" is recommended for pMDIs, airway drug deposition via solution-pMDI (extra-fine particles) is independent of patients' inhalation flow pattern. Clinical studies should be performed to validate instructions for use of pMDIs for each inhaler for the optimization of inhalation therapy. abstract_id: PUBMED:33855311 Nebulized albuterol delivery is associated with decreased skeletal muscle strength in comparison with metered-dose inhaler delivery among children with acute asthma exacerbations. Objective: Albuterol is a β2-agonist and causes an intracellular shift of potassium from the interstitium. Whole-body hypokalemia is known to cause skeletal muscle weakness, but whether this occurs as a result of hypokalemia from the intracellular shift during albuterol treatment is unknown. We sought to determine if albuterol total dose or route of administration (nebulization and/or metered-dose inhaler) is associated with skeletal muscle weakness. Methods: This was a prospective observational study using convenience sampling.
Skeletal muscle strength was measured before and after 1 hour of albuterol treatment using a hand-grip dynamometer in participants aged 5-17 years with acute asthma exacerbation in the emergency department. We examined associations of albuterol dose and route of administration with changes in grip strength. Results: Among 50 participants, 10 received continuous albuterol by nebulizer and 40 received albuterol by metered-dose inhaler. The median change in grip strength was -7.8% (interquartile range, -23.3% to +5.1%) for those treated with a nebulizer and +2.4% (interquartile range, -5% to +12.7%) for those treated with a metered-dose inhaler (P = 0.036 for the difference). In a multiple linear regression model adjusted for the pretreatment Acute Asthma Intensity Research Score and age, participants treated with a nebulizer had a 12.9% decrease in skeletal muscle strength compared with those treated with a metered-dose inhaler. Conclusion: Higher doses of albuterol administered via nebulization result in decreased skeletal muscle strength in patients with acute asthma, whereas albuterol administration via metered-dose inhalers showed no effect on skeletal muscle strength. abstract_id: PUBMED:28170282 Pharmacokinetics of Salbutamol Delivered from the Unit Dose Dry Powder Inhaler: Comparison with the Metered Dose Inhaler and Diskus Dry Powder Inhaler. Aim: To compare the systemic exposure of salbutamol following delivery from the unit dose dry powder inhaler (UD-DPI) system with that from the Diskus® and metered dose inhaler (MDI). Materials And Methods: This open-label, two-part, six-way crossover, randomized single-dose study in healthy subjects evaluated salbutamol systemic exposure of three dose strengths (using three inhalations: 3 × 150 μg [450 μg], 3 × 200 μg [600 μg], and 3 × 250 μg [750 μg]) and two drug-in-lactose blends (1.6% and 1.0% [600 μg dose only] by weight) following delivery through the UD-DPI compared with systemic exposure from the Diskus and MDI (600 μg dose). Systemic exposure in the presence of charcoal block was also evaluated. Primary treatment comparisons were area under the concentration-time curve from time zero to 12 hours [AUC0-12] and maximum plasma concentration [Cmax]. Results: Delivery of salbutamol 600 μg from the UD-DPI resulted in total systemic exposure similar to that from the Diskus and approximately half of that from the MDI (AUC0-12 geometric least squares mean ratio [GMR] [90% confidence interval (CI)] for UD-DPI [1.6% blend]/Diskus: 0.91 [0.83-1.00]; UD-DPI [1.6% blend]/MDI: 0.46 [0.42-0.50]. Cmax GMR [90% CI] for UD-DPI [1.6% blend]/Diskus: 1.20 [1.07-1.33]; UD-DPI [1.6% blend]/MDI: 0.58 [0.52-0.64]). Results were consistent between the 1.6% and the 1.0% blends and systemic exposure for the 3 dose strengths of salbutamol (1.6% blend) showed increases that were 12-16% greater than dose proportional. Systemic exposure due to pulmonary absorption (as calculated from AUC0-12 in the presence and absence of charcoal block) was 48% for the UD-DPI, 24% for Diskus, and 37% for MDI of the total salbutamol systemic exposure, and the corresponding estimated lung dose was 65% for the UD-DPI and 34% for the Diskus relative to the MDI. Conclusions: Salbutamol total systemic exposure following UD-DPI was similar to that from the Diskus and was lower than that following the MDI. The different blend formulations tested resulted in consistent salbutamol systemic exposure.
The contribution of the lung and gut to systemic exposure revealed a different profile for the three inhaler platforms. These data suggest that the UD-DPI warrants further evaluation. abstract_id: PUBMED:21217925 Skills in handling Turbuhaler, Diskus, and pressurized metered-dose inhaler in Korean asthmatic patients. Purpose: The objective of this study was to evaluate skills in handling inhalers and factors associated with these skills among patients with asthma who had undergone treatment at special asthma and allergy clinics in Korea. Methods: We enrolled 78 subjects who used Turbuhaler and 145 who used Diskus for asthma control at special clinics in 10 university hospitals and visually assessed their skills in handling these inhalers. We also evaluated skills in 137 subjects who had used pressurized metered-dose inhalers (pMDIs) for symptom relief. Age, sex, duration of asthma and inhaler use, smoking status, monthly income, highest grade completed in school and previous instruction for handling inhalers were also measured to evaluate their association with overall inhaler skills. Results: Performance grade was inadequate for 12.8% of participants using Turbuhaler, 6.2% for Diskus, and 23.4% for pMDIs. The success rates for each step in handling the inhalers were relatively high except for the "exhale slowly to residual volume" step, in which success rates ranged from 24.2% to 28.5%. Older age, male sex, lower educational grade, and absence of previous instruction for handling inhalers were associated with inadequate inhaler technique in univariate analysis; however, only older age and absence of previous instruction remained significant independent risk factors in multivariate analysis. Conclusions: Among Korean asthmatic patients in special asthma and allergy clinics, skills in handling their inhalers were mostly excellent; meanwhile, older age and absence of previous instruction for handling inhalers were associated with inadequate techniques. Answer: The study described in abstract PUBMED:14507797 aimed to compare the effectiveness of multimedia touchscreen computer (MTS) and a manufacturer's patient information leaflet (PIL) in teaching correct metered dose inhaler (MDI) technique. The study involved 105 fluent English speakers aged 12-87 years who had at least one repeat prescription for a bronchodilator MDI in the last 6 months. The participants were randomized to receive information from either a PIL or MTS, and their inhaler technique was videotaped before and after viewing the information. The results showed that after exposure to the information, 44% of the MTS group and 19% of the PIL group improved their technique. Co-ordination of inspiration and inhaler actuation improved significantly after viewing information via MTS but not after PIL. Both groups showed a significant increase in breathing-in time. The MTS group was more likely to mention learning about co-ordination and breathing. The study concluded that multimedia is at least as effective as a good leaflet, that it may have advantages for steps involving movement, and that it was acceptable to all age groups. Therefore, multimedia could be used more widely in primary care to teach or reinforce correct inhaler technique.
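For readers who want to check the headline comparison in PUBMED:14507797 themselves, the improvement rates (44% of the 57 MTS subjects versus 19% of the 48 PIL subjects) can be compared with a pooled two-proportion z-test. The sketch below is illustrative only: the integer counts (25 and 9) are inferred by rounding from the published percentages and group sizes, not taken from the paper.

import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple:
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal tail

# Inferred counts: ~25/57 improved with MTS (44%), ~9/48 with PIL (19%).
z, p = two_proportion_z(25, 57, 9, 48)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # z = 2.74, p ≈ 0.006

Under these assumed counts the between-group difference would be significant at conventional levels, which is consistent with the study's conclusion that multimedia performed at least as well as print.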
Instruction: Is there need for radioimmunotherapy? Abstracts: abstract_id: PUBMED:1760818 An overview of radioimmunotherapy. In reviewing the current state of affairs in radioimmunotherapy, the paper focuses on the main difficulties thus far encountered and the procedures designed to avoid or circumvent these problems. The long range beta-emitters 90Y and 188Re have replaced 131I as the isotopes currently receiving most attention for use in radioimmunotherapy, and a range of new chelators are under investigation for in vivo stability and immunogenicity. Approaches aimed at improving tumour targeting and antigen expression such as two-step pretargeting techniques, tumour necrosis treatment and cytokine pretreatment are summarized. Methods designed to improve host-Mab interactions are outlined and the need to incorporate successful ideas from current cancer therapies is emphasised. abstract_id: PUBMED:34646364 Radioimmunotherapy for solid tumors: spotlight on Glypican-1 as a radioimmunotherapy target. Radioimmunotherapy (i.e., the use of radiolabeled tumor targeting antibodies) is an emerging approach for the diagnosis, therapy, and monitoring of solid tumors. Often using paired agents, each targeting the same tumor molecule, but labelled with an imaging or therapeutic isotope, radioimmunotherapy has achieved promising clinical results in relatively radio-resistant solid tumors such as prostate. Several approaches to optimize therapeutic efficacy, such as dose fractionation and personalized dosimetry, have seen clinical success. The clinical use and optimization of a radioimmunotherapy approach is, in part, influenced by the targeted tumor antigen, several of which have been proposed for different solid tumors. Glypican-1 (GPC-1) is a heparan sulfate proteoglycan that is expressed in a variety of solid tumors, but whose expression is restricted in normal adult tissue. Here, we discuss the preclinical and clinical evidence for the potential of GPC-1 as a radioimmunotherapy target. We describe the current treatment paradigm for several solid tumors expressing GPC-1 and suggest the potential clinical utility of a GPC-1 directed radioimmunotherapy for these tumors. abstract_id: PUBMED:22064461 Clinical radioimmunotherapy--the role of radiobiology. Conventional external-beam radiation therapy is dedicated to the treatment of localized disease, whereas radioimmunotherapy represents an innovative tool for the treatment of local or diffuse tumors. Radioimmunotherapy involves the administration of radiolabeled monoclonal antibodies that are directed specifically against tumor-associated antigens or against the tumor microenvironment. Although many tumor-associated antigens have been identified as possible targets for radioimmunotherapy of patients with hematological or solid tumors, clinical success has so far been achieved mostly with radiolabeled antibodies against CD20 ((131)I-tositumomab and (90)Y-ibritumomab tiuxetan) for the treatment of lymphoma. In this Review, we provide an update on the current challenges in improving the efficacy of radioimmunotherapy and discuss the main radiobiological issues associated with clinical radioimmunotherapy. abstract_id: PUBMED:7590766 Radioimmunotherapy of ovarian cancer. Despite the advances in the management of ovarian cancer, the disease remains the leading cause of death from gynecological malignancies.
As it generally remains confined to the peritoneal cavity, ovarian cancer is an attractive target for radioimmunotherapy via the intraperitoneal route of administration. Several clinical trials have been carried out investigating radiolabeled monoclonal antibodies and the results seem promising, especially in patients with small-volume residual disease after conventional therapy. Intraperitoneal radioimmunotherapy has yet to prove itself as an important part of the treatment of ovarian cancer. abstract_id: PUBMED:31867072 Radioimmunotherapy (RIT) in Brain Tumors. The incidence of brain tumors has increased slightly year by year, and patient prognosis remains disappointing, especially for high-grade neoplasms. Researchers therefore seek methods to improve the therapeutic index as a critical aim of treatment. One of these new and challenging methods is radioimmunotherapy (RIT), which involves coupling a radionuclide to a monoclonal antibody (mAb) targeted against cell surface tumor-related antigens or antigens of cells within the tumor microenvironment. In the context of cancer care, RIT exemplifies precision medicine, which can offer treatment tailored to the needs of patients with brain tumors. This review aims to discuss the molecular targets used in radioimmunotherapy of brain tumors, available and future radioimmunopharmaceutics, and clinical trials of radioimmunotherapy in brain neoplasms, and ends with conclusions and future perspectives on the application of radioimmunotherapy in neuro-oncology care. abstract_id: PUBMED:23725287 Radioimmunotherapy for high-grade glioma. Patients with high-grade glioma (HGG) still have a very poor prognosis. The infiltrative nature of the tumor and the inter- and intra-tumoral cellular and genetic heterogeneity, leading to the acquisition of new mutations over time, represent the main causes of treatment failure. Radioimmunotherapy represents an emerging approach for the treatment of HGG. Radioimmunotherapy utilizes a molecular vehicle (monoclonal antibodies) to deliver a radionuclide (the drug) to a selected cell population target. This review will provide an overview of preclinical and clinical studies to date and assess the effectiveness of radioimmunotherapy, focusing on possible future therapies for the treatment of HGG. abstract_id: PUBMED:21034409 Current concepts and future directions in radioimmunotherapy. Radioimmunotherapy relies on the principles of immunotherapy, but expands the cytotoxic effects of the antibody by complexing it with a radiation-emitting particle. If we consider radioimmunotherapy as a step beyond immunotherapy of cancer, the step was prompted by the (relative) failure of the latter. The conventional way to explain the failure is a lack of intrinsic killing effect and a lack of penetration into poorly vascularized tumor masses. The addition of a radioactive label (usually a β-emitter) to the antibody would improve both. Radiation is lethal and the type of radiation used (beta rays) has a sufficient range to overcome the lack of antibody penetration. At present, the most successful (and FDA approved) radioimmunotherapy agents for lymphomas are anti-CD20 monoclonal antibodies. Rituximab (Rituxan(®)) is a chimeric antibody, used as a non-radioactive antibody and to pre-load the patient when Zevalin(®) is used. Zevalin(®) is the Yttrium-90 ((90)Y) or Indium-111 ((111)In) labeled form of Ibritumomab Tiuxetan. Bexxar(®) is the Iodine-131 ((131)I) labeled form of Tositumomab.
Ibritumomab Tiuxetan and Tositumomab are murine anti-CD20 monoclonal antibodies, not chimeric antibodies. Promising research is being done to utilize radioimmunotherapy earlier in the treatment algorithm for lymphoma, including as initial, consolidation, and salvage therapies. However, despite more than 8 years since initial regulatory approval, radioimmunotherapy still has not achieved widespread use due to a combination of medical, scientific, logistic, and financial barriers. Other experimental uses for radioimmunotherapy include other solid tumors and the treatment of infections. Optimization can potentially be done with pre-targeting and bi-specific antibodies. Alpha particle and Auger electron emitters show promise as future radioimmunotherapy agents but are mostly still in pre-clinical stages. abstract_id: PUBMED:9251118 Recent progress in radioimmunotherapy for cancer. Radioimmunotherapy allows for the delivery of systemically targeted radiation to areas of disease while relatively sparing normal tissues. Despite numerous challenges, considerable progress has been made in the application of radioimmunotherapy to a wide variety of human malignancies. The greatest successes have occurred in the treatment of hematologic malignancies. Radioimmunotherapy, with or without stem-cell transplant support, has produced substantial complete remission rates in chemotherapy-resistant B-cell lymphomas. Nonmyeloablative regimens have shown so much promise that they are now being tested as initial therapy for low-grade B-cell lymphomas. Although solid tumor malignancies have been less responsive to radioimmunotherapy, encouraging results have been obtained with locoregional routes of administration, especially when the tumor burden is small. Greater tumor-to-normal tissue ratios are achievable with regional administration. Even with intraperitoneal and intrathecal administration, bone marrow suppression remains the dose-limiting toxicity. Ongoing research into new targeting molecules, improved chelation chemistry, and novel isotope utilization is likely to extend the applications of this strategy to other tumor types. The potential for radioimmunotherapy will be enhanced if this modality can be optimally adapted for integration with other agents and if the administration method can be tailored to the type and distribution of malignancy. abstract_id: PUBMED:8306290 Experimental radioimmunotherapy. A brief overview. This overview highlights the use of experimental models in selecting monoclonal antibodies and radiolabels with promise for clinical radioimmunotherapy, discusses some of the experimental therapeutic approaches being studied in these models, and reviews some of the limitations of animal models caused by the allometric and other differences between man and mouse. These differences in scale must be considered when attempting to extrapolate animal radioimmunotherapy study data to human trials of radioimmunotherapy. With appropriate recognition of their limitations, experimental models of radioimmunotherapy have proven valuable and will continue to play a critical role as the place to first study innovations in radioimmunotherapy, before extension of the most promising reagents and treatment concepts into clinical therapeutic trials. abstract_id: PUBMED:14998779 Radioimmunotherapy with engineered antibodies.
Although the advent of monoclonal antibody technology in the 1970s provided the means to specifically target radioisotopes to tumours, the initial clinical evaluations of radioimmunotherapy (RAIT) were largely unsuccessful. Over the past few decades, molecular biology techniques have advanced sufficiently to allow scientists to re-engineer antibodies to address the factors that were believed to be responsible for the failures of the early radioimmunotherapy trials. This review addresses the recent advances in antibody engineering and in RAIT strategies that have brought this field to the brink of success. Answer: Yes, there is a need for radioimmunotherapy (RIT) as it represents an innovative approach for the treatment of various types of cancers, including both hematological malignancies and solid tumors. RIT involves the administration of radiolabeled monoclonal antibodies that specifically target tumor-associated antigens or the tumor microenvironment, delivering targeted radiation to cancer cells while sparing normal tissues (PUBMED:22064461). Despite the advances in conventional cancer therapies, diseases like ovarian cancer, high-grade glioma, and brain tumors still have poor prognoses, and RIT offers a promising method to improve therapeutic outcomes for these challenging conditions (PUBMED:7590766, PUBMED:23725287, PUBMED:31867072). Clinical success has been achieved with RIT in the treatment of lymphoma, particularly with radiolabeled antibodies against CD20, and there is ongoing research to optimize RIT for use earlier in treatment algorithms, including as initial, consolidation, and salvage therapies (PUBMED:21034409). Additionally, RIT has shown promise in hematologic malignancies, with substantial complete remission rates in chemotherapy-resistant B-cell lymphomas, and is being tested as initial therapy for low-grade B-cell lymphomas (PUBMED:9251118). Furthermore, the need for RIT is underscored by the development of new strategies to improve its efficacy, such as the use of long-range beta-emitters like 90Y and 188Re, new chelators for in vivo stability, pretargeting techniques, and methods to enhance tumor targeting and antigen expression (PUBMED:1760818). The targeting of specific tumor antigens, such as Glypican-1 (GPC-1) in solid tumors, is also being explored to enhance the clinical utility of RIT (PUBMED:34646364). In summary, the need for RIT is driven by its potential to provide a more precise and effective treatment option for various cancers, particularly those with limited success from existing therapies. Advances in the field are aimed at overcoming the challenges and limitations of early RIT trials and integrating RIT with other treatment modalities to enhance its potential and broaden its application (PUBMED:14998779).
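Several of the abstracts turn on the physical properties of the radionuclides themselves (90Y, 131I, 111In). As a small illustration of why isotope choice matters for dose delivery, the sketch below computes residual activity from the standard exponential-decay law using published physical half-lives (90Y ≈ 64.1 h, 131I ≈ 8.02 d); these constants come from general nuclear data, not from the abstracts.

import math

def residual_activity(a0: float, t: float, half_life: float) -> float:
    """Activity remaining after time t: A(t) = A0 * exp(-ln2 * t / T_half)."""
    return a0 * math.exp(-math.log(2.0) * t / half_life)

# Fraction of the initial activity left one week after labeling (times in hours).
for isotope, t_half_h in [("Y-90", 64.1), ("I-131", 8.02 * 24)]:
    left = residual_activity(100.0, 7 * 24, t_half_h)
    print(f"{isotope}: {left:.0f}% remains after 7 days")  # Y-90: 16%, I-131: 55%

The shorter-lived beta emitter delivers most of its dose within days of administration, one of the practical considerations behind the shift away from 131I noted in PUBMED:1760818.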
Instruction: Diabetes prevention and treatment strategies: are we doing enough? Abstracts: abstract_id: PUBMED:38473385 Recent Advances in Endometrial Cancer Prevention, Early Diagnosis and Treatment. Endometrial cancer is the sixth commonest cancer in women worldwide, with over 417,000 diagnoses in 2020. The disease incidence has increased by 132% over the last 30 years and is set to continue to rise in response to an ageing population and increasing global rates of obesity and diabetes. A greater understanding of the mechanisms driving endometrial carcinogenesis has led to the identification of potential strategies for primary disease prevention, although prospective evaluation of their efficacy within clinical trials is still awaited. The early diagnosis of endometrial cancer is associated with improved survival, but has historically relied on invasive endometrial sampling. New, minimally invasive tests using protein and DNA biomarkers and cytology have the potential to transform diagnostic pathways and to allow for the surveillance of high-risk populations. The molecular classification of endometrial cancers has been shown to not only have a prognostic impact, but also to have therapeutic value and is increasingly used to guide adjuvant treatment decisions. Advanced and recurrent disease management has also been revolutionised by increasing the use of debulking surgery and targeted treatments, particularly immunotherapy. This review summarises the recent advances in the prevention, diagnosis and treatment of endometrial cancer and seeks to identify areas for future research. abstract_id: PUBMED:11319367 Primary and secondary prevention for erysipelas Erysipelas is a bacterial infection of the deepest skin layer. Predisposing factors are systemic and/or local. Main systemic factors are alcoholism, diabetes and immunodeficiency. The main local factors are athlete's foot (tinea pedis), venous or lymphatic stasis, prosthetic surgery of the knee, and a past history of saphenous phlebectomy, lymphadenectomy, or irradiation. Such predisposing factors account for the predominance of erysipelas in the lower limbs and for the frequency of recurrence. The prevention of recurrence is stressed by all authors, and should combine correct treatment of the disease with treatment of venous and lymphatic stasis and/or wounds. Preventive antibiotic treatment should be proposed to patients with multiple predisposing factors and frequent recurrences, using prolonged therapy with macrolides or penicillin. Primary prevention could address local and/or systemic predisposing factors; however, its efficacy and necessity have yet to be demonstrated. The usefulness of nasopharyngeal streptococcal carriage eradication and/or vaccination has not been demonstrated either. abstract_id: PUBMED:32864583 Importance of food plants in the prevention and treatment of diabetes in Cameroon. Background: Diabetes is a metabolic pathology that affects the human body's capacity to adequately produce and use insulin. Type 1 (insulin-dependent) diabetes accounts for 5-10% of diabetic patients. In Type 2 diabetes the insulin produced by the pancreatic islets is not properly used by cells due to insulin resistance. Gestational diabetes sometimes occurs in pregnant women and affects about 18% of all pregnancies. Diabetes is one of the most important multifactorial metabolic chronic diseases with fatal complications.
According to the International Diabetes Federation's estimations in 2015, 415 million people had diabetes and there will be an increase to 642 million people by 2040. Although many ethnopharmacological surveys have been carried out in several parts of the world, no ethnomedical and ethnopharmacological surveys have been done in Cameroon to identify plants used for the prevention and treatment of diabetes. Objective: This study aimed to collect and document information on food plant remedies consumed for the prevention and treatment of diabetes in Cameroon. Methods: Ethnomedical and ethnopharmacological surveys were conducted with 1131 interviewees from 58 tribes, following a random distribution. Diabetic patients recorded among this sample signed the informed consent and allowed us to evaluate the effectiveness of 10 identified food plants usually used for self-medication. They were divided into two groups: Group 1 comprised 42 diabetic patients who regularly consumed certain of these food plants, and Group 2 included 58 patients who were town-dwellers and did not regularly eat these identified food plants. Results: It was recorded that the onset of diabetes in patients was at about 70 years and 45 years for Group 1 and Group 2, respectively. Hence, a relationship was demonstrated between the onset of diabetes and the consumption of food plants. They contributed to the prevention and/or the delay in clinical manifestations. Conclusion: Further investigations and/or clinical trials involving a large number of both type 1 and type 2 diabetics are needed to describe the therapeutic action of many food plants against diabetes. However, this study provides scientific support for the use of herbal medicines in the management of diabetes. abstract_id: PUBMED:38382332 Lifestyle modification and risk factor control in the prevention and treatment of atrial fibrillation Atrial fibrillation (AF) is the most prevalent arrhythmia and is associated with significant morbidity, mortality and costs. In spite of relevant advances in the prevention of embolic events and rhythm control, little has been done to reduce its prevalence, progression and impact, since it increases with ageing and with common risk factors such as alcohol intake, tobacco use and stress, as well as with arterial hypertension, diabetes mellitus, heart failure, sleep apnea, kidney failure, chronic obstructive pulmonary disease, ischemic heart disease and stroke, among other important comorbidities. Fortunately, new evidence suggests that lifestyle modifications and adequate control of risk factors and comorbidities could be effective in primary and secondary AF prevention, especially in its paroxysmal presentations. This is why a multidisciplinary approach integrating lifestyle modifications and control of risk factors and comorbidities is necessary in conjunction with rhythm or rate control and anticoagulation. Unfortunately, that holistic approach strategy is not considered, is scarcely studied or is underutilized in general clinical practice. The present statement's objectives are to: 1) review the relationship of habits, risk factors and illnesses with AF, 2) review the individual and common pathophysiological mechanisms of each one of those conditions that may lead to AF, 3) review the effect of control of habits, risk factors and co-morbidities on the control and impact of AF, and 4) supply guidelines and recommendations to start multidisciplinary and integrative AF treatment.
abstract_id: PUBMED:36709085 Consensus of Chinese experts on strengthening personalized prevention and treatment of type 2 diabetes. Up to now, there has not yet been guidance or consensus from Chinese experts in the field of personalized prevention and treatment of type 2 diabetes. In view of the above, the endocrinology diabetes Professional Committee of the Chinese Non-government Medical Institutions Association, the integrated endocrinology diabetes Professional Committee of the integrated medicine branch of the Chinese Medical Doctor Association, and the diabetes education and microvascular complications group of the diabetes branch of the Chinese Medical Association organized relevant experts to discuss and reach the "Chinese expert consensus on strengthening personalized prevention and treatment of type 2 diabetes" for reference in clinical practice. abstract_id: PUBMED:34909644 Emerging roles of cardamonin, a multitargeted nutraceutical in the prevention and treatment of chronic diseases. Although chronic diseases are often caused by perturbations in multiple cellular components involved in different biological processes, most of the approved therapeutics target a single gene/protein/pathway, which makes them less efficient than anticipated; they are also known to cause severe side effects. Therefore, the pursuit of safe, efficacious, and multitargeted agents is imperative for the prevention and treatment of these diseases. Cardamonin is one such agent that has been known to modulate different signaling molecules such as transcription factors (NF-κB and STAT3), cytokines (TNF-α, IL-1β, and IL-6), enzymes (COX-2, MMP-9 and ALDH1), and other proteins and genes (Bcl-2, XIAP and cyclin D1) involved in the development and progression of chronic diseases. Multiple lines of evidence emerging from pre-clinical studies advocate the promising potential of this agent against various pathological conditions like cancer, cardiovascular diseases, diabetes, neurological disorders, inflammation, rheumatoid arthritis, etc., despite its poor bioavailability. Therefore, further studies are paramount in establishing its efficacy in clinical settings. Hence, the current review focuses on highlighting the underlying molecular mechanism of action of cardamonin and delineating its potential in the prevention and treatment of different chronic diseases. abstract_id: PUBMED:12868324 Prevention of dementia: is it possible? Development of dementia depends on genetic susceptibility and on risk factors accessible to primary prevention. Among the latter, vascular risk factors are well defined: prevention of hyperhomocysteinemia, diabetes mellitus, hypercholesterolemia, and, to some extent, of arterial hypertension could avoid the cognitive decline of dementia. Estrogen replacement therapy, antiinflammatory drugs, alcohol, vitamin E and intellectual activities seem efficacious in terms of primary prevention. When dementia is present, only vitamin E, selegiline and some antiinflammatory drugs have proven efficacy compared with placebo in slowing cognitive decline. Long-term effects of cholinesterase inhibitors need to be investigated in future trials. abstract_id: PUBMED:29381386 Prevention and treatment effects of edible berries for three deadly diseases: Cardiovascular disease, cancer and diabetes. Cardiovascular disease (CVD), cancer and diabetes are serious threats to human health and are attracting more and more attention.
It is important to find safe and effective prevention and treatment methods for these three deadly diseases. At present, considerable attention has been drawn to the possible positive effects of edible berries on the three deadly diseases. Berry phytochemical compounds regulate different signaling pathways involved in cell survival, growth and differentiation. They contribute to the prevention and treatment of CVD, cancer and diabetes. This article reviews previous experimental evidence and summarizes several common berry phytochemical compounds and their possible mechanisms of action in the three deadly diseases. abstract_id: PUBMED:28282929 Bioactive Peptide of Marine Origin for the Prevention and Treatment of Non-Communicable Diseases. Non-communicable diseases (NCD) are the leading cause of death and disability worldwide. The four leading causes of NCD are cardiovascular diseases, cancers, respiratory diseases and diabetes. Recognizing the devastating impact of NCD, researchers are extensively seeking novel prevention and treatment strategies. Marine organisms are considered an important source of bioactive peptides that can exert biological functions for the prevention and treatment of NCD. Recent pharmacological investigations reported cardioprotective, anticancer, antioxidative, anti-diabetic, and anti-obesity effects of marine-derived bioactive peptides. Moreover, there is available evidence supporting the utilization of marine organisms and their bioactive peptides to alleviate NCD. Marine-derived bioactive peptides are alternative sources for synthetic ingredients that can contribute to a consumer's well-being, as a part of nutraceuticals and functional foods. This contribution focuses on the bioactive peptides derived from marine organisms and elaborates their possible preventive and therapeutic roles in NCD. abstract_id: PUBMED:22746010 Primary and secondary prevention of ischemic stroke Primary prevention is aimed at reducing the risk of stroke in asymptomatic people. The most effective prevention is through control of modifiable risk factors. Adequate blood pressure reduction, cessation of cigarette smoking and use of antithrombotic therapy in atrial fibrillation are the most effective measures. Carotid endarterectomy may be useful in selected patients. Although very useful for health in general, tight control of diabetes and hypercholesterolemia, physical exercise and diet have not shown a major influence on primary stroke prevention. Aspirin seems to be not very effective for primary stroke prevention, whereas some ACE inhibitors (e.g. ramipril), ARBs (e.g. losartan) or statins may have a preventive role beyond their antihypertensive or hypocholesterolemic properties. Secondary stroke prevention is aimed at reducing the risk of recurrence after a first stroke or transient ischemic attack. Acting on risk factors is probably as effective as in primary prevention. Carotid endarterectomy for symptomatic stenoses > 70% and anticoagulation in patients with atrial fibrillation are by far the most effective measures. Antiplatelet therapy (aspirin, ticlopidine, clopidogrel and the long-acting dipyridamole-aspirin combination) significantly reduces stroke recurrence. Recent data also suggest that perindopril, eprosartan and some statins are beneficial against stroke recurrence even in normotensive and normocholesterolemic patients.
Answer: The abstracts provided offer insights into various strategies for the prevention and treatment of diabetes, suggesting that while progress has been made, there is still room for improvement and further research. In Cameroon, a study aimed to document food plants used for the prevention and treatment of diabetes, indicating that certain food plants may contribute to the prevention and/or delay in the clinical manifestations of diabetes (PUBMED:32864583). This suggests that dietary interventions using specific food plants could be a valuable strategy in diabetes management. The consensus of Chinese experts on personalized prevention and treatment of type 2 diabetes highlights the need for tailored approaches in clinical practice, which implies that a one-size-fits-all strategy may not be sufficient for effective diabetes management (PUBMED:36709085). The role of lifestyle modification and risk factor control in the prevention and treatment of atrial fibrillation (AF) is emphasized, with the suggestion that similar strategies could be effective in primary and secondary diabetes prevention, especially considering the shared risk factors between AF and diabetes (PUBMED:38382332). The potential of cardamonin, a multitargeted nutraceutical, in the prevention and treatment of chronic diseases, including diabetes, is discussed, indicating that natural compounds may offer a multifaceted approach to disease management (PUBMED:34909644). Bioactive peptides of marine origin are presented as a novel prevention and treatment strategy for non-communicable diseases, including diabetes, suggesting that there are still untapped natural resources that could be harnessed for diabetes management (PUBMED:28282929). Overall, while there are various strategies in place for the prevention and treatment of diabetes, the abstracts suggest that there is still a need for further investigation, clinical trials, and the development of more personalized and multifaceted approaches to improve outcomes for individuals with diabetes (PUBMED:32864583, PUBMED:36709085, PUBMED:38382332, PUBMED:34909644, PUBMED:28282929).
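As a quick sanity check on the epidemiological figures quoted from PUBMED:32864583 (415 million people with diabetes in 2015, projected to reach 642 million by 2040), the implied compound annual growth rate follows from simple arithmetic; this is an illustrative calculation, not a number reported in the abstracts.

# Implied compound annual growth rate over the 25-year projection window.
cagr = (642 / 415) ** (1 / 25) - 1
print(f"Implied growth: {cagr:.2%} per year")  # ≈ 1.76% per year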
Instruction: Can positron emission tomography be more than a diagnostic tool? Abstracts: abstract_id: PUBMED:32078338 Atherosclerosis Immunoimaging by Positron Emission Tomography. The immune system's role in atherosclerosis has long been an important research topic and is increasingly investigated for therapeutic and diagnostic purposes. Therefore, noninvasive imaging of hematopoietic organs and immune cells will undoubtedly improve atherosclerosis phenotyping and serve as a monitoring method for immunotherapeutic treatments. Among the available imaging techniques, positron emission tomography's unique features make it an ideal tool to quantitatively image the immune response in the context of atherosclerosis and afford reliable readouts to guide medical interventions in cardiovascular disease. Here, we summarize the state of the art in the field of atherosclerosis positron emission tomography immunoimaging and provide an outlook on current and future applications. abstract_id: PUBMED:16900009 Positron emission tomography for assessment of viability. Purpose Of Review: The recent success of magnetic resonance imaging for viability assessment has raised questions about the future role of positron emission tomography and older imaging modalities in the assessment of viability. Recent information, however, indicates that positron emission tomography will remain a valuable tool. Recent Findings: The primary positron emission tomography tracer used for assessment of viability is 18F-fluorodeoxyglucose, a glucose analogue that exhibits enhanced uptake in ischemic tissue. The finding of enhanced 18F-fluorodeoxyglucose uptake and a relative reduction in perfusion is considered the positron emission tomography correlate of myocardial hibernation. The mismatch pattern has been shown to identify patients with improvement in systolic function, heart failure symptoms, and prognosis with revascularization. Mismatch identifies a subset of patients with vulnerable myocardium who have a higher likelihood of a cardiac event compared with those without significant mismatch. Delay in revascularization may pose extra risk for those with mismatch. Positron emission tomography and magnetic resonance imaging demonstrate a close correlation in the detection of viable myocardium. The development of combined positron emission tomography/computed tomography scanners can reduce imaging time and improve functional-anatomic correlations. Summary: Positron emission tomography imaging utilizing 18F-fluorodeoxyglucose and perfusion tracers provides valuable diagnostic and prognostic information in patients with ischemic left ventricular dysfunction and has comparable accuracy to competing technologies for detection of viability. abstract_id: PUBMED:16056188 The diagnostic possibilities of positron emission tomography (PET): applications in oral and maxillofacial buccal oncology. The principles of positron emission tomography (PET), recently introduced as a diagnostic procedure into the health sciences, are described. The principal clinical applications lie in a particular group of specialties: cardiology, neurology, psychiatry, and above all oncology. Positron emission tomography is a non-invasive diagnostic imaging technique with clinical applications. It is an excellent tool for the study of the stage and possible malignancy of tumors of the head and neck, the detection of otherwise clinically indeterminate metastases and lymphadenopathies, and likewise for the diagnosis of relapses.
The only tracer with any practical clinical application is 18F-fluorodeoxyglucose (FDG). PET detects the intense accumulation of FDG produced in malignant tumors due to the increased glycolytic rate of the neoplastic cells. With the introduction of hybrid systems that combine computerized tomography or magnetic resonance with positron emission tomography, important advances are being made in the diagnosis and follow-up of oncologic pathology of the head and neck. abstract_id: PUBMED:38159001 Positron emission tomography in cardiological practice The utility of positron emission tomography in cardiology currently goes beyond ischemic heart disease and covers an increasingly wider range of non-coronary pathology that requires timely expert diagnostics, including chronic heart disease of any etiology, valvular and electrophysiological disorders, and cardio-oncology. The authors emphasize the importance of the development of positron emission tomography technologies in the Russian Federation. This includes the development and implementation of new radiopharmaceuticals for the diagnosis of pathological processes of the cardiovascular system, systemic and local inflammation, including atherosclerosis, impaired perfusion and myocardial metabolism, and also for solving specific diagnostic tasks in comorbid pathology. abstract_id: PUBMED:20846762 Use of positron emission tomography in sarcoidosis FDG-PET, now hybrid positron emission tomography/computed tomography (PET-CT), has become an established diagnostic tool in oncology. Fluorodeoxyglucose ((18)F-FDG) is not specific for malignant lesions, as uptake of the tracer depends on its accumulation in cells with an increased glucose metabolism, as is also the case in infectious and inflammatory lesions, like sarcoidosis. Thus, FDG-PET has been proposed for internal medicine indications, one of which is sarcoidosis. The main characteristics of FDG-PET are its better sensitivity compared to (67)Ga scintigraphy and its ability to be used as an earlier marker of therapeutic response as compared with anatomy-based and conventional scintigraphic imaging. However, FDG-PET should be used in atypical or advanced stages of the disease. Future prospective studies should be awaited before integrating FDG-PET into clinical routine for treatment outcome and disease activity assessment in sarcoidosis. New radiopharmaceutical probes are under development and will improve the performance of PET. abstract_id: PUBMED:29142348 18F-Fluorodeoxyglucose-Positron Emission Tomography/Computed Tomography in Tuberculosis: Spectrum of Manifestations. The objective of this article is to provide an illustrative tutorial highlighting the utility of 18F-fluorodeoxyglucose-positron emission tomography/computed tomography (18F-FDG-PET/CT) imaging to detect the spectrum of manifestations in patients with tuberculosis (TB). FDG-PET/CT is a powerful tool for early diagnosis, measuring the extent of disease (staging), and consequently for evaluation of response to therapy in patients with TB. abstract_id: PUBMED:18795494 Pediatric positron emission tomography-computed tomography protocol considerations. Pediatric body oncology positron emission tomography-computed tomography studies require special considerations for optimal diagnostic performance while limiting radiation exposure to young patients.
Differences from routine adult procedures include the patient preparation phase, radiopharmaceutical dose, computed tomography acquisition parameters, and approach to computed tomography contrast materials and imaging sequence. Attention to these differences defines the best practice for positron emission tomography-computed tomography examinations of children with cancer, contributing to optimal care of these patients. abstract_id: PUBMED:26991705 New Positron-Emission Tomography/Computed Tomography Imaging for Bone Metastases. With the increase in new therapies to treat cancer, improved diagnostic tools are needed to help determine best treatment options. Many radiopharmaceuticals used with positron-emission tomography/computed tomography have been tested to evaluate solid cancers. Two of the newer radiopharmaceuticals are 18F sodium fluoride and radiolabeled choline. This article reviews these new technologies, providing background and potential clinical use. abstract_id: PUBMED:17433969 Positron emission tomography in oncology: a review. Positron emission tomography is an evolving imaging tool that is becoming increasingly available for use in clinical practice. This overview will look at the current evidence for the use of positron emission tomography in imaging different tumour types and the different radiotracers that are either available or being evaluated in an investigational setting. abstract_id: PUBMED:15662519 Positron emission tomography in neuroscience. An integrative part of clinical diagnostic methods and experimental research. The role of molecular neuroimaging techniques is increasing in the understanding of pathophysiological mechanisms of diseases. To date, positron emission tomography is the most powerful tool for the non-invasive study of biochemical and molecular processes in humans and animals in vivo. With the development in radiochemistry and tracer technology, a variety of endogenously expressed and exogenously introduced genes can be analyzed by PET. This opens up the exciting and rapidly growing field of molecular imaging, aiming at the non-invasive localisation of a biological process of interest in normal and diseased cells in animal models and humans in vivo. Besides its usefulness for basic research, positron emission tomography has proven to be superior to conventional diagnostic methods in several clinical indications. This is illustrated by the detection of biological or anatomic changes that cannot be demonstrated by computed tomography or magnetic resonance imaging, sometimes even before symptoms are expressed. The present review summarizes the clinical use of positron emission tomography in neuroscience that has helped elucidate the pathophysiology of a number of diseases and has suggested strategies in the treatment of these patients. Special reference is given to neurovascular, neurodegenerative and neuro-oncological diseases. Answer: Yes, positron emission tomography (PET) can be more than just a diagnostic tool. While PET is widely recognized for its diagnostic capabilities in various medical fields, including cardiology, neurology, psychiatry, and oncology (PUBMED:16056188), it also has applications beyond diagnosis. For instance, PET can be used for monitoring therapeutic responses in conditions such as sarcoidosis (PUBMED:20846762) and tuberculosis (PUBMED:29142348), where it can help evaluate the effectiveness of treatments by measuring changes in metabolic activity associated with disease processes.
Moreover, PET imaging plays a role in assessing disease activity and guiding medical interventions in cardiovascular diseases (PUBMED:32078338). It can provide valuable prognostic information, as seen in the assessment of myocardial viability, where the mismatch pattern of 18F-fluorodeoxyglucose (FDG) uptake and perfusion can identify patients who may benefit from revascularization (PUBMED:16900009). Additionally, PET can be instrumental in the development and implementation of new radiopharmaceuticals for diagnosing various pathological processes, including systemic and local inflammation, such as atherosclerosis (PUBMED:38159001). In the field of oncology, PET is not only used for tumor staging and detection but also for monitoring the response to cancer therapies (PUBMED:17433969). The development of new radiopharmaceutical probes is expected to enhance the performance of PET in this area (PUBMED:20846762). Furthermore, in pediatric oncology, PET-CT protocols are tailored to optimize diagnostic performance while minimizing radiation exposure, contributing to the optimal care of young cancer patients (PUBMED:18795494). In neuroscience, PET is a powerful tool for understanding the pathophysiological mechanisms of diseases and has been shown to detect biological or anatomical changes that other imaging modalities cannot, sometimes even before symptoms are expressed (PUBMED:15662519). Overall, PET's utility extends beyond diagnosis to include disease monitoring, treatment evaluation, and guiding therapeutic interventions, making it a multifaceted tool in modern medicine.
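The abstracts repeatedly attribute PET's value to quantifying FDG accumulation in tissue. The usual clinical summary number for that is the body-weight-normalized standardized uptake value (SUV). The sketch below implements that standard formula; the input values are hypothetical and chosen only to illustrate the unit handling (tissue density assumed ≈ 1 g/mL).

def suv_bw(tissue_kbq_per_ml: float, injected_mbq: float, weight_kg: float) -> float:
    """SUVbw = tissue activity concentration / (injected activity / body weight)."""
    injected_kbq = injected_mbq * 1000.0
    weight_g = weight_kg * 1000.0  # ~1 mL of tissue per gram
    return tissue_kbq_per_ml / (injected_kbq / weight_g)

# Hypothetical lesion: 15 kBq/mL uptake, 370 MBq injected, 70 kg patient.
print(f"SUV = {suv_bw(15.0, 370.0, 70.0):.1f}")  # ≈ 2.8

A dimensionless ratio like this, rather than raw counts, is what lets PET serve for staging and response monitoring across patients and time points instead of anatomy alone.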
Instruction: Beyond Maastricht IV: are standard empiric triple therapies for Helicobacter pylori still useful in a South-European country? Abstracts: abstract_id: PUBMED:25886722 Beyond Maastricht IV: are standard empiric triple therapies for Helicobacter pylori still useful in a South-European country? Background: Empiric triple treatments for Helicobacter pylori (H. pylori) are increasingly unsuccessful. We evaluated factors associated with failure of these treatments in the central region of Portugal. Methods: This single-center, prospective study included 154 patients with positive (13)C-urea breath test (UBT). Patients with no previous H. pylori treatments (Group A, n = 103) received pantoprazole 40 mg 2×/day, amoxicillin 1000 mg 12/12 h and clarithromycin (CLARI) 500 mg 12/12 h, for 14 days. Patients with previous failed treatments (Group B, n = 51) and no history of levofloxacin (LVX) consumption were prescribed pantoprazole 40 mg 2×/day, amoxicillin 1000 mg 12/12 h and LVX 250 mg 12/12 h, for 10 days. H. pylori eradication was assessed by UBT 6-10 weeks after treatment. Compliance and adverse events were assessed by verbal and written questionnaires. Risk factors for eradication failure were determined by multivariate analysis. Results: Intention-to-treat and per-protocol eradication rates were Group A: 68.9% (95% CI: 59.4-77.1%) and 68.8% (95% CI: 58.9-77.2%); Group B: 52.9% (95% CI: 39.5-66%) and 55.1% (95% CI: 41.3-68.2%), with 43.7% of Group A and 31.4% of Group B reporting adverse events. Main risk factors for failure were H. pylori resistance to CLARI and LVX in Groups A and B, respectively. Another independent risk factor in Group A was history of frequent infections (OR = 4.24; 95% CI 1.04-17.24). For patients with no H. pylori resistance to CLARI, a history of frequent infections (OR = 4.76; 95% CI 1.24-18.27) and active tobacco consumption (OR = 5.25; 95% CI 1.22-22.69) were also associated with eradication failure. Conclusions: Empiric first and second-line triple treatments have unacceptable eradication rates in the central region of Portugal and cannot be used, according to Maastricht recommendations. Even for cases with no H. pylori resistance to the used antibiotics, results were unacceptable and, at least for CLARI, are influenced by history of frequent infections and tobacco consumption.
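The confidence intervals in the Portuguese study can be reproduced almost exactly with a Wilson score interval, which suggests (though the abstract does not say) that this is how they were computed. In the sketch below, the success count of 71 is inferred from the reported 68.9% of n = 103; everything else is standard.

import math

def wilson_ci(successes: int, trials: int, z: float = 1.959964) -> tuple:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return center - half, center + half

# Group A intention-to-treat: 68.9% of n = 103 implies ~71 eradications (inferred).
lo, hi = wilson_ci(71, 103)
print(f"{71/103:.1%} (95% CI: {lo:.1%}-{hi:.1%})")  # 68.9% (95% CI: 59.4%-77.1%)

The output matches the abstract's reported interval of 59.4-77.1% to the rounding shown, which is a useful cross-check when re-using such figures.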
Summary. Helicobacter pylori is still the most widespread infection in the world: its overall prevalence is 70-80% in developing regions, but fortunately it is decreasing in the Western world. The prevalence in blood donors from South-Eastern Hungary decreased from 63% in the 1990's to 32% in 2019. Migration constitutes an increased risk of infection for the destination countries. Immunohistochemistry has proven to be more accurate in histological diagnosis than the conventional Giemsa stain. The sensitivity and accuracy of artificial intelligence as compared to videoendoscopy were 87% and 86%, respectively. Next-generation sequencing makes it possible to determine susceptibility to several antibiotics from a single biopsy sample. The European Register on the management of Helicobacter pylori infection revealed that, between 2013 and 2018, concomitant quadruple and 14-day bismuth-based therapies were more efficient than triple combinations, although their incorporation in practice is a long-lasting process, with large geographical variations. The novel type of coronavirus (SARS-CoV-2) can also occur in Helicobacter pylori-infected patients, mutually enhancing their pathogenetic effects. Diagnostic possibilities are limited in this setting. The use of proton pump inhibitors increases the risk of viral infection and the severity of the disease. Eradication treatment seems justified in patients with previously known peptic ulcers or gastrointestinal bleeding, or before starting anticoagulant treatment, but must be postponed until after resolution of the viral infection. The effect of probiotics on eradication was addressed by 20 medium-to-low quality meta-analyses, and so the recommendations of the guidelines are equivocal; this must be clarified in the future with higher quality studies. Orv Hetil. 2021; 162(32): 1275-1282. abstract_id: PUBMED:22951408 Helicobacter pylori - 2012 The author overviews some aspects of literature data of the past 2 years. Genetic research has identified polymorphisms of Helicobacter pylori virulence factors and the host which could play a role in the clinical outcome of the infection (peptic ulcer or gastric cancer). So far they have been performed in research centers but, with a decrease of costs, they will take their place in diagnosing the diseases and tailoring the treatment. Antibiotic resistance is still growing in Southern European countries and is decreasing in Belgium and Scandinavia. Currently, the clarithromycin resistance rate is 17-33% in Budapest and levofloxacin resistance has reached 27%. With careful assessment of former antibiotic use, resistance to certain antibiotics can be avoided and the rates of eradication improved. Immigration is a growing problem worldwide: according to Australian, Canadian and Texan studies, the prevalence of Helicobacter pylori is much higher in immigrant groups than in the local population. An Italian study showed that the eradication rate of triple therapy is significantly lower in Eastern European immigrants than in Italians. Recent research has suggested a link between female/male infertility, habitual abortion and Helicobacter pylori infection.
However, there are no published data or personal experience to show whether successful eradication of the infection in these cases is followed by successful pregnancies or not. The author overviews the Maastricht process and analyzes the provisions of the Maastricht IV/Florence consensus, in which the new diagnostic algorithms and indications of eradication therapy are reformulated according to the latest levels of evidence and recommendation grading. According to the "test and treat" strategy, either the urea breath test or the stool monoclonal antigen test is recommended as a non-invasive diagnostic method in primary care. Endoscopy is still recommended in case of alarm symptoms, complicated ulcer, or if there is a suspicion of malignancy or MALT lymphoma. Local resistance to clarithromycin and levofloxacin should be considered in the choice of first-line therapy; at resistance levels >15-20%, these compounds should not be used. In regions with low resistance rates, classical triple therapy remains the regimen of choice; its alternative is the bismuth-based quadruple therapy. Determining antimicrobial resistance is justified after failed second- or third-line therapies; where available, molecular methods (fluorescence in situ hybridization, polymerase chain reaction) should be used. As second/third-line treatments, the sequential, bismuth-based quadruple, concomitant quadruple, and hybrid regimens are all possible alternatives. The Hungarian diagnostic and therapeutic approach in practice differs in some aspects from the provisions of the European consensus. Orv. Hetil., 2012, 153, 1407-1418. abstract_id: PUBMED:23758027 Helicobacter pylori - Update 2013 Helicobacter pylori has an important role in the pathogenesis of peptic ulcer, adenocarcinoma of the stomach, lymphoma of the stomach and autoimmune gastritis. Furthermore, Helicobacter pylori is involved in the development of symptoms in patients with dyspepsia. Guidelines of the German Society of Digestive Diseases (DGVS) and recommendations of the European Helicobacter Study Group (Maastricht Consensus) exist for the diagnosis and treatment of Helicobacter pylori and were recently published in updated versions. The German approval and introduction of a new quadruple eradication therapy for Helicobacter pylori infections is a good occasion to outline and discuss the current state of the art of diagnosis and treatment of Helicobacter pylori in Germany. abstract_id: PUBMED:23929066 Current recommendations for Helicobacter pylori therapies in a world of evolving resistance. Occurrence of resistance, especially to clarithromycin, renders the standard triple therapy used to cure Helicobacter pylori infection ineffective. This review presents the bacteriological and pharmacological basis for H. pylori therapy and the current recommendations. The third-line treatment must be based on clarithromycin susceptibility testing. If the bacteria are still susceptible, failure may come from problems of compliance, hyperacidity or high bacterial load, which can be overcome. If the bacteria are resistant, different regimens must be considered, including bismuth and non-bismuth-based quadruple therapies (sequential or concomitant), as well as triple therapies where amoxicillin is administered several times a day to obtain an optimal concentration at the gastric mucosal level. The treatments are becoming more and more complex and ecologically unsatisfactory, waiting for new agents or vaccines.
abstract_id: PUBMED:15722982 The diagnosis of Helicobacter pylori infection: guidelines from the Maastricht 2-2000 Consensus Report The European Helicobacter pylori Study Group (EHPSG), during the Maastricht 2-2000 Workshop, revised and updated the original guidelines on the management of Helicobacter pylori (H. pylori) infection. The present review focuses on the diagnostic approach for patients referred to primary care as well as to the specialist. Currently, two categories of diagnostic methods can be used to detect H. pylori: invasive (urease test, histological detection, culture, polymerase chain reaction, smear examination, string test) or non-invasive (serology, urea breath test, antigen stool assay, "doctor's tests") tests. These methods vary in their sensitivity and specificity, and the choice depends on the situation, for example, whether the aim is to detect infection or the success of eradication treatment. The urea breath test (UBT) and the antigen stool assay are recommended by the EHPSG in patients without alarm symptoms or under 45 years of age, at low risk of malignancy, in the "test and treat" strategy. Confirmation of H. pylori eradication following treatment should be tested by UBT; a stool antigen assay is the alternative if the former is not available. Important added value can be gained from other tests: histology allows evaluation of the status of the mucosa, while culture allows strain typing and tests for antibiotic susceptibility. abstract_id: PUBMED:24656156 Eradication of Helicobacter pylori infection. Eradication of Helicobacter pylori infection has become an important issue recently, because this bacterial species cluster can cause many gastrointestinal diseases. Elevated antibiotic resistance is related to an increasing failure rate of H. pylori eradication. Standard triple therapy is still the first-line therapy; however, according to the Maastricht IV Consensus Report, it should be abandoned in areas of high clarithromycin resistance. Alternative first-line therapies include bismuth-containing quadruple therapy and sequential, concomitant, and hybrid therapies. Quinolone-based triple therapy may be considered as first-line therapy in areas of clarithromycin resistance >15-20% and quinolone resistance <10%. A unique second-line therapy is still unclear, and bismuth-containing quadruple therapy or levofloxacin-based triple therapy can be used as rescue treatment. Third-line therapy should be under culture guidance to select the most effective regimens (such as levofloxacin-based, rifabutin-based, or furazolidone-based therapies). Antibiotic resistance, patient compliance, and CYP 2C19 genotypes could influence the outcome. Clinicians should use antibiotics according to local reports. abstract_id: PUBMED:15559530 Eradication of Helicobacter pylori infection in Europe: a meta-analysis based on congress abstracts, 1997-2002 Background: Meta-analyses have evaluated several aspects of Helicobacter pylori eradication based on randomised controlled trials. Aim: To perform a meta-analysis of the papers presented at the European Helicobacter Pylori Study Group and United European Gastroenterology Week meetings from 1997 to 2002. Methods: Abstracts dealing with the eradication of Helicobacter pylori were reviewed and the randomised, controlled studies from European countries were included. The studies were classified into groups based on eradication schedules, antibiotics used and country of provenance.
The pooled eradication rates were calculated and the differences were assessed by multiple variance analysis. Results: One hundred and two studies were accepted, comprising 25,644 cases and 398 treatment arms. The eradication rate of proton pump inhibitor-based first-line triple therapies was 80.4% (confidence interval: 78.9-81.8); no difference was observed between the five proton pump inhibitors (p > 0.05). Ranitidine bismuth citrate-based regimens were efficient in 79.9% (75.7-84.0) (p = 0.95 vs PPI). H2 blocker-based therapies achieved 68.6% (59.0-78.1) (p = 0.0007 vs proton pump inhibitor and p = 0.005 vs ranitidine bismuth citrate-based regimens). Proton pump inhibitor-based double combinations were efficient in 47.1% (31.9-62.4) (p = 0.001 vs triple regimens). Clarithromycin+amoxicillin and clarithromycin+nitroimidazole combinations achieved rates of 79.6% and 84.1%, respectively, while amoxicillin-nitroimidazole regimens were less efficient (72.5%, 66.6-78.5) (p = 0.006). The pooled eradication rate of second-line triple regimens was 75.5% (69.9-86.4) (p = 0.08 vs primary treatment). Quadruple therapies were successful in 81.1% (76.6-85.6) of cases as first-line and 73.8% (61.2-86.4) as second-line regimens (p = 0.77 and p = 0.02 vs triple regimens). The pooled eradication rates varied from 58% to 92% across the European countries. Conclusions: The pooled eradication rate of the primary proton pump inhibitor/ranitidine bismuth citrate-based triple regimens is comparable with the results of meta-analyses. H2 blocker-based triple and proton pump inhibitor-based double regimens are of lower efficacy. Quadruple regimens were not better than triple therapies. The eradication rates per country varied, approaching 80% in most places. The results confirm in part, post hoc, the validity of the Maastricht consensus recommendations. abstract_id: PUBMED:29520199 Helicobacter Pylori Treatment Results in Slovenia in the Period 2013-2015 as a Part of European Registry on Helicobacter Pylori Management. Background: Helicobacter pylori (H. pylori) is the most common chronic bacterial infection in the world, affecting over 50% of the world's population. H. pylori is a grade I carcinogen, responsible for the development of 89% of noncardia gastric cancers. In the present study we analyzed the data for H. pylori eradication treatments in Slovenia. Patients And Methods: Slovenia has been a part of the European Registry on Helicobacter pylori Management from the beginning. In seven medical institutions, data for H. pylori eradication treatments were collected for 1774 patients from April 16th 2013 to May 15th 2016. For further modified intention-to-treat (mITT) analysis 1519 patients were eligible, and for per-protocol (PP) analysis 1346 patients. Results: Patient dropout was 11.4%. The eradication rate for 7-day triple therapy with proton pump inhibitor (PPI) + Clarithromycin (C) + Amoxicillin (A) was 88.7% PP and 72.0% mITT; for PPI + C + Metronidazole (M), 85.2% PP and 84.4% mITT. Second-line 14-day therapy with PPI + A + Levofloxacin had a 92.3% eradication rate PP and 87.1% mITT. Ten- to fourteen-day bismuth quadruple therapy was the therapy for difficult-to-treat patients. In the end, all patients who adhered to the prescribed regimens were cured of their H. pylori infection. Conclusions: The high dropout rate deserves further analysis. Slovenia is still a country with <15% H. pylori resistance to clarithromycin; triple therapy with PPI plus two antibiotics reaches a PP eradication rate >85%, but mITT eradication rates are suboptimal.
abstract_id: PUBMED:14526487 The management of Helicobacter pylori infection in adults--statement of the Polish Working Group and the Maastricht Consensus 2-2000 Current standards in the management of Helicobacter pylori infection according to the guidelines of the European (Maastricht 2-2000 Consensus Report) and Polish Study Groups (IX Congress of the Polish Gastroenterology Association) are presented in this paper. A comparative analysis was performed and the diagnostic methods were described. The indications for Helicobacter pylori infection treatment, especially in gastric and duodenal ulcer disease (including disease complicated by bleeding), gastric precancerous conditions, functional dyspepsia, gastroesophageal reflux disease, and non-steroidal anti-inflammatory therapy, were discussed. The acceptable schedules of first and second-line therapy as well as the problem of drug resistance were presented. Special attention was drawn to differences between the two sets of guidelines with respect to the latest scientific reports. Answer: The standard empiric triple therapies for Helicobacter pylori (H. pylori) are increasingly unsuccessful in South European countries, as evidenced by a study conducted in the central region of Portugal. The study found that empiric first and second-line triple treatments have unacceptable eradication rates, with intention-to-treat and per-protocol eradication rates of 68.9% and 68.8% for first-line treatments, and 52.9% and 55.1% for second-line treatments, respectively. The main risk factors for treatment failure were H. pylori resistance to clarithromycin (CLARI) and levofloxacin (LVX) in the first and second-line treatments, respectively. Other independent risk factors for eradication failure in the first-line group included a history of frequent infections and active tobacco consumption. These findings suggest that according to Maastricht recommendations, these treatments cannot be used in the central region of Portugal, even in cases with no H. pylori resistance to the used antibiotics (PUBMED:25886722). Additionally, the European Register on the management of Helicobacter pylori infection revealed that concomitant quadruple and 14-day bismuth-based therapies are more efficient than triple combinations, although their incorporation in practice is a long-lasting process with large geographical variations (PUBMED:34370686). This indicates a shift towards more effective treatment regimens in response to the evolving resistance patterns. In light of the increasing antibiotic resistance, particularly to clarithromycin, alternative first-line therapies such as bismuth-containing quadruple therapy, sequential, concomitant, and hybrid therapies have been suggested. Quinolone-based triple therapy may be considered as first-line therapy in areas with high clarithromycin resistance and low quinolone resistance (PUBMED:24656156). Therefore, the standard empiric triple therapies for H. pylori are no longer considered useful in South European countries, and alternative treatment regimens are recommended to improve eradication rates in the face of growing antibiotic resistance.
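An editorial aside on the statistics quoted above: the abstracts report eradication rates with 95% confidence intervals, and such intervals can be reproduced with a few lines of code. The sketch below is not from any of the cited papers; it assumes the reported intervals are Wilson score intervals and that Group A's 68.9% intention-to-treat rate corresponds to 71 of 103 patients (a reconstruction, since the raw count is not stated in the abstract).

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 for a 95% CI)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Reconstructed Group A intention-to-treat result: 71/103 eradicated
low, high = wilson_ci(71, 103)
print(f"{71/103:.1%} (95% CI: {low:.1%}-{high:.1%})")  # 68.9% (95% CI: 59.4%-77.1%)
```

Running this reproduces the 68.9% (59.4-77.1%) reported for Group A, which is consistent with (though not proof of) a Wilson-type interval having been used in the original analysis.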
Instruction: Measurement of anal sphincter muscles: endoanal US, endoanal MR imaging, or phased-array MR imaging? Abstracts: abstract_id: PUBMED:11425977 Measurement of anal sphincter muscles: endoanal US, endoanal MR imaging, or phased-array MR imaging? A study with healthy volunteers. Purpose: To compare endoanal ultrasonography (US), endoanal magnetic resonance (MR) imaging, and phased-array MR imaging for anal sphincter muscle measurement. Materials And Methods: Sixty healthy volunteers underwent 1.5-T phased-array MR, endoanal MR, and endoanal US examinations. Sphincter muscle thicknesses were measured. Measurement reliability was analyzed, and correlations among the imaging methods were calculated. Multivariate analysis was performed to assess the influence of age, weight, height, sex, parity, and obstetric trauma on sphincter dimensions. Results: Both MR methods had good reliability for measurements of all sphincter components, whereas endoanal US was reliable for internal sphincter measurement only. There was little correlation between the techniques, except between the two MR techniques, with a strong correlation for total sphincter and perineal body thickness. The internal sphincter thickened significantly (P = .002) with age at endoanal US and endoanal MR imaging but not at phased-array MR imaging. There were small sex-based differences in sphincter muscle measurements at phased-array MR imaging only. Conclusion: Endoanal US enables reliable measurement of only internal sphincter thickness, whereas both MR imaging methods enable reliable measurement of all sphincter components. Sphincter measurement with phased-array MR imaging is as reliable as that with endoanal MR imaging. abstract_id: PUBMED:16014438 Anal sphincter defects in patients with fecal incontinence: endoanal versus external phased-array MR imaging. Purpose: To prospectively compare external phased-array magnetic resonance (MR) imaging with endoanal MR imaging in depicting external and internal anal sphincter defects in patients with fecal incontinence and to prospectively evaluate observer reproducibility in the detection of external and internal anal sphincter defects with both MR imaging techniques. Materials And Methods: The medical ethics committees of both participating hospitals approved the study, and informed consent was obtained. Thirty patients (23 women, seven men; mean age, 58.7 years; range, 37-78 years) with fecal incontinence underwent MR imaging with both endoanal and external phased-array coils. MR images were evaluated by three radiologists with different levels of experience for external and internal anal sphincter defects. Measures of inter- and intraobserver agreement of both MR imaging techniques and of differences between both imaging techniques were calculated. Results: Both MR imaging techniques did not significantly differ in the depiction of external (P > .99) and internal (P > .99) anal sphincter defects. The techniques corresponded in 25 (83%) of 30 patients for the depiction of external anal sphincter defects and in 28 (93%) of 30 patients for the depiction of internal anal sphincter defects. Interobserver agreement was moderate to good for endoanal MR imaging and poor to fair for external phased-array MR imaging. Intraobserver agreement ranged from fair to very good for both imaging techniques. Conclusion: External phased-array MR imaging is comparable to endoanal MR imaging in the depiction of clinically relevant anal sphincter defects.
Because of the weak interobserver agreement, both MR imaging techniques can be recommended in the diagnostic work-up of fecal incontinence only if sufficient experience is available. abstract_id: PUBMED:10517453 Endoanal MR imaging of the anal sphincter in fecal incontinence. Fecal incontinence is a major medical and social problem. The most frequent cause is a pathologic condition of the anal sphincter. Endoanal magnetic resonance (MR) imaging allows detailed visualization of the normal anatomy and pathologic conditions of the anal sphincter. The hyperintense internal sphincter appears as a continuation of the smooth muscle of the rectum; the hypointense external sphincter surrounds the lower part of the internal sphincter. A sphincteric defect is seen as a discontinuity of the muscle ring. Scarring appears as a hypointense deformation of the normal pattern of the muscle layer. Two external sphincteric patterns may be misdiagnosed as defects: a posterior discontinuity (often seen in young male patients) and an anterior discontinuity (often seen in female patients). Atrophy of the external sphincter is easily detected on coronal MR images by comparing the thicknesses of all anal muscles. Endoanal MR imaging is superior to endoanal ultrasonography because of the multiplanar capability and higher inherent contrast resolution of the former. Use of endoanal MR imaging may lead to better selection of candidates for surgery and therefore better surgical results. Endoanal MR imaging is the most accurate technique for detection and characterization of sphincteric lesions and planning of optimal therapy. abstract_id: PUBMED:17255418 External anal sphincter defects in patients with fecal incontinence: comparison of endoanal MR imaging and endoanal US. Purpose: To prospectively compare in a multicenter study the agreement between endoanal magnetic resonance (MR) imaging and endoanal ultrasonography (US) in depicting external anal sphincter (EAS) defects in patients with fecal incontinence. Materials And Methods: The study was approved by the medical ethics committee of all participating centers. A total of 237 consenting patients (214 women, 23 men; mean age, 58.6 years +/- 13 [standard deviation]) with fecal incontinence were examined from 13 different hospitals by using endoanal MR imaging and endoanal US. Patients with an anterior EAS defect depicted on endoanal MR images and/or endoanal US scans underwent anal sphincter repair. Surgical findings were used as the reference standard in the determination of anterior EAS defects. The Cohen kappa statistic and McNemar test were used to calculate agreement and differences between diagnostic techniques. Results: Agreement between endoanal MR imaging and endoanal US was fair for the depiction of sphincter defects (kappa = 0.24 [95% confidence interval: 0.12, 0.36]). At surgery, EAS defects were found in 31 (86%) of 36 patients. There was no significant difference between MR imaging and US in the depiction of sphincter defects (P = .23). Sensitivity and positive predictive value were 81% and 89%, respectively, for endoanal MR imaging and 90% and 85%, respectively, for endoanal US. Conclusion: In the selection of patients for anal sphincter repair, both endoanal MR imaging and endoanal US are sensitive tools for preoperative assessment, and both techniques can be used to depict surgically repairable anterior EAS defects. abstract_id: PUBMED:10429703 Fecal incontinence: endoanal US versus endoanal MR imaging. 
Purpose: To assess endoanal ultrasonography (US) and endoanal magnetic resonance (MR) imaging for mapping of anal sphincter defects that have been validated at surgery in patients with fecal incontinence. Materials And Methods: US, MR imaging, and surgical findings in 22 women with fecal incontinence who underwent sphincter repair were retrospectively reviewed. US and MR imaging had been performed before surgery. The findings were evaluated separately and validated with surgical results. Results: Endoanal MR imaging findings showed better agreement with surgical results than did endoanal US findings for diagnosis of lesions of the external sphincter (kappa value, 0.85 vs 0.53) and of the internal sphincter (kappa value, 0.64 vs 0.49). Endoanal US could not accurately demonstrate thinning of the external sphincter. MR imaging results correlated moderately with US results (kappa = 0.39). If endoanal MR images alone had been considered, the correct surgical decision would have been made in 21 (95%) patients; if endoanal US images alone had been considered, the correct decision would have been made in 17 (77%) patients. Conclusion: MR imaging is more accurate than US for demonstration of sphincter lesions. MR imaging provides higher spatial resolution and better inherent image contrast for lesion characterization. Endoanal MR imaging allows more precise description of the extent and structure of complex lesions and is superior for help in decisions about optimal therapy. abstract_id: PUBMED:35787708 3T external phased-array magnetic resonance imaging in detection of obstetric anal sphincter lesions: a pilot study. Background: Three-dimensional endoanal ultrasound (3D EAUS) has been the gold standard for detecting anal sphincter lesions in patients with a history of obstetric anal sphincter injury (OASI). Advances in imaging technologies have facilitated the detection of these lesions with external phased-array magnetic resonance imaging (MRI), which could offer an alternative imaging modality for the diagnosis of residual OASI (ROASI) in centers where 3D EAUS imaging is not available. Purpose: To compare two diagnostic modalities: the 3D EAUS and 3T external phased-array MRI in the detection of residual anal sphincter lesions. Material And Methods: A total of 24 women with a history of OASI were imaged with both 3D EAUS and 3T external phased-array MRI after primary repair of the injury. Intraclass correlation (ICC) and interrater reliability (IRR) values were calculated for the grade and circumference of the sphincter lesion. Sphincter lesions were graded according to the Sultan classification. Results: There was an almost perfect agreement between 3D EAUS and 3T external phased-array MRI in determining the extent of the sphincter lesions according to the Sultan classification (κ = 0.881; P < 0.001) and the circumference of the external anal sphincter defects, measured in degrees (κ = 0.896; P < 0.001). Conclusion: The results of this study indicate that 3T external phased-array MRI and 3D EAUS yield comparable results in the diagnosis of ROASI. These findings suggest that 3T external phased-array MRI could serve as an alternative diagnostic modality to 3D EAUS in the diagnosis of ROASI. abstract_id: PUBMED:12552401 Endoanal MR imaging: diagnostic assessment Endoanal MR imaging is an alternative to anal endosonography for the acquisition of high-resolution images of the external and internal anal sphincter.
A dedicated anal receiver coil is placed in the anus so that it spans the sphincter complex. Highly detailed images of the sphincters can be obtained in any plane and the morphological abnormalities found in various types of anal incontinence can be demonstrated. Whilst MR demonstrates external sphincter disruption with an efficacy similar to that of endosonography, it is better able to demonstrate external sphincter atrophy that is presumed secondary to neuropathy. The finding of coexisting muscular atrophy on MR may prejudice the effects of anal sphincter repair for obstetric disruption. abstract_id: PUBMED:30915565 Comparison of 3D endoanal ultrasound and external phased array magnetic resonance imaging in the diagnosis of obstetric anal sphincter injuries. Objectives: The gold standard of postpartum anal sphincter imaging has been the 3D endoanal ultrasound (EAUS). Development of magnetic resonance imaging (MRI) has allowed anal sphincter evaluation without the use of endoanal coils. The aim of this study is to compare these two modalities in diagnosing residual sphincter lesions post obstetric anal sphincter injury (OASI). Methods: Forty women were followed up after primary repair of OASI with both 3D EAUS and external phased array MRI. Details of the anal sphincter injury and sphincter musculature were gathered and analysed. Results: There was a moderate interrater reliability (κ = 0.510) between the two imaging modalities in detecting sphincter lesions, with more lesions detected by MRI. There was a moderate intraclass correlation (ICC) between the circumference of the tear (κ = 0.506) and a fair ICC between the external anal sphincter thickness measurements at locations 3 and 9 on the proctologic clock face (κ = 0.320) and (κ = 0.336). Conclusions: The results of our study indicate that the use of external phased array MRI is feasible for detecting obstetric anal sphincter lesions postpartum. This allows for imaging of the sphincter defects in centres where EAUS imaging is not available. Key Points: • A two centre prospective study that showed external phased array MRI to be a valid imaging modality for diagnosing obstetric anal sphincter injuries. abstract_id: PUBMED:7480737 Anal sphincter complex: endoanal MR imaging of normal anatomy. Purpose: To determine the normal anatomy of the anal sphincter complex on magnetic resonance (MR) images. Materials And Methods: Ten healthy volunteers (four men, six women; age range, 21-26 years) underwent MR imaging with an endoanal coil. Results: The lower part of the anal canal contained the internal sphincter, the longitudinal muscle layer, and the external sphincter; the upper part comprised the internal sphincter, the longitudinal layer, and the puborectal muscle. At the upper end, the puborectal muscle was attached to the levator ani muscle. Anteriorly, the external sphincter was connected to the urogenital diaphragm; posteriorly, it was attached to the coccyx with the anococcygeal ligament. All perianal spaces were visible. The morphology of the anterior part of the external sphincter, different in men and women, was well displayed. Conclusion: Depiction of the anal sphincter complex on MR images is excellent. abstract_id: PUBMED:9207525 High-resolution MR imaging of the anal sphincter in children: a pilot study using endoanal receiver coils. Objective: The purpose of this study was to obtain high-resolution MR images of the various components of the anal sphincter complex in children who have anorectal disorders. 
We therefore used dedicated endoanal receiver coils for MR imaging. Conclusion: Our pilot study suggested that MR imaging that uses a dedicated endoanal coil may have considerable diagnostic potential in children who have anorectal disorders. Answer: The measurement of anal sphincter muscles can be performed using different imaging techniques, including endoanal ultrasonography (US), endoanal magnetic resonance (MR) imaging, and phased-array MR imaging. Each of these methods has its own advantages and limitations. Endoanal US is reliable for measuring the thickness of the internal sphincter only, but it does not provide reliable measurements for all sphincter components (PUBMED:11425977). On the other hand, both endoanal MR imaging and phased-array MR imaging have good reliability for measurements of all sphincter components (PUBMED:11425977). Phased-array MR imaging is comparable to endoanal MR imaging in the depiction of clinically relevant anal sphincter defects (PUBMED:16014438), and it can be used as an alternative to endoanal US in centers where endoanal US is not available (PUBMED:35787708, PUBMED:30915565). Endoanal MR imaging is superior to endoanal US in terms of multiplanar capability and higher inherent contrast resolution, which leads to better selection of candidates for surgery and better surgical results (PUBMED:10517453). It is also more accurate than US for demonstrating sphincter lesions, providing higher spatial resolution and better inherent image contrast for lesion characterization (PUBMED:10429703). Additionally, endoanal MR imaging is effective in demonstrating external sphincter atrophy, which is presumed secondary to neuropathy (PUBMED:12552401). In summary, while endoanal US is reliable for internal sphincter measurement, both endoanal MR imaging and phased-array MR imaging are reliable for measuring all sphincter components. Endoanal MR imaging offers superior visualization and characterization of sphincter lesions, which is crucial for planning optimal therapy. Phased-array MR imaging provides a non-invasive alternative with comparable results to endoanal MR imaging and can be used in the diagnosis of residual obstetric anal sphincter injuries (ROASI) (PUBMED:35787708, PUBMED:30915565).
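Several of the abstracts above summarize between-modality agreement with the Cohen kappa statistic (e.g., kappa = 0.24 between endoanal MR and US, kappa = 0.85 against surgical findings). For readers who want to see what such a value actually computes, here is a minimal illustrative sketch; the two rating vectors are invented for the example and are not patient data from the cited studies.

```python
def cohen_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Cohen's kappa for two raters assigning binary labels (1 = defect seen)."""
    n = len(rater_a)
    # Observed agreement: fraction of cases with identical labels
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal positive rate
    pa, pb = sum(rater_a) / n, sum(rater_b) / n
    p_e = pa * pb + (1 - pa) * (1 - pb)
    return (p_o - p_e) / (1 - p_e)

# Invented example: defect calls by MR imaging vs US in 10 patients
mr_calls = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
us_calls = [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]
print(round(cohen_kappa(mr_calls, us_calls), 2))  # 0.6: raw agreement 0.8, chance 0.5
```

Kappa corrects raw agreement for what two raters would be expected to agree on by chance alone, which is why a seemingly high 80% raw agreement can shrink to a "moderate" kappa of 0.6.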
Instruction: Feedback to Supervisors: Is Anonymity Really So Important? Abstracts: abstract_id: PUBMED:27028033 Feedback to Supervisors: Is Anonymity Really So Important? Purpose: Research demonstrates that physicians benefit from regular feedback on their clinical supervision from their trainees. Several features of effective feedback are enabled by nonanonymous processes (i.e., open feedback). However, most resident-to-faculty feedback processes are anonymous given concerns of power differentials and possible reprisals. This exploratory study investigated resident experiences of giving faculty open feedback, its advantages, and its disadvantages. Method: Between January and August 2014, nine graduates of a Canadian Physiatry residency program that uses open resident-to-faculty feedback participated in semistructured interviews in which they described their experiences of this system. Three members of the research team analyzed transcripts for emergent themes using conventional content analysis. In June 2014, semistructured group interviews were held with six residents who were actively enrolled in the program as a member-checking activity. Themes were refined on the basis of these data. Results: Advantages of the open feedback system included giving timely feedback that was acted upon (thus enhancing residents' educational experiences) and improved ability to receive feedback (thanks to observing modeled behavior). Although some disadvantages were noted, they were often speculative (e.g., "I think others might have felt …") and were described as outweighed by advantages. Participants emphasized the program's "feedback culture" as an open feedback enabler. Conclusions: The relationship between the feedback giver and recipient has been described as influencing the uptake of feedback. Findings suggest that nonanonymous practices can enable a positive relationship in resident-to-faculty feedback. The benefits of an open system for resident-to-faculty feedback can be reaped if a "feedback culture" exists. abstract_id: PUBMED:2269169 Acoustic feedback for probing at constant force Probing of the gingival crevice is generally recognized as the most important diagnostic procedure in periodontitis. Reliable measurements are only possible by probing at constant force. A freshly isolated porcine mandible was used to test whether acoustic feedback enhances the reliability of probing. Our results indicate that the feedback significantly reduces the total probing force and also the variance between single measurements. abstract_id: PUBMED:3687169 Marital feedback behavior: relations between feedback activity of the partner and feedback quality, duration of the marriage and ability of the marriage to function The effect of the quality of feedback and the feedback activity of partners on the feedback behavior of married couples, who differed with regard to the duration and the functioning of their marriages, was investigated. 106 married couples were divided into a group of disturbed marriages in the first half of life (GJE), a group of disturbed marriages in the second half of life (GAE), and harmonious marriages (HAE). During a conflict conversation structured according to the revealed-differences technique, the partners exchanged positive and negative feedback optically and acoustically. Based on the frequency of feedback, active and less active partners were differentiated.
In comparison with the couples of the HAE group, couples of the GAE group gave less feedback, especially less positive feedback. The less active partner of the GAE group gave significantly more negative feedback than the comparable partner of the HAE group. No statistically significant differences were found between the GAE and GJE groups. The results are discussed with respect to their practical application. abstract_id: PUBMED:35899738 Feedback in medical education - separate coaching for improvement from summative assessment A supervisor's feedback can change a medical learner's behaviour consistently if the learner views the supervisor as a credible role model. A learner's trust in the supervisor is a prerequisite for feedback to contribute to effective learning. In current educational practice, coaching for improvement and summative assessment are frequently mixed, which leads medical learners to experience workplace-based assessments as tests and makes them unresponsive to formative feedback. Carefully separating coaching for improvement from summative assessment is required to allow the learner to accept and apply the feedback given by the supervisor. Supervisors should focus their attention on providing formative feedback, not on documenting it. The R2C2 model (rapport - receptivity - content - coaching) is a useful tool to effectively provide constructive formative feedback. abstract_id: PUBMED:27334086 Peer feedback for trainers in general practice In medical specialist training programmes it is common practice for residents to provide feedback to their medical trainers. The problem is that due to its anonymous nature, the feedback often lacks the specificity necessary to improve the performance of trainers. If anonymity is to be abolished, there is a need for residents to feel safe in giving their feedback. Another way to improve the performance of trainers might be peer feedback. For peer feedback it is necessary that trainers observe each other during their training sessions with the residents. In speciality training in general practice, peer feedback is done in group sessions of 12 trainers. They show videos of their training sessions and get feedback from their fellow trainers. Trainers also visit each other in their practices to observe training sessions and provide feedback. In order to improve trainer performance, there is a need for more focus on peer feedback in medical specialist training programmes. abstract_id: PUBMED:18976615 Feedback in postgraduate medical training Feedback may be described as a process comprising communication of information and reactions to such communication. It has been defined as specific information about the difference between a trainee's observed performance and a given standard, with the intent of achieving performance improvement. Feedback is essential in medical education and has great implications for the educational climate. It has been shown that a common language regarding the principles of feedback has a sustained effect on the quality and frequency of feedback. Further research is needed on feedback and educational climate, and on how to motivate trainees to improve future learning through feedback. abstract_id: PUBMED:36420849 Audit & Feedback: how it works. This article is the first of a series that aims to describe the Audit & Feedback (A&F) methodology. Some key elements focus on what A&F is and how it works.
While it is an effective tool for promoting change in professional behaviour and improving the quality of care, there is still substantial uncertainty concerning how to implement A&F interventions to maximize their effects. The article explains how to design effective A&F on relevant issues, considering the available literature and direct experiences conducted in the National Health System (NHS). A&F interventions should aim to achieve clear, attainable, and evaluable objectives, which concern aspects of care for which there is solid evidence in the literature and potential space for improvement. Based on data that measure any distance between what is expected and what is observed in local practice, the feedback must be directed to those who can pursue the proposed change and who must trust the data collection and analysis process. Feedback should be provided more than once, in verbal and written form, and might include explicit objectives and an action plan. When planning A&F interventions, it is essential to provide specific data (e.g., aggregated at the level of a team, department, or individual doctor) rather than general data, sending them directly to the professional or department involved rather than generically to the healthcare organization involved. In addition, it is essential to simplify the message so that the staff who receive the feedback can quickly understand the quality of the performance addressed and how to change it. Finally, it is necessary to encourage collaboration between the various healthcare professionals responsible for the quality of care and competence for improvement interventions (health professions, health management, quality expert personnel, and methodologists). Networking between staff improves the knowledge and effectiveness of A&F. This article finally proposes practical examples of two main aspects of A&F planning from the context of the EASY-NET program: how to increase the participation and involvement of the recipients of the intervention, and the related pros and cons regarding the choice between the use of routinely available data from health information systems (SIS) and data collected ad hoc. abstract_id: PUBMED:23739608 Feedback during laparoscopic training A recent Danish study showed that instructor feedback significantly reduced the duration of training time needed for acquiring laparoscopic skills. While there is a clear advantage to trainees reaching a predetermined expert level of performance more rapidly, this does not necessarily imply that the skills were also acquired more efficiently. Experiencing continual feedback while undergoing a training task could reduce the level of difficulty in performing it; the presence of an instructor can also heighten emotional tension. Both of these factors can impair the learning process. For this reason, we recommend self-directed feedback during training on complex laparoscopic skills. abstract_id: PUBMED:37910021 Is time really of the essence? Timeliness of narrative feedback in ophthalmology CBME assessments. Purpose: Competency-based medical education relies on a strong program of assessment, and quality comments play a vital role in ensuring its success. The goal of this study is to determine the effect of the timeliness of assessment completion on the quality of the feedback. Materials And Methods: Using the Quality of Assessment for Learning (QuAL) score, 2478 assessments were evaluated.
The assessments included those completed between July 2017 and December 2020 for 18 ophthalmology residents. Spearman correlation, Mann-Whitney U and Kruskal-Wallis tests were used to assess variations in QuAL scores based on the timeliness of assessment completion. Results: The timeliness of assessment completion ranged from 0 to 299 d, with a mean completion time of 3 d. As the delay increased, the QuAL score decreased. Feedback provided 4, 5, and 14 d post-encounter demonstrated statistically significant differences in the QuAL score. Additionally, there was a significant difference in the timeliness of feedback when there was no written comment. Conclusions: This study demonstrates that the timeliness of assessment completion might have an effect on the quality of written feedback. Written feedback should be completed within 14 d of the encounter to optimize quantity and quality. abstract_id: PUBMED:11828577 What's really important. Sept. 11 caused Americans and Michigan residents to re-evaluate what's really important and raised the public's expectations of institutions and elected officials. The desire for safety, security and ready access to health services in the event of an individual or community emergency vaulted to the top of the public's priorities. Answer: The importance of anonymity in feedback to supervisors is a topic of debate, particularly in the context of medical education and professional development. The study conducted on Canadian Physiatry residency program graduates suggests that nonanonymous feedback, or open feedback, can have several advantages. These include the ability to give timely feedback that is acted upon, thus enhancing the educational experience, and an improved ability to receive feedback, thanks to observing modeled behavior. The study found that while there were some disadvantages noted, they were often speculative and considered to be outweighed by the advantages. A key enabler for open feedback was the program's "feedback culture," which suggests that a positive relationship between the feedback giver and recipient can influence the uptake of feedback (PUBMED:27028033). In contrast, peer feedback for trainers in general practice indicates that anonymous feedback often lacks the specificity necessary to improve performance. It suggests that for feedback to be effective, residents need to feel safe in giving it, which may imply a need for non-anonymity. Peer feedback, where trainers observe each other and provide feedback, is highlighted as another method to improve performance, suggesting that anonymity is not always crucial (PUBMED:27334086). Overall, the literature suggests that while anonymity in feedback can protect against potential reprisals due to power differentials, nonanonymous feedback can foster a culture of openness and timely improvements when a supportive "feedback culture" is in place. The effectiveness of feedback, whether anonymous or not, seems to depend on the context, the existing culture, and the relationships between those giving and receiving feedback.
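The ophthalmology timeliness study above leans on rank-based statistics (Spearman correlation, Mann-Whitney U, Kruskal-Wallis). As a purely illustrative sketch of that style of analysis — the numbers below are invented, not the study's QuAL data, and the 0-5 score range is an assumption — the "longer delay, lower quality" pattern would be tested roughly like this:

```python
from scipy.stats import spearmanr, mannwhitneyu

# Invented (delay in days, QuAL score) pairs; QuAL assumed to range 0-5
delays = [0, 1, 1, 2, 3, 5, 7, 14, 30, 60]
scores = [5, 5, 4, 4, 4, 3, 3, 2, 1, 1]

# Monotonic trend across all delays: a negative rho means later feedback, lower quality
rho, p = spearmanr(delays, scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")

# Two-group comparison, e.g. feedback completed within 14 days vs later
early = [s for d, s in zip(delays, scores) if d <= 14]
late = [s for d, s in zip(delays, scores) if d > 14]
u_stat, p_two = mannwhitneyu(early, late, alternative="greater")
print(f"Mann-Whitney U = {u_stat}, p = {p_two:.4f}")
```

Nothing here depends on the specific values; the point is that both tests use only ranks, so they tolerate the heavily skewed delay distribution reported in the abstract (mean 3 days, range 0-299 days).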
Instruction: Are allopurinol dose and duration of use nephroprotective in the elderly? Abstracts: abstract_id: PUBMED:27296322 Are allopurinol dose and duration of use nephroprotective in the elderly? A Medicare claims study of allopurinol use and incident renal failure. Objective: To assess the effect of allopurinol dose/duration on the risk of renal failure in the elderly with allopurinol use. Methods: We used the 5% random Medicare claims data from 2006 to 2012. Multivariable-adjusted Cox regression analyses assessed the association of allopurinol dose/duration with subsequent risk of developing incident renal failure or end-stage renal disease (ESRD) (no prior diagnosis in the last 183 days) in allopurinol users, controlling for age, sex, race and Charlson-Romano comorbidity index. HRs with 95% CIs were calculated. Sensitivity analyses considered a longer baseline period (365 days), controlled for gout or used more specific codes. Results: Among the 30 022 allopurinol treatment episodes, 8314 incident renal failure episodes occurred. Compared with 1-199 mg/day, allopurinol doses of 200-299 mg/day (HR 0.81; 95% CI 0.75 to 0.87) and ≥300 mg/day (HR 0.71; 95% CI 0.67 to 0.76) had a significantly lower hazard of renal failure in the multivariable-adjusted model, confirmed in multiple sensitivity analyses. Longer duration of allopurinol use was significantly associated with lower hazards in sensitivity analyses (365-day look-back; reference, <0.5 year): 0.5-1 year, 1.00 (0.88 to 1.15); >1-2 years, 0.85 (0.73 to 0.99); and >2 years, 0.81 (0.67 to 0.98). Allopurinol ≥300 mg/day was also associated with a significantly lower risk of acute renal failure and ESRD, with HRs of 0.89 (0.83 to 0.94) and 0.57 (0.46 to 0.71), respectively. Conclusions: Higher allopurinol dose is independently protective against incident renal failure in elderly allopurinol users. A longer duration of allopurinol use may be associated with lower risk of incident renal failure. Potential mechanisms of these effects need to be examined. abstract_id: PUBMED:22575704 Anti-hyperuricemic and nephroprotective effects of Modified Simiao Decoction in hyperuricemic mice. Ethnopharmacological Relevance: Modified Simiao Decoction (MSD), based on clinical experience, has been used for decades and is famous for its efficiency in treating hyperuricemic and gouty diseases. Aim Of The Study: To investigate the anti-hyperuricemic and nephroprotective effects of MSD in potassium oxonate-induced hyperuricemic mice. Materials And Methods: The effects of MSD were investigated in hyperuricemic mice induced by potassium oxonate. MSD was fed to hyperuricemic mice daily at doses of 0.45, 0.90, or 1.80 g/kg for 10 days, and allopurinol (5 mg/kg) was given as a positive control. Serum and urine levels of uric acid and creatinine, and fractional excretion of uric acid (FEUA), were determined by colorimetric methods. Its nephroprotective effects were evaluated by determining a panel of oxidative stress markers after the intervention in hyperuricemic mice. Simultaneously, protein levels of urate transporter 1 (URAT1) and organic anion transporter 1 (OAT1) in the kidney were analyzed by Western blotting. Results: MSD could inhibit xanthine oxidase (XOD) activities in serum and liver, decrease levels of serum uric acid, serum creatinine and BUN, and increase levels of urine uric acid, urine creatinine and FEUA dose-dependently through down-regulation of URAT1 and up-regulation of OAT1 protein expression in the renal tissue of hyperuricemic mice.
It also effectively reversed oxonate-induced alterations in renal MDA levels and SOD activities in this model. Conclusion: MSD possesses uricosuric and nephroprotective actions by regulating renal urate transporters and enhancing antioxidant enzyme activities to improve renal dysfunction in hyperuricemic mice. abstract_id: PUBMED:10821456 Quality use of allopurinol in the elderly. Allopurinol is a commonly prescribed drug. However, the use of this drug is not based on evidence and guidelines. We audited allopurinol prescriptions in patients aged 65 years and over in a teaching hospital over 22 weeks. In 47% of patients the dose was higher than recommended, and in 40% it was lower. Quality use of medications is an important issue to maintain quality of life in the elderly. abstract_id: PUBMED:9869797 Exposure to allopurinol and the risk of cataract extraction in elderly patients. Objective: To determine whether exposure to allopurinol is associated with an increased risk of cataract extraction in elderly patients. Methods: We conducted a case-control study using data from the Quebec universal health insurance program for all elderly patients. The 3677 cases were patients with a cataract extraction between 1992 and 1994. The 21,868 controls were randomly selected among patients not diagnosed with cataract and matched to cases on the date of the extraction. We determined the odds ratio of cataract extraction according to the cumulative dose and duration of allopurinol use relative to nonusers, using conditional logistic regression analysis. The analysis was adjusted for the effects of age, sex, diabetes mellitus, hypertension, glaucoma, and ophthalmic and oral corticosteroid exposure. Results: A cumulative dose of allopurinol of more than 400 g or a duration of use of longer than 3 years was associated with an increased risk of cataract extraction, with odds ratios of 1.82 (95% confidence interval [CI], 1.18-2.80) and 1.53 (95% CI, 1.12-2.08), respectively. No increase in risk was observed for lower cumulative doses or shorter exposure periods. Conclusion: Long-term administration of allopurinol increases the risk of cataract extraction in elderly patients. abstract_id: PUBMED:32131742 Medication burden and inappropriate prescription risk among elderly with advanced chronic kidney disease. Background: Elderly patients with chronic kidney disease (CKD) frequently present comorbidities that put them at risk of polypharmacy and medication-related problems. This study aims to describe the overall medication profile of patients aged ≥75 years with advanced CKD from a multicenter French study, and specifically the renally inappropriate medications (RIMs) and potentially inappropriate-for-the-elderly medications (PIMs) that they take. Methods: This is a cross-sectional analysis of medication profiles of individuals aged ≥75 years with eGFR < 20 ml/min/1.73 m2 followed by a nephrologist, who collected their active prescriptions at the study inclusion visit. Medication profiles were first analyzed according to route of administration and therapeutic classification. Second, patients were classified according to their risk of potential medication-related problems, based on whether the prescription was a RIM or a PIM. RIMs and PIMs were defined according to renal appropriateness guidelines and to Beers criteria in the elderly.
RIMs were subclassified into 4 categories: (a) contraindication; (b) dose modification is recommended based on creatinine clearance (CrCl); (c) dose modification based on CrCl is not recommended but a maximum daily dose is mentioned; (d) no specific recommendations based on CrCl: "use with caution", "avoid in severe impairment", "careful monitoring of dose is required", "reduce the dose". Results: We collected 5196 individual medication prescriptions for 556 patients, for a median of 9 daily medications [7-11]. Antihypertensive agents, antithrombotics, and antianemics were the classes most frequently prescribed. Moreover, 77.0% of patients had at least 1 medication classified as a RIM. RIMs accounted for 31.3% of the drugs prescribed, and 9.25% were contraindicated drugs. At least 1 PIM was taken by 57.6% of patients, and 45.5% of patients had at least one medication classified as both a RIM and a PIM. The prescriptions most frequently requiring reassessment due to potential adverse effects were for proton pump inhibitors and allopurinol. The PIMs for which deprescription is especially important in this population are rilmenidine, long-term benzodiazepines, and anticholinergic drugs such as hydroxyzine. Conclusion: We showed potential drug-related problems in elderly patients with advanced CKD. Healthcare providers must reassess each medication prescribed for this population, particularly the specific medications identified here. Trial Registration: NCT02910908. abstract_id: PUBMED:23345599 Low-dose aspirin use and recurrent gout attacks. Objective: To examine the association between cardioprotective use of low-dose aspirin and the risk of recurrent gout attacks among gout patients. Methods: We conducted an online case-crossover study of individuals with gout over 1 year. The following information was obtained during gout attacks: the onset dates, symptoms and signs, medications, and exposure to potential risk factors, including daily aspirin use and dosage, during the 2-day hazard period prior to the gout attacks. The same exposure information was also obtained over 2-day control periods. Results: Of the 724 participants analysed, 40.5% took aspirin ≤325 mg/day during either a hazard or a control period. Compared with no aspirin use, the adjusted odds of gout attacks were increased by 81% (OR=1.81, 95% CI 1.30 to 2.51) for ≤325 mg/day of aspirin use on two consecutive days. The corresponding ORs were stronger with lower doses (eg, OR=1.91 for ≤100 mg, 95% CI 1.32 to 2.85). These associations persisted across subgroups by sex, age, body mass index categories and renal insufficiency status. Concomitant use of allopurinol nullified the detrimental effect of aspirin. Conclusions: Our findings suggest that the use of low-dose aspirin on two consecutive days is associated with an increased risk of recurrent gout attacks. Recommended serum urate monitoring, with concomitant use and dose adjustment of a urate-lowering therapy among patients with gout, may be especially important to help avoid the risk of gout attacks associated with low-dose aspirin.
This study investigated the antihyperuricemic mechanisms of the extracts obtained from RDSE and its main component dioscin (DIS) in hyperuricemic mice. Hyperuricemic mice were induced by potassium oxonate (250 mg/kg). RDSE or DIS was orally administered to hyperuricemic mice at dosages of 319.22, 638.43, 1276.86 mg/kg/day for 10 days, respectively. Uric acid or creatinine in serum and urine was determined by HPLC or HPLC-MS/MS, respectively. The xanthine oxidase (XO) activities in mouse liver were examined in vitro. Protein levels of organic anion transporter 1 (mOAT1), urate transporter 1 (mURAT1) and organic cation transporter 2 (mOCT2) in the kidney were analyzed by western blotting. The results indicated that uric acid and creatinine in serum were significantly increased by potassium oxonate, as compared with control mice. Compared with the saline-treated group, after RDSE treatment at the high and middle doses, the expression of mOAT1 increased by 47.98% and 54.48%, respectively, accompanied by decreased expression of mURAT1 (47.63%) at the high dose. After DIS treatment at the high, middle and low doses, the expression of mOAT1 increased by 23.93%, 32.80% and 25.28%, respectively, compared with the saline-treated group, accompanied by decreased expression of mURAT1 (51.07%, 51.42% and 51.35%). However, RDSE and DIS displayed a weak XO inhibition activity compared with allopurinol. Therefore, RDSE and DIS possessed uricosuric and nephroprotective actions by regulation of mOAT1, mURAT1 and mOCT2. abstract_id: PUBMED:15292497 Use of single-dose rasburicase in an obese female. Objective: To report the use of single-dose rasburicase in an obese patient. Case Summary: A 53-year-old obese African American woman weighing 136 kg (ideal body weight [IBW] 55 kg) with new-onset chronic myelomonocytic leukemia in leukocytic blast crisis was treated with hydroxyurea 5 g daily. In addition, she received allopurinol 300 mg daily for prevention of tumor lysis syndrome (TLS). The following day, allopurinol was discontinued and rasburicase was administered at a dose of 0.2 mg/kg of IBW for a serum uric acid level of 11.9 mg/dL. The patient's serum uric acid level decreased to 1.9 mg/dL 48 hours after a single dose. Discussion: Rasburicase is indicated for the initial management of elevated plasma uric acid levels in patients with hematologic and solid tumor malignancies who are at risk for TLS. This case is unique because the patient received one dose of rasburicase followed by allopurinol rather than 5 daily doses of rasburicase. Additionally, the dose was based on IBW rather than actual body weight. Efficacy of this approach is apparent from the uric acid levels and the lack of hemodialysis requirements. Conclusions: A single dose of rasburicase (based on IBW) followed by allopurinol can effectively prevent TLS based on serum uric acid concentration. This approach resulted in substantial cost savings. abstract_id: PUBMED:9789727 Gout in the elderly. Clinical presentation and treatment. Gout in the elderly differs from classical gout found in middle-aged men in several respects: it has a more equal gender distribution, frequent polyarticular presentation with involvement of the joints of the upper extremities, fewer acute gouty episodes, a more indolent chronic clinical course, and an increased incidence of tophi.
Long term diuretic use in patients with hypertension or congestive cardiac failure, renal insufficiency, prophylactic low dose aspirin (acetylsalicylic acid), and alcohol (ethanol) abuse (particularly by men) are factors associated with the development of hyperuricaemia and gout in the elderly. Extreme caution is necessary when prescribing nonsteroidal anti-inflammatory drugs (NSAIDs) for the treatment of acute gouty arthritis in the elderly. NSAIDs with short plasma half-lives (such as diclofenac and ketoprofen) are preferred, but these drugs are not recommended in patients with peptic ulcer disease, renal failure, uncontrolled hypertension or cardiac failure. Colchicine is poorly tolerated in the elderly and is best avoided. Intra-articular and systemic corticosteroids are increasingly being used for treating acute gouty flares in aged patients with medical disorders contraindicating NSAID therapy. Urate-lowering drugs are indicated for the treatment of hyperuricaemia and chronic gouty arthritis. Uricosuric drugs are poorly tolerated and the frequent presence of renal impairment in the elderly renders these drugs ineffective. Allopurinol is the urate-lowering drug of choice, but its use in the aged is associated with an increased incidence of both cutaneous and severe hypersensitivity reactions. To minimise this risk, allopurinol dose must be kept low. A starting dose of allopurinol 50 to 100 mg on alternate days, to a maximum daily dose of about 100 to 300 mg, based upon the patient's creatinine clearance and serum urate level, is recommended. Asymptomatic hyperuricaemia is not an indication for long term urate-lowering therapy; the risks of drug toxicity often outweigh any benefit. abstract_id: PUBMED:11114133 Low-dose fludarabine and cyclophosphamide in elderly patients with B-cell chronic lymphocytic leukemia refractory to conventional therapy. Background And Objectives: In recent years fludarabine alone or in combination with other drugs has been reported to be effective in the treatment of B-cell chronic lymphocytic leukemia (B-CLL), both as first line and salvage therapy. Among the different combination regimens, the association of fludarabine and cyclophosphamide has shown a considerable therapeutic efficacy, although a relevant number of infectious complications have been described, particularly in elderly patients. The aim of this work was to evaluate the efficacy, the toxicity, and the incidence of infectious episodes of a regimen combining lower doses of fludarabine and cyclophosphamide in elderly patients with B-CLL refractory to conventional therapy. Design And Methods: Twenty patients with progressive B-CLL with a median age of 75 years (4 in stage B and 16 in stage C) and refractory to conventional therapy were enrolled in this study. The combination regimen was as follows: fludarabine 15 mg/m2/day i.v. [max 25 mg] and cyclophosphamide 200 mg/m2/day i.v. for four days. Results: All patients enrolled were evaluable for response. Three out of 20 (15%) patients achieved a complete remission (CR), 14/20 (70%) a partial response (PR) with an overall response rate (CR+PR) of 85%, according to National Cancer Institute-Working Group response criteria. Three patients were considered resistant. In four out of 20 patients (20%), a severe neutropenia (neutrophils < 0.5 × 10(9)/L) occurred and one of them developed an infectious complication which required treatment with systemic antibiotics and granulocyte colony-stimulating factor (G-CSF).
Non-hematologic toxicity was negligible in all patients but one, who, despite adequate therapy with allopurinol and hydration, experienced a tumor lysis syndrome with transient but severe renal impairment. Interpretation And Conclusions: The association of low-dose fludarabine and cyclophosphamide appeared to be effective in this subset of B-CLL patients, reproducing a similar overall response rate obtained with other fludarabine-based combination therapies. In addition, in this group of elderly patients, toxic side effects were negligible and infectious complications remarkably low. Answer: Yes, allopurinol dose and duration of use appear to be nephroprotective in the elderly. A study using Medicare claims data from 2006 to 2012 found that higher doses of allopurinol were independently protective against incident renal failure in elderly allopurinol users. Specifically, doses of 200-299 mg/day and ≥300 mg/day were associated with significantly lower hazards of renal failure compared to doses of 1-199 mg/day. Additionally, a longer duration of allopurinol use was significantly associated with lower hazards of incident renal failure, particularly for durations of use greater than two years. Allopurinol at a dose of ≥300 mg/day was also associated with a significantly lower risk of acute renal failure and end-stage renal disease (ESRD) (PUBMED:27296322). However, it is important to note that the quality use of allopurinol in the elderly is a concern, as an audit in a teaching hospital revealed that in 47% of patients, the dose was higher than recommended, and in 40%, it was lower (PUBMED:10821456). Furthermore, the use of allopurinol in the elderly must be carefully considered due to the potential for medication-related problems, especially in those with advanced chronic kidney disease (CKD). A study showed that elderly patients with advanced CKD are at risk of polypharmacy and medication-related problems, including the use of potentially inappropriate medications (PIMs) and renally inappropriate medications (RIMs). Allopurinol was identified as one of the prescriptions frequently requiring reassessment due to potential adverse effects (PUBMED:32131742). In conclusion, while allopurinol dose and duration of use have been found to be nephroprotective in the elderly, careful consideration of dosing and monitoring for potential medication-related problems is essential to ensure the safe and effective use of this medication in elderly populations.
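Several of the studies summarized above report their exposure-outcome associations as odds ratios with 95% confidence intervals (e.g. the cataract case-control study in PUBMED:9869797 and the aspirin case-crossover study in PUBMED:23345599). The sketch below is purely illustrative of how such a figure is derived from a 2x2 table using the standard Wald approximation; the counts are hypothetical and are not taken from any cited study.

```python
# Illustrative only: odds ratio with a 95% Wald confidence interval
# from a 2x2 exposure/outcome table. The counts below are hypothetical.
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_value = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, lower, upper

# Hypothetical table: 40 exposed cases, 100 exposed controls,
# 60 unexposed cases, 250 unexposed controls.
or_value, lower, upper = odds_ratio_with_ci(40, 100, 60, 250)
print(f"OR = {or_value:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

The published analyses additionally adjust for covariates (age, sex, comorbidity) with conditional logistic regression, which this unadjusted sketch does not attempt.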
Instruction: Beating heart coronary artery surgery: is sternotomy a suitable alternative to minimal invasive technique? Abstracts: abstract_id: PUBMED:29387434 Beating heart minimally invasive mitral valve surgery in patients with previous sternotomy: the operative technique and early outcomes. Objective: Reoperative mitral valve surgery is increasingly required and can be associated with significant morbidity and mortality. Beating heart minimally invasive mitral valve surgery has the proposed benefit of avoiding the risks of repeat sternotomy by reducing the need for adhesiolysis and avoiding cardioplegia-reperfusion injury. We describe our experience with such a technique in patients with previous sternotomy. Methods: A retrospective study was performed and all patients undergoing surgery of the mitral valve through a right limited thoracotomy without application of an aortic cross-clamp (beating heart) as a redo cardiac surgery between January 2006 and January 2015 were included (n=25). Perioperative data as well as the operative technique are presented. Results: Six patients (24%) had two previous sternotomies and one (4%) had three previous sternotomies. Mitral valve repair was performed in 11 patients (44%). No patient required conversion to median sternotomy. Inotropic support beyond 4 hours after operation was required in seven patients (28%). Ventilation time was less than 12 hours in 14 patients (56%) with another six patients (24%) extubated within 24 hours after surgery. Postoperative course was complicated by cerebrovascular accident in two patients (8%). In-hospital mortality was 4% (n=1). There was no 30-day mortality after discharge. Conclusions: Reoperative mitral valve surgery can be safely performed through a limited right thoracotomy approach on a beating heart while on full cardiopulmonary bypass. The technique can be associated with potentially shorter operation, shorter cardiopulmonary bypass and a less complicated recovery. abstract_id: PUBMED:32493495 Minimally invasive beating heart technique for mitral valve surgery in patients with previous sternotomy and giant left ventricle. Purpose: To analyze the efficacy of the minimally invasive beating heart technique for mitral valve surgery in cardiac patients with previous sternotomy and giant left ventricle. Methods: Eighty cardiac patients with previous sternotomy and giant left ventricle (defined by the diagnostic criterion of a left ventricular end diastolic diameter [LVEDD] ≥70 mm) who underwent mitral valve surgery at our center from January 2006 to January 2019 were analyzed. We divided all patients into a minimally invasive beating heart technique group (n = 30) and a conventional median resternotomy arrested heart technique group (n = 50) according to the surgical methods. Preoperative, intraoperative, and postoperative variables were compared between the two groups.
Results: The minimally invasive beating heart technique, compared with the conventional median resternotomy arrested heart technique for mitral valve surgery in cardiac patients with previous sternotomy and giant left ventricle, showed significant differences in operation time (P = 0.002), cardiopulmonary bypass (CPB) time (P < 0.001), intraoperative blood loss (P < 0.001), postoperative transfusion ratio (P = 0.01), postoperative transfusion amount (P < 0.001), postoperative drainage volume (P = 0.001), extubation time (P = 0.04), intensive care unit (ICU) stay time (P = 0.04) and postoperative hospital stay time (P < 0.001), but no significant differences in re-exploration for bleeding, postoperative 30-day mortality, postoperative complications and 6 months postoperative echocardiographic parameters. Conclusions: Using the minimally invasive beating heart technique for mitral valve surgery in cardiac patients with previous sternotomy and giant left ventricle is effective and reliable; it reduces operation time and CPB time, decreases the transfusion ratio and transfusion amount, shortens postoperative ICU stay and hospital stay, and promotes early extubation, thus accelerating patients' early recovery. All of these show a benefit of the minimally invasive beating heart technique compared with the conventional median resternotomy arrested heart technique. abstract_id: PUBMED:10355452 Reversed-J inferior sternotomy for beating heart coronary surgery. Median sternotomy or combined multiple minimally invasive approaches are currently used to revascularize patients with multivessel coronary artery disease on the beating heart. We present here a new alternative approach for minimally invasive coronary surgery on the beating heart: the reversed-J inferior sternotomy. Through this approach, the left anterior descending, diagonal, and right coronary arteries can be revascularized via a single minimally invasive approach. abstract_id: PUBMED:11574221 Beating heart coronary artery surgery: is sternotomy a suitable alternative to minimal invasive technique? Objectives: To demonstrate the respective advantages and drawbacks of minimal invasive-thoracotomy (MIDCAB) and off-pump sternotomy (OPCAB) coronary bypass techniques. Methods: The perioperative and mid-term (3 months) results of the first 31 MIDCABs and 39 OPCABs performed by a single experienced coronary surgeon (F.S.) were compared. Differences were assessed by two-tailed chi-square or unpaired t-test, and significance assumed for P-values ≤ 0.05. Results: Groups were widely comparable. There were no in-hospital deaths or permanent neurologic events. OPCAB patients received more anastomoses (mean 1.09/patient vs. 1.89/patient, P < 0.001) during a shorter coronary occlusion period (26.1 ± 8 vs. 16.6 ± 4.5 min, P < 0.001), whilst immediate extubation prevailed in MIDCABs (22/31 vs. 17/39, P < 0.05). Significant complications occurred in seven MIDCABs vs. none in OPCABs (P < 0.01). Other in-hospital parameters were similar. Controls at 3 months evidenced more residual discomfort among MIDCAB patients (14/30 vs. 7/39, P < 0.05). Conclusions: Differences in early complication rates may be due to a learning effect. However, OPCAB allows us to implant more grafts and is more comfortable for both patient and surgeon. These advantages may well counterbalance the cosmetic benefits of MIDCAB procedures.
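The MIDCAB/OPCAB comparison above (PUBMED:11574221) assessed proportions such as immediate extubation (22/31 vs. 17/39) with a two-tailed chi-square test. A minimal sketch of that calculation follows, assuming scipy is available; note that scipy applies the Yates continuity correction to 2x2 tables by default, so the P value is slightly more conservative than an uncorrected test.

```python
# Minimal sketch: two-tailed chi-square test on the immediate-extubation
# counts reported above (22 of 31 MIDCAB vs. 17 of 39 OPCAB patients).
from scipy.stats import chi2_contingency

# Rows: MIDCAB, OPCAB; columns: extubated immediately, not extubated.
table = [[22, 31 - 22],
         [17, 39 - 17]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")  # P < 0.05, consistent with the abstract
```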
abstract_id: PUBMED:9879619 Minimally invasive surgery for coronary disease: a new alternative. To avoid the inflammatory syndrome generated by cardiopulmonary bypass, a new surgical technique, minimal invasive direct coronary artery bypass (MIDCAB), has been developed. An anastomosis is performed between the left internal mammary artery (LIMA) and the left anterior descending artery (LAD) on a beating heart, through a limited anterior thoracotomy. We describe our experience with this technique. Ten consecutive patients (9 males; age 65.9 ± 9 years) underwent a MIDCAB procedure. There were 8 bypasses of the LIMA on the LAD, one bilateral mammary bypass on the LAD and the right coronary artery, and one conversion to a standard sternotomy with CPB for a saphenous vein bypass on the LAD because of injury to the LIMA (2nd case). There was one redo for haemostasis of the mammary artery bed (3rd case). The first 3 patients required postoperative blood transfusion. From the 4th operation onwards, with the introduction of new instrumentation which was better adapted to the narrowness of the surgical field, there were no further surgical complications. During the follow-up (mean 5 months; range 2-9), no patient suffered anginal recurrence. With the improvement of instrumentation, the MIDCAB technique offers satisfactory short- and mid-term results, while avoiding CPB with its adverse effects. Lastly, the cosmetic result is far better than with the conventional procedure. abstract_id: PUBMED:33061051 Intermittent on-pump beating-heart coronary artery bypass grafting-a safer option. Purpose: On-pump beating-heart coronary artery bypass grafting represents a merger of standard on and off-pump techniques and is thought to benefit patients by combining the absence of the cardioplegic arrest used in conventional coronary surgery with the absence of the hemodynamic instability during manipulation seen in off-pump surgery. However, the clinical benefits are still under discussion. We improvised on the standard on-pump beating-heart surgeries by introducing use of "intermittent" bypass as and when required. Methods: This study involved 108 patients. "Intermittent" on-pump beating-heart coronary artery bypass grafting was done using a suction stabilizer, with aortic and venous cannulae placed electively in all patients (group 1), who were supported by the pump intermittently (n = 54). Retrospective data of patients who underwent off-pump surgery electively by the same surgeon (group 2, n = 54) were collected. Results: There was a significant advantage in the number of grafts performed for the lateral surface (circumflex branches) using the new technique compared with the conventional technique (68 vs. 22). Similarly, significant advantage was also noted in terms of total number of grafts along with shorter operating times. There were no mortalities in the new group compared to the off-pump group and blood loss was also lower. Conclusions: "Intermittent" on-pump coronary revascularization is a technically reliable method of coronary revascularization taking advantage of the off-pump and conventional on-pump techniques while considerably eliminating the disadvantages of both. It has shown its superiority in safety, number of grafts, blood loss, operating time and perioperative course. abstract_id: PUBMED:25583646 Off-pump or on-pump beating heart: which technique offers better outcomes following coronary revascularization? A best evidence topic was written according to a structured protocol.
The question addressed was whether on-pump beating heart coronary artery bypass (BH-ONCAB) surgery has a different outcome profile in comparison to off-pump coronary artery bypass (OPCAB). A total of 205 papers were found by systematic search, of which 7 provided the largest and most recent outcome analysis comparing BH-ONCAB with OPCAB, and represented the best evidence to answer the clinical question. The authors, date, journal, study type, population, main outcome measures and results were tabulated. Reported outcome measures included mortality, stroke, myocardial infarction, renal failure, myocardial damage, change in ejection fraction, number of bypass grafts and completeness of revascularization. With the exception of one study that favoured the off-pump technique, our review did not demonstrate a statistically significant difference in terms of mortality between the groups. We did not identify a statistically significant difference in any reported morbidity outcomes. However, there was a trend towards better outcomes for the on-pump beating heart technique, despite a higher risk profile in terms of age, ejection fraction and burden of coronary disease in this group. Consistent statistically significant differences between the groups were the mean number of grafts performed and the completeness of revascularization, both of which were higher with the on-pump beating heart technique. Limitations to the current evidence include the finding that most of the current data arise from specialist off-pump surgeons or centres that would usually only carry out BH-ONCAB in the higher risk patients where the added safety of cardiopulmonary bypass is desired. abstract_id: PUBMED:9498086 Anesthesia for minimal invasive coronary surgery without employing extracorporeal circulation. In the course of the present reevaluation of aortocoronary bypass grafting, a minimally invasive surgical procedure avoiding the use of cardiopulmonary bypass has been revised. It is suitable both for palliative treatment of patients with coronary multi-vessel disease and compromised left ventricular function, and for curative treatment of patients with single-vessel disease of a left coronary artery branch and unimpaired ventricular function. Avoiding possible complications of cardiopulmonary bypass can minimise morbidity and lethality of the aortocoronary bypass grafting procedure and can help to lower costs. Anaesthesia for minimal invasive direct coronary artery bypass grafting needs an anaesthesiological concept differing from anaesthesia for conventional coronary artery bypass surgery. This concept, considering the special aims of the minimal invasive technique, is discussed and demonstrated by means of case reports. abstract_id: PUBMED:11268737 Minimally invasive procedures in heart surgery. How does it work and who profits? The leading minimally invasive procedures employed in coronary surgery are minimally invasive direct coronary arterial bypass surgery (MIDCAB) and the Octopus system. These interventions are performed on the beating heart and require no extracorporeal circulation (ECC), thus avoiding the side effects, such as pulmonary or neurological complications, associated with ECC. In surgery on the mitral or aortic valve, the procedures are carried out via small incisions in the non-beating heart, and endovascular bypass systems (e.g. Port-Access) are sometimes needed for ECC. The advantages of small incisions are a reduction in the risk of infection, shorter hospital stay and, in particular, improved cosmesis.
A disadvantage is the longer operating time. Only careful patient selection guarantees successful surgery. abstract_id: PUBMED:9757798 How to do: initial experiences with a new device in minimally invasive heart surgery. A special surgical technique is required for minimally invasive cardiac surgery. A clear view of the coronary artery anastomosis is important under beating heart conditions, and coronary artery blood might obscure the opened coronary vessel. A new system called VisoFlo promises to improve visualisation at the surgical site. VisoFlo delivers a column of air to help provide a clear view of the anastomosis site and in addition has a controllable mist to help prevent desiccation of the graft and surrounding tissue. This system was tested in 45 patients with coronary-artery-bypass graft surgery under beating heart conditions and in 65 patients with standard coronary-artery-bypass graft surgery. Our conclusions are that the VisoFlo system is easy to use, guarantees a clear view of the anastomosis site, and does not impair the surgical work. Answer: Beating heart coronary artery surgery can be performed through different approaches, including sternotomy and minimally invasive techniques. The choice between these approaches depends on various factors, including the patient's condition, the surgeon's experience, and the specific clinical scenario. The minimally invasive direct coronary artery bypass (MIDCAB) technique is a form of beating heart surgery that avoids the use of cardiopulmonary bypass and is performed through a limited anterior thoracotomy. This approach has been shown to offer satisfactory short- and mid-term results while avoiding the adverse effects associated with cardiopulmonary bypass. The cosmetic result is also considered superior to conventional procedures (PUBMED:9879619). However, when comparing minimal invasive-thoracotomy (MIDCAB) and off-pump sternotomy (OPCAB) coronary bypass techniques, one study found that OPCAB patients received more anastomoses during a shorter coronary occlusion period, and significant complications occurred less frequently in OPCAB patients compared to MIDCAB patients. Additionally, OPCAB was found to be more comfortable for both patient and surgeon, potentially counterbalancing the cosmetic benefits of MIDCAB procedures (PUBMED:11574221). Another study comparing on-pump beating heart coronary artery bypass (BH-ONCAB) with off-pump coronary artery bypass (OPCAB) did not demonstrate a statistically significant difference in mortality or morbidity outcomes between the two techniques. However, there was a trend towards better outcomes for the on-pump beating heart technique, with a higher mean number of grafts performed and greater completeness of revascularization (PUBMED:25583646). In conclusion, while minimally invasive techniques such as MIDCAB offer certain advantages, including better cosmetic results and avoidance of cardiopulmonary bypass, sternotomy approaches like OPCAB and BH-ONCAB may allow for more grafts and more complete revascularization. The choice between sternotomy and minimally invasive techniques for beating heart coronary artery surgery should be individualized based on patient-specific factors and the surgeon's expertise.
Instruction: Abnormalities in glucose homeostasis in acromegaly. Does the prevalence of glucose intolerance depend on the level of activity of the disease and the duration of the symptoms? Abstracts: abstract_id: PUBMED:19224501 Abnormalities in glucose homeostasis in acromegaly. Does the prevalence of glucose intolerance depend on the level of activity of the disease and the duration of the symptoms? Introduction: Acromegaly is characterized not only by disabling symptoms, but also by relevant co-morbidities. Insulin resistance, leading to glucose intolerance, is one of the most important contributory factors to the cardiovascular mortality in acromegaly. Material And Methods: We analysed the records of 220 naïve patients with acromegaly diagnosed at our Department in the years 1995-2007. Diagnosis of active acromegaly was established on the basis of widely recognized criteria. In each patient glucose and insulin concentrations were assessed when fasting and during the 75 g OGTT. Results: Normoglycaemia existed in 46% of acromegalic patients. Among glucose tolerance abnormalities we found impaired fasting glucose in 19%, impaired glucose tolerance in 15% and overt diabetes mellitus in 20%. There were no statistically significant differences in gender, duration of the disease, basal plasma GH, IGF-1 or fasting insulin concentrations between normoglycaemic patients and those with impairments in glucose tolerance. The groups showed statistically significant differences with respect to age at diagnosis (p < 0.01). There was no significant correlation between GH, IGF-1 concentrations and fasting plasma glucose. There was no correlation between the duration of the disease and fasting plasma glucose. We found a statistically significant correlation between plasma GH, IGF-1 concentrations and HOMA, QUICKI and insulin AUC. Conclusions: The prevalence of diabetes mellitus among acromegalics is much higher than in the general population. The occurrence of glucose tolerance impairments does not depend on the duration of the disease. In patients with acromegaly, insulin resistance and hyperinsulinemia are positively correlated with the level of activity of the disease. abstract_id: PUBMED:33715210 In patients with controlled acromegaly, indices of glucose homeostasis correlate with IGF-1 levels rather than with type of treatment. Objective: Acromegaly is accompanied by abnormalities in glucose and lipid metabolism which improve upon treatment. Few studies have investigated whether these improvements differ between treatment modalities. This study aimed to compare glucose homeostasis, lipid profiles and postprandial gut hormone response in patients with controlled acromegaly according to actual treatment. Design: Cross-sectional study at a tertiary care centre. Patients: Twenty-one patients with acromegaly under stable control (i.e. insulin-like growth factor 1 [IGF1] levels below sex- and age-specific thresholds and a random growth hormone level <1.0 µg/L) after surgery (n = 5), during treatment with long-acting somatostatin analogues (n = 10) or long-acting somatostatin analogues + pegvisomant (n = 6) were included. Measurements: Glucose, insulin, total cholesterol and high-density lipoprotein-cholesterol were measured in fasting serum samples. Glucose, insulin, triglycerides, glucose-dependent insulinotropic polypeptide and glucagon-like peptide 1 were measured during a mixed meal test. Insulin sensitivity was evaluated by a hyperinsulinaemic-euglycaemic clamp.
Results: There were no significant differences in glucose tolerance, insulin sensitivity or postprandial gut hormone responses between the three groups. Positive correlations between IGF1 levels and HbA1c, fasting glucose and insulin levels and postprandial area under the curve (AUC) of glucose and insulin, and also an inverse association between IGF1 and glucose disposal rate, were found in the whole cohort (all p < .05, lowest p = .001 for postprandial AUC glucose with rs = 0.660). Conclusion: In this cross-sectional study in patients with controlled acromegaly, there were no differences in glucose homeostasis or postprandial substrate metabolism according to treatment modality. However, a lower IGF1 level seems associated with a better metabolic profile. abstract_id: PUBMED:26748034 Dopaminergic drugs in type 2 diabetes and glucose homeostasis. The importance of dopamine in central nervous system function is well known, but its effects on glucose homeostasis and pancreatic β cell function are beginning to be unraveled. Mutant mice lacking dopamine type 2 receptors (D2R) are glucose intolerant and have abnormal insulin secretion. In humans, administration of neuroleptic drugs, which block dopamine receptors, may cause hyperinsulinemia, increased weight gain and glucose intolerance. Conversely, treatment with the dopamine precursor l-DOPA in patients with Parkinson's disease reduces insulin secretion upon oral glucose tolerance test, and bromocriptine improves glycemic control and glucose tolerance in obese type 2 diabetic patients as well as in non-diabetic obese animals and humans. The actions of dopamine on glucose homeostasis and food intake impact both the autonomic nervous system and the endocrine system. Different central actions of the dopamine system may mediate its metabolic effects such as: (i) regulation of hypothalamic noradrenaline output, (ii) participation in appetite control, and (iii) maintenance of the biological clock in the suprachiasmatic nucleus. On the other hand, dopamine inhibits prolactin, which has metabolic functions; and, at the pancreatic beta cell, dopamine D2 receptors inhibit insulin secretion. We review the evidence obtained in animal models and clinical studies that posited dopamine receptors as key elements in glucose homeostasis and ultimately led to the FDA approval of bromocriptine in adults with type 2 diabetes to improve glycemic control. Furthermore, we discuss the metabolic consequences of treatment with neuroleptics that target the D2R, which should be monitored in psychiatric patients to prevent the development of diabetes, weight gain, and hypertriglyceridemia. abstract_id: PUBMED:21161601 Clinical and biochemical characteristics of acromegalic patients with different abnormalities in glucose metabolism. To determine the prevalence of diabetes, glucose intolerance and impaired fasting glucose in Mexican patients with acromegaly and establish associations with clinical, anthropometric and biochemical variables. 257 patients with acromegaly were evaluated by a 75 g-oral glucose tolerance test with measurements of both GH and glucose (0, 30, 60, 90, 120 min) as well as baseline IGF-1. Normal glucose tolerance (NGT), impaired fasting glucose (IFG), impaired glucose tolerance (IGT) and diabetes (DM) were defined based on the 2003 ADA criteria. NGT, IFG, IGT and DM were found in 27.6, 8.9, 31.6 and 31.9% of the subjects, respectively; 42 of the DM patients were unaware of the diagnosis.
Patients with diabetes were older than subjects in the other 3 categories (P = 0.001), and the proportion of women was significantly higher in the DM (74%) and IGT (68%) groups than in the NGT group (52%) (P = 0.004). Odds ratio for the development of DM was 3.29 (95% CI 3.28-3.3). GH and IGF-1 levels were comparable among the different groups. In a multivariable analysis, DM was significantly associated with age, presence of a macroadenoma, disease duration and a basal GH > 30 μg/dl. DM and probably IGT are more prevalent in acromegaly than in the general Mexican population. DM was more frequent in females of all ages, in subjects with severely elevated GH concentrations, in patients with macroadenomas, and in those with long-standing disease. The odds ratio for DM in our subjects with acromegaly is more than 3 times higher than in the general population. abstract_id: PUBMED:17484056 Long-term effects of the combination of pegvisomant with somatostatin analogs (SSA) on glucose homeostasis in non-diabetic patients with active acromegaly partially resistant to SSA. Several recent studies have reported beneficial effects of pegvisomant monotherapy on glucose homeostasis for acromegalic patients resistant to somatostatin analogs (SSA). The aim of our longitudinal study was to test whether these beneficial effects on glucose homeostasis would also occur during combined pegvisomant + SSA treatment amongst partially SSA-resistant acromegalic patients. Ten non-diabetic, partially SSA-resistant acromegalic patients underwent a 12-month SSA+pegvisomant treatment after SSA-only therapy. Glucose homeostasis was evaluated at disease diagnosis, at the end of the SSA treatment and after 6 and 12 months of combined SSA+pegvisomant treatment. The addition of pegvisomant treatment was accompanied by a significant improvement in insulin and glycemic responses to the oral glucose tolerance test, without any significant changes in fasting plasma glucose, glycosylated haemoglobin, homeostatic model assessment-derived insulin resistance index and homeostatic model assessment-derived beta-cell function. Moreover, the number of patients with glucose intolerance did not significantly change during the 12-month combined treatment, notwithstanding the significant decrease in serum IGF-1 values. Therefore, our findings suggest that the combined pegvisomant and SSA treatment may not be able to restore normal clinical and biochemical glycometabolic features occurring in acromegalic patients resistant to SSA, while a slight but significant improvement in some biochemical features may be expected. abstract_id: PUBMED:15542931 Clinical-biochemical correlations in acromegaly at diagnosis and the real prevalence of biochemically discordant disease. Objective: To analyze clinical-biochemical correlations in newly diagnosed acromegaly, focusing in particular on patients with discrepant parameters. Design: Retrospective study. Methods: Data from 164 patients with acromegaly seen between 1995 and 2003. Patients were reviewed for the presence of headaches, arthralgias, hypertension, menstrual abnormalities, impotence, glucose intolerance or diabetes. Biochemical evaluation consisted of age- and gender-adjusted IGF-I levels and glucose-suppressed GH. Results: Magnetic resonance imaging (MRI) revealed macroadenoma in 127 patients and microadenoma in 37. Patients with macroadenomas were younger than those with microadenomas and the disease was more frequent in females.
Excluding acral enlargement, which was present in all the patients, the most commonly reported complaints were headaches (66%) and arthralgias (52%). Hypertension was present in 37% of patients, whereas the prevalence of glucose intolerance and diabetes was 27 and 32%, respectively. Hyperprolactinemia was present in 20% of patients with microadenomas and in 40% of patients with macroadenomas. Hypogonadism was demonstrated in more than half of the patients and was not related to tumor size or prolactin level. Of all the clinical and metabolic abnormalities of acromegaly, only the presence of diabetes correlated with both basal and nadir post-glucose GH levels. Only 4 patients (2.4%) had glucose-suppressed GH values of <1 ng/ml in the presence of clinical evidence of acromegaly, an elevated IGF-I level and a pituitary adenoma on MRI. Conclusions: Clinical features of acromegaly correlate poorly with indices of biochemical activity. The prevalence of biochemically discordant acromegaly is considerably lower than recently reported. abstract_id: PUBMED:14510913 Glucose homeostasis in acromegaly: effects of long-acting somatostatin analogues treatment. Objective: Acromegaly is a syndrome with a high risk of impaired glucose tolerance (IGT) and diabetes mellitus (DM). Somatostatin analogues, which are used for medical treatment of acromegaly, may exert different hormonal effects on glucose homeostasis. Twenty-four active acromegalic patients were studied in order to determine the long-term effects of octreotide-LAR and SR-lanreotide on insulin sensitivity and carbohydrate metabolism. Design: Prospective study. Patients: We studied 24 patients with active acromegaly, 11 males and 13 females, aged 50.7 ± 12.7 years, body mass index (BMI) 30.1 ± 4.8 kg/m2. Measurements: All patients underwent an oral glucose tolerance test (OGTT) and 12 also had a euglycaemic hyperinsulinaemic clamp. All patients were evaluated at baseline and after 6 months of somatostatin analogues therapy. Results: Acromegalic patients showed low M-values with respect to the control group at baseline (P < 0.05), followed by a significant improvement after 6 months of therapy (P < 0.005 vs. baseline). Serum glucose levels at 120 min during OGTT worsened (P < 0.05) during somatostatin analogue therapy in patients with normal glucose tolerance, but not in those with impaired glucose tolerance or diabetes mellitus. This was associated with a reduced (P < 0.05) and 30 min delayed insulin secretion during OGTT. Also, HbA1c significantly deteriorated in all subjects after treatment (4.7 ± 0.6% and 5.1 ± 0.5%, basal and after six months, respectively, P < 0.005). Conclusion: In acromegalic patients, somatostatin analogues treatment reduces insulin resistance, and also impairs insulin secretion. This may suggest that the use of oral secretagogue hypoglycaemic agents and/or insulin therapy should be considered rather than insulin sensitizers, as the treatment of choice in acromegalic patients who develop frank hyperglycaemia during somatostatin analogues therapy.
Objective: The aim of the present study was to evaluate the impact of glucose metabolism abnormalities on BP values in a series of patients with active acromegaly. Design: An open multicentre prospective study. Patients: Sixty-eight patients with active disease, aged 47.5 ± 11.7 years, have been studied. Thirty-nine had normal glucose tolerance (NGT), 16 impaired glucose tolerance (IGT) and 13 suffered from diabetes mellitus (DM). Measurements: Mean clinical BP values were calculated as the mean of BP values obtained by sphygmomanometric measurement on three separate occasions, and mean 24-h, diurnal and nocturnal systolic (SBP) and diastolic (DBP) values were obtained by 24-h ambulatory blood pressure monitoring (ABPM). Results: Patients' age and the degree of glucose tolerance abnormalities were found to significantly and independently influence BP values. All clinical and ABPM SBP and DBP values significantly increased with age by linear regression (P < 0.02 for all BP values, 0.30 ≤ R ≤ 0.43), and the independent influence of this parameter on BP values was confirmed by multivariate analysis. Similarly, the independent influence of glucose tolerance abnormalities on BP values was confirmed when introducing age as a covariable in a multivariate analysis, and patients with DM presented significantly higher clinical SBP and 24-h, diurnal and nocturnal SBP and DBP than patients with NGT (P < 0.02 for clinical SBP, P < 0.015 for all ABPM values, respectively). In addition, patients with DM showed significantly higher 24-h, diurnal and nocturnal DBP than those with IGT (P < 0.05 in all cases). In contrast, no significant difference was found between NGT and IGT patients. No significant influence of disease duration, BMI, GH, IGF-I, or fasting and 2-h post glucose load insulinaemia on BP values was observed. Conclusions: Abnormalities of glucose metabolism significantly contribute to increased systolic blood pressure and especially diastolic blood pressure in acromegalic patients. Careful control of blood pressure and of risk factors for developing systemic hypertension, with special reference to glucose tolerance, is mandatory to decrease cardiovascular morbidity and mortality in such patients. abstract_id: PUBMED:33984540 Prevalence and predictors of abnormal glucose tolerance and its resolution in acromegaly: Single Centre retrospective study of 90 cases. Aims: The aim of the study was to evaluate the prevalence and predictors of abnormal glucose tolerance (diabetes + prediabetes) and its resolution in acromegaly. Settings And Design: Retrospective observational study. Methods And Material: Ninety patients with acromegaly who were followed up postoperatively for 1 year were included. The study cohort was divided into two groups: Group A, abnormal glucose tolerance [AGT: diabetes + prediabetes (n = 40)], and Group B, normal glucose tolerance (NGT) (n = 50). The impact of the following parameters as predictors of diabetes was studied before surgery and at 3 months and 1 year after surgery: age, sex, waist circumference (WC), body mass index (BMI), duration of acromegaly, growth hormone (GH) levels, insulin-like growth factor 1 (IGF1) levels, pituitary tumour size, hypertension, and family history of diabetes. Unpaired t-test, chi-square test and binary logistic regression analysis were used for statistical analysis.
Results: The prevalence of AGT in our cohort was 44.44% (diabetes 37.77%, prediabetes 6.66%). Patients with AGT were older (44.2 ± 12.21 years vs. 34.92 ± 11.62 years; p = 0.00040) and had higher WC (in cm) (91.35 ± 7.87 vs. 87.12 ± 6.07; p = 0.005) than those with NGT. Hypertension and family history of diabetes were significantly more frequent in patients with AGT. GH and IGF1 levels were not significantly different between the groups. On binary logistic regression, sex (p = 0.0105) (OR = 6.0985), waist circumference (p = 0.0023) (OR = 1.2276) and hypertension (p = 0.0236) (OR = 1.632) were found to be significant predictors of AGT in acromegaly. After surgery, 42.5% and 62.5% of patients became normoglycemic at 3 months and 1 year, respectively. On binary logistic regression there were no predictors for achieving normoglycemia at 3 months or 1 year; however, the delta changes in GH, BMI and tumour size were significant. Conclusions: The prevalence of AGT was 44.44%. Female sex, WC and hypertension were found to be significant predictors of AGT in acromegaly. Post-surgery normoglycemia was achieved in 42.5% at 3 months and 62.5% at 1 year with no predictors for normalisation of AGT. abstract_id: PUBMED:14764777 Long-term biochemical status and disease-related morbidity in 53 postoperative patients with acromegaly. Unlabelled: Assessment of postoperative disease activity of acromegaly is a major challenge. The consensus criteria for cure, which are glucose-suppressed GH less than 1 µg/liter and normal IGF-I levels, might be discrepant, and their respective relationship to acromegaly-related morbidity is not well known. The aims of our study were: firstly, to correlate plasma IGF-I with plasma glucose-suppressed GH concentrations; and secondly, to correlate each of these biochemical parameters with morbidity [impaired glucose tolerance (IGT), diabetes, and hypertension] in postoperative patients with acromegaly. Fifty-three patients with long-term follow-up (mean, 12.7 yr; range, 1-30 yr) after transsphenoidal surgery for acromegaly and 20 healthy subjects matched for age, sex, and body mass index were evaluated for plasma glucose [by 75-g oral glucose tolerance test (OGTT)], GH (by immunoradiometric assay), plasma IGF-I (by immunoradiometric assay), and blood pressure (BP) measurements. Remission was defined by a normal IGF-I. We identified 34 acromegalics in remission and 19 with active disease. There was no statistical difference between all three groups for age, sex, BMI, and for fasting and 2-h post-OGTT plasma glucose. The time elapsed since surgery was similar in both groups of acromegalics. The OGTT-GH nadir was less than 1 µg/liter in 31 patients in remission (91.2%) and in nine patients with active disease (47.4%). Prevalence of IGT was lower in acromegalics in remission (14.7%) in comparison with patients with active disease (47.4%; P = 0.01). Plasma IGF-I and GH nadir cut-off of 0.25 µg/liter were strong predictors of abnormal glucose tolerance (odds ratio, 13.6; confidence interval, 2.5-73.7; P = 0.003). GH nadir cut-off of 1 µg/liter and basal GH of 2.5 µg/liter failed to predict abnormal glucose tolerance. There was no statistical difference for prevalence of hypertension and systolic BP values, but diastolic BP was significantly lower in patients in remission than in those with active disease (P < 0.05).
Our observations indicate that the validity of the GH threshold of 1 µg/liter post OGTT might be inadequate as a criterion of biochemical remission of acromegaly and as a marker of associated comorbidities. However, normalized IGF-I concentrations and a lower GH cut-off value less than 0.25 µg/liter are strongly associated with a lower prevalence of IGT and lower diastolic BP in long-term postoperative acromegaly. Answer: The prevalence of glucose intolerance in acromegaly does not appear to depend on the duration of the disease. A study analyzing records of 220 naïve patients with acromegaly found no significant correlation between the duration of the disease and fasting plasma glucose. However, there was a statistically significant correlation between plasma growth hormone (GH), insulin-like growth factor 1 (IGF-1) concentrations, and measures of insulin resistance such as HOMA and QUICKI, indicating that insulin resistance and hyperinsulinemia are positively correlated with the level of activity of the disease (PUBMED:19224501). Another study that included 21 patients with controlled acromegaly found no significant differences in glucose homeostasis or postprandial substrate metabolism according to treatment modality. However, lower IGF-1 levels were associated with a better metabolic profile, suggesting that indices of glucose homeostasis correlate with IGF-1 levels rather than with the type of treatment (PUBMED:33715210). In a study of Mexican patients with acromegaly, diabetes was more frequent in females, subjects with severely elevated GH concentrations, patients with macroadenomas, and those with long-standing disease duration. The odds ratio for diabetes in subjects with acromegaly was more than three times higher than in the general population, indicating a higher prevalence of glucose intolerance in acromegaly (PUBMED:21161601). A longitudinal study on the combined treatment of pegvisomant and somatostatin analogs (SSA) in non-diabetic, partially SSA-resistant acromegalic patients showed a slight but significant improvement in some biochemical features of glucose homeostasis, although it may not restore normal clinical and biochemical glycometabolic features (PUBMED:17484056). In summary, while the duration of acromegaly does not seem to significantly influence the prevalence of glucose intolerance, the level of disease activity, as indicated by GH and IGF-1 levels, does correlate with insulin resistance and glucose homeostasis abnormalities. Lower IGF-1 levels are associated with a better metabolic profile, and certain patient characteristics, such as gender, GH levels, and tumor size, may predict a higher prevalence of glucose intolerance in acromegaly.
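Several abstracts in this set (e.g. PUBMED:19224501, PUBMED:17484056) express insulin resistance through the HOMA and QUICKI indices. The sketch below shows how these indices are computed from fasting measurements, using the standard published formulas; the input values are hypothetical and do not come from any cited study.

```python
# Standard formulas for two fasting insulin-resistance indices used in the
# abstracts above; the example values are hypothetical.
import math

def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uu_ml):
    """HOMA-IR = fasting insulin (uU/mL) x fasting glucose (mmol/L) / 22.5."""
    return fasting_insulin_uu_ml * fasting_glucose_mmol_l / 22.5

def quicki(fasting_glucose_mg_dl, fasting_insulin_uu_ml):
    """QUICKI = 1 / (log10 insulin (uU/mL) + log10 glucose (mg/dL))."""
    return 1.0 / (math.log10(fasting_insulin_uu_ml) + math.log10(fasting_glucose_mg_dl))

# Hypothetical fasting values: glucose 5.5 mmol/L (about 99 mg/dL), insulin 12 uU/mL.
print(f"HOMA-IR = {homa_ir(5.5, 12):.2f}")  # higher values = more insulin resistance
print(f"QUICKI  = {quicki(99, 12):.3f}")    # lower values = more insulin resistance
```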
Instruction: Does rectal wall tumor eradication with preoperative chemoradiation permit a change in the operative strategy? Abstracts: abstract_id: PUBMED:15540288 Does rectal wall tumor eradication with preoperative chemoradiation permit a change in the operative strategy? Purpose: Preoperative chemoradiation may downstage locally advanced rectal cancer and, in some cases, leave no residual tumor. The management of complete response is controversial and recent data suggest that radical surgery may be avoided in selected cases. Transanal excision of the scar may determine the rectal wall response to chemoradiation. This study was designed to assess whether the absence of tumor in the bowel wall corresponds to the absence of tumor in the mesorectum, known as true complete response. Methods: A retrospective review of the medical records of patients who underwent preoperative chemoradiation for advanced mid (6-11 cm from the anal verge) and low (from the dentate line to 5 cm from the anal verge) rectal cancer (uT2-uT3) followed by radical surgery with total mesorectal excision was undertaken. Patients in whom the pathology specimen showed no residual tumor in the rectal wall (yT0, "y" signifies pathologic staging in postradiation patients) were assessed for tumoral involvement of the mesorectum. Results: A total of 109 patients underwent preoperative, high-dose radiation therapy (94 percent with 5-fluorouracil chemosensitization), followed by radical surgery for advanced rectal cancer. Preoperatively, 47 patients were clinically assessed to have potentially complete response. After radical rectal resection, pathology did not reveal any residual tumor within the rectal wall (yT0) in 17 patients. In two (12 percent) of these patients, the mesorectum was found to be positive for malignancy: one had positive lymph nodes that harbored cancer; one had tumor deposits in the mesorectal tissue. Conclusions: Complete rectal wall tumor eradication does not necessarily imply complete response, because the mesorectum may harbor tumor cells. Thus, caution should be exercised when considering the avoidance of radical surgery. Reliable imaging methods and clinical predictors for favorable outcome are important to allow less radical approaches in the future. abstract_id: PUBMED:27785313 Preoperative Chemoradiation in Locally Advanced Rectal Cancer: Efficacy and Safety. Background: Preoperative chemoradiation (CRT) is considered the standard of care in the management of stage II/III rectal cancer. The aim of this retrospective study was to assess the efficacy and safety of preoperative CRT in our patient cohort with locally advanced rectal adenocarcinoma. Methods: Forty patients with cT3-4N0-2M0 adenocarcinoma of the lower (n = 26) and mid/upper (n = 14) rectum were enrolled in this study between 2001 and 2012. Radiotherapy (RT) was given to the pelvis. The median prescribed dose was 45 Gy (daily dose, 1.8 - 2.0 Gy). All patients received chemotherapy concurrently with RT and underwent surgery 6 - 8 weeks after CRT. Low anterior resection (LAR) was achieved in 21 patients. Total mesorectal excision (TME) was performed in 24 patients.
In three out of 15 patients (8%) with preoperative sphincter infiltration, SP was achieved. With a median follow-up of 58 months, the 4-year local control (LC), distant metastases-free survival (DMFS), disease-free survival (DFS) and overall survival (OS) rates were 89.7%, 86.9%, 79.5% and 81.2%, respectively. The pretreatment tumor size was predictive of response to preoperative CRT. The response to preoperative CRT showed a significant impact on DFS and OS. TME resulted in a significantly increased DFS rate. No grade 3/4 acute toxicity was reported. Three patients developed grade 3 late side effects. Conclusion: Preoperative CRT demonstrates encouraging rates of disease control and facilitates complete resection and SP in advanced rectal cancer with acceptable late toxicity. abstract_id: PUBMED:30348705 Potential Prognostic Factors of Downstaging Following Preoperative Chemoradiation for High Rectal Cancer. Background/aim: Treatment for high rectal cancers, particularly the value of preoperative treatment, is controversial. In our previous study, downstaging by preoperative chemoradiation resulted in improved outcomes. The aim of the present study was to identify prognostic factors to predict which patients will achieve downstaging and may benefit from preoperative treatment. Patients And Methods: In 54 patients with locally advanced non-metastatic high rectal cancer, 8 factors were evaluated for downstaging by preoperative chemoradiation including age, gender, carcinoembryonic antigen level, performance status, T-/N-category, UICC-stage (Union for International Cancer Control) and histological grade. Downstaging was defined as a decrease of at least one UICC-stage. Results: Downstaging was achieved in 36 patients (67%). Patients at UICC-stage III showed a trend toward downstaging. Conclusion: The majority of patients with UICC-stage III tumors were downstaged and appear to benefit from preoperative chemoradiation. In general, the potential value of preoperative treatment for high rectal cancers needs further investigation. abstract_id: PUBMED:25605475 Regional lymph node status after neoadjuvant chemoradiation of rectal cancer producing a complete or near complete rectal wall response. Aim: Transanal excision of the tumour site after complete response to chemoradiotherapy can determine the rectal wall response to treatment. This study was designed to assess whether the absence of tumour in the rectal wall corresponds to the absence of tumour in the mesorectum (true pathological complete response). Method: A retrospective review identified patients who underwent preoperative chemoradiation therapy for advanced mid and low rectal cancer followed by routine pre-planned radical surgery with total mesorectal excision. Patients in whom the pathology specimen showed no residual tumour in the rectal wall (ypT0) or a ypT1 lesion were assessed for tumour involvement in the mesorectum. Results: Seventy-eight patients who underwent pelvic chemoradiation followed by radical surgery were reviewed. The rectal wall tumour disappeared in eight (ypT0). Of these, residual tumour was found in the mesorectum (ypT0N1) in one (12%) patient. Eleven patients were found to have ypT1 residual tumour. Of these, two (18%) had a final post-surgical staging of ypT1N1. Conclusion: Complete rectal wall tumour eradication was achieved in 10% of the patients, and downstaging to ypT1 was achieved in 14%.
In 15% (12% in ypT0 and 18% in ypT1) of these patients, residual tumour cells were evident in the mesorectum. This would probably have left these patients with residual disease had a nonradical approach of transanal excision of the original tumour site been employed. Caution should be taken when considering the avoidance of radical surgery. abstract_id: PUBMED:24596335 A comparison of laparoscopic and open surgery following pre-operative chemoradiation therapy for locally advanced lower rectal cancer. Objective: Although pre-operative chemoradiation therapy for advanced lower rectal cancer is a controversial treatment modality, it is increasingly used in combination with surgery. Few studies have considered the combination of chemoradiation therapy followed by laparoscopic surgery for locally advanced lower rectal cancer; therefore, this study aimed to assess the usefulness of this therapeutic combination. Methods: We retrospectively reviewed the medical records of patients with locally advanced lower rectal cancer treated by pre-operative chemoradiation therapy and surgery from February 2002 to November 2012 at Oita University. We divided patients into an open surgery group and a laparoscopic surgery group and evaluated various parameters by univariate and multivariate analyses. Results: In total, 33 patients were enrolled (open surgery group, n = 14; laparoscopic surgery group, n = 19). Univariate analysis revealed that compared with the open surgery group, operative time was significantly longer, whereas intra-operative blood loss and intra-operative blood transfusion requirements were significantly less in the laparoscopic surgery group. There were no significant differences in post-operative complication and recurrence rates between the two groups. According to multivariate analysis, operative time and intra-operative blood loss were significant predictors of outcome in the laparoscopic surgery group. Conclusions: This study suggests that laparoscopic surgery after chemoradiation therapy for locally advanced lower rectal cancer is a safe procedure. Further prospective investigation of the long-term oncological outcomes of laparoscopic surgery after chemoradiation therapy for locally advanced lower rectal cancer is required to confirm the advantages of laparoscopic surgery over open surgery. abstract_id: PUBMED:8624198 Preoperative "chemoradiation" for stages II and III rectal carcinoma. Objective: To determine whether preoperative administration of combination chemotherapy and external beam irradiation ("chemoradiation") for patients with stage II or stage III rectal carcinoma had an impact on perioperative morbidity or oncologic outcome, as compared with patients not receiving preoperative chemoradiation. Design: A group of patients with stage II or stage III rectal carcinoma receiving preoperative chemoradiation were followed up prospectively and compared in a nonrandomized fashion with an inception cohort group of similar patients. Setting: Northwestern Memorial Hospital, Chicago, Ill, a tertiary care academic medical center. Patients: Thirty patients with rectal carcinoma undergoing preoperative chemoradiation were compared with 56 patients not undergoing preoperative chemoradiation, and also with a subset group of 24 patients who received standard postoperative adjuvant chemoradiation. Intervention: External beam radiation, 45 to 50 Gy, was delivered concurrently with fluorouracil and mitomycin 4 to 8 weeks prior to surgical resection.
Main Outcome Measures: Patients were followed up at regular intervals for either tumor recurrence or death. In addition, the group receiving preoperative chemoradiation was evaluated for major perioperative morbidity. Results: All patients agreeing to preoperative chemoradiation completed therapy. Perioperative major morbidity in this group (13%) was comparable to previously published results. Of the 56 patients with stage II or stage III rectal carcinoma not receiving preoperative chemoradiation, only 24 (43%) completed standard postoperative adjuvant chemoradiation. Patients receiving preoperative chemoradiation (n = 30), patients not receiving preoperative chemoradiation (n = 56), and the subset of the group not receiving preoperative chemoradiation who completed standard postoperative chemoradiation (n = 24) were followed up for a mean of 39 months, 31 months, and 32 months, respectively. Five-year actuarial local control rates were 96%, 83%, and 88%, respectively. Disease-free survival rates were 80%, 57%, and 47%, respectively. Overall survival rates were 85%, 48%, and 78%, respectively. Conclusions: Preoperative chemoradiation in the treatment of stage II or stage III rectal carcinoma is well tolerated and not associated with an increase in subsequent perioperative major morbidity. In addition, local control, disease-free survival, and overall survival compare favorably with a nonrandomized inception cohort group of patients receiving standard postoperative adjuvant chemoradiation. abstract_id: PUBMED:30593459 Neoadjuvant chemoradiation and rectal cancer. Neoadjuvant chemoradiation (NACR) is now standard of care in stage II and III rectal cancer. The advent of this modality of treatment has changed the way the pathological evaluation of resection specimens subjected to preoperative chemoradiation is conducted. The gross description, sectioning and microscopic examination have had to be adapted to accommodate the changes induced by NACR. Attempts at introducing a uniform approach to the gross triaging and reporting of these specimens have been met with a muted response. There still exists much variation in approach. The purpose of this overview is to highlight some of the newer developments and issues around NACR-treated rectal cancers from a pathological point of view. The NACR-treated resection specimens should be handled in a consistent manner, at least within individual institutions, if not universally. There should be generous sampling with multiple sections taken, as tumour is often sequestered deep in the bowel wall. Microscopic examination calls for extra vigilance, as residual cancer can be present as single cells or small clusters, often deep in the muscularis propria or serosa. Acellular pools of mucin or non-viable tumour cells in mucin within the bowel wall or lymph nodes are not regarded as positive and do not upstage the tumour. The issue of grading of regression has been the subject of much debate, and several approaches have been published. It is recommended that a system with clinical meaning and utility to oncologists be used. Lymph node counts will be reduced after NACR, but reasonable attempts to accrue 12 nodes should be made. abstract_id: PUBMED:25648465 Factors affecting the restaging accuracy of magnetic resonance imaging after preoperative chemoradiation in patients with rectal cancer.
Purposes: We evaluated patient or tumor factors associated with the preoperative restaging accuracy of magnetic resonance imaging (MRI) for determining T and N stages as well as circumferential resection margin (CRM) involvement after chemoradiation (CRT) in patients with locally advanced rectal cancer. Methods: Seventy-seven patients with rectal cancer who were treated with preoperative CRT (50.4 Gy) followed by radical resection were included. Post-CRT MRI was performed approximately 4 weeks after preoperative CRT. Results: The median tumor distance from the anal verge was 6 cm; 48 tumors (62%) were anterior and 29 (38%) posterior. The median tumor diameter was 3 cm. A stage-by-stage comparison showed that correct staging occurred in 62%, 43%, and 86% of patients for T staging, N staging, and CRM prediction, respectively. Shorter distance to the anal verge (<5 cm), smaller tumor diameter (<1 cm), and anterior tumor location were associated with incorrect T staging. No variables were significantly associated with N staging accuracy. Shorter tumor distance and anterior tumor location were associated with incorrect CRM prediction. Conclusions: Our findings suggest that specific tumor factors, namely small, distal, or anterior rectal tumors, are closely associated with the accuracy of MRI after preoperative CRT. abstract_id: PUBMED:29468352 Local excision for ypT2 rectal cancer following preoperative chemoradiation therapy: it should not be justified. Purpose: Among individuals who respond well to preoperative chemoradiation therapy (CRT) for ypT0-1, local excision (LE) could provide acceptable oncological outcomes. However, in ypT2 cases, the oncological safety of LE has not been determined. This study aimed to compare oncological outcomes between LE and total mesorectal excision of ypT2-stage rectal cancer after chemoradiation therapy and investigate the oncological safety of LE in these patients. Methods: We included 351 patients with ypT2-stage rectal cancer after preoperative CRT who underwent either LE (n = 16 [5%]) or total mesorectal excision (TME) (n = 335 [95%]) between January 2007 and December 2013. After propensity matching, oncological outcomes were compared between the LE and TME groups. Results: The median follow-up period was 57 months (range, 12-113 months). Local recurrence occurred more frequently in the LE group (18 vs. 4%; p = 0.034), whereas distant metastases did not (12 vs. 11%; p = 0.690). The 5-year local recurrence-free (76 vs. 96%; p = 0.006), disease-free (64 vs. 84%; p = 0.075), and overall survival (79 vs. 93%; p = 0.045) rates of the LE group were significantly lower than those of the TME group. After propensity matching, 5-year local recurrence-free survival of the LE group was significantly lower than that of the TME group (76 vs. 97%, p = 0.029). Conclusion: The high local failure rate and poor oncological outcomes observed in ypT2-stage rectal cancer patients who undergo CRT followed by LE indicate that LE cannot be justified in this setting. Salvage surgery should be recommended in these patients.
Eleven patients were enrolled in this regimen, which consisted of 5-FU 500 mg/day × 5 days/week × 4 weeks, CDDP 10 mg/day × 5 days/week × 4 weeks, and radiation 2 Gy × 5 days/week × 4 weeks. Regarding toxicity, grade 2 leukopenia was observed in 2 cases, grade 2 GI symptoms in 1 case, and radiation dermatitis in 8 cases. The local response was PR in 10 cases and NC in 1 case. Surgical resection was performed on 8 patients. Histological responses in the resected specimens were grade 2, 5 cases; grade 1b, 1 case; and grade 1a, 2 cases. Operative radicalities were grade A, 3 cases; grade B, 3 cases; and grade C, 2 cases. Preoperative chemoradiation is one of the effective options in multimodal treatment for advanced rectal cancer. Answer: Rectal wall tumor eradication with preoperative chemoradiation does not necessarily permit a change in the operative strategy. While preoperative chemoradiation can downstage locally advanced rectal cancer and sometimes result in no residual tumor, the management of complete response is controversial. Studies have shown that even when there is no residual tumor within the rectal wall (ypT0), the mesorectum may still harbor tumor cells. For example, in a study where 17 patients showed no residual tumor in the rectal wall after chemoradiation followed by radical surgery, 2 of these patients (12%) had residual malignancy in the mesorectum (PUBMED:15540288). This indicates that complete rectal wall tumor eradication does not imply a true complete response, and radical surgery should not be avoided based on rectal wall response alone. Furthermore, another study found that in patients with complete or near complete rectal wall response after chemoradiation, residual tumor cells were evident in the mesorectum in 15% of the cases (PUBMED:25605475); a short arithmetic check of these proportions is sketched below. This suggests that caution should be exercised when considering less radical approaches, such as transanal excision of the scar, as they may leave residual disease in the mesorectum. In terms of operative strategy, preoperative chemoradiation has been shown to facilitate complete resection and sphincter preservation in advanced rectal cancer with acceptable late toxicity (PUBMED:27785313). However, the decision to alter the operative strategy should be made with caution, considering the potential for residual disease in the mesorectum and the importance of achieving clear margins to prevent local recurrence. In summary, while preoperative chemoradiation can downstage tumors and sometimes lead to complete rectal wall response, it does not always permit a change in the operative strategy due to the risk of residual disease in the mesorectum. Radical surgery with total mesorectal excision remains an important consideration to ensure the complete removal of the cancer.
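As a quick arithmetic check of the proportions cited above, the counts reported in PUBMED:25605475 (1 of 8 ypT0 patients and 2 of 11 ypT1 patients with residual tumour in the mesorectum) can be recomputed directly. The following minimal Python sketch is illustrative only; the counts come from the abstract, and everything else is plain arithmetic.

# Counts reported in PUBMED:25605475: among rectal-wall responders,
# 1 of 8 ypT0 and 2 of 11 ypT1 patients had residual mesorectal tumour.
groups = [("ypT0", 1, 8), ("ypT1", 2, 11)]
pooled_pos = sum(pos for _, pos, _ in groups)
pooled_n = sum(n for _, _, n in groups)
for label, pos, n in groups + [("pooled", pooled_pos, pooled_n)]:
    print(f"{label}: {pos}/{n} = {100 * pos / n:.1f}%")
# Prints 12.5%, 18.2% and 15.8%, matching (to within rounding) the 12%,
# 18% and 15% figures quoted in the abstract.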
Instruction: Is the earlier age at onset of schizophrenia in males a confounded finding? Abstracts: abstract_id: PUBMED:9229029 Is the earlier age at onset of schizophrenia in males a confounded finding? Results from a cross-cultural investigation. Background: The finding of an earlier age at onset of schizophrenia in males compared with females, replicated across a number of studies, appears to be so robust as to support hypotheses about gender differences in the aetiology of the disorder. However, the possibility that this observed gender effect might reflect other confounding variables has not been adequately explored. Method: We analysed data on 778 men and 653 women, in three developing countries and seven developed countries, who had been assessed in the WHO 10-country study of schizophrenia. We applied a generalised linear modelling strategy to estimate the unconfounded contributions of gender, family history, premorbid personality and marital status to age at onset. Results: The model that explained the highest percentage of the total variance indicated strong main effects (P < 0.001) for marital status and premorbid personality, a weak effect for family history, and an attenuated effect for gender. Two independent verification procedures suggested an independent onset-delaying effect of marital status (being married), more marked in males. Conclusions: The gender difference in the age at onset of schizophrenia is not a robust biological characteristic of the disorder. Failure to control for marital status and premorbid personality in male/female comparisons of age at onset may explain a large part of the differences reported previously. abstract_id: PUBMED:22716150 The association between cannabis use and earlier age at onset of schizophrenia and other psychoses: meta-analysis of possible confounding factors. A recent meta-analysis showed that the mean age of onset of psychosis among cannabis users was almost three years earlier than that of non-cannabis users. However, because cannabis users usually smoke tobacco, the use of tobacco might independently contribute to the earlier onset of psychosis. We aimed to use meta-analysis to compare the extent to which cannabis and tobacco use are each associated with an earlier age at onset of schizophrenia and other psychoses. We also examined other factors that might have contributed to the finding of an earlier age of onset among cannabis users, including the proportion of males in the samples, the diagnostic inclusion criteria and aspects of study quality. The electronic databases MEDLINE, EMBASE, PsycINFO and ISI Web of Science were searched for English-language peer-reviewed publications that reported age at onset of schizophrenia and other psychoses separately for cannabis users and non-users, or for tobacco smokers and non-smokers. Meta-analysis showed that the age at onset of psychosis for cannabis users was 32 months earlier than for cannabis non-users (SMD = -0.399, 95% CI -0.493 to -0.306, z = -8.34, p < 0.001), and was two weeks later in tobacco smokers compared with non-smokers (SMD = 0.002, 95% CI -0.094 to 0.097, z = 0.03, p = 0.974). The main results were not affected by subgroup analyses examining studies of a single sex, the methods for making psychiatric diagnoses and measures of study quality. The results suggest that the association between cannabis use and earlier onset of psychosis is robust and is not the result either of tobacco smoking by cannabis-using patients or of the other potentially confounding factors we examined.
This supports the hypothesis that, in some patients, cannabis use plays a causal role in the development of schizophrenia and raises the possibility of treating schizophrenia with new pharmacological treatments that have an affinity for endocannabinoid receptors. abstract_id: PUBMED:32883558 Examining which factors influence age of onset in males and females with schizophrenia. Objective: Data from the 2010 Australian National Survey of High Impact Psychosis (SHIP) were used to examine (1) which variables influence age of onset (AOO) for males and females, and (2) whether the influencing variables differed between the sexes. Method: Data from 622 schizophrenia patients in the SHIP sample were used. These included early life factors, encompassing family psychiatric history, childhood development, trauma and parental loss. Factors occurring within 12 months of diagnosis were also used, including drug/alcohol abuse and premorbid work and social adjustment. Based on the recognised differences in symptom profiles and AOO between the sexes, these factors were regressed separately for males and females. Results: Stepwise linear regressions showed that a family history of psychiatric disorders was significantly associated with earlier AOO in both sexes. Other variables differed between males and females. Specifically, for females, an earlier AOO was associated with poor premorbid social adjustment and the loss of a family member in childhood. Older AOO was associated with immigrant status. For males, a younger AOO was associated with unemployment at onset, poor premorbid work adjustment, parental divorce in childhood, and lifetime cannabis use. A higher premorbid IQ was associated with an older AOO. Conclusion: Familial predisposition to psychiatric illness is related to earlier AOO of schizophrenia independent of sex. Males appear to have more individual-based predictive factors while females seem to have more community/social-based influences. Future directions for research in schizophrenia are suggested. abstract_id: PUBMED:22452790 Age at onset of non-affective psychosis in relation to cannabis use, other drug use and gender. Background: Cannabis use is associated with an earlier age at onset of psychotic illness. The aim of the present study was to examine whether this association is confounded by gender or other substance use in a large cohort of patients with a non-affective psychotic disorder. Method: In 785 patients with a non-affective psychotic disorder, regression analysis was used to investigate the independent effects of gender, cannabis use and other drug use on age at onset of first psychosis. Results: Age at onset was 1.8 years earlier in cannabis users compared to non-users, controlling for gender and other possible confounders. Use of other drugs did not have an additional effect on age at onset when cannabis use was taken into account. In 63.5% of cannabis-using patients, age at most intense cannabis use preceded the age at onset of first psychosis. In males, the mean age at onset was 1.3 years earlier than in females, controlling for cannabis use and other confounders. Conclusions: Cannabis use and gender are independently associated with an earlier onset of psychotic illness. Our findings also suggest that cannabis use may precipitate psychosis. More research is needed to clarify the neurobiological factors that make people vulnerable to this precipitating effect of cannabis. abstract_id: PUBMED:22564907 Gender difference in age at onset of schizophrenia: a meta-analysis.
Background: Most studies reporting the gender difference in age at onset of schizophrenia show an earlier onset in males, but vary considerably in their estimates of the difference. This may be due to variations in study design, setting and diagnostic criteria. In particular, several studies conducted in developing countries have found no difference or a reversed effect whereby females have an earlier onset. The aim of the study was to investigate gender differences in age of onset, and the impact of study design and setting on estimates thereof. Method: Study methods were a systematic literature search, meta-analysis and meta-regression. Results: A total of 46 studies with 29,218 males and 19,402 females fulfilled the inclusion criteria and were entered into a meta-analysis. A random-effects model gave a pooled estimate of the gender difference of 1.07 years (95% confidence interval 0.21-1.93) for age at first admission of schizophrenia, with males having earlier onset. The gender difference in age at onset was not significantly different between developed and developing countries. Studies using Diagnostic and Statistical Manual of Mental Disorders (DSM) criteria showed a significantly greater gender difference in age at onset than studies using International Classification of Diseases (ICD) criteria, the latter showing no difference. Conclusions: The gender difference in age of onset in schizophrenia is smaller than previously thought, and appears absent in studies using ICD. There is no evidence that the gender difference differs between developed and developing countries. abstract_id: PUBMED:25746410 Association of older paternal age with earlier onset among co-affected schizophrenia sib-pairs. Background: Advanced paternal age is associated with increased risk of schizophrenia. This study aimed to explore whether older paternal age is associated with earlier onset among co-affected schizophrenia sib-pairs with the same familial predisposition. Method: A total of 1297 patients with schizophrenia from 630 families throughout Taiwan, each family ascertained to have at least two affected siblings, were interviewed using the Diagnostic Interview for Genetic Studies. Both inter-family comparisons (a hierarchical regression model allowing for familial dependence and adjusting for confounders) and within-family comparisons (examining the consistency between onset order and birth order) were performed. Results: An inverted U-shaped relationship was observed between paternal age and onset of schizophrenia. Affected offspring with paternal age of 20-24 years had the oldest onset. Beyond a paternal age of 25 years, increasing paternal age was associated with a linear decrease in the age at onset of schizophrenia. On average, the onset was lowered by 1.5 years for paternal age of 25-29 years and by 5.5 years for paternal age ≥50 years (p = 0.04; trend test). The proportion of younger siblings with earlier onset (58%) was larger than that of older siblings with earlier onset (42%) (p = 0.0002). Conclusions: These findings indicate that paternal age older than 25 years and younger than 20 years were both associated with earlier onset among familial schizophrenia cases. The associations of advanced paternal age with both increased susceptibility to schizophrenia and earlier onset of schizophrenia are consistent with the increasing rate of spontaneous mutations in sperm as men age.
An earlier age at onset of schizophrenia in men than in women has been widely reported, but hitherto insufficient account has been taken of parameters that might confound this finding. Furthermore, few explanatory models have accounted for the differences in shape of the age-at-onset distributions in males and females with schizophrenia. A catchment area sample of 477 first-contact cases with schizophrenia or related disorders was ascertained through a case register. Differences in age at onset distributions between males and females were determined, and adjustments were made for potential confounding factors. The most powerful predictors of early illness onset were poor premorbid occupational functioning, single marital status, and male sex. The earlier onset in males was robust to controlling for other parameters. The shape of the onset distribution also differed between the sexes: SKUMIX analysis revealed a two-peak distribution for males, and a three-peak distribution for females. The mean age at onset for schizophrenia is earlier in males, and the onset distribution differs between the sexes. Psychosocial variables cannot explain these findings. Possible explanations for these gender differences include males and females being differentially susceptible to subtypes of illness with different mean ages at onset; precipitating and/or ameliorating factors operating at different stages of life in males and females; and/or an X-linked susceptibility locus that determines the age at onset. abstract_id: PUBMED:30926130 Advanced Paternal Age and Early Onset of Schizophrenia in Sporadic Cases: Not Confounded by Parental Polygenic Risk for Schizophrenia. Background: Whether the paternal age effect on schizophrenia reflects causation or merely an association due to confounding by selection into late parenthood is still debated. We investigated the association between paternal age and early onset of schizophrenia in offspring, controlling for both paternal and maternal predisposition to schizophrenia as empirically estimated using polygenic risk scores (PRS) derived from the Psychiatric Genomics Consortium. Methods: Among 2923 sporadic schizophrenia cases selected from the Schizophrenia Trio Genomic Research in Taiwan project, 1649 had parents' genotyping data. The relationships of paternal schizophrenia PRS to paternal age at first birth (AFB) and of maternal schizophrenia PRS to maternal AFB were examined. A logistic regression model of patients' early onset of schizophrenia (≤18 years old) on paternal age was conducted. Results: Paternal age over 20 years showed a trend toward an increasing proportion of early onset of schizophrenia (odds ratio per 10-year increase in paternal age = 1.28, p = .007) after adjusting for maternal age, sex, and age. Older paternal AFB was also associated with an increasing trend in paternal schizophrenia PRS. Additionally, a U-shaped relationship between maternal AFB and maternal schizophrenia PRS was observed. After adjusting for both paternal and maternal schizophrenia PRS, the association of paternal age with patients' early onset of schizophrenia remained (odds ratio = 1.29, p = .04). Conclusions: The association between paternal age and early onset of schizophrenia was not confounded by parental PRS for schizophrenia, which partially captures parental genetic vulnerability to schizophrenia. Our findings support an independent role of paternal age per se in increased risk of early onset of schizophrenia in offspring.
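The "odds ratio per 10-year increase in paternal age" quoted above is the kind of quantity obtained by exponentiating ten times the per-year coefficient of a logistic regression. The following Python sketch is a hedged illustration on synthetic data: the simulated coefficient of 0.025 per year is an invented value chosen so that exp(10 × 0.025) ≈ 1.28, matching only the scale of the reported estimate; it is not the study's data or model.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
paternal_age = rng.uniform(20, 55, n)        # synthetic paternal ages (years)

# Assumed data-generating model: log-odds of early onset rise 0.025 per year
logit = -3.0 + 0.025 * paternal_age
p_early = 1 / (1 + np.exp(-logit))
early_onset = rng.binomial(1, p_early)       # 1 = onset at or before age 18

X = sm.add_constant(paternal_age)            # intercept plus paternal age
fit = sm.Logit(early_onset, X).fit(disp=False)
beta_per_year = fit.params[1]                # fitted per-year log-odds slope
print(f"OR per 10-year increase: {np.exp(10 * beta_per_year):.2f}")
# Recovers roughly exp(0.25) ≈ 1.28, the same scale of effect as the odds
# ratio reported in PUBMED:30926130.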
abstract_id: PUBMED:9444493 Hippocampal volume asymmetry and age at illness onset in males with schizophrenia. To determine whether there are disturbances of hippocampal volume asymmetry in schizophrenic patients, we obtained contiguous, 1-mm-thick magnetic resonance images in 28 males with chronic schizophrenia and in 28 age-matched healthy males. The schizophrenic patients showed a bilateral reduction in volume of the hippocampal formation (HF; left 7.0%; right 8.7%). This reduction was significantly associated with the severity of disorganization syndrome (P < 0.0005). A significant asymmetry in the HF volume was found in the control subjects (P = 0.006), but not in the patients (P = 0.40). There was a significant positive correlation between the asymmetry index and the patient's age at the onset of schizophrenia (r = 0.46, P = 0.01). Results indicate that a disturbance in the normal asymmetry of the HF may be a characteristic of schizophrenia, particularly in patients with an early onset of the illness. abstract_id: PUBMED:7840353 Gender and age at onset in schizophrenia: impact of family history. Objective: The 1-year prevalence of schizophrenia was studied in a limited geographical area of Reunion Island (Indian Ocean) to assess the impact of family history of schizophrenia on the well-known association between gender and age at onset. Method: The population of schizophrenic patients meeting the DSM-III-R criteria for schizophrenia (N = 663) was identified and divided according to the presence of another schizophrenic patient among the first- and second-degree relatives. Results: As previously reported, the median age at onset differed between the sexes: the males had an earlier onset (mean age = 27.8 years) than the females (31.5 years). Comparison of the ages at onset according to family history revealed that onset was later for female subjects with a negative family history than for the three other groups (i.e., males with or without a family history and females with a family history). No difference emerged in the comparison of the ages at onset of the males and females with a positive family history. Conclusions: Comparison of schizophrenic patients with familial versus sporadic disorder confirms the absence of a gender effect for age at onset in the subgroup with familial disorder. This approach also demonstrates the existence of a subgroup composed of affected females having late onset and no family history of schizophrenia. Answer: The question of whether the earlier age at onset of schizophrenia in males is a confounded finding has been explored in various studies. The results suggest that while there is an observed gender difference in the age at onset of schizophrenia, this difference may not be a robust biological characteristic of the disorder and could be influenced by other factors. One study found that the gender difference in age at onset was attenuated when controlling for marital status and premorbid personality, suggesting that these factors may explain a large part of the differences previously reported (PUBMED:9229029). Another study indicated that cannabis use was associated with an earlier onset of psychosis and that this association was robust and not due to tobacco smoking or other confounding factors examined (PUBMED:22716150).
Additionally, familial predisposition to psychiatric illness is related to earlier AOO of schizophrenia independent of sex, with males having more individual-based predictive factors and females having more community/social-based influences (PUBMED:32883558). Further research showed that cannabis use and gender are independently associated with an earlier onset of psychotic illness, with cannabis use potentially precipitating psychosis (PUBMED:22452790). A meta-analysis found that the gender difference in age of onset is smaller than previously thought and appears absent in studies using ICD criteria, with no evidence that the gender difference differs between developed and developing countries (PUBMED:22564907). Advanced paternal age was also associated with earlier onset among co-affected schizophrenia sib-pairs, a pattern consistent with the increasing rate of spontaneous mutations in sperm as men age (PUBMED:25746410). Another study found that the earlier onset in males was robust to controlling for other parameters and that the onset distribution differs between the sexes (PUBMED:9789910). Lastly, a study examining the impact of family history on the association between gender and age at onset found no gender effect for age at onset in the subgroup with familial disorder and identified a subgroup of affected females with late onset and no family history of schizophrenia (PUBMED:7840353). In conclusion, while there is evidence of an earlier age at onset of schizophrenia in males, this finding may be influenced by various confounding factors such as marital status, premorbid personality, cannabis use, family history, and paternal age.
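A closing note on the section's central statistical idea: a crude gender difference in age at onset can shrink once a correlated, onset-delaying covariate such as marriage enters the model, which is the attenuation PUBMED:9229029 reports. The sketch below is a hedged, synthetic illustration; all effect sizes are invented rather than taken from any study, and statsmodels' formula API is assumed available.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
male = rng.binomial(1, 0.5, n)
# Marriage is assumed less frequent in males and to go with later onset
married = rng.binomial(1, 0.35 + 0.25 * (1 - male))
onset = 28 - 0.5 * male + 3.0 * married + rng.normal(0, 5, n)

df = pd.DataFrame({"onset": onset, "male": male, "married": married})
crude = smf.ols("onset ~ male", data=df).fit()
adjusted = smf.ols("onset ~ male + married", data=df).fit()
print(f"crude male effect:    {crude.params['male']:+.2f} years")
print(f"adjusted male effect: {adjusted.params['male']:+.2f} years")
# The crude gap (about -1.25 years) shrinks toward the small direct effect
# (-0.5 years) once marital status is adjusted for: the apparent gender
# difference was partly carried by the unevenly distributed confounder.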