Instruction: Cardiac surgery in a high-risk group of patients: is prolonged postoperative antibiotic prophylaxis effective?
Abstracts:
abstract_id: PUBMED:34189757
Invasive dental procedures as risk factors for postoperative spinal infection and the effect of antibiotic prophylaxis. Aim: To identify invasive dental procedures as a risk factor for postoperative spinal infection (PSI) and evaluate the effectiveness of antibiotic prophylaxis.
Materials And Methods: We analysed 229,335 patients who underwent spinal surgery with instrumentation from 2010 to 2017, using the nationwide database. The incidence of spinal infection 2 years after surgery was determined. Invasive dental procedures as a risk factor for PSI and the effects of antibiotic prophylaxis during this period were also analysed.
Results: A total of 15,346 patients (6.69%) were diagnosed with PSI. It was found that advanced age, male sex, and a high Charlson Comorbidity Index were risk factors for PSI. The risk of PSI did not increase following dental procedures (adjusted hazard ratio [HR] 0.850; 95% confidence interval [CI], 0.793-0.912) and was not affected by antibiotics (adjusted HR 1.097; 95% CI, 0.987-1.218). Patients who received dental treatment as early as 3 months after spinal surgery had the lowest risk of postoperative infection (adjusted HR 0.869; 95% CI, 0.795-0.950).
Conclusions: Invasive dental procedure does not increase the risk of PSI, and antibiotic prophylaxis before dental procedure was not effective in preventing spinal infection.
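The adjusted hazard ratios quoted above are the kind of estimate produced by a Cox proportional-hazards model fitted to time-to-infection data. A minimal sketch of such a fit is shown below (Python, lifelines); the synthetic records, column names and covariate set are illustrative assumptions only, not the registry data or the authors' exact model.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulate a small synthetic cohort (purely illustrative, not the registry data)
rng = np.random.default_rng(0)
n = 300
age = rng.normal(65, 10, n)
male = rng.integers(0, 2, n)
charlson = rng.poisson(2, n)
dental = rng.integers(0, 2, n)
hazard = np.exp(0.03 * (age - 65) + 0.3 * male + 0.2 * charlson)   # dental has no effect here
time = np.minimum(rng.exponential(730 / hazard), 730)               # follow-up capped at 2 years
event = (time < 730).astype(int)                                    # 1 = postoperative spinal infection observed

df = pd.DataFrame({"time": time, "event": event, "age": age,
                   "male": male, "charlson": charlson, "dental": dental})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(np.exp(cph.params_))   # adjusted hazard ratios for each covariate
cph.print_summary()          # coefficients with 95% confidence intervals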
abstract_id: PUBMED:9270631
Cardiac surgery in a high-risk group of patients: is prolonged postoperative antibiotic prophylaxis effective? Objective: In a prospective, randomized study, postoperatively prolonged antibiotic prophylaxis is evaluated in a high-risk group of patients undergoing cardiac operations. These patients had postoperative low cardiac output necessitating inotropic support and intraaortic balloon pumping.
Methods: Between January 1991 and 1994, 53 patients were enrolled in the study (42 men, mean age 65 years). All patients received the usual perioperative (24 hours) cefazolin prophylaxis. In the study group (n = 28), a prolonged prophylaxis regimen with ticarcillin/clavulanate was given for 2 days, and low-dose vancomycin was added until removal of the intraaortic balloon pump. The control group (n = 25) did not receive a prolonged regimen of prophylaxis. Follow-up ended at hospital discharge.
Results: Early mortality was 7 of 28 patients (25%) in the prophylaxis group and 8 of 25 patients (32%) in the control group (p = 0.397). Defined infections (pneumonia, n = 22; sepsis, n = 8; deep sternal wound infection, n = 2) occurred in 50% of the study group and 68% of the control group (p = 0.265). In all patients with septicemia, only coagulase-negative staphylococci could be isolated from the bloodstream (5 patients in the prophylaxis group vs 3 in the control group). Infection parameters were checked daily and did not differ significantly between groups. A total of 1158 bacteriologic tests were performed (blood cultures, n = 389; intravascular catheters, n = 208; bronchial aspirates, n = 411; intraaortic balloon pumps, n = 42; wound secretions, n = 108) showing bacterial growth in 322 (28%) without a significant difference between the groups. In the prophylaxis group, 13 intravascular catheters and intraaortic balloon pumps showed bacterial growth versus 11 in the control group. No side effects were seen.
Conclusions: In a high-risk group of patients undergoing cardiac operations, infectious outcome could not be effectively influenced by an additional and prolonged postoperative prophylaxis regimen with low-dose vancomycin and ticarcillin/clavulanate. Low-dose vancomycin did not reduce the rate of infections or colonizations of intravascular catheters with gram-positive organisms.
abstract_id: PUBMED:11767963
Surgical antibiotic prophylaxis: effect in postoperative infections. Objective: to assess the risk of surgical wound infection and hospital acquired infections among patients with and without adequate antibiotic prophylaxis. Also, to provide models to predict the contributing factors of hospital infection and surgical wound infection.
Design: survey study. Prospective cohort study over 14 months, with data collected by a nurse and an epidemiologist through visits to the surgical areas, a review of the medical record and consultation with the medical doctor and nurses attending the patients.
Setting: Two hundred and fifty bed, general hospital serving Puertollano (Ciudad Real), population--50,000.
Results: between February 1998 and April 1999, 754 patients underwent surgery, 263 (34.88%) received appropriate perioperative prophylaxis while 491 (65.12%) received inadequate prophylaxis. For those who received adequate antibiotic prophylaxis, the percentage of nosocomial infection was 10.65% compared with the group who received inadequate prophylaxis in which the percentage of nosocomial infection was 33.40%. The relative risk of nosocomial infection was, therefore, 4.21 times higher in the latter group (95% confidence interval: 2.71-6.51). A patient in the inadequate prophylaxis group had a 14.87% chance of wound infection while a patient in the adequate prophylaxis group had a 4.56% chance of wound infection. The relative risk of wound infection was 3.65 times higher in the group that received inadequate prophylaxis (95% confidence interval: 1.95-6.86). The final logistic regression model to assess nosocomial infection incorporated seven prognostic factors: age, duration of peripheral venous catheterization, bladder (vesical) catheterization, duration of operation, obesity, metabolic or neoplastic disease, and adequate or inadequate prophylaxis. When we incorporated these variables in the multi-factorial analysis we found that the relative risk of developing nosocomial infection was 2.33 times higher in the group which received inadequate prophylaxis. When we applied the second multiple logistic regression model (wound infection), we found that the probability of developing surgical wound infection was 2.32 times higher in the group which received inadequate prophylaxis as opposed to the group which received adequate prophylaxis. The goodness-of-fit (Hosmer-Lemeshow) test was satisfactory in all models.
Conclusions: a multi-factorial analysis was applied to identify the high-risk patients and the risk factors for postoperative infections. Through the application of these multiple regression logistic models, we conclude that the correct antibiotic prophylaxis is effective and will subsequently reduce postoperative infection rates, especially in high-risk patients. Therefore, the choice of antimicrobial agent should be made on the basis of the criteria of hospital committee.
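The adjusted estimates in this abstract (for example, roughly 2.3-fold higher odds of nosocomial infection with inadequate prophylaxis) come from multiple logistic regression. A minimal sketch of such a model is given below (Python, statsmodels) on synthetic data; the variable names, cohort and coefficients are assumptions for illustration and do not reproduce the study's dataset or its exact covariate list.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 754                                    # same order of magnitude as the cohort
df = pd.DataFrame({
    "inadequate_ppx": rng.integers(0, 2, n),
    "age": rng.normal(60, 15, n),
    "obesity": rng.integers(0, 2, n),
    "op_duration_min": rng.normal(90, 30, n),
})
# Synthetic outcome loosely tied to the covariates (illustrative only)
lin = -3 + 0.8 * df["inadequate_ppx"] + 0.02 * (df["age"] - 60) + 0.5 * df["obesity"]
df["nosocomial_infection"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = smf.logit(
    "nosocomial_infection ~ inadequate_ppx + age + obesity + op_duration_min",
    data=df,
).fit()
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals for the odds ratios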
abstract_id: PUBMED:35140033
Extended antibiotic prophylaxis after pancreatoduodenectomy reduces postoperative abdominal infection in high-risk patients: Results from a retrospective cohort study. Background: Preoperative biliary stenting before pancreatoduodenectomy is associated with a greater risk of bacteribilia and thus could lead to more frequent and severe surgical site infections. We hypothesized that an extended antibiotic prophylaxis could reduce the risk of surgical site infections for these high-risk patients compared with standard antibiotic prophylaxis.
Methods: All consecutive patients who underwent pancreatoduodenectomy between January 1, 2010 and December 31, 2016 were included in a tricentric retrospective cohort and classified according to the risk of surgical site infections (high or low) and the type of antibiotic prophylaxis (standard or extended). Extended antibiotic prophylaxis was defined by the use of high-rank β-lactams >2 days after surgery. Standard antibiotic prophylaxis concerned all single dose of low-rank β-lactams antibiotic prophylaxis. The primary outcome was postoperative surgical site infections.
Results: Three hundred and eight patients were included; 146 (47%) were high-risk patients, and 81 (55%) received extended antibiotic prophylaxis, mostly composed of piperacillin-tazobactam and gentamicin. There were significantly fewer surgical site infections in high-risk patients receiving extended antibiotic prophylaxis versus standard antibiotic prophylaxis (odds ratio = 0.4; 95% confidence interval, 0.2-0.8; P = .011), even after adjusting for age, sex, and duration of the surgical procedure (adjusted odds ratio = 0.3; 95% confidence interval, 0.1-0.7; P = .0071). There was no statistical difference in 28-day mortality (P = .32) or 90-day mortality (P = .13). Microorganisms identified in bile culture were more often sensitive to antibiotic prophylaxis in the high-risk extended antibiotic prophylaxis group than in the high-risk standard antibiotic prophylaxis group (64% versus 38%; P = .01).
Conclusion: Extended antibiotic prophylaxis is associated with a reduced risk of surgical site infections for high-risk patients with no significant reduction on 28-day mortality. Additional studies are required to determine the optimal duration of extended antibiotic prophylaxis for these patients.
abstract_id: PUBMED:34664290
Impact of antibiotic prophylaxis courses on postoperative complications following total joint arthroplasty: Finding from Chinese population. What Is Known And Objective: Prolonged antibiotic prophylaxis after total joint arthroplasty (TJA) may not assist in minimizing postoperative complications, however, data based on the Chinese population have been limited. The purpose of this study is to investigate the effect of antibiotic prophylaxis on postoperative complications after TJA in Chinese patients.
Methods: We retrospectively reviewed 990 patients undergoing elective primary TJA surgery from January 2016 to June 2019. Patients who received a short course (≤3 days) of antibiotic prophylaxis were compared with those who received a longer course (>3 days). Logistic regression analysis and subgroup analysis were performed to control for potential confounders. Beyond that, survival analysis was used to determine the cumulative incidence of postoperative complications.
Results And Discussion: Follow-up to 12 months after surgery, the prevalence of system complications in the longer course group and the short course group were 5.1% and 3.9%, respectively (p = 0.451). Similarly, no statistical differences in incisional complications (1.5% vs. 1.8%, p > 0.999) and periprosthetic joint infection (PJI) (1.0% vs. 1.0%, p > 0.999) were observed between the two groups. After performing logistic regression analysis and survival analysis, no potential association was found between the course of antibiotic prophylaxis and postoperative complications. In addition, prolonged antibiotic prophylaxis conferred no benefit for high-risk obese patients.
What Is New And Conclusion: Extended antibiotic prophylaxis did not result in a statistically significant and clinically meaningful reduction in postoperative complications. Therefore, we recommended that the duration of antibiotic prophylaxis in TJA should be shortened to 3 days or less in the Chinese population.
abstract_id: PUBMED:24598429
Prescribing antibiotic prophylaxis in orthognathic surgery: a systematic review. There is no consensus on the use of antibiotic prophylaxis in orthognathic surgery to prevent infections. A systematic review of randomized controlled trials investigating the efficacy of antibiotic prophylaxis was performed to make evidence-based recommendations. A search of Embase, Ovid Medline, and Cochrane databases (1966-November 2012) was conducted and the reference lists of articles identified were checked for relevant studies. Eleven studies were eligible and were reviewed independently by the authors using two validated quality assessment scales. Three studies were identified to have a low risk of bias and eight studies a high risk of bias. Most studies compared preoperative and perioperative antibiotic prophylaxis with or without continuous postoperative administration. Methodological flaws in the included studies were no description of inclusion and exclusion criteria and incorrect handling of dropouts and withdrawals. Studies investigating the efficacy of antibiotic prophylaxis are not placebo-controlled and mainly of poor quality. Based on the available evidence, preoperative antibiotic prophylaxis appears to be effective in reducing the postoperative infection rate in orthognathic surgery. However, there is no evidence for the effectiveness of prescribing additional continuous postoperative antibiotics. More trials with a low risk of bias are needed to produce evidence-based recommendations and establish guidelines.
abstract_id: PUBMED:37552239
Antibiotic prophylaxis in posterior colporrhaphy does not reduce postoperative infection: a nationwide observational cohort study. Introduction And Hypothesis: The aim of this study was to explore if antibiotic prophylaxis prevents postoperative infection after a posterior colporrhaphy.
Methods: In this register-based nationwide cohort study data were collected from the "The Swedish National Quality Register of Gynecological Surgery" (GynOp). Women 18 years or older who underwent a primary posterior colporrhaphy between 1 January 2015 and 31 December 2020 were included. Patients undergoing any concomitant prolapse procedure, mesh surgery, or incontinence procedure were excluded. The cohort was divided into two groups based on administration of antibiotic prophylaxis (n = 1,218) or not (n = 4,884). The primary outcome of this study was patient-reported infectious complication requiring antibiotic treatment. Secondary outcome measures included patient satisfaction and prolapse-related symptoms at 1 year postoperatively.
Results: A total of 7,799 patients who underwent posterior colporrhaphy and met the inclusion criteria and did not meet the exclusion criteria were identified in the register database. Of these patients 6,102 answered the primary outcome question (79%). In the antibiotic prophylaxis group a total of 138 reported a postoperative infection (11%) and in the no antibiotic prophylaxis group the corresponding data were 520 (11%). There were no significant differences regarding either the primary or the secondary outcomes between the study groups.
Conclusion: In this nationwide Swedish register study antibiotic prophylaxis was not associated with a reduced risk of postoperative infection after a posterior colporrhaphy.
abstract_id: PUBMED:11299497
Antibiotic prophylaxis for high risk patients undergoing cholecystectomy. An open, randomised clinical trial was performed on 435 high risk patients who underwent open cholecystectomy between 1 January 1993 and 31 December 1995. The patients were divided into three groups. Group 1 (AMOX/CLAV, N = 179) was treated with 1.2 g i.v. amoxicillin/clavulanic acid, while the patients in Group 2 (COMPARATOR, N = 164) were given other antibiotics commonly used for prophylaxis in biliary surgery (cefamandole, cefuroxime, cefotaxime). Group 3 (CONTROL, N = 92) contained patients without any risk factors for infectious complications. In this group we did not use antibiotic prophylaxis. The results were analysed with Student's t and chi-squared tests. The wound infection rate in Group 1 was 2.76% versus 5.48% in Group 2. The difference was significant if the patients were older than 65 years or the preoperative hospitalisation was longer than 5 days. The concentration of amoxicillin/clavulanic acid was measured in the serum, in the wall of the gall bladder, and in the bile obtained both from the gall bladder and the major bile duct. The observed levels were higher than the therapeutic concentration in the serum and in the bile obtained from the major bile duct, but lower in the gall bladder wall and in the bile obtained from the gall bladder. Systemic antibiotic prophylaxis is required for open cholecystectomy in high risk patients.
abstract_id: PUBMED:19800467
Complications associated with postoperative antibiotic prophylaxis after breast surgery. Background: Evidence supports single-dose preoperative antibiotic (ABX) prophylaxis for breast surgery; however, limited data exist regarding the incidence and type of antibiotic complications postoperatively.
Methods: Breast/axillary surgeries between July 2004 and June 2006 were reviewed. Complications were analyzed by antibiotic group: preoperative prophylaxis, postoperative prophylaxis, and therapeutic intent. The Fisher exact test was used to compare complication rates.
Results: A total of 389 patients underwent breast/axillary surgeries during the study period. A total of 363 (93%) patients received ABX: 353 (91%) received a single preoperative dose, 91 (23%) received postoperative ABX prophylaxis, and 76 (20%) received ABX for therapeutic intent. Among patients receiving ABX, 15 (4%) had an ABX-related complication. The ABX-related complication rate was significantly higher among those receiving postoperative prophylaxis (5.5%; 5 of 91) compared with those receiving preoperative ABX alone (0%; 0 of 214) (P = .002).
Conclusions: Drug-related complications were uncommon with preoperative ABX prophylaxis. Considering the potential complications of ABX after breast surgery, the use of postoperative prophylaxis should be reexamined.
abstract_id: PUBMED:24029441
Correlation of antibiotic prophylaxis and difficulty of extraction with postoperative inflammatory complications in the lower third molar surgery. Our aim was to investigate the correlation among antibiotic prophylaxis, difficulty of extraction, and postoperative complications in the removal of lower 3rd molars. A total of 1222 such extractions in 890 patients between January 2010 and January 2012 were analysed retrospectively. The difficulty of extraction measured by Pederson's index, antibiotic prophylaxis with cefditoren, and postoperative complications were recorded. The difficulty of extraction was significantly associated with postoperative complications (p=0.03). There were no significant associations between antibiotic prophylaxis and postoperative complications in groups of equal difficulty ("easy" group (class I) p=1.00; "moderate" group (class II) p=1.00; and "difficult" group (class III) p=0.65). There was a small but insignificant increase in the number of dry sockets and infections in class III cases. In conclusion, this study provides further evidence that antibiotic prophylaxis for the prevention of postoperative inflammatory complications is unnecessary for extraction of 3rd molars.
Answer: Based on the abstract from PUBMED:9270631, in a high-risk group of patients undergoing cardiac operations, postoperative prolonged antibiotic prophylaxis was not effective. The study involved patients who had postoperative low cardiac output necessitating inotropic support and intraaortic balloon pumping. Despite the administration of an extended regimen of prophylaxis with ticarcillin/clavulanate for 2 days and a low dose of vancomycin until the removal of the intraaortic balloon pump, the infectious outcome could not be effectively influenced. The study found no significant difference in the rate of infections or colonizations of intravascular catheters with gram-positive organisms between the prophylaxis group and the control group. Additionally, no side effects were observed due to the antibiotic regimen. Therefore, the conclusion was that prolonged postoperative antibiotic prophylaxis did not reduce the rate of infections in this high-risk group of patients undergoing cardiac operations.
Instruction: Increase of sexually transmitted infections, but not HIV, among young homosexual men in Amsterdam: are STIs still reliable markers for HIV transmission?
Abstracts:
abstract_id: PUBMED:15681720
Increase of sexually transmitted infections, but not HIV, among young homosexual men in Amsterdam: are STIs still reliable markers for HIV transmission? Objectives: The incidence of HIV and STIs increased among men who have sex with men (MSM) visiting our STI clinic in Amsterdam. Interestingly, HIV increased mainly among older (> or =35 years) MSM, whereas infection rates of rectal gonorrhoea increased mainly in younger men. To explore this discrepancy we compared trends in STIs and HIV in a cohort of young HIV negative homosexual men from 1984 until 2002.
Methods: The study population included 863 men enrolled at < or =30 years of age from 1984 onward in the Amsterdam Cohort Studies (ACS). They had attended at least one of the 6-monthly follow-up ACS visits at which they completed a questionnaire (including self-reported gonorrhoea and syphilis episodes) and were tested for syphilis and HIV. Yearly trends in HIV and STI incidence and risk factors were analysed using Poisson regression.
Results: Mean age at enrollment was 25 years. The median follow-up time was 4 years. Until 1995, trends in HIV and STI incidence were concurrent; however, since 1995 there was a significant (p<0.05) increase in syphilis (0 to 1.4/100 person years (PY)) and gonorrhoea incidence (1.1 to 6.0/100 PY), but no change in HIV incidence (1.1 and 1.3/100 PY).
Conclusions: The incidence of syphilis and gonorrhoea has increased among young homosexual men since 1995, while HIV incidence has remained stable. Increasing STI incidence underscores the potential for HIV spread among young homosexual men. However, several years of increasing STIs without HIV, makes the relation between STI incidence and HIV transmission a subject for debate.
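Trends like these, with incidence per 100 person-years analysed by calendar year using Poisson regression, are usually fitted as a Poisson GLM with the log of person-years as an offset. A minimal sketch follows (Python, statsmodels); the yearly counts and person-years are made-up numbers, not the Amsterdam Cohort Studies data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical yearly gonorrhoea counts and person-years of follow-up
df = pd.DataFrame({
    "year":   [1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002],
    "cases":  [4, 6, 7, 9, 12, 14, 15, 18],
    "pyears": [380, 360, 350, 340, 330, 320, 300, 290],
})

X = sm.add_constant(df["year"] - df["year"].min())          # years since 1995
model = sm.GLM(df["cases"], X, family=sm.families.Poisson(),
               offset=np.log(df["pyears"])).fit()
print(np.exp(model.params))                          # incidence rate ratio per calendar year
print((df["cases"] / df["pyears"] * 100).round(2))   # crude incidence per 100 person-years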
abstract_id: PUBMED:17069728
Sexual transmission of hepatitis C among HIV-positive men Infections with the hepatitis C virus (HCV) occur primarily through percutaneous transmission, while sexual transmission seems to be rare. Recently, in some European cities, an increasing incidence of sexually transmitted HCV infection among HIV-infected homosexual males has been reported. We describe four cases of acute HCV infection among HIV-infected homosexual males, where sexual transmission was likely.
abstract_id: PUBMED:11467016
Test of HIV incidence shows continuing HIV transmission in homosexual/bisexual men in England and Wales. It has been suggested that HIV incidence will decrease with the increased use of antiretroviral therapy (ART) in HIV infected homosexual/bisexual men. HIV incidence was measured using a sensitive/less sensitive assay technique, at a time when combination ART was widespread. The Serological Testing Algorithm for Recent HIV Seroconversion (STARHS) technique was applied to syphilis test specimens collected from homosexual/bisexual men attending 15 sexually transmitted infections (STI) clinics which participated in an unlinked anonymous serosurvey of HIV infection during 1998. The HIV incidence rate was adjusted to compensate for patients who had a repeat syphilis test within the same year. Leftover syphilis test sera from 6202 men had been unlinked and anonymised, of which 415 were HIV positive. Sera from 412 (99.3%) patients were available. The STARHS assay showed 62 to have been recently infected with HIV (approximately in the last four months), giving an incidence of 3.33% per annum (95% CI: 2.06%-5.27%). The highest incidence was seen in those aged 35-44 years. About 46% of all HIV-infected homosexual/bisexual men were probably receiving combination ART at this time. If 10% of those on treatment were misclassified as recent infections the incidence would have been 2.58% per annum (95% CI: 1.53%-4.24%). In homosexual/bisexual men having syphilis tests at STI clinics in the UK during 1998 the incidence of HIV infection was between two and three per hundred per year. Combination ART treatment of almost half of HIV-infected homosexual/bisexual men in the population is compatible with appreciable continuing HIV transmission among those at high behavioural risk. Public health surveillance systems for those at high risk for HIV infection should, as soon as possible, incorporate the STARHS methodology for monitoring recent HIV incidence.
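The quoted incidence can be roughly reconstructed from the abstract's own figures: recent infections divided by the person-time contributed by susceptible men during the assay's detection window (about four months). The sketch below is a crude, back-of-envelope check; the published 3.33% additionally adjusts for men tested more than once in the year, so the numbers differ slightly.

# Crude window-period incidence estimate from the figures in the abstract
recent = 62                      # assay-classified recent infections
tested = 6202                    # men whose syphilis-test sera were examined
positive = 415                   # HIV-positive overall
window_years = 4 / 12            # approximate STARHS detection window (~4 months)

susceptible = tested - positive              # men at risk of a new infection
person_years = susceptible * window_years
crude_incidence = recent / person_years      # infections per person-year
print(f"{100 * crude_incidence:.2f}% per annum (crude)")   # about 3.2%, vs 3.33% adjusted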
abstract_id: PUBMED:12576605
Is use of antiretroviral therapy among homosexual men associated with increased risk of transmission of HIV infection? Background/objective: There is concern that use of highly active antiretroviral therapy (HAART) may be linked to increased sexual risk behaviour among homosexual men. We investigated sexual risk behaviour in HIV positive homosexual men and the relation between use of HAART and risk of HIV transmission.
Methods: A cross sectional study of 420 HIV positive homosexual men attending a London outpatient clinic. Individual data were collected from computer assisted self interview, STI screening, and clinical and laboratory databases.
Results: Among all men, sexual behaviour associated with a high risk of HIV transmission was commonly reported. The most frequently reported type of partnership was casual partners only, and 22% reported unprotected anal intercourse with one or more new partners in the past month. Analysis of crude data showed that men on HAART had fewer sexual partners (median 9 versus 20, p=0.28), less unprotected anal intercourse (for example, 36% versus 27% had insertive unprotected anal intercourse with a new partner in the past year, p=0.03) and fewer acute sexually transmitted infections (33% versus 19%, p=0.004 in the past 12 months) than men not on HAART. Self assessed health status was similar between the two groups: 72% on HAART and 75% not on HAART rated their health as very or fairly good, (p=0.55). In multivariate analysis, differences in sexual risk behaviour between men on HAART and men not on HAART were attenuated by adjustment for age, time since HIV infection. CD4 count and self assessed health status.
Conclusion: HIV positive homosexual men attending a London outpatient clinic commonly reported sexual behaviour with a high risk of HIV transmission. However, behavioural and clinical risk factors for HIV transmission were consistently lower in men on HAART than men not on HAART. Although use of HAART by homosexual men with generally good health is not associated with higher risk behaviours, effective risk reduction interventions targeting known HIV positive homosexual men are still urgently needed.
abstract_id: PUBMED:12131206
HIV incidence on the increase among homosexual men attending an Amsterdam sexually transmitted disease clinic: using a novel approach for detecting recent infections. Objective: Dramatic increases have occurred in sexually transmitted diseases (STD) and in sexual risk behaviour among homosexual men in Amsterdam and internationally. We investigated whether these trends indicate a resurgence of the HIV epidemic.
Methods: HIV incidence was determined among homosexual attendees of an STD clinic in Amsterdam, who had participated in semi-annual anonymous unlinked cross-sectional HIV prevalence studies from 1991 to 2001. Stored HIV-seropositive samples were tested with a less-sensitive HIV assay and, if non-reactive, were further tested for the presence of antiretroviral drugs, indicative of the use of highly active antiretroviral therapy. Seropositive men who tested non-reactive on the less-sensitive assay and had not used antiretroviral drugs were classified as recently infected (< 170 days). Annual HIV incidence and its changes were examined.
Results: Among 3090 homosexual participants (median age 34 years), 454 were HIV infected, of whom 37 were recently infected. From 1991 to 2001 the overall incidence was 3.0 infections/100 person-years. Incidence increased over time (P = 0.02) and, strikingly, the increase was evident in older (> or = 34 years) men (P < 0.01), but not in the young. Of men recently infected, 84% (n = 31) were unaware of their infection and 70.3% (n = 26) had a concurrent STD. These 26 men reportedly had sex with a total of 315 men in the preceding 6 months.
Conclusion: HIV incidence is increasing among homosexual attendees of an STD clinic. It is imperative to trace recently infected individuals, because they are highly infectious, and can thus play a key role in the spread of HIV.
abstract_id: PUBMED:26198239
Seminal Shedding of CMV and HIV Transmission among Men Who Have Sex with Men. As in many urban areas in the United States, the largest burden of the HIV epidemic in San Diego is borne by men who have sex with men (MSM). Using data from well-characterized HIV transmitting and non-transmitting partner pairs of MSM in San Diego, we calculated the population attributable risk (PAR) of HIV transmissions for different co-infections common among MSM in this area. We found that over a third of HIV transmissions could be potentially attributed to genital shedding of cytomegalovirus (CMV) (111 transmission events), compared to 21% potentially attributed to bacterial sexually transmitted infections (STI) (62 events) and 17% to herpes simplex virus type-2 (HSV-2) (51 events). Although our study cannot infer causality between the described associations and is limited in sample size, these results suggest that interventions aimed at reducing CMV shedding might be an attractive HIV prevention strategy in populations with high prevalence of CMV co-infection.
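The population attributable risk figures quoted here follow, in their standard textbook form, from the exposure prevalence among source partners and the relative risk of transmission (Levin's formula); the study's own partner-pair calculation may differ in detail. A minimal sketch with clearly hypothetical inputs:

def population_attributable_risk(prevalence_exposed: float, relative_risk: float) -> float:
    """Levin's formula: fraction of cases attributable to the exposure in the population."""
    excess = prevalence_exposed * (relative_risk - 1)
    return excess / (1 + excess)

# e.g. if 60% of transmitting partners shed CMV and shedding doubled transmission risk
par = population_attributable_risk(prevalence_exposed=0.6, relative_risk=2.0)
print(f"PAR = {par:.0%}")   # 38% of transmissions attributable under these assumed inputs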
abstract_id: PUBMED:11957387
Syphilis epidemic and an increase of the number of HIV infections among homosexual men attending the Amsterdam venereal disease clinic The registered number of cases of early infectious syphilis and of (ano)genital gonorrhoea among the attendees of the outpatient clinic for sexually transmitted diseases of the Amsterdam municipal health service shows a strong increase for both diagnoses in the period 1990-2001, notably in the last few years. Nearly all of this increase is accounted for by homosexual men. Syphilis increased mostly among men aged 35 years and over, gonorrhoea mostly among younger men. The population of older men also showed a distinct increase since 1997 in HIV incidence.
abstract_id: PUBMED:7974077
Detection of human immunodeficiency virus antibody among homosexual men from Bombay. Background And Objectives: In India, heterosexual transmission of HIV-infection is considered to be the major mode of transmission. However, no report is available on transmission of HIV-infection among homosexually active men. The prevalence of human immunodeficiency virus-1 (HIV-1) and human immunodeficiency virus-2 (HIV-2) infections among homosexual men from Bombay is discussed.
Goal Of The Study: To determine the extent of presence of anti-HIV-1, anti-HIV-2 antibodies, or both anti-HIV-1 and anti-HIV-2 antibodies among homosexual men in India.
Study Design: Sixty-three blood samples were collected from two STD clinics of Bombay over a 6-month period from men with a history of homosexual behavior who were asymptomatic for HIV-infection. The mean age of the subjects was 31.6 years. For serological detection anti-HIV-1 antibody ELISA was used as the primary screening test followed by Western blot to confirm the results. For distinction between anti-HIV-1 and anti-HIV-2 antibody, line immunoassay was used. The sexually transmitted diseases (STDs) were diagnosed clinically, although Venereal Disease Research Laboratory (VDRL) tests were carried out as a routine test for screening STDs. For detection of gonorrhea, Gram stains of urethral smear were done routinely.
Results: From the 63 blood samples tested, 10 samples were reactive by ELISA for HIV-1 infection, and three samples were borderline reactive. These three samples were found to be reactive for anti-HIV-2 by the line immunoassay. The above 10 samples were also positive by Western blot for anti-HIV-1 antibody. Two blood samples were positive for both anti-HIV-1 and anti-HIV-2 antibodies. Using clinical diagnosis as the criteria, the different types of STD among the 63 subjects were as follows: condylomata (22), herpes (20), gonorrhea (15), candidiasis (3), and syphilis (3). However with VDRL, seven subjects were found to be reactive. Gram stains indicated gonorrhea in all the 15 subjects.
Conclusions: This study reports for the first time the homosexual transmission of both HIV-1 and HIV-2 infections in India, although heterosexual transmission is still the major mode of transmission of the infection. The associated incidence of STDs among these men, and the fact that a few of these subjects were bisexual, puts them at high risk for transmission of HIV infection.
abstract_id: PUBMED:16472482
Reemergence of infectious syphilis among homosexual men and HIV coinfection in Barcelona, 2002-2003 Background And Objectives: An increase in syphilis infections since the mid 1990s has been documented, especially in homosexual men, in different European and North American cities. We intended to describe the characteristics of newly diagnosed cases of syphilis at the Sexually Transmitted Infections Unit of Barcelona in 2002 and 2003.
Patients And Method: Descriptive analysis of cases with infectious syphilis and multivariate analysis of factors associated with HIV coinfection.
Results: 102 cases were diagnosed with infectious syphilis, 98 males (88 homosexual men). HIV coinfection was present in 34% of cases. Predictive factors of HIV coinfection were age > 30 years (p = 0.003) and having a HIV positive partner (p = 0.044). Clinically, there were no differences between cases coinfected or not with HIV.
Conclusions: There has been a recent increase of syphilis in Barcelona, especially among some core groups of homosexual men with high rates of HIV coinfection.
abstract_id: PUBMED:36630166
HIV Prevalence and Risk Factors Among Young Men Who Have Sex With Men in Southwest China: Cross-sectional Questionnaire Study. Background: Previous studies showed an increase in HIV prevalence among young men who have sex with men aged 25 years or younger in China.
Objective: This study aimed to assess HIV prevalence and associated factors among young men who have sex with men in the Guangxi Zhuang Autonomous Region.
Methods: This study was conducted in 4 cities (Guilin, Liuzhou, Beihai, and Nanning) in the Guangxi Zhuang Autonomous Region between June 2014 and May 2016. Participants were reached through web-based and site recruitment approaches. Laboratory tests were performed to detect HIV and syphilis infections. A self-administered questionnaire was used to collect data from 632 eligible young men who have sex with men.
Results: The prevalence of HIV and syphilis was 9.3% (59/632) and 11.4% (72/632), respectively. Multivariable logistic analysis showed that ethnic minority (adjusted odds ratio [AOR] for Han Chinese vs other minorities 0.28, 95% CI 0.11-0.71, P=.007), receptive sexual positioning in the past 6 months (AOR 2.94, 95% CI 1.32-6.53, P=.008), current syphilis infection (AOR for individuals without vs those with infection 0.38, 95% CI 0.19-0.75, P=.005), inconsistent condom use in the past 6 months (AOR 1.91, 95% CI 1.06-3.45, P=.03), and psychotropic drug use before last anal intercourse (AOR 16.70, 95% CI 2.34-119.18, P=.005) were independently associated with HIV infection.
Conclusions: There is an urgent need to scale up HIV and syphilis interventions in young men who have sex with men. Some subgroups might need specific attention for HIV prevention, including ethnic minority men, individuals with a history of sexually transmitted infections, and individuals who have been engaging in receptive anal sex.
Answer: The increase in sexually transmitted infections (STIs) among young homosexual men in Amsterdam has been observed, particularly with syphilis and gonorrhea, while HIV incidence has remained stable (PUBMED:15681720). This discrepancy raises questions about the reliability of STIs as markers for HIV transmission. Historically, trends in HIV and STI incidence were concurrent until 1995, but since then, STIs have increased without a corresponding rise in HIV incidence among the young homosexual population (PUBMED:15681720). This suggests that while STIs can indicate potential for HIV spread, they may not be as reliable as once thought in predicting HIV transmission trends.
The increase in STIs without a corresponding increase in HIV incidence could be due to several factors, including the effectiveness of HIV prevention strategies, changes in sexual behavior, or the use of antiretroviral therapy (ART). For instance, HIV-positive homosexual men on HAART have been found to exhibit lower behavioral and clinical risk factors for HIV transmission compared to those not on HAART (PUBMED:12576605). Additionally, the use of ART in almost half of HIV-infected homosexual/bisexual men is compatible with continuing HIV transmission among those at high behavioral risk, indicating that ART alone may not be sufficient to prevent new infections (PUBMED:11467016).
Moreover, other studies have shown that HIV incidence is on the rise among older homosexual men attending an STD clinic, with a significant proportion of recently infected individuals unaware of their infection and having concurrent STDs (PUBMED:12131206). This highlights the importance of tracing recently infected individuals who can play a key role in the spread of HIV.
In conclusion, while STIs remain important indicators of sexual health and potential HIV risk, they may not be as reliable as direct markers for HIV transmission, especially in the context of effective HIV prevention and treatment strategies. The relationship between STI incidence and HIV transmission continues to be a subject for debate, and it is imperative to consider multiple factors, including age, sexual practices, and the use of ART, when assessing HIV risk among young homosexual men (PUBMED:15681720).
Instruction: Is preoperative radiotherapy suitable for all patients with primary soft tissue sarcoma of the limbs?
Abstracts:
abstract_id: PUBMED:25130960
Is preoperative radiotherapy suitable for all patients with primary soft tissue sarcoma of the limbs? Aim: To evaluate the indications and results of preoperative radiotherapy (RT) on a series of selected patients treated at our institution with curative intent for a limb sarcoma (STS).
Patients And Methods: From 05/1993 to 12/2011, 64 STS patients received preoperative RT.
Results: RT was delivered as a "limb salvage treatment" prior to surgery for the following reasons: as the preferential induction treatment in 53 patients (83%) or as a second intent (17%) after the failure of neoadjuvant systemic chemotherapy/isolated limb perfusion. Surgery was performed after RT in 54 (84%) patients and final limb salvage was performed in 98%. Musculo-cutaneous flap reconstruction was planned upfront in 44% patients, and 19% had a skin graft. Seven patients (13%) had a postoperative RT boost. Thirteen (20%) patients had grade (G) 3/4 adverse events, one after RT and 12 after surgery. At a median follow-up of 3.5 years, the 3-year actuarial overall survival (OS) and distant relapse (DR) rates were 83% and 31%, respectively. Two patients developed a local relapse and two a local progression (non-operated patients). In the multivariate analysis (MVA), histological subtype (leiomyosarcoma) and grade 3 were predictive of poorer survival. Patients with >3 month delay between the start of RT and surgery at our institution had an increased risk of DR in the MVA.
Conclusion: Induction RT should be personalised according to histological subtype, tumour site and risks-benefit ratio of preoperative radiotherapy and is best managed by a multidisciplinary surgical and oncology team in a specialist sarcoma centre.
abstract_id: PUBMED:34198676
Efficacy and Safety of Hypofractionated Preoperative Radiotherapy for Primary Locally Advanced Soft Tissue Sarcomas of Limbs or Trunk Wall. Background: The use of adjuvant radiotherapy (RT) shows a significantly decreased incidence of local recurrence (LR) in soft tissue sarcomas (STS). This study aimed to assess the treatment scheme's effect in patients with primary STS treated at one institution.
Methods: In this phase 2 trial, 311 patients aged ≥18 years with primary, locally advanced STS of the extremity or trunk wall were assigned to multimodal therapy conducted at one institution. The preoperative RT scheme consisted of 5 Gy per fraction for a total dose of 25 Gy. Surgery was performed within 2-4 days from the last day of RT. The primary endpoint was LR-free survival (LRFS). Adverse events of the treatment were assessed.
Results: We included 311 patients with primary locally advanced STS. The median tumor size was 11 cm. In total, 258 patients (83%) had high-grade tumors. In 260 patients (83.6%), clear surgical margins (R0) were obtained. Ninety-six patients (30.8%) had at least one type of treatment adverse event. LR was observed in 13.8% patients. The 5-year overall survival was 63%.
Conclusion: In this group, with a significant percentage of patients with extensive, high-grade STS, hypofractionated preoperative RT was associated with good local control and tolerance.
abstract_id: PUBMED:25282099
Preoperative hypofractionated radiotherapy in the treatment of localized soft tissue sarcomas. Background: The primary treatment of soft tissue sarcomas (STS) is a radical resection of the tumor with adjuvant radiotherapy. Conventional fractionation of preoperative radiotherapy is 50 Gy in fraction of 2 Gy a day. The purpose of the conducted study was to assess the efficacy and safety of hypofractionated radiotherapy in preoperative setting in STS patients.
Methods: 272 patients participated in this prospective study conducted from 2006 to 2011. Tumors were localized on the extremities or trunk wall. Median tumor size was 8.5 cm; 42% of the patients had a tumor larger than 10 cm, and 170 patients (64.6%) had high-grade (G3) tumors. 167 patients (61.4%) had primary tumors. Patients were treated with preoperative radiotherapy for five consecutive days at 5 Gy per fraction, followed by immediate surgery. Median follow-up was 35 months.
Results: 79 patients died at the time of the analysis, the 3-year overall survival was 72%. Local recurrences were observed in 19.1 % of the patients. Factors that had a significant adverse impact on local recurrence were tumor size of 10 cm or more and G3 grade. 114 patients (42%) had any kind of treatment toxicity, vast majority with tumors located on lower limbs. 7% (21) of the patients required surgery for treatment of the complications.
Conclusion: In this non-selected group of locally advanced STS use of hypofractionated preoperative radiotherapy was associated with similar local control (81%) when compared to previously published studies. The early toxicity is tolerable, with small rate of late complications. Presented results warrant further evaluation.
abstract_id: PUBMED:12103287
Preoperative versus postoperative radiotherapy in soft-tissue sarcoma of the limbs: a randomised trial. Background: External-beam radiotherapy (delivered either preoperatively or postoperatively) is frequently used in local management of sarcomas in the soft tissue of limbs, but the two approaches differ substantially in their potential toxic effects. We aimed to determine whether the timing of external-beam radiotherapy affected the number of wound healing complications in soft-tissue sarcoma in the limbs of adults.
Methods: After stratification by tumour size (< or = 10 cm or >10 cm), we randomly allocated 94 patients to preoperative radiotherapy (50 Gy in 25 fractions) and 96 to postoperative radiotherapy (66 Gy in 33 fractions). The primary endpoint was rate of wound complications within 120 days of surgery. Analyses were per protocol for primary outcomes and by intention to treat for secondary outcomes.
Findings: Median follow-up was 3.3 years (range 0.27-5.6). Four patients, all in the preoperative group, did not undergo protocol surgery and were not evaluable for the primary outcome. Of those patients who were eligible and evaluable, wound complications were recorded in 31 (35%) of 88 in the preoperative group and 16 (17%) of 94 in the postoperative group (difference 18% [95% CI 5-30], p=0.01). Tumour size and anatomical site were also significant risk factors in multivariate analysis. Overall survival was slightly better in patients who had preoperative radiotherapy than in those who had postoperative treatment (p=0.0481).
Interpretation: Because preoperative radiotherapy is associated with a greater risk of wound complications than postoperative radiotherapy, the choice of regimen for patients with soft-tissue sarcoma should take into account the timing of surgery and radiotherapy, and the size and anatomical site of the tumour.
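The headline result, a wound-complication difference of 18% with a 95% CI of 5-30, can be approximately reproduced from the raw counts with a normal-approximation interval for a difference in proportions. The sketch below is a sanity check under that assumption; the trial's own interval may have been computed with a slightly different method.

import math

# Wound complications: 31/88 with preoperative RT vs 16/94 with postoperative RT
p1, n1 = 31 / 88, 88
p2, n2 = 16 / 94, 94

diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.0%}, 95% CI {low:.0%} to {high:.0%}")
# prints roughly 18%, 6% to 31%; close to the reported 18% (5-30)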
abstract_id: PUBMED:22154883
Value of PET scan in patients with retroperitoneal sarcoma treated with preoperative radiotherapy. Purpose: Preoperative radiotherapy provides advantages in the management of retroperitoneal sarcoma (RPS). We describe our experience treating a cohort who underwent pre- and post-radiotherapy functional imaging with FDG-PET scan.
Methods And Materials: Consecutive patients presenting between January 1999 and December 2009 with a diagnosis of either primary or recurrent RPS were identified from the hospital patient record database using ICD codes, and cross-referenced with the completed radiotherapy course database. Those patients suitable for preoperative radiotherapy and surgery who underwent both pre- and post-radiotherapy FDG-PET were included. Exclusions included presence of metastatic disease, age under 18 years and/or paediatric histology, and treatment with palliative intent.
Results: Eleven patients were included, of whom six were male. Median age was 63 years (range, 38-78 years). The majority of patients had Stage T2b, high-grade disease. Ten patients were treated at initial presentation and one at first local recurrence. A malignant diagnosis was confirmed in all patients who underwent CT-guided core biopsy; a diagnosis of sarcoma was reached in 91%. Sensitivity of FDG-PET imaging was 100%. Metabolic partial or complete response did not correlate with change in tumour size, nor pathological response assessment. Pulmonary and hepatic metastatic disease was detected in one patient on post-treatment imaging. All patients in the cohort completed preoperative radiotherapy. There was no grade 3 or 4 toxicity. Sixty-four percent proceeded to radical resection. Complete macroscopic excision was achieved in all cases. There was no perioperative mortality.
Conclusion: Combined therapy with preoperative radiotherapy and surgery has acceptable levels of toxicity. CT-guided core biopsy is an accurate means of confirming a diagnosis of RPS prior to definitive treatment. Utility of PET scan in the management of RPS is evolving and further investigation is warranted.
abstract_id: PUBMED:34246458
Preoperative versus postoperative radiotherapy in soft tissue sarcomas: State of the art and perspectives Radiation therapy is a standard treatment for limbs soft tissue sarcomas. Preoperative versus postoperative radiotherapy has been a controversial topic for years. With preoperative irradiation, the treatment volume is more limited, the delivered dose possibly lower and the tumor volume easier to delimit. Only one randomized trial compared these two irradiation sequences. The results in terms of local control and survival were equivalent but the risk of acute postoperative complications was higher if irradiation was administered before surgery. However, in the latest update of this trial, patients who received adjuvant irradiation exhibited more severe late toxicity than those treated preoperatively. In addition, with modern irradiation techniques such as conformal with image-guided intensity modulated radiotherapy and flap coverage techniques, the incidence of complications after preoperative irradiation were lower than historically published rates. Locally advanced proximal sarcomas and the failure of other neoadjuvant treatments are nowadays classical indications for preoperative irradiation. As with other neoadjuvant treatments, induction radiotherapy must be personalized according to the histological subtype, the tumor site and the benefit/risk ratio, which is best appreciated by a multidisciplinary surgical and oncological team in a specialized center in the management of soft-tissue sarcomas.
abstract_id: PUBMED:14612625
Preoperative radiotherapy is effective in the treatment of fibromatosis. The use of preoperative radiation is well-established for soft tissue sarcoma, but its use in fibromatosis is not well-characterized. The purpose of this study was to examine the impact of preoperative radiotherapy on the local control of fibromatosis and to assess treatment-related morbidity in this setting. In particular we assessed complication rates in comparison with soft tissue sarcoma treatment. All patients with fibromatosis referred to this unit who received preoperative radiotherapy (50 Gy in 25 fractions) from 1988 to 2000 and who had at least 2 years of followup were included in this study. The rate of recurrence in this group was ascertained. Similarly constructed datasets from all patients with soft tissue sarcomas of the extremities who received preoperative radiation from 1986 to 1997 also were analyzed. The rates of complications in the two groups were compared. Fifty-eight patients were treated with preoperative radiation for fibromatosis and the median followup was 69 months. There were 11 local recurrences (19%). Major wound complications manifested in two patients (3.4%). Wound-related complications arose in 89 of 265 patients with soft tissue sarcomas (33.5%). There was a significant difference in the rate of major wound complications observed in the two groups. The use of radiotherapy before surgery is effective in the combined treatment of fibromatosis.
abstract_id: PUBMED:1736977
Preoperative radiotherapy for initially inoperable extremity soft tissue sarcomas. The results and complications of a combination of preoperative radiotherapy and surgery in the treatment of 70 patients with large or fixed extremity soft tissue sarcomas are reported. Sixty-one patients were referred with a primary tumour and 9 had recurrences. Thirty-three patients had tumours in the thigh and 38 tumours were fixed to neighbouring structures. The mean preoperative dose was 53 Gy (range 21-75). Eleven patients received a postoperative boost to tumour site. Four patients received preoperative intra-arterial Adriamycin. Overall, 42 patients (60%) responded to the radiotherapy, 4 with complete tumour resolution. Eighty per cent of those receiving greater than or equal to 60 Gy responded and a significant correlation between 2 Gy equivalent dose and response was demonstrated (P less than 0.005). The degree of tumour necrosis was increased in 23 of 52 evaluable patients following radiotherapy, although there was no correlation with dose or clinical response. There have been eight local recurrences and 17 deaths after a median follow-up of 2 years. Tumour size less than 10 cm was the only significant factor in the development of local recurrence (P = 0.04). Thirty-six patients developed immediate postoperative complications: 9 major (13%), 13 moderate (19%) and 14 minor (20%). Increasing patient age was the only significant independent factor for the development of complications (P = 0.015). Preoperative radiotherapy will usually permit limb conservation of extremity sarcomas which otherwise would be inoperable or require amputation. However, the increased incidence of wound complications in older patients demands meticulous technique.
abstract_id: PUBMED:34821137
Hypofractionated preoperative radiotherapy for high risk soft tissue sarcomas in a geriatric patient population. Background: Standard therapy for localised, resectable high risk soft tissue sarcomas consists of wide excision and radiotherapy over several weeks. This treatment schedule is hardly feasible in geriatric and frail patients. In order not to withhold radiotherapy from these patients, hypofractionated radiotherapy with 25 Gy in 5 fractions was evaluated in a geriatric patient population.
Patients And Methods: A retrospective analysis was performed of 18 geriatric patients with resectable high risk soft tissue sarcomas of extremities and thoracic wall. Wound healing and short term oncologic outcome were analysed. In addition, dose constraints for radiotherapy of the extremities were transferred from normofractionated to hypofractionated radiotherapy regimens.
Results: Feasibility was good with 17/18 patients completing treatment as planned. Wound healing complication rate was in the range of published data. Two patients developed local and distant recurrence, two patients isolated distant recurrences. No isolated local recurrences were observed. Keeping the constraints was possible in all cases without compromising the coverage of the target volume.
Conclusions: Hypofractionated radiotherapy and surgery was well tolerated even in this specific patient population. With feasibility concerning early wound healing problems and adapted constraints, which allow for the treatment of most resectable extremity tumours, the concept warrants further evaluation in patients unfit for standard radiotherapy.
abstract_id: PUBMED:23504279
Volume changes in soft tissue sarcomas during preoperative radiotherapy of extremities evaluated using cone-beam CT. Objective: The objective of this study is to quantify volume changes in the gross target volume (GTV) during preoperative radiotherapy for extremity soft tissue sarcomas (ESTS).
Methods: Twenty-seven patients with ESTS, treated with preoperative radiotherapy, were included in this study. Weekly cone-beam CT scans acquired for setup correction were used for GTV delineation in order to quantify volume changes over the course of treatment. Age, anatomical location, tumour type and tumour volume were evaluated as predictive factors for volume changes. Finally, the optimal time point for adaptive intervention was quantified.
Results: A GTV increase to a maximum of 28% occurred in five patients. Thirteen patients showed no change and nine patients (all diagnosed with myxoid liposarcoma (MLS)) showed a GTV decrease to a maximum of 57% of the GTV volume at the start of treatment. In the multivariate analysis, only the relative volume change for tumour type was significant (p = 0.001). The optimal time point for adaptive intervention in non-MLS patients was the first week and for MLS patients the third week.
Conclusions: Volume changes were quantified during preoperative RT of ESTS. Volume decrease was observed only in MLS patients. Individualised treatment resulting in plan adaptations could result in a clinically useful volume reduction for MLS patients.
Answer: Preoperative radiotherapy is not universally suitable for all patients with primary soft tissue sarcoma of the limbs. The decision to use preoperative radiotherapy should be personalized according to the histological subtype, tumor site, and the risk-benefit ratio of the treatment. It is best managed by a multidisciplinary surgical and oncology team in a specialist sarcoma center (PUBMED:25130960).
While preoperative radiotherapy has been associated with good local control and tolerance in patients with extensive, high-grade soft tissue sarcomas (PUBMED:34198676; PUBMED:25282099), it is also associated with a greater risk of wound complications compared to postoperative radiotherapy (PUBMED:12103287). The choice of regimen should take into account the timing of surgery and radiotherapy, as well as the size and anatomical site of the tumor.
In certain cases, such as with fibromatosis, preoperative radiotherapy has been shown to be effective in local control and has a lower rate of major wound complications compared to soft tissue sarcomas (PUBMED:14612625). However, in older patients, the increased incidence of wound complications demands meticulous technique (PUBMED:1736977).
Hypofractionated preoperative radiotherapy has been evaluated in a geriatric patient population and was found to be well tolerated, with wound healing complication rates within the range of published data (PUBMED:34821137). Volume changes during preoperative radiotherapy have also been observed, particularly in patients with myxoid liposarcoma, where individualized treatment resulting in plan adaptations could be clinically useful (PUBMED:23504279).
In summary, while preoperative radiotherapy can be beneficial for some patients with primary soft tissue sarcoma of the limbs, it is not suitable for all patients and should be carefully considered on a case-by-case basis by a specialized team.
Instruction: Can we cure atrial flutter with radiofrequency ablation in an hour?
Abstracts:
abstract_id: PUBMED:16623275
Can we cure atrial flutter with radiofrequency ablation in an hour? Background: Radiofrequency ablation of common atrial flutter requires the creation of a complete transmural ablation line across cavotricuspid region to achieve bidirectional conduction block. Irrigated tip catheters facilitate rapid achievement of this block by creation larger and deeper lesions. The EASTHER registry was organized to collect data about the efficacy of the procedure in small and middle volume centres in Central and Eastern Europe, all using THERMOCOOL catheter technology.
Methods: EASTHER is a prospective registry (April 2002-February 2003). 133 consecutive patients (81.1% male, age 59.0 +/- 10.4 years, range 30-81 years) with common atrial flutter were enrolled. Coexisting atypical flutter was observed in 2.7%. Patients had a history of flutter of 31.0 +/- 53.6 months (range 1-403) and concomitant atrial fibrillation was observed in 42.9%. Structural heart disease was present in 38.9%. The proportion of re-ablated cases was 14%. RF energy was applied for 60 sec in power-controlled mode at settings between 40 and 50 W with an average flow rate of 19.0 ml/min.
Results: Acute success, defined as bi-directional block, was achieved in 93.1%, although 94.7% of cases were assessed as successful by the treating electrophysiologist. The average number of RF applications was 12.0 +/- 7.0 (range 2-40) per procedure. Average delivered power varied between a minimum of 36.1 +/- 15.1 W and a maximum of 45.3 +/- 13.0 W, while the average maximum temperature observed at the same time varied between 39.0 +/- 3.4 degrees C and 45.4 +/- 4.0 degrees C. Total procedure time was 100.1 +/- 42.7 min (range 20-280 min) and fluoroscopy time was 15.8 +/- 9.6 min (range 4-45 min). In the comparable French TC registry, average total and fluoroscopy times were 46.4 +/- 33.6 min and 10.0 +/- 6.8 min, respectively. In the Middle European centres, total and fluoroscopy times were 96.1 +/- 40.9 min and 15.0 +/- 8.9 min, respectively; in centres from Eastern Europe they were 120.3 +/- 51.2 min and 20.4 +/- 11.9 min, respectively. Two adverse events were reported; both patients had strong chest pain during ablation. These results are comparable with published literature data.
Conclusions: Irrigated tip catheters are effective and safe in ablation of common atrial flutter. This technology helps to accelerate and facilitate the achievement of bi-directional isthmus block. Most procedures were completed within one hour in experienced centres in France as early as 2002. Procedures not exceeding one hour should also become feasible in centres of Middle and Eastern Europe as this method spreads as the first-choice treatment and experience accumulates.
abstract_id: PUBMED:9483231
Radiofrequency ablation for cure of atrial flutter. Background: Atrial flutter is a common arrhythmia which frequently recurs after cardioversion and is relatively difficult to control with antiarrhythmic agents.
Aims: To evaluate the success rate, recurrence rate and safety of radiofrequency (RF) ablation for atrial flutter in a consecutive series of patients with drug-refractory chronic or paroxysmal forms of the arrhythmia.
Methods: Electrophysiologic evaluation of atrial flutter included activation mapping with a 20 electrode halo catheter placed around the tricuspid annulus and entrainment mapping from within the low right atrial isthmus. After confirmation of the arrhythmia mechanism with these techniques, an anatomic approach was used to create a linear lesion between the inferior tricuspid annulus and the eustachian ridge at the anterior margin of the inferior vena cava. In order to demonstrate successful ablation, mapping techniques were employed to show that bi-directional conduction block was present in the low right atrial isthmus.
Results: Successful ablation was achieved in 26/27 patients (96%). In one patient with a grossly enlarged right atrium, isthmus block could not be achieved. Of the 26 patients with successful ablation, there has been one recurrence of typical flutter (4%) during a mean follow-up period of 5.5 +/- 2.7 months. This patient underwent a successful repeat ablation procedure. Of eight patients with documented clinical atrial fibrillation (in addition to atrial flutter) prior to the procedure, five continued to have atrial fibrillation following the ablation. There were no procedural complications and all patients had normal AV conduction at the completion of the ablation.
Conclusions: RF ablation is a highly effective and safe procedure for cure of atrial flutter. In patients with chronic or recurrent forms of atrial flutter RF ablation should be considered as a first line therapeutic option.
abstract_id: PUBMED:38239309
Cryoballoon ablation of peri-mitral atrial flutter refractory to radiofrequency ablation: a case report. Background: The radiofrequency catheter ablation of peri-mitral atrial flutter is occasionally difficult, mostly due to epicardial or intramural conduction on the mitral isthmus (MI). However, cryoballoon ablation (CBA) of peri-mitral atrial flutter refractory to radiofrequency ablation has not been reported.
Case Summary: We report a case of a 66-year-old male patient who experienced a recurrence of atypical atrial flutter and underwent the sixth catheter ablation. The activation and entrainment maps showed that this atypical atrial flutter (AFL) was peri-mitral AFL via pathways other than endocardial conduction in the MI. Previous radiofrequency catheter ablation attempts on the MI line, including endocardial, coronary sinus, and epicardial ablations, failed to achieve a bidirectional block of the MI. In this case, we selected CBA for the MI area and successfully achieved a bidirectional block of the MI.
Discussion: Although using CBA in the MI is off-label, it could be safely implemented using CARTOUNIVU™. We attributed the success of the bidirectional block of the MI in this case to the crimping of the northern hemisphere of the CBA to the mitral isthmus area, which resulted in the formation of a broad, uniform, and deep ablation lesion site.
abstract_id: PUBMED:29042948
The efficacy of radiofrequency ablation in the treatment of pediatric arrhythmia and its effects on serum IL-6 and hs-CRP. The aim of this study was to investigate the efficacy of radiofrequency ablation in the treatment of pediatric arrhythmia and to assess the changes in serum interleukin-6 (IL-6) and hs-CRP levels after treatment. One hundred and six children with tachyarrhythmia who were admitted to Xuzhou Children's Hospital from November 2014 to December 2015 were recruited for the study. The efficacies of radiofrequency in the treatment of different types of arrhythmia were analyzed. Successful ablation was found in 104 cases (98.11%) and recurrence was found in 7 cases (6.73%). Among 62 cases of atrioventricular reentrant tachycardia (AVRT), successful ablation was found in 60 cases (96.77%) and recurrence was found in 3 cases (4.84%). Among 33 cases of atrioventricular nodal reentrant tachycardia (AVNRT), successful ablation was found in 33 cases (100%) and recurrence was found in 2 cases (6.06%). Among 5 cases of ventricular tachycardia (VT), successful ablation was found in 5 cases (100%) and no recurrence was found. Among 4 cases of atrial tachycardia (AT), successful ablation was found in 4 cases (100%) and recurrence was found in 1 case (25%). Among 2 cases of atrial flutter (AFL), successful ablation was found in both (100%) and recurrence was found in 1 case (50%). After the operation, the levels of IL-6 and hs-CRP were increased and continued to rise within 6 h after operation. The levels of IL-6 and hs-CRP at 24 h after operation were reduced but still higher than preoperative levels. The duration of radiofrequency and ablation energy were positively correlated with the levels of IL-6 and hs-CRP, while the number of discharges was not significantly correlated with either. In conclusion, radiofrequency ablation is a safe and effective treatment for pediatric arrhythmia. Postoperative monitoring of IL-6 and hs-CRP levels is conducive to understanding postoperative myocardial injury and inflammatory response.
abstract_id: PUBMED:33024468
Optimal local impedance drops for an effective radiofrequency ablation during cavo-tricuspid isthmus ablation. Purpose: A novel ablation catheter capable of local impedance (LI) monitoring (IntellaNav MiFi OI, Boston Scientific) has been recently introduced to clinical practice. We aimed to determine the optimal LI drops for an effective radiofrequency ablation during cavo-tricuspid isthmus (CTI) ablation.
Methods: This retrospective observational study enrolled 50 consecutive patients (68 ± 9 years; 34 males) who underwent a CTI ablation using the IntellaNav MiFi OI catheter, guided by Rhythmia. The LI at the start of radiofrequency applications (initial LI) and minimum LI during radiofrequency applications were evaluated. The absolute and percentage LI drops were defined as the difference between the initial and minimum LIs and 100× absolute LI drop/initial LI, respectively.
Results: A total of 518 radiofrequency applications were analyzed. The absolute and percentage LI drops were significantly greater at effective ablation sites than ineffective sites (median, 15 ohms vs 8 ohms, P < .0001; median, 14.7% vs 8.3%, P < .0001). A receiver-operating characteristic analysis demonstrated that at optimal cutoffs of 12 ohms and 11.6% for the absolute and percentage LI drops, the sensitivity and specificity for predicting the effectiveness of the ablation were 66.5% and 88.2%, and 65.1% and 88.2%, respectively. Finally, bidirectional conduction block along the CTI was achieved in all patients.
Conclusions: During the LI-guided CTI ablation, the effective RF ablation sites exhibited significantly greater absolute and percentage LI drops than the ineffective RF ablation sites. Absolute and percentage LI drops of 12 ohms and 11.6% may be suitable targets for effective ablation.
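As a brief aside, the two impedance metrics defined in the Methods above can be written compactly. This is only a restatement of the definitions reported in PUBMED:33024468, with LI_init and LI_min denoting the local impedance at the start of the radiofrequency application and the minimum during the application:

    \Delta LI_{abs} = LI_{init} - LI_{min}, \qquad \Delta LI_{\%} = 100 \times \frac{LI_{init} - LI_{min}}{LI_{init}}

The study's reported optimal cutoffs for predicting an effective lesion were 12 ohms and 11.6%, respectively.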
abstract_id: PUBMED:11933536
Biochemical markers of myocardial damage after high-energy radiofrequency ablation of atrial flutter. Value of troponin I. Creatinine phosphokinase and its MB iso-enzyme do not allow assessment of the degree of tissue necrosis after radiofrequency ablation. Cardiac Troponin I and myoglobin, new markers of myocardial lesions, are rarely used in this indication. The aim of this prospective study was to measure and compare serum markers of myocardial damage after high energy radiofrequency ablation of atrial flutter with an 8 mm distal electrode catheter. The authors measured serum cardiac Troponin I, myoglobin, creatinine phosphokinase and its MB iso-enzyme levels before and 4, 12 and 24 hours after radiofrequency ablation of common atrial flutter in 23 consecutive patients. The same markers were also measured in a control group of 9 patients undergoing electrophysiological investigation without radiofrequency ablation. All ablation procedures were simple with an average of 12.6 +/- 6 applications of radiofrequency. Bidirectional isthmic block was obtained in 22 of the 23 patients. The mean Troponin I levels were 0.01 microgram/l before ablation, 0.87 +/- 0.77 at the 4th hour (p < 0.001 versus control), 1.16 +/- 1.2 at the 12th hour (p < 0.001 versus control) and 0.7 +/- 0.63 microgram/l at the 24th hour (p < 0.001 versus control) after ablation. Only 13% of patients had cardiac troponin levels greater than the threshold of significant myocardial damage (> 2 micrograms/l) with a higher average number of radiofrequency applications than the rest of the group: 15.2 +/- 1 versus 11.5 +/- 5.1 (p < 0.05). An abnormally high level of markers was found in the ablation group for 19 patients (84%) with Troponin I (> 0.4 microgram/l), for 10 patients (43%) with the MB iso-enzyme (> 8 IU/L), and for 1 patient (4%) with myoglobin (> 90 micrograms/l), and in no patient for creatinine phosphokinase (> 290 IU/L). All values were normal in the control group. The authors conclude that cardiac Troponin I is the most sensitive marker for myocardial cellular damage after high energy radiofrequency ablation of atrial flutter. The level of cardiac Troponin I seems to correlate with the number of applications of radiofrequency.
abstract_id: PUBMED:22259261
Radiofrequency catheter ablation in children with supraventricular tachycardias: intermediate term follow up results. The Purpose Of The Study: Radiofrequency (RF) catheter ablation represents an important advance in the management of children with cardiac arrhythmias and has rapidly become the standard and effective line of therapy for supraventricular tachycardias (SVTs) in pediatrics. The purpose of this study was to evaluate the intermediate term follow up results of radiofrequency catheter ablation in treatment of SVT in pediatric age group.
Methods: A total of 60 pediatric patients (mean age 12.4 ± 5.3 years, range 3-18 years; male:female = 37:23; mean body weight 32.02 ± 12.3 kg, range 14-60 kg) with clinically documented SVT underwent an electrophysiologic study (EPS) and RF catheter ablation at Children's Hospital, Mansoura University, Mansoura, Egypt, during the period from January 2008 to December 2009, and they were followed up until October 2011.
Results: The arrhythmias included atrioventricular reentrant tachycardia (AVRT; n = 45, 75%), atrioventricular nodal reentrant tachycardia (AVNRT; n = 6, 10%), and atrial tachycardia (AT; n = 9, 15%). The success rate of RF catheter ablation was 93.3% for AVRT, 66.7% for AVNRT, and 77.8% for AT. Procedure-related complications were infrequent (7/60, 11.7%): atrial flutter during RF catheter ablation (4/60, 6.6%), ventricular fibrillation during RF catheter ablation (1/60, 1.6%), and transient complete heart block during RF catheter ablation (2/60, 3.3%). The recurrence rate was 8.3% (5/60) during a follow-up period of 34 ± 12 months.
Conclusion: RF catheter ablation is an effective and safe method to manage children with SVT.
abstract_id: PUBMED:33146479
Outcome of the elective or online radiofrequency ablation of typical atrial flutter. Background: Radiofrequency ablation of the cavotricuspid isthmus is currently the first-choice treatment of typical atrial flutter and usually it is performed electively. The purpose of this study was to see whether performing on-line ablation has similar clinical results compared to the conventional strategy.
Methods: Consecutive patients (465) who underwent ablation of the cavotricuspid isthmus for typical atrial flutter (AFL) at our electrophysiology laboratory in the 2008-2017 decade were studied. We evaluated the acute and long-term clinical outcomes of those who were treated electively (337) compared to those who had online ablation (128), that is, within 24 hours of presenting to the Department of Cardiology. In patients treated on an emergency basis, a transesophageal echocardiogram was performed when needed to rule out atrial thrombi.
Results: No significant intraprocedural difference was observed between the 2 patient groups, with comparable acute electrophysiological success (99% vs. 98%) and serious complications. Even at the subsequent 4-year follow-up, there were no significant differences in the recurrence of typical AFL, onset of atrial fibrillation and other clinical events.
Conclusions: Online ablation of typical atrial flutter performed at the time of the clinical presentation of the arrhythmia, was shown to be comparable in terms of procedural safety and clinical efficacy in the short and long term compared to an elective ablation strategy.
abstract_id: PUBMED:11265797
Radiofrequency ablation: a cure for tachyarrhythmias. Radiofrequency (RF) ablation is a new modality for permanently curing patients with various tachycardias using radiofrequency energy, a technique that evolved over the past decade. RF ablation was performed on 913 patients with different tachyarrhythmias from April 1994 to July 1999. There were 491 men and 422 women aged 42 +/- 34 years (range 1 to 76 years). Supraventricular tachycardia (SVT) was present in 462 patients, accessory pathway mediated atrioventricular re-entrant tachycardia (AVRT) in 355 patients (377 accessory pathways) and idiopathic ventricular tachycardia (VT) in 96 patients. Amongst the patients with SVT, 402 had atrioventricular nodal re-entrant tachycardia (AVNRT), 22 had atrial flutter, 20 had ectopic atrial tachycardia and 18 had atrial fibrillation. RF successfully abolished the tachycardia in 400/402 patients (99.5%) with AVNRT, 330/377 (87.5%) accessory pathways in patients with AVRT, 14/22 patients (63.6%) with atrial flutter, 18/20 patients (90%) with atrial tachycardia and 79/96 patients (82.3%) with idiopathic VT. Successful AV nodal ablation with pacemaker implantation was done in 10/18 patients with chronic atrial fibrillation with fast ventricular rate and tachycardia induced cardiomyopathy. AV nodal modulation for atrial fibrillation was tried in the remaining 8 patients and was successful in 4 (4/8). The overall success rate for all arrhythmias was 93.6%, and there was no mortality. At a follow-up of 6.8 +/- 5.4 months, there was a recurrence in 34/420 patients (8%), in whom successful re-ablation was performed. One patient with AVNRT and another with a parahisian pathway developed complete heart block and were given pacemakers. One patient developed inferior wall infarction the day after RF. There were 4 patients who had pericardial tamponade necessitating pericardiocentesis and 2 patients developed deep vein thrombosis, which was treated conservatively. Thus RF ablation is an effective, safe and curative therapy for various arrhythmias.
abstract_id: PUBMED:31057797
Fatal esophageal-pericardial fistula as a complication of radiofrequency catheter ablation. The clinical role of catheter ablation using radiofrequency or cryothermal energy has become an important therapy in the management of patients with recurrent or persistent tachyarrhythmia that is refractory to medical therapy. It is regarded as a safe and reliable procedure and is performed routinely in health care facilities across the country. Like all procedures, there are associated risks and benefits. Development of an esophageal-atrial fistula is a rare but often-fatal complication of radiofrequency ablation. It is the second most frequent cause of death caused by the procedure, with mortality rates in excess of 70%. Death usually occurs as a result of cerebral or myocardial air embolism, endocarditis, massive gastrointestinal bleeding, and/or septic shock. Electrophysiologists have instituted a number of safeguard techniques to diminish the risk of developing esophageal-atrial fistula. Despite these measurements, instances of fistulous development still occur. Herein, we report a case of a 74-year-old male who presented with chest pain secondary to esophageal-pericardial fistula 19 days after pulmonary vein isolation using radiofrequency energy for atrial fibrillation in order to illustrate the clinical variability and diagnostic challenges associated with this dreaded gastrointestinal complication.
Answer: Yes, it is possible to cure atrial flutter with radiofrequency ablation within an hour. The EASTHER registry, which collected data from small and middle volume centers in Central and Eastern Europe using THERMOCOOL catheter technology, reported that most procedures were completed within one hour in experienced centers in France as early as 2002. The registry noted that procedures not exceeding one hour are feasible as this method spreads and experience grows in centers of Middle and Eastern Europe (PUBMED:16623275). Additionally, a study evaluating the success rate, recurrence rate, and safety of radiofrequency ablation for atrial flutter in a consecutive series of patients found that successful ablation was achieved in 96% of patients, suggesting that radiofrequency ablation is a highly effective and safe procedure for the cure of atrial flutter (PUBMED:9483231).
However, it is important to note that the total procedure time can vary depending on the center's experience and the complexity of the case. For example, in the EASTHER registry, the total procedure time was 100.1 +/- 42.7 minutes, with fluoroscopy time being 15.8 +/- 9.6 minutes, indicating that while procedures within an hour are possible, they may take longer in some instances (PUBMED:16623275).
Moreover, the success of the procedure also depends on achieving a bidirectional conduction block, which can be facilitated by using irrigated tip catheters that create larger and deeper lesions (PUBMED:16623275). The efficacy of radiofrequency ablation has also been demonstrated in pediatric arrhythmia, with a high success rate and a positive impact on serum inflammatory markers post-operation (PUBMED:29042948).
In conclusion, while it is feasible to cure atrial flutter with radiofrequency ablation within an hour, especially in experienced centers, the actual duration of the procedure may vary based on several factors, including the center's experience and the specific characteristics of the patient's condition. |
Instruction: Ischaemia imaging in type 2 diabetic kidney transplant candidates--is coronary angiography essential?
Abstracts:
abstract_id: PUBMED:17550928
Ischaemia imaging in type 2 diabetic kidney transplant candidates--is coronary angiography essential? Background: Coronary artery disease (CAD) remains the leading cause of death in type 2 diabetes mellitus (DM) patients undergoing renal transplantation. There is a high prevalence of silent CAD in these patients. Controversy exists regarding the role of dobutamine stress echocardiography (DSE) in detection of CAD. Our purpose was to compare DSE with coronary angiography (CA) for the detection of CAD in type 2 diabetic patients undergoing evaluation for renal transplantation.
Methods: Forty (36 male, four female) type 2 diabetic patients with end-stage renal disease (ESRD) were subjected to DSE followed by CA as a part of their pre-renal transplant evaluation. The ability of DSE to predict 70% stenosis in one or more coronary arteries as determined by CA was evaluated. Mean age of the patients was 49.2 +/- 5 years (range 39-60 years).
Results: DSE was positive in 10 (25%) patients, while 19 patients (48%) had a more than 70% lesion in at least one epicardial vessel on CA (six patients had single vessel, three had double vessel and 10 had triple vessel disease). The sensitivity and specificity in identifying CAD were 47.3% and 95.2%, respectively, while the positive predictive value and negative predictive value were 90% and 66%, respectively. The accuracy of DSE was 72.5%. All four patients with diffuse diabetic coronary artery disease had negative DSE.
Conclusion: DSE is a poor predictor of coronary artery disease in type 2 DM patients being evaluated for renal transplantation. CA should be included in evaluation of type 2 diabetic patients who are renal transplant candidates.
abstract_id: PUBMED:10352196
Dobutamine stress echocardiography for the detection of significant coronary artery disease in renal transplant candidates. Prophylactic coronary revascularization may reduce the risk for cardiac events in diabetic renal transplant candidates. No published data exist on the accuracy of dobutamine stress echocardiography (DSE) for the diagnosis of angiographically defined coronary artery disease (CAD) in renal transplant candidates. The purpose of this study is to examine the accuracy of DSE for the detection of CAD in high-risk renal transplant candidates compared with coronary angiography. Fifty renal transplant candidates with diabetic nephropathy (39 patients) or end-stage renal disease (ESRD) from other causes (11 patients) underwent prospectively performed DSE, followed by quantitative coronary angiography (QCA) and qualitative visual assessment of CAD severity. Twenty of 50 DSE tests were positive for inducible ischemia. Twenty-seven patients (54%) had a stenosis of 50% or greater by QCA, 12 patients (24%) had a stenosis of greater than 70% by QCA, and 16 patients (32%) had a stenosis greater than 75% by visual estimation. The sensitivity and specificity of DSE for CAD diagnosis were respectively 52% and 74% compared with QCA stenosis of 50% or greater, 75% and 71% compared with QCA stenosis greater than 70%, and 75% and 76% for stenosis greater than 75% by visual estimate. On long-term follow-up (22.5 +/- 10.1 months), 6 of 30 patients (20%) with negative DSE results and 11 of 20 patients (55%) with positive DSE results had a cardiac death, myocardial infarction (MI), or coronary revascularization. Six of 27 patients (22%) with a QCA stenosis of 50% or greater had a cardiac death or MI compared with none of the 23 patients (0%) with QCA stenosis less than 50% (P = 0.025). We conclude that DSE is a useful but imperfect screening test for angiographically defined CAD in renal transplant candidates.
abstract_id: PUBMED:2226509
Cardiac evaluation of candidates for kidney transplantation: value of exercise radionuclide angiocardiography. In view of the high incidence and mortality of coronary artery disease (CAD) in patients with kidney transplantation, a systematic cardiac evaluation was prospectively performed in 103 uraemic patients eligible for transplantation. After clinical examination, 28 patients with symptoms of CAD or diabetes mellitus were referred directly for coronary angiography, whereas the remaining 75 patients had rest and exercise radionuclide angiocardiography for evaluation of possible asymptomatic CAD. Among them, left ventricular ejection fraction was below 40% at rest or fell during exercise by at least 5 EF% in 12 patients; coronary angiography in nine showed CAD in four and hypertensive heart disease in five. In the remaining 63 (of 75) patients without severe resting left ventricular dysfunction or exercise ischaemia, the follow-up of 28 +/- 7 months revealed no clinical manifestation of CAD. Overall incidence of CAD in symptomatic and asymptomatic patients during a follow-up of 27 months after cardiac evaluation was 20 and 25% in nondiabetic and diabetic candidates for kidney transplantation, respectively (P = n.s.). Thus, clinical examination combined with exercise radionuclide angiocardiography in patients without signs or symptoms of heart disease had a high predictive accuracy for presence or absence of late manifestations of CAD. Exercise radionuclide angiocardiography is therefore a useful method for screening kidney transplantation candidates for asymptomatic CAD.
abstract_id: PUBMED:20518007
Coronary angiography is a better predictor of mortality than noninvasive testing in patients evaluated for renal transplantation. Objectives: The goal of this study was to compare whether coronary angiography or noninvasive imaging more accurately identifies coronary artery disease (CAD) and predicts mortality in patients with end-stage renal disease (ESRD) under evaluation for transplantation.
Background: CAD is a leading cause of mortality in patients with ESRD. The optimal method for identifying CAD in ESRD patients evaluated for transplantation remains controversial with a paucity of prognostic data currently available comparing noninvasive methods to coronary angiography.
Methods: The study cohort consisted of 57 patients undergoing both coronary angiography and stress perfusion imaging. Severe CAD was defined by angiography as ≥ 70% stenosis, and by noninvasive testing as ischemia in ≥ 1 zone. Follow-up for all cause mortality was 3.3 years.
Results: On noninvasive imaging, 63% had ischemia. On angiography, 40% had at least one vessel with severe stenoses. Abnormal perfusion was observed in 56% of patients without severe disease angiographically. Noninvasive imaging had poor specificity (24%) and poor positive predictive value (43%) for identifying severe disease. Angiography but not noninvasive imaging predicted survival; 3 year survival was 50% and 73% for patients with and without severe CAD by angiography (p<0.05).
Conclusions: False positive scintigrams limited noninvasive imaging in patients with ESRD. Angiography was a better predictor of mortality compared with noninvasive testing.
abstract_id: PUBMED:27392506
Role of Coronary Angiography in the Assessment of Cardiovascular Risk in Kidney Transplant Candidates. Cardiovascular disease is the leading cause of death among those with renal insufficiency, those requiring dialysis, and in recipients of kidney transplants reflecting the greatly increased cardiovascular burden that these patients carry. The best method by which to assess cardiovascular risk in such patients is not well established. In the present study, 1,225 patients seeking a kidney transplant, over a 30-month period, underwent cardiovascular evaluation. Two hundred twenty-five patients, who met selected criteria, underwent coronary angiography that revealed significant coronary artery disease (CAD) in 47%. Those found to have significant disease underwent revascularization. Among the patients found to have significant CAD, 74% had undergone a nuclear stress test before angiography and 65% of these stress tests were negative for ischemia. The positive predictive value of a nuclear stress test in this patient population was 0.43 and the negative predictive value was 0.47. During a 30-month period, 28 patients who underwent coronary angiography received an allograft. None of these patients died, experienced a myocardial infarction, or lost their allograft. The annual mortality rate of those who remained on the waiting list was well below the national average. In conclusion, our results indicate that, in renal failure patients, noninvasive testing fails to detect the majority of significant CAD, that selected criteria may identify patients with a high likelihood of CAD, and that revascularization reduces mortality both for those on the waiting list and for those who receive an allograft.
abstract_id: PUBMED:12198223
Kidney transplantation in type 2 diabetic patients: a comparison with matched non-diabetic subjects. Background: Because they generally are older and frequently have co-morbidities, patients with type 2 diabetes mellitus and end-stage renal disease seldom are selected for renal transplantation. Thus, information on transplantation results from controlled studies in this high-risk category of patients is scarce. We have compared the results of kidney transplantations in type 2 diabetic patients with carefully matched non-diabetic subjects.
Methods: All first cadaveric renal transplants performed in type 2 diabetic patients from January 1, 1988 to December 31, 1998 in our centre were included. Non-diabetic controls were individually matched with diabetic patients with respect to year of transplantation, sex, age, selected immunological parameters, and graft cold ischaemia.
Results: We included 64 type 2 diabetic and 64 non-diabetic patients who were followed for a mean period of 37+/-27 and 41+/-31 months, respectively, after renal transplantation. Patient survival at 1 and 5 years post-transplant was 85 and 69 vs 84 and 74% (P=0.43, NS), while graft survival rates censored for patient death were 84 and 77 vs 82 and 77% for diabetic and non-diabetic subjects, respectively (P=0.52, NS). With graft survival results not censored for death with functioning graft, no significant change was seen (diabetic vs non-diabetic group: 77 and 54 vs 73 and 61%, P=0.19, NS). Age, but not the presence of diabetes, was the only factor significantly affecting patient survival when both patient groups were pooled. With regard to post-transplant complications requiring hospitalization, there was a significant difference only in the number of patients who had amputations (diabetic vs non-diabetic group: 8 vs 0, P=0.01).
Conclusions: Patient and graft survival after kidney transplantation was similar in type 2 diabetic and matched non-diabetic subjects, with more amputations occurring in the diabetic group. Thus, at a single-centre level renal transplantation results almost equivalent to those in non-diabetic patients may be achieved in type 2 diabetes mellitus.
abstract_id: PUBMED:10147633
Noninvasive assessment of cardiac risk in insulin-dependent diabetic patients being evaluated for pancreatic transplantation using thallium-201 myocardial perfusion scintigraphy. We examined the value of thallium-201 myocardial perfusion scintigraphy in noninvasive assessment of cardiac risk in 36 insulin-dependent (type 1) diabetic patients being evaluated for pancreas or combined pancreas/kidney transplantation. An extensive cardiovascular evaluation including electrocardiogram was performed in all patients, and most patients were also evaluated by two-dimensional and Doppler echocardiography. Exercise thallium studies were performed in 31 patients. Five patients were unable to exercise and underwent dipyridamole-thallium study. The thallium images were abnormal in 12 patients, 10 of whom underwent coronary arteriography. Significant coronary artery disease was found in 7 of these patients. Nineteen patients underwent pancreatic (3 patients) or pancreato-renal (16) transplantation without any occurrence of cardiac death or nonfatal myocardial infarction peri-operatively or on follow-up ranging from 7 months to 21 months. In contrast, 3 cardiac events occurred in 12 patients not approved for transplantation, each of whom had an abnormal thallium study exhibiting significant ischemia. Resting left ventricular global and regional function was not helpful in determining perioperative risk. Thus, thallium-201 myocardial perfusion scintigraphy may be useful in identifying diabetic patients at low risk for pancreas transplantation and may obviate the need for routine coronary angiography in these patients.
abstract_id: PUBMED:17505644
Coronary artery surgery in patients with diabetes mellitus. Diabetes mellitus is present in 25-30% of patients undergoing coronary artery bypass graft surgery. Early and late post-operative prognoses are different for the diabetic patient. Coronary artery bypass grafting is indicated for lesions of 2 or more vessels, but it may also be preferred over percutaneous angioplasty for single-vessel lesions when the vessel is the anterior descending artery or a large area of myocardium is ischemic. Diabetic candidates for renal transplantation must be investigated and, if necessary, revascularized preoperatively. Morbidity is greater in these patients, mainly due to respiratory, renal and cerebral complications and wound infections. Intensive care unit and hospital lengths of stay are more prolonged, but early mortality is not increased. Diabetes mellitus represents an independent risk factor for late graft failure and for mortality from cardiac and general causes. Although the risk is increased, coronary artery surgery results in better quality of life and late survival in diabetic patients with severe coronary artery disease compared with medical treatment and percutaneous coronary angioplasty, especially in those who use insulin and when internal thoracic artery grafts are implanted.
abstract_id: PUBMED:36275988
Cardiac evaluation for end-stage kidney disease patients on the transplant waitlist: a single-center cohort study. Background: Cardiac evaluation before deceased donor kidney transplant (DDKT) remains a matter of debate. Data on Asian countries and countries with prolonged waiting times are lacking. This study aimed to assess the outcomes of patients referred for DDKT after a cardiac evaluation at an Asian tertiary transplant center.
Methods: This single-center retrospective review analyzed patients who were referred for waitlist placement and underwent cardiac stress testing between January 2009 and December 2015. Patients with cardiac symptoms were excluded. The primary outcome was three-point major adverse cardiovascular events (MACE), a composite of non-fatal myocardial infarction, non-fatal stroke, and cardiovascular death.
Results: Of 468 patients referred for DDKT, 198 who underwent cardiac stress testing (myocardial perfusion studies in 159 patients and stress echocardiography in 39 patients) were analyzed. MACE occurred in 20.7% of the patients over a median follow-up of 4.6 years. Cardiac stress tests were positive for ischemia in 19.7% of the patients. Coronary angiography was performed in 63 patients, including 29 patients with diabetic kidney disease and negative cardiac stress tests. Significant coronary artery disease (CAD) was detected in 27 patients (42.8%), of whom 18 underwent revascularization. MACE was associated with significant CAD on coronary angiography in the multivariable analysis. Cardiac stress test results were not associated with MACE. Amongst diabetic patients who had negative cardiac stress tests, 37.9% had significant CAD on coronary angiography.
Conclusions: The cardiovascular disease burden is significant amongst DDKT waitlist candidates. Pretransplant cardiac screening may identify patients with significant CAD at higher risk of MACE.
abstract_id: PUBMED:21321004
Is standardized cardiac assessment of asymptomatic high-risk renal transplant candidates beneficial? Background: Perioperative cardiovascular events in renal transplantation are common and non-invasive cardiac stress tests are recommended in high-risk renal transplant candidates. In 2004, we introduced a standardized preoperative cardiac risk assessment programme with the aim of reducing perioperative cardiac events.
Methods: Since 2004, all asymptomatic high-risk renal transplant candidates had to undergo non-invasive cardiac stress testing. Patients with a positive stress test underwent coronary angiography and, if indicated, revascularization. The incidence of perioperative cardiac events (≤30 days of transplantation) was analysed in all high-risk patients who received a transplant (screening group) and compared with high-risk renal transplant recipients evaluated in the 4 years before the introduction of the cardiac assessment programme (historical control group).
Results: Since 2004, 227 of 349 asymptomatic high-risk renal transplant candidates underwent non-invasive cardiac stress testing. In 15 patients (6.6%), significant ischaemia was found. Ten of these 15 patients underwent coronary angiography (eight patients had significant coronary artery disease and in five patients, percutaneous coronary intervention was performed). One hundred and sixty of 349 renal transplant candidates have received renal transplantation so far (screening group). In the screening group, 6 perioperative cardiac events (3.8%) occurred compared to 13 perioperative events (7.6%) in the historical control group (n = 172) (P = 0.136).
Conclusions: The incidence of significant cardiac ischaemia in high-risk renal transplant patients was low and was followed by revascularization in a small percentage of patients. No significant decrease in perioperative cardiac events was observed after the introduction of the standardized cardiac assessment programme.
Answer: The question of whether coronary angiography is essential for ischaemia imaging in type 2 diabetic kidney transplant candidates has been explored in several studies. The evidence suggests that coronary angiography may indeed be essential for accurately detecting coronary artery disease (CAD) in this patient population.
One study found that dobutamine stress echocardiography (DSE) had a sensitivity of 47.3% and a specificity of 95.2% for identifying CAD in type 2 diabetic patients being evaluated for renal transplantation. However, the study concluded that DSE is a poor predictor of CAD in these patients, and that coronary angiography should be included in their evaluation (PUBMED:17550928).
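As a side note for readers checking the arithmetic, the sensitivity, specificity and predictive values quoted above follow from a standard two-by-two diagnostic table. The short Python sketch below is an illustration only: the cell counts (9 true positives, 1 false positive, 10 false negatives, 20 true negatives) are back-calculated from the totals and percentages reported in PUBMED:17550928 (40 patients, 10 positive DSE tests, 19 angiography-positive patients, PPV 90%), not counts stated directly by the authors.

    # Illustrative sketch; cell counts inferred from reported totals, not taken verbatim from the paper.
    tp, fp = 9, 1        # 10 positive DSE tests, PPV 90% -> 9 true positives, 1 false positive
    fn, tn = 10, 20      # 19 angiography-positive patients in total -> 10 missed; 30 DSE-negative patients
    sensitivity = tp / (tp + fn)                 # 9/19  ~ 47.4% (reported as 47.3%)
    specificity = tn / (tn + fp)                 # 20/21 ~ 95.2%
    ppv = tp / (tp + fp)                         # 9/10  = 90%
    npv = tn / (tn + fn)                         # 20/30 ~ 66.7% (reported as 66%)
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # 29/40 = 72.5%
    print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}, acc {accuracy:.1%}")

The same bookkeeping applies to the accuracy figures quoted from the other studies in this answer.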
Another study indicated that while DSE is a useful screening test for angiographically defined CAD in renal transplant candidates, it is not perfect. The sensitivity and specificity of DSE varied depending on the degree of stenosis considered significant, and the study suggested that prophylactic coronary revascularization may reduce the risk for cardiac events in diabetic renal transplant candidates (PUBMED:10352196).
Further research comparing coronary angiography with noninvasive testing found that noninvasive imaging had poor specificity and positive predictive value for identifying severe CAD. Coronary angiography was a better predictor of mortality compared with noninvasive testing, suggesting that angiography may be more accurate in identifying CAD and predicting outcomes in patients with end-stage renal disease (ESRD) (PUBMED:20518007).
Additionally, a study involving 1,225 kidney transplant candidates found that noninvasive testing failed to detect the majority of significant CAD. The study concluded that selected criteria may identify patients with a high likelihood of CAD and that revascularization reduces mortality for those on the waiting list and for those who receive an allograft (PUBMED:27392506).
In summary, the evidence suggests that coronary angiography is a more accurate and essential method for detecting CAD in type 2 diabetic kidney transplant candidates compared to noninvasive ischaemia imaging methods. This is particularly important given the high cardiovascular risk associated with this patient population and the potential impact on transplant outcomes. |
Instruction: Is there a risk in placing a ceramic head on a previously implanted trunion?
Abstracts:
abstract_id: PUBMED:20700672
Is there a risk in placing a ceramic head on a previously implanted trunion? Background: Strategies for revising a ceramic-on-ceramic total hip prosthesis are controversial. Some consider reimplantation of a ceramic head on a well-fixed femoral stem inadvisable as it may lead to a fracture of the newly implanted head.
Questions/purposes: We assessed (1) the risk of fracture when a new ceramic head was placed on a previously implanted trunion; (2) the survival rate of the revised hips; and (3) hip function and acetabular and femoral component loosening at midterm followup.
Patients And Methods: We retrospectively reviewed all 126 patients (139 hips) who had revision of alumina-alumina hip arthroplasties between January 1977 and December 2005. Of these, 99 patients (110 hips) had revision of the socket only with retention of the femoral component. The femoral head was left in place in 33 hips, the same alumina head was re-implanted in seven hips, a new alumina head was implanted in 45 hips, a metallic head in 16, and a zirconia head in nine. Twenty-six patients (29 hips) died and nine (10 hips) were lost to followup before 5 years; this left 71 hips for review. Minimum followup was 60 months (mean, 112 months; range, 60-319 months).
Results: Eighteen hips required rerevision surgery, 11 for aseptic loosening, two for septic loosening, two for fracture of a ceramic liner, one for recurrent dislocation, one for ipsilateral femoral fracture, and one for unexplained pain. Among the 61 ceramic heads implanted on a well-fixed stem, no fracture of the head occurred at a mean 88 months' followup. The survival rate at 10 years with mechanical failure as the end point was 81.6%.
Conclusions: We observed no fractures of the ceramic heads implanted on a previous titanium trunion. This approach is possible if inspection shows no major imperfection of the Morse taper.
abstract_id: PUBMED:24142666
Ceramic head fracture in ceramic-on-polyethylene total hip arthroplasty. Revision rates of total hip arthroplasty (THA) have decreased since the introduction of THA with ceramic components, as ceramic components can reduce component wear and osteolysis. The fracture of a ceramic component is a rare but potentially serious event. Thus, ceramic-on-polyethylene articulation is gaining attention as a way to reduce ceramic component fracture. There are a few recent reports of ceramic head fracture with a polyethylene liner. Herein, we describe a case of a ceramic head component fracture with a polyethylene liner. The fractured ceramic head was a 28 mm short-neck head paired with a conventional polyethylene liner. We treated the patient by total revision arthroplasty using 4th generation ceramic-on-ceramic components.
abstract_id: PUBMED:28043037
Stem taper mismatch has a critical effect on ceramic head fracture risk in modular hip arthroplasty. Background: Modular total hip prostheses with ceramic heads are well established in orthopedic surgery and widely used. With the variety of different manufacturers and available designs, components are at risk for mismatch. Several case studies show the potentially devastating effects of mismatch.
Methods: The aim of this study was to investigate the outcome of one arbitrary component mismatch with commercially available components that appear to provide a stable fixation during assembly. A biomechanical in-vitro analysis of fracture strength (n=5) was carried out in accordance with ISO 7206-10. "Type1" Bi-Metric®-stems were mismatched with "V40" Al2O3 ceramic heads.
Findings: Mean fracture strength was reduced to about 50% of the recommended FDA minimum by the mismatch (mean 23.68 kN, SD 2.35 kN). A small contact area between head and stem taper was identified as a potential key parameter.
Interpretation: Mixing and matching components can put a patient at greater risk for ceramic head fracture and must be avoided.
abstract_id: PUBMED:37398526
Late Atraumatic Ceramic Head Fracture in Total Hip Arthroplasty: A Case Report. Introduction: Atraumatic ceramic femoral head fracture is an uncommon but overwhelming complication of total hip arthroplasty (THA). The complication rate is low, with few reports in the literature. It is critical to continue researching late fracture risk to mitigate these instances.
Case Report: A 68-year-old Caucasian female presented with an atraumatic ceramic femoral head fracture in the setting of a ceramic-on-ceramic THA 17 years after primary surgery. The patient was successfully revised to a dual-mobility construct with a ceramic femoral head and a highly cross-linked polyethylene liner. The patient returned to normal function without pain.
Conclusion: The complication rate for fracture of the ceramic femoral head is as low as 0.001% for fourth-generation alumina matrix composite designs, while the complication rate of late atraumatic ceramic fracture is largely unknown. We present this case to add to the current literature.
abstract_id: PUBMED:32478987
Early Fracture of the Trunion in Total Hip Arthroplasty: A Case Report. A 59-year-old man who had previously undergone total hip arthroplasty (THA) with the use of a dual modular (head and neck) total hip implant presented with a mechanical failure at the trunion 9 yr after index surgery with a low-energy mechanism. The fractured stem was then removed and a revision stem implanted to restore the patient's ability to ambulate. We demonstrate and contribute to the small but growing evidence of failure of modular THA systems at the trunion.
abstract_id: PUBMED:33133408
Outcome of Ceramic-on-Ceramic Total Hip Arthroplasty with 4th Generation 36 mm Head Compared to that with 3rd Generation 28 mm Head by Propensity Score Matching. Background: With the development of 4th generation ceramic bearing, the large ceramic head is available for ceramic-on-ceramic total hip arthroplasty (THA). This retrospective study aimed to compare the outcomes of ceramic-on-ceramic THA with 4th generation 36 mm head to those with 3rd generation 28 mm head using propensity score matching.
Methods: We retrospectively reviewed the results of 133 ceramic-on-ceramic THAs with 4th generation 36 mm ceramic head in 129 patients and 133 ceramic-on-ceramic THAs identified from 405 ceramic-on-ceramic THAs with 3rd generation 28 mm head by propensity score matching. There were 83 males and 50 females in both groups with a mean age of 55 years. There was no significant difference in other demographic features except for follow-up period (4.2 years in the 36 mm group and 6.4 years in the 28 mm group, p < 0.001). Clinical and radiological results and occurrence of complication were compared between the two groups.
Results: Harris Hip Score was increased significantly from 46.4 to 92.1 in the 36 mm group and from 46.7 to 93.6 in the 28 mm group. No loosening or osteolysis was observed in the 36 mm group. However, one hip showed radiologic sign of loosening in the 28 mm group. As for complication, postoperative dislocation was more frequent in the 28 mm group (6 in the 28 mm group vs. 0 in the 36 mm group, p = 0.03). Otherwise, there was no significant difference in other results including inguinal pain, squeaking or ceramic fracture.
Conclusion: Ceramic-on-ceramic THA with 4th generation 36 mm head significantly reduced postoperative dislocation rate without increasing the rate of inguinal pain, squeaking, or ceramic fracture compared to that with 3rd generation 28 mm head.
abstract_id: PUBMED:33842105
Late Onset Atraumatic Ceramic Head Fracture of a Hybrid Ceramic Bearings Total Hip Arthroplasty. Ceramic head fracture is a major complication of ceramic-on-ceramic (CoC) total hip arthroplasty (THA) and though new generation ceramics have lowered the rates, although it is still a great concern. We report a case of late onset (more than 10 years after surgery) ceramic head fracture of a hybrid ceramic bearings to emphasize on its unusual clinical manifestation. Furthermore, we highlight the late onset presentation and also the rarity of this complication with this particular hybrid ceramic bearings. A relevant review of the literature revealed that hybrid ceramic bearings need to be more thoroughly studied to understand modes of their failure and to reach a consensus on how to reduce and prevent these disastrous complications.
abstract_id: PUBMED:35658905
A case-driven hypothesis for multi-stage crack growth mechanism in fourth-generation ceramic head fracture. Background: Ceramic bearings are used in total hip arthroplasty due to their excellent wear behaviour and biocompatibility. The major concern related to their use is material brittleness, which significantly impacts on the risk of fracture of ceramic components. Fracture toughness improvement has contributed to the decrease in fracture rate, at least of the prosthetic head. However, the root cause behind these rare events is not fully understood. This study evaluated head fracture occurrence in a sizeable cohort of patients with fourth-generation ceramic-on-ceramic implants and described the circumstances reported by patients in the rare cases of head fracture.
Methods: The clinical survivorship of 29,495 hip prostheses, with fourth-generation ceramic bearings, was determined using data from a joint replacement registry. The average follow-up period was 5.2 years (range 0.1-15.6). Retrieval analysis was performed in one case for which the ceramic components were available.
Results: Clinical outcomes confirmed the extremely low fracture rate of fourth-generation ceramic heads: only two out of 29,495 heads fractured. The two fractures, both involving 36 mm heads, occurred without a concurrent or previous remarkable trauma. Considering the features of the fractured head, a multi-stage crack growth mechanism has been hypothesized to occur following damage at the head-neck taper interface.
Conclusions: Surgeons must continue to pay attention to the assembly of the femoral head: achieving a proper head seating on a clean taper is a prerequisite to decrease the risk of occurrence of any damage process within head-neck junction, which may cause high stress concentration at the contact surface, promoting crack nucleation and propagation even in toughened ceramics.
abstract_id: PUBMED:22927896
Traumatic ceramic femoral head fracture: an initial misdiagnosis. Background And Purpose: Ceramic heads are widely used in modern total hip arthroplasty (THA). Although a rare complication, fractures of ceramic heads are described in the literature, evoking uncertainties regarding the in vivo stability of this material, especially when impaction of a fractured ceramic head can lead to disastrous results.
Methods: In this case report, we present a fracture of a ceramic head after trauma. A misinterpretation of the initial radiographs led to severe fragmentation of the ceramic head two weeks after the incident, later resulting in complete destruction of the arthroplasty.
Results And Interpretation: Remarkably, it was radiographs obtained more than one year after the trauma that led to this delayed diagnosis. A single radiographic evaluation seems insufficient, especially in patients with a ceramic head in a THA who have persistent pain after major trauma.
abstract_id: PUBMED:29725513
Acetabulum Protrusion Following Ceramic Liner Fracture: A Case Report. Hip arthroplasty is an extremely satisfying treatment method for coxarthrosis which is in increasing use throughout the world. However, loosening of the prosthesis is a significant complication and to overcome this, ceramic liners are increasingly being selected. If the survival of ceramic surfaces is prolonged, there is a risk of fracture of the ceramic materials. New ceramic materials developed to overcome this problem are more resistant. The case presented here is of a patient in whom liner fracture developed following ceramic-ceramic hip arthroplasty. The ceramic femoral head was observed to have protruded into the defect created in the acetabular component. Acetabular revision was applied to the patient.
Answer: Yes, there is a risk associated with placing a new ceramic head on a previously implanted trunion. The concern is that this may lead to a fracture of the newly implanted head due to potential imperfections or damage to the trunion that occurred during the lifespan of the original implant. However, a study assessing the risk of fracture when a new ceramic head was placed on a previously implanted trunion found that among 61 ceramic heads implanted on a well-fixed stem, no fracture of the head occurred at a mean 88 months' followup (PUBMED:20700672). This suggests that with careful inspection to ensure no major imperfection of the Morse taper, it is possible to place a new ceramic head on a previously implanted trunion without observing fractures at midterm followup.
It is important to note that other factors can contribute to the risk of ceramic head fracture, such as stem taper mismatch, which can critically affect fracture risk and should be avoided to prevent patient risk (PUBMED:28043037). Additionally, the fracture of ceramic components, while rare, is a serious event and can occur even with newer generation ceramics (PUBMED:37398526, PUBMED:35658905). The fracture toughness of ceramic materials has improved over time, reducing the rate of fractures (PUBMED:33133408), but the risk is not entirely eliminated, and late onset fractures have been reported (PUBMED:33842105).
In conclusion, while placing a new ceramic head on a previously implanted trunion can be done with a low observed risk of fracture in the midterm, it is essential to ensure that there are no major imperfections on the trunion and to be aware of the potential risks associated with ceramic material brittleness and the importance of proper component matching. |
Instruction: Is surgery a risk factor for Creutzfeldt-Jakob disease?
Abstracts:
abstract_id: PUBMED:18257690
Is surgery a risk factor for Creutzfeldt-Jakob disease? Outcome variation by control choice and exposure assessments. Objective: To determine whether methodological differences explain divergent results in case-control studies examining surgery as a risk factor for Creutzfeldt-Jakob disease (CJD).
Methods: After case-control studies were systematically identified using PubMed, we performed a homogeneity analysis and applied models to effect sizes (odds ratio [OR] with 95% confidence interval [CI]) using 2 parameters: type of control subject used and consistency of data ascertainment. The hospitals and communities were located in Europe, Japan, and Australia. Patients were CJD case subjects and age- and sex-matched control subjects in the hospital or community. Because of the natural history of the disease, CJD subjects are not considered reliable sources of information for these studies. Therefore, individuals who are considered close to the subjects and who have knowledge of their medical history, including spouses and relatives, are necessarily identified as proxy informants for the surgical record of the case subjects.
Results: Overall, the effect sizes lacked homogeneity (P<.0001). Three studies that used control subjects from the community revealed a significantly elevated risk of CJD for patients who underwent surgery (OR, 1.82; 95% CI, 1.41-2.35 [P<.0001]), whereas 3 investigations that used control subjects from the hospital revealed a significantly reduced risk (OR, 0.69; 95% CI, 0.52-0.90 [P=.0069]). Two studies that used proxy informants to acquire information about case subjects and control subjects (consistent ascertainment) found that the risk of CJD was significantly lower in those subjects who underwent surgery (OR, 0.65; 95% CI, 0.48-0.87 [P=.0043]). Conversely, 4 studies in which proxy informants acted only on behalf of case subjects (inconsistent data ascertainment) found a significant positive association between surgery and CJD (OR, 1.67; 95% CI, 1.32-2.12 [P<.0001]). Both models fit the data very well, leaving no remaining variance in effect sizes to explain.
Conclusion: Variation in the type of control subjects used and in exposure assessment in case-control studies may partially explain conflicting data regarding the association between surgery and CJD. However, there was almost complete confounding of these 2 parameters, making interpretation more difficult. Planning of future investigations must carefully consider these design elements.
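The abstract above reports pooled odds ratios and a test of homogeneity across effect sizes. The authors' exact model is not specified here, but a minimal sketch of the standard fixed-effect (inverse-variance) approach on the log-OR scale, with Cochran's Q for homogeneity, illustrates how subgroup estimates of this kind are typically combined; the function name and all numeric inputs below are purely illustrative and are not data from the cited study.

```python
import numpy as np
from scipy.stats import chi2

def pool_odds_ratios(ors, ci_los, ci_his):
    """Fixed-effect (inverse-variance) pooling of odds ratios on the log scale,
    plus Cochran's Q test of homogeneity across studies."""
    log_or = np.log(ors)
    # Recover each study's standard error of log(OR) from the width of its 95% CI.
    se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)
    w = 1.0 / se**2                                   # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
    q = np.sum(w * (log_or - pooled) ** 2)            # Cochran's Q statistic
    p_het = chi2.sf(q, df=len(ors) - 1)               # homogeneity p-value
    return np.exp(pooled), ci, q, p_het

# Purely illustrative inputs -- not the effect sizes from the cited study.
pooled_or, ci, q, p_het = pool_odds_ratios(
    ors=np.array([1.8, 1.6, 2.1]),
    ci_los=np.array([1.2, 1.1, 1.4]),
    ci_his=np.array([2.7, 2.3, 3.2]),
)
print(f"pooled OR {pooled_or:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}); Q = {q:.2f}, p = {p_het:.3f}")
```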
abstract_id: PUBMED:10941953
European surveillance on Creutzfeldt-Jakob disease: a case-control study for medical risk factors. Medical risk factors for Creutzfeldt-Jakob disease (CJD) were analyzed in a prospective ongoing case-control study based on European CJD surveillance. Detailed data on past and recent medical history were analyzed in 405 cases and controls matched by sex, age, and hospital. Data were correlated with polymorphism at codon 129 of the prion protein gene. Our analysis did not support a number of previously reported associations and failed to identify any common medical risk factor for CJD. Although not statistically significant, brain surgery was associated with an increased risk of CJD. A detailed medical history should be obtained in every suspected CJD case in order to identify iatrogenic sources of CJD.
abstract_id: PUBMED:19659942
The risk of iatrogenic Creutzfeldt-Jakob disease through medical and surgical procedures. There have been more than 400 patients who contracted Creutzfeldt-Jakob disease (CJD) via a medical procedure, that is, through the use of neurosurgical instruments, intracerebral electroencephalographic electrodes (EEG), human pituitary hormone, dura mater grafts, corneal transplant, and blood transfusion. The number of new patients with iatrogenic CJD has decreased; however, cases of variant CJD that was transmitted via blood transfusion have been reported since 2004. Clearly, iatrogenic transmission of CJD remains a serious problem. Recently, we investigated medical procedures (any surgery, neurosurgery, ophthalmic surgery, and blood transfusion) performed on patients registered by the CJD Surveillance Committee in Japan during a recent 9-year period. In a case-control study comprising 753 sporadic CJD (sCJD) patients and 210 control subjects, we found no evidence that prion disease was transmitted via the investigated medical procedures before onset of sCJD. In a review of previously reported case-control studies, blood transfusion was never shown to be a significant risk factor for CJD; our study yielded the same result. Some case-control studies reported that surgery was a significant risk factor for sCJD. However, when surgical procedures were categorized by type of surgery, the results were conflicting, which suggests that there is little possibility of prion transmission via surgical procedures. In our study, 4.5% of sCJD patients underwent surgery after onset of sCJD, including neurosurgeries in 0.8% and ophthalmic surgeries in 1.9%. The fact that some patients underwent surgery, including neurosurgery, even after the onset of sCJD indicates that we cannot exclude the possibility of prion transmission via medical procedures. We must remain vigilant against prion diseases to reduce the risk of iatrogenesis.
abstract_id: PUBMED:12791449
The perceived risk of variant Creutzfeldt-Jakob disease and the effect of additional delay in tonsillectomy: a questionnaire-based parents' perspective. Objectives: In February 2001 the United Kingdom Department of Health in conjunction with the British Association of Otolaryngology, Head and Neck Surgeons decreed that all non-emergency tonsillectomies should be performed using disposable instruments because of the theoretical risk of transmission of variant Creutzfeldt-Jakob disease (vCJD). There was an understandable delay in the provision of these instruments by the various manufacturers, leading to an increase in waiting time for surgery. It was decided to assess parental attitudes to the risk of vCJD, and assess the effect the additional delay had on their child.
Method: A questionnaire was sent to the parents of all 249 children on the waiting list for tonsillectomy.
Results: Seventy percent replied, and of these, 37% felt there was a risk of reusing instruments, only 10% felt there was no risk, and the remaining 53% did not know if there was any risk. Nevertheless 41% of parents would have gone ahead using old instruments if allowed. All parents of the 73 children waiting greater than 6 months were questioned on the effect of the additional delay. Only 7% reported improvement in symptoms, and 68% felt the additional delay had badly affected their child's health and wellbeing. Ninety percent of parents felt their child's symptoms still warranted tonsillectomy.
Conclusion: There is an awareness of risk of vCJD among parents whose children await tonsillectomy, although understandably the level of risk they feel is hard to quantify. The rate of symptom resolution whilst on the waiting list was very low.
abstract_id: PUBMED:18074392
Risk factors for sporadic Creutzfeldt-Jakob disease. Objective: Although surgical transmission of Creutzfeldt-Jakob disease (CJD) has been demonstrated, these iatrogenic cases account for only a small proportion of all CJD cases. The majority are sporadic CJD (sCJD) cases of unknown cause. This study investigated whether some cases classified as sCJD might have an unrecognized iatrogenic basis through surgical or other medical procedures.
Methods: This study compared medical risk factors from 431 sCJD cases referred from 1998 to 2006 with 454 population control subjects. Possible geographic and temporal links between neurological and gynecological operations in 857 sCJD cases referred from 1990 to 2006 were investigated.
Results: A reported history of ever having undergone surgery was associated with increased risk for sCJD (odds ratio, 2.0; 95% confidence interval, 1.3-2.1; p = 0.003). Increased risk was not associated with surgical categories chosen a priori but was confined to the residual category "other surgery," in which the increase in risk appeared most marked for three subcategories: skin stitches, nose/throat operations, and removal of growths/cysts/moles. No convincing evidence was found of links (same hospital, within 2 years) between cases undergoing neurosurgery or gynecological surgery.
Interpretation: It is unlikely that a high proportion of UK sCJD cases are the result of transmission during surgery, but we cannot exclude the possibility that such transmission occurs occasionally. A study based on accurate surgical histories obtained from medical records is required to determine whether the increased risk associated with reported surgical history reflects a causal association or recall bias.
abstract_id: PUBMED:15667663
Tissue classification for the epidemiological assessment of surgical transmission of sporadic Creutzfeldt-Jakob disease. A proposal on hypothetical risk levels. Background: Epidemiological studies on the potential role of surgery in Creutzfeldt-Jakob Disease transmission have disclosed associations with history of specific surgical interventions or reported negative results.
Methods: Within the context of a case-control study designed to address surgical risk of sporadic Creutzfeldt-Jakob Disease in Nordic European countries (EUROSURGYCJD Project), a strategy was adopted to categorise reported surgical procedures in terms of potential risk of Creutzfeldt-Jakob Disease acquisition. We took into account elements of biological plausibility, either clinically or experimentally demonstrated, such as tissue infectivity, PrP expression content or successful route of infection.
Results: We propose a classification of exposed tissues and anatomic structures, drawn up on the basis of their specific putative role as entry site for prion transmission through contact with surgical instruments that are not fully decontaminated.
Conclusions: This classification can serve as a reference, both in our study and in further epidemiological research, for categorisation of surgical procedures in terms of risk level of Creutzfeldt-Jakob Disease acquisition.
abstract_id: PUBMED:3897896
Creutzfeldt-Jakob disease: possible medical risk factors. To explore possible risk factors in the past medical history of patients with Creutzfeldt-Jakob disease (CJD), we conducted a case-control study among 26 cases and 40 matched controls. Statistically significant odds ratios were obtained for intraocular pressure testing; injury to or surgery on the head, face or neck; and trauma to other parts of the body. The odds ratios were nearly significant for head trauma and procedures requiring sutures. These data suggest that the CJD agent may be acquired by inoculation through injury or during surgery, and perhaps on certain absorbable sutures of animal origin. The tonometer used for glaucoma testing may also be a vehicle of transmission.
abstract_id: PUBMED:22777385
Sensitivity to biases of case-control studies on medical procedures, particularly surgery and blood transfusion, and risk of Creutzfeldt-Jakob disease. Background: Evidence of risk of Creutzfeldt-Jakob disease (CJD) associated with medical procedures, including surgery and blood transfusion, is limited by susceptibility to bias in epidemiological studies.
Methods: Sensitivity to bias was explored using a central-birth-cohort model with data from 18 case-control studies obtained from a review of 494 reports on medical procedures and risk of CJD, conducted systematically for the period January 1, 1989 to December 31, 2011.
Results: The validity of the findings in these studies may have been undermined by: recall; control selection; exposure assessment in life-time periods of different duration, out of time-at-risk of effect, or asymmetry in case/control data; and confounding by concomitant blood transfusion at the time of surgery. For sporadic CJD (sCJD), a history of surgery or blood transfusion was associated with risk in some, but not all, recent studies at a ≥10 year lag time, when controls were longitudinally sampled. Space-time aggregation of surgical events was not seen. Surgery at early clinical onset might be overrepresented among cases. Neither surgical history, blood transfusion unlabelled for donor status, dental treatments, nor endoscopic examinations were linked to variant CJD (vCJD).
Conclusions: These results indicate the need for further research. Common challenges within these studies include access to and content of past medical/dental treatment records for diseases with long incubation periods.
abstract_id: PUBMED:9660576
Case-control study of risk factors of Creutzfeldt-Jakob disease in Europe during 1993-95. European Union (EU) Collaborative Study Group of Creutzfeldt-Jakob disease (CJD). Background: Creutzfeldt-Jakob disease (CJD) is a transmissible spongiform encephalopathy. Genetic and iatrogenic forms have been recognised but most are sporadic and of unknown cause. We have studied risk factors for CJD as part of the 1993-95 European Union collaborative studies of CJD in Europe.
Methods: The 405 patients with definite or probable CJD who took part in our study had taken part in population-based studies done between 1993 and 1995 in Belgium, France, Germany, Italy, the Netherlands, and the UK. Data on putative risk factors from these patients were compared with data from 405 controls.
Findings: We found evidence for familial aggregation of CJD with dementia due to causes other than CJD (relative risk [RR] 2.26, 95% CI 1.31-3.90). No significant increased risk of CJD in relation to a history of surgery and blood transfusion was shown. There was no evidence for an association between the risk of CJD and the consumption of beef, veal, lamb, cheese, or milk. No association was found with occupational exposure to animals or leather. The few positive findings of the study include increased risk in relation to consumption of raw meat (RR 1.63 [95% CI 1.18-2.23]) and brain (1.68 [1.18-2.39]), frequent exposure to leather products (1.94 [1.13-3.33]), and exposure to fertiliser consisting of hoofs and horns (2.32 [1.38-2.91]). Additional analyses, for example stratification by country and of exposures pre-1985 and post-1985, suggest that these results should be interpreted with great caution.
Interpretation: Within the limits of the retrospective design of the study, our findings suggest that genetic factors other than the known CJD mutations may play an important part in CJD. Iatrogenic transmission of disease seems rare in this large population-based sample of patients with CJD. There is little evidence for an association between the risk of CJD and either animal exposure, or consumption of processed bovine meat or milk products for the period studied.
abstract_id: PUBMED:20547628
Nosocomial transmission of sporadic Creutzfeldt-Jakob disease: results from a risk-based assessment of surgical interventions. Objectives: Evidence of surgical transmission of sporadic Creutzfeldt-Jakob disease (sCJD) remains debatable in part due to misclassification of exposure levels. In a registry-based case-control study, the authors applied a risk-based classification of surgical interventions to determine the association between a history of surgery and sCJD.
Design: Case-control study, allowing for detailed analysis according to time since exposure.
Setting: National populations of Denmark and Sweden.
Participants: From national registries of Denmark and Sweden, the authors included 167 definite and probable sCJD cases with onset during the period 1987-2003, 835 age-, sex- and residence-matched controls and 2224 unmatched. Surgical procedures were categorised by anatomical structure and presumed risk of transmission level. The authors used logistic regression to determine the odds ratio (OR) for sCJD by surgical interventions in specified time-windows before disease-onset.
Results: From comparisons with matched controls, procedures involving the retina and optic nerve were associated with an increased risk at a latency of ≥1 year (OR (95% CI) 5.53 (1.08 to 28.0)). At latencies of 10 to 19 years, interventions on peripheral nerves (4.41 (1.17 to 16.6)) and skeletal muscle (1.58 (1.01 to 2.48)) were directly associated. Interventions on blood vessels (4.54 (1.01 to 20.0)), peritoneum (2.38 (1.14 to 4.96)) and skeletal muscle (2.04 (1.06 to 3.92)), interventions conducted by vaginal approach (2.26 (1.14 to 4.47)) and a pooled category of lower-risk procedures (2.81 (1.62 to 4.88)) had an increased risk after ≥20 years. Similar results were found when comparing with unmatched controls.
Interpretation: This observation is in concordance with animal models of prion neuroinvasion and is likely to represent a causal relation of surgery with a non-negligible proportion of sCJD cases.
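The registry study above estimates odds ratios with 95% confidence intervals via logistic regression on matched and unmatched controls. As a hedged illustration of the arithmetic underlying an unadjusted OR of that kind (ignoring the matching and covariate adjustment that the study handled with regression), the sketch below computes a crude OR and a Woolf log-scale 95% CI from a 2x2 exposure table; the function name and counts are hypothetical.

```python
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_hat) - z * se_log)
    upper = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, (lower, upper)

# Hypothetical counts only -- not taken from the Danish/Swedish registry data.
or_hat, (lo, hi) = odds_ratio_with_ci(a=30, b=137, c=90, d=745)
print(f"OR = {or_hat:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```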
Answer: The question of whether surgery is a risk factor for Creutzfeldt-Jakob disease (CJD) has been explored in various studies with mixed results. Some case-control studies have suggested an association between surgery and an increased risk of CJD. For instance, a systematic review found that three studies using community controls showed a significantly elevated risk of CJD for patients who underwent surgery, while three studies using hospital controls showed a significantly reduced risk (PUBMED:18257690). Another study found that a reported history of ever having undergone surgery was associated with an increased risk for sporadic CJD (sCJD) (PUBMED:18074392).
Conversely, other studies have not found a significant association between surgery and CJD. A European surveillance case-control study did not identify any common medical risk factor for CJD, although brain surgery was associated with an increased risk, albeit not statistically significant (PUBMED:10941953). Similarly, a study in Japan found no evidence that prion disease was transmitted via investigated medical procedures before the onset of sCJD (PUBMED:19659942).
The variability in findings may be due to methodological differences in studies, such as the type of control subjects used, consistency of data ascertainment, and potential biases like recall bias or control selection bias (PUBMED:18257690; PUBMED:22777385). Additionally, the risk associated with surgery might be confined to specific types of procedures or tissues involved, as suggested by a proposal for tissue classification based on hypothetical risk levels (PUBMED:15667663).
Overall, while some evidence suggests that surgery could be a risk factor for CJD, particularly when involving high-risk tissues or procedures, the association is not consistently supported across all studies. Further research with careful consideration of study design and potential biases is required to clarify the relationship between surgical procedures and the risk of CJD (PUBMED:18257690; PUBMED:22777385). |
Instruction: Do radiology and other health care presentations posted on the Internet contain accessible protected health information?
Abstracts:
abstract_id: PUBMED:18796683
Do radiology and other health care presentations posted on the Internet contain accessible protected health information? Purpose: To determine the prevalence of protected health information (PHI) in PowerPoint presentations available for downloading from the Internet.
Materials And Methods: No institutional review board approval was needed for this project, which involved no patient subjects. Two Google searches, each limited to PowerPoint files, were performed by using the criteria "Cardiac CT" and "Magnetic Resonance Imaging." The first 100 hits of each search were downloaded from the source Web site. The presentations were examined for the PHI contained on any images, links, or notes pages.
Results: Two hundred presentations were evaluated. There were 143 presentations with images, image links, or notes, and 52 (36%) of these contained PHI. There were 129 presentations containing radiologic images; 51 (40%) of these contained PHI, and 31 (24%) showed the patient's name. At least 132 (66%) of the 200 presentations originated from the United States. Thirty-five (37%) of 94 presentations with images, image links, or notes contained PHI. Eighty-six U.S. presentations contained radiologic images; 34 (40%) of these contained PHI, and 19 (22%) showed the patient's name.
Conclusion: Online or other distributions of PowerPoint presentations that contain radiologic images often contain PHI, and this may violate laws, including the U.S. Health Insurance Portability and Accountability Act.
abstract_id: PUBMED:28634156
Classifying Chinese Questions Related to Health Care Posted by Consumers Via the Internet. Background: In question answering (QA) system development, question classification is crucial for identifying information needs and improving the accuracy of returned answers. Although the questions are domain-specific, they are asked by non-professionals, making the question classification task more challenging.
Objective: This study aimed to classify health care-related questions posted by the general public (Chinese speakers) on the Internet.
Methods: A topic-based classification schema for health-related questions was built by manually annotating randomly selected questions. The Kappa statistic was used to measure the interrater reliability of multiple annotation results. Using the above corpus, we developed a machine-learning method to automatically classify these questions into one of the following six classes: Condition Management, Healthy Lifestyle, Diagnosis, Health Provider Choice, Treatment, and Epidemiology.
Results: The consumer health question schema was developed with a four-hierarchical-level of specificity, comprising 48 quaternary categories and 35 annotation rules. The 2000 sample questions were coded with 2000 major codes and 607 minor codes. Using natural language processing techniques, we expressed the Chinese questions as a set of lexical, grammatical, and semantic features. Furthermore, the effective features were selected to improve the question classification performance. From the 6-category classification results, we achieved an average precision of 91.41%, recall of 89.62%, and F1 score of 90.24%.
Conclusions: In this study, we developed an automatic method to classify questions related to Chinese health care posted by the general public. It enables Artificial Intelligence (AI) agents to understand Internet users' information needs on health care.
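The abstract above reports average precision, recall, and F1 over six question classes. The exact averaging scheme is not stated, but a minimal sketch using scikit-learn's macro-averaged metrics shows one common way such figures are computed; the class names mirror the schema described above, while the toy labels and predictions are invented for illustration.

```python
from sklearn.metrics import precision_recall_fscore_support

# The six classes mirror the schema described in the abstract; the labels and
# predictions below are invented purely for illustration.
classes = ["Condition Management", "Healthy Lifestyle", "Diagnosis",
           "Health Provider Choice", "Treatment", "Epidemiology"]
y_true = ["Diagnosis", "Treatment", "Healthy Lifestyle", "Diagnosis", "Epidemiology", "Treatment"]
y_pred = ["Diagnosis", "Treatment", "Diagnosis", "Diagnosis", "Epidemiology", "Treatment"]

# Macro-averaging weights each class equally -- one common way to report a single
# averaged precision/recall/F1 figure for a multi-class classifier.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=classes, average="macro", zero_division=0
)
print(f"precision = {precision:.4f}, recall = {recall:.4f}, F1 = {f1:.4f}")
```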
abstract_id: PUBMED:32570572
Accessible Rates to Health Information on the Internet Among the Elderly Increased Over Fifteen Years. Yuzawa Town, located in the Niigata prefecture of Japan, is famous for its hot springs. A citizen-centered health promotion program, the Yuzawa family health plan, was initiated in 2002 and has been held for seventeen years. We evaluated changes in elderly people's rates of access to health information on the Internet between 2002 and 2017. A total of 431 and 435 questionnaires were collected from elderly people at least 65 years old in 2002 and 2017, respectively. The rate of access to health information on the Internet among the elderly increased over this period (p<0.001). Profiles of elderly people with access to online health information are described: sons and daughters may supply them with health information, elderly people with health concerns or anxiety may be reluctant to seek out health information, and those with access to health information were able to resolve suffering and manage risks on their own.
abstract_id: PUBMED:22884947
How does searching for health information on the Internet affect individuals' demand for health care services? The emergence of the Internet made health information, which previously was almost exclusively available to health professionals, accessible to the general public. Access to health information on the Internet is likely to affect individuals' health care related decisions. The aim of this analysis is to determine how health information that people obtain from the Internet affects their demand for health care. I use a novel data set, the U.S. Health Information National Trends Survey (2003-07), to answer this question. The causal variable of interest is a binary variable that indicates whether or not an individual has recently searched for health information on the Internet. Health care utilization is measured by an individual's number of visits to a health professional in the past 12 months. An individual's decision to use the Internet to search for health information is likely to be correlated to other variables that can also affect his/her demand for health care. To separate the effect of Internet health information from other confounding variables, I control for a number of individual characteristics and use the instrumental variable estimation method. As an instrument for Internet health information, I use U.S. state telecommunication regulations that are shown to affect the supply of Internet services. I find that searching for health information on the Internet has a positive, relatively large, and statistically significant effect on an individual's demand for health care. This effect is larger for the individuals who search for health information online more frequently and people who have health care coverage. Among cancer patients, the effect of Internet health information seeking on health professional visits varies by how long ago they were diagnosed with cancer. Thus, the Internet is found to be a complement to formal health care rather than a substitute for health professional services.
abstract_id: PUBMED:28216191
Is the Internet a Suitable Patient Resource for Information on Common Radiological Investigations?: Radiology-Related Information on the Internet. Rationale And Objective: This study aimed to assess the quality of Internet information about common radiological investigations.
Materials And Methods: Four search engines (Google, Bing, Yahoo, and Duckduckgo) were searched using the terms "X-ray," "cat scan," "MRI," "ultrasound," and "pet scan." The first 10 webpage results returned for each search term were recorded, and their quality and readability were analyzed by two independent reviewers (DJB and LCY), with discrepancies resolved by consensus. Analysis of information quality was conducted using validated instruments for the assessment of health-care information (DISCERN score is a multi-domain tool for assessment of health-care information quality by health-care professionals and laypeople (max 80 points)) and readability (Flesch-Kincaid and SMOG or Simple Measure of Gobbledygook scores). The search result pages were further classified into categories as follows: commercial, academic (educational/institutional), and news/magazine. Several organizations offer website accreditation for health-care information, and accreditation is recognized by the presence of a hallmark or logo on the website. The presence of any valid accreditation marks on each website was recorded. Mean scores between groups were compared for significance using the Student t test.
Results: A total of 200 webpages returned (108 unique website addresses). The average DISCERN score was <50 points for all modalities and search engines. No significant difference was seen in readability between modalities or between search engines. Websites carrying validated accreditation marks were associated with higher average DISCERN scores: X-ray (39.36 vs 25.35), computed tomography (45.45 vs 31.33), and ultrasound (40.91 vs 27.62) (P < .01). Academic/government institutions produced material with higher DISCERN scores: X-ray (40.06 vs 22.23), magnetic resonance imaging (44.69 vs 29), ultrasound (46 vs 31.91), and positron emission tomography (45.93 vs 38.31) (P < .01). Commercial websites produced material with lower mean DISCERN scores: X-ray (17.25 vs 31.69), magnetic resonance imaging (20.8 vs 40.1), ultrasound (24.11 vs 42.35), and positron emission tomography (24.5 vs 44.45) (P < .01).
Conclusions: Although readability is adequate, the overall quality of radiology-related health-care information on the Internet is poor. High-quality online resources should be identified so that patients may avoid the use of poor-quality information derived from general search engine queries.
abstract_id: PUBMED:35715655
Ensemble Approaches to Recognize Protected Health Information in Radiology Reports. Natural language processing (NLP) techniques for electronic health records have shown great potential to improve the quality of medical care. The text of radiology reports frequently constitutes a large fraction of EHR data, and can provide valuable information about patients' diagnoses, medical history, and imaging findings. The lack of a major public repository for radiological reports severely limits the development, testing, and application of new NLP tools. De-identification of protected health information (PHI) presents a major challenge to building such repositories, as many automated tools for de-identification were trained or designed for clinical notes and do not perform sufficiently well to build a public database of radiology reports. We developed and evaluated six ensemble models based on three publicly available de-identification tools: MIT de-id, NeuroNER, and Philter. A set of 1023 reports was set aside as the testing partition. Two individuals with medical training annotated the test set for PHI; differences were resolved by consensus. Ensemble methods included simple voting schemes (1-Vote, 2-Votes, and 3-Votes), a decision tree, a naïve Bayesian classifier, and AdaBoost boosting. The 1-Vote ensemble achieved recall of 998 / 1043 (95.7%); the 3-Votes ensemble had precision of 1035 / 1043 (99.2%). F1 scores were: 93.4% for the decision tree, 71.2% for the naïve Bayesian classifier, and 87.5% for the boosting method. Basic voting algorithms and machine learning classifiers incorporating the predictions of multiple tools can outperform each tool acting alone in de-identifying radiology reports. Ensemble methods hold substantial potential to improve automated de-identification tools for radiology reports to make such reports more available for research use to improve patient care and outcomes.
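A hedged sketch of the simple voting schemes described above: each de-identification tool contributes a set of predicted PHI spans, and a span is accepted when at least min_votes tools agree. This is not the authors' implementation; span matching, tokenization, and the underlying tools (MIT de-id, NeuroNER, Philter) are abstracted away, and all spans below are hypothetical.

```python
from typing import Dict, List, Set, Tuple

Span = Tuple[int, int]  # character offsets of a predicted PHI span within one report

def vote_ensemble(tool_predictions: List[Set[Span]], min_votes: int) -> Set[Span]:
    """Keep a PHI span if at least `min_votes` tools predicted it.
    min_votes=1 favours recall (anything any tool flagged);
    min_votes=len(tools) favours precision (unanimous spans only)."""
    counts: Dict[Span, int] = {}
    for spans in tool_predictions:
        for span in spans:
            counts[span] = counts.get(span, 0) + 1
    return {span for span, n in counts.items() if n >= min_votes}

def precision_recall(pred: Set[Span], gold: Set[Span]) -> Tuple[float, float]:
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 1.0  # convention: empty prediction set -> precision 1
    recall = tp / len(gold) if gold else 1.0
    return precision, recall

# Hypothetical span sets for one report from three tools, plus a reference annotation.
tool_a = {(0, 12), (40, 55)}
tool_b = {(0, 12), (40, 55), (90, 97)}
tool_c = {(0, 12)}
gold = {(0, 12), (40, 55)}

for k in (1, 2, 3):
    p, r = precision_recall(vote_ensemble([tool_a, tool_b, tool_c], k), gold)
    print(f"{k}-Vote ensemble: precision = {p:.2f}, recall = {r:.2f}")
```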
abstract_id: PUBMED:31719266
Assessment of internet usage for health-related information among clients utilizing primary health care services. Objective: This study aimed to identify the frequency and goals of Internet usage to access health-related information among primary health care service clients.
Methods: The study was conducted in a primary health care centre with a sample of 788 adults. The data were collected through a questionnaire developed by the researchers.
Results: The results showed that 81% (n = 640) of the participants used the Internet. All Internet-user participants reported that they used the Internet to access health-related information. Of the participants, 67% reported that they used the Internet primarily to obtain information about diseases; 94% reported that they found the online information reliable, and 92% reported that they did not confirm the information they obtained online. The frequency of Internet use to obtain health-related information increased with the level of education of participants. Participants with higher education found the online information to be more reliable and comprehensible. The results showed that while the use of the Internet to obtain health-related information was high, the information presented online was not always checked for accuracy.
Conclusion: Hence, provision of current and evidence-based information on health-related websites is crucial to preserve community health care.
abstract_id: PUBMED:10848396
Digital health care--the convergence of health care and the Internet. The author believes that interactive media (the Internet and the World Wide Web) and associated applications used to access those media (portals, browsers, specialized Web-based applications) will result in a substantial, positive, and measurable impact on medical care faster than any previous information technology or communications tool. Acknowledging the dynamic environment, the author classifies "pure" digital health care companies into three business service areas: content, connectivity, and commerce. Companies offering these services are attempting to tap into a host of different markets within the health care industry including providers, payers, pharmaceutical and medical products companies, employers, distributors, and consumers. As the fastest growing medium in history, and given the unique nature of health care information and the tremendous demand for content among industry professionals and consumers, the Internet offers a more robust and targeted direct marketing opportunity than traditional media. From the medical consumer's standpoint (i.e., the patient) the author sees the Internet as performing five critical functions: (1) Disseminate information, (2) Aid informed decision making, (3) Promote health, (4) Provide a means for information exchange and support--the community concept, and (5) Increase self-care and manage demand for health services, lowering direct medical costs. The author firmly submits the Web will provide overall benefits to the health care economy as health information consumers manage their own health problems that might not directly benefit from an encounter with a health professional. Marrying the Internet to other interactive technologies, including voice recognition systems and telephone-based triage lines among others, holds the promise of reducing unnecessary medical services.
abstract_id: PUBMED:11780707
Consumer health information seeking on the Internet: the state of the art. Increasingly, consumers engage in health information seeking via the Internet. Taking a communication perspective, this review argues why public health professionals should be concerned about the topic, considers potential benefits, synthesizes quality concerns, identifies criteria for evaluating online health information and critiques the literature. More than 70 000 websites disseminate health information; in excess of 50 million people seek health information online, with likely consequences for the health care system. The Internet offers widespread access to health information, and the advantages of interactivity, information tailoring and anonymity. However, access is inequitable and use is hindered further by navigational challenges due to numerous design features (e.g. disorganization, technical language and lack of permanence). Increasingly, critics question the quality of online health information; limited research indicates that much is inaccurate. Meager information-evaluation skills add to consumers' vulnerability, and reinforce the need for quality standards and widespread criteria for evaluating health information. Extant literature can be characterized as speculative, comprised of basic 'how to' presentations, with little empirical research. Future research needs to address the Internet as part of the larger health communication system and take advantage of incorporating extant communication concepts. Not only should research focus on the 'net-gap' and information quality, it also should address the inherently communicative and transactional quality of Internet use. Both interpersonal and mass communication concepts open avenues for investigation and understanding the influence of the Internet on health beliefs and behaviors, health care, medical outcomes, and the health care system.
abstract_id: PUBMED:12186060
Finding reliable health-care information on the internet. The number of Web sites offering health and health-care information has grown dramatically over the past several years. However, it can be difficult to tell whether or not the information that you find is reliable and accurate. In this brief we describe the nature of this issue and offer some practical tips for judging reliability when searching for information on the Internet.
Answer: Yes, radiology and other health care presentations posted on the Internet often contain accessible protected health information (PHI). A study that evaluated 200 PowerPoint presentations available for download from the Internet, with a focus on "Cardiac CT" and "Magnetic Resonance Imaging," found that 36% of the presentations with images, image links, or notes contained PHI. Specifically, 40% of the presentations containing radiologic images had PHI, and 24% showed the patient's name. This may violate laws, including the U.S. Health Insurance Portability and Accountability Act (HIPAA) (PUBMED:18796683). |
Instruction: Does the cannabinoid dronabinol reduce central pain in multiple sclerosis?
Abstracts:
abstract_id: PUBMED:18615144
Role of the cannabinoid system in pain control and therapeutic implications for the management of acute and chronic pain episodes. Cannabis extracts and synthetic cannabinoids are still widely considered illegal substances. Preclinical and clinical studies have suggested that they may prove useful for treating diverse diseases, including those related to acute or chronic pain. The discovery of cannabinoid receptors, their endogenous ligands, and the machinery for the synthesis, transport, and degradation of these retrograde messengers, has equipped us with neurochemical tools for novel drug design. Agonist-activated cannabinoid receptors modulate nociceptive thresholds, inhibit release of pro-inflammatory molecules, and display synergistic effects with other systems that influence analgesia, especially the endogenous opioid system. Cannabinoid receptor agonists have shown therapeutic value against inflammatory and neuropathic pains, conditions that are often refractory to therapy. Although the psychoactive effects of these substances have limited clinical progress to study cannabinoid actions in pain mechanisms, preclinical research is progressing rapidly. For example, CB(1)-mediated suppression of mast cell activation responses, CB(2)-mediated indirect stimulation of opioid receptors located in primary afferent pathways, and the discovery of inhibitors for either the transporters or the enzymes degrading endocannabinoids, are recent findings that suggest new therapeutic approaches to avoid central nervous system side effects. In this review, we will examine promising indications of cannabinoid receptor agonists to alleviate acute and chronic pain episodes. Recently, Cannabis sativa extracts, containing known doses of tetrahydrocannabinol and cannabidiol, have been granted approval in Canada for the relief of neuropathic pain in multiple sclerosis. Further double-blind placebo-controlled clinical trials are needed to evaluate the potential therapeutic effectiveness of various cannabinoid agonist-based medications for controlling different types of pain.
abstract_id: PUBMED:23108552
Targeting the endocannabinoid system with cannabinoid receptor agonists: pharmacological strategies and therapeutic possibilities. Human tissues express cannabinoid CB(1) and CB(2) receptors that can be activated by endogenously released 'endocannabinoids' or exogenously administered compounds in a manner that reduces the symptoms or opposes the underlying causes of several disorders in need of effective therapy. Three medicines that activate cannabinoid CB(1)/CB(2) receptors are now in the clinic: Cesamet (nabilone), Marinol (dronabinol; Δ(9)-tetrahydrocannabinol (Δ(9)-THC)) and Sativex (Δ(9)-THC with cannabidiol). These can be prescribed for the amelioration of chemotherapy-induced nausea and vomiting (Cesamet and Marinol), stimulation of appetite (Marinol) and symptomatic relief of cancer pain and/or management of neuropathic pain and spasticity in adults with multiple sclerosis (Sativex). This review mentions several possible additional therapeutic targets for cannabinoid receptor agonists. These include other kinds of pain, epilepsy, anxiety, depression, Parkinson's and Huntington's diseases, amyotrophic lateral sclerosis, stroke, cancer, drug dependence, glaucoma, autoimmune uveitis, osteoporosis, sepsis, and hepatic, renal, intestinal and cardiovascular disorders. It also describes potential strategies for improving the efficacy and/or benefit-to-risk ratio of these agonists in the clinic. These are strategies that involve (i) targeting cannabinoid receptors located outside the blood-brain barrier, (ii) targeting cannabinoid receptors expressed by a particular tissue, (iii) targeting upregulated cannabinoid receptors, (iv) selectively targeting cannabinoid CB(2) receptors, and/or (v) adjunctive 'multi-targeting'.
abstract_id: PUBMED:12617697
Therapeutic potential of cannabinoids in CNS disease. The major psychoactive constituent of Cannabis sativa, delta(9)-tetrahydrocannabinol (delta(9)-THC), and endogenous cannabinoid ligands, such as anandamide, signal through G-protein-coupled cannabinoid receptors localised to regions of the brain associated with important neurological processes. Signalling is mostly inhibitory and suggests a role for cannabinoids as therapeutic agents in CNS disease where inhibition of neurotransmitter release would be beneficial. Anecdotal evidence suggests that patients with disorders such as multiple sclerosis smoke cannabis to relieve disease-related symptoms. Cannabinoids can alleviate tremor and spasticity in animal models of multiple sclerosis, and clinical trials of the use of these compounds for these symptoms are in progress. The cannabinoid nabilone is currently licensed for use as an antiemetic agent in chemotherapy-induced emesis. Evidence suggests that cannabinoids may prove useful in Parkinson's disease by inhibiting the excitotoxic neurotransmitter glutamate and counteracting oxidative damage to dopaminergic neurons. The inhibitory effect of cannabinoids on reactive oxygen species, glutamate and tumour necrosis factor suggests that they may be potent neuroprotective agents. Dexanabinol (HU-211), a synthetic cannabinoid, is currently being assessed in clinical trials for traumatic brain injury and stroke. Animal models of mechanical, thermal and noxious pain suggest that cannabinoids may be effective analgesics. Indeed, in clinical trials of postoperative and cancer pain and pain associated with spinal cord injury, cannabinoids have proven more effective than placebo but may be less effective than existing therapies. Dronabinol, a commercially available form of delta(9)-THC, has been used successfully for increasing appetite in patients with HIV wasting disease, and cannabinoid receptor antagonists may reduce obesity. Acute adverse effects following cannabis usage include sedation and anxiety. These effects are usually transient and may be less severe than those that occur with existing therapeutic agents. The use of nonpsychoactive cannabinoids such as cannabidiol and dexanabinol may allow the dissociation of unwanted psychoactive effects from potential therapeutic benefits. The existence of other cannabinoid receptors may provide novel therapeutic targets that are independent of CB(1) receptors (at which most currently available cannabinoids act) and the development of compounds that are not associated with CB(1) receptor-mediated adverse effects. Further understanding of the most appropriate route of delivery and the pharmacokinetics of agents that act via the endocannabinoid system may also reduce adverse effects and increase the efficacy of cannabinoid treatment. This review highlights recent advances in understanding of the endocannabinoid system and indicates CNS disorders that may benefit from the therapeutic effects of cannabinoid treatment. Where applicable, reference is made to ongoing clinical trials of cannabinoids to alleviate symptoms of these disorders.
abstract_id: PUBMED:16014264
Effect of the synthetic cannabinoid dronabinol on central pain in patients with multiple sclerosis--secondary publication. Cannabinoids reduce allodynia/hyperalgesia in animal pain models, but few clinical studies evaluated the analgesic action in humans. We aimed to evaluate the effect of delta-9-tetrahydrocannabinol (dronabinol) on central pain in MS patients. Twenty-four MS patients participated in a double-blind placebo-controlled crossover trial. Dronabinol reduced the spontaneous pain intensity significantly compared with placebo (4.0 (2.3-6.0) vs. 5.0 (4.0-6.4), median (25th-75th percentiles), p = 0.02). Though dronabinol's analgesic effect is modest, its use should be evaluated considering the general difficulty in treating central pain.
abstract_id: PUBMED:10575284
Recent advances in cannabinoid research. Although the active component of cannabis, Delta9-THC, was isolated by our group 35 years ago, until recently its mode of action remained obscure. In the last decade it was established that Delta9-THC acts through specific receptors - CB1 and CB2 - and mimics the physiological activity of endogenous cannabinoids of two types, the best known representatives being arachidonoylethanolamide (anandamide) and 2-arachidonoylglycerol (2-AG). THC is officially used against vomiting caused by cancer chemotherapy and for enhancing appetite, particularly in AIDS patients. Illegally, usually by smoking marijuana, it is used for ameliorating the symptoms of multiple sclerosis, against pain, and in a variety of other diseases. A synthetic cannabinoid, HU-211, is in advanced clinical tests against brain damage caused by closed head injury. It may prove to be valuable against stroke and other neurological diseases.
abstract_id: PUBMED:9543785
Therapeutic applications and biomedical effects of cannabinoids; pharmacological starting points. A broad range of therapeutic applications has been suggested for cannabis or its pharmacologically active compound (tetrahydrocannabinol; THC) in many publications. Psychotropic side effects and the anecdotal character of the research have limited the pharmacotherapeutic use of THC until now. Therefore, the Netherlands Health Council recently decided negatively on this matter. Besides several cannabinoid receptor subtypes present in the central nervous system and peripheral tissues, endogenous cannabinoids have been detected. These endogenous cannabinoids appear to play an important role in signal transduction, which may provide starting points for therapy regarding cardiovascular diseases, multiple sclerosis and spinal cord disorders, cerebrovascular accident and brain trauma, neurodegenerative diseases, epilepsy, pain management, glaucoma, and oncologic and AIDS-related disorders such as nausea, vomiting and appetite problems.
abstract_id: PUBMED:15033046
News about therapeutic use of Cannabis and the endocannabinoid system. Growing basic research in recent years led to the discovery of the endocannabinoid system with a central role in neurobiology. New evidence suggests a therapeutic potential of cannabinoids in cancer chemotherapy-induced nausea and vomiting as well as in pain, spasticity and other symptoms in multiple sclerosis and movement disorders. Results of large randomized clinical trials of oral and sublingual Cannabis extracts will be known soon and there will be definitive answers to whether Cannabis has any therapeutic potential. Although the immediate future may lie in plant-based medicines, new targets for cannabinoid therapy focus on the development of endocannabinoid degradation inhibitors, which may offer site selectivity not afforded by cannabinoid receptor agonists.
abstract_id: PUBMED:15325960
Role of endocannabinoid system in mental diseases. In the last decade, a large number of studies using Delta9-tetrahydrocannabinol (THC), the main active principle of the marijuana plant, or synthetic cannabinoid derivatives have substantially contributed to advance the understanding of the pharmacology and neurobiological mechanisms produced by cannabinoid receptor activation. Cannabis has been historically used to relieve some of the symptoms associated with central nervous system disorders. Nowadays, there is anecdotal evidence for the use of cannabis in many patients suffering from multiple sclerosis or chronic pain. Following the historical reports of the use of cannabis for medicinal purposes, recent research has highlighted the potential of cannabinoids to treat a wide variety of clinical disorders. Some of the disorders being investigated are pain, motor dysfunctions and psychiatric illness. On the other hand, cannabis abuse has been related to several psychiatric disorders such as dependence, anxiety, depression, cognitive impairment, and psychosis. Considering that cannabis or cannabinoid pharmaceutical preparations may no longer be exclusively recreational drugs but may also present potential therapeutic uses, it has become of great interest to analyze the neurobiological and behavioral consequences of their administration. This review attempts to link current understanding of the basic neurobiology of the endocannabinoid system to novel opportunities for therapeutic intervention and its effects on the central nervous system.
abstract_id: PUBMED:18671173
Pain relief with cannabinoids--the importance of endocannabinoids and cannabinoids for pain therapy. The endocannabinoid system reduces sensitization processes. Low doses of cannabinoids may enhance the potency of opioid drugs and reduce the risk of tolerance to opioids. So far no cannabinoid has been approved for the treatment of acute pain due to a lack of consistent data. In contrast, a Cannabis Based Medicine spray consisting of delta9-tetrahydrocannabinol and cannabidiol has been approved for the treatment of neuropathic pain in patients with multiple sclerosis. The adjunct of cannabidiol and the oromucosal formulation increase the therapeutic index of delta9-tetrahydrocannabinol. The differentiation between analgesic and cannabimimetic effects may be improved when compounds--such as ajulemic acid--are used that preferentially act on peripheral cannabinoid receptors and exert receptor-independent effects. A further approach in this direction is the use of enzymes which metabolize endocannabinoids.
abstract_id: PUBMED:12137404
Cannabinoids in the treatment of pain and spasticity in multiple sclerosis. There is a large amount of evidence to support the view that the psychoactive ingredient in cannabis, delta9-tetrahydrocannabinol (delta9-THC), and cannabinoids in general, can reduce muscle spasticity and pain under some circumstances. Cannabinoid (CB1) receptors in the CNS appear to mediate both of these effects and endogenous cannabinoids may fulfil these functions to some extent under normal circumstances. However, in the context of multiple sclerosis (MS), it is still questionable whether cannabinoids are superior to existing, conventional medications for the treatment of spasticity and pain. In the case of spasticity, there are too few controlled clinical trials to draw any reliable conclusion at this stage. In the case of pain, most of the available trials suggest that cannabinoids are not superior to existing treatments; however, few trials have examined chronic pain syndromes that are relevant to MS. Whether or not cannabinoids do have therapeutic potential in the treatment of MS, a further issue will be whether synthetic cannabinoids should be used in preference to cannabis itself. Smoking cannabis is associated with significant risks of lung cancer and other respiratory dysfunction. Furthermore, delta9-THC, as a broad-spectrum cannabinoid receptor agonist, will activate both CB1 and CB2 receptors. Synthetic cannabinoids, which target specific cannabinoid receptor subtypes in specific parts of the CNS, are likely to be of more therapeutic use than delta9-THC itself. If rapid absorption is necessary, such synthetic drugs could be delivered via aerosol formulations.
Answer: Yes, the cannabinoid dronabinol has been shown to reduce central pain in patients with multiple sclerosis. A double-blind placebo-controlled crossover trial involving 24 MS patients found that dronabinol significantly reduced the intensity of spontaneous pain compared with placebo (PUBMED:16014264). |
Instruction: Are impairments of action monitoring and executive control true dissociative dysfunctions in patients with schizophrenia?
Abstracts:
abstract_id: PUBMED:14514505
Are impairments of action monitoring and executive control true dissociative dysfunctions in patients with schizophrenia? Objective: Impaired self-monitoring is considered a critical deficit of schizophrenia. The authors asked whether this is a specific and isolable impairment or is part of a global disturbance of cognitive and attentional functions.
Method: Internal monitoring of erroneous actions, as well as three components of attentional control (conflict resolution, set switching, and preparatory attention) were assessed during performance of a single task by eight high-functioning patients with schizophrenia and eight comparison subjects.
Results: The patients exhibited no significant dysfunction of attentional control during task performance. In contrast, their ability to correct errors without external feedback and, by inference, to self-monitor their actions was markedly compromised.
Conclusions: This finding suggests that dysfunction of self-monitoring in schizophrenia does not necessarily reflect a general decline in cognitive function but is evidence of disproportionately pronounced impairment of action monitoring, which may be mediated by a distinct subsystem within the brain's executive attention networks.
abstract_id: PUBMED:29560881
Neuroimaging Intermediate Phenotypes of Executive Control Dysfunction in Schizophrenia. Genetic risk for schizophrenia is associated with impairments in the initiation and performance of executive control of cognition and action. The nature of these impairments and of the neural dysfunction that underlies them has been extensively investigated using experimental psychology and neuroimaging methods. In this article, we review schizophrenia-associated functional connectivity and activation abnormalities found in subjects performing experimental tasks that engage different aspects of executive function, such as working memory, cognitive control, and response inhibition. We focus on heritable traits associated with schizophrenia risk (intermediate phenotypes or endophenotypes) that have been revealed using imaging genetics approaches. These data suggest that genetic risk for schizophrenia is associated with dysfunction in systems supporting the initiation and application of executive control in neural circuits involving the anterior cingulate and dorsolateral prefrontal cortex. This article discusses current findings and limitations and their potential relevance to symptoms and disease pathogenesis.
abstract_id: PUBMED:33389057
Co-occurrence of schizo-obsessive traits and its correlation with altered executive control network functional connectivity. The prevalence of obsessive-compulsive symptoms (OCS) in schizophrenia patients is around 30%. Evidence suggested that mild OCS could reduce symptoms of schizophrenia, supporting the presence of compensatory functions. However, severe OCS could aggravate various impairments in schizophrenia patients, supporting the "double jeopardy hypothesis". Patients with schizo-obsessive comorbidity, schizophrenia patients and obsessive-compulsive disorder patients have been found to have similarities in executive dysfunctions and altered resting-state functional connectivity within the executive control network (ECN). Executive functions could be associated with the ECN. However, little is known as to whether such overlap exists in the subclinical populations of individuals with schizo-obsessive traits (SOT), schizotypal individuals and individuals with high levels of obsessive-compulsive symptoms (OCS). In this study, we recruited 30 schizotypal individuals, 25 individuals with OCS, 29 individuals with SOT and 29 controls for a resting-state ECN-related functional connectivity (rsFC) assessment and a go/shift/no-go task. We found that individuals with SOT exhibited increased rsFC within the ECN compared with controls, while schizotypal individuals exhibited the opposite. Individuals with OCS exhibited decreased rsFC within the ECN and between the ECN and the default mode network (DMN), relative to controls. No significant correlations between altered ECN-related rsFC and executive function performance were found after corrections for multiple comparisons in the three subclinical groups. Our findings showed that individuals with SOT had increased rsFC within the ECN, while schizotypal individuals and individuals with OCS showed the opposite. Our findings provide evidence for possible neural substrates of subclinical comorbidity of OCS and schizotypy.
abstract_id: PUBMED:20076939
Functional MRI in schizophrenia. Diagnostics and therapy monitoring of cognitive deficits of schizophrenic patients by functional MRI. Cognitive impairments are core psychopathological components of the symptomatology of schizophrenic patients. These dysfunctions are generally related to attention, executive functions and memory. This report provides information on the importance of using functional magnetic resonance imaging (fMRI) for the diagnostics and therapy monitoring of the different subtypes of cognitive dysfunctions. Furthermore, it describes the typical differences in the activation of individual brain regions between schizophrenic patients and healthy control persons. This information should be helpful in identifying the deficit profile of each patient and creating an individual therapy plan.
abstract_id: PUBMED:20537400
Executive control in schizophrenia in tasks involving semantic inhibition and working memory. Executive dysfunctions have been consistently demonstrated in patients with schizophrenia. This study aimed to investigate deficits in specific executive functioning components, namely working memory and inhibition, in schizophrenia. In study 1, a set of neurocognitive function tests was administered to 41 patients with schizophrenia and 25 healthy controls to capture specific components of executive functioning, including semantic inhibition (the Stroop-like paradigm and the Chinese Version of the Hayling Sentence Completion Test (HSC)), working memory (the spatial n-back), and response inhibition (the stop signal task (SST)). Results showed that schizophrenia patients did significantly worse than controls under both working memory and inhibition demands in the Stroop-like paradigm. In particular, patients were impaired when inhibiting a semantically associated response, and performance was correlated with negative symptoms. In study 2, we employed a modified semantic inhibitory error monitoring paradigm to examine whether patients with schizophrenia (n=11) were impaired in semantic inhibitory error monitoring compared with 11 healthy controls. The results suggested that patients with schizophrenia in this study remained intact in semantic inhibition error monitoring. There was no difference in the semantic inhibitory monitoring performance between healthy controls and patients with schizophrenia. Taken together, these results suggested impaired working memory context maintenance and semantic inhibition in schizophrenia patients, and these impairments were related to clinical symptoms of schizophrenia.
abstract_id: PUBMED:26691725
Shared and divergent neurocognitive impairments in adult patients with schizophrenia and bipolar disorder: Whither the evidence? Recent data from genetic and brain imaging studies have urged rethinking of bipolar disorder (BD) and schizophrenia (SCZ) as lying along a continuum of major endogenous psychoses rather than dichotomous disorders. We systematically reviewed extant studies (from January 2000 to July 2015) that directly compared neurocognitive impairments in adults with SCZ and BD. Within 36 included studies, comparable neurocognitive impairments were found in SCZ and BD involving executive functioning, working memory, verbal fluency and motor speed. The extent and severity of neurocognitive impairments in patients with schizoaffective disorder, and BD with psychotic features occupy positions intermediate between SCZ and BD without psychotic features, suggesting spectrum of neurocognitive impairments across psychotic spectrum conditions. Neurocognitive impairments correlated with socio-demographic (lower education), clinical (more hospitalizations, longer duration of illness, negative psychotic symptoms and non-remission status), treatment (antipsychotics, anti-cholinergics) variables and lower psychosocial functioning. The convergent neurocognitive findings in both conditions support a continuum concept of psychotic disorders and further research is needed to clarify common and dissimilar progression of specific neurocognitive impairments longitudinally.
abstract_id: PUBMED:21076791
Does dissociative schizophrenia exist? On a phenomenological level, there is an important overlap between dissociative and psychotic symptoms. Furthermore, traumatic etiology, recognized in dissociative disorders, is also increasingly considered in psychosis. These similarities create confusion in clinical settings, with important repercussions for individuals suffering from these disorders. Indeed, difficulties encountered in differential diagnosis could result in an erroneous diagnosis or in an undetected comorbidity. Some authors go further, suggesting that there is a subtype of schizophrenia in which dissociation underlies the expression of psychotic symptoms.
abstract_id: PUBMED:27543828
A pilot randomized controlled trial of the Occupational Goal Intervention method for the improvement of executive functioning in patients with treatment-resistant schizophrenia. Schizophrenia is a chronic disabling mental disorder that involves impairments in several cognitive domains, especially in executive functions (EF), as well as impairments in functional performance. This is particularly true in patients with Treatment-Resistant Schizophrenia (TRS). The aim of this study was to test the efficacy of the Occupational Goal Intervention (OGI) method for the improvement of EF in patients with TRS. In this randomized, controlled, single-blind pilot study, 25 TRS patients were randomly assigned to attend 30 sessions of either OGI or craft activities (control) over a 15-week period and evaluated by the Behavioural Assessment of the Dysexecutive Syndrome (BADS) as the primary outcome and the Direct Assessment of Functional Status (DAFS-BR) as well as the Independent Living Skills Survey (ILSS-BR) as secondary outcomes, all adapted for the Brazilian population. The Positive and Negative Syndrome Scale (PANSS) was used for monitoring symptom severity. Results showed significant statistical differences, favoring the OGI group in terms of improvement on the BADS, both in subtests (Action Program and Key Search) and the total score. Improvements in EFs were observed by families in various dimensions as measured by different subtests of the ILSS-BR inventory. The OGI group showed no significant results in secondary outcomes (DAFS-BR) except in terms of improvement of communication skills. Although preliminary, our results indicate that the OGI method is efficacious and effective for patients with TRS.
abstract_id: PUBMED:28117515
Multimodal investigation of triple network connectivity in patients with 22q11DS and association with executive functions. Large-scale brain networks play a prominent role in cognitive abilities and their activity is impaired in psychiatric disorders, such as schizophrenia. Patients with 22q11.2 deletion syndrome (22q11DS) are at high risk of developing schizophrenia and present similar cognitive impairments, including executive function deficits. Thus, 22q11DS represents a model for the study of neural biomarkers associated with schizophrenia. In this study, we investigated structural and functional connectivity within and between the Default Mode (DMN), the Central Executive (CEN), and the Saliency network (SN) in 22q11DS using resting-state fMRI and DTI. Furthermore, we investigated if triple network impairments were related to executive dysfunctions or the presence of psychotic symptoms. Sixty-three patients with 22q11DS and sixty-eight controls (age 6-33 years) were included in the study. Structural connectivity between main nodes of the DMN, CEN, and SN was computed using probabilistic tractography. Functional connectivity was computed as the partial correlation between the time courses extracted from each node. Structural and functional connectivity measures were then correlated to executive functions and psychotic symptom scores. Our results showed mainly reduced structural connectivity within the CEN, DMN, and SN in patients with 22q11DS compared with controls, as well as reduced between-network connectivity. Functional connectivity appeared to be more preserved, with impairments being evident only within the DMN. Structural connectivity impairments were also related to executive dysfunctions. These findings show an association between triple network structural alterations and executive deficits in patients with the microdeletion, suggesting that 22q11DS and schizophrenia share common psychopathological mechanisms.
abstract_id: PUBMED:24995025
Reviewing the Dissociative Symptoms in Patients With Schizophrenia and their Association With Positive and Negative Symptoms. Objective: The present study aimed to clarify dissociative symptoms in patients with schizophrenia and their association with negative and positive symptoms of schizophrenia.
Methods: Based on the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) criteria, 80 schizophrenic subjects were randomly selected from patients referred to the psychiatric hospital clinics in Kerman, Iran. Eighty non-schizophrenic patients were chosen as the control group. Both groups were evaluated for dissociative symptoms using the Dissociative Experience Scale (DES). The Positive and Negative Syndrome Scale (PANSS) score was also used in the case group to determine positive and negative symptoms of schizophrenia.
Results: The prevalence of dissociative symptoms was 13% and 4% in the schizophrenic and control groups, respectively (p = 0.02). In addition, there was a statistically significant association between DES score and positive symptoms in schizophrenia (p = 0.02).
Conclusion: The association between dissociative symptoms and schizophrenia was significant, and dissociative symptoms were associated with positive symptoms of schizophrenia.
Declaration Of Interest: None.
Answer: Impairments of action monitoring and executive control in patients with schizophrenia are not necessarily true dissociative dysfunctions. The study by PUBMED:14514505 found that while patients with schizophrenia exhibited a markedly compromised ability to correct errors without external feedback, suggesting an impairment in self-monitoring of actions, this did not reflect a general decline in cognitive function. Instead, it indicated a disproportionately pronounced impairment of action monitoring, which may be mediated by a distinct subsystem within the brain's executive attention networks.
Neuroimaging studies have associated genetic risk for schizophrenia with impairments in the initiation and performance of executive control of cognition and action, involving neural circuits that include the anterior cingulate and dorsolateral prefrontal cortex (PUBMED:29560881). This suggests that executive control dysfunction in schizophrenia is related to specific neural abnormalities rather than dissociative phenomena.
Furthermore, the co-occurrence of schizo-obsessive traits has been linked to altered functional connectivity within the executive control network (ECN), indicating that executive dysfunctions in schizophrenia may overlap with other psychiatric conditions, such as obsessive-compulsive disorder (PUBMED:33389057).
Studies have also shown that schizophrenia patients exhibit deficits in specific executive functioning components, such as working memory and inhibition, which are related to clinical symptoms of schizophrenia (PUBMED:20537400). These findings support the idea that executive control impairments are integral to the cognitive deficits observed in schizophrenia rather than being dissociative in nature.
The concept of a continuum of neurocognitive impairments across psychotic spectrum conditions, including schizophrenia and bipolar disorder, further supports the notion that executive control dysfunctions are shared neurocognitive impairments rather than dissociative symptoms (PUBMED:26691725).
While there is an overlap between dissociative and psychotic symptoms, and some authors suggest the existence of a subtype of schizophrenia with dissociation behind the expression of psychotic symptoms (PUBMED:21076791), the evidence points more towards executive control impairments being a core feature of the cognitive deficits in schizophrenia rather than dissociative dysfunctions. This is further supported by interventions aimed at improving executive functions in patients with treatment-resistant schizophrenia, which have shown efficacy (PUBMED:27543828). |
Instruction: Partial nephrectomy: is there an advantage of the self-retaining barbed suture in the perioperative period?
Abstracts:
abstract_id: PUBMED:29448861
Self-retaining barbed suture reduces warm ischemia time during laparoscopic partial nephrectomy. Objective: To evaluate the efficacy and safety of self-retaining barbed suture in renorrhaphy during laparoscopic partial nephrectomy by comparing surgical outcomes in a prospective randomized manner.
Material And Methods: From July 2014 to July 2015, a total of 60 patients with T1 renal tumor were randomized into two equal groups: self-retaining barbed suture (SRBS) and conventional absorbable polyglactin suture (non-SRBS group). All patients were treated by retroperitoneal laparoscopic partial nephrectomy. One surgeon with high volume experience performed all procedures. The patient demographics and perioperative outcomes were compared.
Results: The patient demographics and tumor characteristics were comparable. The mean tumor size and R.E.N.A.L. scores were comparable between the two groups. LPN was successfully accomplished in all patients without open conversion. The warm ischemia and renorrhaphy times were significantly shorter in the SRBS group (18.8 ± 8.2 vs. 22.9 ± 7.3 min, P = .04; 10.4 ± 3.7 vs. 13.8 ± 5.6 min, P = .01). The minor complication rate was 13.3% vs. 10.0%, which was comparable. No major complication occurred.
Conclusions: The randomized controlled trial demonstrates that SRBS for renorrhaphy during retroperitoneal laparoscopic partial nephrectomy is safe and efficient. Application of barbed suture simplifies the parenchymal repair procedure and reduces warm ischemia time in comparison with conventional suture.
abstract_id: PUBMED:31215335
The Self-Retaining Barbed Suture for Parenchymal Repair in Laparoscopic Partial Nephrectomy: A Systematic Review and Meta-Analysis. Objectives. The warm ischemia time (WIT) is key to successful laparoscopic partial nephrectomy (LPN). The aim of this study was to perform a meta-analysis comparing the self-retaining barbed suture (SRBS) with a non-SRBS for parenchymal repair during LPN. Methods. A systematic search of PubMed, Scopus, and the Cochrane Library was performed up to March 2018. Inclusion criteria for this study were randomized controlled trials (RCTs) and observational comparative studies assessing the SRBS and non-SRBS for parenchymal repair during LPN. Outcomes of interest included WIT, complications, overall operative time, estimated blood loss, length of hospital stay, and change of renal function. Results. One RCT and 7 retrospective studies were identified, which included a total of 461 cases. Compared with the non-SRBS, use of the SRBS for parenchymal repair during LPN was associated with shorter WIT (P < .00001), shorter overall operative time (P < .00001), lower estimated blood loss (P = .02), and better renal function preservation (P = .001). There was no significant difference between the SRBS and non-SRBS with regard to complications (P = .08) and length of hospital stay (P = .25). Conclusions. The SRBS for parenchymal repair during LPN can significantly shorten the WIT and overall operative time, decrease blood loss, and preserve renal function.
abstract_id: PUBMED:31649463
Retrospective comparison of outcomes of laparoscopic pyeloplasty using barbed suture versus nonbarbed suture: A single-center experience. Introduction: Laparoscopic pyeloplasty is an important tool in the urology armamentarium. The most important and also the most difficult part of this surgery is intracorporeal suturing and knotting. There are only a few reports of knotless barbed sutures for upper tract reconstruction. We report the comparative outcomes of laparoscopic pyeloplasty with barbed versus non-barbed sutures used for the ureteropelvic anastomosis.
Materials And Methods: We retrospectively reviewed the records of patients who underwent laparoscopic pyeloplasty at our institution from January 2013 to May 2014. A total of 37 patients underwent LP in this period. The procedure was the same as conventional LP except for the suture material: 3-0 barbed suture was used in 21 patients and 3-0 Vicryl in 16 patients for the ureteropelvic anastomosis, and a continuous suturing technique was employed. Patient demographics, total operative time, intracorporeal suturing time, postoperative complications, symptoms and renal isotope scan findings were recorded.
Results: Average total operative time was significantly less in the barbed suture group than in the Vicryl group (162 vs 208 minutes) (p=0.0811). Average time taken for intracorporeal suturing was 31.2 minutes vs 70 minutes (p=0.0576). One patient in the barbed group (4.76%) developed a postoperative urine leak that persisted for 5 days, versus no leak in the Vicryl group. The most common complication was UTI, seen in 2 patients (9.5%) in the barbed group vs 2 (12.5%) in the Vicryl group. The JJ stent was removed at 4 weeks. Median follow-up was 3 months, with 7 patients lost to follow-up. None of the patients was found to have obstructive drainage or deterioration of split function on the follow-up isotope renogram at 3 months.
Conclusions: In this study, laparoscopic pyeloplasty with barbed suture had acceptable short-term outcomes compared with conventional non-barbed suture. Laparoscopic pyeloplasty with barbed suture can potentially become the standard approach in the near future.
abstract_id: PUBMED:22956042
Partial nephrectomy: is there an advantage of the self-retaining barbed suture in the perioperative period? A matched case-control comparison. Objective: To evaluate the efficacy of the self-retaining barbed suture (SRBS) in renal defect repair during partial nephrectomy (PN), by assessing perioperative outcomes.
Methods: From June 2010 on we have been using the SRBS for superficial layer closure during open and laparoscopic PN in two European centers. These data were collected prospectively and matched with historical PN cases performed with conventional suture. Cases were matched for PADUA score, surgical approach (laparoscopic or open) and the center where surgery was performed. Comparisons were made in patient characteristics and perioperative outcomes including warm ischemia time (WIT), changes in hemoglobin (Hb), changes in estimated glomerular filtration rate (eGFR) and perioperative complications between the SRBS and non-SRBS groups. Statistical tests of significance were performed using Student's t test and chi-square test for continuous and categorical variables, respectively.
Results: Thirty-one consecutive cases of PN under WIT were performed with SRBS. These cases were matched with cases from the historical database of PN performed with conventional suture. The rate of perioperative complications was statistically significantly lower in the SRBS cohort (6.5% vs. 22.6%, p = 0.038). Mean ischemia time was 19.6 min (SD, 7.5) in the SRBS group versus 21.8 min (SD, 9.5) in the conventional suture group (p = 0.312). There were no significant differences between groups for postoperative changes in creatinine, eGFR and Hb. Limitations of this study include the absence of randomization and the relatively small sample size.
Conclusions: SRBS can be safely used during partial nephrectomy. SRBS reduces significantly the number of perioperative complications.
abstract_id: PUBMED:24856489
Application of self-retaining bidirectional barbed absorbable suture in retroperitoneoscopic partial nephrectomy. Objective: To investigate the safety and feasibility of self-retaining bidirectional barbed absorbable suture application in retroperitoneoscopic partial nephrectomy.
Materials And Methods: From Sep 2011 and Aug 2012, 76 cases of retroperitoneoscopic partial nephrectomy were performed at our hospital. The patients were divided into two groups: self-retaining barbed suture (SRBS) group (n = 36) and non-SRBS group (n = 40). There was no significant difference in age, sex, tumor size and location between the two groups. Clinical data and outcomes were analyzed retrospectively.
Results: All 76 cases of retroperitoneoscopic partial nephrectomy were successfully performed, without conversion to open surgery or serious intraoperative complications. In the SRBS group, the suture time and warm ischemia time were significantly shorter and operative blood loss significantly lower than in the non-SRBS group (p < 0.01), and operation time and hospital stay were shorter than in the non-SRBS group (p < 0.05).
Conclusions: The application of self-retaining bidirectional barbed absorbable suture in retroperitoneoscopic partial nephrectomy can shorten suture time and warm ischemia time, with good safety and feasibility, and is worthy of clinical use.
abstract_id: PUBMED:30630449
The application of barbed suture during the partial nephrectomy may modify perioperative results: a systematic review and meta-analysis. Background: Barbed sutures can avoid knot tying and speed suture placement in partial nephrectomy (PN). Because the impact on clinical outcomes is ambiguous, this study aimed to evaluate the application of barbed suture during PN.
Methods: ClinicalTrials.gov, the Cochrane Register of Clinical Studies, PubMed and EMBASE were searched for RCTs (randomized controlled trials) and cohort studies comparing barbed and traditional sutures in PN (last updated in February 2015). Quality assessment was performed according to Cochrane Library recommendations. Review Manager was used to analyze the data, and sensitivity analyses were performed by omitting each study sequentially.
Results: Eight cohort studies and no RCTs proved eligible (risk of bias: moderate to low; 431 patients). Warm ischemia time (MD = -6.55, 95% CI -8.86 to -4.24, P < 0.05) was significantly reduced in the barbed suture group, as was operative time (MD = -11.29, 95% CI -17.87 to -4.71, P < 0.05). Postoperative complications were also significantly reduced (OR = 0.44, 95% CI 0.24 to 0.80, P < 0.05). Unidirectional barbed suture resulted in fewer postoperative complications in the subgroup analysis (OR = 0.48, 95% CI 0.24 to 0.94, P < 0.05).
Conclusions: The barbed suture may be a useful surgical innovation that can modify perioperative results for surgeons and patients. Randomized studies with longer follow-up and larger sample sizes are needed to explore its applicability.
abstract_id: PUBMED:21094097
Use of bidirectional barbed suture in laparoscopic myomectomy: evaluation of perioperative outcomes, safety, and efficacy. Study Objective: To compare perioperative outcomes during laparoscopic myomectomy using a bidirectional barbed suture vs conventional smooth suture.
Design: Retrospective analysis of 138 consecutive laparoscopic myomectomies performed by a single surgeon over 3 years (Canadian Task Force classification II-2).
Setting: Major university teaching hospital.
Patients: One hundred thirty-eight women with symptomatic uterine myomas.
Interventions: In women undergoing laparoscopic myomectomy from February 2007 through April 2010, conventional smooth sutures were used in 31 patients, and bidirectional barbed suture in 107 patients.
Measurements And Main Results: The primary indications for laparoscopic myomectomy in either group were pelvic pain or pressure and abnormal uterine bleeding. Use of bidirectional barbed suture was found to significantly shorten the mean (SD) duration of surgery (118 [53] minutes vs 162 [69] minutes; p <.05) and reduce the duration of hospital stay (0.58 [0.46] days vs 0.97 [0.45] days; p <.05). No significant differences were observed between the 2 groups insofar as incidence of perioperative complications, estimated blood loss, and number or weight of myomas removed during surgery.
Conclusion: Use of bidirectional barbed suture seems to facilitate closure of the hysterotomy site in laparoscopic myomectomy.
abstract_id: PUBMED:21952745
Perioperative closure-related complication rates and cost analysis of barbed suture for closure in TKA. Background: The use of barbed suture for surgical closure has been associated with lower operative times, equivalent wound complication rate, and comparable cosmesis scores in the plastic surgery literature. Similar studies would help determine whether this technology is associated with low complication rates and reduced operating times for orthopaedic closures.
Questions/purposes: We compared a running barbed suture with an interrupted standard suture technique for layered closure in primary TKA to determine if the barbed suture would be associated with (1) shorter estimated closure times; (2) lower cost; and (3) similar closure-related perioperative complication rates.
Methods: We retrospectively compared two-layered closure techniques in primary TKA with either barbed or knotted sutures. The barbed group consisted of 104 primary TKAs closed with running barbed suture. The standard group consisted of 87 primary TKAs closed with interrupted suture. Cost analysis was based on cost of suture and operating room time. Clinical records were assessed for closure-related complications within the 6-week perioperative period.
Results: Average estimated closure time was 2.3 minutes shorter with the use of barbed suture. The total closure cost was similar between the groups. The closure-related perioperative complication rates were similar between the groups.
Conclusions: Barbed suture is associated with a slightly shorter estimated closure time, although this small difference is of questionable clinical importance. With similar overall cost and no difference in perioperative complications in primary TKA, this closure methodology has led to more widespread use at our institution.
abstract_id: PUBMED:38312973
Does Unidirectional Polyglycolide Caprolactone Barbed Suture Have Improved Outcomes Over Non-barbed Suture When Applied for Intra-oral Incision Closure? Purpose: The choice of wound closure material may influence the clinical outcomes of intra-oral incision closure. Studies evaluating the application of barbed suture in the oral cavity are scarce. Hence, the present study was carried out with the aim to monitor and compare the efficacy and ease of handling of monofilament polyglycolide caprolactone (PGCL) unidirectional barbed and non-barbed sutures used for intra-oral incision closure in patients undergoing transalveolar extraction of impacted mandibular third molar and mandible fracture open reduction internal fixation.
Methods: A prospective randomized open label study was carried out among subjects requiring intra-oral incision closure following mandibular third molar extraction and isolated mandible fracture fixation. The difficulty index of the impacted third molars was evaluated pre-operatively. Subjects were randomized to receive either 3-0 monofilament PGCL unidirectional barbed or non-barbed sutures. Incision closure time and ease of suture handling were recorded intra-operatively. Post-operatively, patients were monitored for incision healing using the Hollander wound evaluation scale (HWES) and intensity of pain using visual analog scale (VAS) on post-operative days 1, 3 and 7. Data analysis involved descriptive statistics, Chi-square, unpaired t test and multivariate analysis using the IBM SPSS-PC software (v.25.0).
Results: A total of 60 subjects completed the study protocol, who were randomized into two groups (n1 = n2 = 30), comparable in terms of age, gender and treatment (TAE = 51; ORIF = 9) received. The incision healing outcomes were significantly better (p = 0.016) with barbed suture using HWES on day 7. The mean closure time using barbed suture (142.50 ± 34.803 secs) was significantly (p = 0.001) shorter than that with non-barbed suture (204.56 ± 52.94 secs). The mean VAS for the barbed suture (0.97 ± 1.89) was less (p = 0.015, 95% CI) than the non-barbed suture (2.50 ± 2.91) on day 3. The suture handling ease was comparable between the two groups.
Conclusion: Monofilament unidirectional PGCL barbed suture has merits over the non-barbed suture with regards to superior post-operative incision healing, reduced incision closure time (43%), lower post-operative pain and comparable ease of suture handling. Hence, knotless PGCL suture is a promising alternative for intra-oral surgical incision closure in oral and maxillofacial surgery.
abstract_id: PUBMED:21991931
Self-retaining barbed suture for parenchymal repair during laparoscopic partial nephrectomy; initial clinical experience. Objectives: To evaluate the efficacy of a self-retaining barbed suture (SRBS) in achieving a secure and haemostatic renorrhaphy during laparoscopic partial nephrectomy (LPN). To compare perioperative outcomes for LPN with SRBS with those for LPN with conventional polyglactin suture, with specific attention to warm ischaemia time, blood loss and need for postoperative bleeding interventions.
Patients And Methods: Patients who underwent LPN between June 2007 and October 2010 were identified through an Institutional Review Board approved registry of oncological patients. Before July 2009, parenchymal repair after tumour excision was performed using absorbable polyglactin suture (Group 1), and subsequently, using SRBS (Group 2). Demographic, clinical, intraoperative and postoperative outcomes were compared for each group.
Results: LPN was performed in 49 patients in Group 1 and 29 in Group 2. Baseline demographic and clinical features, estimated blood loss, and transfusion and embolization rates were statistically similar for the cohorts. Mean warm ischaemia time (±SD) was significantly shorter for the SRBS group (26.4 ± 8.3 vs 32.8 ± 7.9; P= 0.0013). Bleeding requiring intervention (open conversion or transfusion ± embolization) was more common for Group 1 (9/49, 18.4% vs 1/29, 3.4%; P= 0.06).
Conclusions: The use of SRBS for parenchymal repair during LPN in humans is safe and is associated with a significant reduction in warm ischaemia time. SRBS use during LPN may also reduce rates of clinically significant bleeding. Prospective, larger studies to confirm the value of SRBS use in minimally invasive partial nephrectomy are warranted.
Answer: Yes, there is evidence suggesting that the use of self-retaining barbed suture (SRBS) during partial nephrectomy offers advantages in the perioperative period. Several studies have reported that SRBS can reduce warm ischemia time (WIT) and overall operative time when compared to conventional sutures. For instance, a randomized controlled trial demonstrated that SRBS for renorrhaphy during retroperitoneal laparoscopic partial nephrectomy is safe and efficient, reducing warm ischemia time and renorrhaphy times significantly (PUBMED:29448861). A systematic review and meta-analysis also found that SRBS for parenchymal repair during laparoscopic partial nephrectomy was associated with shorter WIT, shorter overall operative time, lower estimated blood loss, and better renal function preservation (PUBMED:31215335).
Moreover, retrospective comparisons have shown that laparoscopic pyeloplasty using barbed suture resulted in significantly less total operative time and intracorporeal suturing time compared to non-barbed sutures (PUBMED:31649463). Another matched case-control comparison found that the rate of perioperative complications was statistically significantly lower in the SRBS cohort, although the mean ischemia time difference was not statistically significant (PUBMED:22956042). Additionally, the application of SRBS in retroperitoneoscopic partial nephrectomy could shorten suture time and warm ischemia time, with good safety and feasibility (PUBMED:24856489).
A systematic review and meta-analysis also indicated that the application of barbed suture during partial nephrectomy may reduce warm ischemia time, operative time, and postoperative complications (PUBMED:30630449). Furthermore, the use of bidirectional barbed suture in laparoscopic myomectomy, which is a different surgical procedure, was found to significantly shorten the duration of surgery and reduce the duration of hospital stay, suggesting its potential benefits in other surgical contexts as well (PUBMED:21094097).
In summary, the use of SRBS during partial nephrectomy appears to offer several perioperative advantages, including reduced warm ischemia time, operative time, and potentially lower complication rates, which can contribute to improved surgical outcomes and patient recovery. |
Instruction: Can myringoplasty close the air-bone gap?
Abstracts:
abstract_id: PUBMED:33570432
Association Between the Air-Bone Gap and Vibration of the Tympanic Membrane After Myringoplasty. Air-bone gap (ABG) is an important indicator of hearing status after myringoplasty. A number of factors have been associated with ABG, but some patients still have ABG without identifiable cause. This study aimed to evaluate the relationship between tympanic membrane (TM) vibration using laser Doppler vibrometry (LDV) and ABG after myringoplasty. Between January 2013 and January 2015, 24 patients with ABG of unknown cause after myringoplasty were enrolled at the Beijing Tongren Hospital. Thirty normal controls were recruited from the hospital staff. All patients underwent primary overlay myringoplasty. Pre- and postoperative air conduction (AC) and bone-conduction (BC) thresholds, and ABG were measured. Umbo velocity transfer function (UVTF) for vibration of TM was measured with LDV. Air conduction thresholds were significantly reduced after myringoplasty (all P < .05), while BC thresholds were not significantly changed (all P > .05). ABG was significantly reduced after myringoplasty (all P < .05). Air-bone gap was correlated with UVTF at 1.0 kHz (r = -0.46; P = .024). For patients with UVTF >0.08 mm/s/Pa, ABG was correlated with UVTF (r = -0.56; P = .029). For post-myringoplasty ABG without readily observable causes, there was a significant relationship between ABG and TM vibration. These results provide new insights in the understanding of this relationship and may help explain ABG after myringoplasty when there are no clear contributing factors.
abstract_id: PUBMED:23652328
Can myringoplasty close the air-bone gap? Objective: The aim of this study is to evaluate whether closure of a tympanic membrane perforation with an intact ossicular chain results in a closure of the air-bone gap.
Study Design: Prospectively collected data from 154 patients undergoing temporalis fascia myringoplasty for chronic otitis media simplex were identified.
Setting: Tertiary referral center.
Patients: Between 2001 and 2009, overall, 106 patients with a central tympanic membrane perforation and, an intact ossicular chain were further analyzed.
Interventions: All patients underwent myringoplasty using temporalis fascia in an underlay technique.
Main Outcome Measures: Comparison of the preoperative and postoperative hearing results in patients undergoing myringoplasty for chronic otitis media simplex.
Results: The mean postoperative air-bone gap (ABG) was 8.2 dB for the frequencies 0.5 to 4 kHz. Eighty-three patients (78%) showed postoperatively a mean ABG of 10 dB or lower. The ABG difference (improvement) was statistically significant for each single frequency (0.5, 1, 2, 3, and 4 kHz) (p < 0.0001). There is a linear correlation between the preoperative tympanic membrane perforation size and the postoperative ABG (p = 0.0017) for the frequencies 0.5 to 4 kHz. No statistical significant correlation was seen between the state of the middle-ear mucosa, temporal bone pneumatization, tympanometric middle-ear/mastoid volume, and the postoperative ABG.
Conclusion: Complete ABG closure by myringoplasty could be achieved in only approximately 20% of cases; the remaining 80% presented with a mean residual ABG of 8 dB. We found a significant linear correlation between the preoperative size of the tympanic membrane perforation and the postoperative ABG, whereas mastoid volume, temporal bone pneumatization, and the condition of the mucosa did not affect the outcome.
abstract_id: PUBMED:35999674
Exploring Mechanisms Underlying Unexplained Air-Bone Gaps Post-Myringoplasty: Temporal Bone Model and Finite Element Analysis. Purpose: Air-bone gap (ABG) is an essential indicator of middle ear transfer function after myringoplasty. However, there is still uncertainty about the mechanisms behind unexplained ABGs in patients post-myringoplasty. The present study investigated these mechanisms using cadaveric temporal bone (TB) measurement and finite element (FE) modeling.
Methods: Three conditions of tympanic membrane (TM) perforation were modeled with a perforated area of 6%, 24%, and 50% of the total TM area to simulate a small, medium, or large TM perforation of TB model. A piece of paper was used to patch the TM perforation to simulate the situation post-myringoplasty. In the FE model for post-operation, the material properties at the perforation area were changed. Measurement of TM vibration at the umbo was undertaken with a laser Doppler vibrometer (LDV).
Results: As the perforated area increased, vibration of the TM at the umbo decreased in both the TB and FE models, although the reduction in TM vibration was smaller in the FE model than in the TB model. After the perforation was repaired, the displacement of the TM at the umbo did not recover fully in either the TB or the FE model. In the FE model, the displacement of the TM at the umbo decreased markedly when the cone shape of the TM flattened, and the reduction was almost the same as that in the TB model in the condition of a large perforation.
Conclusion: The material properties and the anatomical shape of the repaired TM could influence the TM's modal motion and wave motion. Except for appearance and shape, current clinical instruments are unable to resolve the factors that affect TM motion. Consequently, the ABG seen post-myringoplasty remains unexplained.
abstract_id: PUBMED:31750166
Microscopic Versus Endoscopic Myringoplasty: A comparative study. To compare the results of myringoplasty using the operating microscope (postaural) with those of myringoplasty using the endoscope (permeatal). Our study was conducted in the Department of ENT at Chirayu Medical College and Hospital. A total of 60 patients aged 18-60 years with chronic otitis media or trauma with a central perforation were included. Patients were randomly assigned to microscopic or endoscopic myringoplasty: 30 patients for microscopic myringoplasty and 30 for endoscopic myringoplasty. Of the 60 patients, 35 were female and 25 were male; 27 were in the age group 15-30, 23 in the age group 31-45 and only 10 in the age group 46-60, and the 18-30 age group cohort was predominant. The average time taken for endoscopic myringoplasty was 65.5 ± 3.45 min and for microscopic myringoplasty 85.7 ± 3.42 min. Twenty-six patients had a large central perforation (LCP), of whom 13 underwent microscopic and 13 endoscopic myringoplasty; the graft took in situ in 22 of these patients, while 4 had a small residual central perforation (3 after endoscopic and 1 after microscopic surgery). Nineteen of the 60 patients had a medium-sized central perforation (MCP), 10 operated with the endoscope and 9 with the microscope, and 15 of the 60 had a small central perforation (SCP), 7 done with the endoscope and 8 with the microscope; graft uptake was good in all of these patients. Large central perforations were the most common and had the lowest graft uptake compared with MCP and SCP. Of the 30 endoscopic myringoplasties, 27 patients had good graft uptake and 3 had a small central residual perforation after 3 months. Of the 30 microscopic myringoplasties, 29 patients had good graft uptake and 1 patient had a small central residual perforation after 3 months. In our study, preoperative and postoperative air-bone gaps (ABGs) were 22.05 ± 2.04 and 9.05 ± 1.36 dB, respectively, in endoscopic myringoplasty and 21.81 ± 1.85 and 8.55 ± 1.44 dB, respectively, in microscopic myringoplasty. Microscopic myringoplasty had a greater success rate in larger perforations (LCP and MCP) and equal results in SCP. The advantages of the microscope are depth perception and that both hands are free for the procedure, which is a limitation of endoscopic myringoplasty (an endoscope holder is needed). The advantages of endoscopic permeatal myringoplasty are superior visualization, minimal tissue trauma, better cosmetic outcome, and almost equal graft uptake and hearing outcome with less operative time. The endoscope system is portable and therefore convenient where a microscope is not available, and the endoscope is also a less costly armamentarium. Our study suggests that better results in myringoplasty can be achieved if both surgical methods are used in combination.
abstract_id: PUBMED:34692576
Long Term Versus Short Term Hearing Results in Endoscopic Sandwich Myringoplasty. Introduction: The use of the endoscope in otological surgery has both diagnostic and therapeutic value. It provides an excellent view of difficult nooks and corners. Endoscopic sandwich myringoplasty using cartilage and perichondrium has benefits in hearing outcome and graft uptake on long-term follow-up. The main objective was to compare long-term with short-term hearing outcomes in patients who underwent endoscopic sandwich myringoplasty with the Dhulikhel hospital (D-HOS) technique.
Materials And Methods: Forty-two patients who underwent endoscopic sandwich myringoplasty with D-HOS technique using tragal cartilage perichondrium were enrolled in the study. The hearing outcome was analyzed by comparing the pre-operative findings with post-operative findings and amongst post-operative patients, long-term with short-term air bone gap (ABG) and ABG closure in speech frequencies (0.5kHz, 1kHz, 2kHz, 4kHz) were compared.
Results: Amongst forty-two patients, 40 (95.2%) had graft uptake at both short-term (6.08 months) and long-term (20 months) follow-up. The mean pre-operative ABG was 28.1±9.3 dB whereas the mean short-term post-operative ABG was 14.5±7.2 dB, a statistically significant difference (P=0.001). Likewise, the comparison of pre-operative with long-term post-operative ABG (13.4±4.8 dB) was statistically significant (P=0.000). The comparison of short-term with long-term post-operative ABG did not show statistical significance (P=0.065). The difference in mean ABG closure between the short-term and long-term hearing assessments was also not statistically significant (P=0.077).
Conclusion: Endoscopic sandwich myringoplasty with D-HOS technique is a reliable procedure with good hearing outcome and graft uptake in both short and long-term follow-up.
abstract_id: PUBMED:18603953
Hearing results after myringoplasty. Background: Myringoplasty is one of the various surgical techniques for the management of chronic suppurative otitis media of tubotympanic type (CSOM-TT). The presence of a tympanic membrane perforation with intermittent discharge and conductive hearing loss are the indications for myringoplasty. It is a beneficial procedure performed to close the tympanic membrane perforation and improve hearing.
Objective: The aim of this study was to assess hearing improvement after myringoplasty within ten weeks following surgery.
Material And Methods: The study population consisted of 50 patients suffering from CSOM-TT. Preoperative and postoperative examinations of the patients were conducted clinically as well as audiologically. Pre- and postoperative air-bone (A-B) gaps were calculated by taking the averages of the bone-conduction and air-conduction thresholds at the frequencies of 500, 1000 and 2000 Hz. Myringoplasty was performed with the underlay technique under local anaesthesia by either a permeatal or an endaural approach. Temporal muscle fascia was used as the grafting material for reconstruction of the tympanic membrane.
Results: Preoperatively, an air-bone gap of 30 dB or more was observed in 39 (76%) patients, whereas postoperatively an A-B gap of 30 dB or more was observed in only one patient. Using hearing gain exceeding 15 dB as the criterion, thirty-nine (78%) patients had a hearing gain exceeding 15 dB. Using a postoperative A-B gap within 20 dB as the criterion, 42 (84%) patients had their A-B gap within 20 dB.
Conclusion: Myringoplasty is a beneficial procedure for hearing improvement. Using the proportion of patients with a postoperative A-B gap within 30 dB as the criterion, in this study, 98% of patients achieved A-B gap closure to within 30 dB. Using hearing gain exceeding 15 dB as the criterion, 78% of patients had a hearing gain exceeding 15 dB.
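A minimal sketch of the A-B gap bookkeeping described in the Methods above (air- and bone-conduction thresholds averaged at 500, 1000 and 2000 Hz, with the 15 dB hearing-gain and 20 dB residual-gap criteria) is given below; the threshold values are invented for illustration and are not taken from the study.

```python
# Hypothetical illustration of the pure-tone average / air-bone gap calculation
# described in the abstract above; all threshold values are made up.
from statistics import mean

FREQS = (500, 1000, 2000)  # Hz, as used in the abstract

def pta(thresholds_db):
    """Pure-tone average over 0.5, 1 and 2 kHz (thresholds in dB HL)."""
    return mean(thresholds_db[f] for f in FREQS)

def air_bone_gap(air, bone):
    """A-B gap = average air-conduction minus average bone-conduction threshold."""
    return pta(air) - pta(bone)

# Example patient (made-up values)
pre_air,  pre_bone  = {500: 55, 1000: 50, 2000: 45}, {500: 20, 1000: 15, 2000: 15}
post_air, post_bone = {500: 30, 1000: 25, 2000: 25}, {500: 20, 1000: 15, 2000: 15}

pre_abg  = air_bone_gap(pre_air, pre_bone)    # ~33.3 dB
post_abg = air_bone_gap(post_air, post_bone)  # ~10.0 dB
hearing_gain = pta(pre_air) - pta(post_air)   # ~23.3 dB

print(f"hearing gain > 15 dB: {hearing_gain > 15}")             # criterion from the abstract
print(f"postoperative A-B gap within 20 dB: {post_abg <= 20}")  # criterion from the abstract
```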
abstract_id: PUBMED:7997839
Myringoplasty. A conventional and extended high-frequency, air- and bone-conduction audiometric study. Comparison of the pre- and postoperative air- and bone-conduction thresholds in 22 subjects in whom successful myringoplasty was performed has been made in the conventional and extended high-frequency ranges. Air-conduction thresholds improved through 4 kHz, but were elevated postoperatively for the frequencies 6 through 18 kHz. Postoperative bone-conduction thresholds were elevated at 0.25 and 0.5 kHz, were lower by 2-8 dB for 1 through 3 kHz and not significantly altered in the extended high-frequency range of 8 through 16 kHz. The extended high-frequency air-conduction threshold loss following myringoplasty in this study is, therefore, due to changes in middle ear transmission and is not indicative of iatrogenic cochlear damage.
abstract_id: PUBMED:35477110
Audiological and Surgical Correlates of Myringoplasty Associated with Ethnography in the Bay of Plenty, New Zealand. Introduction: This retrospective cohort study of myringoplasty performed at Tauranga Hospital, Bay of Plenty, New Zealand from 2010 to 2020 sought to identify predictive factors for successful myringoplasty with particular consideration given to the known high prevalence of middle ear conditions in New Zealand Māori.
Methods: Outcomes were surgical success (perforation closure at 1 month) and hearing improvement, which were correlated against demographic, pathological, and surgical variables.
Results: 174 patients underwent 221 procedures (139 in children under 18 years old), with 66.1% of patients being New Zealand Māori and 24.7% New Zealand European ethnicity. Normalized by population demographics, New Zealand Māori were 2.3 times overrepresented, whereas New Zealand Europeans were underrepresented by 0.34 times (a 6.8 times relative treatment differential). The rate of surgical success was 84.6%, independent of patient age, gender, and ethnicity. A postauricular approach and the use of temporalis fascia grafts were both correlated with optimal success rates, whereas early postoperative infection (<1 month) was correlated with ∼3 times increased failure. Myringoplasty improved hearing in 83.1% of patients (average air-bone gap reduction of 10.7 dB). New Zealand Māori patients had ∼4 times greater preoperative conductive hearing loss compared to New Zealand Europeans, but benefited the most from myringoplasty.
Discussion/conclusion: New Zealand Māori and pediatric populations required greater access to myringoplasty, achieving good surgical and audiological outcomes. Myringoplasty is highly effective and significantly improves hearing, particularly for New Zealand Māori. Pediatric success rates were equivalent to adults, supporting timely myringoplasty to minimize morbidity from untreated perforations.
abstract_id: PUBMED:31346722
Endoscopic butterfly inlay myringoplasty for large perforations. Purpose: Nowadays, the use of otoendoscopy is becoming increasingly popular in ear surgery. Data on endoscopic tympanoplasty are quite current but not yet sufficient. This study aims to present the anatomical and functional results of endoscopic butterfly inlay myringoplasty in large perforations.
Methods: The graft success rates and audiological outcomes of 26 ears of 23 patients who underwent endoscopic cartilage inlay myringoplasty due to large perforation (more than two-thirds of the area of tympanic membrane) from March 2016 to August 2018 were evaluated.
Results: The mean follow-up period was 13.3 ± 7.1 (6-32) months. Graft success rate was 96% (25/26 ears). The preoperative mean air-bone gap (ABG) was 21.4 dB and postoperative mean was 14.4 dB. Significant improvement was observed in postoperative air-bone gap values compared to preoperative period (p 0.005).
Conclusions: Endoscopic inlay butterfly myringoplasty is a safe surgical technique and has satisfactory anatomic and audiological outcomes in large tympanic perforations.
abstract_id: PUBMED:26956980
A new graft material for myringoplasty: bacterial cellulose. We aimed to determine the success rate of bacterial cellulose (BC) myringoplasty in tympanic membrane (TM) perforation. We reviewed the clinical records of 12 patients (16 ears) who underwent BC myringoplasty for long-standing TM perforations (>6 months) between March 2012 and January 2015. The mean duration of postoperative follow-up was 12.8 (range 6-24) months. In all patients, the perforation involved less than 50 % of the tympanic membrane, and the air-bone gap was ≤30 dB on the operative side. The age, gender, preoperative air average, bone average, air-bone gap, perforation size and location, and postoperative TM microscopic examinations of the patients were recorded. Following the BC myringoplasty, complete coverage of the perforation occurred in the first 6 months in 13 of 16 ears (81.3 %), although retraction occurred in two of these ears within 6 months. In three ears (18.8 %), the perforation persisted during the first postoperative month. The pre- and postoperative average air threshold was 19.56 (range 7-75) and 15.69 (range 5-75) dB, respectively. The pre- and postoperative average air-bone gap was 9.25 (range 0-25) and 5.63 (0-25) dB, respectively. Both the air threshold and air-bone gap improved postoperatively (both p = 0.008). No complications such as infection and granulation tissue formation were detected in any patient. BC is a safe graft material that is inexpensive, easy-to-use, and provides a high success rate in small tympanic membrane perforations. However, further studies of large tympanic membrane perforations with more samples and long-term follow-up are required.
Answer: Myringoplasty is a surgical procedure aimed at repairing a perforated tympanic membrane (TM) and can potentially close the air-bone gap (ABG), which is an indicator of conductive hearing loss. The success of myringoplasty in closing the ABG varies among patients and depends on several factors.
According to the studies, myringoplasty can reduce the ABG significantly. One study found that the mean postoperative ABG was 8.2 dB for the frequencies 0.5 to 4 kHz, with 78% of patients showing a postoperative mean ABG of 10 dB or lower (PUBMED:23652328). Another study reported that myringoplasty improved hearing in 83.1% of patients, with an average ABG reduction of 10.7 dB (PUBMED:35477110). Additionally, a study comparing pre- and postoperative ABG found that 84% of patients had their postoperative ABG within 20 dB (PUBMED:18603953).
However, complete closure of the ABG by myringoplasty is achieved in only approximately 20% of cases, with the majority of patients presenting with a mean residual ABG of 8 dB (PUBMED:23652328). Factors such as the preoperative size of the TM perforation have been found to correlate with the postoperative ABG, while other factors like mastoid volume and the condition of the mucosa did not affect the outcome (PUBMED:23652328).
In cases where the cause of the ABG is not readily observable, studies have explored the relationship between ABG and TM vibration. One study found a significant relationship between ABG and TM vibration as measured by laser Doppler vibrometry (LDV), suggesting that the material properties and anatomical shape of the repaired TM could influence its motion and contribute to the ABG seen post-myringoplasty (PUBMED:33570432, PUBMED:35999674).
In summary, myringoplasty can reduce the ABG and improve hearing, but complete closure of the ABG is not always achieved. The success of the procedure in closing the ABG depends on various factors, including the size of the TM perforation and the vibration characteristics of the repaired TM. Further research is needed to fully understand the mechanisms behind persistent ABG post-myringoplasty and to optimize surgical outcomes. |
Instruction: Difference in survival after out-of-hospital cardiac arrest between the two largest cities in Sweden: a matter of time?
Abstracts:
abstract_id: PUBMED:15715681
Difference in survival after out-of-hospital cardiac arrest between the two largest cities in Sweden: a matter of time? Background: Dramatic differences in survival after out-of-hospital cardiac arrests (OHCA) reported from different geographical locations require analysis. We therefore compared patients with OHCA in the two largest cities in Sweden with regard to various factors at resuscitation and outcome.
Setting: All patients suffering an OHCA in Stockholm and Goteborg between 1 January 2000 and 30 June 2001, in whom cardiopulmonary resuscitation (CPR) was attempted were included in this retrospective analysis.
Results: Altogether, 969 OHCA in Stockholm and 398 in Goteborg were registered during the 18-month study period. There were no differences in terms of age, gender, and percentage of witnessed cases or percentage of patients who had received bystander CPR. However, the percentage of patients with ventricular fibrillation (VF) at arrival of the ambulance crew was 18% in Stockholm versus 31% in Goteborg (P <0.0001). The percentage of patients who were alive 1 month after cardiac arrest was 2.5% in Stockholm versus 6.8% in Goteborg (P=0.0008). Various time intervals such as cardiac arrest to calling for an ambulance, cardiac arrest to the start of CPR and calling for an ambulance to its arrival were all significantly longer in Stockholm than in Goteborg.
Conclusion: Survival was almost three times higher in Goteborg than in Stockholm amongst patients suffering an OHCA. This is primarily explained by a higher occurrence of VF at the time of arrival of the ambulance crew, which in turn probably is explained by shorter delays in Goteborg. The reason for the difference in time intervals is most likely multifactorial, with a significantly higher ambulance density in Goteborg as one possible explanation.
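As a rough, illustrative cross-check of the survival comparison reported above (2.5% of 969 patients in Stockholm versus 6.8% of 398 in Goteborg alive at 1 month), a two-by-two chi-square test can be run on survivor counts back-calculated from those percentages; the counts are therefore approximations rather than the study's raw data.

```python
# Approximate re-check of the reported 1-month survival difference; survivor
# counts are derived from the published percentages and rounded.
from scipy.stats import chi2_contingency

stockholm_n, goteborg_n = 969, 398
stockholm_survivors = round(0.025 * stockholm_n)  # ~24
goteborg_survivors = round(0.068 * goteborg_n)    # ~27

table = [
    [stockholm_survivors, stockholm_n - stockholm_survivors],
    [goteborg_survivors, goteborg_n - goteborg_survivors],
]
chi2, p_value, dof, expected = chi2_contingency(table)
# p_value is expected to be well below 0.05, in line with the reported p = 0.0008
print(f"chi2 = {chi2:.1f}, p = {p_value:.5f}")
```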
abstract_id: PUBMED:24184782
Association between resuscitation time interval at the scene and neurological outcome after out-of-hospital cardiac arrest in two Asian cities. Background And Aim: It is unclear whether the scene time interval (STI) for cardiopulmonary resuscitation (CPR) is associated with outcomes of out-of-hospital cardiac arrest (OHCA) or not. The present study aimed to determine the association between STI and neurological outcome after OHCA using two large population-based cohorts covering two metropolitan cities in Asia.
Methods: A retrospective analysis based on two large population-based cohorts from Seoul (2008-2010) and Osaka (2007-2009) was performed for witnessed adult OHCA with presumed cardiac aetiology. The STI, defined as time from wheel arrival at the scene to departure to hospital, was categorised as short (<8min), intermediate (from 8 to <16min) and long (16min or longer) STI on the basis of sensitivity analysis. The primary outcome was good neurological outcome (cerebral performance category 1 or 2). Adjusted odds ratios (AORs) with 95% confidence intervals (CIs) were calculated to determine the association between STIs and outcomes in comparison to the short STI group adjusting for potential risk factors and interaction products.
Results: A total of 7757 patients, 3594 from Seoul and 4163 from Osaka, were finally analysed. There were significant differences among the STI groups for most potential risk variables. Survival to admission was higher in the intermediate STI group (35.7%) than in the short (31.8%) or long STI group (32.6%) (p=0.004). Survival to discharge was not different among groups, at 13.7%, 13.1% and 11.5%, respectively (p=0.094). The intermediate STI group had a significantly better neurological outcome compared with the short STI group (7.7% vs. 4.6%; AOR=1.32; 95% CI, 1.03-1.71), while the long STI (6.6%) did not.
Conclusion: Data from two metropolitan cities demonstrated a positive association between intermediate STI from 8 to 16min and good neurological outcome after OHCA.
abstract_id: PUBMED:29128033
Association of the Emergency Medical Services-Related Time Interval with Survival Outcomes of Out-of-Hospital Cardiac Arrest Cases in Four Asian Metropolitan Cities Using the Scoop-and-Run Emergency Medical Services Model. Background: Response time interval (RTI) and scene time interval (STI) are key time variables in the out-of-hospital cardiac arrest (OHCA) cases treated and transported via emergency medical services (EMS).
Objective: We evaluated distribution and interactive association of RTI and STI with survival outcomes of OHCA in four Asian metropolitan cities.
Methods: An OHCA cohort from Pan-Asian Resuscitation Outcome Study (PAROS) conducted between January 2009 and December 2011 was analyzed. Adult EMS-treated cardiac arrests with presumed cardiac origin were included. A multivariable logistic regression model with an interaction term was used to evaluate the effect of STI according to different RTI categories on survival outcomes. Risk-adjusted predicted rates of survival outcomes were calculated and compared with observed rate.
Results: A total of 16,974 OHCA cases were analyzed after serial exclusion. Median RTI was 6.0 min (interquartile range [IQR] 5.0-8.0 min) and median STI was 12.0 min (IQR 8.0-16.1). The prolonged STI in the longest RTI group was associated with a lower rate of survival to discharge or of survival 30 days after arrest (adjusted odds ratio [aOR] 0.59; 95% confidence interval [CI] 0.42-0.81), as well as a poorer neurologic outcome (aOR 0.63; 95% CI 0.41-0.97) without an increasing chance of prehospital return of spontaneous circulation (aOR 1.12; 95% CI 0.88-1.45).
Conclusions: Prolonged STI in OHCA with a delayed response time had a negative association with survival outcomes in four Asian metropolitan cities using the scoop-and-run EMS model. Establishing an optimal STI based on the response time could be considered.
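The adjusted-odds-ratio analysis described above (a multivariable logistic model with an RTI x STI interaction term) could, in outline, be set up as in the sketch below; the data frame, column names and covariates are hypothetical placeholders rather than the actual PAROS variables.

```python
# Hedged sketch of a logistic regression with an RTI x STI interaction term,
# in the spirit of the analysis described above; column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_survival_model(df: pd.DataFrame):
    """Assumed columns: survived (0/1), rti_cat, sti_cat, age, sex, witnessed."""
    model = smf.logit(
        "survived ~ C(rti_cat) * C(sti_cat) + age + C(sex) + C(witnessed)",
        data=df,
    )
    result = model.fit(disp=False)
    ci = result.conf_int()  # 95% confidence intervals on the log-odds scale
    # Exponentiating the coefficients gives adjusted odds ratios with 95% CIs
    odds_ratios = pd.DataFrame({
        "AOR": np.exp(result.params),
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
    })
    return result, odds_ratios
```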
abstract_id: PUBMED:17363131
An evaluation of post-resuscitation care as a possible explanation of a difference in survival after out-of-hospital cardiac arrest. Background: A recently published study has shown that survival after out-of-hospital cardiac arrest (OHCA) in Göteborg is almost three times higher than in Stockholm. The aim of this study was to investigate whether in-hospital factors were associated with outcome in terms of survival.
Methods: All patients suffering from OHCA in Stockholm and Göteborg between January 1, 2000 and June 30, 2002 were included. The two groups were compared with reference to patient characteristics, medical history, pre-hospital and hospital course (including in-hospital investigations and interventions) and mortality. All medical charts from patients admitted alive to the different hospitals were studied. Data from the Swedish National Register of Deaths regarding long-term survival were analysed. Pre-hospital data were collected from the Swedish Ambulance Cardiac Arrest Register.
Results: In all, 1542 OHCA in Stockholm and 546 in Göteborg were registered during the 30-month study period. In Göteborg, 28% (153 patients) were admitted alive to the two major hospitals whereas in Stockholm 16% (253 patients) were admitted alive to the seven major hospitals (p<0.0001). On admission to the emergency rooms, a larger proportion of patients in Stockholm was unconscious (p=0.006), received assisted breathing (p=0.008) and ongoing CPR (p=0.0002). Patient demography, medical history, in-hospital investigations and interventions and in-hospital mortality (78% in Göteborg, 80% in Stockholm) did not differ between the two groups. Various pre-hospital time intervals were significantly longer in Stockholm than in Göteborg. Total survival to discharge after OHCA was 3.3% in Stockholm and 6.1% in Göteborg (p=0.01).
Conclusion: An almost 2-fold difference in survival after OHCA between Stockholm and Göteborg appears to be associated with pre-hospital factors only (predominantly in form of prolonged intervals in Stockholm), rather than with in-hospital factors or patient characteristics.
abstract_id: PUBMED:21767757
Emergency response time after out-of-hospital cardiac arrest. Objectives: To investigate the emergency response time after out-of-hospital cardiac arrest (OHCA) in four cities in Serbia.
Methods: A prospective, two-year, multicenter study was designed. Using the Utstein template we recorded out-of-hospital CPR (OHCPR) and analyzed the time sequence segment of the variables in OHCA and CPR gold standards. Multivariable logistic regression models were developed using emergency response time as the primary independent variable and survival to return of spontaneous circulation (ROSC), survival to hospital discharge (HD), and one-year survival (1y) as the dependent variable. ROC curves represent cut off time dependent survival data.
Results: During the study period, the median time to recognition of OHCA was 5.5 min, the call-receipt interval was 1 min and the call-response interval was 7 min. The median time required to verify OHCA and initiate ALS was 10 min. ALS was carried out for 30.5 min (SD=21.3). Abandonment of further CPR/death occurred after 29 min. The first defibrillation shock was delivered after 13.3±9.0 min, an endotracheal tube was placed after 16.8±9.4 min and the first adrenaline dose was injected after 18.9±9.3 min. Higher survival rates (ROSC, HD, 1y) were found when CPR was performed within the first 4 min after OHCA.
Conclusion: The emergency response time within 4 min was associated with improved survival to ROSC, HD and 1y after OHCA. Despite the fact that our results are in accordance with the findings published in other papers, there is still a need to take all appropriate measures in order to decrease the emergency response time after OHCA.
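As a rough illustration of the ROC-based cut-off analysis described above, the sketch below derives a response-time threshold from a hypothetical registry; the file and column names are assumed.

```python
# Illustrative only: choosing an emergency response-time cut-off from an ROC curve.
# "ohcpr_registry.csv" and its columns (response_time_min, rosc) are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("ohcpr_registry.csv")

# Shorter response times should predict ROSC, so use the negated time as the score.
score = -df["response_time_min"]
fpr, tpr, thresholds = roc_curve(df["rosc"], score)
print("AUC:", roc_auc_score(df["rosc"], score))

# Youden's J statistic picks the threshold that best separates outcomes.
j = tpr - fpr
best_threshold = thresholds[np.argmax(j)]
print("Suggested cut-off: respond within", -best_threshold, "minutes")
```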
abstract_id: PUBMED:32586339
Identifying the relative importance of predictors of survival in out of hospital cardiac arrest: a machine learning study. Introduction: Studies examining the factors linked to survival after out of hospital cardiac arrest (OHCA) have either aimed to describe the characteristics and outcomes of OHCA in different parts of the world, or focused on certain factors and whether they were associated with survival. Unfortunately, this approach does not measure how strong each factor is in predicting survival after OHCA.
Aim: To investigate the relative importance of 16 well-recognized factors in OHCA at the time point of ambulance arrival, and before any interventions or medications were given, by using a machine learning approach that implies building models directly from the data, and arranging those factors in order of importance in predicting survival.
Methods: Using a data-driven approach with a machine learning algorithm, we studied the relative importance of 16 factors assessed during the pre-hospital phase of OHCA. We examined 45,000 cases of OHCA between 2008 and 2016.
Results: Overall, the top five factors to predict survival in order of importance were: initial rhythm, age, early Cardiopulmonary Resuscitation (CPR, time to CPR and CPR before arrival of EMS), time from EMS dispatch until EMS arrival, and place of cardiac arrest. The largest difference in importance was noted between initial rhythm and the remaining predictors. A number of factors, including time of arrest and sex were of little importance.
Conclusion: Using machine learning, we confirm that the most important predictor of survival in OHCA is initial rhythm, followed by age, time to start of CPR, EMS response time and place of OHCA. Several factors traditionally viewed as important, e.g. sex, were of little importance.
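A minimal sketch of how the relative importance of pre-hospital predictors can be ranked with a tree-based model is shown below. It assumes a hypothetical registry file and column names and is not the study's actual pipeline.

```python
# Illustrative only: ranking pre-hospital predictors of survival by permutation importance.
# "ohca_registry.csv" and its columns are hypothetical placeholders for the registry variables.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("ohca_registry.csv")
features = ["initial_rhythm", "age", "time_to_cpr_min", "ems_response_min",
            "place_of_arrest", "witnessed", "sex", "hour_of_day"]
X = pd.get_dummies(df[features], drop_first=True)  # one-hot encode categorical predictors
y = df["survived_30d"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

# Permutation importance on held-out data is less biased than impurity-based importance.
imp = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
ranking = pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking)
```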
abstract_id: PUBMED:33107394
Shortening Ambulance Response Time Increases Survival in Out-of-Hospital Cardiac Arrest. Background The ambulance response time in out-of-hospital cardiac arrest (OHCA) has doubled over the past 30 years in Sweden. At the same time, the chances of surviving an OHCA have increased substantially. A correct understanding of the effect of ambulance response time on the outcome after OHCA is fundamental for further advancement in cardiac arrest care. Methods and Results We used data from the SRCR (Swedish Registry of Cardiopulmonary Resuscitation) to determine the effect of ambulance response time on 30-day survival after OHCA. We included 20 420 cases of OHCA occurring in Sweden between 2008 and 2017. Survival to 30 days was our primary outcome. Stratification and multiple logistic regression were used to control for confounding variables. In a model adjusted for age, sex, calendar year, and place of collapse, survival to 30 days is presented for 4 different groups of emergency medical services (EMS)-crew response time: 0 to 6 minutes, 7 to 9 minutes, 10 to 15 minutes, and >15 minutes. Survival to 30 days after a witnessed OHCA decreased as ambulance response time increased. For EMS response times of >10 minutes, the overall survival among those receiving cardiopulmonary resuscitation before EMS arrival was slightly higher than survival for the sub-group of patients treated with compressions-only cardiopulmonary resuscitation. Conclusions Survival to 30 days after a witnessed OHCA decreases as ambulance response times increase. This correlation was seen independently of initial rhythm and whether cardiopulmonary resuscitation was performed before EMS-crew arrival. Shortening EMS response times is likely to be a fast and effective way of increasing survival in OHCA.
abstract_id: PUBMED:30646171
Association Between Time to Defibrillation and Survival in Pediatric In-Hospital Cardiac Arrest With a First Documented Shockable Rhythm. Importance: Delayed defibrillation (>2 minutes) in adult in-hospital cardiac arrest (IHCA) is associated with worse outcomes. Little is known about the timing and outcomes of defibrillation in pediatric IHCA.
Objective: To determine whether time to first defibrillation attempt in pediatric IHCA with a first documented shockable rhythm is associated with survival to hospital discharge.
Design, Setting, And Participants: In this cohort study, data were obtained from the Get With The Guidelines-Resuscitation national registry between January 1, 2000, and December 31, 2015, and analyses were completed by October 1, 2017. Participants were pediatric patients younger than 18 years with an IHCA and a first documented rhythm of pulseless ventricular tachycardia or ventricular fibrillation and at least 1 defibrillation attempt.
Exposures: Time between loss of pulse and first defibrillation attempt.
Main Outcomes And Measures: The primary outcome was survival to hospital discharge. Secondary outcomes were return of circulation, 24-hour survival, and favorable neurologic outcome at hospital discharge.
Results: Among 477 patients with a pulseless shockable rhythm (median [interquartile range] age, 4 years [3 months to 14 years]; 285 [60%] male), 338 (71%) had a first defibrillation attempt at 2 minutes or less after pulselessness. Children were less likely to be shocked in 2 minutes or less for ward vs intensive care unit IHCAs (48% [11 of 23] vs 72% [268 of 371]; P = .01]). Thirty-eight percent (179 patients) survived to hospital discharge. The median (interquartile range) reported time to first defibrillation attempt was 1 minute (0-3 minutes) in both survivors and nonsurvivors. Time to first defibrillation attempt was not associated with survival in unadjusted analysis (risk ratio [RR] per minute increase, 0.96; 95% CI, 0.92-1.01; P = .15) or adjusted analysis (RR, 0.99; 95% CI, 0.94-1.06; P = .86). There was no difference in survival between those with a first defibrillation attempt in 2 minutes or less vs more than 2 minutes in unadjusted analysis (132 of 338 [39%] vs 47 of 139 [34%]; RR, 0.87; 95% CI, 0.66-1.13; P = .29) or multivariable analysis (RR, 0.99; 95% CI, 0.75-1.30; P = .93). Time to first defibrillation attempt was also not associated with secondary outcome measures.
Conclusions And Relevance: In contrast to published adult IHCA and pediatric out-of-hospital cardiac arrest data, no significant association was observed between time to first defibrillation attempt in pediatric IHCA with a first documented shockable rhythm and survival to hospital discharge.
abstract_id: PUBMED:31155852
Time from arrest to extracorporeal cardiopulmonary resuscitation and survival after out-of-hospital cardiac arrest. Objectives: The association between the time from arrest to extracorporeal cardiopulmonary resuscitation (ECPR) and survival from out-of-hospital cardiac arrest (OHCA) is unclear. The aim of this study was to determine whether time to ECPR is associated with survival in OHCA.
Methods: We analysed the Korean national OHCA registry from 2013 to 2016. We included adult witnessed OHCA patients with presumed cardiac aetiology who underwent ECPR. Patients were excluded if their arrest times or outcomes were unknown. The primary outcome was survival to discharge. Multivariable logistic regression analysis controlling for potential confounders was conducted and adjusted odds ratios (AORs) and 95% confidence intervals (CIs) were calculated to determine the association between time to ECPR and survival.
Results: There were 40 352 witnessed OHCAs with presumed cardiac aetiology. One hundred and forty patients had ECPR applied on arriving at their ED, 13 of these patients survived to discharge and seven were neurologically intact. Median time from arrest to ECPR was 74 min (IQR 61-90). Time from arrest to ECPR was significantly and inversely associated with survival to discharge (AOR 0.73 for every 10-min increase in time; 95% CI 0.53-1.00). Time from arrest to ECPR ≤60 min was independently associated with improved survival (AOR 6.48, 95% CI 1.54-27.20).
Conclusion: Early initiation of ECPR is associated with improved survival after OHCA. Because we analysed a nationwide OHCA registry, which lacks uniform selection criteria for ECPR, further prospective study is warranted.
abstract_id: PUBMED:20207470
The relationship between time to arrival of emergency medical services (EMS) and survival from out-of-hospital ventricular fibrillation cardiac arrest. Aim: We examined the relationship between time from collapse to arrival of emergency medical services (EMS) and survival to hospital discharge for out-of-hospital ventricular fibrillation cardiac arrests in order to determine meaningful interpretations of this association.
Methods: We calculated survival rates in 1-min intervals from collapse to EMS arrival. Additionally, we used logistic regression to determine the absolute probability of survival per minute of delayed EMS arrival. We created a logistic regression model with spline terms for the time variable to examine the decline in survival in intervals that are hypothesized to be physiologically relevant.
Results: The observed data showed survival declined, on average, by 3% for each minute that EMS was delayed following collapse. Survival rates did not decline appreciably if the time between collapse and arrival of EMS was 4 min or less but they declined by 5.2% per minute between 5 and 10 min. EMS arrival 11-15 min after collapse showed a less steep decline in survival of 1.9% per minute. The spline model that incorporated changes in slope in the time interval variable modeled this relationship more accurately than a model with a continuous term for time (p=0.01).
Conclusions: The results of our analyses show that survival from out-of-hospital cardiac arrest does not decline at a constant rate following collapse. Models that incorporate changes that reflect the physiological alterations that occur following cardiac arrests are a more accurate way to describe changes in survival rates over time than models that include only a continuous term for time.
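The contrast drawn above between a continuous time term and a spline model can be sketched as follows; the data file, column names, and knot placement (at 4 and 10 min) are illustrative assumptions.

```python
# Illustrative only: survival vs EMS delay with a linear term compared to spline terms.
# "vf_arrests.csv" and its columns (survived, delay_min) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("vf_arrests.csv")

linear = smf.logit("survived ~ delay_min", data=df).fit()

# Cubic B-spline with interior knots near the physiologically motivated breakpoints (4 and 10 min).
spline = smf.logit("survived ~ bs(delay_min, knots=(4, 10), degree=3)", data=df).fit()

# Likelihood-ratio test: does the spline model fit better than the simple linear term?
lr_stat = 2 * (spline.llf - linear.llf)
df_diff = spline.df_model - linear.df_model
print("LR statistic:", lr_stat, "p =", stats.chi2.sf(lr_stat, df_diff))
```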
Answer: The difference in survival after out-of-hospital cardiac arrest (OHCA) between the two largest cities in Sweden, Stockholm and Göteborg, appears to be significantly influenced by time-related factors. A study comparing patients with OHCA in these cities found that survival was almost three times higher in Göteborg than in Stockholm. This disparity was primarily explained by a higher occurrence of ventricular fibrillation (VF) at the time of arrival of the ambulance crew in Göteborg, which in turn was likely due to shorter delays in the city. Various time intervals, such as the time from cardiac arrest to calling for an ambulance, from cardiac arrest to the start of CPR, and from calling for an ambulance to its arrival, were all significantly longer in Stockholm than in Göteborg. The reason for the difference in time intervals is likely multifactorial, with a significantly higher ambulance density in Göteborg being one possible explanation (PUBMED:15715681).
Further supporting the importance of time, another study evaluating post-resuscitation care found that the nearly two-fold difference in survival between Stockholm and Göteborg was associated with pre-hospital factors only, predominantly in the form of prolonged intervals in Stockholm, rather than with in-hospital factors or patient characteristics (PUBMED:17363131).
These findings are consistent with broader research indicating that shorter emergency response times are associated with improved survival to return of spontaneous circulation (ROSC), hospital discharge, and one-year survival after OHCA (PUBMED:21767757). Moreover, a machine learning study identified initial rhythm, age, early CPR, time to CPR, and EMS response time as the top five factors predicting survival in OHCA, with initial rhythm being the most important predictor (PUBMED:32586339).
In summary, the difference in survival after OHCA between Stockholm and Göteborg seems to be a matter of time, with shorter time intervals to advanced medical intervention being crucial for improving survival rates (PUBMED:15715681; PUBMED:17363131). |
Instruction: Are ruminal bacteria protected against environmental stress by plant antioxidants?
Abstracts:
abstract_id: PUBMED:12358692
Are ruminal bacteria protected against environmental stress by plant antioxidants? Aims: To investigate the activity response of the antioxidant enzymes superoxide dismutase (SOD) and glutathione peroxidase (GSHPx) of the rumen bacterium Streptococcus bovis following exposure to mercury(II) chloride (HgCl(2)) in the presence of plant antioxidants.
Methods And Results: Streptococcus bovis was grown with 0 or 5 microg ml(-1) of HgCl(2) alone or together with antioxidant substances (AOS): seleno-l-methionine (Se), alpha-tocopherol (alpha toc), beta-carotene (beta car), melatonin (mel). The activities of SOD and GSHPx were estimated in supernatants of disrupted bacterial cells. A significant decrease in Strep. bovis SOD activity was observed in the presence of HgCl(2) and the tested AOS, except mel. Under the same cultivation conditions, the GSHPx activity of Strep. bovis was not significantly changed, and a significant decrease in GSHPx activity was recorded only in the presence of beta car.
Conclusions: The positive effect of Se, alpha toc and beta car on the elimination of environmental stress, evoked by mercury, in ruminal bacterium Strep. bovis in vitro was documented.
Significance And Impact Of The Study: The potential role of plant antioxidants in elimination of the environmental stress of ruminal bacteria evoked by heavy metals is discussed.
abstract_id: PUBMED:11399842
Plant antioxidants: colour me healthy. Plants make a variety of compounds in response to environmental stress, many of which function as antioxidants when consumed. The plants' own defences against oxidative stress can be used for your benefit, prolonging your life by acquiring their protection. By eating plenty of vegetables and fruit, you may help to significantly reduce the risk of many age-related degenerative diseases.
abstract_id: PUBMED:37848152
Biofilms formation in plant growth-promoting bacteria for alleviating agro-environmental stress. Biofilm formation represents a pivotal and adaptable trait among microorganisms within natural environments. This attribute plays a multifaceted role across diverse contexts, including environmental, aquatic, industrial, and medical systems. While previous research has primarily focused on the adverse impacts of biofilms, harnessing their potential effectively could confer substantial advantages to humanity. In the face of escalating environmental pressures (e.g., drought, salinity, extreme temperatures, and heavy metal pollution), which jeopardize global crop yields, enhancing crop stress tolerance becomes a paramount endeavor for restoring sufficient food production. Recently, biofilm-forming plant growth-promoting bacteria (PGPB) have emerged as promising candidates for agricultural application. These biofilms are evidence of microorganism colonization on plant roots. Their remarkable stress resilience empowers crops to thrive and yield even in harsh conditions. This is accomplished through increased root colonization, improved soil properties, and the synthesis of valuable secondary metabolites (e.g., ACC deaminase, acetin, 2,3-butanediol, proline, etc.). This article elucidates the mechanisms underpinning the role of biofilm-forming PGPB in bolstering plant growth amidst environmental challenges. Furthermore, it explores the tangible applications of these biofilms in agriculture and delves into strategies for manipulating biofilm formation to extract maximal benefits in practical crop production scenarios.
abstract_id: PUBMED:20214435
Roles of enzymatic and nonenzymatic antioxidants in plants during abiotic stress. Reactive oxygen species (ROS) are produced in plants as byproducts during many metabolic reactions, such as photosynthesis and respiration. Oxidative stress occurs when there is a serious imbalance between the production of ROS and antioxidant defense. Generation of ROS causes rapid cell damage by triggering a chain reaction. Cells have evolved an elaborate system of enzymatic and nonenzymatic antioxidants which help to scavenge these indigenously generated ROS. Various enzymes involved in ROS-scavenging have been manipulated, over expressed or downregulated to add to the present knowledge and understanding the role of the antioxidant systems. The present article reviews the manipulation of enzymatic and nonenzymatic antioxidants in plants to enhance the environmental stress tolerance and also throws light on ROS and redox signaling, calcium signaling, and ABA signaling.
abstract_id: PUBMED:32100101
Growth-promoting bacteria and natural regulators mitigate salt toxicity and improve rapeseed plant performance. Salinity is a major environmental stress that limits plant production and portraits a critical challenge to food security in the world. In this research, the impacts of plant growth-promoting bacteria (Pseudomonas RS-198 and Azospirillum brasilense RS-SP7) and foliar application of plant hormones (salicylic acid 1 mM and jasmonic acid 0.5 mM) on alleviating the harmful effects of salt stress in rapeseed plants (Brassica napus cv. okapi) were examined under greenhouse condition. Salt stress diminished rapeseed biomass, leaf area, water content, nitrogen, phosphorus, potassium, calcium, magnesium, and chlorophyll content, while it increased sodium content, endogenous salicylic and jasmonic acids, osmolyte production, H2O2 and O2•- generations, TBARS content, and antioxidant enzyme activities. Plant growth, nutrient content, leaf expansion, osmolyte production, and antioxidant enzyme activities were increased, but oxidative and osmotic stress indicators were decreased by bacteria inoculation + salicylic acid under salt stress. Antioxidant enzyme activities were amplified by jasmonic acid treatments under salt stress, although rapeseed growth was not generally affected by jasmonic acid. Bacterial + hormonal treatments were superior to individual treatments in reducing detrimental effects of salt stress. The best treatment in rectifying rapeseed growth under salt stress was combination of Pseudomonas and salicylic acid. This combination attenuated destructive salinity properties and subsequently amended rapeseed growth via enhancing endogenous salicylic acid content and some essential nutrients such as potassium, phosphorus, and magnesium.
abstract_id: PUBMED:34732249
A simplified synthetic community rescues Astragalus mongholicus from root rot disease by activating plant-induced systemic resistance. Background: Plant health and growth are negatively affected by pathogen invasion; however, plants can dynamically modulate their rhizosphere microbiome and adapt to such biotic stresses. Although plant-recruited protective microbes can be assembled into synthetic communities for application in the control of plant disease, rhizosphere microbial communities commonly contain some taxa at low abundance. The roles of low-abundance microbes in synthetic communities remain unclear; it is also unclear whether all the microbes enriched by plants can enhance host adaptation to the environment. Here, we assembled a synthetic community with a disease resistance function based on differential analysis of root-associated bacterial community composition. We further simplified the synthetic community and investigated the roles of low-abundance bacteria in the control of Astragalus mongholicus root rot disease by a simple synthetic community.
Results: Fusarium oxysporum infection reduced bacterial Shannon diversity and significantly affected the bacterial community composition in the rhizosphere and roots of Astragalus mongholicus. Under fungal pathogen challenge, Astragalus mongholicus recruited some beneficial bacteria such as Stenotrophomonas, Achromobacter, Pseudomonas, and Flavobacterium to the rhizosphere and roots. We constructed a disease-resistant bacterial community containing 10 high- and three low-abundance bacteria enriched in diseased roots. After the joint selection of plants and pathogens, the complex synthetic community was further simplified into a four-species community composed of three high-abundance bacteria (Stenotrophomonas sp., Rhizobium sp., Ochrobactrum sp.) and one low-abundance bacterium (Advenella sp.). Notably, a simple community containing these four strains and a thirteen-species community had similar effects on the control root rot disease. Furthermore, the simple community protected plants via a synergistic effect of highly abundant bacteria inhibiting fungal pathogen growth and less abundant bacteria activating plant-induced systemic resistance.
Conclusions: Our findings suggest that bacteria with low abundance play an important role in synthetic communities and that only a few bacterial taxa enriched in diseased roots are associated with disease resistance. Therefore, the construction and simplification of synthetic communities found in the present study could be a strategy employed by plants to adapt to environmental stress. Video abstract.
abstract_id: PUBMED:35504236
Psychrotrophic plant beneficial bacteria from the glacial ecosystem of Sikkim Himalaya: Genomic evidence for the cold adaptation and plant growth promotion. Commercial biofertilizers tend to be ineffective in cold mountainous regions due to reduced metabolic activity of the microbial inoculants under low temperatures. Cold-adapted glacier bacteria with plant growth-promoting (PGP) properties may prove significant in developing cold-active biofertilizers for improving mountain agriculture. With this perspective, the cultivable bacterial diversity was documented from the East Rathong glacier ecosystem lying above 3900 masl of Sikkim Himalaya. A total of 120 bacterial isolates affiliated to Gammaproteobacteria (53.33%), Bacteroidetes (16.66%), Actinobacteria (15.83%), Betaproteobacteria (6.66%), Alphaproteobacteria (4.16%), and Firmicutes (3.33%) were recovered. Fifty-two isolates showed many in vitro PGP activities of phosphate solubilization (9-100 µg/mL), siderophore production (0.3-100 psu) and phytohormone indole acetic acid production (0.3-139 µg/mL) at 10 °C. Plant-based bioassays revealed an enhancement of shoot length by 21%, 22%, and 13% in ERGS5:01, ERMR1:04, and ERMR1:05, and root length by 14%, 17%, 11%, and 22% in ERGS4:06, ERGS5:01, ERMR1:04, and ERMR1:05 treated seeds respectively. An increased shoot dry weight of 4-29% in ERMR1:05 and ERMR1:04, and root dry weight of 42-98% was found in all the treatments. Genome analysis of four bacteria from diverse genera predicted many genes involved in the bacterial PGP activity. Comparative genome study highlighted the presence of PGP-associated unique genes for glucose dehydrogenase, siderophore receptor, tryptophan synthase, phosphate metabolism (phoH, P, Q, R, U), nitrate and nitrite reductase, TonB-dependent receptor, spermidine/putrescine ABC transporter etc. in the representative bacteria. The expression levels of seven cold stress-responsive genes in the cold-adapted bacterium ERGS4:06 using real-time quantitative PCR (RT-qPCR) showed an upregulation of all these genes by 6-17% at 10 °C, and by 3-33% during cold-shock, which indicates the cold adaptation strategy of the bacterium. Overall, this study signifies the psychrotrophic bacterial diversity from an extreme glacier environment as a potential tool for improving plant growth under cold environmental stress.
abstract_id: PUBMED:36033871
Plant growth-promoting bacteria in metal-contaminated soil: Current perspectives on remediation mechanisms. Heavy metal contamination in soils endangers humans and the biosphere by reducing agricultural yield and negatively impacting ecosystem health. In recent decades, this issue has been addressed and partially remedied through the use of "green technology," which employs metal-tolerant plants to clean up polluted soils. Furthermore, the global climate change enhances the negative effects of climatic stressors (particularly drought, salinity, and extreme temperatures), thus reducing the growth and metal accumulation capacity of remediating plants. Plant growth-promoting bacteria (PGPB) have been widely introduced into plants to improve agricultural productivity or the efficiency of phytoremediation of metal-contaminated soils via various mechanisms, including nitrogen fixation, phosphate solubilization, phytohormone production, and biological control. The use of metal-tolerant plants, as well as PGPB inoculants, should hasten the process of moving this technology from the laboratory to the field. Hence, it is critical to understand how PGPB ameliorate environmental stress and metal toxicity while also inducing plant tolerance, as well as the mechanisms involved in such actions. This review attempts to compile the scientific evidence on this topic, with a special emphasis on the mechanism of PGPB involved in the metal bioremediation process [plant growth promotion and metal detoxification/(im)mobilization/bioaccumulation/transformation/translocation] and deciphering combined stress (metal and climatic stresses) tolerance.
abstract_id: PUBMED:25410828
Biotechnological application and taxonomical distribution of plant growth promoting actinobacteria. Plant growth promoting (PGP) bacteria are involved in various interactions known to affect plant fitness and soil quality, thereby increasing the productivity of agriculture and stability of soil. Although the potential of actinobacteria in antibiotic production is well-investigated, their capacity to enhance plant growth is not fully surveyed. Due to the following justifications, PGP actinobacteria (PGPA) can be considered as a more promising taxonomical group of PGP bacteria: (1) high numbers of actinobacteria per gram of soil and their filamentous nature, (2) genome dedicated to the secondary metabolite production (~5 to 10 %) is distinctively more than that of other bacteria and (3) number of plant growth promoter genera reported from actinobacteria is 1.3 times higher than that of other bacteria. Mechanisms by which PGPA contribute to the plant growth by association are: (a) enhancing nutrients availability, (b) regulation of plant metabolism, (c) decreasing environmental stress, (d) control of phytopathogens and (e) improvement of soil texture. Taxonomical and chemical diversity of PGPA and their biotechnological application along with their associated challenges are summarized in this paper.
abstract_id: PUBMED:16323271
Impact and interaction of lipophilic antioxidants in mutants and transgenic plants. Carotenoids and tocopherols are lipophilic antioxidants with important functions in plants and humans. Due to their nutritional value and putative health benefits, they have become a focus of intensive research. The identification of all genes of the carotenoid and tocopherol biosynthesis has enabled the manipulation of their biosynthetic pathways, aiming for quantitative and qualitative improvement. In plants, carotenoids and tocopherols are of crucial importance because of their protective abilities, which help to keep them alive even under light stress conditions. A wealth of information has accumulated concerning the responses of plants to various environmental stress factors. Here, we summarize some of the recent data concentrating on the impact and possible interaction of lipophilic antioxidants in mutants and transgenic plants with altered status of lipophilic antioxidants.
Answer: Yes, ruminal bacteria are protected against environmental stress by plant antioxidants. The study involving the rumen bacterium Streptococcus bovis showed that the presence of plant antioxidants such as seleno-l-methionine (Se), alpha-tocopherol (alpha toc), and beta-carotene (beta car) had a positive effect on eliminating environmental stress caused by mercury in vitro (PUBMED:12358692). This suggests that plant antioxidants can play a role in protecting ruminal bacteria from heavy metal-induced stress.
Moreover, plants produce a variety of compounds in response to environmental stress, many of which function as antioxidants when consumed (PUBMED:11399842). These plant-derived antioxidants can be beneficial not only to humans but also to bacteria that are exposed to similar oxidative stresses. The protective abilities of carotenoids and tocopherols, which are lipophilic antioxidants, help plants survive under light stress conditions and could similarly confer protection to bacteria against environmental stressors (PUBMED:16323271).
Additionally, plant growth-promoting bacteria (PGPB) form biofilms that enhance crop stress tolerance, which is achieved through improved soil properties and the synthesis of valuable secondary metabolites that may include antioxidants (PUBMED:37848152). These biofilms and the associated antioxidant mechanisms can protect both plants and the bacteria themselves from environmental stresses.
In summary, the available evidence indicates that plant antioxidants can indeed protect ruminal bacteria against environmental stress, highlighting the potential role of these compounds in the resilience of microbial communities within the rumen and possibly other ecosystems. |
Instruction: Analysis of pharmaceutical safety-related regulatory actions in Japan: do tradeoffs exist between safer drugs and launch delay?
Abstracts:
abstract_id: PUBMED:21098757
Analysis of pharmaceutical safety-related regulatory actions in Japan: do tradeoffs exist between safer drugs and launch delay? Background: Prediction and management of drug safety is a global regulatory issue. Safety-related regulatory actions (SRRAs) are taken mostly when unexpected adverse drug reactions occur. Currently, Japan is reconciled to delayed access to new drugs (ie, launch delay compared to Western countries), but may have been benefiting by free-riding on safety data accumulated in other countries prior to Japanese launch.
Objective: To identify factors that are significantly associated with SRRAs, and to discuss the challenges that Japan might have to face with increasing access to new drugs.
Methods: The SRRAs of 135 new drugs approved from January 2000 to December 2005 were analyzed to investigate association with launch lag, company and drug characteristics, market size, submission data, and regulatory status. SRRAs were measured in terms of the number of emergency safety information notifications and official safety instructions issued by the Japanese regulatory agency within 3 years after approval. A negative binomial distribution model was used for regression analysis.
Results: Longer launch lags and the presence of drugs with similar modes of action were associated with fewer SRRAs. A bridging strategy was associated with increased SRRAs. No significant association was observed between SRRAs and the number of subjects in clinical data packages. The occurrence of SRRAs varied with development strategy, preceding products, and regional regulations.
Conclusions: The occurrence of SRRAs was associated with the accumulation of both foreign and domestic postmarketing evidence rather than with clinical trial data upon launch. Considering the paradigm shift to simultaneous global drug development and filing for regulatory approval, this study indicates the importance of intensive data collection in the early postmarketing phase and use of safety information in early markets. However, even if we would be sufficiently cautious about safety risks of new drugs, a population that enjoys first-in-class drugs probably has to bear the risks.
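The negative binomial regression referenced in the methods above can be illustrated with a minimal sketch; the drug-level file and covariate names below are assumptions, not the study's variables.

```python
# Illustrative only: negative binomial regression of safety-action counts on drug characteristics.
# "approved_drugs.csv" and its columns are hypothetical stand-ins for the study variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

drugs = pd.read_csv("approved_drugs.csv")

model = smf.negativebinomial(
    "n_srra ~ launch_lag_months + C(same_moa_exists) + C(bridging) + n_trial_subjects",
    data=drugs,
).fit()

# Exponentiated coefficients (excluding the dispersion term alpha) are rate ratios for SRRA counts.
print(np.exp(model.params.drop("alpha")))
```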
abstract_id: PUBMED:28722235
Analysis of safety-related regulatory actions by Japan's pharmaceutical regulatory agency. Purpose: To evaluate the safety-related regulatory actions implemented by Japan's Pharmaceuticals and Medical Devices Agency (PMDA) in 2012.
Methods: We analyzed serious safety issues appended to drug package inserts (PIs) in Japan in 2012. The issues were characterized according to drug class, adverse event, years since drug approval, initiator of regulatory actions, revised section of PI, and evidence source. We also quantified the durations from signal detection to tentative decision and from tentative decision to regulatory action.
Results: We identified 144 serious safety issues during the study period, and the majority of evidence originated from spontaneous reports (83.5%). The PMDA initiated regulatory actions for half of all safety issues, and the median duration from drug approval to regulatory action was 8 years (interquartile range [IQR], 3-26.5 years). The median duration was 49 days (IQR, 0-362 days) from signal detection to tentative decision and 84 days (IQR, 63-136 days) from tentative decision to regulatory action. Several safety issues involving older drugs and multiple products had protracted decision-making durations.
Conclusions: Most safety issues led to prompt regulatory actions predominantly based on spontaneous reports. Some safety issues that were not easily detected by the spontaneous reporting system were identified years after approval. In addition, several safety issues required assessments of multiple drug products, which prolonged the decision-making process.
abstract_id: PUBMED:32798058
Launch Delay of New Drugs in China and Effect on Patients' Health. Purpose: Although the launch delay of new drugs in China has been a deep concern during the past few years, research on this topic is scarce. The effect of recent regulatory efforts, such as initiating fast review channels to improve access to medical innovations, remains unclear. In this work, we measure the launch delay in China and study whether the fast channels contribute to shorter delays. We also offer an examination of the effect of launch delay on patients' health.
Methods: We examined the launch delays of 40 new drugs engaged in the 3 national price negotiations in China. Launch delay was defined as the differences between the approval dates of the United States or the European Union and that of China and was measured according to approved indications of every specific drug. Thirty-four health impact models comparing the new drugs and their corresponding comparator therapies were populated with open data from published studies to assess the loss of health attributable to launch delay. The time horizon for each model was the specified delay time.
Findings: A total of 40 new drugs with 54 approvals were studied. The median delay was 44.40 months (range, 7.30-196.24 months). For the 20 approvals granted with the fast channels, the median delay was 38.14 months, which was shorter than the 68.25 months of those 34 approvals on the track of standard procedure (P = 0.0276). Moreover, among the 34 health models for 27 new drugs, the largest loss of health was 5.76 life-years and 4.14 quality-adjusted life-years per potential patient, whereas the least loss was 0.006 life-years per head and 0.0035 quality-adjusted life-years per head, respectively.
Implications: Access to new drugs is delayed significantly in China, which may undermine patient benefit by causing loss of life-years and quality of life. The fast review procedures in China have appeared to mitigate the launch delay.
abstract_id: PUBMED:36451186
Changes in launch delay and availability of pharmaceuticals in 30 European markets over the past two decades. Background: The timing of the launch of a new drug is an important factor that determines access for patients. We evaluated patient access to pharmaceuticals in 30 European markets over the past two decades.
Methods: Launch dates were extracted from the IQVIA (formerly IMS) database for 30 European countries for all pharmaceuticals launched internationally between 2000 and 2017. We defined launch delay as the difference between the first international launch date and the corresponding national launch date, and calculated these for each country in our sample over time. Additionally, we ranked countries according to their launch delays and looked at changes in the ranking order over time. Lastly, we determined the availability of new pharmaceuticals in each country, calculating this as the percentage of these pharmaceuticals that were available in each country during a pre-specified interval.
Results: There was a clear trend towards a decrease in launch delays across all countries from 2000 (37.2 months) to 2017 (11.8 months). Over the entire observation period, the three fastest launching countries were the Netherlands, Sweden, and Germany, whereas the three slowest were Bosnia-Herzegovina, Serbia, and Turkey. Germany had the highest availability of new pharmaceuticals with 85.7%, followed by the United Kingdom (83.1%) and Norway (82.9%). Countries with the lowest availability of pharmaceuticals were Bosnia-Herzegovina, Serbia, and Latvia. Gross domestic product per capita was negatively correlated with launch delay (-0.67, p < 0.000) and positively correlated with the availability of pharmaceuticals (+ 0.19, p < 0.000).
Conclusion: Launch delay and the availability of pharmaceuticals varied substantially across all 30 European countries. Using countries with above-average availability and below-average launch delays as a benchmark, stakeholders may discuss or modify current pharmaceutical policy, if needed, to improve access to pharmaceutical care.
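A minimal sketch of how launch delay could be derived from launch dates and correlated with GDP per capita is shown below; both files and all column names are assumptions for illustration.

```python
# Illustrative only: computing launch delay from launch dates and correlating it with GDP per capita.
# Both CSV files and all column names are hypothetical.
import pandas as pd
from scipy import stats

launches = pd.read_csv("launch_dates.csv",
                       parse_dates=["first_international_launch", "national_launch"])
launches["delay_months"] = (
    (launches["national_launch"] - launches["first_international_launch"]).dt.days / 30.44
)

by_country = launches.groupby("country")["delay_months"].mean().reset_index()
gdp = pd.read_csv("gdp_per_capita.csv")  # assumed columns: country, gdp_per_capita
merged = by_country.merge(gdp, on="country")

r, p = stats.pearsonr(merged["gdp_per_capita"], merged["delay_months"])
print(f"GDP per capita vs launch delay: r = {r:.2f}, p = {p:.3g}")
```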
abstract_id: PUBMED:38491770
Harmonizing regulatory market approval of products with high safety requirements: Evidence from the European pharmaceutical market. We causally analyzed whether being a member of the European Union (EU) and having access to a centralized marketing authorization procedure (centralized procedure [CP]) affects availability and time to launch of new pharmaceuticals. We employed multiple difference-in-differences models, exploiting the eastern enlargement of the EU as well as changes in the indications that fall within the compulsory or voluntary scope of the CP. Results showed that countries experienced a mean decrease in launch delay of 10.9 months (p = 0.004) after joining the EU. Effects were higher among pharmaceuticals that belong to indications that might voluntarily participate in the CP but are not obliged to. These are often financially less attractive to manufacturers than pharmaceuticals within the compulsory scope. Availability of new pharmaceuticals launched remained unaffected. We found signs that the magnitude of the country-specific effect of centralized marketing authorization on launch delay may be influenced by strategic decisions of manufacturers at the national level (e.g., parallel trade or reference pricing).
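A simplified sketch of a two-way fixed-effects difference-in-differences specification of the kind described above follows; the panel file and variable names are assumptions.

```python
# Illustrative only: two-way fixed-effects difference-in-differences for EU accession and launch delay.
# "launch_delays_panel.csv" and its columns (country, year, launch_delay_months, eu_member) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("launch_delays_panel.csv")

# eu_member switches from 0 to 1 when a country joins the EU; country and year
# fixed effects absorb time-invariant country traits and common shocks.
did = smf.ols(
    "launch_delay_months ~ eu_member + C(country) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})

print("DiD estimate (months):", did.params["eu_member"])
```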
abstract_id: PUBMED:36794271
Characteristics of drug safety alerts issued by the Spanish Medicines Agency. Objectives: To describe the characteristics of safety alerts issued by the Spanish Medicines Agency (AEMPS) and the Spanish Pharmacovigilance System over a 7-year period and the regulatory actions they generated. Methods: A retrospective analysis was carried out of drug safety alerts published on the AEMPS website from 1 January 2013 to 31 December 2019. Alerts that were not drug-related or were addressed to patients rather than healthcare professionals were excluded. Results: During the study period, 126 safety alerts were issued, 12 of which were excluded because they were not related to drugs or were addressed to patients and 22 others were excluded as they were duplications of previous alerts. The remaining 92 alerts reported 147 adverse drug reactions (ADRs) involving 84 drugs. The most frequent source of information triggering a safety alert was spontaneous reporting (32.6%). Four alerts (4.3%) specifically addressed health issues related to children. ADRs were considered serious in 85.9% of the alerts. The most frequent ADRs were hepatitis (seven alerts) and congenital malformations (five alerts), and the most frequent drug classes were antineoplastic and immunomodulating agents (23%). Regarding the drugs involved, 22 (26.2%) were "under additional monitoring." Regulatory actions induced changes in the Summary of Product Characteristics in 44.6% of alerts, and in eight cases (8.7%), the alert led to withdrawal from the market of medicines with an unfavorable benefit/risk ratio. Conclusion: This study provides an overview of drug safety alerts issued by the Spanish Medicines Agency over a 7-year period and highlights the contribution of spontaneous reporting of ADRs and the need to assess safety throughout the lifecycle of medicines.
abstract_id: PUBMED:35781506
Association between Post-marketing Safety-related Regulatory Actions and Characteristics of New Drugs Approved in Japan between 2005 and 2016 The pharmacovigilance activities of new drugs are usually planned and conducted based on the clinical safety information obtained at approval. Revealing pre- and post-marketing drug characteristics associated with post-marketing safety-related regulatory actions (PSRAs) would help facilitate pharmacovigilance activities as these activities are not sufficient for early detection of safety signals that require warning. Therefore, we investigated the association between PSRAs and characteristics of new drugs in Japan. New active substances approved in Japan between fiscal year 2005 and 2015 were analyzed. PSRAs were defined as "revisions of precautions in drug package insert" instructed by the regulatory authority within the first 5 years after the initial approval (up to 2021). Drug characteristics included therapeutic area, number of Japanese subjects in clinical trials, dose-response study in Japanese subjects, approval lag between Japan and the United States or Europe (US/EU), novelty of the drug, estimated number of target patients, and number of supplemental approvals. Negative binomial regression and path analyses were performed to investigate the association between PSRAs and drug characteristics. PSRAs were more common among antineoplastic agents and drugs with a larger estimated number of target patients and were less common among drugs with a longer approval lag between Japan and the US/EU. Supplemental approval was more common among antineoplastic agents, and there were fewer target patients for novel drugs. For new drugs with the characteristics identified in the present study, it is important to proactively collect post-market safety information by intensifying patient monitoring.
abstract_id: PUBMED:30094880
Analysis of factors related to the occurrence of important drug-specific postmarketing safety-related regulatory actions: A cohort study focused on first-in-class drugs. Purpose: First-in-class (FIC) drugs with novel modes of action pose concerns regarding important postmarketing safety issues. The purpose of this study was to analyze the factors related to the occurrence of postmarketing safety-related regulatory actions (PSRAs) for drugs approved in the United States (US), with a focus on FIC drugs.
Methods: New molecular entities and new therapeutic biologics approved in the United States between 1 January 2003 and 31 December 2013 were included in the analysis. Important drug-specific PSRAs were defined as market withdrawal or the addition of new black box warnings or warnings due to adverse drug reactions. The relationship between baseline characteristics and the occurrence of important drug-specific PSRAs was investigated using a multivariate logistic regression model. We also defined the event as the first important PSRA and estimated the time-to-event for each factor.
Results: ATC category L (antineoplastic and immunomodulating agents) and FIC drug classification were shown to be statistically significant factors, with odds ratios of 2.15 (95% CI: 1.12-4.11; P = 0.0203) and 1.87 (95% CI: 1.06-3.31; P = 0.0309), respectively. ATC category L and FIC drugs were also significant factors for time to occurrence of the first event.
Conclusion: FIC designation and ATC category L were identified as factors related to important drug-specific PSRAs. These factors were also associated with the time to occurrence of the first important drug-specific PSRAs.
abstract_id: PUBMED:28582876
FDA safety actions for antidiabetic drugs marketed in the US, 1980-2015. Objectives: Concerns about safety and complexity of diabetes treatments have increased overtime. We assessed secular trends in the FDA approvals, market discontinuations, and safety actions of all antidiabetic drugs marketed in the US in the period 1980-2015.
Methods: Regulatory and safety related information about FDA-approved pharmaceuticals for diabetes treatment was collected from the FDA databases, the Orange Book, and Drugs@FDA. Descriptive statistics were performed to describe trends in approvals, discontinuations, and safety actions.
Results: The number of insulins and analogue approvals declined after the 1980s; whereas, the approvals of non-insulin antidiabetic drugs increased after 1995. The number of antidiabetic drugs with FDA safety actions significantly increased overtime. Overall, 59.0% of insulins and analogues and 5.7% of non-insulin antidiabetic drugs were discontinued from the market. The FDA released at least one safety action for 7.7% of insulins and analogues and 88.7% of non-insulin antidiabetic drugs.
Conclusion: Newly approved antidiabetic drugs have raised safety concerns and led to FDA safety regulatory actions including boxed warnings, risk evaluation and mitigation strategies, medication guides, and safety communications to health care providers. There is a need for systematic post-marketing studies assessing the long-term safety of antidiabetic drugs to improve patient safety and health outcomes.
abstract_id: PUBMED:31846100
Post-marketing safety-related regulatory actions on first-in-class drugs: A double-cohort study. What Is Known And Objective: New first-in-class (FIC) drugs with novel mechanisms of action may be highly effective, but lack adequate safety information, and therefore may be associated with crucial post-marketing safety issues. The objective of this study was to evaluate the post-marketing risk of FIC drugs by comparing the occurrence of post-marketing safety-related regulatory actions (PSRAs) due to FIC drugs with that due to other new drugs.
Methods: All new molecular entities and therapeutic biologics approved in the United States between 1 January 2003 and 31 December 2013, except diagnostic agents and vaccines, were included in this study. Drugs with novel mechanisms of action at the time of approval were classified as the FIC cohort and other new drugs as the control cohort. PSRAs were defined as safety-related post-marketing withdrawal, new issuance or the addition of black box warnings. Specifically, we identified PSRAs associated with adverse drug reactions (ADR-PSRAs). Subsequently, we identified drug allergy ADR-PSRAs and class-effect ADR-PSRAs, and also extracted drug-specific ADR-PSRAs. To evaluate the post-marketing safety risk of FIC drugs, we estimated the odds ratio of the occurrence of ADR-PSRAs between the FIC cohort and the control cohort.
Results And Discussion: The odds ratio of the occurrence of all ADR-PSRA in the FIC cohort was 0.96 (95% CI: 0.57-1.61, P = .8758), showing no difference compared to that of the control cohort. However, the odds ratio of the occurrence of drug-specific ADR-PSRAs in the FIC cohort was 2.06 (95% CI: 1.20-3.55, P = .0091).
What Is New And Conclusion: This study demonstrated that a strong relationship existed between FIC drugs and the occurrence of drug-specific ADR-PSRAs, suggesting that post-marketing safety risk for FIC drugs is higher than that for other new drugs given the same class at approval.
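As a toy worked example of the double-cohort odds ratio comparison described above, the following computes an odds ratio with a Woolf-type 95% confidence interval from placeholder counts (not the study's data).

```python
# Toy worked example only: odds ratio with a Woolf-type 95% CI from placeholder 2x2 counts.
import numpy as np

#                       PSRA yes  PSRA no
fic_cohort     = np.array([20,       80])   # hypothetical counts, not the study data
control_cohort = np.array([30,      220])

odds_ratio = (fic_cohort[0] * control_cohort[1]) / (fic_cohort[1] * control_cohort[0])

# Woolf (log) method for the confidence interval.
se_log_or = np.sqrt(1 / fic_cohort[0] + 1 / fic_cohort[1] +
                    1 / control_cohort[0] + 1 / control_cohort[1])
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```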
Answer: The analysis of pharmaceutical safety-related regulatory actions (SRRAs) in Japan suggests that there may indeed be tradeoffs between the safety of drugs and their launch delay. A study examining new drugs approved in Japan from January 2000 to December 2005 found that longer launch lags were associated with fewer SRRAs, indicating that when drugs are introduced later in Japan compared to Western countries, they may benefit from additional safety data accumulated abroad, potentially leading to a safer post-marketing experience (PUBMED:21098757). This could imply that Japan's delayed access to new drugs allows for a "free-riding" effect on safety data from other countries, which might contribute to a safer drug profile upon their eventual launch in Japan.
However, this tradeoff is complex and multifaceted. For instance, another study analyzing safety issues appended to drug package inserts in Japan in 2012 found that most safety issues led to prompt regulatory actions predominantly based on spontaneous reports, with a median duration from drug approval to regulatory action of 8 years (PUBMED:28722235). This suggests that while Japan may benefit from delayed drug launches in terms of safety, the system still relies heavily on post-marketing surveillance to identify and act upon safety issues.
Moreover, the association between post-marketing safety-related regulatory actions and characteristics of new drugs approved in Japan between 2005 and 2016 showed that antineoplastic agents and drugs with a larger estimated number of target patients were more likely to have PSRAs, while drugs with a longer approval lag between Japan and the US/EU were less likely to have PSRAs (PUBMED:35781506). This further supports the notion that a longer approval lag can be associated with fewer safety issues post-launch.
In summary, the evidence suggests that there is a tradeoff between safer drugs and launch delay in Japan, with delayed access potentially allowing for a more robust safety profile at the cost of slower availability of new treatments to patients. However, the relationship is influenced by various factors, including the type of drug, the size of the target patient population, and the effectiveness of post-marketing surveillance systems. |
Instruction: Do mixed histological features affect survival benefit from neoadjuvant platinum-based combination chemotherapy in patients with locally advanced bladder cancer?
Abstracts:
abstract_id: PUBMED:21105991
Do mixed histological features affect survival benefit from neoadjuvant platinum-based combination chemotherapy in patients with locally advanced bladder cancer? A secondary analysis of Southwest Oncology Group-Directed Intergroup Study (S8710). Objective: • To determine whether the effect of neoadjuvant chemotherapy with methotrexate, vinblastine, doxorubicin and cisplatin (MVAC) on the survival of patients with locally advanced urothelial carcinoma (UC) of the bladder treated with radical cystectomy varies with the presence of non-urothelial components in the tumour.
Patients And Methods: • This is a secondary analysis of the Southwest Oncology Group-directed intergroup randomized trial S8710 of neoadjuvant MVAC followed by cystectomy versus cystectomy alone for treatment of locally advanced UC of the bladder. • For the purpose of these analyses, tumours were classified based on the presence of non-urothelial components as either pure UC (n= 236) or mixed tumours (n= 59). Non-urothelial components included squamous and glandular differentiation. • Cox regression models were used to estimate the effect of neoadjuvant MVAC on all-cause mortality for patients with pure UC and for patients with mixed tumours, with adjustment for age and clinical stage.
Results: • There was evidence of a survival benefit from chemotherapy in patients with mixed tumours (hazard ratio 0.46; 95% CI 0.25-0.87; P= 0.02). Patients with pure UC had improved survival on the chemotherapy arm but the survival benefit was not statistically significant (hazard ratio 0.90; 95% CI 0.67-1.21; P= 0.48). • There was marginal evidence that the survival benefit of chemotherapy in patients with mixed tumours was greater than it was for patients with pure UC (interaction P= 0.09).
Conclusion: • Presence of squamous or glandular differentiation in locally advanced UC of the bladder does not confer resistance to MVAC and in fact may be an indication for the use of neoadjuvant chemotherapy before radical cystectomy.
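The Cox model with a treatment-by-histology interaction used in this secondary analysis can be sketched as follows; the cohort file and column names are assumptions, and clinical stage is assumed to be numerically encoded.

```python
# Illustrative only: Cox model with a treatment-by-histology interaction term.
# "s8710_like_cohort.csv" and its columns are hypothetical; clinical_stage is assumed numerically encoded.
import pandas as pd
from lifelines import CoxPHFitter

pts = pd.read_csv("s8710_like_cohort.csv")
pts["mvac_x_mixed"] = pts["neoadjuvant_mvac"] * pts["mixed_histology"]

cph = CoxPHFitter()
cph.fit(
    pts[["time_months", "death", "neoadjuvant_mvac", "mixed_histology",
         "mvac_x_mixed", "age", "clinical_stage"]],
    duration_col="time_months",
    event_col="death",
)
# The hazard ratio for mvac_x_mixed tests whether the chemotherapy benefit differs by histology.
cph.print_summary()
```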
abstract_id: PUBMED:35603905
Survival after neoadjuvant/induction combination immunotherapy vs combination platinum-based chemotherapy for locally advanced (Stage III) urothelial cancer. Despite treatment with cisplatin-based chemotherapy and surgical resection, clinical outcomes of patients with locally advanced urothelial carcinoma (UC) remain poor. We compared neoadjuvant/induction platinum-based combination chemotherapy (NAIC) with combination immune checkpoint inhibition (cICI). We identified 602 patients who attended our outpatient bladder cancer clinic in 2018 to 2019. Patients were included if they received NAIC or cICI for cT3-4aN0M0 or cT1-4aN1-3M0 UC. NAIC consisted of cisplatin-based chemotherapy or gemcitabine-carboplatin in case of cisplatin-ineligibility. A subset of patients (cisplatin-ineligibility or refusal of NAIC) received ipilimumab plus nivolumab in the NABUCCO-trial (NCT03387761). Treatments were compared using the log-rank test and propensity score-weighted Cox regression models. We included 107 Stage III UC patients treated with NAIC (n = 83) or cICI (n = 24). NAIC was discontinued in 11 patients due to progression (n = 6; 7%) or toxicity (n = 5; 6%), while cICI was discontinued in 6 patients (25%) after 2 cycles due to toxicity (P = .205). After NAIC, patients had surgical resection (n = 50; 60%), chemoradiation (n = 26; 30%), or no consolidating treatment due to progression (n = 5; 6%) or toxicity (n = 2; 2%). After cICI, all patients underwent resection. After resection (n = 74), complete pathological response (ypT0N0) was achieved in 11 (22%) NAIC-patients and 11 (46%) cICI-patients (P = .056). Median (IQR) follow-up was 26 (20-32) months. cICI was associated with superior progression-free survival (P = .003) and overall survival (P = .003) compared to NAIC. Our study showed superior survival in Stage III UC patients pretreated with cICI if compared to NAIC. Our findings provide a strong rationale for validation of cICI for locally advanced UC in a comparative phase-3 trial.
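A rough sketch of a propensity score-weighted (IPTW) Cox comparison of the kind described above is given below; the cohort file and all variable names are assumptions.

```python
# Illustrative only: inverse-probability-of-treatment-weighted (IPTW) Cox comparison of cICI vs NAIC.
# "stage3_uc_cohort.csv" and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

cohort = pd.read_csv("stage3_uc_cohort.csv")

# 1. Propensity score: probability of receiving cICI given baseline covariates.
ps = smf.logit("cici ~ age + C(cT_stage) + C(cN_stage)", data=cohort).fit().predict(cohort)

# 2. Stabilised inverse-probability weights.
p_treated = cohort["cici"].mean()
cohort["iptw"] = cohort["cici"] * p_treated / ps + (1 - cohort["cici"]) * (1 - p_treated) / (1 - ps)

# 3. Weighted Cox model for progression-free survival.
cph = CoxPHFitter()
cph.fit(cohort[["months_to_event", "progressed", "cici", "iptw"]],
        duration_col="months_to_event", event_col="progressed",
        weights_col="iptw", robust=True)
cph.print_summary()
```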
abstract_id: PUBMED:19468366
Treatment of locally advanced and metastatic bladder cancer. Background: There is significant variation in the treatment strategies adopted for locally advanced T3b, T4a, N1-3 and metastatic bladder cancer. There is increasing evidence that these patients can be offered some benefit in terms of disease-free survival and quality of life. This article reviews the current literature on treatment strategies in locally advanced and metastatic bladder cancer.
Materials And Methods: An extensive literature search was done on Medline/PubMed from 1980-2007 using the key words "treatment of locally advanced, metastatic bladder cancer". Standard textbooks on urology and urologic oncology and relevant monographs were reviewed. Guidelines such as the National Comprehensive Cancer Network, European Association of Urology and American Urological Association guidelines were also studied.
Results And Conclusions: There is a place for radical cystectomy in locally advanced T3b-T4 and N1-3 bladder cancer, but radical cystectomy alone rarely cures this subgroup of patients. There is increasing evidence that meticulous surgical clearance and extended lymphadenectomy have a significant impact on disease-free survival. Adjuvant chemotherapy has been found to be effective in terms of recurrence-free survival and better than cystectomy alone. Neoadjuvant chemotherapy followed by radical cystectomy also has beneficial effects in terms of downstaging the disease and improving recurrence-free survival. Perioperative chemotherapy (adjuvant/neoadjuvant) confers a 5-7% survival benefit and a 10% reduction in cancer-related death. Excellent five-year survival rates (around 80%) have been achieved in patients reaching pT0 stage at surgery following chemotherapy, and an overall 40% five-year survival in node-positive patients is promising. Though practiced widely, perioperative chemotherapy is not yet considered a standard of care; ongoing trials are likely to help reach a consensus on this. There is no role for preoperative or postoperative radiotherapy in locally advanced/metastatic bladder cancer except in non-TCC bilharzial/squamous cell carcinoma of the bladder. Nomograms and prognostic factor evaluation may help predict disease relapse and tailor treatment accordingly. Newer and more effective chemotherapeutic drugs and ongoing trials will have a significant impact on the treatment strategies and outcomes of these patients in the future.
abstract_id: PUBMED:30728860
Contemporary best practice in the use of neoadjuvant chemotherapy in muscle-invasive bladder cancer. Background: We aimed to provide a comprehensive literature review on the best practice management of patients with nonmetastatic muscle-invasive bladder cancer (MIBC) using neoadjuvant chemotherapy (NAC).
Method: Between July and September 2018, we conducted a systematic review using MEDLINE and EMBASE electronic bibliographic databases. The search strategy included the following terms: Neoadjuvant Therapy and Urinary Bladder Neoplasms.
Results: There is no benefit from single-agent platinum-based chemotherapy. Platinum-based NAC is the gold standard therapy and mainly consists of a combination of cisplatin, vinblastine, methotrexate, doxorubicin, gemcitabine or even epirubicin (MVAC). At 5 years, the absolute overall survival benefit of MVAC was 5% and the absolute disease-free survival was improved by 9%. This effect was observed independently of the type of local treatment and did not vary between subgroups of patients. Moreover, a ypT0 stage (complete pathological response) after radical cystectomy was a surrogate marker for improved oncological outcomes. High-dose-intensity MVAC has been shown to decrease toxicity (with grade 3-4 toxicity ranging from 0% to 26%) without impacting oncological outcomes. To date, there is no role for carboplatin administration in the neoadjuvant setting in patients who are unfit for cisplatin-based NAC administration. So far, there is no published trial evaluating the role of immunotherapy in a neoadjuvant setting, but many promising studies are ongoing.
Conclusion: There is a strong level of evidence supporting the clinical use of a high-dose-intensity combination of methotrexate, vinblastine, doxorubicin and cisplatin in a neoadjuvant setting. The landscape of MIBC therapies should evolve in the near future with emerging immunotherapies.
abstract_id: PUBMED:27053504
Neoadjuvant Chemotherapy for Muscle-Invasive Bladder Cancer: A Systematic Review and Two-Step Meta-Analysis. Background: Platinum-based neoadjuvant chemotherapy has been shown to improve survival outcomes in muscle-invasive bladder cancer patients. We performed a systematic review and meta-analysis to provide updated results of previous findings. We also summarized published data to compare clinical outcomes of methotrexate, vinblastine, doxorubicin, and cisplatin (MVAC) versus gemcitabine and cisplatin/carboplatin (GC) in the neoadjuvant setting.
Methods: A meta-analysis of 15 randomized clinical trials was performed to compare neoadjuvant chemotherapy plus local treatment with the same local treatment alone. Because no randomized trials have investigated MVAC versus GC in the neoadjuvant setting, a meta-analysis of 13 retrospective studies was performed to compare MVAC with GC.
Results: A total of 3,285 patients were included in 15 randomized clinical trials. There was a significant overall survival (OS) benefit associated with cisplatin-based neoadjuvant chemotherapy (hazard ratio [HR], 0.87; 95% confidence interval [CI], 0.79-0.96). A total of 1,766 patients were included in 13 retrospective studies. There was no significant difference in pathological complete response between MVAC and GC. However, GC was associated with a significantly reduced overall survival (HR, 1.26; 95% CI, 1.01-1.57). After excluding carboplatin data, GC still seemed to be inferior to MVAC in OS (HR, 1.31; 95% CI, 0.99-1.74), but the difference was no longer statistically significant.
Conclusion: These results support the use of cisplatin-based combination neoadjuvant chemotherapy in muscle-invasive bladder cancer. Although GC and MVAC had similar treatment response rates, the different survival outcome observed in this study requires further investigation.
Implications For Practice: Platinum-based neoadjuvant chemotherapy (NCT) has been shown to improve survival outcomes in muscle-invasive bladder cancer (MIBC) patients, but the optimal neoadjuvant regimen has not been established. Methotrexate, vinblastine, doxorubicin, and cisplatin (MVAC) and gemcitabine and cisplatin/carboplatin (GC) are two of the most commonly used chemotherapy regimens in modern oncology. In this two-step meta-analysis, an updated and more precise estimate of the survival benefit of cisplatin-based NCT in MIBC is provided. This study also demonstrated that MVAC might have superior overall survival compared with GC (with or without carboplatin data) in the neoadjuvant setting. The findings suggest that NCT should be standard care in MIBC, and MVAC could be the preferred neoadjuvant regimen.
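The pooled hazard ratios quoted in this and the preceding abstracts come from inverse-variance weighting of study-level log hazard ratios. The short Python sketch below shows the fixed-effect version of that calculation; the three studies and their numbers are illustrative only, not the trials analysed above.

import numpy as np

# (HR, lower 95% CI, upper 95% CI) per study -- illustrative values
studies = [(0.85, 0.70, 1.03),
           (0.90, 0.75, 1.08),
           (0.82, 0.65, 1.03)]

log_hr = np.log([s[0] for s in studies])
se = (np.log([s[2] for s in studies]) - np.log([s[1] for s in studies])) / (2 * 1.96)
w = 1.0 / se**2                                   # inverse-variance weights

pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled HR = {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")

A random-effects model adds a between-study variance term to each weight but follows the same pattern.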
abstract_id: PUBMED:10565159
The prognostic value of adjuvant and neoadjuvant chemotherapy in total cystectomy for locally advanced bladder cancer. Purpose: Adjuvant and neoadjuvant chemotherapy have been widely used as adjunctive treatment in patients requiring total cystectomy for locally advanced transitional cell carcinoma of the bladder. However, there is no conclusive evidence that adjunctive chemotherapy improves survival, and no agreement exists on which subsets of such patients derive significant benefit from it. This study retrospectively sought to clarify these points.
Patients And Methods: We retrospectively analyzed clinical and pathological records of the 229 patients with transitional cell carcinoma of the bladder who underwent total cystectomy with or without lymph node dissection in our University Hospital from January 1975 to December 1997. Forty-two patients received 1-4 cycles (mean = 1.7) of adjuvant chemotherapy with VPMisCF (n = 19), CisCA (n = 4), MVAC (n = 8), or MEC (Methotrexate, Epirubicin and Cisplatin) (n = 11). Twenty-three patients received 1-4 cycles (mean = 2.1) of neoadjuvant chemotherapy with CisCA (n = 2), MVAC (n = 5), or MEC (n = 16). Using the Kaplan-Meier method, disease-specific survival rate was assessed according to various clinical and pathological factors as well as the administration of adjuvant or neoadjuvant chemotherapy. The generalized-Wilcoxon test was used to evaluate statistical significance (p < 0.05) of survival curves for two or more groups. In addition, a multivariate analysis using the Cox proportional hazards model was performed with respect to multiple clinical and pathological parameters, and treatment modalities.
Results: In patients who received neither adjuvant chemotherapy nor radiotherapy, the disease-specific survival rate was significantly lower in those with pT3a and/or more advanced tumors compared with those with pT2 or less advanced tumors. The survival rate in patients with positive lymph node metastasis was significantly lower than that in patients without lymph node metastasis. No apparent survival benefit was noted for patients who received adjuvant chemotherapy when compared with patients who had pT3a or more advanced tumors and were followed without any adjunctive therapy. In patients with pN2 or more advanced lymph node metastasis, the survival rate of those who received adjuvant CisCA/MVAC/MEC chemotherapy was significantly higher than that of those without any adjunctive therapy. Although no apparent survival benefit was observed in patients who received neoadjuvant chemotherapy, the survival rate in patients whose tumor was considered to be down-staged to pT1 or lower was significantly higher than in patients who did not receive neoadjuvant chemotherapy and had pT3a or higher pT-stage tumors. The survival rate in patients whose tumor showed a clinical partial or complete response to neoadjuvant chemotherapy was also significantly higher than in the same control patients. However, the multivariate analysis revealed no significant survival benefit after adjuvant chemotherapy or after neoadjuvant chemotherapy.
Conclusions: Adjuvant chemotherapy after total cystectomy is an acceptable approach in patients with pN2 or higher pN-stage bladder cancer. A significant survival benefit may be obtained in patients who achieve pathological downstaging or a partial to complete clinical response after neoadjuvant chemotherapy. To obtain the maximum survival benefit from present chemotherapeutic regimens and to avoid giving toxic agents to unresponsive patients, more reliable markers are needed to differentiate tumors that will respond well to present regimens from those that will respond poorly.
abstract_id: PUBMED:15939524
Neoadjuvant chemotherapy in invasive bladder cancer: update of a systematic review and meta-analysis of individual patient data advanced bladder cancer (ABC) meta-analysis collaboration. Objectives: To update a systematic review and meta-analysis that assesses the effect of neoadjuvant chemotherapy in the treatment of patients with invasive bladder cancer.
Methods: Following a prespecified protocol, we analysed updated individual patient data from all eligible randomised controlled trials that compared neoadjuvant chemotherapy plus local treatment with the same local treatment alone.
Results: Updated results are based on 11 trials (3005 patients), comprising 98% of all patients from known eligible randomised controlled trials. We found a significant survival benefit associated with platinum-based combination chemotherapy (HR = 0.86, 95% CI 0.77-0.95, p = 0.003). This is equivalent to a 5% absolute improvement in survival at 5 years. There was also a significant disease-free survival benefit associated with platinum-based combination chemotherapy (HR = 0.78, 95% CI 0.71-0.86, p < 0.0001), equivalent to a 9% absolute improvement at 5 years.
Conclusions: These results provide the best available evidence in support of the use of neoadjuvant platinum-based combination chemotherapy.
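The link between the hazard ratio of 0.86 and the quoted 5% absolute improvement can be reproduced under the proportional-hazards assumption, where S_treated(t) = S_control(t)**HR. The baseline 50% five-year survival used in the short Python calculation below is an assumption chosen for illustration, not a figure taken from the meta-analysis.

hr = 0.86
s_control_5yr = 0.50                       # assumed 5-year survival without neoadjuvant chemotherapy
s_treated_5yr = s_control_5yr ** hr        # proportional-hazards relationship, about 0.55
benefit = 100 * (s_treated_5yr - s_control_5yr)
print(f"absolute 5-year survival benefit ~= {benefit:.1f} percentage points")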
abstract_id: PUBMED:28424325
FDA Approval Summary: Atezolizumab for the Treatment of Patients with Progressive Advanced Urothelial Carcinoma after Platinum-Containing Chemotherapy. Until recently in the United States, no products were approved for second-line treatment of advanced urothelial carcinoma. On May 18, 2016, the U.S. Food and Drug Administration approved atezolizumab for the treatment of patients with locally advanced or metastatic urothelial carcinoma whose disease progressed during or following platinum-containing chemotherapy or within 12 months of neoadjuvant or adjuvant treatment with platinum-containing chemotherapy. Atezolizumab is a programmed death-ligand 1 (PD-L1) blocking antibody and represents the first approved product directed against PD-L1. This accelerated approval was based on results of a single-arm trial in 310 patients with locally advanced or metastatic urothelial carcinoma who had disease progression after prior platinum-containing chemotherapy. Patients received atezolizumab 1,200 mg intravenously every 3 weeks until disease progression or unacceptable toxicity. Key efficacy measures were objective response rate (ORR), as assessed by Independent Review per RECIST 1.1, and duration of response (DoR). With a median follow-up of 14.4 months, confirmed ORR was 14.8% (95% CI: 11.1, 19.3) in all treated patients. Median DoR was not reached and response durations ranged from 2.1+ to 13.8+ months. Of the 46 responders, 37 patients had an ongoing response for ≥ 6 months. The most common adverse reactions (≥20%) were fatigue, decreased appetite, nausea, urinary tract infection, pyrexia, and constipation. Infection and immune-related adverse events also occurred, including pneumonitis, hepatitis, colitis, endocrine disorders, and rashes. Overall, the benefit-risk assessment was favorable to support accelerated approval. The observed clinical benefits need to be verified in confirmatory trial(s).
Implications For Practice: This accelerated approval of atezolizumab for second-line use in advanced urothelial carcinoma provides patients with an effective, novel treatment option for the management of their disease. This represents the first immunotherapy approved in this disease setting.
abstract_id: PUBMED:21717438
A comparison of the outcomes of neoadjuvant and adjuvant chemotherapy for clinical T2-T4aN0-N2M0 bladder cancer. Background: Despite evidence supporting perioperative chemotherapy, few randomized studies compare neoadjuvant and adjuvant chemotherapy for bladder cancer. Consequently, the standard of care regarding the timing of chemotherapy for locally advanced bladder cancer remains controversial. We compared patient outcomes following neoadjuvant or adjuvant systemic chemotherapy for cT2-T4aN0-N2M0 bladder cancer.
Methods: In a retrospective review of a single institutional database from 1988 through 2009, we identified patients receiving neoadjuvant or adjuvant multiagent platinum-based systemic chemotherapy for locally advanced bladder cancer. Survival analysis was performed comparing disease-specific survival (DSS) and overall survival (OS).
Results: A total of 146 patients received systemic perioperative chemotherapy (73 neoadjuvant, 73 adjuvant). Of these, 84% (122/146) received cisplatin-based chemotherapy compared with carboplatin-based chemotherapy (24/146, 16.4%). Most patients receiving cisplatin-based chemotherapy were treated with methotrexate/vinblastine/adriamycin/cisplatin (79/122, 64.8%), whereas the remaining patients received gemcitabine/cisplatin (GC) (43/122, 35.2%). In multivariable analysis, there was no significant difference in DSS (P = .46) or OS (P = .76) between neoadjuvant or adjuvant chemotherapy groups. There was statistically significant improvement in DSS when patients received neoadjuvant GC rather than adjuvant GC (P = .049, hazard ratio, 10.6; 95% confidence interval, 1.01-112.2).
Conclusion: In this study, there was no statistically significant difference in OS and DSS between patients receiving neoadjuvant versus adjuvant systemic platinum-based chemotherapy for locally advanced bladder cancer. In addition, there was no significant difference between neoadjuvant and adjuvant cisplatin- or carboplatin-based chemotherapy. Chemotherapy sequence relative to surgery appeared less important than whether or not a patient actually received perioperative chemotherapy.
abstract_id: PUBMED:18627327
Neoadjuvant chemotherapy preceding cystectomy for bladder cancer. Background: Occult micrometastasis at the time of radical cystectomy leads predominantly to distant failures in patients with locally advanced, muscle-invasive transitional cell carcinoma of the bladder.
Objectives: Data supporting neoadjuvant chemotherapy for locally advanced bladder cancer are reviewed.
Results: Studies evaluating adjuvant chemotherapy have been limited by inadequate statistical power. However, randomized clinical trials have demonstrated a survival benefit for neoadjuvant cisplatin-based combination chemotherapy. Trials employing neoadjuvant therapy are particularly valuable in the development of novel systemic therapy regimens, since pathological complete remission appears to be a powerful prognostic factor for long-term survival.
Conclusion: Neoadjuvant chemotherapy is a standard for the therapy of locally advanced bladder cancer, and the neoadjuvant paradigm may assist in accelerating novel agent development.
Answer: The presence of mixed histological features in locally advanced urothelial carcinoma (UC) of the bladder does not confer resistance to neoadjuvant chemotherapy with methotrexate, vinblastine, doxorubicin, and cisplatin (MVAC). In fact, a secondary analysis of the Southwest Oncology Group-directed intergroup randomized trial S8710 indicated that patients with mixed tumors, which include non-urothelial components such as squamous and glandular differentiation, may experience a survival benefit from chemotherapy (hazard ratio 0.46; 95% CI 0.25-0.87; P= 0.02). This benefit was more pronounced than in patients with pure UC, although the survival benefit for pure UC was not statistically significant (hazard ratio 0.90; 95% CI 0.67-1.21; P= 0.48). There was marginal evidence suggesting that the survival benefit of chemotherapy in patients with mixed tumors was greater than for those with pure UC (interaction P= 0.09) (PUBMED:21105991).
In summary, mixed histological features in locally advanced bladder cancer do not negatively impact the survival benefit from neoadjuvant platinum-based combination chemotherapy. Instead, these features may indicate a potential advantage for the use of such neoadjuvant chemotherapy before radical cystectomy. |
Instruction: Does early surgical intervention improve left ventricular mass regression after mitral valve repair for leaflet prolapse?
Abstracts:
abstract_id: PUBMED:21168021
Does early surgical intervention improve left ventricular mass regression after mitral valve repair for leaflet prolapse? Background: Left ventricular hypertrophy is associated with adverse cardiovascular outcomes. It is unclear whether hypertrophy caused by severe chronic mitral regurgitation regresses after mitral valve repair and, if so, which factors promote reverse remodeling and influence its prognostic significance.
Methods: Between March 1995 and December 2005, 2589 patients had mitral valve repair. Five hundred thirty patients (346 of whom were male) underwent isolated repair for leaflet prolapse and had echocardiographic data available from which the left ventricular mass index could be calculated. Concomitant preoperative tricuspid valve regurgitation was more than mild in 95 (18%) patients. Those with preoperative atrial fibrillation and other cardiac pathologies necessitating intracardiac repair were not included.
Results: Significant regression of left ventricular mass index occurred during the first 3 years (-28 g/m(2), P < .001) and was maintained during follow-up for more than 3 years (-26 g/m(2), P < .001). Higher preoperative left ventricular ejection fraction and greater preoperative left ventricular mass index independently predicted improved left ventricular mass index regression at 3 years. During follow-up of greater than 3 years, greater preoperative left ventricular mass index persisted in predicting improved mass regression (P < 0.001), and greater than mild preoperative tricuspid valve regurgitation was associated with less mass regression (P < .001). Late recovery of normal left ventricular ejection fraction was impaired in those with the greatest residual left ventricular mass; however, there was no difference in late symptoms or survival.
Conclusions: Performing mitral valve repair before a decrease in left ventricular ejection fraction and the development of significant secondary tricuspid valve regurgitation is associated with a greater likelihood of significant regression of left ventricular mass, possibly predicting improved recovery of normal left ventricular function after surgical intervention. These data provide additional support for early degenerative mitral valve repair.
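The study derives a left ventricular mass index from echocardiographic data. The Python sketch below shows one conventional way to do that, using the ASE-corrected Devereux formula and the DuBois body surface area; the abstract does not state which formulas were actually used, and the measurements plugged in are illustrative.

def lv_mass_g(ivsd_cm: float, lvidd_cm: float, pwtd_cm: float) -> float:
    # ASE-corrected Devereux formula; end-diastolic wall thicknesses and cavity diameter in cm
    return 0.8 * (1.04 * ((ivsd_cm + lvidd_cm + pwtd_cm) ** 3 - lvidd_cm ** 3)) + 0.6

def bsa_m2(height_cm: float, weight_kg: float) -> float:
    # DuBois body surface area
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

mass = lv_mass_g(ivsd_cm=1.1, lvidd_cm=5.6, pwtd_cm=1.0)     # illustrative measurements
lvmi = mass / bsa_m2(height_cm=172, weight_kg=78)
print(f"LV mass ~= {mass:.0f} g, LV mass index ~= {lvmi:.0f} g/m^2")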
abstract_id: PUBMED:26189162
Left ventricular performance early after repair for posterior mitral leaflet prolapse: Chordal replacement versus leaflet resection. Objective: To review hemodynamic performance early after valve repair with chordal replacement versus leaflet resection for posterior mitral leaflet prolapse.
Methods: Between April 2006 and September 2014, 72 consecutive patients underwent valve repair with chordal replacement (30 patients) or leaflet resection (42 patients) for isolated posterior mitral leaflet prolapse. Left ventricular ejection fraction, end-systolic elastance, effective arterial elastance, and ventricular efficiency were noninvasively measured by echocardiography and analyzed preoperatively and ∼ 1 month postoperatively. Mitral valve repair was accomplished in all patients, and no regurgitation (including trivial) was observed postoperatively.
Results: Chordal replacement resulted in significantly less reduction in left ventricular ejection fraction, and significantly greater increase in end-systolic elastance than leaflet resection (left ventricular ejection fraction, 4.8% vs 16.7% relative decrease [P = .005] and end-systolic elastance, 19.0% vs -1.3% relative increase [P = .012]). Despite comparable preoperative ventricular efficiency between the groups, the postoperative ventricular efficiency in the chordal replacement group was superior to that in the leaflet resection group (ventriculoarterial coupling, 32.0% vs 89.3% relative increase [P = .007] and ratio of stroke work to pressure-volume area, 4.3% vs 13.4% relative decrease [P = .008]). In multivariate analysis, operative technique was a significant determinant of left ventricular ejection fraction and ratio of stroke work to pressure-volume area (P = .030 and P = .030, respectively).
Conclusions: Chordal replacement might provide patients undergoing valve repair for posterior mitral leaflet prolapse with better postoperative ventricular performance than leaflet resection. Longer follow-up is required to compare long-term outcomes.
abstract_id: PUBMED:33744010
Mitral valve repair for isolated posterior mitral valve leaflet prolapse: The effect of respect and resect techniques on left ventricular function. Objective: Posterior mitral valve leaflet prolapse repair can be performed by leaflet resection or chordal replacement techniques. The impact of these techniques on left ventricular function remains a topic of debate, considering the presumed better preservation of mitral-ventricular continuity when leaflet resection is avoided. We explored the effect of different posterior mitral valve leaflet repair techniques on postoperative left ventricular function.
Methods: In total, 125 patients were included and divided into 2 groups: leaflet resection (n = 82) and isolated chordal replacement (n = 43). Standard and advanced echocardiographic assessments were performed preoperatively, directly postoperatively, and at late follow-up. In addition, left ventricular global longitudinal strain was measured and corrected for left ventricular end-diastolic volume to adjust for the significant changes in left ventricular volumes.
Results: At baseline, no significant intergroup difference in left ventricular function was observed measured with the corrected left ventricular global longitudinal strain (resect: 1.76% ± 0.58%/10 mL vs respect: 1.70% ± 0.57%/10 mL, P = .560). Postoperatively, corrected left ventricular global longitudinal strain worsened in both groups but improved significantly during late follow-up, returning to preoperative values (resect: 1.39% ± 0.49% to 1.71% ± 0.56%/10 mL, P < .001 and respect: 1.30% ± 0.45% to 1.70% ± 0.54%/10 mL, P < .001). Mixed model analysis showed no significant effect on the corrected left ventricular global longitudinal strain when comparing the 2 different surgical repair techniques over time (P = .943).
Conclusions: Our study showed that both leaflet resection and chordal replacement repair techniques are effective at preserving postoperative left ventricular function in patients with posterior mitral valve leaflet prolapse and significant regurgitation.
abstract_id: PUBMED:28378353
Reduction of left ventricular outflow tract obstruction with transcatheter mitral valve repair. Many patients with severe mitral regurgitation cannot undergo conventional mitral valve surgery due to prohibitive surgical risk and are candidates for transcatheter repair with an edge-to-edge technique. Prior reports suggest efficacy with this approach for mitral regurgitation due to hypertrophic cardiomyopathy with left ventricular outflow obstruction. We present a case report of transcatheter mitral valve repair for posterior leaflet prolapse with concomitant left ventricular outflow tract obstruction due to systolic anterior motion of the mitral valve in the absence of hypertrophic cardiomyopathy.
abstract_id: PUBMED:19379969
Recovery of left ventricular function after surgical correction of mitral regurgitation caused by leaflet prolapse. Objective: Recovery of ventricular function after surgical correction of mitral regurgitation is often incomplete. We studied clinical and echocardiographic factors influencing return of normal left ventricular ejection fraction after mitral valve repair or replacement for mitral regurgitation caused by leaflet prolapse.
Methods: We evaluated 1063 patients who had mitral valve repair or replacement between January 1, 1980, and December 31, 2000. A total of 2488 echocardiograms with follow-up ejection fractions were available for analysis.
Results: Of the patients, 761 were men, 924 had valve repair, and 85% underwent surgery during the study's second decade. Compared with patients who had the operation in the 1980s, patients who had surgery in the 1990s had significantly smaller preoperative left heart dimensions and a 2.4-fold greater likelihood of an ejection fraction more than 60% during follow-up. Factors independently associated with higher ejection fraction at follow-up included valve repair (vs replacement), freedom from preoperative myocardial infarction, operation in the 1990s, greater preoperative ejection fraction, and smaller left ventricular dimensions. Patients with an ejection fraction of less than 50% at discharge were 3.5-fold less likely to recover normal ejection fraction during long-term follow-up (P < .001). Patients had a greater likelihood of a follow-up ejection fraction more than 60% if preoperative ejection fraction was more than 65% (hazard ratio, 1.7) or left ventricular end-systolic dimension was less than 36 mm (hazard ratio, 2.0).
Conclusion: Early repair of mitral regurgitation caused by leaflet prolapse, before deterioration in left heart size or function, increases the likelihood of subsequent normalization of left ventricular ejection fraction.
abstract_id: PUBMED:8311600
Carpentier "sliding leaflet" technique for repair of the mitral valve: early results. Reconstructive mitral valve operation is now the preferred technique for the surgical treatment of prolapse of the posterior leaflet due to degenerative disease. Systolic anterior motion of the mitral valve with left ventricular outflow tract obstruction has been observed after such repair, with an incidence ranging from 4.5% to 10%. In an attempt to reduce the incidence of this complication, Carpentier has devised a new technique: the sliding leaflet plasty of the posterior leaflet. We report on 48 patients who underwent this new procedure between July 1990 and July 1992. One patient died perioperatively (2.1%). All other patients were able to be discharged on the ninth postoperative day. All patients underwent M-mode, two-dimensional, and Doppler echocardiography before discharge. Forty-one patients (85%) had no evidence of postoperative regurgitation, whereas 7 patients (15%) showed mild mitral valve insufficiency. Left ventricular outflow tract obstruction due to systolic anterior motion of the mitral valve was never detected. We believe that this technique of mitral valve repair is safe and seems to be effective in achieving a decreased incidence of left ventricular outflow tract obstruction.
abstract_id: PUBMED:29858366
Residual Mitral Regurgitation After Repair for Posterior Leaflet Prolapse-Importance of Preoperative Anterior Leaflet Tethering. Background: Carpentier's techniques for degenerative posterior mitral leaflet prolapse have been established with excellent long-term results reported. However, residual mitral regurgitation (MR) occasionally occurs even after a straightforward repair, though the involved mechanisms are not fully understood. We sought to identify specific preoperative echocardiographic findings associated with residual MR after a posterior mitral leaflet repair.
Methods And Results: We retrospectively studied 117 consecutive patients who underwent a primary mitral valve repair for isolated posterior mitral leaflet prolapse, including a preoperative 3-dimensional transesophageal echocardiography examination. Twelve had residual MR after the initial repair, of whom 7 required a corrective second pump run, 4 underwent conversion to mitral valve replacement, and 1 developed moderate MR within 1 month. Their preoperative parameters were compared with those of 105 patients who had an uneventful mitral valve repair. There were no hospital deaths. Multivariate analysis identified preoperative anterior mitral leaflet tethering angle as a significant predictor of residual MR (odds ratio, 6.82; 95% confidence interval, 1.8-33.8; P=0.0049). Receiver operating characteristic curve analysis revealed a cut-off value of 24.3° (area under the curve, 0.77), indicating that the anterior mitral leaflet angle predicts residual MR. In multivariate regression analysis, smaller anteroposterior mitral annular diameter (P<0.001) and lower left ventricular ejection fraction (P=0.002) were significantly associated with a higher anterior mitral leaflet angle, whereas left ventricular and left atrial dimensions had no significant correlation.
Conclusions: Anterior mitral leaflet tethering in cases of posterior mitral leaflet prolapse has an adverse impact on early results following mitral valve repair. The findings of preoperative 3-dimensional transesophageal echocardiography are important for consideration of a careful surgical strategy.
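Cut-off values such as the 24.3° tethering angle are typically chosen from a receiver operating characteristic curve, for example by maximising the Youden index. A minimal Python sketch of that step is shown below; it is not the authors' code, and the file and column names are hypothetical.

import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score

df = pd.read_csv("posterior_leaflet_repairs.csv")    # hypothetical data file
y = df["residual_mr"]                                 # 1 = residual MR after the initial repair
x = df["aml_tethering_angle_deg"]                     # preoperative anterior leaflet tethering angle

fpr, tpr, thresholds = roc_curve(y, x)
youden = tpr - fpr
best_cutoff = thresholds[youden.argmax()]
print(f"AUC = {roc_auc_score(y, x):.2f}, optimal cut-off ~= {best_cutoff:.1f} degrees")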
abstract_id: PUBMED:18692655
Determinants of early decline in ejection fraction after surgical correction of mitral regurgitation. Objective: We sought to echocardiographically examine the early changes in left ventricular size and function after mitral valve repair or replacement for mitral regurgitation caused by leaflet prolapse.
Methods: Preoperative and early postoperative echocardiograms of 861 patients with mitral regurgitation caused by leaflet prolapse who underwent mitral valve repair or replacement (with or without coronary revascularization) were studied. Among the patients, 625 (73%) were men and 779 (90%) had mitral valve repair.
Results: The rate of valve repair increased from 78% in the first decade of the study to 92% in the second decade. At early echocardiography (mean, 5 days postoperatively), we observed significant decreases in left ventricular ejection fraction (mean, -8.8) and left ventricular end-diastolic dimension (mean, -7.5). The magnitude of the early decline in ejection fraction was similar in patients who had mitral valve repair and replacement. The decrease in postoperative ejection fraction was independently associated with a lower preoperative ejection fraction, the presence of atrial fibrillation, advanced New York Heart Association functional class, greater left ventricular end-diastolic and end-systolic dimensions, and larger left atrial size.
Conclusion: Surgical correction of mitral regurgitation results in an early decrease in ejection fraction, particularly in symptomatic patients with increased left heart dimensions.
abstract_id: PUBMED:20398554
Efficacy of mitral valve repair for anterior leaflet prolapse of the mitral valve. Objective: To evaluate the therapeutic effects of mitral valve repair for the treatment of anterior leaflet prolapse of the mitral valve.
Methods: From November 1998 to October 2007, 210 patients with severe anterior leaflet prolapse of the mitral valve underwent valve repair. The condition of the valve was assessed preoperatively, intraoperatively, and postoperatively with echocardiography.
Results: The edge-to-edge repair technique was used in 134 cases (63.8%). After operation, cardiac function was NYHA class I in 168 cases and class II in 40 cases. Patients were followed up for 1-150 (25.7 +/- 29.0) months; two patients (0.95%) died of postoperative low cardiac output syndrome. Echocardiographic examination indicated that the mean postoperative left atrial diameter was (37.7 +/- 9.2) mm against the preoperative value of (47.5 +/- 12.7) mm (P < 0.05), the mean postoperative left ventricular end-diastolic diameter was (51.7 +/- 7.9) mm against the preoperative value of (67.7 +/- 10.3) mm (P < 0.05), the mean postoperative left ventricular ejection fraction was (62.2 +/- 3.2)% against the preoperative value of (52.2 +/- 6.4)% (P < 0.05), and the mean preoperative regurgitation area was (10.4 +/- 4.1) cm(2) against the postoperative value of (4.1 +/- 1.7) cm(2) (P < 0.01).
Conclusions: Optimal outcomes were achieved with the edge-to-edge technique or other appropriate mitral valve repair techniques for anterior leaflet prolapse of the mitral valve. The edge-to-edge technique is a reliable and efficient surgical technique.
abstract_id: PUBMED:37551944
Neochordae implantation versus leaflet resection in mitral valve posterior leaflet prolapse and dilated left ventricle: a propensity score matching comparison with long-term follow-up. Objectives: Uncorrected severe mitral regurgitation (MR) due to posterior prolapse leads to left ventricular dilatation. At this stage, mitral valve repair becomes mandatory to avoid permanent myocardial injury. However, whether neochordae implantation or leaflet resection provides the best results in this scenario remains unknown.
Methods: We selected 332 patients with left ventricular dilatation and severe degenerative MR due to posterior leaflet (PL) prolapse who underwent neochordae implantation (85 patients) or PL resection (247 patients) at our institution between 2008 and 2020. A propensity score matching analysis was carried out to reduce baseline differences between the groups.
Results: Matching yielded 85 neochordae implantations and 85 PL resections. At 10 years, freedom from cardiac death and freedom from mitral valve reoperation were 92.6 ± 6.1% vs 97.8 ± 2.1% and 97.7 ± 2.2% vs 95 ± 3% in the neochordae group and in the PL resection group, respectively. The MR ≥2+ recurrence rate was 23.9 ± 10% in the neochordae group and 20.8 ± 5.8% in the PL resection group (P = 0.834) at 10 years. At the last follow-up, the neochordae group showed a higher reduction of left ventricular end-diastolic diameter (44 vs 48 mm; P = 0.001) and a better ejection fraction (60% vs 55%; P < 0.001) compared to PL resection group.
Conclusions: In this subgroup of patients, both neochordae implantation and leaflet resection provide excellent durability of the repair in the long term. Neochordae implantation might have a better effect on dilated left ventricle.
Answer: Yes, early surgical intervention does improve left ventricular mass regression after mitral valve repair for leaflet prolapse. A study found that significant regression of left ventricular mass index occurred during the first 3 years after mitral valve repair and was maintained during follow-up for more than 3 years. Factors such as higher preoperative left ventricular ejection fraction and greater preoperative left ventricular mass index independently predicted improved left ventricular mass index regression at 3 years. Additionally, performing mitral valve repair before a decrease in left ventricular ejection fraction and the development of significant secondary tricuspid valve regurgitation was associated with a greater likelihood of significant regression of left ventricular mass, which could predict improved recovery of normal left ventricular function after surgical intervention (PUBMED:21168021). |
Instruction: Are dipyridamole (sensitive) calcium channels present in esophageal smooth muscle?
Abstracts:
abstract_id: PUBMED:9199491
Are dipyridamole (sensitive) calcium channels present in esophageal smooth muscle? Unlabelled: Calcium (Ca2+) entry from the extra-cellular space into the cytoplasm through voltage-dependent Ca2+ channels, specifically dipyridamole (DHP) sensitive ones (L-type), control a variety of biological processes, including excitation-contraction coupling in vascular and GI muscle cells. It has also been proposed that these channels may control esophageal contractility. However, DHP-sensitive Ca2+ channels in esophagus have not been well characterized biochemically. Thus, it is not known if these channels are similar in number or affinity to those in vascular or neural tissues--organs for which clinical use of calcium channel blockers has been successful. Thus, the purpose of this study was to identify and characterize DHP-sensitive calcium channels in esophagus and compare them to vascular, neural, and other GI tissues.
Methods: We carried out in vitro receptor binding assays on lower esophageal muscle homogenates, gastric and intestinal and colonic homogenates, and aortic muscle homogenates from ca; and on brain homogenates from rat. We used a radio-labeled dihydropyridine derivative [3H]nitrendipine, to label these sites and co-administration of unlabeled nimodipine to define specific binding.
Results: As expected, ligand binding to L-type Ca2+ channels in aortic vascular smooth muscle and brain was readily detectable: brain, Bmax=252 fmol/mg protein, Kd=0.88 nM; aorta, Bmax=326 fmol/mg protein, Kd=0.84 nM. For esophagus (Bmax=97; Kd=0.73) and for other GI tissues, using the same assay conditions, we detected a smaller signal, suggesting that L-type Ca2+ channels are present in lower quantities.
Conclusion: L-type Ca2+ channels are present in the esophagus and in other GI muscles; their affinity is similar, but their density is relatively sparse. These findings are consistent with the relatively limited success that has been experienced clinically in the use of calcium channel blockers for the treatment of esophageal dysmotility.
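The Bmax and Kd values reported above are the parameters of a one-site saturation binding model, specific binding = Bmax*[L]/(Kd + [L]). The Python sketch below shows how such parameters can be estimated by nonlinear least squares; the data points are illustrative, not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def specific_binding(ligand_nM, bmax, kd):
    # one-site saturation binding isotherm
    return bmax * ligand_nM / (kd + ligand_nM)

ligand_nM = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])    # free [3H]nitrendipine, nM
bound = np.array([11.0, 24.0, 38.0, 55.0, 71.0, 82.0, 89.0])  # specific binding, fmol/mg protein (illustrative)

(bmax, kd), _ = curve_fit(specific_binding, ligand_nM, bound, p0=[100.0, 1.0])
print(f"Bmax ~= {bmax:.0f} fmol/mg protein, Kd ~= {kd:.2f} nM")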
abstract_id: PUBMED:19359806
Dipyridamole inhibits intracellular calcium transients in isolated rat arteriole smooth muscle cells. Dipyridamole, an inhibitor of adenosine uptake as well as a cGMP phosphodiesterase inhibitor, is commonly used in prophylactic therapy for patients with angina pectoris. However, the effects of dipyridamole on systemic blood vessels, especially on the peripheral vascular system, are not well understood. Therefore, the effect of dipyridamole on ATP-induced arteriole contraction was examined with special reference to intracellular Ca(2+) concentration ([Ca(2+)](i)) using real-time confocal microscopy. In the 0.1-10 microM range, dipyridamole induced only slight [Ca(2+)](i) decreases in smooth muscle cells of both testicular and cerebral arterioles. However, 100 microM dipyridamole induced substantial [Ca(2+)](i) decreases in these cells. In the presence of 10 microM dipyridamole, ATP-induced [Ca(2+)](i) changes were inhibited in smooth muscle cells of testicular arterioles but not in those of cerebral arterioles. In addition, alpha,beta-methylene ATP-induced [Ca(2+)](i) increases in testicular arteriole smooth muscle cells were also partially inhibited in the presence of dipyridamole. When testicular arterioles were perfused with dipyridamole, no increases in nitric oxide levels were detected. High levels of K(+) induced a [Ca(2+)](i) increase in testicular arterioles that was also partially inhibited by dipyridamole. In the presence of substances that affect protein kinase A or G, ATP-induced [Ca(2+)](i) was not completely inhibited. These findings suggest that dipyridamole may act not only as an inhibitor of adenosine uptake and as a cGMP phosphodiesterase inhibitor, but also as a calcium channel blocker in arteriole smooth muscle cells.
abstract_id: PUBMED:1716451
Regulation of 1,4-dihydropyridine and beta-adrenergic receptor sites in coronary artery smooth muscle membranes. The receptor sites for 1,4-dihydropyridine (DHP) calcium channel ligands were identified and pharmacologically characterized in partially purified canine coronary artery smooth muscle (CSM) membranes (purification factor for 1,4-DHPs 2.8 and 2.2, respectively) using the Ca2+ channel agonist (-)-S-[3H]BAYK 8644 and antagonist (+)-[3H]PN 200-110 as radioligands. The beta-adrenergic receptors were identified with (-)-3-[125I]iodocyanopindolol (ICYP). Specific binding of 1,4-DHPs and ICYP to the membrane fraction was saturable, reversible, and of both high and low affinity. The Kd for the 1,4-DHP Ca2+ channel agonist was 0.59 +/- 0.05 and for the antagonist 0.35 +/- 0.06 nmol/l, and for the low affinity binding sites Kd = 9.0 +/- 0.18 and 18.0 +/- 1.1 nmol/l. The high affinity 1,4-DHP binding (Bmax = 265 +/- 21 and 492 +/- 12 fmol/mg protein) showed stereoselectivity, temperature dependence, and pharmacological specificity: isoprenaline- and GTP-sensitivity, positive modulation with diltiazem and negative modulation with verapamil, that is, properties characteristic of 1,4-DHP receptor sites on L-type Ca2+ channels. The low affinity binding sites were characterized as nonselective, temperature independent, and dipyridamole-sensitive, and represented a nucleoside transporter. The proportion of high affinity binding sites identified in the CSM membranes was 1.85 : 1.0 in favour of the antagonist. Results obtained with [125I]omega-conotoxin GVIA demonstrated that CSM membrane fractions isolated from the medial layers of the coronary artery were devoid of substantial contamination with fragments of neuronal cells.
abstract_id: PUBMED:3632143
Effects of adenosine and theophylline on canine tracheal smooth muscle tone. The postulated mechanisms by which theophylline induces relaxation of airway smooth muscle include, among others, inhibition of cyclic nucleotide phosphodiesterase(s) and antagonism of adenosine-induced contraction. This latter possibility was examined by investigating the interaction of theophylline and adenosine in canine tracheal smooth muscle preparations. Adenosine did not alter basal tone, i.e., there was no evidence of a contractile response. However, when contraction was induced with methacholine, adenosine caused relaxation. It appears that this relaxation occurred as a consequence of the combination of adenosine with a site within the smooth muscle cell. The prior addition of theophylline (10(-6)-10(-3) M) did not alter adenosine-induced relaxation and, in the reverse experiment, the prior addition of adenosine (10(-6)-10(-3) M) did not alter the relaxation produced by theophylline. It is concluded that adenosine relaxes canine tracheal smooth muscle by combination with an intracellular site, rather than a receptor on the cell surface. The hypothesis that theophylline relaxes airway smooth muscle by antagonism of adenosine, or that adenosine antagonizes theophylline, was not supported by our data.
abstract_id: PUBMED:3408349
Effect of anticoagulant and antiplatelet drugs on in vitro smooth muscle cell proliferation. Effects of anticoagulant and antiplatelet drugs on vascular smooth muscle cell plating efficiency and proliferation were assessed using in vitro tissue culture techniques. Canine carotid artery smooth muscle cells were derived, pooled and plated into tissue culture cluster wells to which various drugs were added. Regular beef lung and porcine intestinal and low molecular weight porcine intestinal heparins reduced smooth muscle cell counts. Among antiplatelet drugs, reduced smooth muscle cell counts were seen with dipyridamole and ibuprofen, as well as with the combinations of ASA and dipyridamole, and ASA and dazoxiben. Although in vitro results cannot necessarily be extrapolated to in vivo settings, especially in regard to antiplatelet drugs, results of this study indicate direct effects of certain commonly used clinical agents in reducing smooth muscle cell growth.
abstract_id: PUBMED:1036917
Pharmacological Properties of Fendiline in Cardiac and Smooth Muscle (author's transl) The pharmacological properties of fendiline (N-(1-phenylethyl)-3,3-diphenylpropylamine hydrochloride; Sensit) were investigated in the isolated guinea pig heart and in isolated circular smooth muscle strips of bovine coronary arteries, pulmonary arteries and trachea. 1. Fendiline dose-dependently increased coronary flow by up to 200% but, unlike verapamil, did not inhibit contraction. 2. Fendiline dose-dependently relaxed coronary strips and, more powerfully, tracheal and pulmonary arterial strips. 3. In cardiac muscle, fendiline was almost completely retained for a prolonged period of time. In the smooth muscle tissues under study, fendiline accumulated twenty-fold above the concentration present in the organ bath. 4. In paced hearts, fendiline non-competitively inhibited the positive inotropic effect of isoprenaline in doses that did not significantly depress the amplitude of isotonic contractions. 5. In the guinea pig heart, adenosine uptake becomes progressively inhibited when coronary flow increases. This "washout" effect was definitely counteracted by fendiline and by NaNO2, but was augmented by papaverine, hexobendine and dipyridamole. The counteraction of the "washout" effect by fendiline as well as by NaNO2 is most likely due to the opening of additional (previously closed) capillaries.
abstract_id: PUBMED:6258688
The effects of calcium concentration on the inhibition of cholinergic neurotransmission in the myenteric plexus of guinea-pig ileum by adenine nucleotides. 1 Adenosine and the adenine nucleotides AMP, ADP, ATP, cyclic AMP, NAD, NADP and NADH produced a dose-related inhibition of the contractile response of guinea-pig ileum longitudinal muscle-myenteric plexus strips to low frequencies (less than 1 Hz) of electrical field stimulation. 2 These compounds inhibited hexamethonium-sensitive contractions induced by nicotine but did not alter the responses to exogenous acetylcholine, and the acetylcholine output from the myenteric plexus was inhibited by the adenyl compounds. These findings indicate that adenine derivatives act at a presynaptic site on postganglionic cholinergic neurones. 3 The degree of inhibition produced by adenine compounds was inversely related to the calcium concentration of the bath fluid over a range of calcium concentrations (1 to 5 mM) that had no effect on the responses of the muscle to exogenous acetylcholine. 4 The inhibition produced by adenine derivatives was antagonized by theophylline and augmented by dipyridamole. Both of these interactions were sensitive to, and synergistic with, alterations of the concentration of calcium in the bath fluid. 5 The results suggest that adenine compounds inhibit acetylcholine release from the myenteric plexus by diminishing the availability of intracellular calcium ions required for neurotransmitter release.
abstract_id: PUBMED:3093824
Uptake and release of adenosine by cultured rat aortic smooth muscle. We wanted to determine whether CO2, H+ and K+ affect the adenosine metabolism of vascular smooth muscle in a way that could account for the effects of these substances on vascular reactivity and their ability to modulate adenosine-induced vascular relaxation. Accordingly, 1-week-old cultures of rat aortic smooth muscle were incubated in phosphate-buffered saline with various [K+]'s and pH's and aerated in an incubation chamber with gases containing various proportions of CO2. Uptake was measured as 14C incorporation into cellular constituents during exposure to 2 microM [14C]adenosine. Release was measured as net extracellular adenosine accumulation. Uptake of adenosine was not significantly affected by any of the experimental maneuvers, except that it was greatly attenuated by dipyridamole (10(-5) and 10(-4) M) and transiently enhanced by the low CO2 levels. Adenosine release, however, was depressed by lowering atmospheric CO2 (0% vs 5%) and also by normocapnic acidosis (pH 6.8 vs pH 7.4). We conclude that vascular smooth muscle in culture releases adenosine at a rate that might have vasoactive significance in vivo. Furthermore, some of the vascular actions of CO2 and H+, but not those of K+, may be partially explained by their effects on vascular smooth muscle's adenosine metabolism.
abstract_id: PUBMED:17169124
Mechanism of dipyridamole's action in inhibition of venous and arterial smooth muscle cell proliferation. Dipyridamole is a potential pharmacological agent to prevent vascular stenosis because of its antiproliferative properties. The mechanisms by which dipyridamole inhibits the growth of vascular smooth muscle cells, especially venous smooth muscle cells, are unclear. In the present study, dipyridamole transiently but significantly increased cyclic adenosine monophosphate (cAMP) and cyclic guanosine monophosphate (cGMP) levels in human venous and arterial smooth muscle cells in a time- and dose-dependent manner. Peak concentrations of both cyclic nucleotides were achieved at 15-30 min and correlated with inhibition of proliferation in both cell types. The antiproliferative effects of dipyridamole observed at 48 hr were similar whether drug exposure was only 15 min or sustained for 48 hr. Specific competitive inhibitors of protein kinases A and G attenuated the antiproliferative effects of subsaturating concentrations of dipyridamole, with the effects of protein kinase inhibition being particularly pronounced in venous smooth muscle cells. Flow cytometry analysis showed that dipyridamole caused an enrichment of cells in G(0)/G(1) and a corresponding reduction of cells in S phase. These data indicate that a transient increase in cGMP and cAMP is sufficient to induce downstream kinase activation and subsequent cell cycle arrest, and that protein kinase G may be more important than protein kinase A in mediating the growth inhibitory effect of dipyridamole in venous smooth muscle cells.
abstract_id: PUBMED:8112814
Effect of nifedipine and dipyridamole on K+ and NA stimulated 45Ca-uptake into rabbit aortic strips. The effect of dipyridamole (DPM), a purine nucleoside uptake inhibitor, on the K+ and noradrenaline (NA)-stimulated 45Ca-uptake into rabbit aortic strips was studied and compared with that of nifedipine (NFD). DPM (10(-6) M) significantly (P < 0.02) inhibited 45Ca-uptake in aortic strips stimulated with K+ (10(-1) M). However, it had a less marked effect on NA (10(-4) M)-stimulated 45Ca-uptake. The percentage inhibition was 80 and 57 for K+ and NA-stimulated 45Ca-uptake, respectively. Similarly, NFD (10(-7) M) caused a significant (P < 0.01, P < 0.05) decrease in K+ (0.060 +/- 0.009) and NA (0.082 +/- 0.01 mM)-stimulated 45Ca-uptake. The percentage inhibition by NFD was 98 and 64 for K+ and NA-stimulated 45Ca-uptake, respectively. The results suggest that DPM inhibits Ca2+ influx occurring through Voltage Operated Calcium Channels (VOCCs) but has little effect on Receptor Operated Calcium Channels (ROCCs).
Answer: Yes, dipyridamole-sensitive calcium channels, specifically L-type calcium channels, are present in esophageal smooth muscle. This was determined through in vitro receptor binding assays on lower esophageal muscle homogenates using a radio-labeled dihydropyridine derivative [3H]nitrendipine to label these sites. The study found that while the density of L-type Ca2+ channels in the esophagus is lower compared to that in aortic vascular smooth muscle and brain, their affinity is similar (PUBMED:9199491). |
Instruction: Is obesity always a risk factor for all breast cancer patients?
Abstracts:
abstract_id: PUBMED:17357575
Early menarche as a risk factor for breast cancer. Background: Most reports in the medical literature describe diverse risk factors for breast cancer related to a woman's reproductive life. Menarche before 12 years of age is said to elevate the relative risk of the disease compared with its onset after 13 years.
Objective: To determine whether early menarche is a risk factor associated with breast cancer.
Material And Methods: This retrospective, observational, and descriptive study included 162 women with breast cancer over a 3-year period (2002-2004) at the Juarez Hospital of Mexico. Other well-known risk factors for breast cancer were also evaluated. Statistical analysis was performed with SPSS; the descriptive analysis used summary statistics, histograms, and box and bar charts.
Results: Early menarche showed no correlation with breast cancer or with the appearance of the disease at early ages; it was present in 12.3% (n = 20) of the patients, and menarche began between 12 and 13 years of age in 64.4% (n = 104.3) of the cases. The average age at diagnosis of breast cancer was 55 years in the early menarche group and 47.6 years in the group overall. The factors that appeared related to breast cancer were overweight and obesity (54.26% and 17.11%, respectively), with an average body mass index of 27.7 kg/m2.
Conclusions: There was no correlation between early menarche and breast cancer, nor between the habitually considered reproductive risk factors and an increased risk of breast cancer. Overweight and obesity appear to be related to the appearance of the disease, which should be investigated with randomized control groups in the country. We propose studying other factors that may be implicated in the genesis of breast cancer, such as inflammatory factors, insulin-like growth factors, and hyperinsulinism.
abstract_id: PUBMED:36947582
Cardiac impacts of postoperative radiotherapy for breast cancer in Japanese patients. Radiotherapy for breast cancer has attracted attention in Western countries because radiation to the heart can cause cardiac events. The purposes of this study were to evaluate the relationship between radiotherapy after breast-conserving surgery and the frequency of cardiac events in Japanese patients and to investigate the risk factors of cardiac events after postoperative radiotherapy in those patients. Female patients who received postoperative radiotherapy following breast-conserving surgery between 2007 and 2012 at our hospital were evaluated. In this study, we estimated the cumulative incidence of cardiac events including angina pectoris, myocardial infarction, ischemic heart disease, heart failure and cardiomyopathy after radiotherapy. Of 311 eligible patients, 7.1% of the patients had a smoking history, 20.3% of the patients were obese and 22.2% of the patients had hypertension. The median follow-up period was 118 months (interquartile range, 102-132 months). Twelve patients (3.9%) experienced cardiac events after treatment. The mean time to cardiac events was 126 months. The 10-year cumulative incidences of cardiac events after treatment were 4.2% and 4.3% for patients with left-sided and right-sided breast cancer, respectively, without a significant difference. Multivariate analysis showed that only hypertension was a risk factor for cardiac events (hazard ratio = 16.67, P = 0.0003). In conclusion, postoperative radiotherapy for breast cancer did not increase the incidence of cardiac events. Since at least 2007, postoperative radiotherapy for breast cancer has been safely performed without effects on the heart.
abstract_id: PUBMED:34059910
Association of premature atherosclerotic cardiovascular disease with higher risk of cancer: a behavioral risk factor surveillance system study. Aim: The aim of this study was to investigate a possible association between atherosclerotic cardiovascular disease (ASCVD) and risk of cancer in young adults.
Methods: We utilized data from the Behavioral Risk Factor Surveillance System, a nationally representative US telephone-based survey to identify participants in the age group of 18-55 years who reported a history of ASCVD. These patients were defined as having premature ASCVD. Weighted multivariable logistic regression models were used to study the association between premature ASCVD and cancer including various cancer subtypes.
Results: Between 2016 and 2019, we identified 28 522 (3.3%) participants with a history of premature ASCVD. Compared with patients without premature ASCVD, individuals with premature ASCVD were more likely to be Black adults, have lower income, have lower levels of education, reside in states without Medicaid expansion, have hypertension, diabetes mellitus, chronic kidney disease, and obesity, and to have delays in seeking medical care. Individuals with premature ASCVD were more likely to have been diagnosed with any form of cancer (13.7% vs 3.9%), and this association remained consistent in multivariable models (odds ratio, 95% confidence interval: 2.08 [1.72-2.50], P < 0.01); this association was significant for head and neck (21.08 [4.86-91.43], P < 0.01), genitourinary (18.64 [3.69-94.24], P < 0.01), and breast cancer (3.96 [1.51-10.35], P < 0.01). Furthermore, this association was consistent when results were stratified based on gender and race, and in sensitivity analysis using propensity score matching.
Conclusion: Premature ASCVD is associated with a higher risk of cancer. These data have important implications for the design of strategies to prevent ASCVD and cancer in young adults.
abstract_id: PUBMED:33487579
Prevalence, Outcome, and Management of Risk Factors in Patients With Breast Cancer With Peripheral Arterial Disease: A Tertiary Cancer Center's Experience. Introduction: The risk factors of breast cancer overlap with those of peripheral arterial disease (PAD), with increasing prevalence. In addition, there is under-utilization of risk factor modification measures in patients with PAD.
Materials And Methods: Electronic medical records of patients with breast cancer with International Classification of Diseases 9/10 codes for PAD spanning 10 years from June 1, 2009 to June 1, 2019 were reviewed.
Results: A total of 248 patients, 98% women, with a median age of 75 years and with a median follow-up of 76 months, were included. PAD risk factors were identified as smoking (44%), obesity (38%), hyperlipidemia (68%), hypertension (HTN) (74%), and diabetes (42%). Overall, survival was significantly impacted by smoking (P = .0301) and HTN (P = .0052). In a Cox proportion hazard ratio regression, HTN (overall death hazard ratio [HR], 3.1784; 95% CI, 1.0291-6.7490; P = .0070; cancer-related death HR, 2.6354; 95% confidence interval [CI], 1.0291-6.7490; P = .0434) and smoking (overall death HR, 1.7452; 95% CI 1.0707-2.8444; P = .0255; cancer-related death HR, 2.7432; 95% CI, 1.4190-5.3030; P = .0027) were predictors of overall death and cancer-related death. Of all patients, 48% were on statins and 54% were on antiplatelet therapies. Of the patients, 62% of current smokers were offered a smoking cessation program, 27% of obese patients were offered a nutrition consult, 42% of patients with diabetes had blood glucose controlled, and 54% of patients with HTN had blood pressure controlled.
Conclusion: Smoking and HTN are risk factors associated with decreased survival and predictive of overall death and cancer-related death. In this population, risk factor modification was under-utilized.
abstract_id: PUBMED:12085359
Is obesity an independent risk factor for hepatocellular carcinoma in cirrhosis? Recently, several epidemiologic observations have suggested that obesity might be an independent risk factor for certain malignancies such as breast cancer, colon cancer, renal cell carcinoma, and esophageal adenocarcinoma. However, there are no studies examining the risk of hepatocellular carcinoma (HCC) in obesity. The aim of the present study was to determine whether obesity is an independent risk factor for HCC in patients with cirrhosis. Explanted liver specimens from a national database on patients undergoing liver transplantation were examined for HCC, and the incidence was compared among patients with varying body mass indices according to the etiology of cirrhosis. A multivariate analysis was used for controlling other potentially confounding variables such as age and sex. Among 19,271 evaluable patients, the overall incidence of HCC was 3.4% (n = 659) with a slightly higher prevalence among obese patients compared with lean patients. Obesity was an independent predictor for HCC in patients with alcoholic cirrhosis (odds ratio [OR], 3.2; 95% CI, 1.5-6.6; P =.002) and cryptogenic cirrhosis (OR, 11.1; 95% CI, 1.5-87.4; P =.02). Obesity was not an independent predictor in patients with hepatitis C, hepatitis B, primary biliary cirrhosis, and autoimmune hepatitis. The higher risk of HCC in obese patients is confined to alcoholic liver disease and cryptogenic cirrhosis. In conclusion, more frequent surveillance for HCC may be warranted in obese patients with alcoholic and cryptogenic cirrhosis. However, as this study is based on patients with advanced cirrhosis, our findings need to be confirmed in a broader population of individuals with cirrhosis.
abstract_id: PUBMED:31759352
Risk Factor Analysis for Breast Cancer in Premenopausal and Postmenopausal Women of Punjab, India. Objective: Amritsar, the second largest town of agrarian state of Punjab, India reports high number of breast cancer cases every year. The present study investigated the etiology of breast cancer using various obesity indices and other epidemiological factors among breast cancer patients residing in and around Amritsar city.
Methods: In this case control study, risk factors for breast cancer were analyzed in 542 female subjects: 271 females with breast cancer patients and 271 unrelated healthy females matched for age as control females.
Results: Bivariate analysis for risk factors in cases and controls showed a lower risk (OR=0.65, 95% CI 0.43-0.99, p=0.04) in obese cases with BMI≥25 kg/m2 as compared to subjects with normal BMI. Risk factor analysis showed that the parameter which conferred risk for cancer in postmenopausal women was obesity and in premenopausal women was parity. Postmenopausal women with BMI (overweight: OR=0.39, 95% CI 0.17-0.92, p=0.03; obese: OR=0.26, 95% CI 0.13-0.52, p=0.00), WC (OR=0.17, 95% CI 0.05-0.52, p=0.00) and WHtR (p=0.02) had higher risk. Premenopausal women with 3 or fewer children had a higher risk (OR=5.54, 95% CI 2.75-11.19, p=0.00) than postmenopausal women when compared to women with more than 3 children. Binary logistic regression analysis revealed that low parity (≤3) substantially increased the risk for breast cancer (OR=4.80, 95% CI 2.34-9.85, p=0.00) in premenopausal women.
Conclusion: Obesity, parity associated breast cancer risk and reduced breastfeeding cumulatively predispose the premenopausal women of this region to higher risk of breast cancer.
abstract_id: PUBMED:19116382
Insulin, insulin-like growth factor-I, and risk of breast cancer in postmenopausal women. Background: The positive association between obesity and postmenopausal breast cancer has been attributed, in part, to the fact that estrogen, a risk factor for breast cancer, is synthesized in adipose tissue. Obesity is also associated with high levels of insulin, a known mitogen. However, no prospective studies have directly assessed associations between circulating levels of insulin and/or insulin-like growth factor (IGF)-I, a related hormone, and the risk of breast cancer independent of estrogen level.
Methods: We conducted a case-cohort study of incident breast cancer among nondiabetic women who were enrolled in the Women's Health Initiative Observational Study (WHI-OS), a prospective cohort of 93,676 postmenopausal women. Fasting serum samples obtained at study entry from 835 incident breast cancer case subjects and from a subcohort of 816 randomly chosen WHI-OS subjects were tested for levels of insulin, glucose, total IGF-I, free IGF-I, insulin-like growth factor binding protein-3, and estradiol. Multivariable Cox proportional hazards models were used to estimate associations between levels of the serologic factors and baseline characteristics (including body mass index [BMI]) and the risk of breast cancer. All statistical tests were two-sided.
Results: Insulin levels were positively associated with the risk of breast cancer (hazard ratio [HR] for highest vs lowest quartile of insulin level = 1.46, 95% confidence interval [CI] = 1.00 to 2.13, P(trend) = .02); however, the association with insulin level varied by hormone therapy (HT) use (P(interaction) = .01). In a model that controlled for multiple breast cancer risk factors including estradiol, insulin level was associated with breast cancer only among nonusers of HT (HR for highest vs lowest quartile of insulin level = 2.40, 95% CI = 1.30 to 4.41, P(trend) < .001). Obesity (BMI ≥30 kg/m²) was also associated with the risk of breast cancer among nonusers of HT (HR for BMI ≥30 kg/m² vs 18.5 to <25 kg/m² = 2.12, 95% CI = 1.26 to 3.58, P(trend) = .003); however, this association was attenuated by adjustment for insulin (P(trend) = .40).
Conclusion: These data suggest that hyperinsulinemia is an independent risk factor for breast cancer and may have a substantial role in explaining the obesity-breast cancer relationship.
abstract_id: PUBMED:25747851
Obesity in breast cancer--what is the risk factor? Environmental factors influence breast cancer incidence and progression. High body mass index (BMI) is associated with increased risk of post-menopausal breast cancer and with poorer outcome in those with a history of breast cancer. High BMI is generally interpreted as excess adiposity (overweight or obesity) and the World Cancer Research Fund judged that the associations between BMI and incidence of breast cancer were due to body fatness. Although BMI is the most common measure used to characterise body composition, it cannot distinguish lean mass from fat mass, or characterise body fat distribution, and so individuals with the same BMI can have different body composition. In particular, the relation between BMI and lean or fat mass may differ between people with or without disease. The question therefore arises as to what aspect or aspects of body composition are causally linked to the poorer outcome of breast cancer patients with high BMI. This question is not addressed in the literature. Most studies have used BMI, without discussion of its shortcomings as a marker of body composition, leading to potentially important misinterpretation. In this article we review the different measurements used to characterise body composition in the literature, and how they relate to breast cancer risk and prognosis. Further research is required to better characterise the relation of body composition to breast cancer.
abstract_id: PUBMED:12914100
Obesity as a risk factor of breast carcinoma in women The aim of the study was to determine the relationship between selected risk factors and the incidence of breast carcinoma in women living in the province of Podlasie, with special regard to obesity. The study involved 90 patients with breast carcinoma and 96 without the cancer. A questionnaire was used to evaluate risk factors for the disease. The mean age was 55 years in the carcinoma group and 53 years in the control group. Increased body mass was a significant risk factor for the cancer, which was statistically confirmed by multifactorial analysis. Statistically significant differences were also associated with family history of breast and other cancers, abortions, childlessness, early menarche and late menopause. Multidimensional regression analysis was performed with the application of the logistic model. Statistically significant factors included the history of cigarette smoking, abortions, overweight or obesity (high BMI), family history of cancers (of the breast and other). The outcome of the multifactorial analysis allows for the assumption that the accumulation of these five factors in one patient increases the risk of breast carcinoma 9.45 times.
abstract_id: PUBMED:22855166
Is obesity always a risk factor for all breast cancer patients? c-erbB2 expression is significantly lower in obese patients with early stage breast cancer. Objectives: The purpose of this study was to evaluate the relationship between body mass index and lipid profiles with breast cancer prognosis together with the relationship of these parametres with known breast cancer prognostic indices including c-erbB2 expression.
Patients And Methods: Four hundred and thirty-three patients diagnosed with breast cancer at Ankara University, Faculty of Medicine, Department of Medical Oncology made up the study population. The primary endpoints were relapse and death. Body mass index at the time of diagnosis, lipid levels at the time of diagnosis, estrogen receptor status, progesterone receptor status, c-erbB2 expression, tumor grade, patient age, axillary lymph node involvement level, tumor stage, menopausal status and surgery details were taken into account.
Results: The mean body mass indices were similar in the remission, relapse and mortality groups. Patients with body mass indices higher than 30 kg/m² had a lower incidence of c-erbB2 expression when compared to patients with body mass indices < 18.5 kg/m² (19 vs. 50%, p = 0.009). Survival analysis revealed that patients with body mass indices < 18.5 kg/m² had significantly shorter disease-free survival when compared to patients with body mass indices between 25 and 29.9 kg/m². Mean serum lipid levels were similar in the remission, relapse and mortality groups. A trend toward relapse was shown in patients with total cholesterol > 240 mg/dl, but this was statistically insignificant. Survival analysis revealed that patients with triglyceride levels lower than 150 mg/dl had a significantly longer disease-free survival when compared to the other groups. Again, a trend towards shorter overall survival was seen in patients with total cholesterol > 240 mg/dl, but this relationship was also statistically insignificant.
Conclusion: Most large previous studies reported adverse breast cancer outcomes with obesity. However, in our study, patients with lower body weight had a shorter disease-free survival. This could be explained by the low number of patients in this study, the genetic profile of the patient population, possible weight changes after treatment, and the inverse relationship between body mass index and c-erbB2 expression.
Answer: No, obesity is not always a risk factor for all breast cancer patients. While obesity is generally associated with an increased risk of post-menopausal breast cancer and poorer outcomes in those with a history of breast cancer (PUBMED:25747851), the relationship between obesity and breast cancer risk is complex and may vary depending on other factors.
For instance, one study found that obesity was not an independent risk factor for breast cancer in premenopausal women, and in fact, postmenopausal women with higher body mass index (BMI) had a higher risk of breast cancer (PUBMED:31759352). Another study suggested that obesity might be an independent predictor for hepatocellular carcinoma in patients with alcoholic cirrhosis and cryptogenic cirrhosis, but not in patients with other types of liver disease (PUBMED:12085359).
Moreover, a study found that c-erbB2 expression, a known prognostic indicator in breast cancer, was significantly lower in obese patients with early-stage breast cancer, suggesting that obesity may not always be associated with worse breast cancer prognosis (PUBMED:22855166). Additionally, a study involving patients with breast cancer and peripheral arterial disease (PAD) identified smoking and hypertension as risk factors associated with decreased survival, rather than obesity alone (PUBMED:33487579).
It is also important to note that BMI, the most common measure used to characterize obesity, cannot distinguish between lean mass and fat mass or characterize body fat distribution, which means that individuals with the same BMI can have different body compositions and associated risks (PUBMED:25747851).
In conclusion, while obesity is a recognized risk factor for breast cancer, particularly in postmenopausal women, it is not a universal risk factor for all breast cancer patients, and its impact may vary depending on other individual and disease-related factors. |
Instruction: Head and neck cancer in India: need to formulate uniform national treatment guideline?
Abstracts:
abstract_id: PUBMED:22842161
Head and neck cancer in India: need to formulate uniform national treatment guideline? Background: In a large and diverse country like India, there is a wide variation in the availability of infrastructure and expertise to treat head-neck cancer patients. Lack of consistent adherence to evidence-based management is the biggest problem.
Aims: There is an unmet need to evaluate the existing treatment practices to form the basis for development of effective and uniform treatment policies.
Settings And Designs: Prospective case series.
Materials And Methods: A group of previously treated, potentially curable patients presenting to our institution (from April 2009 to March 2011) were evaluated for appropriateness of initial treatment based on National Comprehensive Cancer Network or Tata Memorial Hospital guidelines. Data regarding treatment center, protocol and accuracy of delivered treatment and their eventual outcome were analyzed.
Statistical Analysis: Descriptive.
Results: Amongst 450 newly registered patients, 77 (17%) were previously treated with curative intent and 69 (89%) of them were inappropriately treated. Seventeen (25%) patients were treated in clinics while 12 (17%) in cancer centers and 34 (50%) in corporate hospitals. Fourteen (20%) patients received chemotherapy, 22 (32%) received radiotherapy and 14 (20%) underwent surgery while 19 (28%) patients received multimodality treatment. Disease stage changed to more advanced stage in 40 (58%) patients and curative intent treatment could be offered only to 33 (48%) patients. Amongst 56 patients available for outcome review, 18 (32%) patients were alive disease-free, 20 (36%) had died and 18 (32%) were alive with disease.
Conclusion: Large numbers of potentially curable patients are inappropriately treated and their outcome is significantly affected. Many initiatives have been taken in the existing National Cancer Control Program but formulation of a uniform national treatment guideline should be prioritized.
abstract_id: PUBMED:34971883
Guideline - Adherence in advanced stage head and neck cancer is associated with improved survival - A National study. Objectives: Understanding the prevalence of guideline non-adherence among patients with advanced head and neck cancer (HNC) and its impact on survival may facilitate increased adherence. Our objective was to perform a detailed analysis of overall National Comprehensive Cancer Network (NCCN) guideline adherence in a national cohort.
Methods: Using the National Cancer Database, we analyzed site-specific NCCN guideline adherence for treatment of 100,074 overall stage III and IVA HNC patients from 2004 to 2013. Main outcomes were guideline adherence rates and overall survival (OS). Adherence was categorized by treatment: surgery/ radiation. Reasons were categorized as: (1) high risk; (2) refusal; (3) not planned.
Results: After exclusion, the care of 25,620 patients was defined as non-adherent (25.6%), yet adherence rates significantly improved across the study's years. After multivariate analysis, non-adherence was associated with age ≥ 65, female gender, black race, comorbidity score ≥ 1, insurance status, clinical staging, primary site, and facility type. Patients not managed according to NCCN guidelines had a significantly reduced OS compared with patients treated on-guideline (hazard ratio (HR) = 1.51 (95% CI 1.48-1.54), p < 0.001). 'Not planned' patients had reduced OS when compared to adherent patients (HR = 1.27 (95% CI 1.23-1.30), p < 0.001). Off-guideline treated patients due to 'risk factors' had a decrease in overall survival (OS) compared with other reasons (p < 0.001 for all).
Conclusions: Despite improvement over time, non-adherence to NCCN guidelines for advanced stage HNC remains high. Non-adherence is associated with decreased OS, regardless of the reason. Despite concerns from both patient and physician, efforts should be made to increase guideline awareness and adherence.
abstract_id: PUBMED:38081134
Patient Navigation for Timely, Guideline-Adherent Adjuvant Therapy for Head and Neck Cancer: A National Landscape Analysis. Background: Aligned with the NCCN Clinical Practice Guidelines in Oncology for Head and Neck Cancers, in November 2021 the Commission on Cancer approved initiation of postoperative radiation therapy (PORT) within 6 weeks of surgery for head and neck cancer (HNC) as its first and only HNC quality metric. Unfortunately, >50% of patients do not commence PORT within 6 weeks, and delays disproportionately burden racial and ethnic minority groups. Although patient navigation (PN) is a potential strategy to improve the delivery of timely, equitable, guideline-adherent PORT, the national landscape of PN for this aspect of care is unknown.
Materials And Methods: From September through November 2022, we conducted a survey of health care organizations that participate in the American Cancer Society National Navigation Roundtable to understand the scope of PN for delivering timely, guideline-adherent PORT for patients with HNC.
Results: Of the 94 institutions that completed the survey, 89.4% (n=84) reported that at least part of their practice was dedicated to navigating patients with HNC. Sixty-eight percent of the institutions who reported navigating patients with HNC along the continuum (56/83) reported helping them begin PORT. One-third of HNC navigators (32.5%; 27/83) reported tracking the metric for time-to-PORT at their facility. When estimating the timeframe in which the NCCN and Commission on Cancer guidelines recommend commencing PORT, 44.0% (37/84) of HNC navigators correctly stated ≤6 weeks; 71.4% (60/84) reported that they did not know the frequency of delays starting PORT among patients with HNC nationally, and 63.1% (53/84) did not know the frequency of delays at their institution.
Conclusions: In this national landscape survey, we identified that PN is already widely used in clinical practice to help patients with HNC start timely, guideline-adherent PORT. To enhance and scale PN within this area and improve the quality and equity of HNC care delivery, organizations could focus on providing better education and support for their navigators as well as specialization in HNC.
abstract_id: PUBMED:28108238
Effect of an evidence-based guideline on the treatment of maxillofacial cancer: A prospective analysis. Background: In 2012, a guideline for the diagnosis and treatment of oral cavity cancer based on the best available evidence was implemented at certified German cancer centres for head and neck carcinomas. The present analysis was performed to determine whether the implementation of the guideline via certification improved the level of care, leading to a benefit for the patients.
Methods: A prospective observational study was performed based on the annual operating figures at 31 certified head and neck cancer centres. From 76 statements and recommendations, 9 indicators were derived defining important steps during treatment. The annual shift of the figures was documented for each indicator and was used to measure the impact of the guideline. This was achieved by determining the number of patients having received the recommended treatment related to the total number in each centre over a period of 3 years.
Results: In 2014, 1570 primary cases with an oral cavity carcinoma were treated at our centres, 31.2% representing stage IVA. Except for two, all indicators showed increasing numbers of achievement from 2012 to 2014, reaching median values between 91% and 100% in 2014. In particular, median values for imaging and interdisciplinary treatment to evaluate the presence of second primaries and metastases increased by 20% and 30%, respectively. Median values decreased by 14% for recommended adjuvant radiation, because of non-acceptance by the patients. Moreover, elective neck dissection was performed less frequently in cN0 categories.
Conclusions: Implementation of the national cancer guideline by means of certification evidently had a positive impact on patients suffering from oral squamous cell carcinoma and led to the improved achievement of most evidence-based treatment recommendations over time. Further research involving high-level clinical studies is needed to cover all aspects of this specific tumour entity.
abstract_id: PUBMED:26690552
Improving patient outcomes through multidisciplinary treatment planning conference. Background: The purpose of this study was for us to determine National Comprehensive Cancer Network (NCCN) guideline-compliance of multidisciplinary conference (MDC) recommendations and actual treatment received, and to determine this impact on patient outcomes.
Methods: We conducted a retrospective review of patients presented at MDC between January 1, 2006, and December 31, 2006, with previously untreated incident cancers.
Results: We identified 232 patients, for whom MDC recommendations were NCCN guideline-compliant in 201 (86.6%). Actual treatment was NCCN guideline-compliant in 170 of 197 patients (86.3%). Adherence of MDC recommendations to NCCN guidelines was associated with superior overall survival (hazard ratio [HR] = 0.69; 95% confidence interval [CI] = 0.33-1.39; p = .3), as was guideline-compliance of actual treatment (HR = 0.6; 95% CI = 0.64-1.07; p = .07); congruence between MDC recommendations and actual treatment conferred a statistically significant overall survival benefit (HR = 0.49; 95% CI = 0.27-0.89; p = .02).
Conclusion: Our findings argue for patient-centered application of NCCN guidelines. Prospective evaluation will enable more timely identification of systematic NCCN guideline deviations that quality improvement interventions may address. © 2015 Wiley Periodicals, Inc. Head Neck 38: E1820-E1825, 2016.
abstract_id: PUBMED:33025133
Guideline on diagnosis, treatment, and follow-up of laryngeal cancer The German S3 guideline on diagnosis, treatment, and follow-up of laryngeal cancer was developed in 2019 as part of the oncology guideline program of the Association of the Scientific Medical Societies in Germany (Arbeitsgemeinschaft der Wissenschaftlichen Medizinischen Fachgesellschaften, AWMF), the German Cancer Society (Deutsche Krebsgesellschaft, DKG) and German Cancer Aid (Deutsche Krebshilfe, DKH), published under the leadership of the German Society for Otorhinolaryngology, Head and Neck Surgery. The guideline was funded by DKH as part of the oncology guideline program. Since guidelines are an important tool for quality assurance and quality management in oncology, they should be incorporated into everyday care in a targeted and sustainable manner. The guideline reflects the interdisciplinary character of early detection, diagnostics, treatment, rehabilitation, and follow-up, and develops evidence- and consensus-based recommendations and statements for the treatment of laryngeal cancer with the aim of organ preservation, while also showing their limits. The main recommendations of the original text are summarized. The guideline is available as a long and a short version in the guideline program of the DKG (https://www.leitlinienprogramm-onkologie.de/leitlinien/larynxkarzinom/) and also as an app (https://www.leitlinienprogramm-onkologie.de/app/).
abstract_id: PUBMED:32789706
Guideline on diagnosis, treatment, and follow-up of laryngeal cancer The German S3 guideline on diagnosis, treatment, and follow-up of laryngeal cancer was developed in 2019 as part of the oncology guideline program of the Association of the Scientific Medical Societies in Germany (Arbeitsgemeinschaft der Wissenschaftlichen Medizinischen Fachgesellschaften, AWMF), the German Cancer Society (Deutsche Krebsgesellschaft, DKG) and German Cancer Aid (Deutsche Krebshilfe, DKH), published under the leadership of the German Society for Otorhinolaryngology, Head and Neck Surgery. The guideline was funded by DKH as part of the oncology guideline program. Since guidelines are an important tool for quality assurance and quality management in oncology, they should be incorporated into everyday care in a targeted and sustainable manner. The guideline reflects the interdisciplinary character of early detection, diagnostics, treatment, rehabilitation, and follow-up, and develops evidence- and consensus-based recommendations and statements for the treatment of laryngeal cancer with the aim of organ preservation, while also showing their limits. The main recommendations of the original text are summarized. The guideline is available as a long and a short version in the guideline program of the DKG (https://www.leitlinienprogramm-onkologie.de/leitlinien/larynxkarzinom/) and also as an app (https://www.leitlinienprogramm-onkologie.de/app/).
abstract_id: PUBMED:12142094
Carcinoma of the larynx: the Dutch national guideline for diagnostics, treatment, supportive care and rehabilitation. Purpose: This evidence-based guideline aims to facilitate proper management and to prevent diverging views concerning diagnosis, treatment and follow-up of carcinoma of the larynx among the major referral centers for head and neck cancer in The Netherlands.
Method: A multidisciplinary committee was formed representing all medical and paramedical disciplines involved in the management of laryngeal cancer and all head and neck oncology centers in The Netherlands. This committee reviewed the literature and formulated statements and recommendations based on the level of evidence and consistency of the literature data. Where reliable literature data were not available, recommendations were based on expert opinion.
Results: Strict criteria have been proposed for the radiological diagnostic procedures as well as for the pathology report. For carcinoma in situ and severe dysplasia, microsurgery, preferably by laser, is proposed. For all other stages of invasive carcinoma, a full course of radiotherapy as a voice-conserving therapy is the treatment of choice. Primary surgery is inevitable only in cases with massive tumor volumes invading through the laryngeal skeleton. For rehabilitation and supportive care, minimal criteria are described. Due to the complexity of therapy and relative rarity of larynx carcinoma, all patients should be seen at least once in a dedicated head and neck clinic.
Conclusion: This guideline for the management of larynx carcinoma was produced by a multidisciplinary national committee and based on scientific evidence wherever possible. This procedure of guideline development has created the optimal conditions for nationwide acceptance and implementation of the guideline.
abstract_id: PUBMED:33069583
Nationwide compliance with a multidisciplinary guideline on pancreatic cancer during 6-year follow-up. Background: Compliance with national guidelines on pancreatic cancer management could improve patient outcomes. Early compliance with the Dutch guideline was poor. The aim was to assess compliance with this guideline during six years after publication.
Materials And Methods: Nationwide guideline compliance was investigated for three subsequent time periods (2012-2013 vs. 2014-2015 vs. 2016-2017) in patients with pancreatic cancer using five quality indicators in the Netherlands Cancer Registry: 1) discussion in multidisciplinary team meeting (MDT), 2) maximum 3-week interval from final MDT to start of treatment, 3) preoperative biliary drainage when bilirubin >250 μmol/L, 4) use of adjuvant chemotherapy, and 5) chemotherapy for inoperable disease (non-metastatic and metastatic).
Results: In total, 14 491 patients were included of whom 2290 (15.8%) underwent resection and 4561 (31.5%) received chemotherapy. Most quality indicators did not change over time: overall, 88.8% of patients treated with curative intent were discussed in a MDT, 42.7% were treated with curative intent within the 3-week interval, 62.7% with a resectable head tumor and bilirubin >250 μmol/L underwent preoperative biliary drainage, 57.2% received chemotherapy after resection, and 36.6% with metastatic disease received chemotherapy. Only use of chemotherapy for non-metastatic, non-resected disease improved over time (23.4% vs. 25.6% vs. 29.7%).
Conclusion: Nationwide compliance to five quality indicators for the guideline on pancreatic cancer management showed little to no improvement during six years after publication. Besides critical review of the current quality indicators, these outcomes may suggest that a nationwide implementation program is required to increase compliance to guideline recommendations.
abstract_id: PUBMED:28241092
Adherence to National Comprehensive Cancer Network guidelines for time to initiation of postoperative radiation therapy for patients with head and neck cancer. Background: Adherence to evidence-based treatment guidelines has been proposed as a measure of cancer care quality. The objective of this study was to determine the rate and predictors of care that does not adhere to National Comprehensive Cancer Network guidelines regarding commencing postoperative radiation therapy (PORT) within 6 weeks of surgery for patients with head and neck squamous cell carcinoma (HNSCC).
Methods: The National Cancer Data Base was reviewed from 2006 to 2014, and patients with HNSCC who underwent curative-intent surgery followed by PORT were identified. Multivariable logistic regression analysis was used to determine the factors associated with nonadherence to guidelines regarding the timing of initiating PORT.
Results: In total, 47,273 patients were included in the study, of whom 55.7% (26,340/47,273) failed to commence PORT within 6 weeks of surgery. The percentage of patients who failed to initiate PORT within 6 weeks of surgery increased over time. On multivariable analysis, the factors associated with failure to initiate timely, guideline-adherent PORT included black race, public insurance [Medicare, Medicaid] or uninsured status, lower levels of education, increased severity of comorbidity, increased postoperative length of stay, 30-day unplanned hospital readmission, treatment at an academic medical center, and the receipt of surgery and PORT at different facilities.
Conclusions: Over 50% of patients with HNSCC who undergo surgery and PORT receive care that does not adhere to National Comprehensive Cancer Network guidelines with regard to initiating PORT within 6 weeks of surgery. Sociodemographic, oncologic, treatment, and hospital factors are all associated with failure to receive guideline-directed care and should be explored in future studies. Cancer 2017;123:2651-60. © 2017 American Cancer Society.
Answer: The abstracts provided highlight the importance of adhering to evidence-based guidelines in the treatment of head and neck cancer (HNC) and the potential impact on patient outcomes. In India, there is a significant issue with the lack of consistent adherence to evidence-based management of HNC, with a large number of potentially curable patients being inappropriately treated, which significantly affects their outcomes (PUBMED:22842161). This suggests a need for the formulation of a uniform national treatment guideline to standardize care and improve patient outcomes.
The importance of guideline adherence is further supported by a national study in the United States, which found that non-adherence to National Comprehensive Cancer Network (NCCN) guidelines for advanced stage HNC is associated with decreased overall survival (OS), regardless of the reason for non-adherence (PUBMED:34971883). Similarly, a national landscape analysis in the United States identified that over 50% of patients do not commence postoperative radiation therapy (PORT) within the recommended 6 weeks, and delays disproportionately burden racial and ethnic minority groups. Patient navigation (PN) is suggested as a strategy to improve the delivery of timely, equitable, guideline-adherent PORT (PUBMED:38081134).
In Germany, the implementation of an evidence-based guideline for the treatment of maxillofacial cancer led to improved levels of care and adherence to treatment recommendations over time (PUBMED:28108238). Additionally, a study on multidisciplinary treatment planning conferences found that NCCN guideline-compliant recommendations and actual treatment were associated with superior overall survival (PUBMED:26690552).
The German S3 guideline on the diagnosis, treatment, and follow-up of laryngeal cancer and the Dutch national guideline for the management of larynx carcinoma are examples of national efforts to standardize care and improve outcomes through evidence-based recommendations (PUBMED:33025133, PUBMED:32789706, PUBMED:12142094).
Overall, these abstracts support the notion that in India, as in other countries, the development and implementation of a uniform national treatment guideline for HNC could lead to more consistent, evidence-based care and potentially improve patient outcomes. |
Instruction: Pulse oximetry in very low birth weight infants: can oxygen saturation be maintained in the desired range?
Abstracts:
abstract_id: PUBMED:16598294
Pulse oximetry in very low birth weight infants: can oxygen saturation be maintained in the desired range? Objective: To determine if a change in the pulse oximeter goal range and high alarm limit for oxygen saturation (SpO2) alters the distribution of SpO2 for premature infants in oxygen.
Study Design: This was a prospective, observational analysis. For group 1 (February 2002 to April 2002, n = 23), pulse oximeter alarms were set at 80% (low) and 96% (high), and the goal range was 90-95%. For group 2 (May 2002 to August 2003, n = 49), the high alarm was lowered to 94%, and the goal range was 88 to 94%. The SpO2 values for 24 h were downloaded from Nellcor pulse oximeters during the two periods and the percent time within, above and below the goal range was derived and compared.
Results: Groups were similar except for use of post-natal steroids (group 2 > 1). The percent time within (57.7+/-9.8 vs 59.4+/-12.4%), above (15.4+/-10.6 vs 14+/-9.4%) and below (26.9+/-9.7 vs 26.6+/-10.2%) the goal range was similar for groups 1 and 2, respectively. However, the percent time with SpO2 <80% increased significantly for group 2 (4.0+/-2.7 vs 1.9+/-1.4%).
Conclusions: Changes in pulse oximeter policy and alarms in labile, sick premature infants need evaluation for their effects on the distribution of SpO2 values before routine use.
abstract_id: PUBMED:25459788
Pulse oximetry in very low birth weight infants. Pulse oximetry has become ubiquitous and is used routinely during neonatal care. Emerging evidence highlights the continued uncertainty regarding definition of the optimal range to target pulse oximetry oxygen saturation levels in very low birth weight infants. Furthermore, maintaining optimal oxygen saturation targets is a demanding and tedious task because of the frequency with which oxygenation changes, especially in these small infants receiving prolonged respiratory support. This article addresses the historical perspective, basic physiologic principles behind pulse oximetry operation, and the use of pulse oximetry in targeting different oxygen ranges at various time-points throughout the neonatal period.
abstract_id: PUBMED:26859420
Oxygen saturation profile of term equivalent extreme preterm infants at discharge - comparison with healthy term counterparts. Aim: Compare the oxygen saturation profiles before discharge of neonates born extremely preterm (<28 weeks), now at term equivalent age, with healthy term neonates and assess the impact of feeding on this profile in each group.
Methods: We prospectively evaluated and compared the oxygen saturation profile in 15 very low birthweight infants at term equivalent age, ready to be discharged home without any oxygen and 15 term newborns after 48 hours of life. We also evaluated and compared the saturations of these two groups during a one-hour period during and after feeding.
Results: Term equivalent preterm and term infants spent a median of 3% and 0%, respectively, of the time below 90% in a 12-hour saturation-recording period. Term infants spent a median of 0.26% and 0.65% of the time in <90% saturation during feed time and no feed time, respectively. In contrast, preterm infants spent significantly more time at <90% saturation (3.47% and 3.5% during feed time and no feed time, respectively).
Conclusion: Term equivalent preterm infants spent significantly more time in a saturation range <90% compared to term infants. Feeding had little effect on saturation profile overall within each group.
abstract_id: PUBMED:21791935
Discrepancies between arterial oxygen saturation and functional oxygen saturation measured with pulse oximetry in very preterm infants. Background: Discrepancies between pulse oximetry saturation (SpO(2)) and arterial saturation (SaO(2)) at low blood oxygenation values have been previously reported with significant variations among instruments and studies. Whether pulse oximeters that attenuate motion artifact are less prone to such discrepancies is not well known.
Objective: To prospectively assess the agreement of the Masimo V4 pulse oximeter within the critical 85-95% SpO(2) target range.
Patients And Methods: For all consecutive babies with gestational age <33 weeks, postnatal age <7 days, and an umbilical arterial line, SpO(2) was measured continuously and SaO(2) analyzed on an as-needed basis. Bland-Altman techniques provided estimates of the difference (D = SaO(2) - SpO(2)), standard deviation (SD), and 95% limits of agreement (D ± 2*SD).
Results: There were 1,032 measurements (114 babies) with SpO(2) between 85 and 95%. The 95% limits of agreement were -2.0 ± 5.8, -2.4 ± 9.2, and -1.9 ± 5.3 in the SpO(2) categories 85-95, 85-89, and 91-95%, respectively. For the SpO(2) categories 85-89% and 91-95%, only 52% (53/101) and 59% (523/886) of SpO(2) values, respectively, corresponded to the analogous SaO(2) categories. In the 85-89% SpO(2) category, SaO(2) was lower than 85% in 39 of the 101 (39%) measurements.
Conclusion: SaO(2) was lower on average than SpO(2) with an increased bias at lower saturation. The -2.4 ± 9.2 95% limits of agreement for SaO(2) - SpO(2) in the 85-89% SpO(2) category suggest that SpO(2) and SaO(2) are not interchangeable and intermittent SaO(2) assessments are warranted when the targeted SpO(2) is within this range.
abstract_id: PUBMED:28154110
Oxygen saturation ranges for healthy newborns within 24 hours at 1800 m. There are minimal data to define normal oxygen saturation (SpO2) levels for infants within the first 24 hours of life and even fewer data generalisable to the 7% of the global population that resides at an altitude of >1500 m. The aim of this study was to establish the reference range for SpO2 in healthy term and preterm neonates within 24 hours in Nairobi, Kenya, located at 1800 m. A random sample of clinically well infants had SpO2 measured once in the first 24 hours. A total of 555 infants were enrolled. The 5th-95th percentile range for preductal and postductal SpO2 was 89%-97% for the term and normal birthweight groups, and 90%-98% for the preterm and low birthweight (LBW) groups. This may suggest that 89% and 97% are reasonable SpO2 bounds for well term, preterm and LBW infants within 24 hours at an altitude of 1800 m.
abstract_id: PUBMED:34031027
Oxygen saturation reference ranges and factors affecting SpO2 among children living at altitude. Aims: To determine reference values for oxygen saturation (SpO2) among healthy children younger than 5 years living at moderately high altitude in Papua New Guinea and to determine other factors that influence oxygen saturation levels.
Methods: 266 well children living at 1810-2630 m above sea level were examined during immunisation clinic visits, and SpO2 was measured by pulse oximetry. Potential risk factors for hypoxaemia were recorded and analysed by multivariable analysis.
Results: The median SpO2 was 95% (IQR 93%-97%), with a normal range of 89%-99% (2.5-97.5 centiles). On multivariable analysis, younger children, children of parents who smoked, those asleep and babies carried in bilums, a traditional carry bag made of wool or string, had significantly lower SpO2.
Conclusion: The reference range for healthy children living in the highlands of Papua New Guinea was established. Besides altitude, other factors are associated with lower SpO2. Some higher-risk infants (preterm, very low birth weight, recurrent acute lower respiratory infection or chronic respiratory problem) may be more prone to hypoxaemia if they have additive risk factors: if parents smoke or they are allowed to sleep in a bilum, as their baseline oxygen saturation may be significantly lower, or their respiratory drive or respiratory function is impaired. These findings need further research to determine the clinical importance.
abstract_id: PUBMED:17079621
Overcoming barriers to oxygen saturation targeting. Objective: To reduce hyperoxia in very low birth weight infants who receive supplemental oxygen, the Children's Mercy Hospital neonatal respiratory quality improvement committee introduced the potentially better practice of oxygen saturation targeting and identified strategies to overcome barriers to implementation of this practice.
Methods: Using rapid-cycle quality improvement projects, this center adapted an oxygen saturation targeting protocol and tracked hourly oxygen saturation as measured by pulse oximetry in very low birth weight infants who received supplemental oxygen.
Results: The percentage of time in the range of 90% to 94% of oxygen saturation as measured by pulse oximetry increased from 20% to an average of 35% after implementation of the protocol. The percentage of time with oxygen saturation as measured by pulse oximetry >98% dropped from 30% to an average of 5% to 10%.
Conclusions: A well-planned strategy for implementing oxygen saturation targeting can result in a sustained change in clinical practice as well as change in the culture of the NICU regarding the use of oxygen.
abstract_id: PUBMED:2434913
Pulse oximetry in very low birth weight infants with acute and chronic lung disease. With improved survival of very low birth weight infants, the incidence of bronchopulmonary dysplasia has significantly increased. Pulse oximetry appears to be an adequate alternative to transcutaneous PO2, for continuous arterial oxygen saturation (SaO2) monitoring in neonates; however, its usefulness has not been very well documented in very low birth weight infants. We studied 68 patients with birth weight less than 1,250 g; 44 neonates had respiratory distress syndrome and 24 had bronchopulmonary dysplasia. Using a Nellcor N-100 pulse oximeter, we compared transcutaneous oxygen saturation with simultaneous arterial samples analyzed for SaO2 (range 78% to 100%) using an IL 282 co-oximeter. Fetal hemoglobin was measured in 66 patients. We also evaluated the accuracy of transcutaneous PO2 in reflecting arterial PO2 in patients with bronchopulmonary dysplasia. Over a wide range of PO2, PCO2, pH, heart rate, BP, hematocrit, and fetal hemoglobin, linear regression analysis revealed a close correlation between pulse oximeter values and co-oximeter measured SaO2 in patients with acute (r = .88, Y = 19.41 + 0.79X) and chronic (r = .90, Y = 9.72 + 0.92X) disease. Regression analysis of transcutaneous v arterial PO2 in infants with bronchopulmonary dysplasia showed an r value of .78. In addition, in these patients with chronic disease, the mean difference between pulse oximeter SaO2 and co-oximeter measured SaO2 was 2.7 +/- 1.9% (SD); whereas the mean difference between transcutaneous and arterial PO2 was -14 +/- 10.7 mm Hg. Our findings indicate that pulse oximetry can be used reliably in very low birth weight infants with acute and chronic lung disease, for SaO2 values greater than 78%.(ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:27003898
Lower early postnatal oxygen saturation target and risk of ductus arteriosus closure failure. Background: Early postnatal hyperoxia is a major risk factor for retinopathy of prematurity (ROP) in extremely premature infants. To reduce the occurrence of ROP, we adopted a lower early postnatal oxygen saturation (SpO2 ) target range (85-92%) from April 2011. Lower SpO2 target range, however, may lead to hypoxemia and an increase in the risk of ductus arteriosus (DA) closure failure. The aim of this study was therefore to determine whether a lower SpO2 target range, during the early postnatal stage, increases the risk of DA closure failure.
Methods: Infants born at <28 weeks' gestation were enrolled in this study. Oxygen saturation target range during the first postnatal 72 h was 84-100% in study period 1 and 85-92% in period 2.
Results: Eighty-two infants were included in period 1, and 61 were included in period 2. The lower oxygen saturation target range increased the occurrence of hypoxemia during the first postnatal 72 h. Prevalence of DA closure failure in period 2 (21%) was significantly higher than that in period 1 (1%). On multivariate logistic regression analysis, the lower oxygen saturation target range was an independent risk factor for DA closure failure.
Conclusion: Lower early postnatal oxygen saturation target range increases the risk of DA closure failure.
abstract_id: PUBMED:25468620
A comparison of supine and prone positioning on arterial oxygenation in premature neonates. Background: Respiratory distress syndrome (RDS) is one of the most important causes of mortality and morbidity in premature newborns. The aim of this study was to determine the effect of body position on oxygen saturation in hospitalized premature infants with RDS.
Methods: From June 2010 to December 2012, 69 premature infants with RDS were evaluated in an interventional analytic study. Patients had a mean gestational age (±SD) of 31.4 (±2.41) weeks (range: 28-35 weeks) with a mean birth weight (±SD) of 1446.6 (±218.90) grams (range: 850-2400 grams). Infants were studied both in supine and in prone positions. Oxygen saturation was monitored by trans-cutaneous pulse oximeter and oxygen saturation was continually monitored for 3 hours in each position and mean oxygen saturation was calculated at the end of each 3 hour period. All patients were premature, NPO, oxygen dependent and had RDS. The range of the postnatal age of the studied infants was 24-48 hr.
Results: Means (±SD) of oxygen saturation during 3 hours in prone and supine positions were 92.54% (±2.24%) and 91.78% (±2.35%), respectively (p = 0.001). Also means (±SD) of oxygen saturation at the end of each 3-hours period prone and supine positions were 91.30% (±2.42%) and 90.30% (±3.15%), respectively (p = 0.006).
Conclusion: These findings suggest that, in premature infants with RDS, oxygen saturation was significantly higher in the prone compared with the supine posture.
Answer: The use of pulse oximetry in very low birth weight (VLBW) infants to maintain oxygen saturation (SpO2) within a desired range has been a subject of investigation and concern. A study aimed to determine if changing the pulse oximeter goal range and high alarm limit for SpO2 would alter the distribution of SpO2 in premature infants receiving oxygen. The study found that despite changing the high alarm limit from 96% to 94% and the goal range from 90-95% to 88-94%, the percent time within, above, and below the goal range remained similar between the two groups. However, the percent time with SpO2 <80% increased significantly in the group with the lower high alarm limit (PUBMED:16598294).
Maintaining optimal oxygen saturation targets is challenging due to the frequent changes in oxygenation, especially in VLBW infants who require prolonged respiratory support (PUBMED:25459788). Another study compared the oxygen saturation profiles of term equivalent preterm infants ready for discharge without supplemental oxygen to healthy term neonates. It found that preterm infants spent significantly more time with SpO2 <90% compared to term infants, indicating that maintaining desired SpO2 ranges can be more difficult in preterm VLBW infants (PUBMED:26859420).
Discrepancies between arterial oxygen saturation (SaO2) and SpO2 measured with pulse oximetry have been reported, particularly at low blood oxygenation values. A study assessing the agreement of the Masimo V4 pulse oximeter within the critical 85-95% SpO2 target range found that SaO2 was lower on average than SpO2, with an increased bias at lower saturation levels. This suggests that SpO2 and SaO2 are not interchangeable and that intermittent SaO2 assessments are warranted when targeting SpO2 within this range (PUBMED:21791935).
In conclusion, while pulse oximetry is a critical tool for monitoring SpO2 in VLBW infants, maintaining oxygen saturation within the desired range can be challenging due to the infants' labile condition and the limitations of pulse oximetry technology. Adjustments to pulse oximeter policy and alarms need careful evaluation for their effects on SpO2 distribution before routine use (PUBMED:16598294). Additionally, the presence of discrepancies between SpO2 and SaO2 at lower saturation levels indicates the need for cautious interpretation of pulse oximetry readings and possibly supplemental SaO2 assessments (PUBMED:21791935). |
Instruction: Are chest radiographs justified in pre-employment examinations?
Abstracts:
abstract_id: PUBMED:16079972
Are chest radiographs justified in pre-employment examinations? Presentation of legal position and medical evidence based on 1760 cases Background: The legal and medical basis for chest radiographs as part of pre-employment examinations (PEE) at a University Hospital is evaluated. The radiographs are primarily performed to exclude infectious lung disease.
Methods: A total of 1760 consecutive chest radiographs performed as a routine part of PEEs were reviewed retrospectively. Pathologic findings were categorized as "nonrelevant" or "relevant."
Results: No positive finding with respect to tuberculosis or any other infectious disease was found; 94.8% of the chest radiographs were completely normal. Only five findings were regarded as "relevant" for the individual. No employment-relevant diagnosis occurred.
Conclusions: The performance of chest radiography as part of a PEE is most often not justified. The practice is expensive, can violate national and European law, and lacks medical justification.
abstract_id: PUBMED:27857470
The routine pre-employment screening chest radiograph: Should it be routine? Background And Objective: A routine chest radiograph is mandatory in many institutions as a part of pre-employment screening. The usefulness of this has been studied over the years keeping in mind the added time, cost, and radiation concerns. Studies conducted outside India have shown different results, some for and some against it. To our knowledge, there is no published data from India on this issue.
Materials And Methods: A retrospective review of the reports of 4113 pre-employment chest radiographs done between 2007 and 2009 was conducted.
Results: Out of 4113 radiographs, 24 (0.58%) candidates required further evaluation based on findings from the screening chest radiograph. Out of these, 7 (0.17%) candidates required appropriate further treatment.
Interpretation And Conclusions: The percentage of significant abnormalities detected which needed further medical intervention was small (0.17%). Although the individual radiation exposure is very small, the large numbers done nation-wide would significantly add to the community radiation, with added significant cost and time implications. We believe that pre-employment chest radiographs should be restricted to candidates in whom there is relevant history and/or clinical findings suggestive of cardiopulmonary disease.
abstract_id: PUBMED:32270779
Is routine pre-entry chest radiograph necessary in a high tuberculosis prevalence country? Context: Chest radiographs have been used worldwide as a screening tool before employment and training, by various healthcare and other government and nongovernment institutions. Many studies done in the past have demonstrated a relatively low yield for tuberculosis detection and therefore, the authors have questioned this practice.
Aims: To compare the value of the preadmission/employment chest radiograph in two groups, namely, those who have been previously exposed to a healthcare setting (post-exposure group) and those who have not been exposed (pre-exposure group) and to determine if there is a significant difference in tuberculosis detection between these two groups.
Settings And Design: A retrospective review of the reports of the chest radiographs of all candidates appearing for admission to various undergraduate and postgraduate courses in our institute between 2014 and 2017 was performed.
Materials And Methods: The various abnormalities detected were recorded and the findings in the two groups were compared.
Statistical Analysis Used: Chi-square test was used to compare between two group proportions.
Results: Thirty out of 4333 (0.69%) candidates in the pre-exposure group and 53 out of 3379 (1.57%) candidates in the post-exposure group showed abnormalities on chest radiographs involving the lung parenchyma, mediastinum, heart, or pleura. In the pre-exposure group, six (0.14%) were found to have underlying cardiac disease and one (0.02%) had tuberculosis. Among the six candidates in the post-exposure group who underwent further investigations in our institute, five (0.15%) were diagnosed to have tuberculosis. Although there was no statistically significant difference in tuberculosis detection between the groups (P = 0.051), there is a trend towards higher detection of tuberculosis in the post-exposure group.
Conclusions: In a country where the prevalence of tuberculosis is high, the pre-employment chest radiograph may still have a role in detecting tuberculosis in the post-exposure group.
abstract_id: PUBMED:17225852
The futility of universal pre-employment chest radiographs. In a developmental center, a preemployment chest x-ray was required for all job applicants. We scrutinized the pros and cons of this practice through a review of the medical literature and our experience, and discussion with our colleagues. We concluded that such chest x-ray caused unwarranted radiation exposure, did not produce compliance with the tuberculosis laws, gave a false sense of security regarding workers' compensation risk management, was contrary to established occupational medicine practice guidelines, and was unnecessary and wasteful. We discontinued such chest x-rays. The purpose of the pre-employment examination should remain narrowly job related. Even long-established procedures require periodic utilization review.
abstract_id: PUBMED:28660612
Standards for quality assurance of pre-employment medical examinations of seafarers: the IMHA Quality experience. Standards to assess the quality of doctors and clinics performing pre-employment medical examinations (PEMEs) were developed for International Maritime Health Association (IMHA) Quality, a not for profit organisation, created to provide an ethically sound and professional accepted accreditation system that would benefit seafarers having PEMEs and employers, insurers and national maritime authorities seeking valid assessments of seafarers' fitness for duty. These standards followed a format widely used in other healthcare settings, where assessment of clinical performance is desirable. Uptake of these standards by doctors and clinics was not as expected, as they did not see sufficient business benefits coming from accreditation to justify the costs. This was, at least in part, because there was some antagonism to a professionally based accreditation system from commercial interest groups such as insurers, while national maritime authorities did not come forward to use the system as a recommendation or requirement for approval of doctors. The IMHA Quality accreditation system has now been closed and for this reason we are making the standards publicly available. Those who helped to develop them hope that doctors and clinics will now use them as a means of improving the quality of their practice when performing PEME.
abstract_id: PUBMED:22529510
An audit of 3859 preadmission chest radiographs of apparently healthy students in a Nigerian Tertiary Institution. Background: Chest radiographs are routinely requested as part of the medical screening process prior to admission to institutions. Literature on the yield of such an exercise is sparse especially in the Nigerian setting. This study was therefore carried out to assess the usefulness of routine chest radiography for students at the time of admission.
Materials And Methods: This was a prospective study of 3859 chest X-rays taken at the department of radiology, University of Benin Teaching Hospital for one admission screening for the 2008/2009 academic year. The age and sex of the subjects were also recorded. The heart, lung fields and bony thorax were examined for any abnormality.
Results: Of the 3859 pre-admission chest radiographs studied, 1951 (50.56%) were from male and 1908 (49.44%) from female subjects. The mean age for males was 21.15±3.
Conclusion: This study has shown that pre-admission routine chest radiography in asymptomatic patients remains a relevant screening tool for medical fitness during admissions into institutions. However because of dangers of exposure to ionizing radiation, we advise that a detailed medical history and physical examination be done to restrict its use to only those subjects with signs and symptoms suggestive of disease.
abstract_id: PUBMED:37756693
Utility of routine chest radiographs after chest drain removal in paediatric cardiac surgical patients-a retrospective analysis of 1076 patients. Objectives: Chest drains are routinely placed in children following cardiac surgery. The purpose of this study was to determine the incidence of a clinically relevant pneumothorax and/or pleural effusion after drain removal and to ascertain if a chest radiograph can be safely avoided following chest drain removal.
Methods: This single-centre retrospective cohort study included all patients under 18 years of age who underwent cardiac surgery between January 2015 and December 2019 with the insertion of mediastinal and/or pleural drains. Exclusion criteria were chest drain/s in situ ≥14 days and mortality prior to removal of chest drain/s. A drain removal episode was defined as the removal of ≥1 drains during the same episode of analgesia ± sedation. All chest drains were removed using a standard protocol. Chest radiographs following chest drain removal were reviewed by 2 investigators.
Results: In all, 1076 patients were identified (median age: 292 days, median weight: 7.8 kg). There were 1587 drain removal episodes involving 2365 drains [mediastinal (n = 1347), right pleural (n = 598), left pleural (n = 420)]. Chest radiographs were performed after 1301 drain removal episodes [mediastinal (n = 1062); right pleural (n = 597); left pleural (n = 420)]. Chest radiographs were abnormal after 152 (12%) drain removal episodes [pneumothorax (n = 43), pleural effusion (n = 98), hydropneumothorax (n = 11)]. Symptoms/signs were present in 30 (2.3%) patients. Eleven (<1%) required medical management. One required reintubation and 2 required chest drain reinsertion.
Conclusions: The incidence of clinically significant pneumothorax/pleural effusion following chest drain removal after paediatric cardiac surgery is low (<1%). Most patients did not require reinsertion of a chest drain. It is reasonable not to perform routine chest radiographs following chest drain removal in most paediatric cardiac surgical patients.
abstract_id: PUBMED:6146764
Pre-employment psychiatric examinations. A psychiatric referee carried out pre-employment psychiatric examinations on two groups of former psychiatric patients, 96 before and 256 after the Employment Protection Act 1975 became law. The Act prohibited terms of health probation lasting more than one year. The total percentage of subjects rejected (26%) was high. For schizophrenic patients, the proportion rejected increased from 17% before the Act to 46% afterwards, primarily because of the restriction on the length of probation. The inequality created by the Act should be redressed. And confidential returns of the work of all referees ought to be collated.
abstract_id: PUBMED:37760179
The Performance of a Deep Learning-Based Automatic Measurement Model for Measuring the Cardiothoracic Ratio on Chest Radiographs. Objective: Prior studies on models based on deep learning (DL) and measuring the cardiothoracic ratio (CTR) on chest radiographs have lacked rigorous agreement analyses with radiologists or reader tests. We validated the performance of a commercially available DL-based CTR measurement model with various thoracic pathologies, and performed agreement analyses with thoracic radiologists and reader tests using a probabilistic-based reference.
Materials And Methods: This study included 160 posteroanterior view chest radiographs (no lung or pleural abnormalities, pneumothorax, pleural effusion, consolidation, and n = 40 in each category) to externally test a DL-based CTR measurement model. To assess the agreement between the model and experts, intraclass or interclass correlation coefficients (ICCs) were compared between the model and two thoracic radiologists. In the reader tests with a probabilistic-based reference standard (Dawid-Skene consensus), we compared diagnostic measures-including sensitivity and negative predictive value (NPV)-for cardiomegaly between the model and five other radiologists using the non-inferiority test.
Results: For the 160 chest radiographs, the model measured a median CTR of 0.521 (interquartile range, 0.446-0.59) and a mean CTR of 0.522 ± 0.095. The ICC between the two thoracic radiologists and between the model and two thoracic radiologists was not significantly different (0.972 versus 0.959, p = 0.192), even across various pathologies (all p-values > 0.05). The model showed non-inferior diagnostic performance, including sensitivity (96.3% versus 97.8%) and NPV (95.6% versus 97.4%) (p < 0.001 in both), compared with the radiologists for all 160 chest radiographs. However, it showed inferior sensitivity in chest radiographs with consolidation (95.5% versus 99.9%; p = 0.082) and NPV in chest radiographs with pleural effusion (92.9% versus 94.6%; p = 0.079) and consolidation (94.1% versus 98.7%; p = 0.173).
Conclusion: While the sensitivity and NPV of this model for diagnosing cardiomegaly in chest radiographs with consolidation or pleural effusion were not as high as those of the radiologists, it demonstrated good agreement with the thoracic radiologists in measuring the CTR across various pathologies.
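As background to the abstract above, the cardiothoracic ratio (CTR) is the maximal transverse cardiac diameter divided by the maximal internal thoracic diameter on a posteroanterior radiograph, with values above roughly 0.5 conventionally suggesting cardiomegaly. The sketch below shows that calculation on made-up measurements; the numbers are illustrative only, not outputs of the model in the study.

```python
def cardiothoracic_ratio(cardiac_width_mm: float, thoracic_width_mm: float) -> float:
    """Maximal transverse cardiac diameter divided by maximal internal thoracic diameter."""
    return cardiac_width_mm / thoracic_width_mm

# Hypothetical measurements taken from a posteroanterior chest radiograph.
ctr = cardiothoracic_ratio(cardiac_width_mm=152.0, thoracic_width_mm=280.0)
flag = "suggests cardiomegaly" if ctr > 0.5 else "within conventional limits"
print(f"CTR = {ctr:.3f} ({flag})")
```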
abstract_id: PUBMED:20710107
Contact tracing/pre-employment screening for pulmonary tuberculosis: should positive Mantoux test necessitates routine chest X-ray? Background: Pulmonary tuberculosis (PTB) remains a scourge in most developing countries. Mantoux skin test and more commonly chest radiography are some of the methods of diagnosing the disease, especially with regard to contact tracing and pre-employment screening. Regular use of radiography has both safety and cost implications. This study aimed at establishing if any justification exists in requesting for chest radiographs in asymptomatic subjects with a positive Mantoux skin test reaction.
Methods: 174 adults comprising PTB contacts and newly employed/admitted university staff/students were recruited into the study. They were 89 males (51.1%) and 85 females (48.9%). All subjects had Mantoux test (using purified protein derivative, PPD). Patients who had positive Mantoux reaction subsequently were subjected to posteroanterior chest radiographic examination. Mantoux test and chest radiographic findings were then correlated with each other.
Results: 102 subjects (59.2%) had positive Mantoux reactions, while 27 subjects (31.1%) had abnormal chest radiographs. There was no significant correlation between Mantoux readings and chest radiographic findings (P=0.106).
Conclusion: It is concluded that other ancillary tests like sputum examination and/or, where available, Interferon and ESAT tests should be carried out before chest radiograph is requested in subjects with positive Mantoux reaction.
Answer: The justification for chest radiographs in pre-employment examinations is a subject of debate. Several studies have evaluated the necessity, effectiveness, and implications of routine chest radiographs for pre-employment screening.
One study found that out of 1760 chest radiographs performed as part of pre-employment examinations, no cases of tuberculosis or other infectious diseases were detected, and only five findings were considered relevant for the individual. The study concluded that chest radiography as part of a pre-employment examination is often not justified, as it can be expensive, potentially violate laws, and lacks medical justification (PUBMED:16079972).
Another study in India reviewed 4113 pre-employment chest radiographs and found that only 0.58% of candidates required further evaluation based on the screening, with only 0.17% needing medical intervention. The study suggested that the practice should be restricted to candidates with a relevant history or clinical findings suggestive of cardiopulmonary disease, considering the added time, cost, and radiation exposure (PUBMED:27857470).
A study from a high tuberculosis prevalence country showed a slightly higher detection of tuberculosis in the post-exposure group compared to the pre-exposure group, but the difference was not statistically significant. The authors suggested that pre-employment chest radiographs may still have a role in detecting tuberculosis in the post-exposure group in such countries (PUBMED:32270779).
Conversely, another study argued that universal pre-employment chest radiographs are futile, causing unwarranted radiation exposure and not producing compliance with tuberculosis laws, among other issues. The study recommended discontinuing such practices (PUBMED:17225852).
In Nigeria, a study of 3859 pre-admission chest radiographs found that routine chest radiography remains a relevant screening tool for medical fitness during admissions into institutions. However, they advised that its use should be restricted to subjects with signs and symptoms suggestive of disease due to the dangers of ionizing radiation (PUBMED:22529510).
In summary, the justification for routine chest radiographs in pre-employment examinations is not strong, especially in low-risk populations without relevant history or clinical findings. The practice may still be considered in specific high-risk groups or settings with a high prevalence of tuberculosis, but overall, the trend is towards more selective and judicious use of chest radiography in pre-employment screenings to avoid unnecessary radiation exposure, costs, and potential legal issues. |
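For context on the yields quoted above, the screening yield and the number of radiographs needed per actionable finding follow directly from the published counts; the short sketch below simply restates the figures reported in PUBMED:27857470 (7 treated candidates out of 4113 screened).

```python
def screening_yield(actionable_findings: int, screened: int) -> tuple[float, float]:
    """Return (yield as a percentage, radiographs needed per actionable finding)."""
    yield_pct = 100.0 * actionable_findings / screened
    number_needed = screened / actionable_findings
    return yield_pct, number_needed

# Counts reported in PUBMED:27857470: 7 candidates treated out of 4113 screened.
pct, per_finding = screening_yield(7, 4113)
print(f"yield = {pct:.2f}% (about 1 actionable finding per {per_finding:.0f} radiographs)")
```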
Instruction: A comparison of the ICECAP-O with EQ-5D in a falls prevention clinical setting: are they complements or substitutes?
Abstracts:
abstract_id: PUBMED:22723152
A comparison of the ICECAP-O with EQ-5D in a falls prevention clinical setting: are they complements or substitutes? Purpose: Our research explored whether two preference-based outcome measures (EuroQol EQ-5D and ICECAP-O) are complements or substitutes in the context of the Vancouver Falls Prevention Clinic for seniors.
Methods: The EQ-5D and ICECAP-O were administered once at 12 months post first clinic attendance. We report descriptive statistics for all baseline characteristics collected at first clinic visit and primary outcomes of interest. We ascertain feasibility by reporting item completion rates for the EQ-5D and ICECAP-O. Contingency tables for a priori assertions between the ICECAP-O and EQ-5D were used to demonstrate whether unique or similar aspects of benefit were captured. We used exploratory factor analysis, to ascertain the number of unique underlying latent factors associated with the attributes assessed by the EQ-5D and ICECAP-O.
Results: We report data on 215 seniors who attended the Vancouver Falls Prevention Clinic who had a mean age of 79.3 (6.2) years. The item completion rate was 99 % for the EQ-5D and 92 % for the ICECAP-O. The two contingency tables detailed few discrepancies. The results of the exploratory factor analysis indicate that the two instruments are tapping into distinct factors that are complementary.
Conclusion: Our study suggests that the EQ-5D and ICECAP-O provide complementary information.
abstract_id: PUBMED:31741897
Health-related Quality of Life of Patients with Type 2 Diabetes Mellitus at A Tertiary Care Hospital in India Using EQ 5D 5L. Objective: To assess the health-related quality of life of Type 2 Diabetes mellitus patients attending outpatient departments of a tertiary hospital using EQ-5D-5L.
Methods: The study was conducted at a tertiary care hospital in India. The quality of life of patients with type 2 Diabetes mellitus, age 18 years and older, attending outpatient departments of Medicine and Endocrinology was assessed with the help of EQ-5D-5L, a measure of self-reported health related quality of life. Data was analyzed to obtain EQ-5D-5L scores for the five dimensions and EQ VAS score. Correlation of EQ VAS score with different variables was analyzed.
Results: Out of total 358 participants, 208 had comorbidities, hypertension being the most common. Mean age was 60.71 ± 11.41 years and 216 (58.9%) were female participants. Out of five dimensions, Mobility, Self-care, Usual activities, and Pain/discomfort were most affected in age group 71 years and above while anxiety/depression affected age group 18-30 years the most. Mean EQ VAS score was 78.83 ± 15.02. Female participants had significantly higher EQ VAS score (P = 0.00) than male participants. EQ VAS score showed significant negative correlation with uncontrolled state of diabetes (P = 0.000). There was significant difference in EQ VAS score between patients with and without comorbidities. (P =0.004) Cronbach alpha for EQ-5D-5L was 0.76.
Conclusion: The results suggest that EQ-5D-5L is a reliable measure for assessing health related quality of life of patients with Type 2 Diabetes mellitus. Type 2 Diabetes adversely affects the quality of life of patients. Uncontrolled disease and comorbidities can further compromise the quality of life.
abstract_id: PUBMED:28005242
Are the EQ-5D-3L and the ICECAP-O responsive among older adults with impaired mobility? Evidence from the Vancouver Falls Prevention Cohort Study. Purpose: Preference-based generic measures are gaining increased use in mobility research to assess health-related quality of life and wellbeing. Hence, we examined the responsiveness of these two measures among individuals at risk of mobility impairment among adults aged ≥70 years.
Methods: We conducted a 12-month prospective cohort study of community-dwelling older adults (n = 288 to n = 341 depending on analysis) who were seen at the Vancouver Falls Prevention Clinic who had a history of at least one fall in the previous 12 months. We compared the responsiveness of the EuroQol-5 Domain-3 Level (EQ-5D-3L) and the index of capability for older adults (ICECAP-O) by examining changes in these measures over time (i.e., over 6 and 12 months) and by examining whether their changes varied as a function of having experienced 2 or more falls over 6 and 12 months.
Results: Only the ICECAP-O showed a significant change over time from baseline through 12 months; however, neither measure showed change that exceeded the standard error of the mean. Both measures were responsive to falls that occurred during the first 6 months of the study (p < .05). These effects appeared to be amplified among individuals identified as having mild cognitive impairment (MCI) at baseline (p < .01). Additionally, the EQ-5D-3L was responsive among fallers who did not have MCI as well as individuals with MCI who did not fall (p < .05).
Conclusion: This study provides initial evidence suggesting that the EQ-5D-3L is generally more responsive, particularly during the first 6 months of falls tracking among older adults at risk of future mobility impairment.
abstract_id: PUBMED:23095570
Exploration of the association between quality of life, assessed by the EQ-5D and ICECAP-O, and falls risk, cognitive function and daily function, in older adults with mobility impairments. Background: Our research sought to understand how falls risk, cognitive function, and daily function are associated with health related quality of life (using the EuroQol-5D) and quality of life (using the ICECAP-O) among older adults with mobility impairments.
Methods: The EQ-5D and ICECAP-O were administered at 12 months post first clinic attendance at the Vancouver Falls Prevention Clinic. We report descriptive statistics for all baseline characteristics collected at first clinic visit and primary outcomes of interest. Using multivariate stepwise linear regression, we assessed the construct validity of the EQ-5D and ICECAP-O using three dependent measures that are recognized indicators of "impaired mobility" - physiological falls risk, general balance and mobility, and cognitive status among older adults.
Results: We report data on 215 seniors who attended the Vancouver Falls Prevention Clinic and received their first clinic assessment. Patients had a mean age of 79.3 (6.2) years. After accounting for known covariates (i.e., age and sex), the ICECAP-O domains explained a greater amount of variation in each of the three dependent measures compared with the EQ-5D domains.
Conclusion: Both the EQ-5D and ICECAP-O demonstrate associations with falls risk and general balance and mobility; however, only the ICECAP-O was associated with cognitive status among older adults with mobility impairments.
Trial Registration: ClinicalTrials.gov Identifier: NCT01022866.
abstract_id: PUBMED:32468403
Capability of well-being: validation of the Hungarian version of the ICECAP-A and ICECAP-O questionnaires and population normative data. Purpose: We aimed to develop and assess the psychometric characteristics of the Hungarian language version of two well-being capability measures, the ICEpop CAPability measure for Adults/Older people (ICECAP-A/-O), and to establish population norms.
Methods: A cross-sectional survey was performed involving a representative sample of the Hungarian population. Socio-demographic characteristics, the use and provision of informal care were recorded. The Minimum European Health Module (MEHM), EQ-5D-5L, WHO-5 well-being index, happiness and life satisfaction visual analogue scale (VAS), Satisfaction with Life Scale (SWLS) measures were applied alongside the ICECAP-A (age-group 18-64) and ICECAP-O (age-group 65+).
Results: Altogether 1568 and 453 individuals completed the ICECAP-A/-O questionnaires, respectively. Cronbach's alpha was 0.86 for both measures (internal consistency). Subgroup analyses showed positive associations between ICECAP-A/-O scores and marital status, employment, income, health status (MEHM) and informal care use (construct validity). Pearson correlations were strong (r > 0.5; p < 0.01) between ICECAP-A/-O indexes and EQ-5D-5L, WHO-5, happiness and satisfaction VAS and SWLS scores (convergent validity). The age, education, and marital status were no longer significant in the multiple regression analysis. Test-retest average (SD) scores were 0.88 (0.11) and 0.89 (0.10) for the ICECAP-A, and equally 0.86 (0.09) for the ICECAP-O (reliability).
Conclusion: This is the first study to provide ICECAP-A/-O population norms. Also, it is the first to explore associations with WHO-5 well-being index which, alongside the MEHM measures, enable estimates from routinely collected international health statistics. The Hungarian ICECAP-A/-O proved to be valid and reliable measurement tools. Socio-demographic characteristics had minor or no impact on ICECAP-A/-O. Other influencing factors deserve further investigation in future research.
abstract_id: PUBMED:24975078
Assessment and prevention of falls in older people. In June 2013 the National Institute for Health and Care Excellence updated and replaced its 2004 clinical guideline 21 (CG21) on falls with clinical guideline 161 (CG161). Two priorities were outlined in the latter: preventing falls in older people (unchanged from CG21) and preventing falls in older people during a hospital stay (new). CG161 is for health and social care clinicians who care for older people who have fallen or who are at risk of falling. It provides clinicians and commissioners with evidence to implement effective care pathways and recommendations on the assessment and prevention of falls in older people. The amalgamation of the two guidelines has resulted in some disconnection. This article summarises the evidence and supports clinicians in the interpretation of the revised falls guideline.
abstract_id: PUBMED:35191810
Falls prevention and osteoarthritis: time for awareness and action. Osteoarthritis (OA) and falls both commonly affect older people. While high-level evidence exists to prevent falls in older people, falls prevention is rarely considered within contemporary OA management. OA care and falls prevention have for too long been considered as separate clinical constructs. In the context of ageing populations and growing numbers of people with OA, the time to raise awareness and enact appropriate action is now. This Perspectives on Rehabilitation article draws on the findings from a comprehensive mixed-methods falls and OA research program (which uniquely spanned population, clinician, and consumer perspectives) to better understand existing evidence-practice gaps and identify key opportunities for improvements in clinical care. IMPLICATIONS FOR REHABILITATION: While high-level evidence exists to prevent falls in older people, falls prevention is rarely considered within contemporary OA management and this represents a concerning knowledge-to-practice gap. Given ageing populations and growth in the number of people with OA, it is time for falls prevention to be incorporated within routine OA care for older people. To achieve this, we need to re-shape current messaging around falls prevention and develop targeted resources to optimise clinician knowledge and skills in this area.
abstract_id: PUBMED:29236437
FALLS PREVENTION. The following excerpt on Falls Prevention is from the newest addition to the topics list on the Continuing Professional Education (CPE) website. The tutorial covers falls risk, contributing factors, screening and assessment tools, strategies to reduce falls and the nurses and midwives role in prevention and treatment. The ANMF falls prevention tutorial is relevant to all levels of nurses and midwives.
abstract_id: PUBMED:33407525
Assessing the reliability and validity of the ICECAP-A instrument in Chinese type 2 diabetes patients. Purpose: We aimed to conduct psychometric tests for the Chinese version of ICECAP-A and compare the differences between ICECAP-A and EQ-5D-3L for patients with T2DM and explore the relationship between clinical conditions and ICECAP-A through diabetes-related clinical indicators.
Methods: Data were collected from a sample of 492 Chinese T2DM patients. The reliability and validity of the ICECAP-A were verified. Exploratory factor analysis (EFA), correlation analysis and regression analysis were conducted for both the ICECAP-A and EQ-5D-3L.
Results: Our results show that the Chinese version of ICECAP-A has good internal consistency with an overall Cronbach's Alpha coefficient of 0.721. The mean scores of ICECAP-A and EQ-5D-3L are 0.85 vs. 0.94. A weak correlation (r = 0.116) was found between the ICECAP-A tariff and EQ-5D-3L utility. EFA showed that the five dimensions of the ICECAP-A and the five dimensions of the EQ-5D-3L loaded onto two different factors, respectively; the two scales thus captured different dimensions of quality of life and can complement each other. The ICECAP-A, EQ-5D-3L, and EQ-VAS scores showed differences across different socio-demographic characteristics and clinical condition groups.
Conclusion: The Chinese version of the ICECAP-A capability instrument can be for assessing outcomes in adults with T2DM. It may capture more dimensions of QoL than traditional Health-related QoL (HRQoL) instruments and may be useful for economic evaluations of health care and social care for people with T2DM or other chronic diseases.
abstract_id: PUBMED:32809876
Perceptions of falls risk and falls prevention among people with osteoarthritis. Purpose: To understand the perceptions of falls risk and falls prevention, and the perceived enablers and barriers to engaging in falls prevention strategies/activities among people with doctor-diagnosed hip and/or knee osteoarthritis.
Methods: A qualitative study utilising semi-structured telephone interviews. Researchers independently analysed qualitative data using an inductive method guided by the COM-B framework. Interviews were analysed thematically using open, axial, and selective coding. Recruitment ceased at 20 interviews, once data saturation was evident.
Results: Participants were 18 women and two men aged 52-84 years and half had fallen in the last 12 months. Main themes were the absence of recommendations to access activities after having a fall, inconsistencies between perceptions of the relationship between OA and falls, and individual beliefs of not being at risk of falling because of taking precautions. Knowledge about falls prevention programs was limited, the term "falls prevention" was considered stigmatising and only applicable to older frail people. Home modifications were perceived as broadcasting negativity; participants felt falls terminology should be changed from a negative to a positive focus.
Conclusions: Falls were often seen as an inevitable consequence of keeping active. Re-framing the language used to discuss falls is recommended to promote uptake of falls prevention activities. Implications for rehabilitation: Despite growing evidence that osteoarthritis (OA) is an independent risk factor for falls, people with OA do not perceive themselves to be at risk and regard falls prevention as being for those who are "old and frail". Re-framing the language used to discuss falls and falls prevention to focus on positive messages may promote the uptake of falls prevention activities in this population. Improving the awareness of falls and falls risk among people with OA through effective health education is needed in order to foster the uptake of, and engagement with, falls prevention activities.
Answer: The research comparing the ICECAP-O with EQ-5D in a falls prevention clinical setting aimed to determine whether these two preference-based outcome measures are complements or substitutes. The study involved seniors who attended the Vancouver Falls Prevention Clinic, and both the EQ-5D and ICECAP-O were administered at 12 months post first clinic attendance. The results indicated that the item completion rate was high for both measures, and exploratory factor analysis suggested that the two instruments are tapping into distinct factors that are complementary (PUBMED:22723152). This suggests that the EQ-5D and ICECAP-O provide complementary information rather than acting as substitutes for each other in the context of assessing outcomes in a falls prevention clinical setting. |
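The complementarity conclusion rests on an exploratory factor analysis showing that EQ-5D and ICECAP-O items load on distinct latent factors (PUBMED:22723152). The sketch below illustrates such an analysis; the item responses are randomly generated placeholders and the two-factor structure is built into the simulation for illustration, so it is not a reanalysis of the study data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 215  # cohort size reported in the study

# Hypothetical item responses: 5 EQ-5D-like items and 5 ICECAP-O-like items,
# each driven mainly by its own latent factor (health vs. capability).
health = rng.normal(size=n)
capability = rng.normal(size=n)
eq5d_items = health[:, None] + 0.3 * rng.normal(size=(n, 5))
icecap_items = capability[:, None] + 0.3 * rng.normal(size=(n, 5))
items = np.hstack([eq5d_items, icecap_items])

# Fit a two-factor model and inspect which items load on which factor.
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
loadings = fa.components_.T  # shape: (10 items, 2 factors)
for i, row in enumerate(loadings):
    scale = "EQ-5D" if i < 5 else "ICECAP-O"
    print(f"{scale} item {i % 5 + 1}: loadings = {row.round(2)}")
```

If the two instruments tap distinct constructs, the EQ-5D-like items load predominantly on one factor and the ICECAP-O-like items on the other, which is the pattern the study interpreted as complementarity.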
Instruction: Do Appendicitis and Diverticulitis Share a Common Pathological Link?
Abstracts:
abstract_id: PUBMED:27270518
Do Appendicitis and Diverticulitis Share a Common Pathological Link? Objective: The aim of this study was to determine whether there is an association between appendicitis and diverticulitis.
Design: This study is a retrospective cohort analysis.
Setting: This study was conducted in a subspecialty practice at a tertiary care facility.
Patients: We examined the rate of appendectomy among 4 cohorts of patients: 1) patients with incidentally identified diverticulosis on screening colonoscopy, 2) inpatients with medically treated diverticulitis, 3) patients who underwent left-sided colectomy for diverticulitis, and 4) patients who underwent colectomy for left-sided colorectal cancer.
Interventions: There were no interventions.
Main Outcome Measures: The primary outcome measured was the appendectomy rate.
Results: We studied a total of 928 patients in this study. There were no differences in the patient characteristics of smoking status, nonsteroidal use, or history of irritable bowel syndrome across the 4 study groups. Patients with surgically treated diverticulitis had significantly more episodes of diverticulitis (2.8 ± 1.9) than the medically treated group (1.4 ± 0.8) (p < 0.0001). The rate of appendectomy was 8.2% for the diverticulosis control group, 13.5% in the cancer group, 23.5% in the medically treated diverticulitis group, and 24.5% in the surgically treated diverticulitis group (p < 0.0001). After adjusting for demographics and other clinical risk factors, patients with diverticulitis had 2.8 times higher odds of previous appendectomy (p < 0.001) than the control groups.
Limitations: The retrospective study design is associated with selection, documentation, and recall bias.
Conclusions: Our data reveal significantly higher appendectomy rates in patients with a diagnosis of diverticulitis, medically or surgically managed, in comparison with patients with incidentally identified diverticulosis. Therefore, we propose that appendicitis and diverticulitis share similar risk factors and potentially a common pathological link.
abstract_id: PUBMED:34830534
The Adverse Impact of the COVID-19 Pandemic on Abdominal Emergencies: A Retrospective Clinico-Pathological Analysis. The COVID-19 pandemic has caused a significant worldwide drop in admissions to the emergency department (ED). The aim of the study was to retrospectively investigate the pandemic impact on ED admissions, management, and severity of three abdominal emergencies (appendicitis, diverticulitis, and cholecystitis) during the COVID-19 pandemic using 2017-2019 data as a control. The difference in clinical and pathological disease severity was the primary outcome measure, while differences in (i) ED admissions, (ii) triage urgency codes, and (iii) surgical rates were secondary outcome measures. Overall, ED admissions for the selected conditions decreased by 34.9% during the pandemic (control: 996, 2020: 648) and lower triage urgency codes were assigned for cholecystitis (control: 170/556, 2020: 66/356, p < 0.001) and appendicitis (control: 40/178, 2020: 21/157, p = 0.031). Fewer surgical procedures were performed in 2020 (control: 447, 2020: 309), but the surgical rate was stable (47.7% in 2020 vs. 44.8% in 2017-2019). Considering the clinical and pathological assessments, a higher percentage of severe cases was observed in the four pandemic peak months of 2020 (control: 98/192, 2020: 87/109; p < 0.001 and control: 105/192, 2020: 87/109; p < 0.001). For the first time in this study, pathological findings objectively demonstrated an increased disease severity of the analyzed conditions during the early COVID-19 pandemic.
abstract_id: PUBMED:21422362
Epidemiological similarities between appendicitis and diverticulitis suggesting a common underlying pathogenesis. Background: Nonperforating appendicitis is primarily a disease of children, and nonperforating diverticulitis affects mostly older adults. Apart from these age differences, the diseases share many epidemiological features, such as association with better hygiene and low-fiber diets.
Hypothesis: Nonperforating appendicitis and nonperforating diverticulitis are different manifestations of the same underlying colonic process and, if so, should be temporally related.
Design: Data from the National Hospital Discharge Survey were analyzed to investigate the incidence of admissions for appendicitis in children and diverticulitis in adults between 1979 and 2006.
Setting: Statistical sampling of all US hospitals.
Patients: Children admitted for appendicitis and adults with diverticulitis.
Main Outcome Measures: Time trends were assessed for stationarity using unit root analysis, and similarities between time trends were tested using cointegration analysis.
Results: The incidence rates of nonperforating appendicitis and nonperforating diverticulitis exhibited U-shaped secular trends. The rates of perforating appendicitis and perforating diverticulitis rose slowly across all the study years. Cointegration analysis demonstrated that the rates of nonperforating and perforating diverticulitis did not cointegrate significantly over time. The rates of nonperforating and perforating appendicitis did not vary together. Nonperforating appendicitis and nonperforating diverticulitis rates were significantly cointegrated over time.
Conclusions: Childhood appendicitis and adult diverticulitis seem to be similar diseases, suggesting a common underlying pathogenesis. Secular trends for their nonperforating and perforating forms are strikingly different. At least for appendicitis, perforating disease may not be an inevitable outcome from delayed treatment of nonperforating disease. If appendicitis represents the same pathophysiologic process as diverticulitis, it may be amenable to antibiotic rather than surgical treatment.
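The cointegration analysis referred to above tests whether two incidence series share a common long-run trend. The sketch below uses the Engle-Granger test from statsmodels on synthetic series with a shared U-shaped trend; it is illustrative only and does not use the National Hospital Discharge Survey data.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
years = 28  # 1979-2006, matching the study period

# Hypothetical incidence series sharing a common U-shaped trend plus noise.
t = np.linspace(-1.0, 1.0, years)
shared_trend = 10.0 * t ** 2
appendicitis = 50.0 + shared_trend + rng.normal(scale=1.0, size=years)
diverticulitis = 30.0 + shared_trend + rng.normal(scale=1.0, size=years)

# Engle-Granger test: a small p-value is evidence against the null of
# no cointegration, i.e. evidence that the series move together over time.
t_stat, p_value, _ = coint(appendicitis, diverticulitis)
print(f"t-statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
```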
abstract_id: PUBMED:37309705
The Utilization of ChatGPT in Reshaping Future Medical Education and Learning Perspectives: A Curse or a Blessing? Background: ChatGPT has substantial potential to revolutionize medical education. We aim to assess how medical students and laypeople evaluate information produced by ChatGPT compared to an evidence-based resource on the diagnosis and management of 5 common surgical conditions.
Methods: A 60-question anonymous online survey was distributed to third- and fourth-year U.S. medical students and laypeople to evaluate articles produced by ChatGPT and an evidence-based source on clarity, relevance, reliability, validity, organization, and comprehensiveness. Participants received 2 blinded articles, 1 from each source, for each surgical condition. Paired-sample t-tests were used to compare ratings between the 2 sources.
Results: Of 56 survey participants, 50.9% (n = 28) were U.S. medical students and 49.1% (n = 27) were from the general population. Medical students reported that ChatGPT articles displayed significantly more clarity (appendicitis: 4.39 vs 3.89, P = .020; diverticulitis: 4.54 vs 3.68, P < .001; SBO 4.43 vs 3.79, P = .003; GI bleed: 4.36 vs 3.93, P = .020) and better organization (diverticulitis: 4.36 vs 3.68, P = .021; SBO: 4.39 vs 3.82, P = .033) than the evidence-based source. However, for all 5 conditions, medical students found evidence-based passages to be more comprehensive than ChatGPT articles (cholecystitis: 4.04 vs 3.36, P = .009; appendicitis: 4.07 vs 3.36, P = .015; diverticulitis: 4.07 vs 3.36, P = .015; small bowel obstruction: 4.11 vs 3.54, P = .030; upper GI bleed: 4.11 vs 3.29, P = .003).
Conclusion: Medical students perceived ChatGPT articles to be clearer and better organized than evidence-based sources on the pathogenesis, diagnosis, and management of 5 common surgical pathologies. However, evidence-based articles were rated as significantly more comprehensive.
abstract_id: PUBMED:33872846
Surgical Diseases are Common and Complicated for Criminal Justice Involved Populations. Background: At any given time, almost 2 million individuals are in prisons or jails in the United States. Incarceration status has been associated with disproportionate rates of cancer and infectious diseases. However, little is known about the burden emergency general surgery (EGS) in criminal justice involved (CJI) populations.
Materials And Methods: The California Office of Statewide Health Planning and Development (OSHPD) database was used to evaluate all hospital admissions with common EGS diagnoses in CJI persons from 2012-2014. The population of CJI individuals in California was determined using United States Bureau of Justice Statistics data. Primary outcomes were rates of admission and procedures for five common EGS diagnoses, while the secondary outcome was probability of complex presentation.
Results: A total of 4,345 admissions for CJI patients with EGS diagnoses were identified. The largest percentage of EGS admissions were with peptic ulcer disease (41.0%), followed by gallbladder disease (27.5%), small bowel obstruction (14.0%), appendicitis (13.8%), and diverticulitis (10.5%). CJI patients had variable probabilities of receipt of surgery depending on condition, ranging from 6.2% to 90.7%. 5.6% to 21.0% of admissions presented with complicated disease, the highest being with peptic ulcer disease and appendicitis.
Conclusion: Admissions with EGS diagnoses were common and comparable to previously published rates of disease in general population. CJI individuals had high rates of complicated presentation, but low rates of surgical intervention. More granular evaluation of the burden and management of these common, morbid, and costly surgical diagnoses is essential for ensuring timely and quality care delivery for this vulnerable population.
abstract_id: PUBMED:21768232
Beyond appendicitis: common and uncommon gastrointestinal causes of right lower quadrant abdominal pain at multidetector CT. Right lower quadrant abdominal pain is one of the most common causes of a patient visit to the emergency department. Although appendicitis is the most common condition requiring surgery in patients with abdominal pain, right lower quadrant pain can be indicative of a vast list of differential diagnoses and is thus a challenge for clinicians. Other causes of right lower quadrant pain beyond appendicitis include inflammatory and infectious conditions involving the ileocecal region; diverticulitis; malignancies; conditions affecting the epiploic appendages, omentum, and mesentery; and miscellaneous conditions. Multidetector computed tomography (CT) has emerged as the modality of choice for evaluation of patients with several acute traumatic and nontraumatic conditions causing right lower quadrant pain. Multidetector CT is an extremely useful noninvasive method for diagnosis and management of not only the most common causes such as appendicitis but also less common conditions.
abstract_id: PUBMED:14504124
Meckel diverticulum: a geriatric disease masquerading as common gastrointestinal tract disorders. Background: Meckel diverticulum (MD) is traditionally considered a pediatric disease that is associated with intestinal hemorrhage or perforation. Symptomatic MD is rarely a consideration in the geriatric population.
Objective: To notify clinicians of the clinical variety and diagnostic uncertainty of MD in the elderly, we report 7 cases of complicated MD that presented as common disorders of the gastrointestinal (GI) tract in patients older than 65 years.
Methods: A retrospective record review at 2 university-affiliated hospitals revealed 7 patients older than 65 years with MD and abdominal complaints necessitating laparotomy. The patients represented a subset of 27 adults (age range, 21-89 years; mean age, 39 years) with symptomatic MD who required surgery during a 7-year period.
Results: The presenting complaints represented a variety of common GI presentations, including nausea, vomiting, and acute abdominal pain (n = 3); acute abdominal pain with peritonitis (n = 2); crampy abdominal pain lasting several weeks (n = 1); and rectal bleeding (n = 1). Meckel diverticulum was a preoperative consideration in only 2 of 7 cases. The preoperative diagnoses were consistent with common disorders of the GI tract in the elderly, including small-bowel obstruction (n = 2), ischemic colitis (n = 1), unrelenting bleeding in the GI tract (n = 1), perforated viscus (n = 1), diverticulitis (n = 1), and appendicitis (n = 1). In contradistinction to the pediatric age group, only 1 of 7 patients had an MD with ectopic mucosa.
Conclusions: Many different mechanisms can be responsible for complications due to MD in the geriatric population. Misdiagnosis occurs frequently in the elderly because of the poor sensitivity of diagnostic tests, nonspecificity of complaints, and lack of recognition that this anomaly can present in this age group. Clinicians must be cognizant of this common pediatric disease and its varied guises when they are evaluating unexplained acute or intermittent abdominal pain, nausea and vomiting, rectal bleeding, peritonitis, or obstruction in geriatric patients.
abstract_id: PUBMED:21365197
A comparison of the accuracy of ultrasound and computed tomography in common diagnoses causing acute abdominal pain. Objectives: Head-to-head comparison of ultrasound and CT accuracy in common diagnoses causing acute abdominal pain.
Materials And Methods: Consecutive patients with abdominal pain for >2 h and <5 days referred for imaging underwent both US and CT by different radiologists/radiological residents. An expert panel assigned a final diagnosis. Ultrasound and CT sensitivity and predictive values were calculated for frequent final diagnoses. Effect of patient characteristics and observer experience on ultrasound sensitivity was studied.
Results: Frequent final diagnoses in the 1,021 patients (mean age 47; 55% female) were appendicitis (284; 28%), diverticulitis (118; 12%) and cholecystitis (52; 5%). The sensitivity of CT in detecting appendicitis and diverticulitis was significantly higher than that of ultrasound: 94% versus 76% (p < 0.01) and 81% versus 61% (p = 0.048), respectively. For cholecystitis, the sensitivity of both was 73% (p = 1.00). Positive predictive values did not differ significantly between ultrasound and CT for these conditions. Ultrasound sensitivity in detecting appendicitis and diverticulitis was not significantly negatively affected by patient characteristics or reader experience.
Conclusion: CT misses fewer cases than ultrasound, but both ultrasound and CT can reliably detect common diagnoses causing acute abdominal pain. Ultrasound sensitivity was largely not influenced by patient characteristics and reader experience.
abstract_id: PUBMED:35298704
Impact of COVID-19 on common non-elective general surgery diagnoses. Background: During the COVID-19 pandemic, public health and hospital policies were enacted to decrease virus transmission and increase hospital capacity. Our aim was to understand the association between COVID-19 positivity rates and patient presentation with EGS diagnoses during the COVID pandemic compared to historical controls.
Methods: In this cohort study, we identified patients ≥ 18 years who presented to an urgent care, freestanding ED, or acute care hospital in a regional health system with selected EGS diagnoses during the pandemic (March 17, 2020 to February 17, 2021) and compared them to a pre-pandemic cohort (March 17, 2019 to February 17, 2020). Outcomes of interest were number of EGS-related visits per month, length of stay (LOS), 30-day mortality and 30-day readmission.
Results: There were 7908 patients in the pre-pandemic and 6771 in the pandemic cohort. The most common diagnoses in both were diverticulitis (29.6%), small bowel obstruction (28.8%), and appendicitis (20.8%). The lowest relative volume of EGS patients was seen in the first two months of the pandemic period (29% and 40% decrease). A higher percentage of patients were managed at a freestanding ED (9.6% vs. 8.1%) and patients who were admitted were more likely to be managed at a smaller hospital during the pandemic. Rates of surgical intervention were not different. There was no difference in use of ICU, ventilator requirement, or LOS. Higher 30-day readmission and lower 30-day mortality were seen in the pandemic cohort.
Conclusions: In the setting of the COVID pandemic, there was a decrease in visits with EGS diagnoses. The increase in visits managed at freestanding ED may reflect resources dedicated to supporting outpatient non-operative management and lack of bed availability during COVID surges. There was no evidence of a rebound in EGS case volume or substantial increase in severity of disease after a surge declined.
abstract_id: PUBMED:37572470
Diverticulitis with coloenteric fistula mimicking gangrenous appendicitis with perityphlitic abscess. An uncommon presentation of a common disease - A case report. Introduction: Diverticulitis is a common gastrointestinal disease usually presenting with a typical clinical picture depending on the stage of the disease. In complicated cases, the clinical presentation may be untypical, thus delaying diagnosis and treatment.
Presentation Of Case: We present a case of a young patient who was initially treated for obscure intraabdominal abscesses presumably due to gangrenous appendicitis; however, intraoperative exploration revealed a normal appendix and a coloenteric fistula resulting from an unknown and untreated perforated diverticulitis.
Discussion: A patient with a perityphlitic abscess was initially managed with primary non-operative management (NOM) in accordance with the current Jerusalem guidelines, but surgery was eventually necessary due to failure of NOM. Intraoperative findings revealed a sigmoido-ileal fistula, a rare but potentially detectable complication of diverticulitis through colonoscopy. This case highlights the challenges in diagnosing and treating common surgical diseases with uncommon clinical presentations, emphasizing the importance of a detailed patient history and not relying solely on imaging studies.
Conclusion: Intraabdominal abscesses require prompt treatment with non-operative management, while intestinal fistulae associated with diverticulitis are a rare consequence of chronic inflammation, often asymptomatic and often detected incidentally during surgery. In most cases simple fistulous tract resection is usually sufficient as first line therapy.
Answer: The studies provided suggest that appendicitis and diverticulitis may share a common pathological link. A retrospective cohort analysis conducted in a tertiary care facility found significantly higher appendectomy rates in patients with a diagnosis of diverticulitis, whether medically or surgically managed, compared to patients with incidentally identified diverticulosis. This led to the proposal that appendicitis and diverticulitis share similar risk factors and potentially a common pathological link (PUBMED:27270518).
Further supporting this hypothesis, an epidemiological study analyzing data from the National Hospital Discharge Survey between 1979 and 2006 found that the incidence rates of nonperforating appendicitis and nonperforating diverticulitis exhibited U-shaped secular trends and were significantly cointegrated over time. This suggests that childhood appendicitis and adult diverticulitis may be similar diseases, indicating a common underlying pathogenesis (PUBMED:21422362).
The studies indicate that both appendicitis and diverticulitis are associated with better hygiene and low-fiber diets, which could be contributing factors to their common pathogenesis. Additionally, the fact that nonperforating forms of both diseases have similar epidemiological trends, while their perforating forms do not, further suggests that they may represent different manifestations of the same underlying colonic process (PUBMED:21422362).
While these studies provide evidence of a potential common pathological link between appendicitis and diverticulitis, it is important to note that the retrospective design of the studies is associated with limitations such as selection, documentation, and recall bias (PUBMED:27270518). Therefore, while the data suggest a connection, further prospective studies would be beneficial to confirm the findings and to better understand the underlying mechanisms that link these two conditions. |
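The 2.8-fold adjusted odds of prior appendectomy cited above (PUBMED:27270518) is the kind of estimate produced by a logistic regression that adjusts for demographic and clinical covariates. The sketch below shows that approach with statsmodels on simulated data; the covariates and effect sizes are assumptions for illustration, not the study's actual variables or results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 928  # total cohort size reported in the study

# Simulated covariates (all hypothetical): diverticulitis diagnosis, age, sex.
diverticulitis = rng.integers(0, 2, size=n)
age = rng.normal(70, 10, size=n)
male = rng.integers(0, 2, size=n)

# Simulate prior appendectomy with an assumed log-odds effect of diverticulitis.
log_odds = -2.5 + 1.0 * diverticulitis + 0.01 * (age - 70) + 0.1 * male
appendectomy = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Logistic regression; exponentiated coefficients are adjusted odds ratios.
X = sm.add_constant(np.column_stack([diverticulitis, age, male]))
fit = sm.Logit(appendectomy, X).fit(disp=False)
adjusted_or = np.exp(fit.params)
print("adjusted OR for diverticulitis:", round(adjusted_or[1], 2))
```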
Instruction: Plasma concentrations of tissue factor and its inhibitor in chronic thromboembolic pulmonary hypertension: a step closer to explanation of the disease aetiology?
Abstracts:
abstract_id: PUBMED:27221958
Plasma concentrations of tissue factor and its inhibitor in chronic thromboembolic pulmonary hypertension: a step closer to explanation of the disease aetiology? Background: The aetiology of chronic thromboembolic pulmonary hypertension (CTEPH) is not clearly understood. In some patients, the disease is preceded by acute pulmonary embolism (APE), and is characterised by intravascular thrombosis, vasoconstriction, inflammation and remodelling of pulmonary arteries. Ensuing pulmonary hypertension leads to potentially fatal chronic right ventricle failure. Both inborn and acquired risk factors were identified. Pathogenesis of haemostatic disorders is not completely explained, and extrinsic coagulation pathway disorders may play a role in CTEPH aetiology.
Aim: To evaluate levels of tissue factor (TF) and tissue factor pathway inhibitor (TFPI) in CTEPH, and to delineate their role in the disease pathogenesis.
Methods: Plasma concentrations of TF and TFPI were evaluated in 21 CTEPH patients, in 12 patients with pulmonary arterial hypertension (PAH), in 55 APE survivors without persistent pulmonary hypertension after at least 6 months from the acute episode, and in 53 healthy volunteers (control group C). Most patients were treated with vitamin K antagonists (VKA), and some with unfractionated or low molecular weight heparin. Exclusion criteria included malignancy, inflammation, and recent operation.
Results: Tissue factor concentration was lower in CTEPH and in post-APE patients, not stratified by anticoagulation modality, as compared to control group (p = 0.042; p = 0.011) and PAH group (p = 0.024, p = 0.014). Patients with CTEPH and post-APE on adequate VKA-anticoagulation had similar TF concentration to group C. TFPI concentration was similar in CTEPH and post-APE patients irrespective of anticoagulation, and higher as compared to group C (respectively, p = 0.012; p = 0.024; p = 0.004). TFPI concentration was similar in patients with CTEPH and in post-APE group, both on adequate VKA-anticoagulation when compared to group C. In the post-APE group, there was no significant difference in TFPI concentration between patients receiving adequate anticoagulation and subjects without anticoagulation. Group C was significantly (p = 0.000) younger than any other group, and showed correlation (r = 0.31) between age and TFPI concentration.
Conclusions: In CTEPH there is a high consumption of TF, leading to reduction in plasma concentration of TF and increase in TFPI. Adequate VKA-anticoagulation normalises TF and TFPI plasma concentrations, as is the case of APE survivors.
abstract_id: PUBMED:36843115
Plasma Connective Tissue Growth Factor as a Biomarker of Pulmonary Arterial Hypertension Associated With Congenital Heart Disease in Adults. Background: Connective tissue growth factor (CTGF) has diagnostic value for pulmonary arterial hypertension (PAH) associated with congenital heart disease (CHD) in children; however, its value in adult patients remains unclear. This study evaluated CTGF as a biomarker in adult PAH-CHD patients.Methods and Results: Based on mean pulmonary artery pressure (mPAP), 56 CHD patients were divided into 3 groups: without PAH (W; mPAP <25 mmHg; n=28); mild PAH (M; mPAP 25-35 mmHg; n=18); and moderate and severe PAH (H; mPAP ≥35 mmHg; n=10). The control group consisted of 28 healthy adults. Plasma CTGF and B-type natriuretic peptide (BNP) concentrations were determined. Plasma CTGF concentrations were higher in the H and M groups than in the W and control groups, and were higher in the H than M group. Plasma CTGF concentrations were positively correlated with pulmonary artery systolic pressure (PASP), mPAP, and pulmonary vascular resistance, and negatively correlated with mixed venous oxygen saturation. CTGF, BNP, red blood cell distribution width, and World Health Organization Class III/IV were risk factors for PAH in CHD patients, and CTGF was an independent risk factor for PAH-CHD. The efficacy of CTGF in the diagnosis of PAH was not inferior to that of BNP.
Conclusions: CTGF is a biomarker of PAH associated with CHD. It can be used for early diagnosis and severity assessment in adult patients with CHD-PAH.
abstract_id: PUBMED:26714814
Elevated Plasma Connective Tissue Growth Factor Levels in Children with Pulmonary Arterial Hypertension Associated with Congenital Heart Disease. We aimed to investigate plasma connective tissue growth factor (CTGF) levels in pulmonary arterial hypertension (PAH) associated with congenital heart disease (CHD) (PAH-CHD) in children and the relationships of CTGF with hemodynamic parameters. Plasma CTGF levels were calculated in 30 children with CHD, 30 children with PAH-CHD and 25 health volunteers, using the subtraction method. Cardiac catheterization was performed to measure clinical hemodynamic parameters. Plasma CTGF levels were significantly higher in PAH-CHD than in those with CHD and health volunteers (p < 0.01). In cyanotic PAH-CHD, plasma CTGF levels were significantly elevated compared with acyanotic PAH-CHD in the same group (p < 0.05). Plasma CTGF levels showed positive correlation with B-type natriuretic peptide (BNP) in PAH-CHD (r = 0.475, p < 0.01), while oxygen saturation was inversely related to plasma CTGF levels (r = -0.436, p < 0.05). There was no correlation between CTGF and hemodynamic parameters. Even though the addition of CTGF to BNP did not significantly increase area under curve for diagnosis of PAH-CHD compared with BNP alone (p > 0.05), it revealed a moderately better specificity, positive predictive value and positive likelihood ratio than BNP alone. Plasma CTGF levels could be a promising diagnostic biomarker for PAH-CHD in children.
abstract_id: PUBMED:32228739
Diagnostic and predictive values of plasma connective tissue growth factor in children with pulmonary hypertension associated with CHD. Objective: To evaluate the diagnostic and predictive values of plasma connective tissue growth factor in children with pulmonary hypertension (PH)-related CHD.
Patients And Methods: Forty patients with PH-related CHD were enrolled as group I, and 40 patients with CHD and no PH served as group II. Forty healthy children of matched age and sex served as a control group. Echocardiographic examinations and plasma connective tissue growth factor levels were performed for all included children. Cardiac catheterisation was performed for children with CHD only.
Results: Plasma connective tissue growth factor levels were significantly higher in children with PH-related CHD compared to CHD-only patients and to the control group, and this elevation increased with the severity of PH. There was a significant positive correlation between connective tissue growth factor levels and mean pulmonary pressure, pulmonary vascular resistance, and right ventricular diameter. A significant negative correlation was noted between connective tissue growth factor levels and both oxygen saturation and right ventricular diastolic function. The sensitivity of plasma connective tissue growth factor as a diagnostic biomarker for PH was 95%, and the specificity was 90% at a cut-off value ≥650 pg/mL. The predictive value of plasma connective tissue growth factor for adverse outcome had a sensitivity of 88% and a specificity of 83% at a cut-off value ≥1900 pg/mL.
Conclusion: Connective tissue growth factor is a promising biomarker with good diagnostic and predictive values in children with PH-related CHD.
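To make the cut-off analysis above concrete, the short sketch below computes sensitivity and specificity for a plasma CTGF threshold in the way such figures are usually derived. The simulated CTGF values, group sizes' distributions and resulting numbers are invented for illustration; only the ≥650 pg/mL cut-off is taken from the abstract.

```python
import numpy as np

# Hypothetical illustration of evaluating a plasma CTGF cut-off as a diagnostic rule.
# The 'ctgf' values and PH labels below are simulated, not the study's data.
rng = np.random.default_rng(0)
ctgf = np.concatenate([rng.normal(900, 200, 40),    # children with PH-related CHD
                       rng.normal(450, 150, 40)])   # children with CHD but no PH
has_ph = np.concatenate([np.ones(40, dtype=bool), np.zeros(40, dtype=bool)])

cutoff = 650.0                       # pg/mL, the diagnostic cut-off reported above
predicted_ph = ctgf >= cutoff

tp = np.sum(predicted_ph & has_ph)   # true positives
fn = np.sum(~predicted_ph & has_ph)  # false negatives
tn = np.sum(~predicted_ph & ~has_ph) # true negatives
fp = np.sum(predicted_ph & ~has_ph)  # false positives

sensitivity = tp / (tp + fn)         # share of true PH cases flagged by the rule
specificity = tn / (tn + fp)         # share of non-PH cases correctly not flagged
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```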
abstract_id: PUBMED:16351011
MCTD--mixed connective tissue disease. Mixed connective tissue disease is a disease entity characterized by overlapping symptoms of lupus erythematosus (LE), systemic sclerosis (SSc), polymyositis/dermatomyositis (PM/DM) and rheumatoid arthritis (RA). Diagnostic criteria include high titers of antibodies against U1RNP as well as the presence of at least 3 of 5 of the following clinical features: edema of hands, synovitis, myositis, Raynaud phenomenon and acrosclerosis. In terms of the pathogenesis, genetic as well as infectious (viral) factors appear to play a role. The acceptance of MCTD as a distinct disease entity is controversial. Terms such as "undifferentiated connective tissue disease" or "overlapping syndromes" are not helpful. One-quarter of MCTD patients transform into LE, while one-third progress to SSc. Therapeutic recommendations are glucocorticoids in combination with immunosuppressive agents and endothelin receptor antagonists. Double blind studies are not available. The prognosis is relatively good. Causes of death include pulmonary hypertension, infections and both pulmonary and cardiac failure.
abstract_id: PUBMED:27306448
Factor V and VIII deficiency treated with therapeutic plasma exchange prior to redo mitral valve replacement. A 33-year-old male was admitted to the hospital for a repeat mitral valve replacement. The original surgery, performed in India in 2008 due to rheumatic heart disease, required massive amounts of plasma replacement during and after the surgery. The patient was admitted to our hospital with extremely low Factor V and Factor VIII activities due to a rare combined Factor V and Factor VIII deficiency. His clinical condition on admission was grave due to severe pulmonary hypertension. It was decided to replace the patient's Factor V using therapeutic plasma exchange (TPE) with fresh frozen plasma (FFP) just prior to surgery, and his Factor VIII with Factor VIII concentrate. The patient tolerated the valve replacement surgery very well, without excessive bleeding, and received several more TPE procedures postoperatively. He was successfully made replete with both coagulation factors with little to no bleeding during the procedure and postoperatively. TPE is a promising modality for the treatment of patients with similar factor deficiencies for which a specific factor concentrate is not available, especially those at risk of fluid overload from plasma transfusion. J. Clin. Apheresis 32:196-199, 2017. © 2016 Wiley Periodicals, Inc.
abstract_id: PUBMED:8615888
Cellular fibronectin and von Willebrand factor concentrations in plasma of rats treated with monocrotaline pyrrole. The monocrotaline pyrrole (MCTP)-treated rat is a useful model for the study of certain chronic pulmonary vascular diseases. A single, i.v. administration of a low dose of MCTP causes pneumotoxicity, pulmonary vascular remodeling, sustained increases in pulmonary arterial pressure, and right ventricular hypertrophy in rats. The pulmonary vascular lesions are characterized by endothelial cell alterations, platelet and fibrin microvascular thrombosis, pulmonary edema, and thickening of the intimal and medial layers of the vessel wall. These lesions suggest that some dysfunction of the hemostatic system occurs in the lungs of rats treated with MCTP. We evaluated the concentrations of two adhesion proteins, cellular fibronectin (cFn) and von Willebrand factor (vWF), in the plasma of rats treated with MCTP. We hypothesized that changes in these factors occur along with markers of pneumotoxicity and ventricular hypertrophy and that such changes might contribute to the genesis of the vascular lesions. Enzyme-linked immunosorbent assays were used to measure cFn and vWF concentrations in the plasma of rats after MCTP treatment. Rats treated with a single i.v. injection of 3.5 mg MCTP/kg body weight had delayed and progressive lung injury characterized at 5 days post-treatment by increases in the lung-to-body weight ratio and in lactate dehydrogenase activity and protein concentration in cell-free bronchoalveolar lavage fluid (BALF). Values for these markers were further increased at 8 days and reached a plateau thereafter. The number of nucleated cells within the BALF was increased at 8 and 14 days. Right ventricular hypertrophy, an indirect marker of pulmonary hypertension, was evident at 14 days. The cFn concentration was increased in plasma in rats at 8 and 14 days after treatment with MCTP. There was no difference between the vWF concentration in plasma of rats treated with MCTP and those treated with vehicle at any time. We conclude that an increase in plasma cFn concentration occurs prior to the onset of right ventricular hypertrophy and that this change is consistent with a role for cFn in the genesis of vascular remodeling and pulmonary hypertension in the MCTP-treated rat. The lung vascular injury and pulmonary hypertension in this model were not reflected in altered vWF concentration in the plasma.
abstract_id: PUBMED:10088949
Vanishing pulmonary hypertension in mixed connective tissue disease. A 29-year-old woman with mixed connective tissue disease presented with signs of progressive pulmonary hypertension. After admission to the hospital her condition worsened rapidly and she developed a cardiac arrest resistant to cardiopulmonary resuscitation. Therefore, emergency extracorporeal assist was performed. No pulmonary embolism was found. Right heart catheterisation showed severe pulmonary hypertension, which was treated with nitric oxide ventilation. She was weaned from the extracorporeal assist with high doses of inotropic agents. Because of suspicion of exacerbation of her underlying disease, which led to pulmonary hypertension, immunosuppressive treatment was started with high doses of corticosteroids and plasma exchange. This resulted in slow recovery over the next four weeks. Control echocardiography showed complete normalisation of cardiac function without signs of pulmonary hypertension. Two months after admission she was discharged from the hospital in good condition.
abstract_id: PUBMED:25480334
Plasma vascular endothelial growth factor A and placental growth factor: novel biomarkers of pulmonary hypertension in congenital diaphragmatic hernia. Pulmonary hypertension (PH) due to abnormal pulmonary vascular development is an important determinant of illness severity in congenital diaphragmatic hernia (CDH). Vascular endothelial growth factor A (VEGFA) and placental growth factor (PLGF) may be important mediators of pulmonary vascular development in health and disease. This prospective study investigated the relationship between plasma VEGFA and PLGF and measures of pulmonary artery pressure, oxygenation, and cardiac function in CDH. A cohort of 10 infants with CDH consecutively admitted to a surgical neonatal intensive care unit (NICU) was recruited. Eighty serial plasma samples were obtained and analyzed by multiplex immunoassay to quantify VEGFA and PLGF. Concurrent assessment of pulmonary artery pressure (PAP) and cardiac function were made by echocardiography. Plasma VEGFA was higher and PLGF was lower in CDH compared with existing normative data. Combined plasma VEGFA:PLGF ratio correlated positively with measures of PAP, diastolic ventricular dysfunction, and oxygenation index. Nonsurvivors had higher VEGFA:PLGF ratio than survivors at days 3-4 of life and in the second week of life. These findings suggest that increased plasma VEGFA and reduced PLGF correlate with clinical severity of pulmonary vascular disease and may be associated with adverse outcome in CDH. This potential role for combined plasma VEGFA and PLGF in CDH as disease biomarkers, pathogenic mediators, and therapeutic targets merits further investigation.
abstract_id: PUBMED:26667361
The role of mononuclear cell tissue factor and inflammatory cytokines in patients with chronic thromboembolic pulmonary hypertension. Thrombosis and inflammation are two major factors underlying chronic thromboembolic pulmonary hypertension (CTEPH). Tissue factor (TF), C-reactive protein (CRP), tumor necrosis factor-α (TNF-α) and monocyte chemoattractant protein 1 (MCP-1) may play critical roles in the process of CTEPH thrombosis and pulmonary vascular remodeling. Ten patients with a confirmed diagnosis of CTEPH, 20 patients with acute pulmonary thromboembolism and 15 patients with other types of pulmonary hypertension were enrolled in this study, along with 20 healthy subjects as the control group. The immunoturbidimetric method was used to determine the plasma content of CRP. The plasma levels of TNF-α, MCP-1, and TF antigen were measured by an enzyme-linked immunosorbent assay, and TF activity was measured by the chromogenic substrate method. Percoll density gradient centrifugation was used to separate peripheral blood mononuclear cells from plasma. The level of monocyte TF mRNA was examined by reverse transcriptase-polymerase chain reaction. The correlations between all indices described above were analyzed. In CTEPH patients, the expression of CRP, TNF-α, and MCP-1 was significantly higher than that in controls (P < 0.05). The levels of TF activity, TF antigen, and TF mRNA in monocyte cells were increased in CTEPH patients when compared with control subjects, but only the TF antigen and TF mRNA levels were significantly different (P < 0.05). In CTEPH patients, levels of CRP, MCP-1, and TNF-α significantly correlated with the level of TF antigen in plasma. TF gene expression was increased in patients with CTEPH, suggesting that blood-borne TF mainly comes from mononuclear cells. TF expression significantly correlated with levels of CRP, TNF-α and MCP-1. These factors may play an important role in the development of CTEPH via the inflammation-coagulation-thrombosis cycle.
Answer: The study on plasma concentrations of tissue factor (TF) and tissue factor pathway inhibitor (TFPI) in chronic thromboembolic pulmonary hypertension (CTEPH) aimed to evaluate their levels and delineate their role in the disease pathogenesis. The results indicated that tissue factor concentration was lower in CTEPH and in post-acute pulmonary embolism (APE) patients, not stratified by anticoagulation modality, as compared to a control group and a pulmonary arterial hypertension (PAH) group. Patients with CTEPH and post-APE on adequate vitamin K antagonist (VKA) anticoagulation had similar TF concentration to the control group. TFPI concentration was similar in CTEPH and post-APE patients irrespective of anticoagulation, and higher as compared to the control group. Adequate VKA-anticoagulation normalized TF and TFPI plasma concentrations, as was the case with APE survivors. The study concluded that in CTEPH there is a high consumption of TF, leading to a reduction in plasma concentration of TF and an increase in TFPI. This suggests that extrinsic coagulation pathway disorders may play a role in CTEPH aetiology, and that adequate anticoagulation with VKA can normalize these concentrations (PUBMED:27221958).
This research provides insight into the potential involvement of hemostatic disorders in the pathogenesis of CTEPH, indicating that the disease may be associated with an imbalance in the coagulation system, particularly involving tissue factor and its inhibitor. The findings suggest that the observed alterations in TF and TFPI levels could be a step closer to understanding the aetiology of CTEPH, although further research would be needed to fully elucidate the mechanisms and causal relationships involved. |
Instruction: Does a routinely measured blood pressure in young adolescence accurately predict hypertension and total cardiovascular risk in young adulthood?
Abstracts:
abstract_id: PUBMED:14597845
Does a routinely measured blood pressure in young adolescence accurately predict hypertension and total cardiovascular risk in young adulthood? Background: It is insufficiently known if routine blood pressure (BP) measurement by school doctors has added predictive value for later hypertension and cardiovascular risk.
Objective: To assess whether screening of BP in adolescence has additional predictive value to already routinely collected indicators of later hypertension and cardiovascular risk.
Methods: In the Dutch city of Utrecht, routine BPs and anthropometry were collected from school health records of 750 adolescents. In The Hague, standardized repeated BP measurements and anthropometry were available for 262 adolescents. Of both cohorts, 998 participants, now young adults, were recently re-examined. Predictors of adult hypertension (systolic blood pressure (SBP) ≥140 mmHg and/or diastolic blood pressure (DBP) ≥90 mmHg) and of 10-year cardiovascular risk were analysed by logistic regression and the area under the receiver operating characteristic curve (AUC).
Results: A total of 167 young adults had hypertension. Single adolescent SBP and DBP measurements predicted hypertension: odds ratio (OR) 1.04 per mmHg [95% confidence interval (CI): 1.03-1.06] and OR 1.02 (1.00-1.04), respectively, but with little discriminative power. Gender, adolescent body mass index (BMI) and age combined predicted hypertension: AUC 0.71 (0.67-0.75), which improved slightly with the addition of SBP: AUC 0.74 (0.70-0.77); difference in AUC 0.03 (0.002-0.06). SBP exclusively predicted hypertension within men: OR 1.03 (1.01-1.04), AUC 0.59 (0.53-0.65), and within women: OR 1.08 (1.05-1.11), AUC 0.74 (0.67-0.82). However, an adolescent BP of ≥120 mmHg did not efficiently detect hypertensive men, while it detected 57.9% of hypertensive women. Only young adult men had meaningful 10-year cardiovascular risks, which only SBP predicted: OR for a risk score above the 95th percentile 1.04 (1.02-1.07), AUC 0.67 (0.60-0.75).
Conclusion: A single routine BP measurement in adolescent girls efficiently predicts young adult hypertension. In adolescent boys, BP predicts young adult 10-year cardiovascular risk.
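The abstract above reports odds ratios from logistic regression and discriminative power as the area under the ROC curve. The sketch below shows, on simulated data, how such an analysis is typically set up; the variable names, simulated effect sizes and outputs are assumptions for illustration, not the Utrecht/The Hague data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Illustrative only: simulate an adolescent cohort, fit a logistic regression for adult
# hypertension, and report odds ratios per unit of each predictor plus the AUC.
rng = np.random.default_rng(1)
n = 998
df = pd.DataFrame({
    "sbp_adolescent": rng.normal(118, 12, n),   # mmHg
    "bmi_adolescent": rng.normal(19, 2.5, n),   # kg/m^2
    "male": rng.integers(0, 2, n),
    "age": rng.uniform(12, 16, n),
})
true_logit = (-12 + 0.04 * df.sbp_adolescent + 0.15 * df.bmi_adolescent
              + 0.8 * df.male + 0.2 * df.age)
df["hypertension_adult"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = sm.add_constant(df[["sbp_adolescent", "bmi_adolescent", "male", "age"]])
fit = sm.Logit(df["hypertension_adult"], X).fit(disp=False)

print(np.exp(fit.params).round(3))   # odds ratios per unit increase in each predictor
print(f"AUC = {roc_auc_score(df['hypertension_adult'], fit.predict(X)):.2f}")
```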
abstract_id: PUBMED:16880344
Elevated blood pressure in adolescent boys predicts endothelial dysfunction: the cardiovascular risk in young Finns study. Hypertension is a major risk factor for atherosclerosis. It may cause or be a consequence of endothelial dysfunction. We studied whether systolic blood pressure measured in childhood and adolescence predicts endothelial-dependent brachial flow-mediated dilation (FMD) in adulthood. Brachial FMD was measured in 2109 white adults, aged 24 to 39 years, in the 21-year follow-up of the Cardiovascular Risk in Young Finns Study. These subjects have risk factor data available dating back to their childhood (baseline in 1980, ages 3 to 18 years). In male subjects, the level of systolic blood pressure measured in adolescence (at ages 12 to 18 years at baseline) was inversely related to adulthood FMD (P=0.004). This association was independent of brachial diameter and other childhood (P=0.003) and adulthood risk factors, including blood pressure (P=0.03). Childhood (age 3 to 9 years at baseline) systolic blood pressure did not correlate with adult FMD in men or in women (P always >0.2). In male subjects, elevated systolic blood pressure in adolescence predicts impaired brachial endothelial function 21 years later in adulthood. This association is independent of other childhood and adulthood cardiovascular risk factors suggesting that blood pressure elevation in adolescence may have an influence on biological processes that regulate endothelium-dependent flow-mediated vasodilatation capacity.
abstract_id: PUBMED:33821946
Blood Pressure in Young Adults and Cardiovascular Disease Later in Life. Cardiovascular disease (CVD) mortality has declined markedly over the past several decades among middle-age and older adults in the United States. However, young adults (18-39 years of age) have had a lower rate of decline in CVD mortality. This trend may be related to the prevalence of high blood pressure (BP) having increased among young US adults. Additionally, awareness, treatment, and control of hypertension are low among US adults between 20 and 39 years of age. Many young adults and healthcare providers may not be aware of the impact of high BP during young adulthood on their later life, the associations of BP patterns with adverse outcomes later in life, and benefit-to-harm ratios of pharmacological treatment. This review provides a synthesis of the related resources available in the literature to better understand BP-related CVD risk among young adults and better identify BP patterns and levels during young adulthood that are associated with CVD events later in life, and lastly, to clarify future challenges in BP management for young adults.
abstract_id: PUBMED:28338726
Cumulative Exposure to Systolic Blood Pressure During Young Adulthood Through Midlife and the Urine Albumin-to-Creatinine Ratio at Midlife. Background: Higher blood pressure during young adulthood may increase cardiovascular and kidney disease risk later in life. This study examined the association of cumulative systolic blood pressure (SBP) exposure during young adulthood through midlife with urine albumin-to-creatinine ratios (ACR) measured during midlife.
Methods: We used data from the Coronary Artery Risk Development in Young Adults (CARDIA) study, a biracial cohort recruited in 4 urban areas during years 1985-1986. Cumulative SBP was calculated as the average SBP between 2 exams multiplied by the years between exams, summed over 20 years. ACR was measured 20 years after baseline when participants were age 43-50 years (midlife). A generalized additive model was used to examine the association of log ACR as a function of cumulative SBP with adjustment for covariates including SBP measured concurrently with ACR.
Results: Cumulative SBP ranged from a low of 1,671 to a high of 3,260 mm Hg. Participants in the highest cumulative SBP quartile were more likely to be male (61.4% vs. 20.7%; P < 0.001), Black (61.5% vs. 25.6%; P < 0.001) and to have elevated ACR (18.7% vs. 4.8%; P < 0.001) than those in the lowest quartile. Spline regression curves of ACR vs. cumulative SBP demonstrated an inflection point in ACR at cumulative SBP levels >2,350 mm Hg, with linear increases in ACR above this threshold. Adjusted geometric mean ACR values were significantly higher with cumulative SBP ≥2,500 vs. <2,500 (9.18 [1.06] vs. 6.92 [1.02]; P < 0.0001).
Conclusion: Higher SBP during young adulthood through midlife is associated with higher ACR during midlife.
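The cumulative exposure measure described in the Methods above amounts to a time-weighted sum of interval-average SBP values. A minimal sketch with invented exam times and readings is shown below; only the construction of the measure follows the abstract.

```python
import numpy as np

# Cumulative SBP exposure: for each pair of consecutive exams, average the two readings
# and weight by the years between exams, then sum over the 20-year window.
# The exam schedule and SBP readings here are made up for illustration.
exam_years = np.array([0, 2, 5, 7, 10, 15, 20])                      # years since baseline
sbp = np.array([112.0, 115.0, 118.0, 121.0, 124.0, 128.0, 131.0])    # mmHg at each exam

interval_means = (sbp[:-1] + sbp[1:]) / 2.0        # average SBP within each interval
interval_years = np.diff(exam_years)               # length of each interval in years
cumulative_sbp = np.sum(interval_means * interval_years)

print(f"cumulative SBP exposure = {cumulative_sbp:.0f} mmHg-years")
```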
abstract_id: PUBMED:21273991
Relationship between trajectories of trunk fat mass development in adolescence and cardiometabolic risk in young adulthood. To examine developmental trajectories of trunk fat mass (FM) growth of individuals categorized as either low or high for cardiometabolic risk at 26 years, a total of 55 males and 76 females from the Saskatchewan Pediatric Bone Mineral Accrual Study (1991-2007) were assessed from adolescence (11.5 ± 1.8 years) to young adulthood (26.2 ± 2.2 years) (median of 11 visits per individual) and had a measure of cardiometabolic risk in young adulthood. Participants were categorized as low or high for blood pressure and cardiometabolic risk as adults using a sex-specific median split of continuous standardized risk scores. Individual trunk FM trajectories of participants in each risk group were analyzed using multilevel random effects models. Males and females in the high blood pressure group had significantly steeper (accelerated) trajectories of trunk FM development (0.61 ± 0.14 and 0.52 ± 0.10 log g, respectively) than those in the low blood pressure group. For females in the high cardiometabolic risk group, the trajectory of trunk FM was significantly steeper (0.52 ± 0.10 log g) than for females in the low cardiometabolic risk group. Dietary fat was positively related (0.01 ± 0.003 g/1,000 kcal) and physical activity negatively related (-0.16 ± 0.05 physical activity score) to trunk FM development in males. Young adults with high cardiometabolic risk, compared to low, have greater trunk FM as early as 8 years of age, which supports the need for early intervention.
abstract_id: PUBMED:35253445
Adolescent Blood Pressure and the Risk for Early Kidney Damage in Young Adulthood. Background: Recent guidelines classified blood pressure above 130/80 mm Hg as hypertension. However, outcome data were lacking.
Objective: To determine the association between blood pressure in adolescence and the risk for early kidney damage in young adulthood.
Methods: In this nationwide cohort study, we included 629 168 adolescents aged 16 to 20 who underwent medical examinations before mandatory military service in Israel. We excluded 30 466 adolescents with kidney pathology, hypertension, or missing blood pressure or anthropometric data at study entry. Blood pressure measurements at study entry were categorized according to the Clinical Practice Guideline for Screening and Management of High Blood Pressure in Children and Adolescents: group A (<120/<80 mm Hg; Reference group), group B (120/<80-129/<80 mm Hg), group C (130/80-139/89 mm Hg), and group D (≥140/90 mm Hg). Early kidney damage in young adulthood was defined as albuminuria of ≥30 mg/g with an estimated glomerular filtration rate of 60 mL/(min·1.73 m2) or over.
Results: Of 598 702 adolescents (54% men), 2004 (0.3%) developed early kidney damage during a mean follow-up of 15.1 (7.2) years. The adjusted hazard ratios for early kidney damage in blood pressure group C were 1.17 (1.03-1.32) and 1.51 (1.22-1.86) among adolescents with lean (body mass index <85th percentile) and high body mass index (body mass index ≥85th percentile), respectively. Corresponding hazard ratios for kidney disease in group D were 1.49 (1.15-1.93) and 1.79 (1.35-2.38) among adolescents with lean and high body mass index, respectively.
Conclusions: Blood pressure of ≥130/80 mm Hg was associated with early kidney damage in young adulthood, especially in adolescents with overweight and obesity.
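The adjusted hazard ratios above come from a time-to-event analysis over roughly 15 years of follow-up; the abstract does not name the estimator, but a Cox proportional hazards model is the standard choice for such figures. The sketch below, which assumes the third-party lifelines package and uses simulated data, only illustrates that general approach, not the study's actual model.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # assumption: the lifelines package is installed

# Simulated illustration of estimating adjusted hazard ratios for early kidney damage
# by adolescent BP group; none of the numbers below are from the Israeli cohort.
rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "bp_group_c": rng.integers(0, 2, n),   # 1 if BP 130/80-139/89 mmHg, 0 if reference
    "high_bmi": rng.integers(0, 2, n),     # 1 if BMI >= 85th percentile
    "male": rng.integers(0, 2, n),
})
rate = 0.004 * np.exp(0.3 * df.bp_group_c + 0.4 * df.high_bmi + 0.1 * df.male)  # per year
event_time = rng.exponential(1.0 / rate)
df["follow_up_years"] = np.minimum(event_time, 15.0)      # administrative censoring
df["kidney_damage"] = (event_time <= 15.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="kidney_damage")
cph.print_summary()   # the exp(coef) column gives the adjusted hazard ratios
```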
abstract_id: PUBMED:37409562
Association of joint exposure to various ambient air pollutants during adolescence with blood pressure in young adulthood. The association of exposure to various air pollutants during adolescence with blood pressure (BP) in young adulthood is uncertain. We intended to evaluate the long-term association of individual and joint air pollutant exposure during adolescence with BP in young adulthood. This cross-sectional study of incoming students was conducted in five geographically dispersed universities in China during September and October 2018. Mean concentrations of particulate matter with diameters ≤2.5 μm (PM2.5), ≤10 μm (PM10), nitrogen dioxide (NO2), carbon monoxide (CO), sulfur dioxide (SO2), and ozone (O3) at participants' residential addresses during 2013-2018 were collected from the Chinese Air Quality Reanalysis dataset. Generalized linear mixed models (GLM) and quantile g-computation (QgC) models were utilized to estimate the association between individual and joint air pollutant exposure and systolic blood pressure (SBP), diastolic blood pressure (DBP), and pulse pressure (PP). A total of 16,242 participants were included in the analysis. The GLM analyses showed that PM2.5, PM10, NO2, CO, and SO2 were significantly positively associated with SBP and PP, while O3 was positively associated with DBP. The QgC analyses indicated that long-term exposure to a mixture of the six air pollutants had a significant positive joint association with SBP and PP. In conclusion, air pollutant co-exposure during adolescence may influence BP in young adulthood. These findings emphasize the potential health impacts of interactions among multiple air pollutants and the need to minimize pollution exposure in the environment.
abstract_id: PUBMED:33399019
Association of lifetime blood pressure with adulthood exercise blood pressure response: the cardiovascular risk in young Finns study. Purpose: Elevated blood pressure (BP) in childhood has been associated with increased adulthood BP. However, BP and its change from childhood to adulthood and the risk of exaggerated adulthood exercise BP response are largely unknown. Therefore, we studied the association of childhood and adulthood BP with adulthood exercise BP response.
Materials And Methods: This investigation consisted of 406 individuals participating in the ongoing Cardiovascular Risk in Young Finns Study (baseline in 1980, at ages 6-18 years; follow-up in adulthood 27-29 years after baseline). In childhood, BP was classified as elevated according to the tables from the International Child Blood Pressure References Establishment Consortium, while in adulthood BP was considered elevated if systolic BP was ≥120 mmHg or diastolic BP was ≥80 mmHg or if use of antihypertensive medications was self-reported. A maximal cardiopulmonary exercise test with BP measurements was performed by participants in 2008-2009, and exercise BP was considered exaggerated (EEBP) if peak systolic blood pressure exceeded 210 mmHg in men and 190 mmHg in women.
Results: Participants with consistently high BP from childhood to adulthood and individuals with normal childhood but high adulthood BP had an increased risk of EEBP response in adulthood (relative risk [95% confidence interval], 3.32 [2.05-5.40] and 3.03 [1.77-5.17], respectively) in comparison with individuals with normal BP both in childhood and adulthood. Interestingly, individuals with elevated BP in childhood but not in adulthood also had an increased risk of EEBP (relative risk [95% confidence interval], 2.17 [1.35-3.50]).
Conclusions: These findings reinforce the importance of achieving and sustaining normal blood pressure from childhood through adulthood.
abstract_id: PUBMED:19633567
Relation of blood pressure and body mass index during childhood to cardiovascular risk factor levels in young adults. Introduction: Adult obesity and hypertension are leading causes of cardiovascular morbidity/mortality. Although childhood BMI and blood pressure (BP) track into adulthood, how they influence adult cardiovascular risk independent of each other is not well defined.
Methods: Participants were from two longitudinal studies with a baseline evaluation at mean age of 13 years and a follow-up at mean age of 24 years. Regression models using childhood BP and BMI to predict young adult cardiovascular risk factors were performed.
Results: In univariate analysis, childhood BMI predicted young adult BP, lipids, glucose, insulin and insulin resistance, whereas childhood BP predicted young adult BP, lipids and glucose. In a multivariable regression model (adjusted for age, sex and race), which included change in BMI and BP from age 13 to 24 years, BMI predicted all young adult risk factors except BP and glucose. Baseline SBP predicted young adult BP, cholesterol, triglycerides and glucose whereas baseline DBP predicted young adult BP, BMI and glucose.
Conclusion: The results from this study show that BP and BMI act independently in children to influence future cardiovascular risk factors and the combination of high BP and BMI in childhood has an additive effect in predicting the highest levels of young adult cardiovascular risk. Thus, there should be a focus on treating hypertension in overweight and obese children, in addition to attempting to reduce weight.
abstract_id: PUBMED:35000150
Approach to Hypertension in Adolescents and Young Adults. Purpose Of Review: The prevalence of hypertension in adolescents and young adults has increased in part due to the obesity epidemic. The clinical impact and future cardiovascular risk of this underestimated public health problem is an evolving field.
Recent Findings: The development of hypertension is predicted by tracking of elevated blood pressure from childhood to adulthood. Young hypertensive individuals have lower awareness, slower diagnosis rates, and poorer blood pressure control than older patients. Increased awareness, appropriate screening, early identification, and individualized treatment approaches for elevated blood pressure could prevent development of hypertension in adulthood and cardiovascular events in later life. The optimal blood pressure management for young adults with a low 10-year risk of atherosclerotic cardiovascular disease of < 10% remains challenging due to lack of randomized controlled trials. Evidence-based recommendations are needed to implement appropriate measures for time of treatment initiation, preferred antihypertensive drug class to be used and optimal target blood pressure level from childhood through young adulthood.
Answer: A routinely measured blood pressure in young adolescence has some predictive value for hypertension and cardiovascular risk in young adulthood, but the accuracy and discriminative power vary. Single adolescent systolic blood pressure (SBP) and diastolic blood pressure (DBP) measurements can predict hypertension in young adults, with odds ratios indicating a small increase in risk per mmHg increase in BP. However, the discriminative power of these measurements alone is limited (PUBMED:14597845). The predictive value improves slightly when combined with gender, adolescent body mass index (BMI), and age (PUBMED:14597845). Specifically, in adolescent girls, a single routine BP measurement efficiently predicts young adult hypertension, while in boys, BP predicts young adult 10-year cardiovascular risk (PUBMED:14597845).
Elevated systolic blood pressure in adolescent boys has been shown to predict endothelial dysfunction, a precursor to atherosclerosis, in adulthood independently of other risk factors (PUBMED:16880344). Additionally, higher blood pressure during young adulthood through midlife is associated with higher urine albumin-to-creatinine ratios (ACR) during midlife, indicating potential kidney damage (PUBMED:28338726).
Furthermore, the development of hypertension is predicted by the tracking of elevated blood pressure from childhood to adulthood, with young hypertensive individuals having lower awareness and poorer blood pressure control than older patients (PUBMED:35000150). Adolescents with blood pressure ≥130/80 mm Hg are at increased risk for early kidney damage in young adulthood, especially if they are overweight or obese (PUBMED:35253445).
In summary, while adolescent blood pressure measurements do have some predictive value for later hypertension and cardiovascular risk, they should be considered alongside other factors such as BMI, gender, and age for a more accurate assessment. Additionally, the impact of high blood pressure during adolescence on long-term health outcomes such as endothelial function and kidney health further supports the importance of monitoring and managing blood pressure from a young age (PUBMED:16880344; PUBMED:28338726; PUBMED:35253445).
Instruction: Does culture or illness change a smoker's perspective on cessation?
Abstracts:
abstract_id: PUBMED:24933135
Does culture or illness change a smoker's perspective on cessation? Objectives: To explore cultural context for smoking cessation within Chinese communities in Vancouver, and identify opportunities to support development of culturally appropriate resources for cessation.
Methods: Applied participatory approach involving community members, patients, and key-informants in the design and implementation of the research.
Results: Whereas many participants were motivated to quit, their perceptions of desire to do so were not supported by effective interventions and many attempts to quit were unsuccessful.
Conclusion: Tobacco control clinics and care providers need to adopt culturally and linguistically relevant interventions to facilitate behavioral modifications and cessation in ethnic minority communities.
abstract_id: PUBMED:25683197
Existential multiplicity and the late-modern smoker: negotiating multiple identities in a support group for smoking cessation. To examine identity work of smokers attempting to quit, we undertook participant observation at an Israeli cessation support group. Grounded theory and thematic analysis of group dialogue permitted identification of recurring themes and the presentation of illustrative vignettes. We found that, rather than the linear, goal-oriented constitution of a univocal non-smoking identity, identity work entailed re-appraisals of the experience of liminality between smoking and non-smoking selves. Although the group participants reduced their tobacco consumption and some even quit, specific technologies of self sustained the smoking self alongside the non-smoking self. We propose that the social contextualisation of the smoker in the context of late modernity may explain the tolerance of chronic ambivalence and the constitution of a 'resistant' smoking- non-smoking self. Phenomenological accounts of the experience of this hybrid self may more fully explain protracted or failed cessation and further deconstruct binary readings of indulgence or control, addiction or abstinence and illness or wellness. Our findings call for the re-conceptualisation of the experience and outcome of protracted cessation and a tolerant policy-driven intervention.
abstract_id: PUBMED:9411973
The diagnosis of "smoker's lung" encourages smoking cessation. In a controlled randomised trial we analysed whether the use of the term "smoker's lung" (Danish: "rygerlunger") instead of chronic bronchitis when talking to patients with chronic obstructive lung disease (COLD) changed their smoking habits. Fifty-six smoking patients with COLD were allocated to either intervention (n = 25) or control groups (n = 31). In the intervention group the lung disease was designated smoker's lung in all communication with patients about their illness, and in the control group traditional terminology was used. All patients were given the same medical treatment and the same encouragement to stop smoking. One week after discharge 57% had stopped smoking in the smoker's lung group vs 26% in the control group (p = 0.028), at three months 50% vs 19% (p = 0.027) and at one year 40% vs 20% (p = 0.148). Referring directly to the cause of a self-inflicted illness may be an effective way of discouraging risk behaviour, at negligible cost.
abstract_id: PUBMED:36474606
Prevalence of non-communicable diseases and its association with tobacco smoking cessation intention among current smokers in Shanghai, China. Introduction: Smoking remains one of the biggest public health challenges worldwide, quitting tobacco smoking can lead to substantial health gains, even later in life. Previous studies indicate that illness can be a powerful motivation to quit and physicians' advice on smoking cessation has been shown to improve quit rates, but evidence on the role of non-communicable diseases in smoking cessation is limited. This cross-sectional study aims to evaluate the prevalence of non-communicable diseases and to explore its role in smoking cessation intention in smokers in Shanghai.
Methods: From January to June 2021, 1104 current smokers were recruited in the Songjiang and Fengxian districts of Shanghai. We used an Android-assisted electronic questionnaire for data collection, and implemented logistic regression for odds ratio (OR) and 95% confidence interval (CI) calculation to explore how smoking cessation intention would be influenced by non-communicable disease comorbidity among smokers.
Results: The 1104 current smokers included 914 males (82.8%), with an average age of 43.6 years. Approximately 22% of smokers had at least 1 type of non-communicable disease, with 17.8% for non-respiratory system related non-communicable diseases and 6.6% for respiratory system related non-communicable diseases. The prevalence of non-communicable diseases comorbidity ranged from 0.5% to 13.9%, and was higher in male smokers; 41.8% of current smokers had intention to quit smoking in a recent year, and the percentage of smoking cessation intention was higher in smokers with non-communicable diseases. Logistic regression indicated that smokers with non-communicable diseases had 1.3 (95% CI: 1.0-1.8) times higher smoking cessation intention than those without non-communicable disease. The findings were consistent in respiratory system related and non-respiratory system related non-communicable diseases.
Conclusions: The prevalence of non-communicable diseases was high among current smokers in Shanghai, and their smoking cessation intention was associated with non-communicable diseases comorbidity. Physicians should treat illness as a powerful motivation and provide professional cessation service to tobacco users to reverse the severe tobacco epidemic.
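For the analysis described in the Methods above, odds ratios and 95% confidence intervals come from a logistic regression of cessation intention on disease status and covariates. The sketch below reproduces that general setup on simulated data; the covariates, coefficients and output are assumptions, not the Shanghai survey results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration: model intention to quit as a function of non-communicable
# disease (NCD) status, adjusting for sex and age, and report ORs with 95% CIs.
rng = np.random.default_rng(3)
n = 1104
df = pd.DataFrame({
    "has_ncd": rng.integers(0, 2, n),                  # any NCD comorbidity
    "male": (rng.random(n) < 0.828).astype(int),       # roughly the sample's sex mix
    "age": rng.normal(43.6, 12, n),
})
p_quit = 1 / (1 + np.exp(-(-0.6 + 0.26 * df.has_ncd - 0.1 * df.male)))
df["intends_to_quit"] = (rng.random(n) < p_quit).astype(int)

fit = smf.logit("intends_to_quit ~ has_ncd + male + age", data=df).fit(disp=False)
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))
```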
abstract_id: PUBMED:27103658
Staying a smoker or becoming an ex-smoker after hospitalisation for unstable angina or myocardial infarction. The aim of our study was to better understand processes of ongoing smoking or smoking cessation (quitting) following hospitalisation for myocardial infarction or unstable angina (acute cardiac syndromes). In-depth interviews were used to elicit participants' stories about ongoing smoking and quitting. In total, 18 interviews with smokers and 14 interviews with ex-smokers were analysed. Our findings illustrate the complex social nature of smoking practices including cessation. We found that smoking cessation following hospitalisation for acute cardiac syndromes is to some extent a performative act linked to 'doing health' and claiming a new identity, that of a virtuous ex-smoker in the hope that this will prevent further illness. For some ex-smokers hospitalisation had facilitated this shift, acting as a rite of passage and disrupting un-contemplated habits. Those participants who continued to smoke had often considered quitting or had even stopped smoking for a short period of time after hospitalisation; however, they did not undergo the identity shift described by ex-smokers and smoking remained firmly entrenched in their sense of self and the pattern of their daily lives. The ongoing smokers described feeling ashamed and stigmatised because of their smoking and felt that quitting was impossible for them. Our study provides an entry point into the smokers' world at a time when their smoking has become problematised and highly visible due to their illness and when smoking cessation or continuance carries much higher stakes and more immediate consequences than might ordinarily be the case.
abstract_id: PUBMED:34286248
Smoking cessation on the African continent: Challenges and opportunities. Tobacco smoking is one of the world's single biggest preventable causes of death. Over 8 million people die each year of a tobacco-related illness - both directly and as a result of second-hand smoke. Combating this epidemic requires commitment from policy makers, healthcare workers and civil society. The WHO has invested extensively in supporting policy frameworks to assist countries to combat tobacco advertising, sales and promotion. Despite these interventions, over 1 billion people actively smoke, of whom >80% live in low- or middle-income countries. Strong policies, high taxation and cigarette pricing dissuade smokers effectively, but the clinician is frequently the individual who is faced with the smoker wishing to quit. Although many African countries have policies regarding tobacco control, very few have programmes to support smokers who wish to quit, and even fewer have active training programmes to equip healthcare practitioners to assist active smokers in breaking their addiction to nicotine. We present a perspective from several countries across the African continent, highlighting the challenges and opportunities to work together to build capacity for smoking cessation services throughout Africa.
abstract_id: PUBMED:20065338
Smoking cessation: the potential role of risk assessment tools as motivational triggers. Smoking is the most important and preventable cause of morbidity and premature mortality in developed and developing countries. To date, efforts to reduce the burden of smoking have focused on non-personalised strategies. Anxiety about ill health, especially lung cancer and emphysema, is the foremost concern for smokers and a major reason for quitting. Recent efforts in cessation management focus on behaviour change and pharmacotherapy. The '3 Ts' (tension, trigger, treatment) model of behaviour change proposes that at any one time a smoker experiences varying degrees of motivational tension, which in the presence of a trigger may initiate or enhance quitting. Smokers' optimistic bias (ie, denial of one's own vulnerability) sustains continued smoking, while increasing motivational tension (eg, illness) favours quitting. The 1 year quit rates achieved when smokers encounter a life threatening event, such as a heart attack or lung cancer, are as much as 50-60%. Utilising tests of lung function and/or genetic susceptibility personalises the risk and have been reported to achieve 1 year quit rates of 25%. This is comparable to quit rates achieved among healthy motivated smokers using smoking cessation drug therapy. In this paper we review existing evidence and propose that identifying those smokers at increased risk of an adverse smoking related disease may be a useful motivational tool, and enhance existing public health strategies directed at smoking cessation.
abstract_id: PUBMED:33588885
Cost-effectiveness analysis of text messaging to support health advice for smoking cessation. Background: Smoking in one of the most serious public health problems. It is well known that it constitutes a major risk factor for chronic diseases and the leading cause of preventable death worldwide. Due to high prevalence of smokers, new cost-effective strategies seeking to increase smoking cessation rates are needed.
Methods: We performed a Markov model-based cost-effectiveness analysis comparing two treatments: health advice provided by general practitioners and nurses in primary care, and health advice reinforced by sending motivational text messages to smokers' mobile phones. A Markov model was used in which smokers transitioned between three mutually exclusive health states (smoker, former smoker and dead) after 6-month cycles. We calculated the cost-effectiveness ratio associated with the sending of motivational messages. Health care and society perspectives (separately) was adopted. Costs taken into account were direct health care costs and direct health care cost and costs for lost productivity, respectively. Additionally, deterministic sensitivity analysis was performed modifying the probability of smoking cessation with each option.
Results: Sending text messages as a tool to support health advice was found to be cost-effective, as it was associated with incremental costs of €7.4 and €1,327 per QALY gained (ICUR) for men and women, respectively, from a healthcare perspective, well below the published cost-effectiveness threshold. From a societal perspective, the combined programme was dominant.
Conclusions: Sending text messages is a cost-effective approach. These findings support the implementation of the combined programme across primary care health centres.
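The evaluation above rests on a three-state Markov cohort model (smoker, former smoker, dead) with 6-month cycles. The toy sketch below shows how such a model yields an incremental cost-utility ratio; every transition probability, cost and utility is invented, and only the model structure follows the abstract.

```python
import numpy as np

# Toy 3-state Markov cohort model with 6-month cycles comparing health advice alone
# with advice plus motivational text messages. All numbers are hypothetical.
def run_cohort(p_quit, cycles=40):
    # rows = from-state (smoker, former smoker, dead), cols = to-state, per cycle
    P = np.array([[1 - p_quit - 0.010, p_quit, 0.010],
                  [0.030, 0.964, 0.006],
                  [0.000, 0.000, 1.000]])
    utilities = np.array([0.85, 0.95, 0.0]) * 0.5   # QALYs accrued per 6-month cycle
    costs = np.array([300.0, 150.0, 0.0])           # EUR of care per cycle (hypothetical)
    dist = np.array([1.0, 0.0, 0.0])                # the whole cohort starts as smokers
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        dist = dist @ P
        total_cost += dist @ costs
        total_qaly += dist @ utilities
    return total_cost, total_qaly

cost_advice, qaly_advice = run_cohort(p_quit=0.03)
cost_sms, qaly_sms = run_cohort(p_quit=0.05)
cost_sms += 20.0                                    # hypothetical cost of the SMS programme

delta_cost, delta_qaly = cost_sms - cost_advice, qaly_sms - qaly_advice
print(f"incremental cost = {delta_cost:.1f} EUR, incremental QALYs = {delta_qaly:.3f}")
print(f"ICUR = {delta_cost / delta_qaly:.1f} EUR per QALY (negative => SMS arm dominant)")
```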
abstract_id: PUBMED:29573128
Changes in self-efficacy associated with success in quitting smoking in participants in Japanese smoking cessation therapy. Aims: To identify the strength of self-efficacy during 12-week smoking cessation therapy (SCT) that consisted of 5 sessions and its association with the success of smoking cessation at the end of SCT.
Background: Few studies showed to what level self-efficacy should be reinforced to facilitate success in smoking cessation.
Design: Prospective cohort study.
Methods: We enrolled 488 smokers who received SCT from 6 Japanese smoking cessation clinics between October 2008 and October 2014. In each smoker, the self-efficacy of quitting smoking was assessed at each session of SCT. The association of the strength of self-efficacy with the success of smoking cessation was investigated by logistic regression analysis adjusting for age, gender, having a present illness, prescription, and nicotine dependence score.
Results: The self-efficacy of the 398 abstainers was significantly higher than that of the 90 nonabstainers at the first to fifth sessions of the SCT (P < .001 each). Participants with self-efficacy scores of greater than 60% were significantly more likely to attain abstinence than those with scores of 40% and less.
Conclusion: During SCT, nurses' counselling to maintain patients' self-efficacy scores of smoking cessation in the range from 61% to 100% may be important for assisting patients in achieving smoking cessation.
abstract_id: PUBMED:1458044
Processes of change in smoking cessation: a cross-validation study in cardiac patients. The processes of change associated with smoking cessation were examined for 213 smokers and recent exsmokers who were scheduled for cardiac catheterization and compared to the processes reported by a sample of 180 nonmedical smokers and exsmokers. Subjects were classified into one of three stages of change depending on their readiness to quit smoking: precontemplation, contemplation, and action. The cardiac sample employed the processes of change more frequently than the nonmedical sample in all stages, but the functional relationship between the stages and processes of change was generally similar for the two groups. The hierarchical structure of the processes of change also was similar for both groups. Differences between the two samples in the use of the processes of change are discussed. These results are the first to support the validity of the stages and processes-of-change model of smoking cessation in a population experiencing severe illness.
Answer: Yes, both culture and illness can change a smoker's perspective on cessation. Cultural context plays a significant role in smoking cessation within Chinese communities in Vancouver, indicating that culturally and linguistically relevant interventions are necessary to facilitate behavioral modifications and cessation in ethnic minority communities (PUBMED:24933135). Additionally, the experience of liminality between smoking and non-smoking selves suggests that the social contextualization of the smoker in the context of late modernity may influence their cessation efforts, with identity work being a complex process rather than a linear, goal-oriented constitution of a non-smoking identity (PUBMED:25683197).
Illness is also a powerful motivator for smoking cessation. The diagnosis of "smoker's lung" has been shown to encourage smoking cessation, with a study finding that referring directly to the cause of a self-inflicted illness may be an effective way of discouraging risk behavior (PUBMED:9411973). The presence of non-communicable diseases among smokers in Shanghai was associated with a higher intention to quit smoking, suggesting that illness can serve as a significant motivation for cessation (PUBMED:36474606). Similarly, hospitalization for acute cardiac syndromes can act as a rite of passage that disrupts un-contemplated habits and facilitates the shift to a non-smoking identity for some individuals, while others may continue to smoke despite considering quitting (PUBMED:27103658).
In summary, both cultural factors and illness can significantly influence a smoker's perspective on cessation, highlighting the need for culturally appropriate resources and the potential of illness as a motivational trigger for quitting smoking. |
Instruction: Measuring Patient Satisfaction's Relationship to Hospital Cost Efficiency: Can Administrators Make a Difference?
Abstracts:
abstract_id: PUBMED:25533752
Measuring Patient Satisfaction's Relationship to Hospital Cost Efficiency: Can Administrators Make a Difference? Objective: The aim of this study was to assess the ability and means by which hospital administrators can influence patient satisfaction and its impact on costs.
Data Sources: Data are drawn from the American Hospital Association's Annual Survey of Hospitals, federally collected Hospital Cost Reports, and Medicare's Hospital Compare.
Study Design: Stochastic frontier analyses (SFA) are used to test the hypothesis that the patient satisfaction-hospital cost relationship is primarily a latent "management effect." The null hypothesis is that patient satisfaction measures are main effects under the control of care providers rather than administrators.
Principal Findings: Both SFA models were superior to the standard regression analysis when measuring patient satisfaction's relationship to hospitals' cost efficiency. The SFA model with patient satisfaction measures treated as main effects, rather than "latent, management effects," was significantly better when comparing log-likelihood statistics. Higher patient satisfaction scores on the environmental quality and provider communication dimensions were related to lower facility costs. Higher facility costs were positively associated with patients' overall impressions (willingness to recommend and overall satisfaction), assessments of medication and discharge instructions, and ratings of caregiver responsiveness (pain control and help when called).
Conclusions: In the short term, managers have a limited ability to influence patient satisfaction scores, and it appears that working through frontline providers (doctors and nurses) is critical to success. In addition, results indicate that not all patient satisfaction gains are cost neutral and there may be added costs to some forms of quality. Therefore, quality is not costless as is often argued.
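The Principal Findings above hinge on comparing two model specifications by their log-likelihoods. The snippet below illustrates that generic comparison with a likelihood-ratio test on simulated hospital data using ordinary least squares; it is not the authors' stochastic frontier estimator, and all variables and effect sizes are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Generic illustration: compare a cost model without and with a patient-satisfaction
# term by log-likelihood. Simulated data; OLS stands in for the stochastic frontier model.
rng = np.random.default_rng(4)
n = 3000
hosp = pd.DataFrame({
    "beds": rng.normal(250, 80, n),
    "satisfaction": rng.normal(70, 8, n),    # e.g. an HCAHPS-style score
})
hosp["log_cost"] = (10 + 0.002 * hosp.beds - 0.004 * hosp.satisfaction
                    + rng.normal(0, 0.2, n))

restricted = smf.ols("log_cost ~ beds", data=hosp).fit()
full = smf.ols("log_cost ~ beds + satisfaction", data=hosp).fit()

lr_stat = 2 * (full.llf - restricted.llf)     # likelihood-ratio statistic
p_value = stats.chi2.sf(lr_stat, df=1)        # one additional parameter in the full model
print(f"LR = {lr_stat:.1f}, p = {p_value:.3g}")
```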
abstract_id: PUBMED:35233402
The relationship between patient safety culture with patient satisfaction and hospital performance in Shafa Hospital of Kerman in 2020. Background: Hospitals are a significant part of the health system, so their performance is always measured based on some factors such as patient satisfaction and their safety level.
Aim: The present study aimed to examine the relationship of patient safety culture with patient satisfaction and hospital performance.
Materials And Methods: This descriptive-analytical, cross-sectional study was performed on 240 patients, 240 staff and 20 hospital managers in Shafa hospital of Kerman, Iran, in 2020. To collect data, the patient safety culture, the patient satisfaction, and the hospital performance questionnaires were used. The data were analyzed by SPSS and PLS software; to measure the research model, structural equation models and confirmatory factor analysis were used.
Results: The variable "patient satisfaction" and its components had a high mean, with the component "the treating physician" having the highest mean. The variables "patient safety culture" and "hospital performance" had medium means. There were significant positive relationships between patient safety culture and hospital performance, between patient safety culture and patient satisfaction, and between patient satisfaction and hospital performance.
Conclusion: The patient satisfaction level was appropriate in the studied center, and a positive and significant relationship was found between patient safety culture and patient satisfaction and hospital performance.
abstract_id: PUBMED:12626024
Hospital efficiency and patient satisfaction. The objective of this study was to investigate the relationship between efficiency and patient satisfaction for a sample of general, acute care hospitals in Ontario, Canada. A measure of patient satisfaction at the hospital level was constructed using data from a province-wide survey of patients in mid-1999. A measure of efficiency was constructed using data from a cost model used by the Ontario Ministry of Health, the primary funder of hospitals in Ontario. In accordance with previous studies, the model also included measures of hospital size, teaching status and rural location. Based on the results of this study, at a 95% confidence level, there does appear to be evidence to suggest that an inverse relationship between hospital efficiency and patient satisfaction exists. However, the magnitude of the effect appears to be small. Hospital size and teaching status also appear to affect satisfaction, with lower satisfaction scores reported among non-teaching and larger hospitals. This study did not find any evidence to suggest that hospital location (rural versus urban) or religious affiliation contributed to reports of patient satisfaction in any way not explained by the other measures included in the study. The findings imply that low patient satisfaction cannot be explained by excessive management concentration on efficiency. Managers should analyse some of the underlying causes of patient dissatisfaction before reconfiguring resources. It may be beneficial in larger hospitals to study the aspects of care that patients have reported they prefer in small hospitals.
abstract_id: PUBMED:17036456
Cost availability and patient satisfaction of hospital care. This article presents the results of a sociological questionnaire of 1496 patients of oblast and regional hospitals concerning the affordability of hospital care and patient satisfaction with it. About 75% of respondents considered their treatment expenses to be significant. The degree of patients' satisfaction with hospital care was found to depend on the length of stay in the inpatient department: the longer the stay, the fewer patients were satisfied with their medical care. It is proposed that hospitals assess the individual medical care expenses of socially unprotected patients in order to protect the constitutional rights of the population and to reveal the actual cost of medical care within the framework of the territorial programme of public guarantees.
abstract_id: PUBMED:28387619
Implementation of interdisciplinary neurosurgery morning huddle: cost-effectiveness and increased patient satisfaction. OBJECTIVE Morning discharge huddles consist of multiple members of the inpatient care team and are used to improve communication and patient care and to facilitate patient flow through the hospital. However, the effect of huddles on hospital costs and patient satisfaction has not been clearly elucidated. The authors investigated how a neurosurgery-led interdisciplinary daily morning huddle affected various costs of patient care and patient satisfaction. METHODS Huddles were conducted at 8:30 am Monday through Friday, and lasted approximately 30 minutes. The authors retrospectively looked at the average monthly costs per patient for a variety of variables (e.g., average ICU days, average step-down days, average direct cost, average laboratory costs, average pharmacy costs, hospital ratings, and hospital recommendations) and compared the results from before and after implementation of the huddle. RESULTS There was a significant decrease in the number of ICU days, average laboratory costs, and average pharmacy costs per patient after the huddle was implemented; decreased laboratory and pharmacy costs produced $1,408,047.66 in savings. There was no significant difference found for the average direct cost. The percentage of patients who rated the hospital as a 9 or 10 significantly increased. The percentage who answered "strongly agree" when asked whether they would recommend the hospital also significantly increased. CONCLUSIONS A short morning huddle consisting of key members of the inpatient team may result in substantial hospital savings derived from reduced ICU days and laboratory and pharmacy costs as well as increased patient satisfaction.
abstract_id: PUBMED:34888948
Medical disputes and patient satisfaction in China: How does hospital management matter? Objective: Satisfaction with healthcare may be captured by surveys of patients and staff, or in extreme cases, the number and severity of medical disputes. This study tries to investigate the relationship between satisfaction and hospital management as well as the role of good management in preventing medical disputes ex ante.
Method: We investigate this relationship using information on management practices collected from 510 hospitals in mainland China using the World Management Survey questionnaire and combined with medical malpractice litigation data and patient/staff satisfaction surveys. Multiple regression models were used to analyse the relationship between hospital management scores and medical litigation outcomes as well as patient and staff satisfaction during 2014-2016.
Results: An increase of one standard deviation in the management score was related to 13.1% (p < 0.10) lower incidence of medical disputes, 12.4% (p < 0.05) fewer medical litigations, and 51.3% (p < 0.10) less compensation. Better management quality of hospitals was associated with higher inpatient satisfaction (p < 0.05) and staff well-being (p < 0.01).
Conclusion: Improving hospital management could reduce hospital costs generated by lawsuits, reduce potential harm to patients, and improve patient and staff satisfaction, thus leading to a better patient-physician relationship.
abstract_id: PUBMED:10130049
Patient satisfaction and the cost/quality equation. Measuring patient satisfaction with health care treatment and delivery and making necessary adjustments can pay back big dividends to employers, payers, and providers in the form of cost savings and improved quality of care.
abstract_id: PUBMED:37171399
The relationship between the pharmacist's role, patient understanding and satisfaction during the provision of a cost-effective pharmacist-led intervention. Rationale, Aims And Objectives: This study aimed to investigate the relationship between the pharmacist's role, patient understanding and satisfaction during the provision of a cost-effective pharmacist-led intervention using structural equation modelling (SEM). SEM is a group of statistical techniques used in different disciplines to model latent variables and evaluate theories.
Methods: A validated questionnaire was used to gather patient views on a pharmacist-led intervention. A conceptual model was developed to test the statistical significance of the relationships between patient understanding and satisfaction, between the pharmacist's role and patient understanding, and between the pharmacist's role and patient satisfaction. In addition, the study evaluated the model's in-sample and out-of-sample predictive power. The analysis tested four hypotheses (H): 1) There was no significant relationship between patient understanding and patient satisfaction; 2) There was no significant relationship between the pharmacist's role and patient understanding; 3) There was no significant relationship between the pharmacist's role and patient satisfaction; and 4) an assessment of the in-sample and out-of-sample predictive power of the model. Data were analysed using Smart-PLS software version 3.2.8.
Results: Two hundred and forty-six patients returned the questionnaire. Construct reliability and validity (Cronbach's alpha > 0.70, ρA > 0.70, ρC > 0.70), average variance extracted (AVE > 0.50) and discriminant validity (HTMT < 0.85) were confirmed. The structural model and hypothesis testing results showed that all hypotheses were supported in this study. Path coefficients and effect sizes suggested that the pharmacist's role played a significant part in patient understanding (H2, β = 0.650, f2 = 0.730, p < 0.001), which then influenced patient satisfaction (H1, β = 0.474, f2 = 0.222, p < 0.001). The in-sample and out-of-sample predictive powers were moderate.
Conclusions: Patient satisfaction is becoming an integral component in healthcare provision and evaluation of healthcare quality. The results support using structural equation modelling to assess the link between the pharmacist's role and patient understanding and satisfaction when delivering cost-effective pharmacist-led interventions.
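As a brief aside, the Cronbach's alpha statistic cited in the results above is a standard internal-consistency measure rather than anything specific to this study; a common way to write it is

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right) \]

where k is the number of questionnaire items, \(\sigma^{2}_{Y_i}\) is the variance of item i, and \(\sigma^{2}_{X}\) is the variance of the total score. Values above 0.70, as reported here, are conventionally read as acceptable reliability.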
abstract_id: PUBMED:33767533
Decreased costs with maintained patient satisfaction after total joint arthroplasty in a physician-owned hospital. Objective: Comparing total joint arthroplasty (TJA) costs and patient-reported outcomes between a physician-owned hospital (POH) and a non-POH.
Methods: Costs for each 90-day TJA episode at both facilities were determined, and patients were asked to complete a patient satisfaction questionnaire.
Results: Average TJA episode cost was $19,039 at the POH, compared to $21,302 at the non-POH, a difference of $2,263 (p = 0.03), largely driven by decreased skilled nursing facility utilization in the POH group. There were no differences between groups for patient satisfaction.
Conclusion: TJA can be performed at reduced cost with comparable patient satisfaction at POHs, compared to non-POH facilities.
abstract_id: PUBMED:33720552
Home telemonitoring and oxygen therapy in COVID-19 patients: safety, patient satisfaction, and cost-effectiveness Objective: To evaluate the implementation of home telemonitoring and oxygen therapy in COVID-19 patients. Primary outcomes were safety, patient satisfaction, reduction of hospital stay, and cost-effectiveness.
Design: Retrospective cohort study.
Method: All COVID-19 patients who were discharged with home telemonitoring and oxygen therapy between June 1st and November 1st 2020 were included. Eligible patients had a maximum oxygen requirement of 2 liters per minute during the 24 hours prior to discharge with a minimal peripheral oxygen saturation of 94%. A mobile application for telemonitoring was used, which patients or relatives had to be able to use independently. Patient demographics, clinical parameters, data on telemonitoring and readmissions were extracted from the electronic patient records. A survey for patient satisfaction and a cost-effectiveness analysis were performed.
Results: Out of 619 admissions, 49 patients were discharged with home telemonitoring and oxygen therapy. Median duration of home oxygen therapy was 11 days, with a potential reduction in hospitalization of 616 days. Six patients were readmitted; they were significantly more often febrile on discharge (67% versus 14%, p=0.01) and had lower oxygen saturation (95% [IQR 93-96] versus 96% [IQR 95-97], p=0.02) with similar levels of oxygen administration. Patient satisfaction was high, with a mean score of 5 to 6 on a scale measuring satisfaction from 1 to 7. The estimated total cost reduction was €146,736.
Conclusion: This study shows that home telemonitoring and oxygen administration can be safely applied in COVID-19 patients resulting in a high patient satisfaction and reduction in hospital stay and costs.
Answer: The relationship between patient satisfaction and hospital cost efficiency is complex and multifaceted. Hospital administrators can influence patient satisfaction, but their ability to do so may be limited in the short term, and efforts to improve satisfaction may not always be cost-neutral. Higher patient satisfaction scores on environmental quality and provider communication were associated with lower facility costs, while higher costs were positively associated with patients' overall impressions, assessments of medication and discharge instructions, and ratings of caregiver responsiveness. This suggests that not all patient satisfaction improvements are cost-effective, and quality improvements may come with added costs (PUBMED:25533752).
Additionally, a positive relationship has been found between patient safety culture and patient satisfaction, as well as hospital performance. This indicates that fostering a culture of safety can lead to higher patient satisfaction and better hospital performance (PUBMED:35233402). However, an inverse relationship between hospital efficiency and patient satisfaction has been observed, with the effect being small. This suggests that low patient satisfaction cannot be solely attributed to a focus on efficiency and that managers should investigate underlying causes of dissatisfaction (PUBMED:12626024).
The implementation of interdisciplinary morning huddles has been shown to decrease ICU days and laboratory and pharmacy costs, resulting in significant savings and increased patient satisfaction (PUBMED:28387619). Moreover, better hospital management is associated with fewer medical disputes, lower litigation costs, and higher patient and staff satisfaction, indicating that good management practices can lead to cost savings and improved satisfaction (PUBMED:34888948).
In the context of a physician-owned hospital, it has been demonstrated that total joint arthroplasty can be performed at a reduced cost with comparable patient satisfaction compared to non-physician-owned hospitals (PUBMED:33767533). Furthermore, home telemonitoring and oxygen therapy for COVID-19 patients have been found to be safe, with high patient satisfaction, and can reduce hospital stay and costs (PUBMED:33720552).
In conclusion, hospital administrators can make a difference in patient satisfaction and its relationship to cost efficiency. However, the impact may vary depending on the specific measures and strategies implemented. It is important for administrators to work through frontline providers and focus on management practices that not only improve patient satisfaction but also align with cost efficiency goals. |
Instruction: Is Shock Index a Valid Predictor of Mortality in Emergency Department Patients With Hypertension, Diabetes, High Age, or Receipt of β- or Calcium Channel Blockers?
Abstracts:
abstract_id: PUBMED:26144893
Is Shock Index a Valid Predictor of Mortality in Emergency Department Patients With Hypertension, Diabetes, High Age, or Receipt of β- or Calcium Channel Blockers? Study Objective: Shock index is a widely reported tool to identify patients at risk for circulatory collapse. We hypothesize that old age, diabetes, hypertension, and β- or calcium channel blockers weaken the association between shock index and mortality.
Methods: This was a cohort study of all first-time emergency department (ED) visits between 1995 and 2011 (n=111,019). We examined whether age 65 years or older, diabetes, hypertension, and use of β- or calcium channel blockers modified the association between shock index and 30-day mortality.
Results: The 30-day mortality was 3.0%. For all patients, with shock index less than 0.7 as reference, a shock index of 0.7 to 1 had an adjusted odds ratio (OR) of 2.9 (95% confidence interval [CI] 2.7 to 3.2) for 30-day mortality, whereas shock index greater than or equal to 1 had an OR of 10.5 (95% CI 9.3 to 11.7). The crude OR for shock index greater than or equal to 1 in patients aged 65 years or older was 8.2 (95% CI 7.2 to 9.4) compared with 18.9 (95% CI 15.6 to 23.0) in younger patients. β- or calcium channel-blocked patients had an OR of 6.4 (95% CI 4.9 to 8.3) versus 12.3 (95% CI 11.0 to 13.8) in nonusers, and hypertensive patients had an OR of 8.0 (95% CI 6.6 to 9.4) versus 12.9 (95% CI 11.1 to 14.9) in normotensive patients. Diabetic patients had an OR of 9.3 (95% CI 6.7 to 12.9) versus 10.8 (95% CI 9.6 to 12.0) in nondiabetic patients. A shock index of 0.7 to 1 was associated with ORs greater than 1 (range 2.2 to 3.1), with no evident differences within subgroups. The adjusted analyses showed similar ORs.
Conclusion: Shock index is independently associated with 30-day mortality in a broad population of ED patients. Old age, hypertension, and β- or calcium channel blockers weaken this association. However, a shock index greater than or equal to 1 suggests substantial 30-day mortality risk in all ED patients.
abstract_id: PUBMED:31575497
Antihypertensive treatment with calcium channel blockers in patients with moderate or severe aortic stenosis: Relationship with all-cause mortality. Background: Hypertension is common in patients with aortic stenosis (AS) and optimal blood pressure (BP) control is advised to reduce arterial load and cardiovascular events. Whether calcium channel blockers (CCB) are safe is not known.
Methods: This was a retrospective analysis of 314 patients (age 65 ± 12 years, 68% men) with moderate or severe asymptomatic AS. Hypertension was defined from a history of hypertension, past or current antihypertensive treatment or a BP at the baseline clinic visit >140/90 mmHg. All patients underwent an exercise treadmill test (ETT) and echocardiography.
Results: The prevalence of hypertension was 73.6%, and 65% took antihypertensive treatment. Patients who used a CCB (25%; CCB+) were older, more likely to have hypercholesterolemia and coronary artery disease, and had higher systolic BP, stroke work, and left ventricular mass compared to CCB- patients (all p < 0.05). During the baseline ETT, CCB+ patients achieved a lower peak heart rate, a shorter exercise time and were more likely to have a blunted BP response compared to CCB- patients (p < 0.05). Event-free survival was significantly lower in CCB+ than CCB- patients (all-cause mortality 16 [20.3%] versus 13 [5.6%]; p < 0.001). In a multivariable Cox regression model, CCB+ was associated with a 7-fold increased hazard ratio (HR) for all-cause mortality (HR 7.09; 95% CI 2.15-23.38, p = 0.001), independent of age, hypertension, diabetes, left ventricular ejection fraction, and aortic valve area.
Conclusion: The use of CCB was associated with an adverse effect on treadmill exercise and reduced survival in asymptomatic patients with moderate or severe AS.
abstract_id: PUBMED:29508499
Current use of antihypertensive drugs in Japanese patients with hypertension: Analysis by age group. Aim: To analyze the current use of antihypertensive drug classes in Japanese hypertensive patients stratified by age, highlighting differences between older and younger patients.
Methods: A nationwide medical database was used to evaluate antihypertensive use in patients (aged ≥20 years) who had received a prescription for one or more antihypertensive drug as an outpatient from April 2014 to March 2015. Patients (n = 59 867) were age-stratified into three groups: <65 years (28.7%), 65-74 years (33.1%) and ≥75 years (38.2%).
Results: The mean number of antihypertensive drugs prescribed for patients in the overall population was 1.9 ± 1.0, with no appreciable differences between age groups. The most commonly prescribed drug classes for all ages were calcium channel blockers (CCB) and angiotensin II receptor blockers (ARB). CCB were prescribed more often than ARB in the 65-74 years (66.9% vs 60.5%) and ≥75 years (70.4% vs 56.8%) age groups, and ARB were prescribed more often than CCB in patients aged <65 years (63.1% vs 61.9%). There were minimal differences by age in prescription rates for β-blockers, angiotensin-converting enzyme inhibitors and thiazide diuretics. ARB prescription rates were lower in patients aged ≥75 years with diabetes mellitus or renal disease than in younger age groups. Prescription rates for loop diuretics were higher in patients aged ≥75 years than in younger age groups, especially among those with renal disease.
Conclusions: Antihypertensive drugs selected for patients aged ≥75 years differed from those selected for younger patients, in particular CCB and loop diuretics (prescribed more often), and ARB (prescribed less often).
abstract_id: PUBMED:27648007
Beta-Blocker Use Is Associated with Higher Renal Tissue Oxygenation in Hypertensive Patients Suspected of Renal Artery Stenosis. Background: Chronic renal hypoxia influences the progression of chronic kidney disease (CKD). Blood oxygen level-dependent (BOLD) magnetic resonance (MR) is a noninvasive tool for the assessment of renal oxygenation. The impact of beta-blockers on renal hemodynamics and oxygenation is not completely understood. We sought to determine the association between beta-blocker use, renal cortical and medullary oxygenation, and renal blood flow in patients suspected of renal artery stenosis.
Methods: We measured renal cortical and medullary oxygenation using BOLD MR and renal artery blood flow using MR phase contrast techniques in 38 participants suspected of renal artery stenosis.
Results: Chronic beta-blocker therapy was associated with improved renal cortical (p < 0.001) and medullary (p = 0.03) oxygenation, while the use of calcium channel blockers or diuretics showed no association with either cortical or medullary oxygenation. Receipt of angiotensin-converting enzyme inhibitors or angiotensin receptor blockers was associated with reduced medullary oxygenation (p = 0.01). In a multivariable model, chronic receipt of beta-blockers was the only significant predictor of renal tissue oxygenation (β = 8.4, p = 0.008). Beta-blocker therapy was not associated with significant changes in renal artery blood flow, suggesting that improved renal oxygenation may be related to reduced renal oxygen consumption.
Conclusions: In addition to known benefits to reduce cardiovascular mortality in patients with renal disease, beta-blockers may reduce or prevent the progression of renal dysfunction in patients with hypertension, diabetes, and renovascular disease, partly by reducing renal oxygen consumption. These observations may have important implications for the treatment of patients with CKD.
abstract_id: PUBMED:28251160
Prognostic Analysis for Cardiogenic Shock in Patients with Acute Myocardial Infarction Receiving Percutaneous Coronary Intervention. Cardiogenic shock (CS) is uncommon in patients suffering from acute myocardial infarction (AMI). Long-term outcome and adverse predictors for outcomes in AMI patients with CS receiving percutaneous coronary interventions (PCI) are unclear. A total of 482 AMI patients who received PCI were included, comprising 53 with CS and 429 without CS. Predictors of outcomes in AMI patients with CS, including recurrent MI, cardiovascular (CV) mortality, all-cause mortality, and repeated PCI, were analyzed. The CS group had a lower central systolic pressure and central diastolic pressure (both P < 0.001). AMI patients with a history of hypertension were less prone to develop CS (P < 0.001). Calcium channel blockers and statins were less frequently used by the CS group than the non-CS group (both P < 0.05) after discharge. Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery (SYNTAX) score, CV mortality, and all-cause mortality were higher in the CS group than the non-CS group (all P < 0.005). For patients with CS, stroke history was a predictor of recurrent MI (P = 0.036). CS, age, SYNTAX score, and diabetes were predictors of CV mortality (all P < 0.05). CS, age, SYNTAX score, and stroke history were predictors for all-cause mortality (all P < 0.05). CS, age, and current smoking were predictors for repeated PCI (all P < 0.05).
abstract_id: PUBMED:24050270
A meta-analysis of the effect of angiotensin receptor blockers and calcium channel blockers on blood pressure, glycemia and the HOMA-IR index in non-diabetic patients. Objective: This study compared the efficacy of angiotensin receptor blockers (ARBs) and calcium channel blockers (CCBs) in the effect of insulin resistance (IR) as assessed using the homeostasis model assessment of insulin resistance (HOMA-IR) in non-diabetic patients.
Methods: The MEDLINE, EMBASE, and Cochrane Library databases were searched to identify studies published before December 2012 that investigated the use of ARBs and CCBs to determine the effect on the HOMA-IR index in non-diabetics. Parameters on IR and blood pressure were collected. Review Manager 5.2 and Stata 12.0 were used to perform the meta-analysis. Fixed and random effects models were applied to various aspects of the meta-analysis, which assessed the therapeutic effects of the two types of drug using the HOMA-IR index in non-diabetic patients.
Results: The meta-analysis included five clinical trials. Patient comparisons before and after treatment with ARBs and CCBs revealed that ARBs reduced the HOMA-IR index (weighted mean difference (WMD) -0.65, 95% confidence interval (CI) -0.93 to -0.38) and fasting plasma insulin (FPI) (WMD -2.01, 95% CI -3.27 to -0.74) significantly more than CCBs. No significant differences in the therapeutic effects of these two types of drug on blood pressure were observed.
Conclusion: Given that there are no significant differences in the therapeutic effects of ARBs and CCBs on blood pressure, as ARBs are superior to CCBs in their effect on the HOMA-IR index in non-diabetics, they might be a better choice in hypertension patients without diabetes.
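For context, the HOMA-IR index analysed in this meta-analysis is a standard insulin-resistance estimate computed from fasting measurements (the formula below is general background and is not restated in the abstract above):

\[ \mathrm{HOMA\text{-}IR} = \frac{\text{fasting insulin}\;(\mu\mathrm{U/mL}) \times \text{fasting glucose}\;(\mathrm{mmol/L})}{22.5} \]

For example, hypothetical values of 10 µU/mL fasting insulin and 5.0 mmol/L fasting glucose give HOMA-IR ≈ 2.2, so the reported weighted mean difference of -0.65 represents a shift on this unitless scale.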
abstract_id: PUBMED:21546880
Calcium channel blockers are independently associated with short sleep duration in hypertensive patients with obstructive sleep apnea. Objective: Obstructive sleep apnea (OSA) and hypertension (HYP) frequently coexist and have additive harmful effects on the cardiovascular system. There is also growing evidence that short sleep duration may contribute independently to poor cardiovascular outcome. The aim of this study was to evaluate the potential influence of antihypertensive medication on sleep parameters objectively measured by standard polysomnography in hypertensive patients with OSA.
Methods: We evaluated consecutive patients with a recent diagnosis of OSA by full polysomnography (apnea hypopnea index ≥ 5 events/h) and HYP. Smokers, patients with diabetes mellitus, heart failure, or using hypnotics and benzodiazepines were excluded.
Results: We evaluated 186 hypertensive patients with OSA, 64% men. All patients were on at least one antihypertensive medication, including angiotensin-converting enzyme inhibitors (37%), beta-blockers (35%), angiotensin receptor blockers (32%), diuretics (29%) and calcium channel blockers (21%). Backward multiple regression analysis showed that age (P ≤ 0.001) and the use of calcium channel blockers (P = 0.037) were the only factors inversely associated with total sleep time. Sleep efficiency was inversely associated only with age (P ≤ 0.001), whereas the use of calcium channel blockers had a nonsignificant trend (P = 0.092). Use of calcium channel blockers was associated with significant reduction in total sleep time (-41 min, P = 0.005) and 8% lower sleep efficiency (P = 0.004). No other antihypertensive medication, including diuretics and beta-blockers, was associated with sleep impairment.
Conclusion: Calcium channel blockers may impact negatively on sleep duration in hypertensive patients with OSA. The mechanisms and significance of this novel finding warrants further investigation.
abstract_id: PUBMED:23752710
Antihypertensive drug class interactions and risk for incident diabetes: a nested case-control study. Background: We aimed to determine how single and combination antihypertensive therapy alters risk for diabetes mellitus (DM). Thiazide diuretics (TD), β blockers (BB), and renin-angiotensin system blockers (RASB) impact DM risk while calcium channel blockers (CCB) are neutral. DM risk associated with combinations is unclear.
Methods And Results: We enrolled nondiabetic patients from Kaiser Permanente Northwest with a fasting plasma glucose (FPG) <126 mg/dL between 1997 and 2010. DM cases were defined by a FPG ≥ 126 mg/dL, random plasma glucose ≥ 200 mg/dL, HbA1c ≥ 7.0%, or new DM prescription (index date). We used incidence density sampling to match 10 controls per case on the date of follow-up glucose test (to reduce detection bias), in addition to age and date of cohort entry. Exposure to antihypertensive class was assessed during the 30 days prior to index date. Our cohort contained 134 967 patients and had 412 604 glucose tests eligible for matching. A total of 9097 DM cases were matched to 90 495 controls (median age 51 years). Exposure to TD (OR 1.54, 95% CI 1.41 to 1.68) or BB (OR 1.19, 95% CI 1.11 to 1.28) was associated with an increased DM risk, while CCB and RASB exposure was not. TD+BB combination resulted in the fully combined diabetogenic risk of both agents (OR 1.99, 95% CI 1.80 to 2.20; interaction OR 1.09, 95% CI 0.97 to 1.22). In contrast, combination of RASB with either TD or BB showed significant negative interactions, resulting in a smaller DM risk than TD or BB monotherapy.
Conclusions: Diabetogenic potential of combination therapy should be considered when prescribing antihypertensive therapy.
abstract_id: PUBMED:28230330
Comparing six antihypertensive medication classes for preventing new-onset diabetes mellitus among hypertensive patients: a network meta-analysis. Hypertensive patients usually have a higher risk of new-onset diabetes mellitus (NOD) which may trigger cardiovascular diseases. In this study, the effectiveness of six antihypertensive agents with respect to NOD prevention in hypertensive patients was assessed. A network meta-analysis was conducted to compare the efficacy of specific drug classes. PubMed and Embase databases were searched for relevant articles. Results of the pairwise meta-analysis were illustrated by odds ratios (OR) and a corresponding 95% confidence interval (CI). The probabilities and outcome of each treatment were ranked and summarized using the surface under the cumulative ranking curve (SUCRA). Twenty-three trials were identified, including 224,832 patients with an average follow-up period of 3.9 ± 1.0 years. The network meta-analysis showed that patients treated by angiotensin II receptor blockers (ARBs) were associated with a lower risk of NOD compared to placebo (PCB), calcium channel blockers (CCBs) and β-blockers, while diuretics appeared to be ineffective for NOD prevention. Network meta-analysis results of specific drugs showed that enalapril exhibited distinct advantages and hydrochlorothiazide also exhibited a reliable performance. Our results suggested that both ARBs and angiotensin-converting enzyme inhibitors (ACEIs), especially candesartan and enalapril, were preferable for NOD prevention in hypertensive patients. Hydrochlorothiazide also exhibited a reliable performance in comparison with other agents.
abstract_id: PUBMED:16996875
Comparison of mortality rates in statin users versus nonstatin users in a United States veteran population. Statins have been shown to be effective in reducing cardiovascular events and overall mortality in primary and secondary prevention trials. This study was designed to examine the effect of statin use on overall death. Cross-sectional data were obtained from the Department of Veterans Affairs Veterans Integrated Service Network 16 database for approximately 1.5 million veterans followed up in 10 hospitals in the southern United States. Statins were prescribed more often to elderly subjects with a history of coronary artery disease, hypertension, diabetes mellitus, current smoking, and use of cardiovascular drugs (beta blockers, aspirin, angiotensin-converting enzyme inhibitors, and calcium channel blockers). The predictors of death were, as expected, cancer, diabetes mellitus, the use of cardiac drugs, and age. Importantly, using statins showed a highly significant negative association with death (odds ratio 0.54, 95% confidence interval 0.42 to 0.69, p <0.0001), even after adjustment for all other variables. Overall, the mean age at death among statin users was 2 years older than among nonstatin users, despite statin users being at a higher risk of death. In conclusion, the results of this study have shown that using statins is a potent life-saving strategy. The benefit observed in this study is unique because almost half of the patients were ≥70 years of age when statin therapy was initiated.
Answer: The Shock Index, the ratio of heart rate to systolic blood pressure, is a valid predictor of mortality in emergency department (ED) patients, including those with hypertension, diabetes, advanced age, or receipt of β- or calcium channel blockers. A study examining the association between Shock Index and 30-day mortality in a cohort of first-time ED visitors found that a Shock Index greater than or equal to 1 carried an adjusted odds ratio of roughly 10 for 30-day mortality and signaled substantial risk across all ED patients, regardless of these conditions (PUBMED:26144893). The association was, however, weakened (though not eliminated) in patients aged 65 years or older, those with hypertension, and those taking β- or calcium channel blockers, whereas diabetes did not appreciably change it (OR 9.3 in diabetic versus 10.8 in nondiabetic patients). Despite this effect modification, the Shock Index remained independently associated with 30-day mortality in a broad population of ED patients (PUBMED:26144893).
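For illustration only (the vital signs below are hypothetical and not drawn from the cited study), the shock index is computed directly from two routine measurements and compared against the cut-points discussed above:

\[ \mathrm{SI} = \frac{\text{heart rate (beats/min)}}{\text{systolic blood pressure (mm Hg)}}, \qquad \frac{112}{98} \approx 1.14 \]

A patient with a heart rate of 112 beats/min and a systolic pressure of 98 mm Hg therefore falls in the highest-risk stratum (SI ≥ 1), while the same heart rate with a systolic pressure of 170 mm Hg gives SI ≈ 0.66, below the 0.7 threshold.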
Instruction: Narcotic and benzodiazepine use after withdrawal of life support: association with time to death?
Abstracts:
abstract_id: PUBMED:15249473
Narcotic and benzodiazepine use after withdrawal of life support: association with time to death? Objective: To determine whether the dose of narcotics and benzodiazepines is associated with length of time from mechanical ventilation withdrawal to death in the setting of withdrawal of life-sustaining treatment in the ICU.
Design: Retrospective chart review.
Setting: University-affiliated, level I trauma center.
Patients: Consecutive critically ill patients who had mechanical ventilation withdrawn and subsequently died in the ICU during two study time periods.
Results: There were 75 eligible patients with a mean age of 59 years. The primary ICU admission diagnoses included intracranial hemorrhage (37%), trauma (27%), acute respiratory failure (27%), and acute renal failure (20%). Patients died during a median of 35 min (range, 1 to 890 min) after ventilator withdrawal. On average, 16.2 mg/h opiates in morphine equivalents and 7.5 mg/h benzodiazepine in lorazepam equivalents were administered during the time period starting 1 h before ventilator withdrawal and ending at death. There was no statistically significant relationship between the average hourly narcotic and benzodiazepine use during the 1-h period prior to ventilator withdrawal until death, and the time from ventilator withdrawal to death. The restriction of medication assessment in the last 2 h of life showed an inverse association between the use of benzodiazepines and time to death. For every 1 mg/h increase in benzodiazepine use, time to death was increased by 13 min (p = 0.015). There was no relationship between narcotic dose and time to death during the last 2 h of life (p = 0.11).
Conclusions: We found no evidence that the use of narcotics or benzodiazepines to treat discomfort after the withdrawal of life support hastens death in critically ill patients at our center. Clinicians should strive to control patient symptoms in this setting and should document the rationale for escalating drug doses.
abstract_id: PUBMED:25944573
Predicting time to death after withdrawal of life-sustaining therapy. Purpose: Predicting time to death following the withdrawal of life-sustaining therapy is difficult. Accurate predictions may better prepare families and improve the process of donation after circulatory death.
Methods: We systematically reviewed any predictive factors for time to death after withdrawal of life support therapy.
Results: Fifteen observational studies met our inclusion criteria. The primary outcome was time to death, which was evaluated to be within 60 min in the majority of studies (13/15). Additional time endpoints evaluated included time to death within 30, 120 min, and 10 h, respectively. While most studies evaluated risk factors associated with time to death, a few derived or validated prediction tools. Consistent predictors of time to death that were identified in five or more studies included the following risk factors: controlled ventilation, oxygenation, vasopressor use, Glasgow Coma Scale/Score, and brain stem reflexes. Seven unique prediction tools were derived, validated, or both across some of the studies. These tools, at best, had only moderate sensitivity to predicting the time to death. Simultaneous withdrawal of all support and physician opinion were only evaluated in more recent studies and demonstrated promising predictor capabilities.
Conclusions: While the risk factors controlled ventilation, oxygenation, vasopressors, level of consciousness, and brainstem reflexes have been most consistently found to be associated with time to death, the addition of novel predictors, such as physician opinion and simultaneous withdrawal of all support, warrant further investigation. The currently existing prediction tools are not highly sensitive. A more accurate and generalizable tool is needed to inform end-of-life care and enhance the predictions of donation after circulatory death eligibility.
abstract_id: PUBMED:35003959
Ketamine: A Potential Adjunct for Severe Benzodiazepine Withdrawal. Following the abrupt cessation of benzodiazepine therapy, patients can present with acute life-threatening withdrawal. Medical management of benzodiazepine withdrawal is typically undertaken with benzodiazepines, either through a loading dose with gradual taper or symptom-triggered treatment, though adjuvant anxiolytics and anticonvulsants are often used. Ketamine, increasingly utilized as an adjunct in the treatment of alcohol withdrawal, may represent an effective medication in the treatment of benzodiazepine withdrawal. In this case report, a 27-year-old male with a history of benzodiazepine and opioid abuse presented to our emergency department with a chief complaint of drug withdrawal. Despite standard treatment with large amounts of benzodiazepine, barbiturate, opioid, and adjunctive medications, the patient remained in severe withdrawal until an infusion of ketamine (0.5 mg/kg over 30 minutes) was administered, resulting in significant improvement of the patient's symptoms. This case demonstrates the potential role of ketamine as an adjunct medication in the treatment of benzodiazepine withdrawal.
abstract_id: PUBMED:31110855
Pediatric Withdrawal Identification and Management. Sedation administered by continuous intravenous infusion is commonly used in the pediatric intensive care unit to facilitate and maintain safe care of children during critical illness. Prolonged use of sedatives, including opioids, benzodiazepines, and potentially other adjunctive agents, is known to cause withdrawal symptoms when they are stopped abruptly or weaned quickly. In this review, the common signs and symptoms of opioid, benzodiazepine, and dexmedetomidine withdrawal will be discussed. Current tools used to measure withdrawal objectively, as well as withdrawal prevention and management strategies, will be discussed.
abstract_id: PUBMED:36760692
Enduring neurological sequelae of benzodiazepine use: an Internet survey. Introduction: Benzodiazepine tapering and cessation has been associated with diverse symptom constellations of varying duration. Although described in the literature decades ago, the mechanistic underpinnings of enduring symptoms that can last months or years have not yet been elucidated.
Objective: This secondary analysis of the results from an Internet survey sought to better understand the acute and protracted withdrawal symptoms associated with benzodiazepine use and discontinuation.
Methods: An online survey (n = 1207) was used to gather information about benzodiazepine use, including withdrawal syndrome and protracted symptoms.
Results: The mean number of withdrawal symptoms reported by a respondent in this survey was 15 out of 23 symptoms. Six percent of respondents reported having all 23 listed symptoms. The least frequently reported symptoms (whole-body trembling, hallucinations, seizures) were also those most frequently reported as lasting only days or weeks, that is, short-duration symptoms. Symptoms of nervousness/anxiety/fear, sleep disturbances, low energy, and difficulty focusing/distractedness were experienced by the majority of respondents (⩾85%) and, along with memory loss, were the symptoms of longest duration. Prolonged symptoms of anxiety and insomnia occurred in many who had discontinued benzodiazepines, including over 50% who were not originally prescribed benzodiazepines for that indication. It remains unclear whether these symptoms might be caused by neuroadaptive and/or neurotoxic changes induced by benzodiazepine exposure. In this way, benzodiazepine withdrawal may have acute and long-term symptoms attributable to different underlying mechanisms, as is the case with alcohol withdrawal.
Conclusions: These findings tentatively support the notion that symptoms which are acute but transient during benzodiazepine tapering and discontinuation may be distinct in their nature and duration from the enduring symptoms experienced by many benzodiazepine users.
abstract_id: PUBMED:30295409
Withdrawal from long-term use of zopiclone, zolpidem and temazepam may improve perceived sleep and quality of life in older adults with primary insomnia. Long-term use of benzodiazepines or benzodiazepine receptor agonists is widespread, although guidelines recommend short-term use. Only a few controlled studies have characterized the effect of discontinuation of their chronic use on sleep and quality of life. We studied perceived sleep and quality of life in 92 older (age 55-91 years) outpatients with primary insomnia before and after withdrawal from long-term use of zopiclone, zolpidem or temazepam (BZDA). BZDA was withdrawn during 1 month, during which the participants received psychosocial support and blindly melatonin or placebo. A questionnaire was used to study perceived sleep and quality of life before withdrawal, and 1 month and 6 months later. 89 participants completed the 6-month follow-up. As melatonin did not improve withdrawal, all participants were pooled and then separated based solely on the withdrawal results at 6 months (34 Withdrawers, 55 Nonwithdrawers) for this secondary analysis. At 6 months, the Withdrawers had significantly (P < 0.05) shorter sleep-onset latency and less difficulty in initiating sleep than at baseline and when compared to Nonwithdrawers. Compared to baseline, both Withdrawers and Nonwithdrawers had at 6 months significantly (P < 0.05) less fatigue during the morning and daytime. Stress was alleviated more in Withdrawers than in Nonwithdrawers (P < 0.05). Satisfaction with life and expected health 1 year later improved (P < 0.05) in Withdrawers. In conclusion, sleep disturbances, daytime fatigue and impaired quality of life may resolve within 6 months of BZDA withdrawal. These results encourage withdrawal from chronic use of benzodiazepine-type hypnotics, particularly in older subjects.
abstract_id: PUBMED:3234249
Benzodiazepine dependence--aetiological factors, time course, consequences and withdrawal symptomatology: a study of five cases. The development and time course of benzodiazepine (BZD) dependence is reported for five case histories. The underlying psychiatric disorders, life-events as potential initiators of BZD use/abuse and psychosocial consequences are discussed. The abstinence symptoms appearing during a course of standardized withdrawal therapy are described in detail. The case reports clearly demonstrate the chronic nature of the development of BZD dependence and of the tendency to increase the dosage which may occur only after years of intake and the gradual appearance of negative effects of chronic BZD intake.
abstract_id: PUBMED:28577506
High enhancer, downer, withdrawal helper: Multifunctional nonmedical benzodiazepine use among young adult opioid users in New York City. Background: Benzodiazepines are a widely prescribed psychoactive drug; in the U.S., both medical and nonmedical use of benzodiazepines has increased markedly in the past 15 years. Long-term use can lead to tolerance and dependence, and abrupt withdrawal can cause seizures or other life-threatening symptoms. Benzodiazepines are often used nonmedically in conjunction with other drugs, and with opioids in particular-a combination that can increase the risk for fatal and non-fatal overdose. This mixed-methods study examines nonmedical use of benzodiazepines among young adults in New York City and its relationship with opioid use.
Methods: For qualitative analysis, 46 90-minute semi-structured interviews were conducted with young adult opioid users (ages 18-32). Interviews were transcribed and coded for key themes. For quantitative analysis, 464 young adult opioid users (ages 18-29) were recruited using Respondent-Driven Sampling and completed structured interviews. Benzodiazepine use was assessed via a self-report questionnaire that included measures related to nonmedical benzodiazepine and opioid use.
Results: Participants reported using benzodiazepines nonmedically for a wide variety of reasons, including: to increase the high of other drugs; to lessen withdrawal symptoms; and to come down from other drugs. Benzodiazepines were described as readily available and cheap. There was a high prevalence (93%) of nonmedical benzodiazepine use among nonmedical opioid users, with 57% reporting regular nonmedical use. In bivariate analyses, drug-related risk behaviours such as polysubstance use, drug binging, heroin injection and overdose were strongly associated with regular nonmedical benzodiazepine use. In multivariate analysis, growing up in a middle-income household (earning between $51,000 and $100,000 annually), lifetime overdose experience, having ever used cocaine regularly, having ever been prescribed benzodiazepines, recent drug binging, and encouraging fellow drug users to use benzodiazepines to cope with opioid withdrawal were consistently strong predictors of regular nonmedical benzodiazepine use.
Conclusion: Nonmedical benzodiazepine use may be common among nonmedical opioid users due to its drug-related multi-functionality. Harm reduction messages should account for the multiple functions benzodiazepines serve in a drug-using context, and encourage drug users to tailor their endorsement of benzodiazepines to peers to include safer alternatives.
abstract_id: PUBMED:15236807
Symptom severity and quality of life after benzodiazepine withdrawal treatment in participants with complicated dependence. The aims of the present study were to assess changes in psychopathology and quality of life after withdrawal treatment in participants with benzodiazepine dependence that was in most cases complicated by harmful and hazardous alcohol use or high benzodiazepine doses. Seventy-six participants with benzodiazepine dependence (DSM-III-R) who participated in a randomized clinical trial of two different gradual withdrawal treatment approaches were initially assessed by Symptom Checklist-90 (SCL-90), visual analogue scales (VASs), and the Health-Related Quality of Life battery (HRQOL). The assessments were repeated after treatment ended and again after a follow-up averaging 11 months. During the study, all measurements for the participants with clinically significant (over 50%) benzodiazepine-dose decreases improved more than those for the participants with smaller decreases, and differences in the HRQOL energy/vitality, home management, and life satisfaction scores were significant. Our data indicate that in participants with complicated benzodiazepine dependence, clinically significant dose decreases are associated with improvements in their self-rated quality of life.
abstract_id: PUBMED:21815323
Benzodiazepine withdrawal seizures and management. Since the first report of benzodiazepine withdrawal seizure in 1961, many case reports have followed. Withdrawal seizures have occurred with short-, medium-, and long-half-life benzodiazepines when discontinued abruptly. Withdrawal seizures usually occur in patients who have been taking these medications for long periods of time and at high doses. Seizures have also been reported with less than 15 days of use and at therapeutic dosage. Almost all the withdrawal seizures reported were grand mal seizures. The severity of seizures ranges from a single episode to coma and death. Benzodiazepine dose tapering can be done faster in a hospital setting in high-dose abusers, but must be done more slowly in the outpatient setting in therapeutic dosage users.
Answer: The study described in abstract PUBMED:15249473 investigated whether the dose of narcotics and benzodiazepines is associated with the length of time from mechanical ventilation withdrawal to death in the ICU setting after the withdrawal of life-sustaining treatment. The retrospective chart review included 75 patients who had mechanical ventilation withdrawn and subsequently died in the ICU. The results showed that there was no statistically significant relationship between the average hourly narcotic and benzodiazepine use during the 1-hour period prior to ventilator withdrawal until death and the time from ventilator withdrawal to death. However, when focusing on the last 2 hours of life, there was an inverse association between the use of benzodiazepines and time to death, with every 1 mg/h increase in benzodiazepine use increasing the time to death by 13 minutes (p = 0.015). There was no relationship between narcotic dose and time to death during the last 2 hours of life (p = 0.11). The conclusion of the study was that there was no evidence that the use of narcotics or benzodiazepines to treat discomfort after the withdrawal of life support hastens death in critically ill patients at the center where the study was conducted. Clinicians are encouraged to control patient symptoms in this setting and document the rationale for escalating drug doses. |
Instruction: Do elder emergency department patients and their informants agree about the elder's functioning?
Abstracts:
abstract_id: PUBMED:37798065
Elder Mistreatment: Emergency Department Recognition and Management. Elder mistreatment is experienced by 5% to 15% of community-dwelling older adults each year. An emergency department (ED) encounter offers an important opportunity to identify elder mistreatment and initiate intervention. Strategies to improve detection of elder mistreatment include identifying high-risk patients; recognizing suggestive findings from the history, physical examination, imaging, and laboratory tests; and/or using screening tools. ED management of elder mistreatment includes addressing acute issues, maximizing the patient's safety, and reporting to the authorities when appropriate.
abstract_id: PUBMED:30031426
Identifying and Initiating Intervention for Elder Abuse and Neglect in the Emergency Department. Elder abuse and neglect are common and may have serious medical and social consequences but are infrequently identified. An emergency department (ED) visit represents a unique but usually missed opportunity to identify potential abuse and initiate intervention. ED assessment should include observation of patient-caregiver interaction, comprehensive medical history, and head-to-toe physical examination. Formal screening protocols may also be useful. ED providers concerned about elder abuse or neglect should document their findings in detail. ED interventions for suspected or confirmed elder abuse or neglect include treatment of acute medical, traumatic, and psychological issues; ensuring patient safety; and reporting to the authorities.
abstract_id: PUBMED:35860986
Vulnerable Elder Protection Team: Initial experience of an emergency department-based interdisciplinary elder abuse program. Background: An emergency department (ED) visit provides a unique opportunity to identify elder abuse and initiate intervention, but emergency providers rarely do. To address this, we developed the Vulnerable Elder Protection Team (VEPT), an ED-based interdisciplinary consultation service. We describe our initial experience in the first two years after the program launch.
Methods: We launched VEPT in a large, urban, academic ED/hospital. From 4/3/17 to 4/2/19, we tracked VEPT activations, including patient characteristics, assessment, and interventions. We compared VEPT activations to frequency of elder abuse identification in the ED before VEPT launch. We examined outcomes for patients evaluated by VEPT, including change in living situation at discharge. We assessed ED providers' experiences with VEPT via written surveys and focus groups.
Results: During the program's initial two years, VEPT was activated and provided consultation/care to 200 ED patients. Cases included physical abuse (59%), neglect (56%), financial exploitation (32%), verbal/emotional/psychological abuse (25%), and sexual abuse (2%). Sixty-two percent of patients assessed were determined by VEPT to have high or moderate suspicion for elder abuse. Seventy-five percent of these patients had a change in living/housing situation or were discharged with new or additional home services, with 14% discharged to an elder abuse shelter, 39% to a different living/housing situation, and 22% with new or additional home services. ED providers reported that VEPT made them more likely to consider/assess for elder abuse and recognized the value of the expertise and guidance VEPT provided. Ninety-four percent reported believing that there is merit in establishing a VEPT Program in other EDs.
Conclusion: VEPT was frequently activated and many patients were discharged with changes in living situation and/or additional home services, which may improve safety. Future research is needed to examine longer-term outcomes.
abstract_id: PUBMED:11435187
Do elder emergency department patients and their informants agree about the elder's functioning? Objective: To compare elder patients' and their informants' ratings of the elder's physical and mental function measured by a standard instrument, the Medical Outcomes Study Short Form 12 (SF-12).
Methods: This was a randomized, cross-sectional study conducted at a university-affiliated community teaching hospital emergency department (census 65,000/year). Patients >69 years old, arriving on weekdays between 10 AM and 7 PM, able to engage in English conversation, and consenting to participate were eligible. Patients too ill to participate were excluded. Informants were people who accompanied and knew the patient. Elder patients were randomized 1:1 to receive an interview or questionnaire version of the SF-12. The questionnaire was read to people unable to read. Two trained medical students administered the instrument. The SF-12 algorithm was used to calculate physical (PCS) and mental (MCS) component scores. Oral and written versions were compared using analysis of variance. The PCS and MCS scores between patient-informant pairs were compared with a matched t-test. Alpha was 0.05.
Results: One hundred six patients and 55 informants were enrolled. The patients' average (±SD) age was 77 ± 5 years; 59 (56%; 95% CI = 46% to 65%) were women. There was no significant difference for mode of administration in PCS (p = 0.53) or MCS (p = 0.14) scores. Patients rated themselves higher on physical function than did their proxies. There was a 4.1 (95% CI = 0.99 to 7.2) point difference between patients' and their proxies' physical component scores (p = 0.01). Scores on the mental component were quite similar. The mean difference between patients and proxies was 0.49 (95% CI = -3.17 to 4.16). The half point higher rating by patients was not statistically significant (p = 0.79).
Conclusions: Elders' self-ratings of physical function were higher than those of proxies who knew them. There was no difference in mental function ratings between patients and their proxies. Switching from informants' to patients' reports in evaluating elders' physical function in longitudinal studies may introduce error.
abstract_id: PUBMED:22981420
Elder abuse in the emergency department. Elder abuse is an important challenge in global societies. Detection of and intervention in elder abuse is crucial to the well-being of older people. Older people are high consumers of health care services and the consequences of elder abuse may provide a catalyst to attendance in the emergency department. This paper considers the topic of elder abuse and examines issues pertaining to understandings, recognition, screening and care in the emergency department environment.
abstract_id: PUBMED:11015061
Elder neglect assessment in the emergency department. Introduction: Emergency departments are often the first point of contact for elder neglect victims. The purpose of this article is to describe a pilot study pertaining to the screening of patients and detection of elder neglect conducted in a large metropolitan medical center emergency department. The research question to be answered was, "Is it feasible for ED nurses to conduct accurate screening protocols for elder neglect in the context of their busy practice?"
Methods: During a 3-week period, 180 patients older than age 70 years (90% of all possible elderly patients during the screening hours) were screened to determine if they met the study criteria and could be enrolled into the protocol.
Results: Thirty-six patients met the eligibility criteria to enroll in the study, and 7 patients screened positive for neglect by a home caregiver. The nurses were able to screen and detect elder neglect with more than 70% accuracy, confirming the research question. The true-positive rate was 71%, and the false-positive rate was 7%.
Discussion: Elder neglect protocols are feasible in busy emergency departments, and neglect can be accurately detected in the emergency department when screening procedures are in place.
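As a point of reference for the accuracy figures above, the true-positive and false-positive rates are defined in the usual way (this is standard terminology, not a calculation reported in the abstract):

\[ \mathrm{TPR} = \frac{TP}{TP+FN}, \qquad \mathrm{FPR} = \frac{FP}{FP+TN} \]

so a 71% true-positive rate means that 71% of patients who were actually neglected screened positive, and a 7% false-positive rate means that 7% of those not neglected were incorrectly flagged.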
abstract_id: PUBMED:33863468
Elder Abuse-A Guide to Diagnosis and Management in the Emergency Department. Elder abuse affects many older adults and can be life threatening. Older adults both in the community and long-term care facilities are at risk. An emergency department visit is an opportunity for an abuse victim to seek help. Emergency clinicians should be able to recognize the signs of abuse, including patterns of injury consistent with mistreatment. Screening tools can assist clinicians in the diagnosis of abuse. Physicians can help victims of mistreatment by reporting the abuse to the appropriate investigative agency and by developing a treatment plan with a multidisciplinary team to include a safe discharge plan and close follow-up.
abstract_id: PUBMED:31984415
Developing the Emergency Department Elder Mistreatment Assessment Tool for Social Workers Using a Modified Delphi Technique. Elder mistreatment is common and has serious consequences. The emergency department (ED) may provide a unique opportunity to detect this mistreatment, with social workers often asked to take the lead in assessment and intervention. Despite this, social workers may feel ill-equipped to conduct assessments for potential mistreatment, due in part to a lack of education and training. As a result, the authors created the Emergency Department Elder Mistreatment Assessment Tool for Social Workers (ED-EMATS) using a multiphase, modified Delphi technique with a national group of experts. This tool consists of both an initial and comprehensive component, with 11 and 17 items, respectively. To our knowledge, this represents the first elder abuse assessment tool for social workers designed specifically for use in the ED. The hope is that the ED-EMATS will increase the confidence of ED social workers in assessing for elder mistreatment and help ensure standardization between professionals.
abstract_id: PUBMED:10516843
Management of elder abuse in the emergency department. Emergency physicians are in an ideal position to diagnose and intervene in suspected cases of elder abuse. This article reviews domestic violence against the elderly, highlighting the risk factors for elder abuse, its prevalence, and related medicolegal issues. A special section on mistreatment of the elderly in long-term care facilities is included. In addition to the management of elderly abuse in the emergency department, possible future directions for improved detection of elder abuse or neglect are reviewed.
abstract_id: PUBMED:7627608
Elder abuse and neglect: issues in the accident and emergency department. Recognition and identification of elder abuse and neglect within the Accident and Emergency (A & E) department currently appears less evident than recognition of child abuse and domestic violence. As the population of older people increases, it is likely that there will be an increased usage of the A & E department by older people, some of whom may have been subject to abuse or neglect. Elder abuse and neglect constitute potentially serious health and social problems. The need for the A & E nurse to identify and respond to elder abuse and neglect is therefore a professional responsibility. The nursing profession is clearly in a position to contribute to the recognition of elder abuse and neglect, and A & E nurses are in a unique position to assist this process. This paper seeks to assist nurses with this task by arguing for a 'higher index of suspicion', allied to a sound knowledge base of the circumstances surrounding elder abuse, signs and symptoms and known risk factors. Finally, practical suggestions are offered to assist in the assessment and intervention with suspected or known cases of elder abuse and neglect.
Answer: Elder emergency department patients and their informants do not always agree about the elder's functioning. A study comparing elder patients' and their informants' ratings of the elder's physical and mental function using the Medical Outcomes Study Short Form 12 (SF-12) found that elders rated themselves higher on physical function than their informants did, a statistically significant difference of 4.1 points between patients' and their proxies' physical component scores. For mental function ratings, however, the scores were quite similar between patients and their proxies, with no statistically significant difference (PUBMED:11435187). |
Instruction: Gliding resistance of the flexor pollicis longus tendon after repair: does partial excision of the oblique pulley affect gliding resistance?
Abstracts:
abstract_id: PUBMED:17051113
Gliding resistance of the flexor pollicis longus tendon after repair: does partial excision of the oblique pulley affect gliding resistance? Background: After tendon repair, it is critical for the repair site to pass smoothly under the pulley edge to promote gliding and reduce the risk of adhesion or rupture. In this study, the authors assessed the effect of partial excision by "squaring off" the distal edge of the oblique pulley on the gliding resistance of the flexor pollicis longus tendon after repair in vitro.
Methods: Gliding resistance of 10 human thumbs was measured directly with three different sequential conditions: intact flexor pollicis longus tendon with intact A1 and oblique pulleys (group A), intact pulleys after repair of the tendon (group B), and after repair and excision of the distal triangular part (squaring off) of the oblique pulley (group C).
Results: Gliding resistance increased significantly after repair and squaring off the oblique pulley (group A, 0.22 +/- 0.08 N; group B, 1.29 +/- 0.68 N; and group C, 2.01 +/- 0.84 N).
Conclusions: Previous studies suggest that the trimming of an annular pulley in the finger would not result in any significant mechanical disadvantage if other parts of the pulley system were intact. However, the authors' results suggest that in the case of the thumb oblique pulley, gliding resistance is increased after trimming and tendon repair, and thus the oblique pulley should be left intact if possible.
abstract_id: PUBMED:35523637
The Effect of Flexor Carpi Ulnaris Pulley Design on Tendon Gliding Resistance After Flexor Digitorum Superficialis Tendon Transfer for Opposition Transfer. Purpose: The flexor digitorum superficialis (FDS) tendon transfer can be used to restore opposition of the thumb. Several pulley designs have been proposed for this transfer. Gliding resistance is considered to be an important factor influencing the efficiency of the pulley design. Our purpose was to compare the gliding resistance among 4 commonly used pulleys for the FDS oppositional transfer.
Methods: Ten fresh-frozen cadaver specimens were studied. The ring FDS was used as the donor tendon. An oppositional transfer was created using 4 pulley configurations: FDS passed around the flexor carpi ulnaris (a-FCU), FDS passed through a 2.5-cm circumference distally based FCU loop (2.5-FCU), FDS passed through a 3.5-cm circumference distally based FCU loop (3.5-FCU), and FDS passed through a longitudinal split in the FCU tendon (s-FCU). The gliding resistance was measured with the thumb in radial abduction and maximum opposition.
Results: In abduction, the average FDS gliding resistance of a-FCU, 2.5-FCU, 3.5-FCU, and s-FCU was 0.66 N (SD, 0.14 N), 0.70 N (SD, 0.14 N), 0.68 N (SD, 0.16 N), and 0.79 N (SD, 0.15 N), respectively. The peak gliding resistance of a-FCU, 2.5-FCU, 3.5-FCU, and s-FCU was 0.75 N (SD, 0.16 N), 0.74 N (SD, 0.15 N), 0.74 N (SD, 0.15 N), and 0.86 N (SD, 0.15 N), respectively.
Conclusions: The average gliding resistance of the s-FCU was found to be significantly higher than that of the a-FCU and 3.5-FCU pulleys. In opposition, there were no differences in average or peak gliding resistance among the different pulley designs.
Clinical Relevance: In this in vitro cadaveric study, the FDS split pulley produced higher gliding resistance. Consideration of the pulley configuration may improve the overall thumb function by decreasing forces needed to overcome gliding resistance.
abstract_id: PUBMED:35674265
A Biomechanical Comparison of Gliding Resistance between Modified Lim Tsai and Asymmetric Tendon Repair Techniques in Zone II Flexor Tendon Repairs. Background: Early active motion protocols have shown better functional outcomes in zone II flexor tendon lacerations. Different techniques of tendon repair have different effects on gliding resistance, which can impact tendon excursion and adhesion formation. For successful initiation of early active mobilisation, the repair technique should have high breaking strength and low gliding resistance. Previous studies have shown the Modified Lim-Tsai technique demonstrates these characteristics. The Asymmetric repair has also shown superior ultimate tensile strength. This study aims to compare the gliding resistance between the two techniques. Methods: FDP tendons from ten fresh frozen cadaveric fingers were randomly divided into two groups, transected completely distal to the sheath of the A2 pulley and repaired using either the Modified Lim-Tsai or Asymmetric technique. The core repair was performed with Supramid 4-0 looped sutures and circumferential epitendinous sutures were done with nylon monofilament Prolene 6-0 sutures. The gliding resistance and ultimate tensile strength were then tested. Results: The gliding resistance of the Asymmetric and Modified Lim-Tsai repair techniques were 0.2 and 0.95 N respectively. This difference was significant (p = 0.008). The Modified Lim-Tsai technique had a higher ultimate tensile strength and load to 2 mm gap formation, though this was not significant. Conclusions: Gliding resistance of the Asymmetric repair is significantly less than that of Modified Lim-Tsai. Ultimate tensile strength and load to 2 mm gap formation are comparable.
abstract_id: PUBMED:22763054
The effects of oblique or transverse partial excision of the A2 pulley on gliding resistance during cyclic motion following zone II flexor digitorum profundus repair in a cadaveric model. Purpose: To compare the gliding resistance of flexor tendons after oblique versus transverse partial excision of the A2 pulley in a human cadaveric model, to determine the effect of the angle of pulley trimming.
Methods: We obtained 36 human flexor digitorum profundus tendons from the index through the little finger and repaired them with a modified Massachusetts General Hospital suture using 4-0 FiberWire. We repaired all tendons with a similar epitendinous stitch. We randomly assigned the tendons to 1 of 3 groups: intact pulley, transverse partial excision, or oblique partial excision. We measured peak and normalized peak gliding resistance between the repairs and the A2 pulley during 1,000 cycles of simulated motion.
Results: There was no significant difference in the peak or normalized peak gliding resistance at any cycle among the 3 groups.
Conclusions: Both transverse and oblique trimming of the A2 pulley had similar effects on the peak and normalized gliding resistance after flexor tendon repair.
Clinical Relevance: When partial pulley resection is needed after flexor tendon repair, the transverse or oblique trimming of pulley edge does not affect repaired tendon gliding resistance.
abstract_id: PUBMED:28501340
Gliding Resistance After Epitendinous-First Repair of Flexor Digitorum Profundus in Zone II. Purpose: The importance of flexor tendon repair with both core and epitendinous suture placement has been well established. The objective of this study was to determine whether suture placement order affects gliding resistance and bunching in flexor digitorum profundus tendons in a human ex vivo model.
Methods: The flexor digitorum profundus tendons of the index, middle, ring, and little fingers of paired cadaver forearms were tested intact for excursion and mean gliding resistance in flexion and extension across the A2 pulley. Tendons were subsequently transected and repaired with either an epitendinous-first (n = 12) or a control (n = 12) repair. Gliding resistance of pair-matched tendons were analyzed at cycle 1 and during the steady state of tendon motion. The tendon repair breaking strength was also measured.
Results: The mean steady state gliding resistance was less for the epitendinous-first repair than for the control repair in flexion (0.61 N vs 0.72 N) and significantly less in extension (0.68 N vs 0.85 N). Similar results were seen for cycle 1. None of the repairs demonstrated gap formation; however, control repairs exhibited increased bunching. Load to failure was similar for both groups.
Conclusions: The order of suture placement for flexor tendon repair is important. Epitendinous-first repair significantly decreased mean gliding resistance, allowed for easier placement of core sutures, and resulted in decreased bunching.
Clinical Relevance: Epitendinous-first flexor tendon repairs may contribute to improved clinical outcomes compared with control repairs by decreasing gliding resistance and bunching.
abstract_id: PUBMED:9052543
Gliding resistance of extrasynovial and intrasynovial tendons through the A2 pulley. The gliding ability of the flexor digitorum profundus tendon and of the palmaris longus tendon through the A2 pulley was compared, in terms of gliding resistance, with use of a system that we developed. Fourteen digits and the ipsilateral palmaris longus tendons from fourteen cadavera were used. The average gliding resistance at the interface between the palmaris longus tendon and the A2 pulley was found to be greater than that between the flexor digitorum profundus tendon and the A2 pulley under similar loading conditions. We concluded that the gliding ability of the palmaris longus tendon was inferior to that of the flexor digitorum profundus tendon in vitro.
abstract_id: PUBMED:31274195
Identification of the retrotalar pulley of the Flexor Hallucis Longus tendon. Functional Hallux Limitus is the expression of the gliding restraint of the Flexor Hallucis Longus (Fhl) tendon, resulting in several painful syndromes. This impingement is located along the tract of the Fhl tendon at the level of its retrotalar tunnel, which is sealed posteriorly by a fibrous pulley. Although this pulley is poorly characterized anatomically, arthroscopy has shown that its presence or resection plays a pivotal clinical role in the biomechanics of the lower leg, as it is the main restraint to the physiological movement of the Fhl tendon. The aim of our study was to identify and characterize this anatomical structure. Eleven cadaveric lower legs were initially assessed by computed tomography (CT) imaging, then plastinated, dissected, and histologically evaluated using Mayer's hematoxylin stain. We have shown that the retrotalar pulley of the Fhl shares the same histological characteristics with the retinaculum of the long fibularis muscle and the retinaculum of the flexor digitorum muscle, and thus constitutes an entity distinct from the adjacent formations.
abstract_id: PUBMED:25282719
Release of the A4 pulley to facilitate zone II flexor tendon repair. During primary or delayed primary repair of the flexor digitorum profundus tendon, surgeons often face difficulty in passing the retracted tendon or repaired tendon under the dense, fibrous A4 pulley. The A4 pulley is the narrowest part of the flexor sheath, proximal to the terminal tendon. Disrupted tendon ends (or surgically repaired tendons) are usually swollen, making passage of the tendons under this pulley difficult or even impossible. During tendon repair in the A4 pulley area, when the trauma is in the middle part of the middle phalanx and the A3 pulley is intact, the A4 pulley can be vented entirely to accommodate surgical repair and facilitate gliding of the repaired tendon after surgery. Venting the pulley does not disturb tendon function when the other major pulleys are intact and when the venting of the A4 pulley and adjacent sheath is limited to the middle half of the middle phalanx. Such venting is easily achieved through a palmar midline or lateral incision of the A4 pulley and its adjacent distal and/or proximal sheath, which helps ensure a more predictable recovery of digital flexion and extension.
abstract_id: PUBMED:23849733
The effect of surface modification on gliding ability of decellularized flexor tendon in a canine model in vitro. Purpose: To investigate the gliding ability and mechanical properties of decellularized intrasynovial tendons with and without surface modification designed to reduce gliding resistance.
Methods: We randomly assigned 33 canine flexor digitorum profundus tendons to 1 of 3 groups: untreated fresh tendons, to serve as a control; tendons decellularized with trypsin and Triton X-100; and tendons decellularized as in group 2 with surface modification using carbodiimide-derivatized hyaluronic acid and gelatin (cd-HA-gelatin). Tendons were subjected to cyclic friction testing for 1,000 cycles with subsequent tensile stiffness testing. We qualitatively evaluated the surface roughness after 1,000 cycles using scanning electron microscopy.
Results: The gliding resistance of the decellularized group was significantly higher than that of both the control and cd-HA-gelatin tendons (0.20, 0.09, and 0.11 N after the first cycle; and 0.41, 0.09, and 0.14 N after 1,000 cycles, respectively). Gliding resistance between the control and cd-HA-gelatin groups was not significantly different. The Young modulus was not significantly different between groups. The surfaces of the control and cd-HA-gelatin-treated tendons appeared smooth after 1,000 cycles, whereas those of the decellularized tendons appeared roughened under scanning electron microscopy observation.
Conclusions: Decellularization with trypsin and Triton X-100 did not change tendon stiffness. However, although this treatment was effective in removing cells, it adversely altered the tendon surface in both appearance and gliding resistance. Surface modification with cd-HA-gelatin improved the tendon surface smoothness and significantly decreased the gliding resistance.
Clinical Relevance: The combination of decellularization and surface modification may improve the function of tendon allografts when used clinically.
abstract_id: PUBMED:33732858
Flexor pollicis longus zone 2 tendon repair under WALANT with ultrasound assistance. A feature of flexor pollicis longus tendon injury is the frequency of palmar retraction, such that a wrist approach is needed for retrieval of the proximal stump. We report our first outpatient zone T2 flexor pollicis longus tendon repair under WALANT with ultrasound guidance. In the event of zone T2 flexor pollicis longus rupture, ultrasound location of the tendon extremities is used to plan the surgical WALANT strategy and to guide the injection of lidocaine with epinephrine whilst limiting the injected volume to what is strictly necessary.
Answer: Yes, partial excision of the oblique pulley does affect the gliding resistance of the flexor pollicis longus tendon after repair. A study that assessed the effect of partial excision by "squaring off" the distal edge of the oblique pulley on the gliding resistance of the flexor pollicis longus tendon after repair in vitro found that gliding resistance increased significantly after repair and squaring off the oblique pulley. The measurements were taken under three different sequential conditions: intact tendon with intact A1 and oblique pulleys, intact pulleys after tendon repair, and after repair and excision of the distal triangular part of the oblique pulley. The results showed that gliding resistance was 0.22 ± 0.08 N for the intact tendon and pulleys, 1.29 ± 0.68 N after tendon repair, and 2.01 ± 0.84 N after repair and partial excision of the oblique pulley. This suggests that the oblique pulley should be left intact if possible to promote smoother gliding of the repaired tendon (PUBMED:17051113). |
Instruction: Does Greater Body Mass Index Increase the Risk for Revision Procedures Following a Single-Level Minimally Invasive Lumbar Discectomy?
Abstracts:
abstract_id: PUBMED:27128255
Does Greater Body Mass Index Increase the Risk for Revision Procedures Following a Single-Level Minimally Invasive Lumbar Discectomy? Study Design: Retrospective analysis of a prospectively maintained surgical registry.
Objective: To examine the association between body mass index (BMI) and the risk for undergoing a revision procedure following a single-level minimally invasive (MIS) lumbar discectomy (LD).
Summary Of Background Data: Studies conflict as to whether greater BMI contributes to recurrent herniation and the need for revision procedures following LD. Patients and surgeons would benefit from knowing whether greater BMI is a risk factor to guide the decision whether to pursue an operative versus non-operative treatment.
Methods: Patients undergoing a single-level MIS LD were retrospectively identified in our institution's prospectively maintained surgical registry. BMI was categorized as normal weight (<25 kg/m²), overweight (25-30 kg/m²), obese (30-40 kg/m²), or morbidly obese (≥40 kg/m²). Multivariate analysis was used to test for association with undergoing a revision procedure during the first 2 postoperative years. The model was adjusted for demographics, comorbidities, and operative level.
Results: A total of 226 patients were identified. Of these, 56 (24.8%) were normal weight, 80 (35.4%) were overweight, 66 (29.2%) were obese, and 24 (10.6%) were morbidly obese. A total of 23 patients (10.2%) underwent a revision procedure in the first 2 postoperative years. The 2-year risk for revision procedure was 1.8% for normal weight patients, 12.5% for overweight patients, 9.1% for obese patients, and 25.0% for morbidly obese patients. In the multivariate-adjusted analysis model, BMI category was independently associated with undergoing a revision procedure (P = 0.038).
Conclusion: These findings indicate that greater BMI is an independent risk factor for undergoing a revision procedure following a LD. These findings conflict with recent studies that have found no difference between obese and non-obese patients in regards to risk for recurrent herniation and/or revision procedures. Patients with greater BMI undergoing LD should be informed they could have an elevated risk for revision procedures.
Level Of Evidence: 4.
abstract_id: PUBMED:29456910
Comparing the Incidence of Index Level Fusion Following Minimally Invasive Versus Open Lumbar Microdiscectomy. Study Design: Retrospective cohort study.
Objectives: To determine the incidence of index level fusion following open or minimally invasive lumbar microdiscectomy.
Methods: We conducted a retrospective review of 174 patients with a symptomatic single-level lumbar herniated nucleus pulposus who underwent microdiscectomy via a mini-open approach (39) or through a minimally invasive dilator tube (MIS; 135). Outcomes of interest included revision microdiscectomy and the ultimate need for index level fusion. Continuous variables were analyzed with independent sample t test, and χ2 analysis was used for categorical data. A multivariate regression analysis was performed to identify predictive factors for patients that required index level fusion after lumbar microdiscectomy.
Results: There was no difference in patient demographics between the open and MIS groups aside from length of follow-up (60.4 vs 40.03 months, P < .0001) and body mass index (24.72 vs 27.21, P = .03). The rate of revision microdiscectomy did not differ significantly between the open and MIS approaches (10.3% vs 10.4%, P = .90). The rate of patients who ultimately required index level fusion approached significance but did not differ statistically between the open and MIS approaches (10.3% vs 4.4%, P = .17). Multivariate regression analysis indicated that the need for eventual index level fusion after lumbar microdiscectomy was statistically predicted by smoking and a history of revision microdiscectomy (P < .05) in both the open and MIS groups.
Conclusions: Our results suggest a low likelihood of patients ultimately requiring fusion following microdiscectomy with predictors including smoking status and a history of revision microdiscectomy.
abstract_id: PUBMED:28538081
Is Body Mass Index a Risk Factor for Revision Procedures After Minimally Invasive Transforaminal Lumbar Interbody Fusion? Study Design: Retrospective cohort study.
Objective: To determine if an association exists between body mass index (BMI) and the rate of revision surgery after single-level minimally invasive transforaminal lumbar interbody fusion (MIS TLIF).
Summary Of Background Data: MIS TLIF is an effective treatment for lumbar degenerative disease. Previous studies in the orthopedic literature have associated increased BMI with increased postoperative complications and need for revision. Few studies have evaluated the association between BMI and the risk for revision after minimally invasive spinal procedures.
Materials And Methods: A surgical registry of patients who underwent a single-level MIS TLIF for degenerative pathology between 2005 and 2014 was reviewed. Patients were stratified based on BMI category: normal weight (BMI<25), overweight (BMI, 25-29.9), obese I (BMI, 30-34.9), and obese II-III (BMI≥35). BMI category was tested for association with demographic and procedural characteristics using 1-way analysis of variance (ANOVA) for continuous variables, and χ² analysis or the Fisher exact test for categorical variables. BMI category was tested for association with undergoing a revision fusion procedure within 2 years after MIS TLIF using multivariate Cox proportional hazards survival analysis modeling.
Results: In total, 274 patients were analyzed; of these, 52 (18.98%) were normal weight, 101 (36.86%) were overweight, 62 (22.63%) were obese I, and 59 (21.53%) were obese II-III. On multivariate Cox proportional hazards survival analysis modeling, BMI category was not associated with undergoing a revision procedure within 2 years after MIS TLIF (P=0.599). On multivariate analysis, younger age (P=0.004) was associated with increased risk of undergoing a revision after MIS TLIF.
Conclusions: The results of this study suggest that increasing BMI is not a risk factor for undergoing a revision procedure after MIS TLIF. As such, patients with high BMI should be counseled regarding having similar rates of needing a revision procedure after MIS TLIF as those with lower BMI.
Level Of Evidence: Level IV.
abstract_id: PUBMED:21944927
Perioperative results following open and minimally invasive single-level lumbar discectomy. Lumbar discectomy is the most commonly performed spine surgery and in recent years, minimally invasive tubular discectomy has become increasingly popular among surgeons and patients. However, recent reports have raised the question of whether or not patients have shorter hospitalizations following minimally invasive discectomy. From 2005 to 2010, we analyzed 109 patients who underwent elective, single-level lumbar discectomy for central or paracentral disc herniations. A retrospective analysis of medical records was performed for perioperative complications. Tubular discectomy was not associated with increased rates of durotomy, nerve root injury, wound complications, or recurrent disc herniations requiring additional surgery. Minimally invasive tubular discectomy in the lumbar spine results in a small, but statistically significant, advantage in length of stay compared to conventional open microdiscectomy. While small on an individual basis, this difference may translate to substantial economic savings over time when one considers how many discectomies are performed in aggregate.
abstract_id: PUBMED:18673050
Perioperative results following lumbar discectomy: comparison of minimally invasive discectomy and standard microdiscectomy. Object: Minimally invasive lumbar discectomy is a refinement of the standard open microsurgical discectomy technique. Proponents of the minimally invasive technique suggest that it improves patient outcome, shortens hospital stay, and decreases hospital costs. Despite these claims there is little support in the literature to justify the adoption of minimally invasive discectomy over standard open microsurgical discectomy. In the present study, the authors address some of these issues by comparing the short-term outcomes in patients who underwent first time, single-level lumbar discectomy at L3-4, L4-5, or L5-S1 using either a minimally invasive percutaneous, muscle splitting approach or a standard, open, muscle-stripping microsurgical approach.
Methods: A retrospective chart review of 172 patients who had undergone a first-time, single-level lumbar discectomy at either L3-4, L4-5, or L5-S1 was performed. Perioperative results were assessed by comparing the following parameters between patients who had undergone minimally invasive discectomy and those who received standard open microsurgical discectomy: length of stay, operative time, estimated blood loss, rate of cerebrospinal fluid leak, post-anesthesia care unit narcotic use, need for a physical therapy consultation, and need for admission to the hospital.
Results: Forty-nine patients underwent minimally invasive discectomy, and 123 patients underwent open microsurgical discectomy. At baseline the groups did differ significantly with respect to age, but did not differ with respect to height, weight, sex, body mass index, level of radiculopathy, side of radiculopathy, insurance status, or type of preoperative analgesic use. No statistically significant differences were identified in operative time, rate of cerebrospinal fluid leak, or need for a physical therapy consultation. Statistically significant differences were identified in length of stay, estimated blood loss, postanesthesia care unit narcotic use, and need for admission to the hospital.
Conclusions: In this retrospective study, patients who underwent minimally invasive discectomy were found to have perioperative results similar to those of patients who underwent open microsurgical discectomy. The differences, although statistically significant, are of modest clinical significance.
abstract_id: PUBMED:20515355
Results and risk factors for recurrence following single-level tubular lumbar microdiscectomy. Object: The use of minimally invasive surgical techniques, including microscope-assisted tubular lumbar microdiscectomy (tLMD), has gained increasing popularity in treating lumbar disc herniations (LDHs). This particular procedure has been shown to be both cost-efficient and effective, resulting in outcomes comparable to those of open surgical procedures. Lumbar disc herniation recurrence necessitating reoperation, however, remains an issue following spinal surgery, with an overall reported incidence of approximately 3-13%. The authors' aim in the present study was to report their experience using tLMD for single-level LDH, hoping to provide further insight into the rate of surgical recurrence and to identify potential risk factors leading to this complication.
Methods: The authors retrospectively reviewed the cases of 217 patients who underwent tLMD for single-level LDH performed identically by 2 surgeons (J.B., R.H.) between 2004 and 2008. Evaluation for LDH recurrence included detailed medical chart review and telephone interview. Recurrent LDH was defined as the return of preoperative signs and symptoms after an interval of postoperative resolution, in conjunction with radiographic demonstration of ipsilateral disc herniation at the same level and pathological confirmation of disc material. A cohort of patients without recurrence was used for comparison to identify possible risk factors for recurrent LDH.
Results: Of the 147 patients for whom the authors were able to definitively assess symptomatic recurrence status, 14 patients (9.5%) experienced LDH recurrence following single-level tLMD. The most common level involved was L5-S1 (42.9%) and the mean length of time to recurrence was 12 weeks (range 1.5-52 weeks). Sixty-four percent of the patients were male. In a comparison with patients without recurrence, the authors found that relatively lower body mass index was significantly associated with recurrence (p = 0.005), such that LDH in nonobese patients was more likely to recur.
Conclusions: Recurrence rates following tLMD for LDH compare favorably with those in patients who have undergone open discectomy, lending further support for its effectiveness in treating single-level LDH. Nonobese patients with a relatively lower body mass index, in particular, appear to be at greater risk for recurrence.
abstract_id: PUBMED:30524611
Endoscopic lumbar discectomy and minimally invasive lumbar interbody fusion: a contrastive review. Both percutaneous endoscopic lumbar discectomy (PELD) and minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) have been shown to be common and effective choices for minimally invasive surgery for lumbar disc herniation (LDH). To gain a better understanding of these two procedures, we undertook this contrastive review. Drawing on recent literature and our own clinical practice, we summarize the indications/contraindications, advantages/disadvantages, and complications/recurrences of PELD and MIS-TLIF. We conclude that PELD and MIS-TLIF are safe and effective minimally invasive operative techniques for the treatment of symptomatic LDH. A better understanding of these two procedures will help improve clinical outcomes through selection of proper indications and will also benefit the further development of minimally invasive spine surgery.
abstract_id: PUBMED:27919762
Minimally Invasive Transforaminal Lumbar Interbody Fusion Versus Percutaneous Endoscopic Lumbar Discectomy: Revision Surgery for Recurrent Herniation After Microendoscopic Discectomy. Background: Most patients with recurrent herniation after microendoscopic discectomy (MED) need revision surgery. Minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) and percutaneous endoscopic lumbar discectomy (PELD) are common operative methods for MED recurrence, but no study has compared the clinical outcomes of these 2 surgical methods as revision surgery for MED recurrence.
Methods: A total of 105 patients who underwent either MIS-TLIF (58 patients) or PELD (47 patients) for revision of MED recurrence were included in this study. Perioperative outcomes (operation time, blood loss, and hospital stay), total cost, pain and functional scores (visual analog scale, Oswestry Disability Index, 12-item short form health survey) with a 12-month follow-up visit and review of complications and recurrence within 12 months postoperatively were recorded and assessed.
Results: No significant difference of clinical outcome over time was observed between these 2 approaches. Compared with MIS-TLIF, PELD was associated with greater satisfaction in the early stage after surgery; this effect was equalized after 3 months postoperatively. PELD brought advantages in terms of shorter operation time, shorter hospital stay, less blood loss, and lower total cost compared with MIS-TLIF; however, PELD was also associated with a higher recurrence rate than MIS-TLIF.
Conclusions: Neither of these 2 surgical methods gave a clear advantage in long-term pain or function scores. Compared with MIS-TLIF, PELD could lead to a better perioperative result and less cost; however, the higher recurrence rate could not be ignored. Taking these characteristics into consideration was instrumental in pursuing personalized treatment for MED recurrence.
abstract_id: PUBMED:33363376
Comparison of Percutaneous Endoscopic Lumbar Discectomy with Minimally Invasive Transforaminal Lumbar Interbody Fusion as a Revision Surgery for Recurrent Lumbar Disc Herniation after Percutaneous Endoscopic Lumbar Discectomy. Objective: The purpose of this study was to compare the outcomes of percutaneous endoscopic lumbar discectomy (PELD) and minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) as revision surgery for recurrent lumbar disc herniation (rLDH) after PELD surgery.
Patients And Methods: A total of 46 patients with rLDH were retrospectively assessed in this study. All the patients had received a PELD in Peking University First Hospital between January 2015 and June 2019, before they underwent a revision surgery by either PELD (n=24) or MIS-TLIF (n=22). The preoperative data, perioperative conditions, complications, recurrence condition, and clinical outcomes of the patients were compared between the two groups.
Results: Compared to the MIS-TLIF group, the PELD group had significantly shorter operative time, less intraoperative hemorrhage, and shorter postoperative hospitalization, but higher recurrence rate (P<0.05). Complication rates were comparable between the two groups. Both groups had satisfactory clinical outcomes at a 12-month follow-up after the revision surgery. The PELD group also showed significantly lower visual analog scale (VAS) scores of back pain and Oswestry disability index (ODI) in one month after the revision surgery, whereas the difference was not detectable at six- and 12-month follow-ups.
Conclusion: Both PELD and MIS-TLIF are effective as revision surgery for rLDH after primary PELD. PELD is superior to MIS-TLIF in terms of operative time, amount of intraoperative hemorrhage, and postoperative hospitalization. However, its higher postoperative recurrence rate must be considered, and patients should be well informed when making a decision between the two surgical approaches.
abstract_id: PUBMED:30607499
Minimally invasive discectomy for lumbar disc herniation: current concepts, surgical techniques, and outcomes. Purpose: To date, various minimally invasive surgeries for lumbar disc herniation (LDH) have been performed in many countries, including microendoscopic discectomy (MED) and percutaneous endoscopic discectomy (PED). We herein report the surgical indications, current concepts, and techniques of minimally invasive discectomy for LDH, including MED and PED. The postoperative outcomes and risks of complications compared with conventional discectomy are also discussed.
Methods: We reviewed the existing literature to describe the current concepts, surgical techniques, and clinical outcomes.
Results: Minimally invasive discectomy is generally performed using a transmuscular approach with an endoscopic technique and can reduce muscle and soft tissue damage. The minimal invasiveness can also reduce the severity of postoperative pain and improve patient satisfaction compared with conventional open discectomy. Based on the data of previous systematic reviews and meta-analyses, minimally invasive lumbar discectomy, such as MED and PED, appears to be safe and effective alternatives to open discectomy for patients with symptomatic LDH. These minimally invasive surgeries can provide beneficial effects to shorten the hospital stay and time to return to work. However, minimally invasive discectomy has a demanding learning curve. Only a well-trained surgeon and support team should perform these less invasive procedures.
Conclusions: Clinical evidence supporting the superiority of MED and PED is still limited. Well-designed prospective multicentered randomized controlled trials with large samples are therefore needed in order to draw a convincing conclusion.
Answer: Yes, greater body mass index (BMI) does increase the risk for revision procedures following a single-level minimally invasive lumbar discectomy (MIS LD). A retrospective analysis of a prospectively maintained surgical registry indicated that BMI was an independent risk factor for undergoing a revision procedure after a lumbar discectomy. The study found that the 2-year risk for a revision procedure was 1.8% for normal weight patients, 12.5% for overweight patients, 9.1% for obese patients, and 25.0% for morbidly obese patients. Multivariate-adjusted analysis showed that BMI category was independently associated with undergoing a revision procedure (P = 0.038) (PUBMED:27128255).
However, it is important to note that there are conflicting findings in the literature. Another study examining the risk of revision surgery after minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) found that increasing BMI was not a risk factor for undergoing a revision procedure (PUBMED:28538081). This suggests that the relationship between BMI and the need for revision surgery may vary depending on the specific type of minimally invasive spinal procedure performed.
Therefore, while there is evidence to suggest that a greater BMI can increase the risk of revision procedures following a single-level MIS LD, the association may not be consistent across all types of minimally invasive spinal surgeries. Patients with higher BMI considering MIS LD should be informed of the potentially elevated risk for revision procedures. |
Instruction: Does physician benchmarking improve performance of laparoscopically assisted vaginal hysterectomy?
Abstracts:
abstract_id: PUBMED:16217980
Does physician benchmarking improve performance of laparoscopically assisted vaginal hysterectomy? Background: Benchmarking techniques were implemented to optimize operating time and charges associated with laparoscopically assisted vaginal hysterectomy (LAVH).
Materials And Methods: The baseline LAVH profile over a period of 4 years (167 cases) was compared with 1-year data (47 cases) after a benchmarking educational program (disseminating data ranking performance by each surgeon plus suggestions for improvement). Preintervention and postintervention profiles were compared by means of Student t test and Wilcoxon rank sum analysis. Hierarchical multiple regression was used to identify additional sources of variation for operative charges and time.
Results: Mean operating times after implementing benchmarking were lower, averaging 182 versus 197 minutes in the control subjects (P = 0.05). We found no significant difference in total or operative charges. After adjusting for potential confounders, benchmarking remained associated with decreased operating time in the multivariate model (P = 0.01).
Conclusions: LAVH operating times decreased after a surgical benchmarking and education intervention, but operating charges did not.
abstract_id: PUBMED:28781532
Surgical technique of concomitant laparoscopically assisted vaginal hysterectomy and laparoscopic cholecystectomy. Background: Laparoscopically assisted vaginal hysterectomy is one of the most frequently performed gynecologic operations, and numerous authors have demonstrated its safety and feasibility.
Case Presentation: In selected cases we performed simultaneous laparoscopically assisted vaginal total hysterectomy with bilateral adnexectomy and laparoscopic cholecystectomy, using 5 trocars and no uterine manipulator. Previous examinations included abdominal ultrasound, cervix biopsy, and CT of the abdomen and pelvis. Our aim was to evaluate the surgical technique from our initial experience with combined laparoscopically assisted vaginal hysterectomy and laparoscopic cholecystectomy.
Conclusions: Laparoscopic hysterectomy had a number of advantages over the conventional technique in these patients with underlying associated diseases, including less postoperative pain, rapid recovery, and aesthetic benefits.
abstract_id: PUBMED:26327861
Vaginal hysterectomy vs. laparoscopically assisted vaginal hysterectomy in women with symptomatic uterine leiomyomas: a retrospective study. Introduction: Uterine leiomyomas are the most common benign tumors of the female reproductive system. Although the majority of myomas are asymptomatic, some patients have symptoms or signs of varying degrees and require a hysterectomy.
The Aim Of The Study: The aim of the study was to compare the clinical results of two minimally invasive hysterectomy techniques: vaginal hysterectomy (VH) and laparoscopically assisted vaginal hysterectomy (LAVH).
Material And Methods: A retrospective, observational study was performed at a tertiary care center: the Gynecology and Gynecologic Oncology Department, Polish Mother's Memorial Hospital Research Institute. The study period was from January 2003 to December 2012. A total of 159 women underwent either vaginal hysterectomy (VH, n = 120) or laparoscopically assisted vaginal hysterectomy (LAVH, n = 39) for symptomatic uterine myomas. Outcome measures, including past medical history, blood loss, major complications, operating time, and discharge time, were assessed and compared between the studied groups. Statistical analysis was performed using the Student t-test, Mann-Whitney U test, χ² test, and Yates' χ² test. P < 0.05 was considered statistically significant.
Results: There were no differences in patients' mean age. Parity was significantly higher in the VH group (VH 1.9 ± 0.7 vs. LAVH 1.5 ± 0.8; p = 0.008). No difference was found in the mean ± standard deviation (SD) uterine volume between the VH and LAVH groups (179 ± 89 vs. 199 ± 88 cm³, respectively). The mean operative time was significantly longer for the LAVH group (83 ± 29 vs. 131 ± 30 min; p = 0.0001). The intraoperative blood loss (VH 1.3 ± 1.1 vs. LAVH 1.4 ± 0.9 g/dl; p = 0.2) and the rate of intra- and postoperative complications were similar in both groups studied. The mean discharge time was longer for LAVH than for VH (VH 4.2 ± 1.2 vs. LAVH 5.3 ± 1.3 days, p = 0.0001).
Conclusions: Laparoscopically assisted vaginal hysterectomy and VH are safe hysterectomy techniques for women with a myomatous uterus. For LAVH, abdomino-pelvic exploration and the ability to perform adnexectomy safely represent the major advantages compared with VH. Vaginal hysterectomy had a shorter operating time and mild blood loss, making it a suitable method of hysterectomy for cases in which the shortest duration of surgery and anesthesia is preferable.
abstract_id: PUBMED:28508344
Vaginal hysterectomy versus laparoscopically assisted vaginal hysterectomy for large uteri between 280 and 700 g: a randomized controlled trial. Objective: To compare surgical outcomes, postoperative complications and costs between vaginal hysterectomy and laparoscopically assisted vaginal hysterectomy in cases of large uteri.
Methods: Prospective randomized controlled trial done at Ain Shams University Maternity Hospital, where 50 patients were recruited and divided into two equal groups (each 25 patients). First group underwent vaginal hysterectomy, and the second underwent laparoscopically assisted vaginal hysterectomy.
Results: Patient characteristics were similar in both groups. As for surgical outcomes, estimated intraoperative blood loss (P = 0.90), operative time (P = 0.48), preoperative hemoglobin (P = 0.09), postoperative hemoglobin (P = 0.42), and operative complications (P = 1.0) did not differ between the two groups. The hospital costs (converted from Egyptian pound to U.S. dollars) were significantly higher in the LAVH group [VH: $1060.86 ($180.09) versus LAVH: $1560.5 ($220.57), P < 0.001]. There was no significant difference in the duration of postoperative hospital stay between the two groups [VH: 49.92 h (28.50) versus LAVH: 58.56 h (27.78), P = 0.28] or the actual uterine weight measured postoperatively [VH: 350.72 g (71.78) versus LAVH: 385.96 g (172.52), P = 0.35].
Conclusion: Both vaginal hysterectomy and laparoscopically assisted vaginal hysterectomy are safe procedures in cases of large uteri with no significant difference between them except in terms of costs as VH appears to be more cost effective. CLINICAL TRIALS.GOV: NCT02826304.
abstract_id: PUBMED:26085749
A Comparative Study Between Laparoscopically Assisted Vaginal Hysterectomy and Vaginal Hysterectomy: Experience in a Tertiary Diabetes Care Hospital in Bangladesh. Objective: The study was undertaken to compare the efficiency and outcome of Laparoscopic Assisted Vaginal Hysterectomy (LAVH) and Vaginal Hysterectomy (VH) in terms of operative time, cost, estimated blood loss, hospital stay, quantity of analgesia use, intra- and postoperative complication rates and patients recovery.
Materials And Methods: A total of 500 diabetic patients were prospectively enrolled during the study period from January 2005 through January 2009. The performance of LAVH was compared with that of VH in a tertiary care hospital. All procedures were performed by the same surgeon.
Results: There was no significant difference in terms of age, parity, body weight or uterine weight. The mean estimated blood loss in LAVH was significantly lower when compared with the VH group (126.5±39.8 ml and 100±32.8 ml), respectively. As to postoperative pain, less diclofenac was required in the LAVH group compared to the VH group (70.38±13.45 mg and 75.18±16.45 mg), respectively.
Conclusions: LAVH is clinically and economically comparable to VH, with patient benefits of less estimated blood loss, lower quantity of analgesia use, lower rate of intra- and postoperative complications, less postoperative pain, rapid patient recovery, and shorter hospital stay.
abstract_id: PUBMED:26451889
Systematic assessment of surgical complications in laparoscopically assisted vaginal hysterectomy for pelvic organ prolapse. Objective: To assess patient safety and complication rates in native tissue vaginal prolapse repair combined with laparoscopically assisted vaginal hysterectomy and prophylactic salpingectomy/salpingoophorectomy.
Study Design: This was a single-centre retrospective study conducted at the University Hospital, Urogynaecological Unit, with a certified urogynaecological surgeon. A cohort of 321 consecutive patients received laparoscopically assisted vaginal hysterectomy for pelvic organ prolapse grade II-IV combined with defect-specific vaginal native tissue repair. Analysis of the total cohort and subgroups according to prolapse grade and concomitant laparoscopic procedures was performed. Student's t-tests and chi-squared tests were used for descriptive statistical analysis. Surgical complications were classified using the Clavien-Dindo (CD) classification system of surgical complications.
Results: Complications were classified as CD I (1.87%), CD II (13.39%), CD IIIa (0.62%), and CD IIIb (1.87%); no CD IV or CD V complication occurred. One (0.31%) intraoperative bladder lesion, but no rectal lesion, ureter lesion, or intraoperative haemorrhage requiring blood transfusion, was noted. The overall morbidity rate, including the intraoperative bladder lesion and the CD I complication, was 18.06%. All (n=321) patients underwent prophylactic salpingectomy. Additional oophorectomy was performed in 222 post-menopausal patients. Pelvic adhesions were found in 123 (38.31%) patients and 148 (46%) patients presented grade IV prolapse. Operating time was longer for grade IV than for grade II/III prolapse (p<0.01), but CD III complication rates did not differ between these groups. Operating time was longer when laparoscopic adhesiolysis was performed (p=0.025), but this factor did not affect CD III complication rates.
Conclusions: The combination of vaginal site-specific prolapse repair with laparoscopically assisted hysterectomy leads to low complication rates. Prophylactic salpingectomy or salpingoophorectomy can be performed safely in combination with hysterectomy for pelvic organ prolapse. In terms of surgical safety laparoscopy seems to be a meaningful addition to vaginal native tissue prolapse surgery.
abstract_id: PUBMED:1828758
Laparoscopically-assisted vaginal hysterectomy. We report on our initial experience with laparoscopically-assisted vaginal hysterectomy. Seven patients aged 41 to 67 years were successfully treated with this new technique without significant morbidity and with the advantage of early discharge and return to full activity. We discuss the indications, technical details including preparation and positioning, and the shortcomings and problems encountered. It is concluded that laparoscopically-assisted vaginal hysterectomy may in the near future become a valid alternative to the conventional procedures in selected cases.
abstract_id: PUBMED:7772008
Laparoscopically-assisted vaginal hysterectomy. A series of 153 consecutive patients is presented in whom a laparoscopically-assisted vaginal hysterectomy was planned and was performed in 147 of them (96%). Bipolar diathermy was utilized to diathermy the upper uterine pedicles above the uterine arteries. Neither the uterine arteries nor the uterosacral ligaments were ligated laparoscopically and the remainder of the hysterectomy was performed vaginally. In the other 6 patients in whom the operation was commenced laparoscopically, it was discontinued and the operation concluded vaginally in 1 patient and abdominally in the other 5 patients. The aim of using the laparoscopic technique has been to convert a potential abdominal hysterectomy into a vaginal hysterectomy. The technique and the results are discussed.
abstract_id: PUBMED:37675800
Impact of resident participation on surgical outcomes in laparoscopically assisted vaginal hysterectomy. Objective: To compare surgical outcomes in patients with benign diseases who underwent laparoscopically assisted vaginal hysterectomy (LAVH) to determine the association between surgical outcomes and resident participation in the gynecologic field.
Methods: A single-center retrospective study was conducted of patients diagnosed with benign gynecologic diseases who underwent LAVH between January 2010 and December 2015. Clinicopathologic characteristics and surgical outcomes were compared between the resident involvement and non-involvement groups. The primary endpoint was 30-day postoperative morbidity. Observations were propensity-score matched on 17 covariates for resident involvement versus non-involvement.
Results: Of the 683 patients involved in the study, 165 underwent LAVH with resident involvement and 518 underwent surgery without resident involvement. After propensity score matching (157 observations), 30-day postoperative morbidity occurred in 6 (3.8%) and 4 (2.5%) patients in the resident involvement and non-involvement groups, respectively (P = 0.501). The length of hospital stay differed significantly between the two groups: 5 days in the resident involvement group and 4 days in the non-involvement group (P < 0.001). On multivariate analysis, Charlson Comorbidity Index >2 (odds ratio [OR] 8.01, 95% confidence interval [CI] 2.68-23.96; P < 0.001), operative time (OR 1.02, 95% CI 1.01-1.03; P < 0.001), and estimated blood loss (OR 1.00, 95% CI 1.00-1.00; P < 0.001) were significantly associated with 30-day morbidity, but resident involvement was not statistically significant.
Conclusion: There was no significant difference in the 30-day morbidity rate when residents participated in LAVH. These findings suggest that resident participation in LAVH may be a viable approach to ensure both residency education and patient safety.
abstract_id: PUBMED:21306312
Morbidity with total laparoscopic and laparoscopically assisted vaginal hysterectomy. We evaluated the operative and postoperative morbidity among 103 women who underwent total laparoscopic hysterectomy and 107 others who underwent laparoscopically assisted vaginal hysterectomy. Blood loss was significantly greater in the assisted vaginal hysterectomy group (178.0 ± 12.1 ml) than in the total hysterectomy group (130.2 ± 10.7 ml) (p < 0.001). Despite higher uterine weight in the total hysterectomy group, the operative time of both techniques was similar. The complications of both hysterectomies were also comparable. The results from our study suggest that the complication rates of laparoscopically assisted vaginal hysterectomy and total hysterectomy are similar. However, laparoscopically assisted vaginal hysterectomy is associated with increased blood loss.
Answer: Yes, physician benchmarking does improve the performance of laparoscopically assisted vaginal hysterectomy (LAVH). A study that implemented benchmarking techniques, which included disseminating data ranking performance by each surgeon along with suggestions for improvement, showed that mean operating times for LAVH decreased after the surgical benchmarking and education intervention. The average operating time was reduced from 197 minutes to 182 minutes, although there was no significant difference in total or operative charges. Even after adjusting for potential confounders, benchmarking remained associated with decreased operating time in the multivariate model (PUBMED:16217980). |
Instruction: Is the Reluctance for the Implantation of Right Donor Kidneys Justified?
Abstracts:
abstract_id: PUBMED:26319261
Is the Reluctance for the Implantation of Right Donor Kidneys Justified? Background: Right renal veins are shorter than their left counterparts. Since the implantation of kidneys with short renal veins is considered more challenging, many surgeons prefer left kidneys for transplantation. Our hypothesis was therefore that the implantation of right kidneys from living and deceased donors is associated with more technical graft failures than that of left kidneys.
Methods: Two consecutive cohorts of adult renal allograft recipients of living (n = 4,372) and deceased (n = 5,346) donor kidneys between January 1, 2000 and January 1, 2013 were analyzed. Data were obtained from the prospectively maintained electronic database of the Dutch Organ Transplant Registry. Technical graft failure was defined as failure of the renal allograft within 10 days after renal transplantation without signs of acute rejection.
Results: In the living donor kidney transplantation cohort, the implantation of right donor kidneys was associated with a higher incidence of technical graft failure (multivariate analysis p = 0.03). For recipients of deceased donor kidneys, the implantation of right kidneys was not significantly associated with technique-related graft failure (multivariate analysis p = 0.16).
Conclusions: Our data show that the implantation of right kidneys from living donors is associated with a higher incidence of technique-related graft failure as compared to left kidneys.
abstract_id: PUBMED:12394852
Right-sided laparoscopic live-donor nephrectomy: is reluctance still justified? Background: Laparoscopic donor nephrectomy (LDN) of the right kidney is performed with great reluctance because of the shorter renal vein and possible increased incidence of venous thrombosis.
Methods: In this retrospective, clinical study, right LDN and left LDN were compared. Between December 1997 and May 2001, 101 LDN were performed. Seventy-three (72%) right LDN were compared with 28 (28%) left LDN for clinical characteristics, operative data, and graft function.
Results: There were no significant differences between the two groups regarding conversion rate, complications, hospital stay, thrombosis, graft function, and graft survival. Operating time was significantly shorter in the right LDN group (218 vs. 280 min).
Conclusion: In this study, right LDN was not associated with a higher number of complications, conversions, or incidence of venous thrombosis compared with the left LDN. Thus, reluctance toward right LDN is not justified, and therefore, right LDN should not be avoided.
abstract_id: PUBMED:37879529
Association of Implantation Biopsy Findings in Living Donor Kidneys With Donor and Recipient Outcomes. Rationale & Objective: Some living donor kidneys are found to have biopsy evidence of chronic scarring and/or glomerular disease at implantation, but it is unclear if these biopsy findings help predict donor kidney recovery or allograft outcomes. Our objective was to identify the prevalence of chronic histological changes and glomerular disease in donor kidneys, and their association with donor and recipient outcomes.
Study Design: Retrospective cohort study.
Setting & Participants: Single center, living donor kidney transplants from January 2010 to July 2022.
Exposure: Chronic histological changes, glomerular disease in donor kidney implantation biopsies.
Outcome: For donors, single-kidney estimated glomerular filtration rate (eGFR) increase, percent total eGFR loss, ≥40% eGFR decline from predonation baseline, and eGFR <60 mL/min/1.73 m² at 6 months after donation; for recipients, death-censored allograft survival.
Analytical Approach: Biopsies were classified as having possible glomerular disease by pathologist diagnosis or chronic changes based on the percentage of glomerulosclerosis, interstitial fibrosis/tubular atrophy, and vascular disease. We used logistic regression to identify factors associated with the presence of chronic changes, linear regression to identify the association between chronic changes and single-kidney estimated glomerular filtration rate (eGFR) recovery, and time-to-event analyses to identify the relationship between abnormal biopsy findings and allograft outcomes.
Results: Among 1,104 living donor kidneys, 155 (14%) had advanced chronic changes on implantation biopsy, and 12 (1%) had findings suggestive of possible donor glomerular disease. Adjusted logistic regression showed that age (odds ratio [OR], 2.44 per 10 years [95% CI, 1.98-3.01]), Hispanic ethnicity (OR, 1.87 [95% CI, 1.15-3.05]), and hypertension (OR, 1.92 [95% CI, 1.01-3.64]) were associated with higher odds of chronic changes on implantation biopsy. Adjusted linear regression showed no association of advanced chronic changes with single-kidney eGFR increase or relative risk of eGFR<60mL/min/1.73m2. There were no differences in time-to-death-censored allograft failure in unadjusted or adjusted Cox proportional hazards models when comparing kidneys with chronic changes to kidneys without histological abnormalities.
Limitations: Retrospective, absence of measured GFR.
Conclusions: Approximately 1 in 7 living donor kidneys had chronic changes on implantation biopsy, primarily in the form of moderate vascular disease, and 1% had possible donor glomerular disease. Abnormal implantation biopsy findings were not significantly associated with 6-month donor eGFR outcomes or allograft survival.
Plain-language Summary: Kidney biopsies are the gold standard test to identify the presence or absence of kidney disease. However, kidneys donated by healthy living donors-who are extensively screened for any evidence of kidney disease before donation-occasionally show findings that might be considered "abnormal," including the presence of scarring in the kidney or findings suggestive of a primary kidney disease. We studied the frequency of abnormal kidney biopsy findings among living donors at our center. We found that about 14% of kidneys had chronic abnormalities and 1% had findings suggesting possible glomerular kidney disease, but the presence of abnormal biopsy findings was not associated with worse outcomes for the donors or their recipients.
abstract_id: PUBMED:35181991
Risk aversion in the use of complex kidneys in paired exchange programs: Opportunities for even more transplants? This retrospective review of the largest United States kidney exchange reports characteristics, utilization, and recipient outcomes of kidneys with simple compared to complex anatomy and extrapolates reluctance to accept these kidneys. Of 3105 transplants performed, only 12.8% were right kidneys and 23.1% had multiple renal arteries. 59.3% of centers used fewer right kidneys than expected and 12.1% transplanted zero right kidneys or kidneys with more than 1 artery. Five centers transplanted a third of these kidneys (35.8% of right kidneys and 36.7% of kidneys with multiple renal arteries). 22.5% and 25.5% of centers currently will not entertain a match offer for a left or right kidney with more than one artery, respectively. There were no significant differences in all-cause graft failure or death-censored graft loss for kidneys with multiple arteries, and a very small increased risk of graft failure for right kidneys versus left of limited clinical relevance for most recipients. Kidneys with complex anatomy can be used with excellent outcomes at many centers. Variation in use (lack of demand) for these kidneys reduces the number of transplants, so systems to facilitate use could increase demand. We cannot know how many donors are turned away because perceived demand is limited.
abstract_id: PUBMED:36938669
Deceased donor vein extension grafts for right living donor kidney transplantation. Introduction: In an effort to maximize living donor kidney utilization, we describe the use of deceased donor vein extension grafts for right-sided living donor kidneys and report our single-center experience using this technique.
Methods: A retrospective review of kidney transplant recipients (KTR) who received a right living donor kidney with deceased donor vein extension graft. Recipient demographics, postoperative graft function, and surgical complications were reviewed. Living donor nephrectomies were performed laparoscopically. Vein grafts were obtained from recent deceased donor procurements. End-to-end anastomosis of the graft to the renal vein was performed prior to implantation.
Results: Thirty-eight KTR received a right kidney transplant with deceased donor extension grafts. The median recipient age and BMI were 53.0 years and 29.3 kg/m2, respectively; 71% were male. Ninety-five percent of grafts displayed immediate graft function, with two recipients requiring temporary dialysis due to anaphylaxis from induction therapy. Median serum creatinine was 1.6 mg/dL at two weeks and 1.5 mg/dL at three months. There were no graft thromboses.
Conclusion: Utilization of deceased donor extension grafts for short right renal veins is a simple technique that expands the donor pool for living donor renal transplantation. Our experience resulted in no technical complications and excellent early graft function.
abstract_id: PUBMED:23195003
Transposition of iliac vessels in implantation of right living donor kidneys. Background: The rate of right laparoscopic living-donor nephrectomy (RLLDN) is low among kidney transplantations due to the short renal vein and presumed higher risk of thrombosis. Our objective was to describe a surgical technique to compensate for the shorter veins of these grafts.
Methods: Between January 2004 and July 2010, we prospectively collected data from all transplantations using RLLDN-harvested kidneys at our center. Recipient iliac vein transposition was performed in all patients. We reviewed the indications, surgical techniques, and postoperative courses.
Results: In the 43 included cases, the average length of the right renal vein was 2.1 ± 0.6 cm as measured on abdominal computed tomography (CT). The mean extraction and implantation times were 109 ± 33 and 124 ± 31 minutes, respectively; the mean warm ischemia time was 151 ± 29 seconds. Two recipients required postsurgical blood transfusions. In 97.6% of cases, there was immediate urine flow. Postoperative echo-Doppler revealed good arterial and venous flows in all patients. No venous thromboses were detected. The recipients' average hospital stay was 8 ± 5 days. With a mean follow-up of 57 months, 86% of recipients maintain a glomerular filtration rate (GFR) >50 mL/min and creatinine levels <1.5 mg/dL.
Conclusions: Transposition of the recipient iliac vein during implantation is a good technical solution to compensate for the short length of the right renal vein. The use of iliac vein transposition allowed us to perform safe implants of RLLDN-harvested kidneys with good short-term and long-term results.
abstract_id: PUBMED:27549592
Is the Reluctance for the Implantation of Right Kidneys Justified? N/A
abstract_id: PUBMED:20011083
Brain dead donor kidneys are immunologically active: is intervention justified? The improvement in the field of kidney transplantation during the last decades has brought kidney transplantation to the top of patient preference as the best kidney replacement therapy. The use of marginal kidney grafts, which are highly immunogenic, has become common practice because of the lack of kidney donors. Inflammatory activity in the kidneys after brain death is an ongoing phenomenon. Inappropriate treatment of the brain dead donor may result in primary non-function (PNF) of the graft, delayed graft function (DGF), or long-term graft dysfunction and shortened graft survival. Therefore, correct handling of the brain dead donor is of paramount importance. The impact of various pharmacologic agents (catecholamines, glucocorticoids, carbamylated recombinant human erythropoietin, recombinant soluble P-selectin glycoprotein ligand, heme oxygenase-1, carbon monoxide, and mycophenolate mofetil) on the immunogenicity of brain dead donor kidneys is discussed.
abstract_id: PUBMED:27853814
Is the Reluctance for the Implantation of Right Kidneys Justified: Reply. N/A
abstract_id: PUBMED:17580144
Reluctance over right-sided retroperitoneoscopic living donor nephrectomy: justified or not? Unlabelled: We retrospectively compared perioperative donor outcomes and early complication rate of right- and left-sided retroperitoneoscopic living donor nephrectomy (RLDN).
Methods: From November 2001 to April 2006, we performed 118 RLDN, including 24% (n = 28) right-sided and 76% (n = 90) left-sided procedures. Perioperative results and the rate of adverse events were compared for both sides.
Results: We observed no significant difference in operation time, blood loss, warm ischemia time, or postoperative creatinine levels between right- and left-sided kidney donors. RLDN was successfully performed in 116 of 118 donors. One donor in each group had to be converted to an open approach. We observed one graft loss due to renal artery kinking in one recipient after left-sided RLDN. Two right donations needed a saphenous venous patch due to a short right renal vein (<2 cm). Overall, intraoperative and postoperative complications were comparable between the two donor groups.
Conclusion: Right-sided RLDN provides comparable perioperative and postoperative results to those of left-sided RLDN. Our results demonstrated that groups with significant experience in RLDN can perform right living donor nephrectomy safely and efficiently with minimal invasiveness.
Answer: The reluctance for the implantation of right donor kidneys is a topic of debate in the field of renal transplantation. Some studies and clinical experiences suggest that this reluctance may not be entirely justified.
One study found that the implantation of right kidneys from living donors is associated with a higher incidence of technique-related graft failure compared to left kidneys (PUBMED:26319261). This could be due to the shorter length of the right renal vein, which makes the surgical procedure more challenging. However, for deceased donor kidneys, the implantation of right kidneys was not significantly associated with technique-related graft failure (PUBMED:26319261).
In contrast, another study reported that right laparoscopic live-donor nephrectomy (LDN) was not associated with a higher number of complications, conversions, or incidence of venous thrombosis compared with left LDN (PUBMED:12394852). This study suggests that reluctance toward right LDN may not be justified, and right LDN should not be avoided.
A retrospective review of kidney exchange programs revealed that kidneys with complex anatomy, including right kidneys and those with multiple renal arteries, can be used with excellent outcomes at many centers (PUBMED:35181991). The variation in use for these kidneys reduces the number of transplants, indicating that systems to facilitate their use could increase demand and the number of transplants performed.
Furthermore, a single-center experience reported successful use of deceased donor vein extension grafts for short right renal veins in living donor kidney transplantation, with no technical complications and excellent early graft function (PUBMED:36938669). Another study described a surgical technique involving transposition of the recipient iliac vein during implantation to compensate for the short length of the right renal vein, resulting in safe implants with good short-term and long-term results (PUBMED:23195003).
Lastly, a comparison of perioperative donor outcomes and early complication rates of right- and left-sided retroperitoneoscopic living donor nephrectomy (RLDN) found no significant difference in operation time, blood loss, warm ischemia time, or postoperative creatinine levels between right- and left-sided kidney donors (PUBMED:17580144).
In summary, while there are technical challenges associated with the implantation of right donor kidneys, particularly from living donors, several studies and surgical techniques have demonstrated that these challenges can be overcome and good outcomes can be achieved. With appropriate surgical techniques and experience, right donor kidneys can be transplanted with outcomes comparable to left donor kidneys, which suggests that the reluctance to use them relates more to surgical preference and experience than to an intrinsic risk of the kidneys themselves. As such, efforts to increase the utilization of right donor kidneys, including the use of vein extension grafts and iliac vessel transposition, could help expand the donor pool and increase the number of kidney transplants performed (PUBMED:36938669, PUBMED:23195003, PUBMED:17580144).
Instruction: Coronary CT angiography and myocardial perfusion imaging to detect flow-limiting stenoses: a potential gatekeeper for coronary revascularization?
Abstracts:
abstract_id: PUBMED:19684023
Coronary CT angiography and myocardial perfusion imaging to detect flow-limiting stenoses: a potential gatekeeper for coronary revascularization? Aims: To evaluate the diagnostic accuracy of a combined non-invasive assessment of coronary artery disease with coronary CT angiography (CTA) and myocardial perfusion imaging (MPI) for the detection of flow-limiting coronary stenoses and its potential as a gatekeeper for invasive examination and treatment.
Methods And Results: In 78 patients (mean age 65 ± 9 years) referred for coronary angiography (CA), additional CTA and MPI (using single-photon emission computed tomography) were performed and the findings not communicated. Detection of flow-limiting stenoses (justifying revascularization) by the combination of CTA and MPI (CTA/MPI) was compared with the combination of quantitative coronary angiography (QCA) plus MPI (QCA/MPI), which served as the standard of reference. The findings of both combinations were related to the treatment strategy (revascularization vs. medical treatment) chosen in the catheterization laboratory based on the CA findings. Sensitivity, specificity, positive and negative predictive value, and accuracy of CTA/MPI for the detection of flow-limiting coronary stenoses were 100% each. More than half of the revascularization procedures (21/40, 53%) were performed in patients without flow-limiting stenoses and 76% (47/62) of revascularized vessels were not associated with ischaemia on MPI.
Conclusion: The combined non-invasive approach CTA/MPI has an excellent accuracy to detect flow-limiting coronary stenoses compared with QCA/MPI and its use as a gatekeeper appears to make a substantial part of revascularization procedures redundant.
abstract_id: PUBMED:24343677
Cardiac hybrid SPECT/CTA imaging to detect "functionally relevant coronary artery lesion": a potential gatekeeper for coronary revascularization? Objective: The combination of morphological and functional information has gained increasing appreciation with the concepts of "functionally relevant coronary artery lesion (FRCAL)" and "functional revascularization". This has paved the way for non-invasive single-photon emission computed tomography (SPECT)/computed tomography angiography (CTA) hybrid imaging. We aimed to assess the value of cardiac hybrid imaging in the detection of FRCAL and its potential as a gatekeeper for invasive examination and treatment.
Methods: Two hundred and thirty-eight patients with known or suspected coronary artery disease (CAD) underwent CTA and myocardial perfusion imaging (MPI) using SPECT on a dual-system scanner in one session before treatment. Seventy-eight patients underwent invasive coronary angiography (CAG) within 1 month. Detection of FRCAL by the combination of SPECT/CTA was compared with SPECT/CAG, which served as a standard of reference. According to the results of both combinations, the treatment decision (revascularization or medical treatment) was chosen in the catheterization laboratory.
Results: Sensitivity, specificity, accuracy, and positive and negative predictive values of SPECT/CTA vs. SPECT/CAG for the detection of flow-limiting coronary stenosis on patient- and vessel-based analyses were 94.33%, 72.00%, 87.18%, 87.71%, and 85.71% and 88.71%, 92.44%, 91.45%, 80.89%, and 95.78%, respectively. No revascularization procedures were performed in patients without flow-limiting stenosis. However, more than one-third (25/67, 37%) of revascularized vessels were not associated with ischemia on MPI.
Conclusions: The cardiac SPECT/CTA hybrid imaging can accurately detect FRCAL and thereby it may be used as a gatekeeper for CAG and revascularization procedures.
abstract_id: PUBMED:21354895
CT coronary angiography combined with adenosine stress myocardial perfusion scintigraphy for detecting flow-limiting coronary stenoses Objective: To assess the feasibility and accuracy of CT coronary angiography (CTCA) combined with adenosine stress myocardial perfusion scintigraphy (MPS) for diagnosis of flow-limiting coronary stenosis.
Methods: A total of 105 patients with suspected or established coronary artery disease (CAD) underwent CTCA and MPS within 4 weeks before invasive coronary angiography. The accuracy of CTCA/MPS in the diagnosis of flow-limiting coronary stenosis was evaluated in comparison with the results of quantitative coronary angiography and MPS.
Results: The sensitivity, specificity, positive predictive value and negative predictive value of CTCA/MPS as a combined approach for detection of flow-limiting coronary stenosis were all 100%. In 16% (9/55) of the patients, revascularization procedures were performed and no flow-limiting stenosis was found.
Conclusion: Combination of CTCA and MPS has an excellent accuracy for detecting flow-limiting coronary stenosis as compared with quantitative coronary angiography/MPI, and can be a useful gatekeeper for revascularization procedures.
abstract_id: PUBMED:36997751
Dynamic myocardial CT perfusion imaging-state of the art. In patients with suspected coronary artery disease (CAD), dynamic myocardial computed tomography perfusion (CTP) imaging combined with coronary CT angiography (CTA) has become a comprehensive diagnostic examination technique resulting in both anatomical and quantitative functional information on myocardial blood flow, and the presence and grading of stenosis. Recently, CTP imaging has been proven to have good diagnostic accuracy for detecting myocardial ischemia, comparable to stress magnetic resonance imaging and positron emission tomography perfusion, while being superior to single photon emission computed tomography. Dynamic CTP accompanied by coronary CTA can serve as a gatekeeper for invasive workup, as it reduces unnecessary diagnostic invasive coronary angiography. Dynamic CTP also has good prognostic value for the prediction of major adverse cardiovascular events. In this article, we will provide an overview of dynamic CTP, including the basics of coronary blood flow physiology, applications and technical aspects including protocols, image acquisition and reconstruction, future perspectives, and scientific challenges. KEY POINTS: • Stress dynamic myocardial CT perfusion combined with coronary CTA is a comprehensive diagnostic examination technique resulting in both anatomical and quantitative functional information. • Dynamic CTP imaging has good diagnostic accuracy for detecting myocardial ischemia comparable to stress MRI and PET perfusion. • Dynamic CTP accompanied by coronary CTA may serve as a gatekeeper for invasive workup and can guide treatment in obstructive coronary artery disease.
abstract_id: PUBMED:25977111
Combined coronary angiography and myocardial perfusion by computed tomography in the identification of flow-limiting stenosis - The CORE320 study: An integrated analysis of CT coronary angiography and myocardial perfusion. Background: The combination of coronary CT angiography (CTA) and myocardial CT perfusion (CTP) is gaining increasing acceptance, but a standardized approach to be implemented in the clinical setting is necessary.
Objectives: To investigate the accuracy of a combined coronary CTA and myocardial CTP comprehensive protocol compared to coronary CTA alone, using a combination of invasive coronary angiography and single photon emission CT as reference.
Methods: Three hundred eighty-one patients included in the CORE320 trial were analyzed in this study. Flow-limiting stenosis was defined as the presence of ≥50% stenosis by invasive coronary angiography with a related perfusion defect by single photon emission CT. The combined CTA + CTP definition of disease was the presence of a ≥50% stenosis with a related perfusion defect. All data sets were analyzed by 2 experienced readers, aligning anatomic findings by CTA with perfusion defects by CTP.
Results: Mean patient age was 62 ± 6 years (66% male), 27% with prior history of myocardial infarction. In a per-patient analysis, sensitivity for CTA alone was 93%, specificity was 54%, positive predictive value was 55%, negative predictive value was 93%, and overall accuracy was 69%. After combining CTA and CTP, sensitivity was 78%, specificity was 73%, negative predictive value was 64%, positive predictive value was 85%, and overall accuracy was 75%. In a per-vessel analysis, overall accuracy of CTA alone was 73% compared to 79% for the combination of CTA and CTP (P < .0001 for difference).
Conclusions: Combining coronary CTA and myocardial CTP findings through a comprehensive protocol is feasible. Although sensitivity is lower, specificity and overall accuracy are higher than assessment by coronary CTA when compared against a reference standard of stenosis with an associated perfusion defect.
abstract_id: PUBMED:33409778
Prospective comparison of integrated on-site CT-fractional flow reserve and static CT perfusion with coronary CT angiography for detection of flow-limiting coronary stenosis. Objectives: To compare the diagnostic power of separately integrating on-site computed tomography (CT)-derived fractional flow reserve (CT-FFR) and static CT stress myocardial perfusion (CTP) with coronary computed tomography angiography (CCTA) in detecting patients with flow-limiting CAD. The flow-limiting stenosis was defined as obstructive (≥ 50%) stenosis by invasive coronary angiography (ICA) with a corresponding perfusion deficit on stress single photon emission computed tomography (SPECT/MPI).
Methods: Forty-eight patients (74 vessels) were enrolled who underwent research-indicated combined CTA-CTP (320-row CT scanner, temporal resolution 137 ms) and SPECT/MPI prior to conventional coronary angiography. CT-FFR was computed on-site using resting CCTA data with dedicated workstation-based software. All five imaging modalities were analyzed in blinded independent core laboratories. Logistic regression and the integrated discrimination improvement (IDI) index were used to evaluate incremental differences in CT-FFR or CTP compared with CCTA alone.
Results: The prevalence of obstructive CAD defined by combined ICA-SPECT/MPI was 40%. Per-vessel sensitivity and specificity were 95 and 42% for CCTA, 76 and 89% for CCTA + CTP, and 81 and 96% for CCTA + CT-FFR, respectively. The diagnostic performance of CCTA (AUC = 0.82) was improved by combining it with CT-FFR (AUC = 0.92, p = 0.01; IDI = 0.27, p < 0.001) or CTP (AUC = 0.90, p = 0.02; IDI = 0.18, p = 0.003).
Conclusion: On-site CT-FFR combined with CCTA provides an incremental diagnostic improvement over CCTA alone in identifying patients with flow-limiting CAD defined by ICA + SPECT/MPI, with a comparable diagnostic accuracy for integrated CTP and CCTA.
Key Points: • Both on-site CT-FFR and CTP perform well with high diagnostic accuracy in the detection of flow-limiting stenosis. • Comparable diagnostic accuracy between CCTA + CT-FFR and CCTA + CTP is demonstrated to detect flow-limiting stenosis. • Integrated CT-FFR and CCTA derived from a single widened CCTA data acquisition can accurately and conveniently evaluate both coronary anatomy and physiology in the future management of patients with suspected CAD, without the need for additional vasodilator administration and contrast and radiation exposure.
abstract_id: PUBMED:28575302
Impact of computed tomography myocardial perfusion following computed tomography coronary angiography on downstream referral for invasive coronary angiography, revascularization and, outcome at 12 months. Aims: The aim of this study was to assess the impact of adding stress computed tomography (CT) myocardial perfusion (CTP) to coronary CT angiography (CTA) on downstream referral for invasive coronary angiography (ICA), revascularization, and outcome in patients presenting with new-onset chest pain.
Methods And Results: Three hundred and eighty-four patients were referred for cardiac CT. Patients with lesions ≥50% stenosis underwent subsequently stress CTP. Perfusion scans were considered abnormal if a defect was observed in ≥ 1 segment. Downstream performance of ICA, revascularization, and the occurrence of major cardiovascular events (death, non-fatal myocardial infarction, and unstable angina requiring urgent revascularization) were assessed within 12 months. In total, 119 patients showed ≥50% stenosis on coronary CTA; stress CTP was normal in 61 patients, abnormal in 38 patients and was not performed in 20 patients. After normal stress CTP, 19 (31%) patients underwent ICA and 9 (15%) underwent revascularization. After abnormal stress CTP, 36 (95%) patients underwent ICA and 29 (76%) revascularizations were performed. Multivariable analyses showed a five-fold reduction in likelihood of proceeding to ICA when a normal stress CTP was added to a coronary CTA showing obstructive CAD. Major cardiovascular event rates at 12 months for patients with obstructive CAD and normal stress CTP (N = 61) were low: 1 myocardial infarction, 1 urgent revascularization, and 1 non-cardiac death.
Conclusion: The performance of stress CTP in patients with obstructive CAD at coronary CTA in the same setting is feasible and reduces the referral rate for ICA and revascularization. Secondly, the occurrence of major cardiovascular events at 12 months follow-up in patients with normal stress CTP is low.
abstract_id: PUBMED:28916411
3D fusion of coronary CT angiography and CT myocardial perfusion imaging: Intuitive assessment of morphology and function. Background: The objective of this work was to support three-dimensional fusion of coronary CT angiography (coronary CTA) and CT myocardial perfusion (CT-Perf) data visualizing coronary artery stenoses and corresponding stress-induced myocardial perfusion deficits for diagnostics of coronary artery disease.
Methods: Twelve patients undergoing coronary CTA/CT-Perf after heart transplantation were included (56 ± 12 years, all males). CT image quality was rated. Coronary diameter stenoses >50% were documented for coronary CTA. Stress-induced perfusion deficits were noted for CT-Perf. A software was implemented facilitating 3D fusion imaging of coronary CTA/CT-Perf data. Coronary arteries and heart contours were segmented automatically. To overcome anatomical mismatch of coronary CTA/CT-Perf image acquisition, perfusion values were projected on the left ventricle as visualized in coronary CTA. Three resulting datasets (coronary tree/heart contour/perfusion values) were fused for combined three-dimensional rendering. 3D fusion was compared with conventional analysis of coronary CTA/CT-Perf data and to results from catheter coronary angiography.
Results: CT image quality was rated good-excellent (3.5 ± 0.5, scale 1-4). 3D fusion imaging of coronary CTA/CT-Perf data was feasible in 11/12 patients (92%). One patient (8%) was excluded from further analysis due to severe motion artifacts. 2 of 11 remaining patients (18%) showed both stress-induced perfusion deficits and relevant coronary stenoses. Using 3D fusion imaging, the ischemic region could be correlated to a culprit coronary lesion in one case (1/2 = 50%) and diagnostic findings could be rectified in the other case (1/2 = 50%). Coronary CTA was in full correspondence with catheter coronary angiography.
Conclusion: A method for 3D fusion of coronary CTA/CT-Perf is introduced correlating relevant coronary lesions and corresponding stress-induced myocardial perfusion deficits.
abstract_id: PUBMED:37145148
Fully automated pixel-wise quantitative CMR-myocardial perfusion with CMR-coronary angiography to detect hemodynamically significant coronary artery disease. Objectives: We applied a fully automated pixel-wise post-processing framework to evaluate fully quantitative cardiovascular magnetic resonance myocardial perfusion imaging (CMR-MPI). In addition, we aimed to evaluate the additive value of coronary magnetic resonance angiography (CMRA) to the diagnostic performance of fully automated pixel-wise quantitative CMR-MPI for detecting hemodynamically significant coronary artery disease (CAD).
Methods: A total of 109 patients with suspected CAD were prospectively enrolled and underwent stress and rest CMR-MPI, CMRA, invasive coronary angiography (ICA), and fractional flow reserve (FFR). CMRA was acquired between stress and rest CMR-MPI acquisition, without any additional contrast agent. Finally, CMR-MPI quantification was analyzed by a fully automated pixel-wise post-processing framework.
Results: Of the 109 patients, 42 had hemodynamically significant CAD (FFR ≤ 0.80 or luminal stenosis ≥ 90% on ICA) and 67 had hemodynamically non-significant CAD (FFR > 0.80 or luminal stenosis < 30% on ICA). On the per-territory analysis, patients with hemodynamically significant CAD had higher myocardial blood flow (MBF) at rest, lower MBF under stress, and lower myocardial perfusion reserve (MPR) than patients with hemodynamically non-significant CAD (p < 0.001). The area under the receiver operating characteristic curve of MPR (0.93) was significantly larger than those of stress and rest MBF, visual assessment of CMR-MPI, and CMRA (p < 0.05), but similar to that of the integration of CMR-MPI with CMRA (0.90).
Conclusions: Fully automated pixel-wise quantitative CMR-MPI can accurately detect hemodynamically significant CAD, but the integration of CMRA obtained between stress and rest CMR-MPI acquisition did not provide significantly additive value.
Key Points: • Full quantification of stress and rest cardiovascular magnetic resonance myocardial perfusion imaging can be postprocessed fully automatically, generating pixel-wise myocardial blood flow (MBF) and myocardial perfusion reserve (MPR) maps. • Fully quantitative MPR provided higher diagnostic performance for detecting hemodynamically significant coronary artery disease, compared with stress and rest MBF, qualitative assessment, and coronary magnetic resonance angiography (CMRA). • The integration of CMRA and MPR did not significantly improve the diagnostic performance of MPR alone.
abstract_id: PUBMED:29248657
Comprehensive Cardiac CT With Myocardial Perfusion Imaging Versus Functional Testing in Suspected Coronary Artery Disease: The Multicenter, Randomized CRESCENT-II Trial. Objectives: This study sought to assess the effectiveness, efficiency, and safety of a tiered, comprehensive cardiac computed tomography (CT) protocol in comparison with functional testing.
Background: Although CT angiography accurately rules out coronary artery disease (CAD), incorporation of CT myocardial perfusion imaging as part of a tiered diagnostic approach could improve the clinical value and efficiency of cardiac CT in the diagnostic work-up of patients with angina pectoris.
Methods: Between July 2013 and November 2015, 268 patients (mean age 58 years; 49% female) with stable angina (mean pre-test probability 54%) were prospectively randomized between cardiac CT and standard guideline-directed functional testing (95% exercise electrocardiography). The tiered cardiac CT protocol included a calcium scan, followed by CT angiography if calcium was detected. Patients with ≥50% stenosis on CT angiography underwent CT myocardial perfusion imaging.
Results: By 6 months, the primary endpoint, the rate of invasive coronary angiograms without a European Society of Cardiology class I indication for revascularization, was lower in the CT group than in the functional testing group (2 of 130 [1.5%] vs. 10 of 138 [7.2%]; p = 0.035), whereas the proportion of invasive angiograms with a revascularization indication was higher (88% vs. 50%; p = 0.017). The median duration until the final diagnosis was 0 (interquartile range: 0 to 0) days in the CT group and 0 (interquartile range: 0 to 17) days in the functional testing group (p < 0.001). Overall, 13% of patients randomized to CT required further testing, compared with 37% in the functional testing group (p < 0.001). The adverse event rate was similar (3% vs. 3%; p = 1.000), although the median cumulative radiation dose was higher for the CT group (3.1 mSv [interquartile range: 1.6 to 7.8] vs. 0 mSv [interquartile range: 0.0 to 7.1]; p < 0.001).
Conclusions: In patients with suspected stable CAD, a tiered cardiac CT protocol with dynamic perfusion imaging offers a fast and efficient alternative to functional testing. (Comprehensive Cardiac CT Versus Exercise Testing in Suspected Coronary Artery Disease 2 [CRESCENT2]; NCT02291484).
Answer: Yes, coronary CT angiography (CTA) combined with myocardial perfusion imaging (MPI) has been shown to be a potential gatekeeper for coronary revascularization. Studies have demonstrated that this combined non-invasive approach has excellent accuracy in detecting flow-limiting coronary stenoses when compared with quantitative coronary angiography (QCA) plus MPI, which serves as the standard of reference. For instance, one study reported that the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of CTA combined with MPI (CTA/MPI) for the detection of flow-limiting coronary stenoses were 100% each (PUBMED:19684023).
Moreover, the use of CTA/MPI as a gatekeeper appears to make a substantial part of revascularization procedures redundant, as more than half of revascularization procedures were performed in patients without flow-limiting stenoses, and a significant percentage of revascularized vessels were not associated with ischemia on MPI (PUBMED:19684023).
Similarly, cardiac hybrid imaging combining SPECT/CTA has been shown to accurately detect functionally relevant coronary artery lesions (FRCAL), which may be used as a gatekeeper for invasive coronary angiography (CAG) and revascularization procedures (PUBMED:24343677).
Additionally, dynamic myocardial CT perfusion (CTP) imaging combined with coronary CTA has been recognized as a comprehensive diagnostic examination technique, providing both anatomical and quantitative functional information on myocardial blood flow and the presence and grading of stenosis. Dynamic CTP imaging has good diagnostic accuracy for detecting myocardial ischemia, comparable to stress MRI and PET perfusion, and may serve as a gatekeeper for invasive workup (PUBMED:36997751).
In summary, the combination of coronary CTA and MPI or CTP imaging has been validated as a highly accurate non-invasive method for detecting flow-limiting coronary stenoses and could potentially serve as a gatekeeper to reduce unnecessary invasive coronary angiography and revascularization procedures. |
Instruction: Obscure recurrent gastrointestinal bleeding: a revealed mystery?
Abstracts:
abstract_id: PUBMED:24945819
Obscure recurrent gastrointestinal bleeding: a revealed mystery? Objective: Nowadays, capsule endoscopy (CE) is the first-line procedure after negative upper and lower gastrointestinal (GI) endoscopy for obscure gastrointestinal bleeding (OGIB). Approximately two-thirds of patients undergoing CE for OGIB will have a small-bowel abnormality. However, several patients who underwent CE for OGIB had the source of their blood loss in the stomach or in the colon. The aim of the present study is to determine, with CE, the incidence of bleeding lesions missed by the previous gastroscopy/colonoscopy and to evaluate the indication for repeating a complete endoscopic workup before CE in subjects referred to a tertiary center for obscure bleeding.
Materials And Methods: We prospectively reviewed data from 637/1008 patients who underwent CE for obscure bleeding in our tertiary center after negative gastroscopy and colonoscopy.
Results: CE revealed a definite or likely cause of bleeding in the stomach in 138/637 patients (yield 21.7%) and in the colon in 41 patients (yield 6.4%), despite previous negative gastroscopy and colonoscopy, respectively. The lesions found were outside the small bowel in only 54/637 (8.5%) patients. In 111/138 patients, CE found lesions both in the stomach and the small bowel (small-bowel erosions in 54, AVMs in 45, active small-bowel bleeding in 4, neoplastic lesions in 3, and distal ileum AVMs in 5 patients). In 24/41 (58.5%) patients, CE found lesions both in the small bowel and the colon (multiple small-bowel erosions in 15, AVMs in 8, and a neoplastic lesion in 1 patient). All patients underwent endoscopic therapy or surgery for their non-small-bowel lesions.
Conclusions: Lesions in the upper or lower GI tract had been missed in about 28% of patients submitted to CE for obscure bleeding. CE may play an important role in identifying lesions missed at conventional endoscopy.
abstract_id: PUBMED:38395486
Diagnosis of Occult and Obscure Gastrointestinal Bleeding. Occult and obscure bleeding are challenging conditions to manage; however, recent advances in gastroenterology and endoscopy have improved our diagnostic and therapeutic capabilities. Obscure gastrointestinal (GI) bleeding is an umbrella category of bleeding of unknown origin that persists or recurs after endoscopic evaluation of the entire bowel fails to reveal a bleeding source. This review details the evaluation of patients with occult and obscure GI bleeding and offers diagnostic algorithms. The treatment of GI bleeding depends on the type and location of the bleeding lesion and an overview of how to manage these conditions is presented.
abstract_id: PUBMED:26879551
Diagnosis of Obscure Gastrointestinal Bleeding. Obscure gastrointestinal bleeding (OGIB) is defined as gastrointestinal bleeding from a source that cannot be identified on upper or lower gastrointestinal endoscopy. OGIB is considered an important indication for capsule endoscopy (CE). CE is particularly useful for the detection of vascular and small ulcerative lesions, conditions frequently associated with OGIB, particularly in Western countries. On the other hand, balloon enteroscopy (BE) can facilitate the diagnosis of lesions presenting with minimal changes of the mucosal surface, such as submucosal tumors, and can be used not only for diagnosis, but also for treatment, including endoscopic hemostasis. In other words, the complementary use of both CE and BE enables OGIB to be more efficiently diagnosed and treated. However, rebleeding can occur even in patients with negative results of CE, and such patients require repeat CE or BE. It is essential to effectively use both CE and BE based on a thorough understanding of the strong points and weak points of these procedures. Further advances and developments in virtual endoscopy incorporating computed tomography and magnetic resonance imaging are expected in the future.
abstract_id: PUBMED:22282709
Therapeutic management options for patients with obscure gastrointestinal bleeding. Obscure gastrointestinal bleeding (OGIB) is one of the most challenging disorders faced by gastroenterologists because of its evasive nature and difficulty in identifying the exact source of bleeding. Recent technological advances such as video capsule endoscopy and small bowel deep enteroscopy have revolutionized the diagnosis and management of patients with OGIB. In this paper, we review the various diagnostic and therapeutic options available for the management of patients with OGIB.
abstract_id: PUBMED:32905491
Gastrointestinal Stromal Tumor (GIST) Causing Obscure Gastrointestinal Bleeding: An Uncommon Way of Diagnosing An Uncommon Disease. Gastrointestinal stromal tumors (GISTs) are neoplasms that arise from the wall of the gastrointestinal tract or, rarely, from other intra-abdominal tissues. They are the most common mesenchymal tumors of the gastrointestinal tract and should be considered in the differential diagnosis of obscure gastrointestinal bleeding. Computed tomography angiogram (CTA) can be utilized as an alternative imaging study when endoscopy and colonoscopy results are non-diagnostic. We report a case of a 59-year-old woman who presented with recurrent episodes of obscure overt gastrointestinal bleeding secondary to a gastrointestinal stromal tumor (GIST).
abstract_id: PUBMED:24759341
Sarcomatoid carcinoma of the jejunum presenting as obscure gastrointestinal bleeding in a patient with a history of gliosarcoma. Small bowel malignant tumors are rare and sarcomatoid carcinomas have rarely been reported at this site. We report a 56-year-old woman, with history of an excised gliosarcoma, who presented with recurrent obscure gastrointestinal bleeding. She underwent endoscopy and colonoscopy, which failed to identify the cause of the bleeding. The abdominal computed tomography scan located a tumor in the small bowel. Pathology revealed a jejunal sarcomatoid carcinoma. She developed tumor recurrence and multiple liver metastases shortly after surgery. Immunohistochemistry is required for accurate diagnosis. Sarcomatoid carcinoma is a rare cause of obscure gastrointestinal bleeding, which is associated with a poor prognosis.
abstract_id: PUBMED:29238592
Predictors and characteristics of angioectasias in patients with obscure gastrointestinal bleeding identified by video capsule endoscopy. Background: In obscure gastrointestinal bleeding, angioectasias are common findings in video capsule endoscopy (VCE).
Objective: The objective of this study was to identify predictors and characteristics of small bowel angioectasias.
Methods: Video capsule examinations between 1 July 2001 and 31 July 2011 were retrospectively reviewed. Patients with obscure gastrointestinal bleeding were identified, and those with small bowel angioectasia were compared with patients without a definite bleeding source. Univariate and multivariable statistical analyses for possible predictors of small bowel angioectasia were performed.
Results: From a total of 717 video capsule examinations, 512 patients with obscure gastrointestinal bleeding were identified. Positive findings were reported in 350 patients (68.4%) and angioectasias were documented in 153 of these patients (43.7%). These angioectasias were mostly located in the proximal small intestine (n = 86, 56.6%). Patients' age >65 years (odds ratio (OR) 2.15, 95% confidence interval (CI) 1.36-3.38, p = .001) and overt bleeding type (OR 1.89, 95% CI 1.22-2.94, p = .004) were identified as significant independent predictors of small bowel angioectasia.
Conclusion: Angioectasias are the most common finding in VCE in patients with obscure gastrointestinal bleeding. They are mostly located in the proximal small bowel and are associated with higher age and an overt bleeding type.
abstract_id: PUBMED:30837783
Diagnostic and therapeutic considerations for obscure gastrointestinal bleeding in patients with chronic kidney disease. Recurrent obscure gastrointestinal bleeding amongst patients with chronic kidney disease is a challenging problem gastroenterologists are facing and is associated with an extensive diagnostic workup, limited therapeutic options, and high healthcare costs. Small-bowel angiodysplasia is the most common etiology of obscure and recurrent gastrointestinal bleeding in the general population. Chronic kidney disease is associated with a higher risk of gastrointestinal bleeding and of developing angiodysplasia compared with the general population. As a result, recurrent bleeding in this subgroup of patients is more prevalent and is associated with an increased number of endoscopic and radiographic procedures with uncertain benefit. Alternative medical therapies can reduce re-bleeding; however, more studies are needed to confirm their efficacy in this subgroup of patients.
abstract_id: PUBMED:36017285
Obscure Gastrointestinal Bleeding and Capsule Endoscopy: A Win-Win Situation or Not? Obscure gastrointestinal bleeding (OGIB) refers to bleeding of uncertain origin that persists or recurs after a negative workup using any of the radiologic evaluation modalities. It can be divided into two types based on whether clinically evident bleeding is present, namely, obscure overt and obscure occult bleeding. As visualization of the bowel mucosa is challenging, capsule endoscopy (CE) is the ideal go-to procedure because the capsule is wireless, ingestible, small, disposable, and, most importantly, non-invasive. This review article compiles various studies to shed light on the guidelines for using CE, its structure and procedure, patient preferences, diagnostic yield, cost-effectiveness, and the future. The goal of this review is to show the influence of CE on the evaluation of OGIB with respect to the aspects mentioned earlier.
abstract_id: PUBMED:25400996
Continuing challenges in the diagnosis and management of obscure gastrointestinal bleeding. The diagnosis and management of obscure gastrointestinal bleeding (OGIB) have changed dramatically since the introduction of video capsule endoscopy (VCE) followed by deep enteroscopy and other imaging technologies in the last decade. Significant advances have been made, yet there remains room for improvement in our diagnostic yield and treatment capabilities for recurrent OGIB. In this review, we will summarize the latest technologies for the diagnosis of OGIB, limitations of VCE, technological enhancement in VCE, and different management options for OGIB.
Answer: Obscure recurrent gastrointestinal bleeding (OGIB) is a condition where the source of bleeding persists or recurs after a thorough endoscopic evaluation of the gastrointestinal (GI) tract fails to reveal a bleeding source (PUBMED:38395486). Capsule endoscopy (CE) is considered the first-line procedure after negative upper and lower GI endoscopy for OGIB, with approximately two-thirds of patients undergoing CE for OGIB having a small-bowel abnormality (PUBMED:24945819). However, it has been found that lesions in the upper or lower GI tract may be missed in about 28% of patients submitted to CE for obscure bleeding, indicating that CE can play an important role in identifying lesions missed at conventional endoscopy (PUBMED:24945819).
CE is particularly useful for detecting vascular and small ulcerative lesions, which are frequently associated with OGIB, especially in Western countries (PUBMED:26879551). Balloon enteroscopy (BE) can also be used for diagnosis and treatment, including endoscopic hemostasis, and the complementary use of both CE and BE can lead to more efficient diagnosis and treatment of OGIB (PUBMED:26879551). However, rebleeding can occur even in patients with negative CE results, necessitating repeat CE or BE (PUBMED:26879551).
Gastrointestinal stromal tumors (GISTs) and sarcomatoid carcinomas are among the neoplasms that should be considered in the differential diagnosis of OGIB (PUBMED:32905491; PUBMED:24759341). Angioectasias are common findings in video capsule endoscopy (VCE) in patients with OGIB and are mostly located in the proximal small intestine (PUBMED:29238592). Patients over 65 years of age and those with overt bleeding type are significant independent predictors of small bowel angioectasia (PUBMED:29238592).
In patients with chronic kidney disease, small-bowel angiodysplasia is the most common etiology of obscure and recurrent GI bleeding, and these patients have a higher risk of GI bleeding and developing angiodysplasia compared to the general population (PUBMED:30837783). CE is the ideal procedure for OGIB due to its non-invasive nature and the ability to visualize the bowel mucosa (PUBMED:36017285). Despite significant advances in the diagnosis and management of OGIB, there is still a need for improvement in diagnostic yield and treatment capabilities for recurrent OGIB (PUBMED:25400996). |
Instruction: Is resurfacing arthroplasty appropriate for posttraumatic osteoarthritis?
Abstracts:
abstract_id: PUBMED:21132415
Is resurfacing arthroplasty appropriate for posttraumatic osteoarthritis? Background: High survival has been reported for resurfacing arthroplasty in patients with femoral deformities. Also, hardware removal may not always be necessary with resurfacing arthroplasty, which may eliminate some of the difficulties of performing total hip arthroplasty (THA) in patients with posttraumatic osteoarthritis.
Questions/purposes: We therefore asked: (1) is survivorship higher in patients who underwent resurfacing arthroplasty for posttraumatic arthritis compared with patients with nontraumatic osteoarthritis; and (2) is it higher compared with all patients who have resurfacing?
Methods: We identified 29 patients (29 hips) who had hip resurfacing for posttraumatic arthritis. These were compared with a matched cohort who had hip resurfacings for nontraumatic osteoarthritis and to all patients who underwent hip resurfacing for osteoarthritis during this time. The mean age was 47 years and mean body mass index was 27 kg/m(2). Survivorship and Harris hip scores were compared. Radiographs were evaluated for signs of radiolucencies, penciling, or osteolysis. The mean followup was 39 months (range, 24-99 months).
Results: The 5-year survivorship was 90% in the posttraumatic group, 93% in the matched osteoarthritis group, and 97% in the entire osteoarthritis cohort. The mean Harris hip score for the posttraumatic group at last followup was 90 points. Other than the patients who underwent revision, we observed no radiographic radiolucencies or loosening in any of the groups.
Conclusions: The survival of resurfacing arthroplasty appears comparable to THA in posttraumatic osteoarthritis and for resurfacing in patients with osteoarthritis. Therefore, resurfacing may present an alternative treatment to THA in these patients.
abstract_id: PUBMED:26830851
Current indications for hip resurfacing arthroplasty in 2016. Hip resurfacing arthroplasty (HRA) is an alternative to conventional, stemmed total hip arthroplasty (THA). The best reported results are in young, active patients with good bone stock and a diagnosis of osteoarthritis. Since the 1990s, metal-on-metal (MoM) HRA has achieved excellent outcomes when used in the appropriate patient population. Concerns regarding the metal-on-metal bearing surface, including adverse local tissue reaction (ALTR) to metal debris, have recently led to a decline in the use of this construct. The current paper aims to provide an updated review of HRA, including a critical review of the most recent literature on HRA.
abstract_id: PUBMED:31559239
Computer-assisted Navigation in Hip Resurfacing Arthroplasty: A Case Study utilizing the ReCap Femoral Resurfacing System. Introduction: The ReCap Femoral Resurfacing System has been associated with increased cases of revision surgery when compared to other hip resurfacing systems. However, computer-assisted navigation may have the potential to reduce the risk of post-operative complications by providing more accurate intraoperative measurements for acetabular component positioning.
Case Report: The present case describes an active 46-year-old male presenting with severe osteoarthritis of the right hip who elected to undergo a ReCap resurfacing arthroplasty with navigation. Results demonstrated accurate acetabular component position and leg length measurements to within <1° and 1mm of standard radiographic measurements.
Conclusion: These findings are the first to describe the use of navigation with the ReCap system and provide encouraging results for further clinical evaluation.
abstract_id: PUBMED:35392729
Comparison of clinical outcomes between patellar resurfacing and patellar non-resurfacing in cruciate retaining total knee arthroplasty. Background: It is not established whether patellar resurfacing is better than patellar non-resurfacing during total knee arthroplasty (TKA). This study was to compare the clinical outcomes between groups with patellar resurfacing and non-resurfacing during cruciate retaining (CR) TKA.
Methods: In this retrospective cohort study, subjects undergoing primary CR TKA for osteoarthritis between 2012 and 2019 were included. Of 500 subjects, 250 had patellar resurfacing (group 1) and 250 had patellar non-resurfacing (group 2) CR TKA. Knee Society Knee Score (KSKS), Knee Society Function Score (KSFS), Western Ontario and McMaster Universities Osteoarthritis (WOMAC) scale, Kujala score, anterior knee pain, patellar compression test and range of motion (ROM) of the replaced knee were assessed and compared between the two groups.
Results: There were no significant differences in KSKS, KSFS, WOMAC scale, Kujala score, prevalence of anterior knee pain and ROM of the replaced knee between the two groups at the last follow-up (p > .05). Group 2 had more subjects with positive patellar compression test than group 1 at the last follow-up (p = .010).
Conclusions: Clinical and functional outcomes of the replaced knee were not different between patellar resurfacing and non-resurfacing groups. Anterior knee pain was significantly reduced after total knee arthroplasty regardless of patellar resurfacing.
Level Of Evidence: Retrospective cohort study, Level III.
abstract_id: PUBMED:26980986
Revision of failed humeral head resurfacing arthroplasty. Purpose: The purpose of this study is to assess the outcomes of a consecutive series of patients who underwent revision surgery after humeral head resurfacing (HHR). Our joint registry was queried for all patients who underwent revision arthroplasty for failed HHR at our institution from 2005 to 2010. Eleven consecutive patients (average age 54 years; range 38-69 years) that underwent revision of 11 resurfacing arthroplasties were identified. The primary indication for resurfacing had been osteoarthritis in six, glenoid dysplasia in two, a chondral lesion in two, and postinstability arthropathy in one patient. The indication for revision was pain in 10 and infection in one patient. Seven patients had undergone an average of 1.9 surgeries prior to resurfacing (range 1-3).
Materials And Methods: All patients were revised to stemmed arthroplasties, including one hemiarthroplasty, two reverse, and eight anatomic total shoulder arthroplasties at a mean 33 months after primary resurfacing (range 10-131 months). A deltopectoral approach was used in seven patients; four patients required an anteromedial approach due to severe scarring. Subscapularis attenuation was found in four cases, two of which required reverse total shoulder arthroplasty. Bone grafting was required in one glenoid and three humeri.
Results: At a mean follow-up of 3.5 years (range 1.6-6.9 years), modified Neer score was rated as satisfactory in five patients and unsatisfactory in six. Abduction and external rotation improved from 73° to 88° (P = 0.32) and from 23° to 32° (P = 0.28) respectively. Reoperation was required in two patients, including one hematoma and one revision for instability.
Conclusion: Outcomes of revision of HHR arthroplasty in this cohort did not improve upon those reported for revision of stemmed humeral implants. A comparative study would be required to allow for definitive conclusions to be made.
abstract_id: PUBMED:29600265
Comparison of Clinical Results between Patellar Resurfacing and Non-resurfacing in Total Knee Arthroplasty: A Short Term Evaluation. Background: There is no difference in the functional outcomes 6 months after total knee arthroplasty (TKA) for knee osteoarthritis between patellar resurfacing and non-resurfacing. Thus, we have performed this study to compare the short-term clinical outcomes of TKA performed with and without the patella resurfacing.
Methods: A total of 50 patients with osteoarthritis of the knee (OAK) were randomized to receive patellar resurfacing (n=24; resurfaced group) or to retain their native patella (n=26; non-resurfaced group) based on envelope selection and provided informed consent. Disease specific outcomes including Knee Society Score (KSS), Knee Society Function Score (KSKS-F), Kujala Anterior Knee Pain Scale (AKPS), Western Ontario and McMaster Universities Arthritis Index (WOMAC), Short Form 36 (SF-36), and functional patella-related activities were measured within six months of follow-up.
Results: There was no significant difference between the resurfaced and non-resurfaced groups in pre- and post-operative improvement of range of motion (ROM) (P=0.421), KSS (P=0.782, P=0.553), KSKS-F (P=0.241, P=0.293), AKPS (P=0.128, P=0.443), WOMAC (P=0.700, P=0.282), and pain scores (P=0.120, P=0.508). There was no difference in ROM between the resurfaced and non-resurfaced groups pre-operatively (15.24° and 15.45°) or post-operatively (18.48° and 18.74°). No patella-related side effects were observed in either group. Revision was required in none of the participants.
Conclusion: The results showed no significant difference between patellar resurfacing and non-resurfacing in TKA on any outcome measure in the short term. Level of evidence: I.
abstract_id: PUBMED:38213349
A Prospective Comparative Study of the Functional Outcomes of Patellar Resurfacing Versus Non-resurfacing in Patients Undergoing Total Knee Arthroplasty. Background and objective Total knee arthroplasty (TKA) is a highly successful surgical procedure. However, there is a lack of consensus about whether to resurface the patella or not. This study was aimed at evaluating the outcome of patellar resurfacing in terms of a decrease in the incidence of anterior knee pain after TKA and assessing whether patellar resurfacing is beneficial in improving functional outcomes. Materials and methods This prospective comparative study included 100 patients undergoing TKA who were randomly allotted to the patellar resurfacing or non-resurfacing group. Functional evaluation was done based on the Knee Society Score, and the pain was evaluated by the visual analogue scale (VAS) preoperatively and after one year. Results There was a significant improvement in the Knee Society scores as well as the pain scores in both groups postoperatively. The patellar resurfacing group showed statistically significant improvement as compared to the non-resurfacing group in the Knee Society clinical and functional scores as well as the VAS at the end of one year. Conclusion Patellar resurfacing during TKA provides better clinical and functional outcomes as well as more relief from anterior knee pain as compared to non-resurfacing of the patella.
abstract_id: PUBMED:24396224
Resurfacing hip arthroplasty in neuromuscular hip disorders - A retrospective case series. Background: Management of the degenerate hip in patients with neuromuscular conditions should be aimed at improving quality of life and ease of nursing care. Arthroplasty poses a significant challenge with predisposition to dislocation and loosening due to anatomical abnormalities, soft tissue contractures and impaired muscle tone.
Methods: We present a series of 11 hips (9 patients) following total hip resurfacing arthroplasty for painful osteoarthritis in patients with differing neuromuscular conditions. Patients were assessed clinically and radiographically, and the satisfaction of their carers with the improved ability to provide nursing care was also assessed. Mean patient age was 33.1 years (range 13-49 years), with a mean follow-up at publication of 63.7 months (41-89 months). All patients were operated on by a single surgeon (AHN) and received the required post-operative care and physiotherapy. Soft tissue releases were performed when necessary. All hips were assessed clinically and radiographically at 6 weeks, 6 months, and 1 year post-operatively. The six-month follow-up also included a questionnaire scoring care-provider satisfaction.
Results: Ten hips had good clinical results with improvement in pain and function and radiologically showed no signs of loosening. One hip required revision to proximal femoral excision due to dislocation and loose acetabular component with severe pain. None of the other hips dislocated. Analysis of care provider satisfaction assessing ability to provide personal care, positioning and transferring, comfort, interaction and communication scored excellent to good in 10 patients and satisfactory in one.
Conclusion: We believe hip resurfacing arthroplasty to be a viable option in the treatment of the complex problem of osteoarthritis in the hips of patients with neuromuscular disease. The improved biomechanics confer greater stability when compared to conventional total hip arthroplasty. Although technically demanding, a successful result has been shown to improve patient pain, function and ease of nursing care.
Level Of Evidence: Level IV.
abstract_id: PUBMED:24103410
Patellar resurfacing versus no resurfacing in two-stage revision of infected total knee arthroplasty. Background: Cartilage debridement and staged reimplantation are the most common treatments for infected total knee arthroplasty (TKA). Very few studies have focused on the management of primarily non-resurfaced patellae in infected TKA. The purpose of this study was to compare the outcomes of patellar resurfacing and non-resurfacing in two-stage revision of infected TKA.
Methods: This study involved a retrospective comparison of the reinfection rate and clinical outcomes of revision TKA patients managed with patellar resurfacing (22 patients, 23 knees) and patients managed without patellar resurfacing (26 patients) at a minimum two-year follow-up. The mean age in the resurfacing group was 69.4 years, and the mean age in the non-resurfacing group was 67.3 years. Three scales were used in the assessment: the Knee Society Score, Western Ontario and McMaster Universities Osteoarthritis Index, and the Anterior Knee Pain Scale.
Results: Three knees had recurrent infections. All infections occurred in patients who had received patellar resurfacing. There were no significant differences between groups in clinical results and on the Anterior Knee Pain Scale. A lack of patellar resurfacing did not increase recurrence of infection and did not result in significant clinical differences compared to patellar resurfacing.
Conclusions: This study indicates that preservation of the original patellar bone is a feasible option for patellar management in revision of septic TKA.
Level Of Evidence: Level III, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence.
abstract_id: PUBMED:33527158
Shoulder resurfacing for treatment of focal defects and diffuse osteoarthritis. This article provides a review of the various options for the treatment of osteoarticular pathology of the shoulder. This can range from focal osteochondral defects to diffuse arthritis and can occur at any age. Although total shoulder arthroplasty and hemiarthroplasty are the most frequently used methods, they are not generally suitable for younger patients under 60 years old due to the lack of long-term stability and deterioration. If conservative treatment is unsuccessful or not an option, resurfacing is an alternative procedure with good long-term results. The various methods that have been reported for resurfacing and partial resurfacing of the humeral head are described as well as the results of studies on these procedures. In this article the indications, biomechanics, outcomes and complications are outlined for the two resurfacing options preferred by the authors.
Answer: Resurfacing arthroplasty appears to be an appropriate treatment option for posttraumatic osteoarthritis. A study identified 29 patients who underwent hip resurfacing for posttraumatic arthritis and compared them with a matched cohort who had hip resurfacings for nontraumatic osteoarthritis and to all patients who underwent hip resurfacing for osteoarthritis during the same time. The 5-year survivorship was 90% in the posttraumatic group, which is comparable to the 93% in the matched osteoarthritis group and 97% in the entire osteoarthritis cohort. The mean Harris hip score for the posttraumatic group at the last follow-up was 90 points, indicating good functional outcomes. Additionally, there were no observed radiographic radiolucencies or loosening in any of the groups, other than the patients who underwent revision. These results suggest that the survival of resurfacing arthroplasty is comparable to total hip arthroplasty (THA) in posttraumatic osteoarthritis and for resurfacing in patients with osteoarthritis, making it a viable alternative treatment to THA in these patients (PUBMED:21132415).
Furthermore, hip resurfacing arthroplasty (HRA) has been reported to achieve excellent outcomes when used in the appropriate patient population, which includes young, active patients with good bone stock and a diagnosis of osteoarthritis (PUBMED:26830851). Although concerns regarding metal-on-metal bearing surfaces have led to a decline in the use of this construct, the literature suggests that HRA can still be considered in certain cases.
In summary, resurfacing arthroplasty is appropriate for posttraumatic osteoarthritis, particularly in patients who meet the criteria of being young and active with good bone stock, and it may present an alternative treatment to THA in these patients. |
Instruction: Different clinical outcomes in patients with asymptomatic severe aortic stenosis according to the stage classification: Does the aortic valve area matter?
Abstracts:
abstract_id: PUBMED:27865193
Different clinical outcomes in patients with asymptomatic severe aortic stenosis according to the stage classification: Does the aortic valve area matter? Background: The ACC/AHA guidelines introduced a new classification of severe aortic stenosis (AS) mainly based on maximum jet velocity (Vmax) and mean pressure gradient (mPG), but not on aortic valve area (AVA). However, prognostic value of this new classification has not yet been fully evaluated.
Methods And Results: We studied 1512 patients with asymptomatic severe AS enrolled in the CURRENT AS registry in whom surgery was not initially planned. Patients were divided into 2 groups: Group 1 (N=122) comprised patients who met the recommendation for surgery; high-gradient (HG)-AS (Vmax≥4.0m/s or mPG≥40mmHg) with ejection fraction (EF)<50%, or very HG-AS (Vmax≥5.0m/s or mPG≥60mmHg), and Group 2 (N=1390) comprised patients who did not meet this recommendation. Group 2 was further subdivided into HG-AS with preserved EF (HGpEF-AS, N=498) and low-gradient (LG)-AS, but AVA<1.0cm2 (N=892). The excess risk of Group 1 relative to Group 2 for the primary outcome measure (a composite of aortic valve-related death or heart failure hospitalization) was significant (adjusted HR: 1.92, 95%CI: 1.37-2.68, P<0.001). The excess risk of HGpEF-AS relative to LG-AS for the primary outcome measure was also significant (adjusted HR: 1.45, 95%CI: 1.11-1.89, P=0.006). Among LG-AS patients, patients with reduced EF (<50%) (LGrEF-AS, N=103) had extremely high cumulative 5-year incidence of all-cause death (85.5%).
Conclusion: Trans-aortic valve gradient in combination with EF was a good prognostic marker in patients with asymptomatic AS. However, patients with LGrEF-AS had extremely poor prognosis when managed conservatively.
abstract_id: PUBMED:30712486
Prognostic Impact of Aortic Valve Area in Conservatively Managed Patients With Asymptomatic Severe Aortic Stenosis With Preserved Ejection Fraction. Background Data are scarce on the role of aortic valve area (AVA) to identify those patients with asymptomatic severe aortic stenosis (AS) who are at high risk of adverse events. We sought to explore the prognostic impact of AVA in asymptomatic patients with severe AS in a large observational database. Methods and Results Among 3815 consecutive patients with severe AS enrolled in the CURRENT AS (Contemporary Outcomes After Surgery and Medical Treatment in Patients With Severe Aortic Stenosis) registry, the present study included 1309 conservatively managed asymptomatic patients with left ventricular ejection fraction ≥50%. The study patients were subdivided into 3 groups based on AVA (group 1: AVA >0.80 cm2, N=645; group 2: 0.8 cm2 ≥AVA >0.6 cm2, N=465; and group 3: AVA ≤0.6 cm2, N=199). The prevalence of very severe AS patients (peak aortic jet velocity ≥5 m/s or mean aortic pressure gradient ≥60 mm Hg) was 2.0%, 5.8%, and 26.1% in groups 1, 2, and 3, respectively. The cumulative 5-year incidence of AVR was not different across the 3 groups (39.7%, 43.7%, and 39.9%; P=0.43). The cumulative 5-year incidence of the primary outcome measure (a composite of aortic valve-related death or heart failure hospitalization) was incrementally higher with decreasing AVA (24.1%, 29.1%, and 48.1%; P<0.001). After adjusting for confounders, the excess risk of group 3 and group 2 relative to group 1 for the primary outcome measure remained significant (hazard ratio, 2.21, 95% CI, 1.56-3.11, P<0.001; and hazard ratio, 1.34, 95% CI, 1.01-1.78, P=0.04, respectively). Conclusions AVA ≤0.6 cm2 would be a useful marker to identify those high-risk patients with asymptomatic severe AS, who might benefit from early AVR. Clinical Trial Registration URL: www.umin.ac.jp . Unique identifier: UMIN000012140.
abstract_id: PUBMED:31345430
Staging Cardiac Damage in Patients With Asymptomatic Aortic Valve Stenosis. Background: The optimal timing of intervention in patients with asymptomatic severe aortic stenosis (AS) remains controversial.
Objectives: This multicenter study sought to test and validate the prognostic value of the staging of cardiac damage in patients with asymptomatic moderate to severe AS.
Methods: This study retrospectively analyzed the clinical, Doppler echocardiographic, and outcome data that were prospectively collected in 735 asymptomatic patients (71 ± 14 years of age; 60% men) with at least moderate AS (aortic valve area <1.5 cm2) and preserved left ventricular ejection fraction (≥50%) followed in the heart valve clinics of 4 high-volume centers. Patients were classified according to the following staging classification: no cardiac damage associated with the valve stenosis (Stage 0), left ventricular damage (Stage 1), left atrial or mitral valve damage (Stage 2), pulmonary vasculature or tricuspid valve damage (Stage 3), or right ventricular damage or subclinical heart failure (Stage 4). The primary endpoint was all-cause mortality.
Results: At baseline, 89 (12%) patients were classified in Stage 0, 200 (27%) in Stage 1, 341 (46%) in Stage 2, and 105 (14%) in Stage 3 or 4. Median follow-up was 2.6 years (interquartile range: 1.1 to 5.2 years). There was a stepwise increase in mortality rates according to staging: 13% in Stage 0, 25% in Stage 1, 44% in Stage 2, and 58% in Stages 3 to 4 (p < 0.0001). The staging was significantly associated with excess mortality in multivariable analysis adjusted for aortic valve replacement as a time-dependent variable (hazard ratio: 1.31 per each increase in stage; 95% CI: 1.06 to 1.61; p = 0.01), and showed incremental value to several clinical variables (net reclassification index = 0.34; p = 0.003).
Conclusions: The new staging system characterizing the extra-aortic valve cardiac damage provides incremental prognostic value in patients with asymptomatic moderate to severe AS. This staging classification may be helpful to identify asymptomatic AS patients who may benefit from elective aortic valve replacement.
abstract_id: PUBMED:30311005
Asymptomatic Severe Aortic Valve Stenosis-When to Intervene: a Review of the Literature, Current Trials, and Guidelines. Purpose Of Review: The optimal treatment for asymptomatic patients with severe aortic valve stenosis (AS) is not clearly known. Here, we review the available data on the management of such patients.
Recent Findings: Half of patients with severe AS are asymptomatic at the time of diagnosis, and are at risk for adverse events, including sudden cardiac death. A significant proportion of these patients develop AS-related symptoms within 1 or 2 years. Clinical and echocardiographic characteristics are predictors of poor outcomes and can guide treatment decisions. Several non-randomized studies and meta-analyses have suggested benefit from early AVR for asymptomatic severe AS, including improved all-cause, cardiovascular, and valve-related mortality. Based on the available information, current guidelines suggest aortic valve replacement in the presence of specific characteristics, including left ventricular dysfunction and very severe AS with significantly elevated gradients. Although the available data suggest early AVR improves the clinical outcomes of these patients, most patients in current practice are managed conservatively. Six randomized trials are ongoing to better elucidate the ideal management of asymptomatic severe AS patients.
abstract_id: PUBMED:33807143
Asymptomatic Patients with Severe Aortic Stenosis and the Impact of Intervention. Objectives: The exact timing of aortic valve replacement (AVR) in asymptomatic patients with severe aortic stenosis (AS) remains a matter of debate. Therefore, we described the natural history of asymptomatic patients with severe AS, and the effect of AVR on long-term survival. Methods: Asymptomatic patients who were found to have severe AS between June 2006 and May 2009 were included. Severe aortic stenosis was defined as peak aortic jet velocity Vmax ≥ 4.0 m/s or aortic valve area (AVA) ≤ 1 cm2. Development of symptoms, the incidence of AVR, and all-cause mortality were assessed. Results: A total of 59 asymptomatic patients with severe AS were followed, with a mean follow-up of 8.9 ± 0.4 years. A total of 51 (86.4%) patients developed AS-related symptoms, and subsequently 46 patients underwent AVR. The mean 1-year, 2-year, 5-year, and 10-year overall survival rates were higher in patients receiving AVR compared to those who did not undergo AVR during follow-up (100%, 93.5%, 89.1%, and 69.4%, versus 92.3%, 84.6%, 65.8%, and 28.2%, respectively; p < 0.001). Asymptomatic patients with severe AS receiving AVR during follow-up showed an incremental benefit in survival of up to 31.9 months compared to conservatively managed patients (p = 0.002). Conclusions: The majority of asymptomatic patients turn symptomatic during follow-up. AVR during follow-up is associated with better survival in asymptomatic severe AS patients.
abstract_id: PUBMED:27143354
Prognostic Value of Aortic Valve Area by Doppler Echocardiography in Patients With Severe Asymptomatic Aortic Stenosis. Background: The aim of this study was to evaluate the relationship between aortic valve area (AVA) obtained by Doppler echocardiography and outcome in patients with severe asymptomatic aortic stenosis and to define a specific threshold of AVA for identifying asymptomatic patients at very high risk based on their clinical outcome.
Methods And Results: We included 199 patients with asymptomatic severe aortic stenosis (AVA ≤1.0 cm(2)). The risk of events (death or need for aortic valve replacement) increased linearly on the scale of log hazard with decreased AVA (adjusted hazard ratio 1.17; 95% CI 1.06-1.29 per 0.1 cm(2) AVA decrement; P=0.002). Event-free survival at 12, 24, and 48 months was 63±6%, 51±6%, and 34±6%, respectively, for AVA 0.8 to 1 cm(2); 49±6%, 36±6%, and 26±6%, respectively, for AVA 0.6 to 0.8 cm(2); and 33±8%, 20±7%, and 11±5%, respectively, for AVA ≤0.6 cm(2) (Ptrend=0.002). Patients with AVA ≤0.6 cm(2) had a significantly increased risk of events compared with patients with AVA 0.8 to 1 cm(2) (adjusted hazard ratio 2.22; 95% CI 1.41-3.52; P=0.001), whereas patients with AVA 0.6 to 0.8 cm(2) had an increased risk of events compared with those with AVA 0.8 to 1 cm(2), but the difference was not statistically significant (adjusted hazard ratio 1.38; 95% CI 0.93-2.05; P=0.11). After adjustment for covariates and aortic valve replacement as a time-dependent variable, patients with AVA ≤0.6 cm(2) had a significantly greater risk of all-cause mortality than patients with AVA >0.6 cm(2) (hazard ratio 3.39; 95% CI 1.80-6.40; P<0.0001).
Conclusions: Patients with severe asymptomatic aortic stenosis and AVA ≤0.6 cm(2) displayed an important increase in the risk of adverse events during short-term follow-up. Further studies are needed to determine whether elective aortic valve replacement improves outcome in this high-risk subgroup of patients.
abstract_id: PUBMED:30285058
Outcomes of Patients With Asymptomatic Aortic Stenosis Followed Up in Heart Valve Clinics. Importance: The natural history and the management of patients with asymptomatic aortic stenosis (AS) have not been fully examined in the current era.
Objective: To determine the clinical outcomes of patients with asymptomatic AS using data from the Heart Valve Clinic International Database.
Design, Setting, And Participants: This registry was assembled by merging data from prospectively gathered institutional databases from 10 heart valve clinics in Europe, Canada, and the United States. Asymptomatic patients with an aortic valve area of 1.5 cm2 or less and preserved left ventricular ejection fraction (LVEF) greater than 50% at entry were considered for the present analysis. Data were collected from January 2001 to December 2014, and data were analyzed from January 2017 to July 2018.
Main Outcomes And Measures: Natural history, need for aortic valve replacement (AVR), and survival of asymptomatic patients with moderate or severe AS at entry followed up in a heart valve clinic. Indications for AVR were based on current guideline recommendations.
Results: Of the 1375 patients included in this analysis, 834 (60.7%) were male, and the mean (SD) age was 71 (13) years. A total of 861 patients (62.6%) had severe AS (aortic valve area less than 1.0 cm2). The mean (SD) overall survival during medical management (mean [SD] follow up, 27 [24] months) was 93% (1%), 86% (2%), and 75% (4%) at 2, 4, and 8 years, respectively. A total of 104 patients (7.6%) died under observation, including 57 patients (54.8%) from cardiovascular causes. The crude rate of sudden death was 0.65% over the duration of the study. A total of 542 patients (39.4%) underwent AVR, including 388 patients (71.6%) with severe AS at study entry and 154 (28.4%) with moderate AS at entry who progressed to severe AS. Those with severe AS at entry who underwent AVR did so at a mean (SD) of 14.4 (16.6) months and a median of 8.7 months. The mean (SD) 2-year and 4-year AVR-free survival rates for asymptomatic patients with severe AS at baseline were 54% (2%) and 32% (3%), respectively. In those undergoing AVR, the 30-day postprocedural mortality was 0.9%. In patients with severe AS at entry, peak aortic jet velocity (greater than 5 m/s) and LVEF (less than 60%) were associated with all-cause and cardiovascular mortality without AVR; these factors were also associated with postprocedural mortality in those patients with severe AS at baseline who underwent AVR (surgical AVR in 310 patients; transcatheter AVR in 78 patients).
Conclusions And Relevance: In patients with asymptomatic AS followed up in heart valve centers, the risk of sudden death is low, and rates of overall survival are similar to those reported from previous series. Patients with severe AS at baseline and peak aortic jet velocity of 5.0 m/s or greater or LVEF less than 60% have increased risks of all-cause and cardiovascular mortality even after AVR. The potential benefit of early intervention should be considered in these high-risk patients.
abstract_id: PUBMED:33868420
Twelve-month outcomes of transapical transcatheter aortic valve implantation in patients with severe aortic valve stenosis. Introduction: Transapical access (TA) transcatheter aortic valve implantation (TAVI) (TA-TAVI) represents one of the possible routes in patients with severe aortic stenosis (AS) who are not suitable for transfemoral access.
Aim: To assess early- and mid-term clinical outcomes after TA-TAVI.
Material And Methods: Patients with severe symptomatic AS undergoing TA-TAVI from November 2008 to December 2019 were enrolled. Clinical and procedural characteristics as well as clinical outcomes including all-cause mortality during 12-month follow-up were assessed.
Results: Sixty-one consecutive patients underwent TA-TAVI for native AS. Patients were elderly, with a median age of 80.0 (76.0-84.0) years; 55.7% were males. Median baseline EuroSCORE I and STS scores were 18.2% (11.6-27.7) and 4.8% (3.3-8.2), respectively. The procedural success rate was 96.7%. In-hospital, 30-day and 12-month mortality rates were 9.8%, 18.0% and 24.6%, respectively. The main periprocedural and in-hospital complications were bleeding complications (14.8%). The following factors were associated with 12-month mortality: previous cerebrovascular event (CVE), glomerular filtration rate (GFR), aortic valve area (AVA), right ventricular systolic pressure (RVSP) and serum level of N-terminal prohormone of brain natriuretic peptide (NT-proBNP) (RR for CVE 3.17, 95% confidence interval (CI): 1.15-8.76, p = 0.026; RR for AVA per 0.1 cm2 1.28, 95% CI: 1.03-1.55, p = 0.024; RR for GFR per 1 ml/min 0.96, 95% CI: 0.94-0.99, p = 0.007; RR for NT-proBNP per 1000 pg/ml 1.07, 95% CI: 1.01-1.17, p = 0.033; RR for RVSP per 1 mm Hg 1.07, 95% CI: 1.02-1.16, p = 0.011).
Conclusions: Transapical TAVI in high-risk patients provides good hemodynamic results with acceptable outcomes.
abstract_id: PUBMED:31386103
A risk prediction model in asymptomatic patients with severe aortic stenosis: CURRENT-AS risk score. Aims: Early aortic valve replacement (AVR) might be beneficial in selected high-risk asymptomatic patients with severe aortic stenosis (AS), considering their poor prognosis when managed conservatively. This study aimed to develop and validate a clinical scoring system to predict AS-related events within 1 year after diagnosis in asymptomatic severe AS patients.
Methods And Results: We analysed 1274 asymptomatic severe AS patients derived from a retrospective multicentre registry enrolling consecutive patients with severe AS in Japan (CURRENT AS registry), who were managed conservatively and completed 1-year follow-up without AVR. From a randomly assigned derivation set (N = 849), we developed CURRENT AS risk score for the AS-related event (a composite of AS-related death and heart failure hospitalization) within 1 year using a multivariable logistic regression model. The risk score comprised independent risk predictors including left ventricular ejection fraction <60%, haemoglobin ≤11.0 g/dL, chronic lung disease (2 points), diabetes mellitus, haemodialysis, and any concomitant valve disease (1 point). The predictive accuracy of the model was good with the area under the curve of 0.79 and 0.77 in the derivation and validation sets (N = 425). In the validation set, the 1-year incidence of AS-related events was much higher in patients with score ≥2 than in patients with score ≤1 (Score 0: 2.2%, Score 1: 1.9%, Score 2: 13.4%, Score 3: 14.3%, and Score ≥4: 22.7%, P < 0.001).
Conclusion: The CURRENT-AS risk score integrating clinical and echocardiographic factors well-predicted the risk of AS-related events at 1 year in asymptomatic patients with severe AS and was validated internally.
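As a rough illustration of how such a scoring rule can be applied, the sketch below assumes the point grouping implied by the abstract (left ventricular ejection fraction <60%, haemoglobin ≤11.0 g/dL and chronic lung disease scored at 2 points each; diabetes mellitus, haemodialysis and any concomitant valve disease at 1 point each) and maps the resulting score to the 1-year event rates quoted for the validation set; the function name and lookup table are illustrative only and do not come from the study.

# Illustrative CURRENT-AS risk score calculation (assumed point grouping; not the authors' code).
def current_as_score(lvef_lt_60, hb_le_11, chronic_lung_disease,
                     diabetes, haemodialysis, other_valve_disease):
    # Each argument is a bool indicating whether the predictor is present.
    score = 0
    score += 2 if lvef_lt_60 else 0            # LVEF < 60% (assumed 2 points)
    score += 2 if hb_le_11 else 0              # haemoglobin <= 11.0 g/dL (assumed 2 points)
    score += 2 if chronic_lung_disease else 0  # chronic lung disease (2 points per abstract)
    score += 1 if diabetes else 0              # diabetes mellitus (assumed 1 point)
    score += 1 if haemodialysis else 0         # haemodialysis (assumed 1 point)
    score += 1 if other_valve_disease else 0   # any concomitant valve disease (1 point per abstract)
    return score

# One-year AS-related event rates reported for the validation set, by score.
observed_rate = {0: 0.022, 1: 0.019, 2: 0.134, 3: 0.143}  # score >= 4: 0.227

s = current_as_score(lvef_lt_60=True, hb_le_11=False, chronic_lung_disease=False,
                     diabetes=True, haemodialysis=False, other_valve_disease=False)
print(s, observed_rate.get(s, 0.227))  # -> 3 0.143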
abstract_id: PUBMED:23580789
Asymptomatic severe aortic stenosis with left ventricular dysfunction: watchful waiting or valve replacement? Aortic stenosis (AS) is the most common valvular heart disorder in older adults. Patients with severe AS are generally treated nonsurgically if asymptomatic and referred to aortic valve replacement when symptoms develop. However, patients with severe asymptomatic AS with left ventricular dysfunction may benefit from early aortic valve replacement. Although operative mortality in patients with severe AS and left ventricular dysfunction is greater than in patients with preserved left ventricular function, the overall mortality risk is substantially lower than that of watchful waiting. Operative risk in patients with severe AS and left ventricular dysfunction is often overestimated and, consequently, most are not referred to surgery despite clinical data in support of early aortic valve replacement. Asymptomatic patients with echocardiographic confirmation of severe AS and left ventricular dysfunction should be referred for aortic valve replacement.
Answer: The clinical outcomes of patients with asymptomatic severe aortic stenosis (AS) do appear to be influenced by the aortic valve area (AVA). Several studies have investigated the prognostic impact of AVA in patients with asymptomatic severe AS, particularly in those with preserved ejection fraction (EF).
One study found that patients with an AVA ≤0.6 cm² had a significantly higher risk of adverse events, such as aortic valve-related death or heart failure hospitalization, compared to those with larger AVA. This suggests that a smaller AVA is associated with a worse prognosis in asymptomatic severe AS patients and that AVA ≤0.6 cm² could be a useful marker to identify high-risk patients who might benefit from early aortic valve replacement (AVR) (PUBMED:30712486).
Another study developed a staging system for cardiac damage in patients with asymptomatic moderate to severe AS and found that the staging was significantly associated with excess mortality. This staging system, which includes AVA as one of the parameters, provides incremental prognostic value and may help identify patients who could benefit from elective AVR (PUBMED:31345430).
Furthermore, a study that included patients with asymptomatic severe AS who were managed conservatively found that those with a smaller AVA had a higher risk of events, including death or the need for AVR. Specifically, patients with an AVA ≤0.6 cm² displayed a significant increase in the risk of adverse events during short-term follow-up (PUBMED:27143354).
In summary, the AVA does matter in patients with asymptomatic severe AS, as a smaller AVA is associated with a higher risk of adverse clinical outcomes. This information can be crucial in the decision-making process regarding the timing of interventions such as AVR in asymptomatic patients with severe AS (PUBMED:27865193, PUBMED:30712486, PUBMED:31345430, PUBMED:27143354). |
Instruction: Can 'early programming' be partly explained by smoking?
Abstracts:
abstract_id: PUBMED:25417973
Can 'early programming' be partly explained by smoking? Results from a prospective, population-based cohort study. Background: Numerous studies have focused the association between low birthweight and later disease. Our objective was to study the association between birthweight and later adult smoking and thereby explore a possible mechanism for the association between low birthweight and later adult disease.
Methods: We studied associations between birthweight of women (n=247704) born in 1967-1995 and smoking habits at the end of their pregnancy 13-42 years later in a prospective, population-based cohort study from The Medical Birth Registry of Norway. Similarly, the association between birthweight of men (n=194393) and smoking habits of their partners were assessed. Finally, we studied the relation between smoking habits of the participating women and the cause specific death of their mothers (n=222808).
Results: Twenty per cent of women with birthweight less than 2000 g were adult daily smokers compared with 11% with birthweight 4000-4499 g [relative risk=1.8, 95% confidence interval 1.4, 2.2]. Similarly, we found an association between men's birthweight and their partners smoking habits. Mothers of smoking women had doubled risk of dying from lung cancer and from cardiovascular disease compared with mothers of non-smoking women.
Conclusions: Being born with low birthweight is associated with smoking in adulthood. Associations of adult smoking with partners' birthweight and mothers' smoking-related causes of death suggest a shared smoking environment, and may account for some of the established association between birthweight and later cardiovascular disease.
abstract_id: PUBMED:25656371
Fetal programming of chronic kidney disease: the role of maternal smoking, mitochondrial dysfunction, and epigenetic modification. The role of an adverse in utero environment in the programming of chronic kidney disease in the adult offspring is increasingly recognized. The cellular and molecular mechanisms linking the in utero environment and future disease susceptibility remain unknown. Maternal smoking is a common modifiable adverse in utero exposure, potentially associated with both mitochondrial dysfunction and epigenetic modification in the offspring. While studies are emerging that point toward a key role of mitochondrial dysfunction in acute and chronic kidney disease, it may have its origin in early development, becoming clinically apparent when secondary insults occur. Aberrant epigenetic programming may add an additional layer of complexity to orchestrate fibrogenesis in the kidney and susceptibility to chronic kidney disease in later life. In this review, we explore the evidence for mitochondrial dysfunction and epigenetic modification through aberrant DNA methylation as key mechanistic aspects of fetal programming of chronic kidney disease, and discuss their potential use in diagnostics and as targets for therapy.
abstract_id: PUBMED:30574076
The Mitochondrion as Potential Interface in Early-Life Stress Brain Programming. Mitochondria play a central role in cellular energy-generating processes and are master regulators of cell life. They provide the energy necessary to reinstate and sustain homeostasis in response to stress, and to launch energy intensive adaptation programs to ensure an organism's survival and future well-being. By this means, mitochondria are particularly apt to mediate brain programming by early-life stress (ELS) and to serve at the same time as subcellular substrate in the programming process. With a focus on mitochondria's integrated role in metabolism, steroidogenesis and oxidative stress, we review current findings on altered mitochondrial function in the brain, the placenta and peripheral blood cells following ELS-dependent programming in rodents and recent insights from humans exposed to early life adversity (ELA). Concluding, we propose a role of the mitochondrion as subcellular intersection point connecting ELS, brain programming and mental well-being, and a role as a potential site for therapeutic interventions in individuals exposed to severe ELS.
abstract_id: PUBMED:32818036
Early leucine programming on protein utilization and mTOR signaling by DNA methylation in zebrafish (Danio rerio). Background: Early nutritional programming affects a series of metabolism, growth and development in mammals. Fish also exhibit the developmental plasticity by early nutritional programming. However, little is known about the effect of early amino acid programming on growth and metabolism.
Methods: In the present study, zebrafish (Danio rerio) was used as the experimental animal to study whether early leucine stimulation can programmatically affect the mechanistic target of rapamycin (mTOR) signaling pathway, growth and metabolism in later life, and to uncover the mechanism of epigenetic regulation. Zebrafish larvae at 3 days post hatching (dph) were raised with 1.0% leucine from 3 to 13 dph, during the critical developmental stage, and then returned to normal water for 70 days (83 dph).
Results: The growth performance and crude protein content of zebrafish in the early leucine programming group were increased, consistent with activation of the mTOR signaling pathway and high expression of genes involved in amino acid and glycolipid metabolism. Furthermore, we compared the DNA methylation profiles between the control and leucine-stimulated zebrafish and found that the methylation levels of CG-differentially methylated regions (DMGs) and CHH-DMGs of genes involved in the mTOR signaling pathway differed between the two groups. With quantitative PCR analysis, the decreased CG methylation levels of growth factor receptor-bound protein 10 (Grb10), eukaryotic translation initiation factor 4E (eIF4E) and mTOR genes of the mTOR signaling pathway in the leucine programming group might contribute to the enhanced gene expression.
Conclusions: The early leucine programming could improve the protein synthesis and growth, which might be attributed to the methylation of genes in mTOR pathway and the expression of genes involved in protein synthesis and glycolipid metabolism in zebrafish. These results could be beneficial for better understanding of the epigenetic regulatory mechanism of early nutritional programming.
abstract_id: PUBMED:38077599
Early prediction of student performance in CS1 programming courses. There is a high failure rate and low academic performance observed in programming courses. To address these issues, it is crucial to predict student performance at an early stage. This allows teachers to provide timely support and interventions to help students achieve their learning objectives. The prediction of student performance has gained significant attention, with researchers focusing on machine learning features and algorithms to improve predictions. This article proposes a model for predicting student performance in a 16-week CS1 programming course, specifically in weeks 3, 5, and 7. The model utilizes three key factors: grades, delivery time, and the number of attempts made by students in programming labs and an exam. Eight classification algorithms were employed to train and evaluate the model, with performance assessed using metrics such as accuracy, recall, F1 score, and AUC. In week 3, the gradient boosting classifier (GBC) achieved the best results with an F1 score of 86%, followed closely by the random forest classifier (RFC) with 83%. These findings demonstrate the potential of the proposed model in accurately predicting student performance.
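The modelling step described above can be sketched roughly as follows, assuming three numeric features per student (lab grade, submission delay and number of attempts) and a binary at-risk label; the synthetic data, feature names and hyperparameters are placeholders rather than the study's actual dataset or pipeline.

# Illustrative early-warning classifier (synthetic data; not the study's dataset or code).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Assumed per-student features available by week 3: mean lab grade, mean submission delay (hours), mean attempts.
X = np.column_stack([
    rng.uniform(0, 100, n),   # grade
    rng.uniform(0, 72, n),    # submission delay
    rng.integers(1, 10, n),   # attempts
])
# Synthetic label: lower grades and later submissions raise the probability of being at risk (1).
p_fail = 1 / (1 + np.exp(0.05 * (X[:, 0] - 50) - 0.02 * (X[:, 1] - 36)))
y = rng.binomial(1, p_fail)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("F1 :", round(f1_score(y_te, pred), 3))
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))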
abstract_id: PUBMED:26209745
Foetal programming and cortisol secretion in early childhood: A meta-analysis of different programming variables. It is widely recognized that different events may take place in the intrauterine environment that may influence later developmental outcome. Scholars have long postulated that maternal prenatal stress, alcohol or drug use, and cigarette smoking may impact foetal formation of the hypothalamus-pituitary-adrenal (HPA) axis, which may later influence different aspects of early childhood socioemotional and cognitive development. However, results linking each of these factors with child cortisol secretion have been mixed. The current meta-analysis examined the relation between each of these programming variables and child cortisol secretion in studies conducted up to December 31st, 2012. Studies were included if they were conducted prior to child age 60 months, and if they reported an index of effect size linking either maternal prenatal stress, alcohol or drug use, or cigarette smoking with an index of child cortisol secretion. In total, 19 studies (N=2260) revealed an average effect size of d=.36 (p<.001). Moderator analyses revealed that greater effect sizes could be traced to maternal alcohol use, to the use of retrospective research methodology, where mothers are questioned after childbirth regarding programming variables, and to the use of baseline measures of cortisol secretion, as opposed to recovery measures. Discussion focuses on processes that link the environment to foetal development and how both are linked to later adaptation.
abstract_id: PUBMED:30939482
Impact of Micronutrient Status during Pregnancy on Early Nutrition Programming. Background: Nutrition status prior to conception and during pregnancy and infancy seems to have an influence on the disease risk in adulthood (early nutrition/developmental programming). We aimed to review the current knowledge on the role of micronutrients in early nutrition programming and its implications for healthcare.
Summary Of Findings: Globally and even in high-income countries where a balanced diet is generally accessible, an inadequate maternal micronutrient status is common. This may induce health problems in the mother and foetus/newborn both immediately and in later life. Pregnant women and those who may become pregnant should aim to achieve a satisfactory micronutrient status from a well-balanced diet, and where necessary from additional supplements. Key Messages: We emphasise the need for a call to action for healthcare providers and policymakers to better educate women of child-bearing age regarding the short- and long-term benefits of an appropriate micronutrient status. The role of micronutrient status in early nutrition programming needs to be emphasized more to address the still limited awareness of the potential long-term health repercussions of suboptimal micronutrient supply during pregnancy.
abstract_id: PUBMED:32738371
Tobacco smoking during breastfeeding increases the risk of developing metabolic syndrome in adulthood: Lessons from experimental models. Metabolic syndrome (MetS) is characterized by increased abdominal fat, dyslipidemia, diabetes mellitus and hypertension. A high MetS prevalence is strongly associated with obesity. Obesity is a public health problem in which several complex factors have been implicated, including environmental pollutants. For instance, maternal smoking seems to play a role in obesogenesis in childhood. Given the association between endocrine disruptors, obesity and metabolic programming, over the past 10 years, our research group has contributed to studies based on the hypothesis that early exposure to nicotine/tobacco causes offspring to become MetS-prone. The mechanism by which tobacco smoking during breastfeeding induces metabolic dysfunctions is not completely understood; however, increased metabolic programming has been shown in studies that focus on this topic. Here, we reviewed the literature mainly based in light of our latest data from experimental models. Nicotine or tobacco exposure during breastfeeding induces several endocrine dysfunctions in a sex- and tissue-specific manner. This review provides an updated summary regarding the hypothesis that early exposure to nicotine/tobacco causes offspring to become MetS-prone. An understanding of this issue can provide support to prevent long-term disorders, mainly related to the risk of obesity and its comorbidities, in future generations.
abstract_id: PUBMED:32054074
Early Programming of Adult Systemic Essential Hypertension. Cardiovascular diseases are being included in the study of developmental origins of health and disease (DOHaD) and essential systemic hypertension has also been added to this field. Epigenetic modifications are one of the main mechanisms leading to early programming of disease. Different environmental factors occurring during critical windows in the early stages of life may leave epigenetic cues, which may be involved in the programming of hypertension when individuals reach adulthood. Such environmental factors include pre-term birth, low weight at birth, altered programming of different organs such as the blood vessels and the kidney, and living in disadvantageous conditions in the programming of hypertension. Mechanisms behind these factors that impact on the programming include undernutrition, oxidative stress, inflammation, emotional stress, and changes in the microbiota. These factors and their underlying causes acting at the vascular level will be discussed in this paper. We also explore the establishment of epigenetic cues that may lead to hypertension at the vascular level such as DNA methylation, histone modifications (methylation and acetylation), and the role of microRNAs in the endothelial cells and blood vessel smooth muscle which participate in hypertension. Since epigenetic changes are reversible, the knowledge of this type of markers could be useful in the field of prevention, diagnosis or epigenetic drugs as a therapeutic approach to hypertension.
abstract_id: PUBMED:28575096
Are social inequalities in early childhood smoking initiation explained by exposure to adult smoking? Findings from the UK Millennium Cohort Study. Introduction: To assess the socio-economic gradient in early smoking initiation at age 11 years and the extent to which any inequality was explained after accounting for longitudinal exposure to adult smoking.
Methods: Analysis of the UK Millennium Cohort Study, based on 9, 609 children from ages 9 months to 11 years. The outcome was smoking initiation by age 11. Odds ratios (ORs) for smoking initiation were estimated using logistic regression, according to maternal education, whilst adjusting for baseline demographic factors. Longitudinal exposure to a regular smoker in the same room was assessed as potential mediator of the association between maternal education and early smoking, along with other socially patterned risk factors for early smoking initiation, such as parental separation and mental health.
Results: Overall 2.7% (95% CI: 2.3-3.1) of children had tried a cigarette by age eleven. Children of mothers with no qualifications were more than six times as likely to have tried a cigarette as children of mothers with degree level qualifications or higher (OR 6.0 [95%CI 3.5-10.1]), with a clear social gradient. Controlling for potentially mediating variables, particularly exposure to a regular adult smoker, reduced the OR for smoking initiation in children of mothers with no qualifications by 63% (aOR 2.9 [95%CI 1.7 to 5.1]).
Conclusions: Smoking initiation is more common in disadvantaged children, and this is largely explained by regular exposure to an adult smoker in the same room. Reducing adult smoking in front of children may reduce inequalities in smoking initiation in children by over a half.
Answer: Yes, 'early programming' can be partly explained by smoking. The concept of 'early programming' refers to the idea that certain exposures or experiences during critical periods of development can have long-lasting effects on an individual's health and behavior. Smoking, particularly maternal smoking, has been implicated as one such exposure that can influence early programming.
Research has shown that being born with low birthweight, which is associated with smoking in adulthood, can be linked to later cardiovascular disease (PUBMED:25417973). This suggests that the intrauterine environment, potentially affected by maternal smoking, can have long-term health implications for the offspring. Additionally, maternal smoking is associated with mitochondrial dysfunction and epigenetic modifications in the offspring, which may contribute to the programming of chronic kidney disease and other health issues later in life (PUBMED:25656371).
Moreover, early-life stress (ELS), which can be influenced by maternal smoking, has been shown to alter mitochondrial function in the brain, placenta, and peripheral blood cells, potentially mediating brain programming and affecting mental well-being (PUBMED:30574076). Similarly, exposure to nicotine or tobacco during breastfeeding has been linked to an increased risk of developing metabolic syndrome in adulthood, indicating that early exposure to tobacco can lead to metabolic programming (PUBMED:32738371).
Furthermore, social inequalities in early childhood smoking initiation have been explained by exposure to adult smoking, with disadvantaged children being more likely to try smoking if they are regularly exposed to an adult smoker (PUBMED:28575096). This highlights the role of environmental factors, such as smoking, in shaping health behaviors from an early age.
In summary, smoking can indeed be a factor in 'early programming,' influencing a range of health outcomes and behaviors from birth into adulthood through various biological and environmental mechanisms. |
Instruction: Is immunohistochemistry of BRAF V600E useful as a screening tool and during progression disease of melanoma patients?
Abstracts:
abstract_id: PUBMED:27863476
Is immunohistochemistry of BRAF V600E useful as a screening tool and during progression disease of melanoma patients? Background: In clinical practice the gold standard method to assess BRAF status in patients with metastatic melanoma is based on molecular assays. Recently, a mutation-specific monoclonal antibody (VE1), which detects the BRAF V600E mutated protein, has been developed. With this study we aimed to confirm the clinical value of the VE1 Ventana® antibody for the preliminary detection of BRAF status in our routine diagnostic procedures, as a univocal, validated and accredited immunohistochemical procedure does not yet exist. Moreover, we explored the biological meaning of BRAF immunohistochemical labeling both as a predictive marker of response to targeted therapy and, for the first time, as a player in acquired tumor drug resistance.
Methods: We analyzed a retrospective series of 64 metastatic melanoma samples, previously investigated for molecular BRAF status, using a fully automatized immunohistochemical method. We correlated the data to the clinicopathologic characteristics of patients and their clinical outcome.
Results: The sensitivity and specificity of the Ventana® VE1 antibody were 89.2% and 96.2%, respectively, while the positive predictive value and negative predictive value were 97.1% and 86.2%, respectively. For six mutated patients, histological samples were available both before treatment and at disease progression. Immunohistochemical BRAF V600E expression in the specimens obtained at disease progression was less intense and more heterogeneous compared with the baseline expression. Multivariate analysis revealed that a less intense grade of positive expression was an independent predictor of a less aggressive stage at diagnosis (p = 0.0413).
Conclusions: Our findings encourage the introduction of immunohistochemistry as a rapid screening tool for the assessment of BRAF status in melanoma patients in routine diagnostic procedures and prepare the ground for other studies to highlight the role of immunohistochemical BRAF V600E expression in patients at the time of progression.
abstract_id: PUBMED:26204954
NRAS (Q61R), BRAF (V600E) immunohistochemistry: a concomitant tool for mutation screening in melanomas. Background: The determination of NRAS and BRAF mutation status is a major requirement in the treatment of patients with metastatic melanoma. Mutation-specific antibodies against NRAS(Q61R) and BRAF(V600E) proteins could offer additional data on tumor heterogeneity. The specificity and sensitivity of NRAS(Q61R) immunohistochemistry have recently been reported to be excellent. We aimed to determine the utility of immunohistochemistry using SP174 anti-NRAS(Q61R) and VE1 anti-BRAF(V600E) antibodies in the theranostic mutation screening of melanomas.
Methods: 142 formalin-fixed paraffin-embedded melanoma samples from 79 patients were analyzed using pyrosequencing and immunohistochemistry.
Results: 23 and 26 patients were concluded to have an NRAS-mutated or a BRAF-mutated melanoma, respectively. The 23 NRAS(Q61R)- and 23 BRAF(V600E)-mutant samples identified by pyrosequencing were all positive in immunohistochemistry with the SP174 antibody and the VE1 antibody, respectively, without any false negative. Proportions and intensities of staining varied. Other NRAS(Q61L), NRAS(Q61K), BRAF(V600K) and BRAF(V600R) mutants were negative in immunohistochemistry. Six single cases were immunostained but identified as wild-type using pyrosequencing (1 with SP174 and 5 with VE1). 4/38 patients with multiple samples presented discordant molecular data. Technical limitations are discussed to explain those discrepancies; nevertheless, we could not rule out real tumor heterogeneity.
Conclusions: In our study, we showed that combining immunohistochemistry analysis targeting NRAS(Q61R) and BRAF(V600E) proteins with molecular analysis was a reliable theranostic tool to face challenging samples of melanoma.
abstract_id: PUBMED:23041829
Immunohistochemistry with a mutation-specific monoclonal antibody as a screening tool for the BRAFV600E mutational status in primary cutaneous malignant melanoma. The V600E mutation of BRAF has emerged as both an effective biomarker and therapeutic target for select benign and malignant cutaneous and non-cutaneous human tumors and is typically determined using DNA-based techniques that include allele-specific PCR and direct DNA sequencing. Recently however, the development of new antibodies directed against the V600E protein has opened the door for an easier and more efficient strategy for identifying this mutation. Our present aim was to determine the efficacy of one such antibody, anti-B-Raf (V600E), a mouse monoclonal antibody in which the immunogen is a synthetic peptide derived from the internal region of BRAFV600E. A total of 35 cases of primary cutaneous melanoma were evaluated using a combination of DNA-based techniques that included allele-specific PCR and/or direct DNA sequencing and immunohistochemistry. Cases of papillary thyroid carcinomas (n=5) and colorectal carcinomas (n=5), known to harbor the BRAFV600E mutation, served as positive controls for the study. DNA analyses revealed that 6 of 35 (17%) cases of the primary cutaneous malignant melanoma possessed the BRAFV600E mutation. For immunohistochemical analyses, cytoplasmic positivity with anti-B-Raf was noted in 7 of 35 (20%) cases of primary melanoma and in all 10 positive controls. Statistical analyses of the data demonstrated that the sensitivity of the immunohistochemistry was 100% and specificity was 97%. Findings from the current study support the potential use of immunohistochemistry as an ancillary screening tool to assess the BRAFV600E mutation status in primary cutaneous melanoma.
abstract_id: PUBMED:34790354
Immunohistochemistry as an accurate tool for the assessment of BRAF V600E and TP53 mutations in primary and metastatic melanoma. Metastatic melanoma is a fatal disease with poor prognosis. Ever since targeted therapy against oncogenic BRAF was approved, molecular profiling has become an integral part of the management of such patients. While molecular testing is not available in all pathology laboratories, immunohistochemistry (IHC) is a reliable screening option. The major objective of the present study was to evaluate whether IHC detection of BRAF and the tumor (suppressor) protein 53 gene (TP53) are reliable surrogates for mutation detection. Formalin-fixed paraffin-embedded samples of melanomas for which molecular data were previously obtained by targeted next-generation sequencing (NGS) between January 2014 and February 2019 were immunostained with BRAF V600E and p53 antibodies. A blinded evaluation of the IHC slides was performed by two pathologists in order to evaluate inter-observer concordance (discordant cases were reviewed by a third observer). The associations between the results of IHC and molecular profiling were evaluated. The study included a series of 37 cases of which 15 harbored a BRAF mutation and five a TP53 mutation. IHC had an overall diagnostic accuracy of 93.9% for BRAF V600E and 68.8% for TP53 compared to NGS. A statistically significant association between the two diagnostic methods was obtained for BRAF V600E (P=0.0004) but not for p53 (P=0.3098) IHC. The κ coefficient for IHC assessment of p53 was 0.55 and that for BRAF V600E was 0.72. In conclusion, the present results evidenced that IHC staining is a reliable surrogate for NGS in identifying the BRAF V600E mutation, which may become an efficient screening tool. Aberrant expression of p53 on IHC is at times associated with TP53 mutations but it was not possible to establish a direct link.
abstract_id: PUBMED:32463489
Outcomes after progression of disease with anti-PD-1/PD-L1 therapy for patients with advanced melanoma. Background: Greater than one-half of patients with melanoma who are treated with antibodies blocking programmed cell death protein 1 receptor (anti-PD-1) experience disease progression. The objective of the current study was to identify prognostic factors and outcomes in patients with metastatic melanoma that progressed while they were receiving anti-PD-1 therapy.
Methods: The authors evaluated 383 consecutively treated patients who received anti-PD-1 for advanced melanoma between 2009 and 2019. Patient and disease characteristics at baseline and at the time of progression, subsequent therapies, objective response rate (ORR), overall survival, and progression-free survival were assessed.
Results: Of 383 patients, 247 experienced disease progression. The median survival after progression was 6.8 months. There was no difference in survival noted after disease progression based on primary tumor subtype, receipt of prior therapy, or therapy type. However, significantly improved survival after disease progression correlated with clinical features at the time of progression, including normal lactate dehydrogenase, more favorable metastatic stage (American Joint Committee on Cancer eighth edition stage IV M1a vs M1b, M1c, or M1d), mutation status (NRAS or treatment-naive BRAF V600 vs BRAF/NRAS wild-type or treatment-experienced BRAF-mutant), decreasing tumor bulk, and progression at solely existing lesions. After progression, approximately 54.3% of patients received additional systemic therapy. A total of 41 patients received BRAF/MEK inhibition (ORR of 58.6%, including 70.4% for BRAF/MEK-naive patients), 30 patients received ipilimumab (ORR of 0%), and 11 patients received ipilimumab plus nivolumab (ORR of 27.3%).
Conclusions: The current study identified prognostic factors in advanced melanoma for patients who experienced disease progression while receiving anti-PD-1, including lactate dehydrogenase, stage of disease, site of disease progression, tumor size, and mutation status.
abstract_id: PUBMED:26695089
Highly Concordant Results Between Immunohistochemistry and Molecular Testing of Mutated V600E BRAF in Primary and Metastatic Melanoma. This study tested the sensitivity and specificity of VE1 antibody raised against BRAFV600E protein, on 189 melanoma samples, compared with molecular testing. In addition, the therapeutic response to BRAF inhibitors was analysed in 27 patients, according to staining intensity (scored from weak to strong) and pattern (homogeneous or heterogeneous). BRAFV600E status during melanoma progression was evaluated in a cohort of 54 patients with at least paired-samples. High sensitivity (98.6%) and specificity (97.7%) of VE1 were confirmed. During melanoma progression different samples showed concordant phenotypes. Heterogeneous VE1 staining was observed in 28.5% of cases, and progression-free survival was higher in patients with tumour samples displaying such staining. These findings suggest that only VE1-negative tumours would be genotyped to detect other BRAFV600 mutations, and that either primary melanoma or metastasis can be tested using immunohistochemistry, according to the material available.
abstract_id: PUBMED:34018988
Use of BRAF immunohistochemistry as a screening test in detecting BRAFV600E mutation in melanomas. Objective: BRAF mutation is detected in 50-70% of melanomas. The molecular methods used to detect BRAF mutations are 80-90% sensitive and specific, but expensive. Immunohistochemistry is a common, rapid, and relatively inexpensive method in pathology practice compared with molecular techniques.
Aims: We aimed to compare immunohistochemical and molecular methods in our cases of malignant melanoma in which the BRAF mutation was investigated with the real-time PCR method, and to assess the concordance of BRAF immunohistochemistry results with the molecular test results when used as a preliminary screening test.
Methods: Selected blocks of 30 patients with metastatic melanoma who came to our department for BRAF mutation detection were subjected to real time PCR molecular method and immunohistochemical study was performed with BRAF primer antibody.
Results: BRAF mutation was detected by the molecular method in 7 of 30 cases (23.33%). In all of these 7 cases, positive immunohistochemical staining was identified (100%).
Conclusion: The use of BRAF immunohistochemistry as a screening test in the detection of mutant disease will allow the cost-effective use of molecular testing.
abstract_id: PUBMED:26440707
Sensitivity of plasma BRAFmutant and NRASmutant cell-free DNA assays to detect metastatic melanoma in patients with low RECIST scores and non-RECIST disease progression. Melanoma lacks a clinically useful blood-based biomarker of disease activity to help guide patient management. To determine whether measurements of circulating, cell-free, tumor-associated BRAF(mutant) and NRAS(mutant) DNA (ctDNA) have a higher sensitivity than LDH to detect metastatic disease prior to treatment initiation and upon disease progression we studied patients with unresectable stage IIIC/IV metastatic melanoma receiving treatment with BRAF inhibitor therapy or immune checkpoint blockade and at least 3 plasma samples obtained during their treatment course. Levels of BRAF(mutant) and NRAS(mutant) ctDNA were determined using droplet digital PCR (ddPCR) assays. Among patients with samples available prior to treatment initiation ctDNA and LDH levels were elevated in 12/15 (80%) and 6/20 (30%) (p = 0.006) patients respectively. In patients with RECIST scores <5 cm prior to treatment initiation, ctDNA levels were elevated in 5/7 (71%) patients compared to LDH which was elevated in 1/13 (8%) patients (p = 0.007). Among all disease progression events the modified bootstrapped sensitivities for ctDNA and LDH were 82% and 40% respectively, with a median difference in sensitivity of 42% (95% confidence interval, 27%-58%; P < 0.001). In addition, ctDNA levels were elevated in 13/16 (81%) instances of non-RECIST disease progression, including 10/12 (83%) instances of new brain metastases. In comparison LDH was elevated 8/16 (50%) instances of non-RECIST disease progression, including 6/12 (50%) instances of new brain metastases. Overall, ctDNA had a higher sensitivity than LDH to detect disease progression, including non-RECIST progression events. ctDNA has the potential to be a useful biomarker for monitoring melanoma disease activity.
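The "modified bootstrapped sensitivities" above amount to resampling the same progression events and recomputing each marker's detection rate. A minimal sketch of such a paired bootstrap follows; the 0/1 detection vectors are invented, not the study's data.

```python
import numpy as np

# Minimal sketch of a paired bootstrap for the difference in sensitivity between
# two markers measured on the same progression events (e.g., ctDNA vs LDH).
# The 0/1 detection vectors below are hypothetical, not the study's data.
rng = np.random.default_rng(0)
ctdna = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1])  # detected?
ldh   = np.array([1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0])

n = len(ctdna)
diffs = []
for _ in range(5000):
    idx = rng.integers(0, n, n)              # resample events with replacement
    diffs.append(ctdna[idx].mean() - ldh[idx].mean())
diffs = np.array(diffs)
print(f"median difference = {np.median(diffs):.2f}")
print(f"95% CI = ({np.percentile(diffs, 2.5):.2f}, {np.percentile(diffs, 97.5):.2f})")
```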
abstract_id: PUBMED:29561296
Potential clinical and immunotherapeutic utility of talimogene laherparepvec for patients with melanoma after disease progression on immune checkpoint inhibitors and BRAF inhibitors. Talimogene laherparepvec is a genetically modified herpes simplex virus type 1-based oncolytic immunotherapy for the local treatment of unresectable subcutaneous and nodal tumors in patients with melanoma recurrent after initial surgery. We report on two patients with melanoma who, after progression on numerous systemic therapies, derived clinical benefit from talimogene laherparepvec in an expanded-access protocol (ClinicalTrials.gov, NCT02147951). Intralesional talimogene laherparepvec (day 1, ≤4 ml of 10^6 PFU/ml; after 3 weeks, ≤4 ml of 10^8 PFU/ml every 2 weeks) was administered until complete response, no injectable tumors, progressive disease, or intolerance occurred. Patient 1 was 71 years old, had stage IIIB disease, and had previously received granulocyte-macrophage colony-stimulating factor, vemurafenib, metformin, ipilimumab, dabrafenib, trametinib, and pembrolizumab. Patient 2 was 45 years old, had stage IIIC disease, and had previously received nivolumab/ipilimumab combination therapy. There were marked reductions in the number and size of melanoma lesions during treatment with talimogene laherparepvec. Both patients experienced mild-to-moderate nausea and vomiting, which were managed using ondansetron, metoclopramide, and pantoprazole. Both patients completed treatment with talimogene laherparepvec in the expanded-access protocol on 24 November 2015, but received talimogene laherparepvec in clinical practice. Patient 1 continues to receive therapy (>60 weeks); patient 2 experienced a complete response at 23 weeks. Immunohistochemistry of a biopsied dermal metastasis from patient 1 showed a marked infiltration of CD4 and CD8 T cells after 1 year of treatment. Talimogene laherparepvec was active in patients with advanced melanoma with disease progression following multiple previous systemic therapies; no new safety signals were identified.
abstract_id: PUBMED:35129375
Treatment patterns and outcomes following disease progression on anti-PD-1 therapies for advanced melanoma. Background: Anti-PD-1-based therapies prolong survival in advanced melanoma, but disease progression is common. This study evaluated treatment patterns and overall survival (OS) after anti-PD-1 progression. Methods: Retrospective data from patients with advanced melanoma and progression on anti-PD-1 treatment between 2014 and 2019 were taken from Flatiron Health, which reflects largely community practice. Treatment patterns and OS were analyzed for BRAF mutant (mt) and wild-type (wt) subgroups; OS was also examined across all patients. Results: Progression following anti-PD-1 was recorded for 679 patients. Median OS ranged from 5.0 to 11.3 months. Of 275 BRAFmt and 374 BRAFwt patients, 113 (41.1%) and 228 (61.0%) received no subsequent therapy, respectively. However, 48.4% of BRAFmt and 57.8% of BRAFwt patients continued anti-PD-1 treatment beyond progression. Conclusion: This real-world study underscores the need for effective treatments for advanced melanoma post-progression on anti-PD-1 therapy.
Answer: Yes, immunohistochemistry of BRAF V600E is useful both as a screening tool and during disease progression in melanoma patients. Several studies have demonstrated the clinical value of using immunohistochemistry (IHC) with mutation-specific monoclonal antibodies, such as VE1, to detect the BRAF V600E mutation in melanoma patients.
One study found that the sensitivity and specificity of the Ventana® VE1 antibody were 89.2% and 96.2%, respectively, indicating that IHC can be a rapid screening tool for assessing BRAF status in routine diagnostic procedures (PUBMED:27863476). Another study confirmed the reliability of combining IHC analysis targeting NRAS(Q61R) and BRAF(V600E) proteins with molecular analysis, showing that it was a reliable theranostic tool for challenging samples of melanoma (PUBMED:26204954).
Furthermore, IHC with a mutation-specific monoclonal antibody was found to have a sensitivity of 100% and specificity of 97% for detecting the BRAF V600E mutation in primary cutaneous melanoma, supporting its use as an ancillary screening tool (PUBMED:23041829). Another study demonstrated that IHC had an overall diagnostic accuracy of 93.9% for BRAF V600E compared to next-generation sequencing (NGS), making it a reliable surrogate for NGS in identifying the BRAF V600E mutation (PUBMED:34790354).
Additionally, research has shown that the VE1 antibody has high sensitivity (98.6%) and specificity (97.7%) when compared with molecular testing, and that BRAF V600E status during melanoma progression showed concordant phenotypes in different samples (PUBMED:26695089). Another study suggested that BRAF immunohistochemistry could be used as a screening test in the detection of the BRAF V600E mutation, allowing for cost-effective use of molecular testing (PUBMED:34018988).
In summary, these findings indicate that immunohistochemistry of BRAF V600E is a useful tool for both initial screening and monitoring disease progression in melanoma patients. It provides a rapid, accurate, and cost-effective method for determining BRAF mutation status, which is crucial for guiding targeted therapy decisions. |
Instruction: Is a targeted intensive intervention effective for improvements in hypertension control?
Abstracts:
abstract_id: PUBMED:22565110
Is a targeted intensive intervention effective for improvements in hypertension control? A randomized controlled trial. Background: High blood pressure (BP) is one of the most important risk factors for stroke, and antihypertensive therapy significantly reduces the risk of cardiovascular morbidity and mortality. However, achieving a regulated BP in hypertensive patients is still a challenge.
Objective: To evaluate the impact of an intervention targeting GPs' management of hypertension.
Methods: A cluster randomized trial comprising 124 practices and 2646 patients with hypertension. In the Capital Region of Denmark, the participating GPs were randomized to an intensive or to a moderately intensive intervention group or to a control group and in Region Zealand and Region of Southern Denmark, practices were randomized into a moderately intensive intervention and to a control group. The main outcome measures were change in proportion of patients with high BP and change in systolic BP (SBP) and diastolic BP (DBP) from the first to the second registration.
Results: The proportion of patients with high BP was reduced by approximately 9 percentage points from 2007 to 2009. The mean SBP was reduced significantly from 2007 to 2009 by 3.61 mmHg [95% confidence interval (CI): -4.26 to -2.96], and the DBP was reduced significantly by 1.99 mmHg (95% CI: -2.37 to -1.61). There was no additional impact in either of the intervention groups.
Conclusion: There was no impact of the moderate intervention and no additional impact of the intensive intervention on BP.
abstract_id: PUBMED:36975584
Effective Coverage in Health Systems: Evolution of a Concept. The manner in which high-impact, life-saving health interventions reach populations in need is a critical dimension of health system performance. Intervention coverage has been a standard metric for such performance. To better understand and address the decay of intervention effectiveness in real-world health systems, the more complex measure of "effective coverage" is required, which includes the health gain the health system could potentially deliver. We have carried out a narrative review to trace the origins, timeline, and evolution of the concept of effective coverage metrics to illuminate potential improvements in coherence, terminology, application, and visualizations, based on which a combination of approaches appears to have the most influence on policy and practice. We found that the World Health Organization first proposed the concept over 45 years ago. It became increasingly popular with the further development of theoretical underpinnings, and after the introduction of quantification and visualization tools. The approach has been applied in low- and middle-income countries, mainly for HIV/AIDS, TB, malaria, child health interventions, and more recently for non-communicable diseases, particularly diabetes and hypertension. Nevertheless, despite decades of application of effective coverage concepts, there is considerable variability in the terminology used and the choices of effectiveness decay steps included in the measures. Results frequently illustrate a profound loss of service effectiveness due to health system factors. However, policy and practice rarely address these factors, and instead favour narrowly targeted technical interventions.
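As a generic illustration (not a formula taken from the review), effective coverage is often expressed as service contact among those in need, discounted by the quality-adjusted health gain actually delivered:

```latex
\mathrm{Effective\ coverage}
  = \underbrace{\frac{\text{people in need who use the service}}{\text{people in need}}}_{\text{crude coverage}}
    \times
    \underbrace{\frac{\text{health gain delivered}}{\text{health gain achievable at ideal quality}}}_{\text{quality adjustment}}
```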
abstract_id: PUBMED:36316765
An economic evaluation of intensive hypertension control in CKD patients: a cost-effectiveness study. Background: Studies have suggested that intensive hypertension control in patients with a high risk of cardiovascular disease (CVD) is both effective and economically feasible. The purpose of this study is to conduct an economic evaluation of intensive hypertension control targeting chronic kidney disease (CKD) patients using the representative data in Korea.
Methods: We used a Markov decision model to compare both cost and effectiveness of intensive hypertension control versus standard hypertension control in hypertensive CKD patients. Model parameters were estimated with the data from the National Health Insurance Service (NHIS)-National Sample Cohort, as well as latest literature. One-way sensitivity analysis was conducted to test the effect of variation in key parameters on the model outcome.
Results: For CKD patients with hypertension, intensive hypertension control would cost more but increase utilities, compared to standard hypertension control. The incremental cost-effectiveness ratio (ICER) for intensive hypertension control in CKD patients was projected at 18,126 USDs per quality-adjusted life year (QALY) compared to standard hypertension control. The results of sensitivity analysis suggest that the results are overall robust.
Conclusions: This study finds that intensive hypertension control in CKD patients in Korea is economically sound. This information is expected to be useful for clinicians in managing hypertension of CKD patients and policymakers when making decisions.
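The ICER reported above follows the standard definition below; the plugged-in costs and QALYs are purely illustrative numbers chosen to land near the quoted figure, not the study's inputs.

```latex
\mathrm{ICER}
  = \frac{C_{\text{intensive}} - C_{\text{standard}}}{E_{\text{intensive}} - E_{\text{standard}}}
  \qquad\text{e.g.}\qquad
  \frac{\$21{,}000 - \$14{,}200}{5.800\,\mathrm{QALY} - 5.425\,\mathrm{QALY}}
  \approx \$18{,}100\ \text{per QALY}.
```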
abstract_id: PUBMED:36716991
Association of serum uric acid with benefits of intensive blood pressure control. Introduction And Objectives: Intensive systolic blood pressure (SBP) control improved outcomes in the Strategy of Blood Pressure Intervention in the Elderly Hypertensive Patients (STEP) trial. Whether the serum uric acid concentration at baseline alters the benefits of intensive SBP control is unknown.
Methods: The STEP trial was a randomized controlled trial that compared the effects of intensive (SBP target of 110 to<130mmHg) and standard (SBP target of 130 to <150mmHg) SBP control in Chinese patients aged 60 to 80 years with hypertension. The primary outcome was a composite of cardiovascular disease events. This post hoc analysis was performed to examine whether the effects of intensive SBP intervention differed by the baseline uric acid concentration using 2 models: restricted cubic spline curves and subgroup analyses, both based on the Fine-Gray subdistribution hazard model in the analysis of the primary outcome and secondary outcomes (excluding all-cause death). In the analysis of all-cause death, the Cox regression model was used. We also examined the change in the follow-up uric acid concentrations.
Results: Overall, the risk of the primary outcome rose as the cumulative uric acid concentration increased in both the intensive and standard treatment groups. Patients with intensive treatment had a lower multivariable-adjusted subdistribution hazard ratio for the primary outcome, but with a wide overlap of 95%CI. Next, we stratified patients according to their baseline uric acid concentration (tertile 1 [T1], <303.0μmol/L; tertile 2 [T2], 303.0 to <375.8μmol/L; and tertile 3 [T3], ≥375.8μmol/L). Subgroup analyses using tertiles provided HRs and 95%CI in T1 (HR, 0.55; 95%CI, 0.36-0.86; P=.008), T2 (HR, 0.80; 95%CI, 0.56-1.14; P=.22) and T3 (HR, 0.86; 95%CI, 0.60-1.21; P=.39), with an interaction P value of .29. The results for most of the secondary outcomes followed the same trends.
Conclusions: There was no evidence that the benefit of the intensive SBP control differed by baseline uric acid concentrations. This trial was registered at ClinicalTrial.gov (Identifier: NCT03015311).
abstract_id: PUBMED:32509638
The effect of intensive hemodialysis on LVH regression and blood pressure control in ESRD patients. Introduction: Cardiovascular diseases are considered the major cause of death in dialysis patients with end-stage renal disease (ESRD). Recently, intensive hemodialysis has increasingly used and replaced conventional hemodialysis. The present study aimed to evaluate the effect of intensive hemodialysis on LVH regression and blood pressure control.
Methods: The present study was a self-controlled, pre- and post-intervention clinical trial of hemodialysis ESRD patients with hypertension (52.5% female, mean age 55.55 ± 12.96 years), who were admitted to Imam Khomeini Hospitals, Golestan Ahvaz in 1396. All patients underwent intensive hemodialysis treatment 4 times a week for 2 months. 2-D color Doppler echocardiography was performed for all patients before the intervention and following 2 months of intensive hemodialysis. The echocardiographic results were used to determine left ventricular wall thickness.
Results: In this study, 40 patients with hypertension were studied. The results showed a significant decrease (P < 0.0001) in LVH, SBP, DBP and mean BP after the intervention in ESRD patients. The level of LVH decreased from 15.42 ± 1.67 mm to 13.86 ± 1.39 mm, SBP from 161.50 ± 12.16 mmHg to 141.12 ± 8.87 mmHg, DBP from 25.25 ± 5.15 mmHg to 81.75 ± 2.89 mmHg, and mean BP from 114.66 ± 6.82 mmHg to 101.54 ± 3.98 mmHg.
Conclusion: Based on the results, it can be concluded that intensive hemodialysis resulted in improved LVH regression and blood pressure control, and fewer requirements for blood pressure-lowering medications.
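The post-intervention figures above are consistent with the usual mean arterial pressure approximation; the formula is standard, and the check below simply re-reads the reported post-intervention values.

```latex
\mathrm{MAP} \approx \mathrm{DBP} + \tfrac{1}{3}(\mathrm{SBP} - \mathrm{DBP})
  = 81.75 + \tfrac{1}{3}(141.12 - 81.75) \approx 101.5\ \mathrm{mmHg}.
```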
abstract_id: PUBMED:34901238
The Impact of Cognitive Function on the Effectiveness and Safety of Intensive Blood Pressure Control for Patients With Hypertension: A post-hoc Analysis of SPRINT. Background: Poor cognitive function can predict poor clinical outcomes. Intensive blood pressure control can reduce the risk of cardiovascular diseases and all-cause mortality. In this study, we assessed whether intensive blood pressure control in older patients can reduce the risk of stroke, composite cardiovascular outcomes and all-cause mortality for participants in the Systolic Blood Pressure Intervention Trial (SPRINT) with lower or higher cognitive function based on the Montreal Cognitive Assessment (MoCA) cut-off scores. Methods: The SPRINT evaluated the impact of intensive blood pressure control (systolic blood pressure <120 mmHg) compared with standard blood pressure control (systolic blood pressure <140 mmHg). We defined MoCA score below education specific 25th percentile as lower cognitive function. And SPRINT participants with a MoCA score below 21 (<12 years of education) or 22 (≥12 years of education) were having lower cognitive function, and all others were having higher cognitive function. The Cox proportional risk regression was used to investigate the association of treatment arms with clinical outcomes and serious adverse effects in different cognitive status. Additional interaction and stratified analyses were performed to evaluate the robustness of the association between treatment arm and stroke in patients with lower cognitive function. Results: Of the participants, 1,873 were having lower cognitive function at baseline. The median follow-up period was 3.26 years. After fully adjusting for age, sex, ethnicity, body mass index, smoking, systolic blood pressure, Framingham 10-year CVD risk score, aspirin use, statin use, previous cardiovascular disease, previous chronic kidney disease and frailty status, intensive blood pressure control increased the risk of stroke [hazard ratio (HR) = 1.93, 95% confidence interval (CI): 1.04-3.60, P = 0.038)] in patients with lower cognitive function. Intensive blood pressure control could not reduce the risk of composite cardiovascular outcomes (HR = 0.81, 95%CI: 0.59-1.12, P = 0.201) and all-cause mortality (HR = 0.93, 95%CI: 0.64-1.35, P = 0.710) in lower cognitive function group. In patients with higher cognitive function, intensive blood pressure control led to significant reduction in the risk of stroke (HR = 0.55, 95%CI: 0.35-0.85, P = 0.008), composite cardiovascular outcomes (HR = 0.68, 95%CI: 0.56-0.83, P < 0.001) and all-cause mortality (HR = 0.62, 95%CI: 0.48-0.80, P < 0.001) in the fully adjusted model. Additionally, after the full adjustment, intensive blood pressure control increased the risk of hypotension and syncope in patients with lower cognitive function. Rates of hypotension, electrolyte abnormality and acute kidney injury were increased in the higher cognitive function patients undergoing intensive blood pressure control. Conclusion: Intensive blood pressure control might not reduce the risk of stroke, composite cardiovascular outcomes and all-cause mortality in patients with lower cognitive function.
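The subgroup results above come from survival models of this general form: a Cox regression with a treatment-by-cognition interaction, where exp(coef) is read as a hazard ratio. The sketch below uses the lifelines package on simulated data; the variable names, effect sizes, and censoring rule are assumptions, not SPRINT data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Minimal sketch of a treatment-by-cognition Cox analysis like the one described
# above, fit on simulated (not SPRINT) data with arbitrary effect sizes.
rng = np.random.default_rng(1)
n = 400
intensive = rng.integers(0, 2, n)          # randomized arm
low_moca = rng.integers(0, 2, n)           # lower cognitive function at baseline
log_hr = -0.4 * intensive + 0.3 * low_moca + 0.8 * intensive * low_moca
time = rng.exponential(scale=5.0 * np.exp(-log_hr))   # simulated event times
event = (time < 4.0).astype(int)           # administrative censoring at 4 years
time = np.minimum(time, 4.0)

df = pd.DataFrame({"time": time, "event": event,
                   "intensive": intensive, "low_moca": low_moca,
                   "intensive_x_low_moca": intensive * low_moca})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])   # exp(coef) ~ hazard ratios
```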
abstract_id: PUBMED:34934160
Cost-effectiveness analysis of intensive blood pressure control in Korea. This study was a cost-effectiveness analysis of intensive blood pressure (BP) control among hypertensive patients in Korea. We constructed a Markov model comparing intensive versus standard BP control treatment and calculated the incremental cost-effectiveness ratio. The study population consisted of hypertensive patients over 50 years old with systolic blood pressures (SBPs) exceeding 140 mmHg and at high risk of cardiovascular disease. Treatment alternatives included lowering the SBP below 120 mmHg (intensive) and 140 mmHg (standard) for target BP. We assumed five scenarios with different medication adherence. The effectiveness variable was quality-adjusted life years (QALYs), and costs included medical costs related to hypertension (HT), complications, and nonmedical costs. In addition, we performed a sensitivity analysis to confirm the robustness of the results of this study. Scenario 5, with 100% medication adherence, showed the lowest incremental cost-effectiveness ratio (ICER) of $1,373 USD, followed by scenario 1 (first 15 years: 62.5%, 16-30 years: 65.2%, after 30 years: 59.5%), scenario 2 (first five years: 62.5% decrease by 5% every five years), and scenario 3 (first 10 years: 62.5% decrease by 10% every 10 years). The ICERs in all scenarios were lower than the willingness to pay (WTP) threshold of $9,492-$32,907 USD in Korea. Tornado analysis showed that the ICERs were changed greatly according to stroke incidence. Intensive treatment of HT prevents cardiovascular disease (CVD); therefore, intensive treatment is more cost-effective than standard treatment despite the consumption of more health resources. ICERs are considerably changed according to medication adherence, confirming the importance of patient adherence to treatment.
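A Markov cohort model of the kind described tracks a cohort across health states year by year, accumulating discounted costs and QALYs under each strategy. The toy sketch below illustrates the mechanics only; every transition probability, cost, and utility is an invented placeholder, not a study input.

```python
import numpy as np

# Toy Markov cohort model in the spirit of the analysis above. All transition
# probabilities, costs, and utilities are invented placeholders, not study inputs.
states = ["well", "cvd", "dead"]
P = {  # annual transition matrices, rows sum to 1
    "intensive": np.array([[0.96, 0.02, 0.02],
                           [0.00, 0.90, 0.10],
                           [0.00, 0.00, 1.00]]),
    "standard":  np.array([[0.94, 0.04, 0.02],
                           [0.00, 0.90, 0.10],
                           [0.00, 0.00, 1.00]]),
}
cost = {"intensive": np.array([650.0, 3000.0, 0.0]),
        "standard":  np.array([400.0, 3000.0, 0.0])}
utility = np.array([0.85, 0.60, 0.0])     # QALYs per year in each state

def run(arm, years=30, discount=0.03):
    dist = np.array([1.0, 0.0, 0.0])      # whole cohort starts in "well"
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * dist @ cost[arm]
        total_qaly += d * dist @ utility
        dist = dist @ P[arm]
    return total_cost, total_qaly

c_i, q_i = run("intensive")
c_s, q_s = run("standard")
print(f"ICER = {(c_i - c_s) / (q_i - q_s):,.0f} per QALY (toy numbers)")
```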
abstract_id: PUBMED:33021715
Effectiveness of Lifestyle and Drug Intervention on Hypertensive Patients: a Randomized Community Intervention Trial in Rural China. Background: Strict medication guidance and lifestyle interventions to manage blood pressure (BP) in hypertensive patients are typically difficult to follow.
Objective: To evaluate the 1-year effectiveness of lifestyle and drug intervention in the management of rural hypertensive patients.
Design: Randomized community intervention trial.
Participants: The control group comprised 967 patients who received standard antihypertensive drug intervention therapy from two communities, whereas the intervention group comprised 1945 patients who received antihypertensive drug and lifestyle intervention therapies from four communities in rural China.
Main Measures: Data on lifestyle behaviors and BP measurements at baseline and 1-year follow-up were collected. A difference-in-difference logistic regression model was used to assess the effect of the intervention.
Key Results: BP control after the 1-year intervention was better than that at baseline in both groups. The within-group change in BP control of 59.3% in the intervention group was much higher than the 25.2% change in the control group (P < 0.001). Along with the duration of the follow-up period, systolic and diastolic BP decreased rapidly in the early stages and then gradually after 6 months in the intervention group (P < 0.001). In the intervention group, drug therapy adherence was increased by 39.5% (from 48.1% at 1 month to 87.6% at 1 year) (P < 0.001), more in women (45.6%) than in men (31.2%; P < 0.001). The net effect of the lifestyle intervention improved the rate of BP control by 56.1% (70.8% for men and 44.7% for women). For all physiological and biochemical factors, such as body mass index, waist circumference, lipid metabolism, and glucose control, improvements were more significant in the behavioral intervention group than those in the control group (all P < 0.001).
Conclusion: The addition of lifestyle intervention by physicians or nurses helps control BP effectively and lowers BP better than usual care with antihypertensive drug therapy alone.
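The difference-in-difference logistic model mentioned above estimates the intervention effect as the interaction between group and time on the odds of controlled BP. A minimal sketch with statsmodels follows; the simulated data and effect sizes are placeholders, not the trial's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch of a difference-in-difference logistic model like the one
# described above. The simulated data and effect sizes are placeholders.
rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, n)        # 1 = lifestyle + drug, 0 = drug only
post = rng.integers(0, 2, n)         # 1 = follow-up, 0 = baseline
logit = -1.0 + 0.4 * post + 0.1 * group + 1.2 * group * post
controlled = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"controlled": controlled, "group": group, "post": post})
fit = smf.logit("controlled ~ group * post", data=df).fit(disp=0)
print(fit.params)                         # 'group:post' is the DiD term
print(np.exp(fit.params["group:post"]))   # expressed as an odds ratio
```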
abstract_id: PUBMED:35023346
Effectiveness of a School-Based Educational Intervention to Improve Hypertension Control Among Schoolteachers: A Cluster-Randomized Controlled Trial. Background The control of hypertension is low in low- and middle-income countries like India. We evaluated the effects of a nurse-facilitated educational intervention in improving the control rate of hypertension among school teachers in India. Methods and Results This was a cluster-randomized controlled trial involving 92 schools in Kerala, which were randomly assigned equally into a usual care group and an intervention group. Participants were 402 school teachers (mean age, 47 years; men, 29%) identified with hypertension. Participants in both study groups received a leaflet containing details of a healthy lifestyle and the importance of regular intake of antihypertensive medication. In addition, the intervention participants received a nurse-facilitated educational intervention on hypertension control for 3 months. The primary outcome was hypertension control. Key secondary outcomes included systolic blood pressure, diastolic blood pressure, and the proportion of participants taking antihypertensive medications. For the primary outcome, we used mixed-effects logistic regression models. Two months after a 3-month educational intervention, a greater proportion of intervention participants (49.0%) achieved hypertension control than the usual care participants (38.2%), with an odds ratio of 1.89 (95% CI, 1.06-3.35), after adjusting for baseline hypertension control. The odds of taking antihypertensive medications were 1.6 times higher in the intervention group compared with the usual care group (odds ratio, 1.62; 95% CI, 1.08-2.45). The reduction in mean systolic blood pressure was significantly greater in the intervention group by 4.2 mm Hg (95% CI, -7.2 to -1.1) than in the usual care group. Conclusions A nurse-facilitated educational intervention was effective in improving the control and treatment rates of hypertension as well as reducing systolic blood pressure among schoolteachers with hypertension. Registration URL: https://www.ctri.nic.in; Unique Identifier: CTRI/2018/01/011402.
abstract_id: PUBMED:31118740
Two-year results of a community-based randomized controlled lifestyle intervention trial to control prehypertension and/or prediabetes in Thailand: a brief report. The aim of the study was to assess the 2-year effect of a community-based randomized controlled lifestyle intervention on glucose and blood pressure levels in Thailand. Participants (N=443, with prediabetes and/or prehypertension) received a six 2-hr group sessions lifestyle intervention (physical activity and nutrition) over 6 months. Measurements were at baseline, 12 months (89%) and at 24 months (84.7%). Statistically significant interaction effects on fasting plasma glucose and DBP at 12 months were not maintained at 24 months, while significant interaction effects were found on high-density lipoprotein and low-density lipoprotein cholesterol at 24 months. Improvements were found for fasting plasma glucose, SBP, DBP and total cholesterol, but no significant interaction effect was detected. The development of type 2 diabetes was higher in the control than in the intervention group, but it was not reaching significance (P=0.181), while 3.0% in the intervention group and 2.6% in the control group developed hypertension at 24 months. No significant group differences were found in psycho-behavioral variables. The lifestyle intervention did not provide additional benefits compared to the control group at 24-month follow-up. Trial registration number: TCTR20170721001.
Answer: The effectiveness of targeted intensive interventions for improvements in hypertension control appears to be variable based on the provided abstracts.
One randomized controlled trial found that neither a moderate nor an intensive intervention targeting general practitioners' management of hypertension had an additional impact on blood pressure (BP) control compared to a control group. The proportion of patients with high BP was reduced, but this reduction was not attributed to the interventions (PUBMED:22565110).
In contrast, another study suggested that intensive hypertension control in chronic kidney disease (CKD) patients is economically sound, implying that it could be an effective strategy in this specific high-risk group (PUBMED:36316765). Similarly, intensive hemodialysis in end-stage renal disease (ESRD) patients with hypertension resulted in improved left ventricular hypertrophy (LVH) regression and blood pressure control (PUBMED:32509638).
A post-hoc analysis of the SPRINT trial indicated that intensive blood pressure control might not reduce the risk of stroke, composite cardiovascular outcomes, and all-cause mortality in patients with lower cognitive function, although it was beneficial for those with higher cognitive function (PUBMED:34901238). Another study found that intensive blood pressure control was cost-effective in Korea, suggesting that it could be an effective strategy for hypertensive patients over 50 years old at high risk of cardiovascular disease (PUBMED:34934160).
A randomized community intervention trial in rural China showed that the addition of lifestyle intervention to antihypertensive drug therapy was more effective in controlling BP than drug therapy alone (PUBMED:33021715). Moreover, a cluster-randomized controlled trial among schoolteachers in India demonstrated that a nurse-facilitated educational intervention was effective in improving hypertension control (PUBMED:35023346).
However, a community-based randomized controlled lifestyle intervention trial in Thailand did not provide additional benefits compared to the control group at a 24-month follow-up for prehypertension and/or prediabetes (PUBMED:31118740).
In summary, while some studies show benefits of intensive interventions in specific populations or settings, others do not demonstrate a significant advantage over standard care or control conditions. The effectiveness of targeted intensive interventions for hypertension control may depend on various factors, including the population targeted, the nature of the intervention, and adherence to treatment. |
Instruction: Can uterine rupture in patients attempting vaginal birth after cesarean delivery be predicted?
Abstracts:
abstract_id: PUBMED:17000247
Can uterine rupture in patients attempting vaginal birth after cesarean delivery be predicted? Objective: This study was undertaken to use multivariable methods to develop clinical predictive models for the occurrence of uterine rupture by using both antepartum and early intrapartum factors.
Study Design: This was a planned secondary analysis from a multicenter case-control study of uterine rupture among women attempting vaginal birth after cesarean (VBAC) delivery. Multivariable methods were used to develop 2 separate clinical predictive indices--one that used only prelabor factors and the other that used both prelabor and early labor factors. These indices were also assessed with the use of receiver operating characteristic curves.
Results: We identified 134 cases of uterine rupture and 665 noncases. No single individual factor is sufficiently sensitive or specific for clinical prediction of uterine rupture. Likewise, the 2 clinical predictive indices were neither sufficiently sensitive nor specific for clinical use (receiver operating characteristic curve [area under the curve] 0.67 and 0.70, respectively).
Conclusion: Uterine rupture cannot be predicted with either individual or combinations of clinical factors. This has important clinical and medical-legal implications.
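The area-under-the-curve figures quoted above summarize how well a predictive index separates rupture from non-rupture cases. A minimal sketch of the computation follows; the index scores and outcome labels are hypothetical.

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Minimal sketch of how the discrimination of a clinical predictive index is
# quantified with an ROC curve. The scores and labels below are hypothetical.
rupture = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0]          # 1 = uterine rupture
index   = [0.8, 0.2, 0.4, 0.6, 0.3, 0.9, 0.5, 0.1, 0.4, 0.3, 0.2, 0.6]

auc = roc_auc_score(rupture, index)
fpr, tpr, thresholds = roc_curve(rupture, index)
print(f"AUC = {auc:.2f}")
# An AUC near 0.5 means no discrimination; ~0.67-0.70, as reported above,
# is generally considered too weak for individual clinical prediction.
```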
abstract_id: PUBMED:18631410
Predicting success and reducing the risks when attempting vaginal birth after cesarean. The goal of this manuscript is to review the contemporary evidence on issues pertinent to improving the safety profile of vaginal birth after cesarean (VBAC) attempts. Patients attempting VBAC have success rates of 60%-80%, and no reliable method of predicting VBAC failure for individual patients exists. The rate of uterine rupture in all patients ranges from 0.7% to 0.98%, but the rate of uterine rupture decreases in patients with a prior vaginal delivery. In fact, in patients with a prior vaginal delivery, VBAC appears to be safer from the maternal standpoint than repeat cesarean. Inevitably, the obstetrician today will encounter the situation of deciding whether or not to induce a patient with a uterine scar, and particular attention is paid to the success and risks of inducing labor in this patient population. Induction of labor is associated with a slightly lower successful vaginal delivery rate, although the rate remains above 50% in virtually all patient populations. The rate of uterine rupture increases slightly, but still remains around 2%-3%. Although misoprostol use is discouraged due to its association with increased risks of uterine rupture, transcervical catheters, oxytocin, and amniotomy may be used to induce labor in women attempting VBAC.
abstract_id: PUBMED:20842616
Predicting significant maternal morbidity in women attempting vaginal birth after cesarean section. Attempting vaginal birth after cesarean section (VBAC) places women at an increased risk for complications. We set out to identify factors that are predictive of major morbidity in women who attempt VBAC. A nested case-control study was performed within a large retrospective cohort study of women with a history of at least one cesarean. Women who attempted VBAC were identified and those who experienced at least one complication of a composite adverse outcome consisting of uterine rupture, bladder injury, and bowel injury (cases) were compared with those who did not experience one of these adverse outcomes (controls). We analyzed risk factors for major maternal morbidity using univariable and multivariable methods. The accuracy of the multivariable prediction model was assessed with receiver operator characteristic (ROC) curve analysis. Of 25,005 women with a history of previous cesarean, 13,706 (54.9%) attempted VBAC. The composite outcome occurred in 300 (2.1%) women attempting VBAC. Using logistic regression analysis, prior abdominal surgery (odds ratio [OR] 1.58, 95% confidence interval [CI] 1.2 to 2.1), augmented labor (OR 1.78, 95% CI 1.29 to 2.46), and induction of labor (OR 2.03, 95% CI 1.48 to 2.76) were associated with an increased risk of the composite outcome. Prior vaginal delivery (OR 0.39, 95% CI 0.29 to 0.54) was associated with decreased risk for the composite outcome. The ROC curve generated from the regression model has an area under the curve of 0.65 and an unfavorable tradeoff between sensitivity and specificity. Women attempting VBAC with a history of abdominal surgery or those who undergo augmentation or induction of labor are at an increased risk for major maternal morbidity, and women with a prior vaginal delivery have a decreased risk of major morbidity. The multivariable model developed cannot accurately predict major maternal morbidity.
abstract_id: PUBMED:18455132
Higher maximum doses of oxytocin are associated with an unacceptably high risk for uterine rupture in patients attempting vaginal birth after cesarean delivery. Objective: The objective of the study was to more precisely estimate the effect of maximum oxytocin dose on uterine rupture risk in patients attempting vaginal birth after cesarean (VBAC) by considering timing and duration of therapy.
Study Design: A nested case-control study was conducted within a multicenter, retrospective cohort study of more than 25,000 women with at least 1 prior cesarean delivery, comparing cases of uterine rupture with controls (no rupture) while attempting VBAC. Time-to-event analyses were performed to examine the effect of maximum oxytocin dose on the risk of uterine rupture considering therapy duration, while adjusting for confounders.
Results: Within the nested case-control study of 804 patients, 272 were exposed to oxytocin: 62 cases of uterine rupture and 210 controls. Maximum oxytocin ranges above 20 mU/min increased the risk of uterine rupture 4-fold or greater (21-30 mU/min: hazard ratio [HR] 3.92, 95% confidence interval [CI], 1.06 to 14.52; 31-40 mU/min: HR 4.57, 95% CI, 1.00 to 20.82).
Conclusion: These findings support a maximum oxytocin dose of 20 mU/min in VBAC trials to avoid an unacceptably high risk of uterine rupture.
abstract_id: PUBMED:28285573
Association between prior vaginal birth after cesarean and subsequent labor outcome. Objective: To estimate the effect of prior successful vaginal birth after cesarean (VBAC) on the rate of uterine rupture and delivery outcome in women undergoing labor after cesarean.
Methods: A retrospective cohort study of all women attempting labor after cesarean delivery in a university-affiliated tertiary-hospital (2007-2014) was conducted. Study group included women attempting vaginal delivery with a history of cesarean delivery and at least one prior VBAC. Control group included women attempting first vaginal delivery following cesarean delivery. Primary outcome was defined as the rate of uterine rupture. Secondary outcomes were delivery and maternal outcomes.
Results: Of 62,463 deliveries during the study period, 3256 met inclusion criteria. One thousand two hundred and eleven women had a VBAC prior to the index labor and 2045 underwent their first labor after cesarean. Women in the study group had a significantly lower rate of uterine rupture than controls (9 [0.7%] vs 33 [1.6%], p = .036) and a higher rate of successful vaginal birth (96% vs 84.9%, p < .001). In multivariate analysis, previous VBAC was associated with a decreased risk of uterine rupture (OR = 0.46, 95% CI 0.21-0.97, p = .04).
Conclusions: In women attempting labor after cesarean, prior VBAC appears to be associated with lower rate of uterine rupture and higher rate of successful vaginal birth.
abstract_id: PUBMED:16146414
Predicting cesarean section and uterine rupture among women attempting vaginal birth after prior cesarean section. Background: There is currently no validated method for antepartum prediction of the risk of failed vaginal birth after cesarean section and no information on the relationship between the risk of emergency cesarean delivery and the risk of uterine rupture.
Methods And Findings: We linked a national maternity hospital discharge database and a national registry of perinatal deaths. We studied 23,286 women with one prior cesarean delivery who attempted vaginal birth at or after 40-wk gestation. The population was randomly split into model development and validation groups. The factors associated with emergency cesarean section were maternal age (adjusted odds ratio [OR] = 1.22 per 5-y increase, 95% confidence interval [CI]: 1.16 to 1.28), maternal height (adjusted OR = 0.75 per 5-cm increase, 95% CI: 0.73 to 0.78), male fetus (adjusted OR = 1.18, 95% CI: 1.08 to 1.29), no previous vaginal birth (adjusted OR = 5.08, 95% CI: 4.52 to 5.72), prostaglandin induction of labor (adjusted OR = 1.42, 95% CI: 1.26 to 1.60), and birth at 41-wk (adjusted OR = 1.30, 95% CI: 1.18 to 1.42) or 42-wk (adjusted OR = 1.38, 95% CI: 1.17 to 1.62) gestation compared with 40-wk. In the validation group, 36% of the women had a low predicted risk of caesarean section (< 20%) and 16.5% of women had a high predicted risk (> 40%); 10.9% and 47.7% of these women, respectively, actually had deliveries by caesarean section. The predicted risk of caesarean section was also associated with the risk of all uterine rupture (OR for a 5% increase in predicted risk = 1.22, 95% CI: 1.14 to 1.31) and uterine rupture associated with perinatal death (OR for a 5% increase in predicted risk = 1.32, 95% CI: 1.02 to 1.73). The observed incidence of uterine rupture was 2.0 per 1,000 among women at low risk of cesarean section and 9.1 per 1,000 among those at high risk (relative risk = 4.5, 95% CI: 2.6 to 8.1). We present the model in a simple-to-use format.
Conclusions: We present, to our knowledge, the first validated model for antepartum prediction of the risk of failed vaginal birth after prior cesarean section. Women at increased risk of emergency caesarean section are also at increased risk of uterine rupture, including catastrophic rupture leading to perinatal death.
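To illustrate how a model like the one above turns published adjusted odds ratios into an individual predicted risk, the sketch below exponentiates a linear predictor built from log-odds-ratio coefficients. The intercept, the coding of the covariates, and the example patient are illustrative assumptions, not the authors' published scoring system.

```python
import math

# Illustrative sketch of scoring a patient with a logistic prediction model of the
# kind described above. The intercept and the mapping of the published odds ratios
# to coefficients are assumptions for illustration, not the authors' actual model.
coef = {
    "age_per_5y":       math.log(1.22),   # OR 1.22 per 5-year increase
    "height_per_5cm":   math.log(0.75),   # OR 0.75 per 5-cm increase
    "male_fetus":       math.log(1.18),
    "no_prior_vaginal": math.log(5.08),
    "pg_induction":     math.log(1.42),
    "week_41":          math.log(1.30),
}
intercept = -2.5                           # hypothetical baseline log-odds

def predicted_risk(age_5y_units, height_5cm_units, male, no_prior_vb, induced, wk41):
    lp = (intercept
          + coef["age_per_5y"] * age_5y_units
          + coef["height_per_5cm"] * height_5cm_units
          + coef["male_fetus"] * male
          + coef["no_prior_vaginal"] * no_prior_vb
          + coef["pg_induction"] * induced
          + coef["week_41"] * wk41)
    return 1 / (1 + math.exp(-lp))

# e.g. an older, shorter woman with a male fetus, no prior vaginal birth, and
# prostaglandin induction at 41 weeks -- all inputs are made up:
print(f"predicted risk of emergency cesarean ~ {predicted_risk(1, -1, 1, 1, 1, 1):.0%}")
```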
abstract_id: PUBMED:11014958
Uterine rupture associated with vaginal birth after cesarean section: a complication of intravaginal misoprostol? Intravaginal misoprostol has become increasingly employed for labor induction among patients with an unfavorable Bishop's score. Almost all of the reported studies have specifically excluded patients with prior uterine surgery. There has been, therefore, very little information concerning its usage among patients attempting vaginal birth after cesarean section. We report a patient with two prior low transverse uterine incisions who experienced uterine rupture after having received a single 25-microg intravaginal dose of misoprostol.
abstract_id: PUBMED:11408854
Failed vaginal birth after a cesarean section: how risky is it? I. Maternal morbidity. Objective: Our purpose was to determine the maternal risks associated with failed attempt at vaginal birth after cesarean compared with elective repeat cesarean delivery or successful vaginal birth after cesarean.
Study Design: From 1989 to 1998 all patients attempting vaginal birth after cesarean and all patients undergoing repeat cesarean deliveries were reviewed. Data were extracted from a computerized obstetric database and from medical charts. The following three groups were defined: women who had successful vaginal birth after cesarean, women who had failed vaginal birth after cesarean, and women who underwent elective repeat cesarean. Criteria for the elective repeat cesarean group included no more than two previous low transverse or vertical incisions, fetus in cephalic or breech presentation, no previous uterine surgery, no active herpes, and adequate pelvis. Predictor variables included age, parity, type and number of previous incisions, reasons for repeat cesarean delivery, gestational age, and infant weight. Outcome variables included uterine rupture or dehiscence, hemorrhage >1000 mL, hemorrhage >2000 mL, need for transfusion, chorioamnionitis, endometritis, and length of hospital stay. The Student t test and the chi(2) test were used to compare categoric variables and means; maternal complications and factors associated with successful vaginal birth after cesarean were analyzed with multivariate logistic regression, allowing odds ratios, adjusted odds ratios, 95% confidence intervals, and P values to be calculated.
Results: A total of 29,255 patients were delivered during the study period, with 2450 having previously had cesarean delivery. Repeat cesarean deliveries were performed in 1461 women (5.0%), and 989 successful vaginal births after cesarean delivery occurred (3.4%). Charts were reviewed for 97.6% of all women who underwent repeat cesarean delivery and for 93% of all women who had vaginal birth after cesarean. Vaginal birth after cesarean was attempted by 1344 patients or 75% of all appropriate candidates. Vaginal birth after cesarean was successful in 921 women (69%) and unsuccessful in 424 women. Four hundred fifty-one patients undergoing cesarean delivery were deemed appropriate for vaginal birth after cesarean. Multiple gestations were excluded from analysis. Final groups included 431 repeat cesarean deliveries and 1324 attempted vaginal births after cesarean; in the latter group 908 were successful and 416 failed. The overall rate of uterine disruption was 1.1% of all women attempting labor; the rate of true rupture was 0.8%; and the rate of hysterectomy was 0.5%. Blood loss was lower (odds ratio, 0.5; 95% confidence interval, 0.3-0.9) and chorioamnionitis was higher (odds ratio, 3.8; 95% confidence interval, 2.3-6.4) in women who attempted vaginal births after cesarean. Compared with women who had successful vaginal births after cesarean, women who experienced failed vaginal births after cesarean had a rate of uterine rupture that was 8.9 times higher (95% confidence interval, 1.9-42), a rate of transfusion that was 3.9 times higher (95% confidence interval, 1.1-13.3), a rate of chorioamnionitis that was 1.5 times higher (95% confidence interval, 1.1-2.1), and a rate of endometritis that was 6.4 times higher (95% confidence interval, 4.1-9.8).
Conclusion: Patients who experience failed vaginal birth after cesarean have higher risks of uterine disruption and infectious morbidity compared with patients who have successful vaginal birth after cesarean or elective repeat cesarean delivery. Because actual numbers of morbid events are small, caution should be exercised in interpreting results and counseling patients. More accurate prediction for safe, successful vaginal birth after cesarean delivery is needed.
abstract_id: PUBMED:12634665
The effect of birth weight on vaginal birth after cesarean delivery success rates. Objective: The purpose of this study was to evaluate the effect of increasing birth weight on the success rates for a trial of labor in women with one previous cesarean delivery.
Study Design: To evaluate the effect of increasing birth weight for women who undergo a trial of labor, the medical records of women who had attempted a vaginal birth after cesarean delivery (VBAC) from 1995 through 1999 in 16 community and university hospitals were reviewed retrospectively by trained abstractors. Information was collected about demographics, medical history, obstetric history, neonatal birth weight, complications, treatment, and outcome of the index pregnancy. The analysis was limited to women with singleton gestations with a history of 1 previous cesarean delivery. Because women with previous vaginal deliveries have higher vaginal birth after cesarean delivery success rates, the women were divided into four risk groups on the basis of their birth history. Groups were defined as women with no previous vaginal deliveries (group 1), women with a history of a previous vaginal birth after cesarean delivery (group 2), women with a history of a vaginal delivery before their cesarean delivery (group 3), and a group of women with a vaginal delivery both before and after the previous cesarean delivery (group 4).
Results: There were 9960 women with a singleton gestation and a history of one previous cesarean delivery. The overall vaginal birth after cesarean delivery success rate for the cohort was 74%. The overall vaginal birth after cesarean delivery success rates for groups 1, 2, 3, and 4 were 65%, 94%, 83%, and 93%, respectively. An analysis of neonatal birth weights of <4000 g, 4000 to 4249 g, 4250 to 4500 g, and >4500 g in group 1 showed a reduction in vaginal birth after cesarean delivery success rates from 68%, 52%, 45%, and 38%, respectively. In the remaining groups, there was no success rate below 63% for any of the birth weight strata. For group 1, vaginal birth after cesarean delivery success rates were decreased when the indication for the previous cesarean delivery was cephalopelvic disproportion or failure to progress or when the treatment was either an induction or augmentation of labor. The uterine rupture rate was higher in women for group 1 with birth weights of > or =4000 g (relative risk, 2.3; P <.001).
Conclusion: Women with macrosomic fetuses and a history of a previous vaginal delivery should be informed of the favorable vaginal birth after cesarean delivery success rates. Given the risks of vaginal birth after cesarean delivery, those women with no history of a vaginal delivery should be counseled that the success rates may be <50% when the neonatal birth weight exceeds 4000 g and that the success rates may be even lower if the indication for the previous cesarean delivery was cephalopelvic disproportion or failure to progress or if the treatment requires either induction or augmentation of labor. The uterine rupture rate was 3.6% in women for group 1 with a birth weight > or =4000 g.
abstract_id: PUBMED:20093908
Effect of birth weight on adverse obstetric outcomes in vaginal birth after cesarean delivery. Objective: To estimate the association between neonatal birth weight and adverse obstetric outcomes in women attempting vaginal birth after cesarean.
Methods: We reviewed the medical records of all women undergoing a trial of labor after a prior low transverse cesarean delivery in our institution between 1987 and 2004. Patients were categorized according to birth weight (less than 3,500 g [group 1, reference], 3,500-3,999 g [group 2], and 4,000 g or more [group 3]) and prior vaginal delivery. The rates of failed trial of labor, uterine rupture, shoulder dystocia, and third- and fourth-degree perineal laceration were compared among groups. Multivariable logistic regressions were performed to adjust for potential confounding factors.
Results: Of 2,586 women, 1,519 (59%), 798 (31%), and 269 (10%) were included in groups 1, 2, and 3, respectively. Birth weight was directly correlated to the rate of failed trial of labor (19%, 28%, and 38% for groups 1, 2, and 3, respectively; P<.01), uterine rupture (0.9%, 1.8%, and 2.6%; P<.05), shoulder dystocia (0.3%, 1.6%, and 7.8%; P<.01), and third- and fourth-degree perineal laceration (5%, 7%, and 12%; P<.01). After adjustment for confounding variables, birth weight of 4,000 g or more remained associated with uterine rupture (odds ratio [OR] 2.62, 95% confidence interval [CI] 1.001-6.85), failed trial of labor (OR 2.47, 95% CI 1.82-3.34), shoulder dystocia (OR 25.13, 95% CI 9.31-67.86), and third- and fourth-degree perineal laceration (OR 2.64, 95% CI 1.66-4.19).
Conclusion: Birth weight and specifically macrosomia are linked with failed trial of labor, uterine rupture, shoulder dystocia, and third- and fourth-degree perineal laceration in women who underwent prior cesarean delivery. Estimated fetal weight should be included in the decision-making process for all women contemplating a trial of labor after cesarean delivery.
Level Of Evidence: II.
Answer: Uterine rupture in patients attempting vaginal birth after cesarean delivery (VBAC) cannot be reliably predicted using individual or combinations of clinical factors. Studies have attempted to develop clinical predictive models for uterine rupture using both antepartum and early intrapartum factors, but these models were not sufficiently sensitive or specific for clinical use (PUBMED:17000247). No single factor has been found to be adequately sensitive or specific for the clinical prediction of uterine rupture (PUBMED:17000247).
Although certain factors have been associated with an increased risk of uterine rupture, such as prior abdominal surgery, augmented labor, and induction of labor, and a decreased risk with prior vaginal delivery, the predictive models developed from these factors do not accurately predict major maternal morbidity (PUBMED:20842616). Higher maximum doses of oxytocin are associated with an increased risk for uterine rupture, suggesting that a maximum dose of 20 mU/min may be safer for women attempting VBAC (PUBMED:18455132).
Additionally, a history of a successful VBAC appears to be associated with a lower rate of uterine rupture and a higher rate of successful vaginal birth in subsequent attempts (PUBMED:28285573). However, the overall ability to predict uterine rupture remains limited, and the decision to attempt VBAC should be made on an individual basis, considering all known risk factors and the clinical judgment of the healthcare provider. |
Instruction: Does magnifying narrow-band imaging or magnifying chromoendoscopy help experienced endoscopists assess invasion depth of large sessile and flat polyps?
Abstracts:
abstract_id: PUBMED:24839918
Does magnifying narrow-band imaging or magnifying chromoendoscopy help experienced endoscopists assess invasion depth of large sessile and flat polyps? Background: Distinguishing deep submucosa (SM) from superficial SM cancer in large sessile and flat colorectal polyps (>2 cm) is crucial in making the most appropriate therapeutic decision. We evaluated the additional role of magnifying narrow-band imaging (NBI) and magnifying chromoendoscopy (MCE) in assessing the depth of invasion in large sessile and flat polyps in comparison to morphological evaluation performed by experienced endoscopists.
Methods: From May 2011 to December 2011, a total of 85 large sessile and flat polyps were analyzed. Endoscopic features of the polyps were independently evaluated by experienced endoscopists. Subsequently, the polyps were observed using magnifying NBI and MCE.
Results: A total of 58 intramucosal lesions and 27 SM cancers (five superficial and 22 deep) were identified. The diagnostic accuracy of the experienced endoscopists, NBI, and MCE for deep SM cancer was 92.9%, 90.6%, and 89.4%, respectively. In combination with NBI or MCE, the diagnostic accuracy of the experienced endoscopists did not change significantly for deep SM cancer, with an accuracy of 95.3% for both NBI and MCE.
Conclusions: Conventional colonoscopy can differentiate superficial from deep SM cancers with an accuracy of as high as 92.9 % in large sessile and flat polyps. Further diagnostic strategies are required in order to precisely assess the depth of invasion, especially in large colorectal polyps.
abstract_id: PUBMED:29658652
Additional chromoendoscopy for colorectal lesions initially diagnosed with low confidence by magnifying narrow-band imaging: Can it improve diagnostic accuracy? Background And Aim: Magnifying chromoendoscopy has been one of the most reliable diagnostic methods for distinguishing neoplastic from non-neoplastic lesions. The aim of this prospective study was to clarify the clinical usefulness of magnifying chromoendoscopy for colorectal polyps initially diagnosed with low confidence (LC) by magnifying narrow-band imaging (NBI).
Methods: Consecutive adult patients who underwent total colonoscopic examination with magnifying NBI between July and December 2016 at Sano Hospital were prospectively recruited. Endoscopists were asked to carry out additional magnifying chromoendoscopy for cases that had been initially diagnosed as Japan NBI Expert Team (JNET) Type 1 or 2A with LC by magnifying NBI. We investigated the diagnostic performance of magnifying NBI for polyps diagnosed as JNET Type 1 or 2A with LC (first phase) and that of subsequent magnifying chromoendoscopy (second phase) in differentiating neoplasia from non-neoplasia.
Results: In 50 patients, we analyzed 53 polyps classified as JNET Type 1 or 2A with LC prediction. Accuracy and negative predictive value of magnifying NBI (first phase) were 58.5% (95% CI, 44.1-71.9%) and 66.0% (95% CI, 36.6-77.9%), and those of magnifying chromoendoscopy (second phase) were 66.0% (95% CI, 51.7-78.5%) and 61.1% (95% CI, 43.5-76.9%), respectively.
Conclusion: Regardless of the findings of additional chromoendoscopy, all polyps should be resected and submitted for histopathological examination when the confidence level in differentiating adenomatous from hyperplastic polyps by magnifying NBI is low.
abstract_id: PUBMED:36739935
Diagnostic Value of Adding Magnifying Chromoendoscopy to Magnifying Narrow-Band Imaging Endoscopy for Colorectal Polyps. Background & Aims: This study examined the additional value of magnifying chromoendoscopy (MCE) on magnifying narrow-band imaging endoscopy (M-NBI) in the optical diagnosis of colorectal polyps.
Methods: A multicenter prospective study was conducted at 9 facilities in Japan and Germany. Patients with colorectal polyps scheduled for resection were included. Optical diagnosis was performed by M-NBI first, followed by MCE. Both diagnoses were made in real time. MCE was performed on all type 2B lesions classified according to the Japan NBI Expert Team classification and other lesions at the discretion of endoscopists. The diagnostic accuracy and confidence of M-NBI and MCE for colorectal cancer (CRC) with deep invasion (≥T1b) were compared on the basis of histologic findings after resection.
Results: In total, 1173 lesions were included between February 2018 and December 2020, with 654 (5 hyperplastic polyp/sessile serrated lesion, 162 low-grade dysplasia, 403 high-grade dysplasia, 97 T1 CRCs, and 32 ≥T2 CRCs) examined using MCE after M-NBI. In the diagnostic accuracy for predicting CRC with deep invasion, sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for M-NBI were 63.1%, 94.2%, 61.6%, 94.5%, and 90.2%, respectively, and for MCE they were 77.4%, 93.2%, 62.5%, 96.5%, and 91.1%, respectively. The sensitivity was significantly higher in MCE (P < .001). However, these additional values were limited to lesions with low confidence in M-NBI or the ones diagnosed as ≥T1b CRC by M-NBI.
Conclusions: In this multicenter prospective study, we demonstrated the additional value of MCE on M-NBI. We suggest that additional MCE be recommended for lesions with low confidence or the ones diagnosed as ≥T1b CRC. Trials registry number: UMIN000031129.
abstract_id: PUBMED:26668794
Polyp Detection, Characterization, and Management Using Narrow-Band Imaging with/without Magnification. Narrow-band imaging (NBI) is a new imaging technology that was developed in 2006 and has since spread worldwide. Because of its convenience, NBI has been replacing the role of chromoendoscopy. Here we review the efficacy of NBI with/without magnification for detection, characterization, and management of colorectal polyps, and future perspectives for the technology, including education. Recent studies have shown that the next-generation NBI system can detect significantly more colonic polyps than white light imaging, suggesting that NBI may become the modality of choice from the beginning of screening. The capillary pattern revealed by NBI, and the NBI International Colorectal Endoscopic classification are helpful for prediction of histology and for estimating the depth of invasion of colorectal cancer. However, NBI with magnifying colonoscopy is not superior to magnifying chromoendoscopy for estimation of invasion depth. Currently, therefore, chromoendoscopy should also be performed additionally if deep submucosal invasive cancer is suspected. If endoscopists become able to accurately estimate colorectal polyp pathology using NBI, this will allow adenomatous polyps to be resected and discarded; thus, reducing both the risk of polypectomy and costs. In order to achieve this goal, a suitable system for education and training in in vivo diagnostics will be necessary.
abstract_id: PUBMED:26864801
Advantages of magnifying narrow-band imaging for diagnosing colorectal cancer coexisting with sessile serrated adenoma/polyp. Background And Aim: In the present study, we investigated the advantages of narrow-band imaging (NBI) for efficient diagnosis of sessile serrated adenoma/polyp (SSA/P). The main objective of this study was to analyze the characteristic features of cancer coexisting with serrated lesion by carrying out NBI.
Methods: We evaluated 264 non-malignant serrated lesions by using three modalities (conventional white light colonoscopy, magnifying chromoendoscopy, and magnifying NBI). Of the evaluated cancer cases with serrated lesions, 37 fulfilled the inclusion criteria.
Results: In diagnosing non-malignant SSA/P, an expanded crypt opening (ECO) under magnifying NBI is a useful sign. One hundred and twenty-five lesions (87%) of observed ECO were, at the same time, detected to have type II open pit pattern, which is known to be a valuable indicator when using magnifying chromoendoscopy. ECO had high sensitivity of 80% for identifying SSA/P, with 62% specificity and 83% positive predictive value (PPV). In detecting the cancer with SSA/P, irregular vessels under magnifying NBI were frequently observed with 100% sensitivity and 99% specificity, 86% PPV and 100% negative predictive value.
Conclusions: A focus on irregular vessels in serrated lesions might be useful for identification of cancer with SSA/P. This is an advantage of carrying out magnifying NBI in addition to being used simultaneously with other modalities by switching, and observations can be made by using wash-in water alone. We can carry out advanced examinations for selected lesions with irregular vessels. To confirm cancerous demarcation and invasion depth, a combination of all three aforementioned modalities should be done.
abstract_id: PUBMED:22741112
Correlation of narrow band imaging with magnifying colonoscopy and histology in colorectal tumors. Background/aims: Narrow band imaging (NBI) is a new technique that uses optical filters for imaging of mucosal morphology. The aim of this study was to correlate findings of NBI with magnifying colonoscopy and histology for prediction of neoplastic colorectal lesion.
Methods: Between September 2005 and December 2007, 107 colon polyps from 68 patients were detected by conventional colonoscopy and subsequently evaluated by NBI with magnifying colonoscopy and analyzed for a pit pattern and a capillary pattern. More analysis was done regarding thickness and irregularity of capillary features.
Results: Pit pattern with NBI magnification to discriminate between neoplastic and non-neoplastic lesions had a sensitivity of 88.9% and a specificity of 87.5%; capillary pattern yielded test performance characteristics of 91.9% and 87.5%. In respect of capillary thickness, invisible capillaries were found significantly more often in hyperplastic lesions. All thick capillaries were found in neoplastic polyps, and found significantly more often in carcinomas with submucosal massive invasion (sm-m) (p<0.01). In respect of capillary irregularity, invisible capillaries were found significantly more often in hyperplastic lesions, and severely irregular capillaries were found significantly more often in sm-m lesions (p<0.01).
Conclusions: Observation of capillary thickness and irregularity by NBI magnification is useful for correlating histological grade with carcinoma, especially with depth of submucosal invasion.
abstract_id: PUBMED:36906661
Magnifying chromoendoscopy is a reliable method in the selection of rectal neoplasms for local excision. Purpose: Adequate staging of early rectal neoplasms is essential for organ-preserving treatments, but magnetic resonance imaging (MRI) frequently overestimates the stage of those lesions. We aimed to compare the ability of magnifying chromoendoscopy and MRI to select patients with early rectal neoplasms for local excision.
Methods: This retrospective study in a tertiary Western cancer center included consecutive patients evaluated by magnifying chromoendoscopy and MRI who underwent en bloc resection of nonpedunculated sessile polyps larger than 20 mm, laterally spreading tumors (LSTs) ≥ 20 mm, or depressed-type lesions of any size (Paris 0-IIc). Sensitivity, specificity, accuracy, and positive and negative predictive values of magnifying chromoendoscopy and MRI to determine which lesions were amenable to local excision (i.e., ≤ T1sm1) were calculated.
Results: Specificity of magnifying chromoendoscopy was 97.3% (95% CI 92.2-99.4), and accuracy was 92.7% (95% CI 86.7-96.6) for predicting invasion deeper than T1sm1 (not amenable to local excision). MRI had lower specificity (60.5%, 95% CI 43.4-76.0) and lower accuracy (58.3%, 95% CI 43.2-72.4). Magnifying chromoendoscopy incorrectly predicted invasion depth in 10.7% of the cases in which the MRI was correct, while magnifying chromoendoscopy provided a correct diagnosis in 90% of the cases in which the MRI was incorrect (p = 0.001). Overstaging occurred in 33.3% of the cases in which magnifying chromoendoscopy was incorrect and 75% of the cases in which MRI was incorrect.
Conclusion: Magnifying chromoendoscopy is reliable for predicting invasion depth in early rectal neoplasms and selecting patients for local excision.
abstract_id: PUBMED:26927367
Narrow-band imaging (NBI) magnifying endoscopic classification of colorectal tumors proposed by the Japan NBI Expert Team. Many clinical studies on narrow-band imaging (NBI) magnifying endoscopy classifications advocated so far in Japan (Sano, Hiroshima, Showa, and Jikei classifications) have reported the usefulness of NBI magnifying endoscopy for qualitative and quantitative diagnosis of colorectal lesions. However, discussions at professional meetings have raised issues such as: (i) the presence of multiple terms for the same or similar findings; (ii) the necessity of including surface patterns in magnifying endoscopic classifications; and (iii) differences in the NBI findings in elevated and superficial lesions. To resolve these problems, the Japan NBI Expert Team (JNET) was constituted with the aim of establishing a universal NBI magnifying endoscopic classification for colorectal tumors (JNET classification) in 2011. Consensus was reached on this classification using the modified Delphi method, and this classification was proposed in June 2014. The JNET classification consists of four categories of vessel and surface pattern (i.e. Types 1, 2A, 2B, and 3). Types 1, 2A, 2B, and 3 are correlated with the histopathological findings of hyperplastic polyp/sessile serrated polyp (SSP), low-grade intramucosal neoplasia, high-grade intramucosal neoplasia/shallow submucosal invasive cancer, and deep submucosal invasive cancer, respectively.
abstract_id: PUBMED:19603174
Comparative study of conventional colonoscopy, magnifying chromoendoscopy, and magnifying narrow-band imaging systems in the differential diagnosis of small colonic polyps between trainee and experienced endoscopist. Background: Removal of colorectal neoplastic polyps can reduce the incidence of colorectal cancers. It is important to distinguish neoplastic from nonneoplastic polyps. We compared the ability of a trainee and an experienced endoscopist in distinguishing between neoplastic polyps and nonneoplastic polyps by conventional white-light, magnifying narrow-band imaging (NBI), and magnifying chromoendoscopy.
Materials And Methods: One hundred and sixty-three small colorectal polyps from 104 patients were studied. All polyps were diagnosed by trainees and experienced endoscopists using conventional white-light, magnifying NBI, and magnifying chromoendoscopy. The kappa values of interobserver agreement between trainees and experienced endoscopists were evaluated before this study. Sensitivity, specificity, and diagnostic accuracy were assessed by reference to histopathology. The first 50 polyps were diagnosed by the trainee as the first stage and the remaining 113 polyps were diagnosed as the second stage.
Results: Magnifying NBI and magnifying chromoendoscopy were significantly better than conventional white-light for the experienced endoscopist (diagnostic accuracy: NBI 85.3%, chromoendoscopy 87.7%, conventional view 74.8%). No significant differences were found for the trainee. The kappa values (0.77-0.85) were good for each endoscopic modality for the experienced endoscopist. However, only NBI and chromoendoscopy had acceptable kappa values (0.40-0.48) for the trainee. The trainee improved diagnostic accuracy in the second stage but did not reach the level of the experienced endoscopist.
Conclusion: Magnifying NBI and magnifying chromoendoscopy had a better interobserver agreement than conventional white-light among trainees and experienced endoscopists. The trainee needs learning time to improve diagnostic ability, even using a new modality such as magnifying NBI.
abstract_id: PUBMED:25088839
Diagnostic features of sessile serrated adenoma/polyps on magnifying narrow band imaging: a prospective study of diagnostic accuracy. Background And Aims: The narrow band imaging classification system (NBI International Colorectal Endoscopic [NICE] classification) classifies colorectal polyps very accurately. However, sessile serrated adenoma/polyps (SSA/Ps) pathologically resembles hyperplastic polyp and has a possibility to be left in situ on NICE classification. The aim of this study was to establish and evaluate new simple diagnostic features for SSA/Ps using magnifying narrow band imaging (M-NBI).
Methods: We performed a single-arm observational study of diagnostic accuracy in two stages, as follows: seeking stage, development of simple diagnostic features for SSA/P and definition of diagnostic criteria based on retrospective assessments of M-NBI; and validation stage, prospective validation and evaluation of the new diagnostic criteria.
Results: In the seeking stage, we identified brownish, oval, expanded crypt openings and thick-branched vessels on the surfaces of SSA/Ps. We named these "expanded crypt openings" (ECOs) and "thick and branched vessels" (TBVs), respectively. In the validation stage, we enrolled 796 polyps in 261 patients, and classified 126 polyps as NICE type 1; all these lesions were endoscopically removed and assessed histopathologically. The sensitivity, specificity, and accuracy of ECOs for SSA/Ps were 84.3%, 81.1%, and 82.4%, whereas those of TBVs were 45.1%, 68.9%, and 59.2%, respectively. M-NBI provided a sensitivity of 98% and specificity of 59.5% for discrimination of SSA/Ps from other lesions classified as NICE type 1.
Conclusions: Identification of ECOs, supplemented with TBVs, has high sensitivity for the diagnosis of SSA/P. These findings may facilitate the use of endoscopic optical diagnosis in clinical practice.
Answer: Yes, both magnifying narrow-band imaging (NBI) and magnifying chromoendoscopy (MCE) can assist experienced endoscopists in assessing the invasion depth of large sessile and flat polyps. Studies have shown that conventional colonoscopy can differentiate superficial from deep submucosal (SM) cancers with high accuracy in large sessile and flat polyps, but the addition of magnifying NBI or MCE does not significantly change the diagnostic accuracy of experienced endoscopists for deep SM cancer (PUBMED:24839918). However, in cases where the confidence level in differentiating adenomatous from hyperplastic polyps by magnifying NBI is low, all polyps should be resected and submitted for histopathological examination, regardless of the findings of additional chromoendoscopy (PUBMED:29658652).
Furthermore, a multicenter prospective study demonstrated the additional value of MCE on M-NBI, suggesting that additional MCE be recommended for lesions with low confidence or those diagnosed as ≥T1b colorectal cancer (CRC) by M-NBI (PUBMED:36739935). While NBI with magnifying colonoscopy is not superior to magnifying chromoendoscopy for estimating invasion depth, chromoendoscopy should be performed additionally if deep submucosal invasive cancer is suspected (PUBMED:26668794). Additionally, magnifying chromoendoscopy has been shown to be reliable for predicting invasion depth in early rectal neoplasms and selecting patients for local excision (PUBMED:36906661).
In summary, while conventional colonoscopy performed by experienced endoscopists is highly accurate, the use of magnifying NBI and MCE can provide additional diagnostic value, particularly in cases with low confidence or suspected deep invasion. These advanced imaging techniques can enhance the endoscopist's ability to make more informed therapeutic decisions based on the assessed depth of invasion. |
Instruction: Is there a role for adjuvant chemotherapy in pathological complete response rectal cancer tumors following neoadjuvant chemoradiotherapy?
Abstracts:
abstract_id: PUBMED:38273352
Effect of adjuvant chemotherapy on the oncological outcome of rectal cancer patients with pathological complete response. Background: Locally advanced rectal cancer is typically treated using a combination of neoadjuvant chemoradiotherapy and total mesorectal resection. While achieving pathological complete response following neoadjuvant chemoradiotherapy has been recognized as a positive prognostic factor in oncology, the necessity of adjuvant chemotherapy for locally advanced rectal cancer patients with pathological complete response after surgery remains uncertain. The objective of this meta-analysis was to examine the impact of adjuvant chemotherapy on the oncological outcomes of rectal cancer patients who attain pathological complete response after neoadjuvant chemoradiotherapy.
Methods: This meta-analysis followed the guidelines outlined in the preferred reporting items for systematic review and meta-analysis (PRISMA). The Web of Science, PubMed, and Cochrane Library databases were systematically searched to identify relevant literature.
Results: A total of 34 retrospective studies, including 9 studies from the NCBD database, involving 31,558 patients with pathological complete response rectal cancer, were included in the meta-analysis. The included studies were published between 2008 and 2023. The pooled analysis demonstrated that adjuvant chemotherapy significantly improved overall survival (HR = 0.803, 95% CI 0.678-0.952, P = 0.011), and no heterogeneity was observed (I2 = 0%). Locally advanced rectal cancer patients with pathological complete response who underwent adjuvant chemotherapy exhibited a higher 5-year overall survival rate compared to those who did not receive adjuvant chemotherapy (OR = 1.605, 95% CI 1.183-2.177, P = 0.002). However, the analysis also revealed that postoperative ACT did not lead to improvements in disease-free survival and recurrence-free survival within the same patient population. Subgroup analysis indicated that pathological complete response patients with clinical stage T3/T4, lymph node positivity, and younger than 70 years of age may benefit from adjuvant chemotherapy in terms of overall survival.
Conclusions: The findings of this meta-analysis suggest that adjuvant chemotherapy has a beneficial effect on improving overall survival among rectal cancer patients with pathological complete response. However, no such association was observed in terms of disease-free survival and recurrence-free survival.
abstract_id: PUBMED:30368569
Is adjuvant chemotherapy necessary for locally advanced rectal cancer patients with pathological complete response after neoadjuvant chemoradiotherapy and radical surgery? A systematic review and meta-analysis. Purpose: Current clinical guidelines recommended the routine use of adjuvant chemotherapy for locally advanced rectal cancer (LARC) patients. However, the effects of adjuvant chemotherapy in patients with pathological complete response (pCR) after neoadjuvant chemoradiotherapy and radical surgery showed discrepancies in different investigations.
Methods: A systematic review and meta-analysis were conducted using PubMed, Embase and Web of Science databases. All original comparative studies published in English that were related to adjuvant versus non-adjuvant chemotherapy for LARC patients with pCR were included.
Results: A total of 6 studies based on 18 centres or databases involving 2948 rectal cancer patients with pCR (adjuvant group = 1324, non-adjuvant group = 1624) were included in our overall analysis. Based on our meta-analysis, LARC patients with pCR who received adjuvant chemotherapy showed a significantly improved overall survival (OS) when compared to patients with observation (HR = 0.65, 95% CI = 0.46-0.90, P = 0.01). In addition, investigations focused on this issue based on the National Cancer Database (NCDB) were systematically reviewed in our current study. Evidence from all three analyses demonstrated that LARC patients with clinical nodal positive disease that achieved pCR might benefit the most from additional adjuvant chemotherapy.
Conclusion: Our meta-analysis indicated that adjuvant chemotherapy is associated with improved OS in LARC patients with pCR after neoadjuvant chemoradiotherapy and radical surgery.
abstract_id: PUBMED:32234159
Clinical factors of pathological complete response after neoadjuvant chemoradiotherapy in rectal cancer. Objective: To explore the feasibility of clinical factors to predict the pathological complete response after neoadjuvant chemoradiotherapy in rectal cancer. Methods: A retrospective analysis was performed on clinical factors of 162 patients with rectal cancer, who underwent neoadjuvant chemoradiotherapy in the General Hospital of People's Liberation Army from January 2011 to December 2018. According to the postoperative pathological results, the patients were divided into a pathological complete response (pCR) group and a non-pathological complete response group (non-pCR group) to check the predictive clinical factors for pCR. Results: Twenty-eight cases achieved pCR after neoadjuvant chemoradiation (17.3%, 28/162). Univariate analysis showed that patients with higher differentiation (P=0.024), tumor occupation of the bowel lumen ≤1/2 (P=0.006), earlier clinical T stage (P=0.013), earlier clinical N stage (P=0.009), a time interval between neoadjuvant chemoradiotherapy and surgery >49 days (P=0.006), and maximum tumor diameter ≤5 cm (P=0.019) were more likely to obtain pCR, and the differences were statistically significant. Multivariate analysis showed that tumor occupation of the bowel lumen ≤1/2 (P=0.01), maximum tumor diameter ≤5 cm (P=0.035), and the interval >49 days (P=0.009) were independent factors in predicting pCR after neoadjuvant therapy. Conclusion: Tumor occupation of the bowel lumen, maximum tumor diameter, and the time interval between neoadjuvant chemoradiotherapy and surgery can predict pCR in rectal cancer.
abstract_id: PUBMED:38406812
The role of adjuvant chemotherapy in rectal cancer patients with ypT0-2N0 after neoadjuvant chemoradiotherapy. Background: Neoadjuvant chemoradiotherapy has emerged as the established treatment for locally advanced rectal cancer. Nevertheless, there remains a debate regarding the necessity of adjuvant chemotherapy for patients with locally advanced rectal cancer who exhibit a favorable tumor response (ypT0-2N0) after neoadjuvant chemoradiotherapy and surgery. Thus, the objective of this study is to investigate the impact of adjuvant chemotherapy on the oncological prognosis of rectal cancer patients who have a good response to neoadjuvant chemoradiotherapy.
Materials And Methods: The study was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses protocol. Articles were searched in the Web of Science, PubMed, and Cochrane Library databases. The primary outcomes assessed were 5-year overall survival, disease-free survival, cancer-specific survival, recurrence-free survival, local recurrence, and distant metastasis. The data was summarized using a random effects model.
Results: A meta-analysis was conducted using 18 retrospective studies published between 2009 and 2023. The studies included 9 from China and 5 from Korea, involving a total of 6566 patients with ypT0-2N0 rectal cancer after neoadjuvant chemoradiotherapy. The pooled data revealed that adjuvant chemotherapy significantly improved 5-year overall survival (OR=1.75, 95% CI: 1.15-2.65, P=0.008), recurrence-free survival (OR=1.73, 95% CI: 1.20-2.48, P=0.003), and reduced distant metastasis (OR=0.68, 95% CI: 0.51-0.92, P=0.011). However, adjuvant chemotherapy did not have a significant effect on disease-free survival, cancer-specific survival, and local recurrence in ypT0-2N0 rectal cancer. Subgroup analysis indicated that adjuvant chemotherapy was beneficial in improving overall survival for ypT1-2N0 rectal cancer (OR=1.89, 95% CI: 1.13-3.19, P=0.003).
Conclusion: The findings of the meta-analysis suggest that adjuvant chemotherapy may provide benefits in terms of oncological outcomes for rectal cancer patients with ypT0-2N0 after neoadjuvant chemoradiotherapy and radical surgery. However, further prospective clinical studies are needed to confirm these findings.
abstract_id: PUBMED:36510993
Predictors of pathological complete response following neoadjuvant chemoradiotherapy for rectal cancer. Background: Neoadjuvant chemoradiotherapy (NACRT) is an established treatment option for locally advanced rectal cancer (LARC). Patients achieving pathological complete response (pCR) following NACRT have better oncological outcomes and may be subjected to wait and watch policy as well. The aim of this study was to identify predictors of pCR in LARC following NACRT.
Materials And Methods: A retrospective analysis of a prospectively maintained colorectal cancer database from January 2018 to December 2019 was undertaken. A total of 129 patients of LARC who were subjected to conventional long course NACRT, followed by surgery were included in the study. Pathological response to NACRT was assessed using Mandard grading system and response was categorized as pCR or not-pCR. Correlation between various clinico pathological parameters and pCR was determined using univariate and multivariate logistic regression analysis.
Results: Mean age of patients was 53.79 ± 1.303 years. Complete pathological response (Mandard Gr 1) was achieved in 24/129 (18.6%) patients. Age of patients more than 60 years (P = 0.011; odds ratio [OR] 3.194, 95% confidence interval [CI] 1.274-8.011), interval between last dose of NACRT and surgery >8 weeks (P = 0.004; OR 4.833, 95% CI 1.874-12.467), well-differentiated tumors (P < 0.0001; OR 32.00, 95% CI 10.14-100.97) and node-negative disease (P = 0.003; OR 111.0, 95% CI 2.51-48.03) proved to be strong predictors of pCR.
Conclusion: Older age, longer interval between NACRT and surgery, node-negative disease and favorable tumor grade help in achieving better pCR rates. Awareness of these variables can be valuable in counseling patients regarding prognosis and treatment options.
abstract_id: PUBMED:36573638
Personalized total neoadjuvant therapy versus chemotherapy during the 'wait period' versus standard chemoradiotherapy for locally advanced rectal cancer. Background: This study aimed to compare current treatment response rates with personalized Total Neoadjuvant Therapy (pTNT), against extended chemotherapy in the 'wait period' (xCRT) and standard chemoradiotherapy (sCRT) with adjuvant chemotherapy for rectal cancer.
Methods: This was a multicentre retrospective cohort analysis. Consecutive patients with rectal cancer treated with pTNT over a 3.9-year period were compared to a historical cohort of patients treated with xCRT or sCRT as part of the published WAIT Trial. pTNT patients received 8 cycles mFOLFOX6 or 6 cycles CAPOX in the neoadjuvant setting (no adjuvant treatment). Patients in the WAIT Trial received either 3 cycles 5-FU/LV during the 10-week wait period after chemoradiotherapy or standard chemoradiotherapy, followed by adjuvant chemotherapy. The primary endpoint was overall complete response (oCR) rate defined as the proportion of patients who achieved either complete clinical response (cCR) or pathological complete response (pCR).
Results: Of 284 patients diagnosed with rectal cancer during the 3.9-year period, 107 received pTNT. Forty of these were matched with 49 patients from the WAIT Trial (25 received xCRT and 24 received sCRT). There was a significant difference in oCR between the groups (pTNT n = 21, xCRT n = 6, sCRT n = 7, P = 0.043). Of the patients that underwent surgery, pCR occurred in 13 patients with no significant difference between groups (P = 0.415). There were no significant differences in 2-year disease-free survival or overall survival.
Conclusion: Compared with sCRT and xCRT, pTNT results in a significantly higher complete response rate which may facilitate organ preservation.
abstract_id: PUBMED:27935051
The survival impact of delayed surgery and adjuvant chemotherapy on stage II/III rectal cancer with pathological complete response after neoadjuvant chemoradiation. Neoadjuvant concurrent chemoradiation (CCRT) is standard treatment for clinical stage II/III rectal cancers. However, whether patients with pathological complete response (pT0N0, pCR) should receive adjuvant chemotherapy and whether delayed surgery will influence the pCR rate remains controversial. A nationwide population study was conducted using the Taiwan Cancer Registry Database from January 2007 to December 2013. Kaplan-Meier survival analysis was performed. Cox proportional hazards models were used to estimate multivariate adjusted hazard ratios (HR) and 95% confidence intervals (95% CI). Of the 1,914 patients who received neoadjuvant CCRT, 259 (13.6%) achieved pCR and had better survival (adjusted HR: 0.37, 95% CI: 0.24-0.58; p < 0.001). The cumulative rate of pCR rose to 83.4% in the 9th week and slowly reached a plateau after the 11th week. Among the patients with pCR, those who received adjuvant chemotherapy had no survival benefits compared to those without adjuvant chemotherapy (adjusted HR: 0.72, 95% CI: 0.27-1.93; p = 0.52). By subgroup analysis, those younger than 70 years old who received adjuvant chemotherapy had a better survival benefit than those without adjuvant chemotherapy (adjusted HR: 0.19, 95% CI: 0.04-0.97; p = 0.046). Delaying surgery to 9-12 weeks after the end of neoadjuvant CCRT can maximize the pCR rate, which is correlated with better survival. Adjuvant chemotherapy may be considered in patients with pCR and aged <70 years, but further prospectively randomized controlled trials are warranted to validate these findings.
abstract_id: PUBMED:27044403
Is adjuvant chemotherapy necessary for patients with pathological complete response after neoadjuvant chemoradiotherapy and radical surgery in locally advanced rectal cancer? Long-term analysis of 40 ypCR patients at a single center. Objectives: According to practice guidelines, adjuvant chemotherapy (ACT) is required for all patients with locally advanced rectal cancer who have received neoadjuvant chemoradiotherapy (NCRT) and total mesorectal excision (TME). The objective of this study was to determine whether ACT is necessary for patients achieving pathological complete response (pCR) after NCRT followed by surgery.
Methods: By retrospectively reviewing a prospectively collected database in our single tertiary care center, 210 patients with locally advanced rectal cancer who underwent NCRT followed by TME were identified between February 2005 and August 2013. All patients achieving ypCR were enrolled in this study, in which who underwent ACT (chemo group) and who did not (non-chemo group) were compared in terms of local recurrence (LR) rate, 5-year disease-free survival (DFS) rate and overall survival (OS) rate.
Results: Forty consecutive patients with ypCR were enrolled, 19 (47.5 %) in chemo group and 21 (52.5 %) in non-chemo group. After a median follow-up of 57 months, five patients developed systemic recurrences, with the 5y-DFS rate of 83.5 %. No LR occurred in the two groups. The 5y-DFS rates for patients in chemo group and non-chemo group was 90.9 and 76.0 %, respectively, showing no statistically significant difference (p = 0.142). Multivariate analysis showed that tumor grade was the only independent prognostic factor for 5y-DFS and 5y-OS.
Conclusions: Results of this study suggested that it may not be necessary for all rectal cancer patients with ypCR after NCRT and radical surgery to receive ACT. Prospective randomized trials are warranted to further determine the value of ACT for ypCR patients.
abstract_id: PUBMED:33419188
Adding Three Cycles of CAPOX after Neoadjuvant Chemoradiotherapy Increases the Rates of Complete Response for Locally Advanced Rectal Cancer. Background And Objectives: the total neoadjuvant chemoradiotherapy (TNT) includes different strategies, but the most appropriate model remains uncertain. The purpose of this retrospectively study was to evaluate the safety and pathological response in the consolidation chemotherapy model.
Methods: patients with cT3/T4 or TxN + M0 rectal cancer that were receiving neoadjuvant chemoradiotherapy (CRT) (50 Gy with oral capecitabine)/TNT (CRT followed by three cycles of CAPOX) during September 2017 to September 2019 in our department were included. All of the patients were recommended to receive radical surgery.
Results: a total of 197 patients were included. Eighty-one patients received CRT, while one hundred and sixteen patients received TNT. Nine patients did not undergo surgery because of the distant metastases (one patient (1.2%) in CRT group, two patients (1.7%) in TNT group) or a refusal of resection (two patients in CRT group, four patients in TNT group). The pathological complete response (pCR) rate was 32.7% in TNT compared with 12.8% in CRT (p = 0.002). There was no statistically significant difference in grade 3 acute toxicities of neoadjuvant treatment and surgical complications between the two groups.
Conclusions: the consolidation chemotherapy model is safe for patients with locally advanced rectal cancer and it has a high pCR rate. The long-term follow-up is necessary to be evaluated in a future prospective, randomized trial.
abstract_id: PUBMED:24849731
Is there a role for adjuvant chemotherapy in pathological complete response rectal cancer tumors following neoadjuvant chemoradiotherapy? Purpose: To investigate the contribution of neoadjuvant chemotherapy in rectal cancer patients with pathological complete response (pCR).
Methods: Data were collected on all consecutive locally advanced rectal cancer patients treated with neoadjuvant chemotherapy and later resected in our institution between 2001 and 2013. Surgery was performed by a single proctology team, and tumor specimens were evaluated by the hospital pathologists.
Results: The medical records of 260 patients were analyzed, and 54 patients of those patients were found to have achieved pCR (20.8 %). Two of those patients were lost to follow-up. Thirty-five of the 54 pCR patients received adjuvant chemotherapy (Group A) and 17 did not (Group B). With the sole exception of the Group A patients being younger than the Group B patients (60.9 ± 11.9 vs. 68.7 ± 10.8 years, respectively, p = 0.0272), all other evaluated parameters were identical between the two groups. There was no advantage for the administration of adjuvant chemotherapy for disease-free survival (DFS) and overall survival (OS).
Conclusions: Adjuvant chemotherapy played no part in disease-free survival and OS of patients with rectal cancer who had been treated with neoadjuvant chemotherapy and achieved pCR. Our findings indicate a tendency for adjuvant chemotherapy to be administered to younger rectal cancer patients. A randomized trial should be conducted to resolve the question of whether they derive any benefit from it.
Answer: The role of adjuvant chemotherapy (ACT) in rectal cancer patients who achieve a pathological complete response (pCR) following neoadjuvant chemoradiotherapy (NCRT) is a subject of ongoing debate. Several studies and meta-analyses have been conducted to address this question.
A meta-analysis by PUBMED:38273352 found that ACT significantly improved overall survival (OS) in rectal cancer patients with pCR, with a higher 5-year OS rate compared to those who did not receive ACT. However, no improvements were observed in disease-free survival (DFS) and recurrence-free survival (RFS). Subgroup analysis suggested that patients with clinical stage T3/T4, lymph node positivity, and those younger than 70 years might benefit from ACT in terms of OS.
Similarly, another meta-analysis by PUBMED:30368569 indicated that ACT is associated with improved OS in locally advanced rectal cancer (LARC) patients with pCR after NCRT and radical surgery. Evidence from the National Cancer Database (NCDB) also suggested that LARC patients with clinical nodal positive disease who achieved pCR might benefit the most from additional ACT.
Conversely, a study by PUBMED:27935051 found that among patients with pCR, those who received ACT had no survival benefits compared to those without ACT. However, subgroup analysis showed that patients younger than 70 years old who received ACT had better survival benefits.
Another study by PUBMED:27044403 suggested that it might not be necessary for all rectal cancer patients with ypCR after NCRT and radical surgery to receive ACT, as no significant difference in 5-year DFS and OS rates was observed between patients who received ACT and those who did not.
PUBMED:24849731 also reported that there was no advantage in DFS and OS for the administration of ACT in patients with pCR who had been treated with neoadjuvant chemotherapy.
In summary, while some studies and meta-analyses suggest a potential benefit of ACT in improving OS for certain subgroups of rectal cancer patients with pCR after NCRT, other studies do not find a clear survival benefit. The decision to administer ACT in this context may depend on individual patient factors, including clinical stage, lymph node status, age, and possibly other clinical factors that predict pCR (PUBMED:32234159, PUBMED:36510993). Further prospective randomized trials are warranted to clarify the role of ACT in this patient population. |
Instruction: Is unipolar mania a distinct subtype?
Abstracts:
abstract_id: PUBMED:27569010
Differences in Subjective Experience Between Unipolar and Bipolar Depression. Introduction: It is important to make a distinction between bipolar and unipolar depression because treatment and prognosis are different. Since the diagnosis of the two conditions is purely clinical, finding symptomatic differences is useful.
Objectives: Find differences in subjective experience (first person) between unipolar and bipolar depression.
Methods: Phenomenological-oriented qualitative exploratory study of 12 patients (7 with bipolar depression and 5 with unipolar depression, 3 men and 9 women). We used a semi-structured interview based on Examination of Anomalous Self-Experience (EASE).
Results: The predominant mood in bipolar depression is emotional dampening; in unipolar depression it is sadness. The bodily experience in bipolar depression is of a heavy, tired body, an element that inserts itself between the desire to act and the execution of actions and becomes an obstacle to movement. In unipolar depression it is of a body that feels more comfortable with stillness than with activity, like an everyday laziness. Cognition and the stream of consciousness: in bipolar depression, compared with unipolar, thinking is slower, as if it had to overcome obstacles in its course, and it is more difficult to understand what is heard or read. Future perspective: in bipolar depression, hopelessness is stronger and broader than in unipolar, as if the very possibility of hope were lost.
Conclusions: Qualitative differences in predominant mood, bodily experience, cognition and future perspective were found between bipolar and unipolar depression.
abstract_id: PUBMED:24210629
Unipolar mania: a distinct entity? Background: Whether or not unipolar mania is a separate nosological entity remains a subject of dispute. This review discusses that question in light of recent data.
Methods: Unipolar mania studies in the PUBMED database and relevant publications and cross-references were searched.
Results: There seems to be a bipolar subgroup with a stable, unipolar recurrent manic course, and 15-20% of bipolar patients may be unipolar manic. Unipolar mania may be more common in females. It seems to have a slightly earlier age of illness onset, more grandiosity, psychotic symptoms, and hyperthymic temperament, but less rapid cycling, suicidality, and comorbid anxiety disorders. It seems to have a better course of illness with better social and professional adjustment. However, its response to lithium prophylaxis seems to be worse, although its response to valproate is the same as that of classical bipolar disorder.
Limitations: The few studies on the subject are mainly retrospective, and the primary methodological criticism is the uncertainty of the diagnostic criteria for unipolar mania.
Conclusions: The results indicate that unipolar mania displays some different clinical characteristics from those of classical bipolar disorder. However, whether or not it is a separate nosological entity has not been determined due to the insufficiency of relevant data. Further studies with standardized diagnostic criteria are needed. Considering unipolar mania as a course specifier of bipolar disorder could be an important step in this respect.
abstract_id: PUBMED:17445513
Is unipolar mania a distinct subtype? Background: Some recent reports raised the question whether unipolar mania, without severe or mild depression, really exists and whether it defines a distinct disorder. Literature on this topic is still scarce, although this was a matter of debate since several decades.
Method: Eighty-seven inpatients with Diagnostic and Statistical Manual of Mental Disorder, Revised Third Edition, manic episode and at least 3 major affective episodes, in 10 years of illness duration, were systematically evaluated to collect demographic and clinical information. The symptomatological evaluation was conducted by means of the Comprehensive Psychopathological Rating Scale. Clinical features, social disability, first-degree family history, and temperaments were compared between unipolar and bipolar manics.
Results: Nineteen (21.8%) of 87 patients presented a course of illness characterized by recurrent unipolar manic episodes without history of major or mild depression (MAN). When this group was compared with 68 (78.2%) manic patients with a previous history of depressive episodes (BIP), we found substantial similarities in most demographic, familial, and clinical characteristics. MAN group reported more congruent psychotic symptoms and more frequent chronic course of the current episode in comparison to BIP group. In the MAN patients, we also observed a high percentage of hyperthymic temperament and a complete absence of depressive temperament. This latter difference was statistically significant. MAN patients compared with BIP ones also reported lower severity scores in social, familial, and work disability, and they showed less depressive features, hostility, and anxiety.
Conclusion: The numerous demographic, clinical, and psychopathological overlapping characteristics in unipolar and bipolar mania raise questions about the general nosographic utility of this categorization. Nonetheless, our data suggest a clinical and prognostic validity of keeping unipolar manic patients as a separate subgroup, in particular, as social adjustment and disability are concerned.
abstract_id: PUBMED:31818781
Unipolar mania: Identification and characterisation of cases in France and the United Kingdom. Background: Unipolar mania is a putative subtype of bipolar disorder (BD) in which individuals experience recurrent manic but not major depressive episodes. Few studies of unipolar mania have been conducted in developed countries and none in the UK. This study aimed to identify and characterise people with unipolar mania in the UK and France.
Methods: People with unipolar mania were ascertained using a South London UK electronic case register and a French BD case series. Each unipolar mania group was compared to a matched group of people with BD who have experienced depressive episodes.
Results: 17 people with unipolar mania were identified in South London and 13 in France. The frequency of unipolar mania as a percentage of the BD clinical population was 1.2% for the South London cohort and 3.3% for the French cohort. In both cohorts, people with unipolar mania experienced more manic episodes than people with BD, and in the French cohort were more likely to experience a psychotic illness onset and more psychiatric admissions. Treatment and self-harm characteristics of people with unipolar mania were similar to people with BD.
Limitations: The relatively small number of people with unipolar mania identified by this study limits its power to detect differences in clinical variables.
Conclusions: People with unipolar mania can be identified in France and the UK, and they may experience a higher frequency of manic episodes but have similar treatment and self-harm characteristics as people with BD.
abstract_id: PUBMED:34837882
Postpartum anhedonia: Emergent patterns in bipolar and unipolar depression. The objective of this study was to identify differences in the longitudinal course anhedonia symptoms during postpartum in women diagnosed with unipolar or bipolar disorder. Female participants diagnosed with either bipolar (n = 104) or unipolar (n = 136) depression at week 20 during pregnancy were evaluated prospectively at weeks 2, 12, 26, and 52 postpartum using clinical interviews. A semi-parametric, group-based mixture model was applied to separate distinct longitudinal patterns of symptoms of anhedonia. Across time, among those who reported anhedonia, twice as many women had the diagnoses of bipolar depression relative to unipolar depression (65.03% versus 39.47%, respectively). Moreover, the rate and stability of anhedonia was higher in women with bipolar relative to unipolar depression. Across groups, anhedonia was associated with significantly higher depressive symptom severity. Anhedonia is a more stable and frequent symptom in women with postpartum bipolar relative to unipolar depressive disorder.
abstract_id: PUBMED:2149654
The concept of distinct but voluminous groups of bipolar and unipolar diseases. III. Bipolar and unipolar comparison. Comparing unipolar diseases (n = 121) as one group with bipolar diseases (n = 86) as another group (both groups including affective and schizoaffective disorders) relevant differences were found in sex distribution, age at onset, premorbid personality, long-term course and some aspects of long-term outcome. Although building two voluminous groups of "unipolar diseases" and "bipolar diseases" runs some risk of inhomogeneity, this danger could perhaps be limited by referring to the "affective subtype" and the "schizoaffective subtype".
abstract_id: PUBMED:30992610
Impulsivity differences between bipolar and unipolar depression. Background: Even though particularly bipolar depression and unipolar depression seem to be similar, they show differences in terms of the etiology, phenomenology, course, and treatment process. Bipolar depression is associated with mood lability, motor retardation, and hypersomnia to a larger extent. Early age of onset, a high frequency of depressive episodes, and history of bipolar disease in the family are suggestive of bipolar disorder (BD) rather than major depression. Bipolar and unipolar disorders are also associated with increased impulsivity during illness episodes. However, there is little information about impulsivity during euthymia in these mood disorders. The aim of this study was to illustrate the difference in impulsivity in euthymic bipolar and unipolar patients.
Materials And Methods: Impulsivity was evaluated by the Barratt Impulsiveness Scale (BIS-11A) in 78 interepisode BD patients, 72 interepisode unipolar disorder patients, and 70 healthy controls. The diagnosis was established by the Structured Clinical Interview for DSM (SCID). One-way between-groups ANOVA was used to compare the BIS-11A mean scores for all three groups.
Results: Impulsivity scores of the bipolar and unipolar disorder patients were significantly higher than those of controls on the total and all subscale measures. There was no difference between the bipolar and unipolar disorder groups on total, attentional, and nonplanning impulsivity measures. However, BD patients scored significantly higher than the unipolar patients on motor impulsivity measures.
Conclusions: Both interepisode bipolar and unipolar disorder patients had increased impulsivity compared to healthy individuals. There was no significant difference on attention and nonplanning impulsivity subscales; however, on the motor subscale, bipolar patients were more impulsive than unipolar disorder patients.
abstract_id: PUBMED:22949290
Unipolar mania: a distinct entity or characteristic of manic preponderance? Objective: It has been reported that fewer patients with unipolar mania respond to lithium prophylaxis as do those with classical bipolar disorder. This study aimed to determine if the difference to response to lithium is related to unipolar mania or to a high preponderance of mania during the course of bipolarity.
Materials And Methods: The study included bipolar-I patients (according to DSM-IV criteria) who had a ≥ 2-year history of either lithium or valproate prophylaxis as monotherapy. The response rates in the patients with unipolar mania and classical bipolar disorder were compared. Then, the response rates to lithium in all the patients with a manic episode rate <50% and >50%, and <80% and >80%, during their course were compared. Finally, the above comparisons were repeated, excluding the patients with unipolar mania.
Results: The study included 121 bipolar-I patients (34 unipolar mania and 87 classical bipolar disorder). The response rate to lithium prophylaxis was significantly lower in the unipolar mania group than that in the bipolar group, whereas, the response rate to valproate prophylaxis was similar in both groups. Additionally, significantly fewer patients with a manic episode rate >80% during their course responded to lithium, followed by those with a manic episode rate >50%; however, these differences disappeared when the unipolar mania group was excluded from the comparison.
Conclusion: Fewer patients with unipolar mania responded to lithium prophylaxis than those with classical bipolar disorder, which appeared to be related to unipolar mania, rather than to a high manic predominance during the disease course. On the other hand, response to valproate prophylaxis was similar in the unipolar mania and classical bipolar disorder groups.
abstract_id: PUBMED:12167505
Unipolar mania: a distinct disorder? Background: This study aimed to identify the differences between unipolar mania and classical bipolar disorder.
Methods: Patients with at least four manic episodes and at least 4 years of follow-up without any depressive episodes were classified as unipolar mania. This group was compared to other bipolar-I patients defined according to DSM-IV regarding their clinical and socio-demographic variables.
Results: The rate for unipolar mania as defined by the study criteria was found to be 16.3% in the whole group of bipolar-I patients. Unipolar manic patients tended to have more psychotic features and be less responsive to lithium prophylaxis compared to other bipolar-I patients.
Limitations: Because it was a retrospective study, there may be some minor depressive episodes left unrecorded in the unipolar mania group despite careful and thorough investigation. In addition, even with our fairly strict criteria for the diagnosis of unipolar mania, the possibility of a future depressive episode cannot be excluded.
Conclusions: Unipolar mania may be the presentation of a nosologically distinct entity.
abstract_id: PUBMED:37305515
Gait disorders in unipolar and bipolar depression. Objectives: Bipolar and unipolar depressions have a similar clinical picture, but different neurological and psychological mechanisms. These misleading similarities can lead to overdiagnosis and increased suicide risk. Recent studies show that gait is a sensitive objective marker for distinguishing the type of depression. The present study aims to compare psychomotor reactivity disorders and gait activity in unipolar and bipolar depression.
Methods: A total of 636 people aged 40.7 ± 11.2 years are studied with an ultrasound cranio-corpo-graph. They are divided into three groups - patients with unipolar depression, with bipolar depression and healthy controls. Each person performs three psychomotor tasks - a classic Unterberger task, a simplified version with open eyes and a complex version with an additional cognitive task.
Results: We find significant differences in psychomotor activity and reactivity between the three groups. Bipolar patients have more inhibited psychomotor skills than unipolar and they are both more inhibited than the norms. The simplified variant of the equilibriometric task is the most sensitive one and psychomotor reactivity is a more precise marker than psychomotor activity.
Conclusion: Both psychomotor activity and reactivity in gait could be sensitive markers for distinguishing similar psychiatric conditions. The application of the cranio-corpo-graph and the possible development of similar devices could lead to new diagnostic and therapeutic approaches, including early detection and prediction of the type of depression.
Answer: The question of whether unipolar mania is a distinct subtype remains a subject of debate in the psychiatric community. Some studies suggest that unipolar mania, characterized by recurrent manic episodes without major depressive episodes, may indeed be a separate nosological entity with distinct clinical characteristics (PUBMED:24210629; PUBMED:17445513; PUBMED:31818781; PUBMED:22949290; PUBMED:12167505).
For instance, unipolar mania may present with more grandiosity, psychotic symptoms, and a hyperthymic temperament, and it may have a slightly earlier age of illness onset compared to classical bipolar disorder. It also appears to have a better course of illness with better social and professional adjustment, although its response to lithium prophylaxis seems to be worse (PUBMED:24210629). Additionally, unipolar mania has been associated with more congruent psychotic symptoms, a higher percentage of hyperthymic temperament, and less social disability compared to bipolar mania (PUBMED:17445513).
However, the diagnostic criteria for unipolar mania are not universally agreed upon, and the studies conducted so far have been mainly retrospective with methodological limitations (PUBMED:24210629). Moreover, the relatively small number of people with unipolar mania identified in studies limits the power to detect differences in clinical variables (PUBMED:31818781).
Despite these challenges, the identification of unipolar mania as a potential subtype could have important implications for treatment and prognosis. Considering unipolar mania as a course specifier of bipolar disorder has been suggested as an important step in further understanding this condition (PUBMED:24210629).
In conclusion, while there is evidence to suggest that unipolar mania may be a distinct subtype with unique clinical features, further research with standardized diagnostic criteria and larger sample sizes is needed to definitively determine its status as a separate nosological entity. |
Instruction: Is lymphocytic thyroiditis a unique type or merely a type of Hashimoto's thyroiditis?
Abstracts:
abstract_id: PUBMED:15372359
Genetic susceptibility and immunological synapse in type 1 diabetes and thyroid autoimmune disease. Type 1 diabetes mellitus results from an immune-mediated or idiopathic destruction of the pancreatic beta cells. Its aetiopathogenesis remains to be elucidated, despite great progress in the characterisation of beta-cell antigens, T-lymphocyte and antibody markers as well as whole genome screening. The incidence of type 1 diabetes is rising in most countries. Moreover, a considerable proportion of patients initially presenting with type 2 diabetes mellitus have an underlying type 1 diabetes with a latent course. A proportion of type 1 diabetes patients have concomitant thyroid autoimmune disease, either Hashimoto's thyroiditis or Graves' disease. Whereas Hashimoto's thyroiditis shares the same destructive immune process as type 1 diabetes, Graves' disease is unique in specifically stimulating the TSH receptor through high-affinity immunoglobulins. However, both the thyroid autoimmune disorders and type 1 diabetes have susceptibility genes in common, implying shared pathways of immunopathogenesis. It has become clear that the genetic composition of a host at least partly determines the course of an immune response, leading either to an organ-specific autoimmune disease or creating a state of balance where antibodies are hallmarks of (auto)immunity but normal function prevails. Genetic factors including MHC (IDDM 1 genetic locus) and non-MHC genes (IDDM 2 - IDDM X) have been shown to determine susceptibility to autoimmunity in type 1 diabetes or lifelong tolerance. Currently, the importance of the various diabetes-associated genes is becoming clearer due to functional studies. Our review attempts to compile the relevant data that have accumulated in recent years and offers perspectives for prediction, prevention and possibly even therapy of immune-mediated endocrinopathies.
abstract_id: PUBMED:37344441
Autoimmune Polyglandular Syndrome Type 3 Complicated with IgG4-related Disease. A 52-year-old Japanese woman developed type 1 diabetes mellitus (type 1 DM) at the age of 41. She subsequently developed Hashimoto's disease and presented with swelling of both submandibular glands, which was diagnosed as IgG4-related disease (IgG4-RD). This is a rare case of a Japanese patient with autoimmune polyglandular syndrome type 3A (APS-3A) coexisting with autoimmune thyroid disease (AITD) and type 1 DM complicated by IgG4-RD. Bilateral submandibular gland resection was successfully performed without steroid therapy. We discuss the possibility that the immunological pathogenic mechanisms of APS-3A and IgG4-RD are related.
abstract_id: PUBMED:37516895
Autoimmune adrenal insufficiency in children: a hint for polyglandular syndrome type 2? Background: Primary adrenal insufficiency (PAI) in childhood is a life-threatening disease most commonly due to impaired steroidogenesis. Unlike in adulthood, autoimmune adrenalitis is a rare cause of PAI in children and can present as an isolated disorder or as a component of polyglandular syndromes, particularly type 2. Indeed, autoimmune polyglandular syndrome (APS) type 2 consists of the association of autoimmune Addison's disease with type 1 diabetes mellitus and/or Hashimoto's disease.
Case Presentation: We report the case of an 8-year-old girl who presented with Addison's disease and autoimmune thyroiditis at an early stage of life. The initial course of the disease was characterized by numerous crises of adrenal insufficiency; subsequently, the treatment was adjusted in a tertiary hospital, with improvement in disease control.
Conclusions: APS type 2 is a rare condition during childhood, probably because it may remain latent for long periods before resulting in the overt disease. We recommend an early detection of APS type 2 and an adequate treatment of adrenal insufficiency in a tertiary hospital. Moreover, we underline the importance of a regular follow-up in patients with autoimmune diseases, since unrevealed and incomplete forms are frequent, especially in childhood.
abstract_id: PUBMED:26292458
Autoimmune Conditions Associated With Type 1 Diabetes. Type 1 diabetes is the most commonly seen endocrinopathy in pediatrics. This is an autoimmune condition. Children with type 1 diabetes are at much greater risk for other autoimmune conditions, particularly autoimmune thyroiditis, most commonly Hashimoto's thyroiditis, and celiac disease. It is important for the primary care practitioner to be aware of subtle symptoms of these conditions and how to screen for them because early treatment of both conditions can lead to better diabetes control and improved health in general.
abstract_id: PUBMED:34754918
Autoimmune thyroiditis - track towards autoimmune polyendocrinopathy type III. Autoimmune polyendocrinopathies are rare diseases characterized by the coexistence of at least two endocrine diseases linked to an autoimmune mechanism; they are sometimes also associated with non-endocrine autoimmune diseases. They are divided into two main subgroups: autoimmune polyendocrinopathy type I and polyendocrinopathies type II-IV. We report a case of a 53-year-old female patient followed for 2 years for Hashimoto's thyroiditis. On admission, she was complaining of a polyuro-polydipsic syndrome, asthenia, weight loss, abdominal pain and vomiting. The clinical examination noted a dehydrated patient in poor general condition, without fever, tachycardic at 104 beats/min, and polypneic at 24 cycles/min. Laboratory tests revealed hyperglycemia (4.7 g/l), glucosuria, acetonuria, anti-GAD antibodies >2000 IU/l, and normal TSH. The 8-hour cortisol level and anti-21 hydroxylase antibodies level were normal. In this context, the patient was diagnosed with type 1 diabetes associated with Hashimoto's thyroiditis (autoimmune polyendocrinopathy type III). In conclusion, autoimmune polyendocrinopathy type III is a rare syndrome, predominantly affecting females. In our patient's case, the initial presentation of the disease was dominated by autoimmune thyroiditis, which is the most frequent endocrine autoimmunity diagnosed in adults with polyglandular autoimmune syndrome. Therefore, the recommended treatment is based on hormonal substitution.
abstract_id: PUBMED:24867187
Is lymphocytic thyroiditis a unique type or merely a type of Hashimoto's thyroiditis? Aim: Objective of the study was to clarify the role of apoptosis in the pathogenesis of lymphocytic thyroiditis (LT) and the existence of difference between Hashimoto's thyroiditis (HT) and LT.
Methods: We evaluated levels of antithyroglobulin and antithyroperoxidase antibodies, the apoptosis by in situ Cell Death Detection-TUNEL and the expression of Bcl2 and Bax by immunohistochemistry in thyroid tissues from 16 patient with HT, 10 with LT and 10 with euthyroid goiter-EG (control group).
Results: It was found that apoptosis of thyrocytes in HT (mean 3.05%, SD 1.29%) and LT (mean 2.70%, SD 1.17%) was statistically significantly higher than in EG (mean 0.56%, SD 0.23%), but the difference in the percentage of apoptotic thyrocytes between HT and LT was not statistically significant. In HT the percentage of apoptotic infiltrating lymphocytes (mean 0.59%, SD 0.23%) was smaller than in EG (mean 2.26%, SD 1.42%), but it showed no significant difference in comparison to LT. The expression of Bax in infiltrating lymphocytes in HT (mean 0.72%, SD 0.34%) was statistically significantly higher than in LT (mean 0.11%, SD 0.06%). The level of thyroglobulin was lower in HT compared to LT (P<0.01) and compared to EG (P<0.01). The level of antithyroglobulin/antithyroperoxidase antibodies was higher in HT compared to LT (P<0.01) and compared to EG (P<0.01). There was no statistically significant difference in the level of thyroglobulin and level of antibodies between LT and EG.
Conclusion: These results suggest that apoptosis represents one of the significant mechanisms in the pathogenesis of both HT and LT and that LT probably differs from HT.
abstract_id: PUBMED:18292034
Incidence of thyroid autoimmunity in children with type 1 diabetes mellitus. It is well known that patients suffering from an autoimmune disease are more prone to develop another one. The authors have previously shown a frequent occurrence of celiac disease in patients with type 1 diabetes mellitus compared to the background population. Autoimmune thyroid disease, the most common autoimmune disease associated with type 1 diabetes mellitus, generally occurs after the manifestation of diabetes, in the second decade of life. The aim of the study was to investigate the prevalence of thyroid autoimmunity as well as the frequency of autoimmune thyroid disease in patients with type 1 diabetes mellitus. They also aimed to compare the prevalence of autoimmune thyroid disease in patients with type 1 diabetes mellitus alone and in those with type 1 diabetes mellitus and celiac disease.
Methods: Screening was performed in 268 patients with type 1 diabetes mellitus alone and in 48 children with type 1 diabetes mellitus and celiac disease, with anti-peroxidase and anti-thyroglobulin antibodies. In cases of autoantibody positivity, thyroid function testing and ultrasonography confirmed autoimmune thyroid disease. According to the results, the frequency of autoantibody positivity was significantly higher in diabetic patients suffering from celiac disease (type 1 diabetes mellitus: 43 (16%), type 1 diabetes mellitus + celiac disease: 16 (33.3%), p < 0.01). Hypothyroidism due to thyroiditis was also more prevalent in patients with type 1 diabetes mellitus and celiac disease.
Conclusions: Due to increased risk, the authors emphasise the need of frequent screening for autoimmune thyroid disorder in patients with type 1 diabetes mellitus and celiac disease.
abstract_id: PUBMED:25185856
A case of autoimmune urticaria accompanying autoimmune polyglandular syndrome type III associated with Hashimoto's disease, type 1 diabetes mellitus, and vitiligo. We present a case of autoimmune polyglandular syndrome type III (APS III) associated with Hashimoto's disease, type 1 diabetes mellitus, vitiligo and autoimmune urticaria. This rare genetic disorder occurs with unknown frequency in the Polish population. It is characterised by endocrine tissue destruction resulting in the malfunction of multiple organs. Several cases of APS III associated with organ-specific autoimmune diseases such as coeliac disease, hypogonadism and myasthenia gravis, as well as organ-nonspecific or systemic autoimmune diseases such as sarcoidosis, Sjögren syndrome, and rheumatoid arthritis have been described. To the best of our knowledge, we here describe the first case of APS III associated with autoimmune thyroiditis, type 1 diabetes mellitus, vitiligo and autoimmune urticaria in an adult patient.
abstract_id: PUBMED:23540228
Type III Polyglandular Autoimmune Syndromes in children with type 1 diabetes mellitus. Introduction: Type III Polyglandular Autoimmune Syndrome (PAS III) is composed of autoimmune thyroid diseases associated with endocrinopathy other than adrenal insufficiency. This syndrome is associated with organ-specific and organ-nonspecific or systemic autoimmune diseases. The frequency of PAS syndromes in diabetic children is unknown.
Objectives: The aim of the study was to evaluate the incidence of PAS III in children with diabetes mellitus type 1.
Patients And Methods: The study consisted of 461 patients with diabetes mellitus type 1 (T1DM), who were 1-19 years of age. TSH, free thyroxin, TPO autoantibodies, and thyroglobulin autoantibodies were determined annually. Autoimmune Hashimoto's thyroiditis was diagnosed in children with positive tests for TPO Ab and Tg Ab and thyroid parenchymal hypoechogenicity on ultrasound investigation. Elevated TSI antibodies were used to diagnose Graves' disease. Additionally, IgA-class anti-endomysial antibodies were determined every year as screening for celiac disease. During clinical control, other autoimmune diseases were diagnosed. Adrenal function was examined by the diurnal rhythm of cortisol.
Results: PAS III was diagnosed in 14.5% of children: PAS IIIA (T1DM and autoimmune thyroiditis) was recognized in 11.1% and PAS IIIC (T1DM and other autoimmune disorders: celiac disease, JIA, psoriasis and vitiligo) in 3.5% of children. PAS IIIA was more prevalent in girls than in boys - 78.4% versus 21.6% (p<0.05). PAS III was observed between 1 and 5 years of life in 66.6% of children; the frequency decreased in consecutive years and then increased again in adolescence to 22.7%.
Conclusions: PAS III occurs in 14.5% of children with type 1 DM, and the incidence is positively correlated with patients' age and female gender. Children with PAS III should be carefully monitored as a group at risk for the development of other autoimmune diseases.
abstract_id: PUBMED:25402387
Precocious presentation of autoimmune polyglandular syndrome type 2 associated with an AIRE mutation. Autoimmune polyglandular syndrome type 2 (type 2 APS), or Schmidt's syndrome, is defined by the presence of Addison's disease in combination with type 1 diabetes and/or autoimmune thyroid disease. The estimated prevalence of this syndrome is 1.4-4.5 per 100,000 inhabitants and it is more frequent in middle-aged females, whilst it is quite rare in children. Type 2 APS, which shows a pattern of autosomal dominant inheritance with low penetrance, has been associated with specific HLA DR3/DQ2 and DR4/DQ8 haplotypes. However, it has been hypothesized that genetic variability in the AIRE gene, which causes type 1 APS, may play a role in more common organ-specific autoimmune conditions like type 1 diabetes, Hashimoto's disease and type 2 APS, among others. Here we present the case of an 8-year-old girl with a past medical history of type 1 diabetes diagnosed at the age of 3. She was taken to the Emergency Department because she complained of abdominal pain, nausea and vomiting, and her blood analysis revealed severe hyponatremia. She also had seizures as a consequence of the hyponatremia and frequent hypoglycemia. She was ultimately found to be suffering from autoimmune primary adrenal insufficiency. The combination of both mentioned conditions, type 1 diabetes and Addison's disease, in the absence of chronic mucocutaneous candidiasis, made a diagnosis of type 2 APS plausible in this girl. The genetic study showed two heterozygous variants: NM_000383.2:c.1411C>T (p.Arg471Cys) in exon 12 and IVS9+6G>A in intron 9 of the AIRE gene. The description of an uncommon case of type 2 APS with precocious presentation associated with an AIRE mutation in a very young girl could help to clarify the role of AIRE in the development of autoimmune diseases.
Answer: Lymphocytic thyroiditis (LT) and Hashimoto's thyroiditis (HT) are both characterized by the presence of thyroid autoantibodies and thyrocyte apoptosis, but there is evidence suggesting that LT may differ from HT. A study aimed to clarify the role of apoptosis in the pathogenesis of LT and to determine whether there is a difference between HT and LT. The study evaluated levels of antithyroglobulin and antithyroperoxidase antibodies, apoptosis by in situ Cell Death Detection-TUNEL, and the expression of Bcl2 and Bax by immunohistochemistry in thyroid tissues from patients with HT, LT, and a control group with euthyroid goiter (EG). The results showed that apoptosis of thyrocytes was significantly higher in both HT and LT compared to EG, but there was no significant difference between HT and LT. However, the expression of Bax in infiltrating lymphocytes was significantly higher in HT than in LT. Additionally, the level of thyroglobulin was lower in HT compared to LT and EG, and the level of antithyroglobulin/antithyroperoxidase antibodies was higher in HT compared to LT and EG. There was no significant difference in the level of thyroglobulin and level of antibodies between LT and EG. These findings suggest that apoptosis is a significant mechanism in the pathogenesis of both HT and LT, but LT probably differs from HT (PUBMED:24867187).
Based on this evidence, it can be inferred that lymphocytic thyroiditis is not merely a type of Hashimoto's thyroiditis but may represent a distinct entity with its own pathogenic mechanisms.
Instruction: Do graphic health warning labels have an impact on adolescents' smoking-related beliefs and behaviours?
Abstracts:
abstract_id: PUBMED:35223730
Association Between Graphic Health Warning Labels on Cigarette Packs and Smoking Cessation Attempts in Korean Adolescent Smokers: A Cross-Sectional Study. Graphic health warning labels on cigarette packs inform smokers about the health risks associated with tobacco smoking. Adolescents are generally the main group that graphic health warning labels aim to influence. This study investigated the association between graphic health warning labels on cigarette packs and attempts to quit smoking in South Korean adolescents. This cross-sectional study used data from the 2017 to 2019 Korea Youth Risk Behavior Web-based Survey, using multiple logistic regression for the analysis. The study population comprised 11,142 adolescents aged 12-18 years. The outcome variable was attempts to quit smoking among adolescent smokers who had seen graphic health warning labels. Attempts to quit smoking were higher among adolescent smokers who had seen graphic health warning labels compared to those who had not {boys, odds ratio (OR) = 1.72 [95% confidence interval (CI), 1.48-2.00]; girls, OR = 1.74 (95% CI, 1.33-2.28)}. The correlation was greater for adolescents who thought about the harm of smoking [boys, OR = 1.86 (95% CI, 1.60-2.16); girls, OR = 1.85 (95% CI, 1.41-2.43)] and the willingness to quit [boys, OR = 2.03 (95% CI, 1.74-2.36); girls, OR = 2.04 (95% CI, 1.55-2.68)] after seeing graphic health warning labels. Our findings indicate that graphic health warning labels on cigarette packs have the potential to lower smoking intentions of adolescents. We suggest that the use of graphic health warning labels is an effective policy-related intervention to reduce smoking in South Korean adolescents.
abstract_id: PUBMED:18783508
Do graphic health warning labels have an impact on adolescents' smoking-related beliefs and behaviours? Aims: To assess the impact of the introduction of graphic health warning labels on cigarette packets on adolescents at different smoking uptake stages.
Design: School-based surveys conducted in the year prior to (2005) and approximately 6 months after (2006) the introduction of the graphic health warnings. The 2006 survey was conducted after a TV advertising campaign promoting two new health warnings.
Setting: Secondary schools in greater metropolitan Melbourne, Australia.
Participants: Students in year levels 8-12: 2432 students in 2005, and 2050 in 2006, participated.
Measures: Smoking uptake stage, intention to smoke, reported exposure to cigarette packs, knowledge of health effects of smoking, cognitive processing of warning labels and perceptions of cigarette pack image.
Findings: At baseline, 72% of students had seen cigarette packs in the previous 6 months, while at follow-up 77% had seen packs and 88% of these had seen the new warning labels. Cognitive processing of warning labels increased, with students more frequently reading, attending to, thinking and talking about warning labels at follow-up. Experimental and established smokers thought about quitting and forgoing cigarettes more at follow-up. At follow-up intention to smoke was lower among those students who had talked about the warning labels and had forgone cigarettes.
Conclusions: Graphic warning labels on cigarette packs are noticed by the majority of adolescents, increase adolescents' cognitive processing of these messages and have the potential to lower smoking intentions. Our findings suggest that the introduction of graphic warning labels may help to reduce smoking among adolescents.
abstract_id: PUBMED:33553059
Effectiveness of warning graphic labels on cigarette packs in enhancing late-teenagers' perceived fear of smoking-related harms in Bangkok, Thailand. Background: This study investigated the level of fear of smoking-related harms for teenagers of different genders, different levels of smoking behaviour, and differences in smoking levels of friends and family members, as influenced by warning graphic images on cigarette packs. The study also compared levels of this fear in categories based on participants' perception (e.g., scarier or less scary images).
Design and Methods: The sample group was 353 undergraduate students at King Mongkut's University of Technology Thonburi in Bangkok, Thailand. Questionnaires containing 21 warning graphic images, aimed at measuring levels of fear of smoking-related harms, were administered. Both descriptive statistics and inferential statistics, such as independent and dependent t-tests, were used to analyse the data.
Results: The results showed that warning graphic images exhibiting patients suffering from cancers (e.g., lung cancer or laryngeal cancer) and images of damaged body parts were perceived as the scariest warning images. In contrast, images that did not illustrate serious disease suffered by smokers were perceived as the least scary images. The scariest images generated a significantly higher level of fear of smoking-related harms than the least scary images. In addition, non-smoking participants were more sensitive to scary warning images than smoking participants. It was also found that the level of fear of smoking-related harms was significantly based on individual cognitive judgment, and it was not affected by the influence of social groups such as friends and family members.
Conclusions: Developing effective warning graphic images could directly contribute to individuals' perceived health risks and danger associated with smoking.
abstract_id: PUBMED:27617273
The Age-related Positivity Effect and Tobacco Warning Labels. Objectives: This study tested whether age is a factor in viewing time for tobacco warning labels. The approach drew from previous work demonstrating an age-related positivity effect, whereby older adults show preferences toward positive and away from negative stimuli.
Methods: Participants were 295 daily smokers from Appalachian Ohio (age range: 21-68). All participants took part in an eye-tracking paradigm that captured the attention paid to elements of health warning labels in the context of magazine advertisements. Participants also reported on their past cessation attempts and their beliefs about the dangers of smoking.
Results: Consistent with theory on age-related positivity, older age predicted weaker beliefs about smoking risks, but only among those with no past-year quit attempts. In support of our primary hypothesis, older age was also related to a lower percentage of time spent viewing tobacco warning labels, both overall (text + image) and for the graphic image alone. These associations remained after controlling for cigarettes smoked per day.
Conclusions: Overall, findings suggest that age is an important consideration for the design of future graphic warning labels and other tobacco risk communications. For older adults, warning labels may need to be tailored to overcome the age-related positivity effect.
abstract_id: PUBMED:38022915
Smoking cessation policy and treatments derived from the protective motivation of smokers: a study on graphic health warning labels. Introduction: Smoking is a leading public health risk. Many countries are reducing the demand for tobacco through graphic health warning labels (GHWLs). This study aims to explore smokers' perceptions of GHWLs and analyze the effect of GHWLs on their behavioral intentions to quit smoking.
Methods: A theoretical model was designed by synthesizing protection motivation theory, an extension of the extended parallel process model, and the theory of planned behavior. We collected a cross-sectional sample of 547 anonymous smokers through a stratified random sampling strategy. GHWLs published in 2011 by the US Food and Drug Administration were used in the survey to assess smokers' responses to them, and the hypotheses were then validated through structural equation models.
Results: The results suggest that perceived severity, perceived vulnerability, response efficacy, and health anxiety have a significant impact on smokers' protection motivation. Furthermore, smokers' protection motivation directly impacts the behavioral intention to quit smoking and indirectly influences intention to quit through attitudes.
Discussion: These findings have practical implications for the implementation and improvement of GHWLs policies. Meanwhile, this study enriches the literature on public health protection measures (i.e., GHWLs) and smokers' behavioral intention to quit smoking.
abstract_id: PUBMED:34886370
The Influence of Episodic Future Thinking and Graphic Warning Labels on Delay Discounting and Cigarette Demand. Delay discounting and operant demand are two behavioral economic constructs that tend to covary, by degree, with cigarette smoking status. Given historically robust associations between adverse health outcomes of smoking, a strong preference for immediate reinforcement (measured with delay discounting), and excessive motivation to smoke cigarettes (measured with operant demand), researchers have made numerous attempts to attenuate the extent to which behaviors corresponding to these constructs acutely appear in smokers. One approach is episodic future thinking, which can reportedly increase the impact of future events on present decision making as well as reduce the reinforcing value of cigarettes. Graphic cigarette pack warning labels may also reduce smoking by increased future orientation. Experiment 1 evaluated the combined effects of episodic future thinking and graphic warning labels on delay discounting; Experiment 2 evaluated solely the effects of episodic future thinking on delay discounting and operant demand. We observed no statistically significant effects of episodic future thinking when combined with graphic warning labels or when assessed on its own. These results serve as a call for further research on the boundary conditions of experimental techniques reported to alter behaviors associated with cigarette smoking.
abstract_id: PUBMED:27009143
Speaking out about physical harms from tobacco use: response to graphic warning labels among American Indian/Alaska Native communities. Objective: This study is the first to explore the impact of graphic cigarette labels with physical harm images on members of American Indian/Alaska Native (AI/AN) communities. The aim of this article is to investigate how AI/AN respond to particular graphic warning labels.
Methods: The parent study recruited smokers, at-risk smokers and non-smokers from three different age groups (youths aged 13-17 years, young adults aged 18-24 years and adults aged 25+ years) and five population subgroups with high smoking prevalence or smoking risk. Using nine graphic labels, this study collected participant data in the field via an iPad-administered survey and card sorting of graphic warning labels. This paper reports on findings for AI/AN participants.
Results: After viewing graphic warning labels, participants rated their likelihood of talking about smoking risks to friends, parents and siblings higher than their likelihood of talking to teachers and doctors. Further, this study found that certain labels (eg, the label of the toddler in the smoke cloud) made them think about their friends and family who smoke.
Conclusions: Given the influence of community social networks on health beliefs and attitudes, health communication using graphic warning labels could effect change in the smoking habits of AI/AN community members. Study findings suggest that graphic labels could serve as stimuli for conversations about the risks of smoking among AI/AN community members, and could be an important element of a peer-to-peer smoking cessation effort.
abstract_id: PUBMED:30994083
Emotional Impact and Perceived Effectiveness of Text-Only versus Graphic Health Warning Tobacco Labels on Adolescents. The study of smoking in adolescence is of major importance as nicotine dependence often begins in younger groups. Tobacco health warnings have been introduced to inform people of the negative consequences of smoking. This study assessed the emotions and perceived effectiveness of two formats of tobacco warnings on adolescents: text-only versus graphic warning labels. In addition, we analyzed how emotions predicted their perceived effectiveness. In a cross-sectional study, 413 adolescents (131 smokers, 282 non-smokers) between 13 and 20 years of age rated their emotions (valence and arousal) and perceived effectiveness towards a set of tobacco warnings. Results showed that graphic warnings evoked higher arousal than text-only warning labels (p = .038). Most of the warning labels also evoked unpleasantness, with smokers reporting higher unpleasantness regarding text-only warnings compared to non-smokers (p = .002). In contrast, perceived effectiveness of the warnings was lower in smokers than in non-smokers (p = .029). Finally, high arousal and being a non-smoker explained 14% of the variance in perceived effectiveness of the warnings. Given the role that warnings may play in increasing health awareness, these findings highlight how smoking status and emotions are important predictors of the way adolescents consider tobacco health labels to be effective.
abstract_id: PUBMED:29176913
Addiction Treatment Clients' Reactions to Graphic Warning Labels on Cigarette Packs. Graphic warning labels (GWLs) on cigarette packs have been tested among diverse groups at high risk for tobacco use. However, little is known about the effectiveness of GWL interventions for persons with substance use disorders, whose smoking prevalence is 3 to 4 times that of the general population. After an experimental study which exposed clients in residential addiction treatment to GWLs for 30 days, we conducted five focus groups with trial participants (N = 33) to explore how exposure to the labels may have impacted their readiness to quit smoking. Focus group interviews were analyzed thematically. Interviewees reported that GWLs were more effective than text-based warnings for increasing quit intentions due to greater cognitive and emotional impact. Male and female interviewees expressed gender-specific reactions to the labels. Addiction treatment programs are a strategic site for GWL and other tobacco interventions due to the tobacco-vulnerable populations they serve.
abstract_id: PUBMED:37891513
Effects of cigarette package colors and warning labels on marlboro smokers' risk beliefs, product appraisals, and smoking behavior: a randomized trial. Objective: Plain packaging and graphic warning labels are two regulatory strategies that may impact cigarette risk beliefs and reduce consumption, but data are needed to better understand how smokers respond to such regulations.
Methods: Adult, daily, Marlboro non-menthol smokers (Red [n = 141] or Gold [n = 43]) completed a mixed factorial randomized trial. Participants smoked their usual cigarettes during baseline (5-days) and were randomized to receive cigarette packs with a warning label manipulation (graphic vs. text-only). Within each warning label condition, participants completed three within-subjects pack color manipulations (red, gold, plain), each lasting 15 days. Participants were blinded to the fact that all packs contained their usual cigarettes. Mixed-effects models examined between- and within-subject differences on risk beliefs, product perceptions, and smoking behavior.
Results: Warning type and package color did not impact cigarette consumption or subjective ratings. However, use increased in all conditions (2.59-3.59 cigarettes per day) relative to baseline. While smokers largely held correct risk beliefs at baseline (Mean = 6.02, SE = 0.17, Range:0-8), the cumulative number of incorrect or uncertain cigarette risk beliefs increased from baseline in all pack color manipulations in the text (IRR range = 1.70-2.16) and graphic (IRR range = 1.31-1.70) warning conditions. Across all pack color periods, those in the graphic (vs. text) warning condition had reduced odds of reporting their study cigarettes as 'safer' than regular cigarettes (OR range = 0.22-0.32).
Conclusions: Pack color modification may increase uncertainty about several key cigarette risk beliefs, though graphic warnings may attenuate these effects. Regulatory agencies could consider supporting policy changes with information campaigns to maximize public knowledge.
Trial Registration: November 25, 2014; Registration number: NCT02301351.
Answer: Yes, graphic health warning labels do have an impact on adolescents' smoking-related beliefs and behaviors. Studies have shown that graphic health warning labels on cigarette packs are noticed by the majority of adolescents, increase adolescents' cognitive processing of these messages, and have the potential to lower smoking intentions (PUBMED:18783508). Specifically, in Korean adolescent smokers, attempts to quit smoking were higher among those who had seen graphic health warning labels compared to those who had not, with a greater correlation observed among adolescents who thought about the harm of smoking and the willingness to quit after seeing the labels (PUBMED:35223730).
Additionally, graphic warning labels have been found to evoke higher arousal than text-only warning labels, and this emotional response is an important predictor of the perceived effectiveness of the warnings (PUBMED:30994083). In American Indian/Alaska Native communities, graphic warning labels stimulated conversations about the risks of smoking among community members, suggesting that they could be an important element of a peer-to-peer smoking cessation effort (PUBMED:27009143).
Furthermore, a study of the emotional impact and perceived effectiveness of tobacco warning labels among adolescents found that graphic warnings evoked higher arousal than text-only labels, that perceived effectiveness was lower among smokers than non-smokers, and that high arousal and being a non-smoker predicted perceiving the warnings as more effective (PUBMED:30994083). This indicates that graphic health warning labels can be a significant deterrent to smoking initiation and an encouragement for smoking cessation among adolescents.
Instruction: Is follow-up of lung cancer patients after resection medically indicated and cost-effective?
Abstracts:
abstract_id: PUBMED:8787445
Is follow-up of lung cancer patients after resection medically indicated and cost-effective? Background: There are no guidelines for the appropriate follow-up of patients after pulmonary resection for lung cancer.
Methods: Three-hundred fifty-eight consecutive patients who had undergone complete resections of non-small cell lung cancer between 1987 and 1991 were evaluated for tumor recurrence and development of second primary tumors. Recurrences were categorized by site (local or distant), mode of presentation (symptomatic or asymptomatic), treatment given (curative intent or palliative), and duration of overall survival.
Results: Recurrences developed in 135 patients (local only, 32; local and distant, 13; and distant only, 90). Of these, 102 were symptomatic and 33 were asymptomatic (most diagnosed by screening chest roentgenogram). Forty patients received treatment with curative intent (operation or radiation therapy > 50 Gy) and 95 were treated palliatively. The median survival duration from time of recurrence was 8.0 months for symptomatic patients and 16.6 months for asymptomatic patients (p = 0.008). Multivariate analysis shows that disease-free interval (greater than 12 months or less than or equal to 12 months) was the most important variable in predicting survival after recurrence and that mode of presentation, site of recurrence, initial stage, and histologic type did not significantly affect survival. New primary tumors developed in 35 patients.
Conclusions: Although detection of asymptomatic recurrences gives a lead time bias of 8 to 10 months, mode of treatment and overall survival duration are not greatly affected by this earlier detection. Disease-free interval appears to be the most important determinant of survival. Screening for asymptomatic recurrences in patients who have had lung cancer is unlikely to be cost-effective. Frequent follow-up and extensive radiologic evaluation of patients after operation for lung cancer are probably unnecessary.
abstract_id: PUBMED:26094172
Cost-effectiveness of stereotactic radiation, sublobar resection, and lobectomy for early non-small cell lung cancers in older adults. Objectives: Stereotactic ablative radiation (SABR) is a promising alternative to lobectomy or sublobar resection for early lung cancer, but the value of SABR in comparison to surgical therapy remains debated. We examined the cost-effectiveness of SABR relative to surgery using SEER-Medicare data.
Materials And Methods: Patients age ≥66 years with localized (<5 cm) non-small cell lung cancers diagnosed from 2003-2009 were selected. Propensity score matching generated cohorts comparing SABR with either sublobar resection or lobectomy. Costs were determined via claims. Median survival was calculated using the Kaplan-Meier method. Incremental cost-effectiveness ratios (ICERs) were calculated and cost-effectiveness acceptability curves (CEACs) were constructed from joint distribution of incremental costs and effects estimated by non-parametric bootstrap.
Results: In comparing SABR to sublobar resection, 5-year total costs were $55,120 with SABR vs. $77,964 with sublobar resection (P<0.001) and median survival was 3.6 years with SABR vs. 4.1 years with sublobar resection (P=0.95). The ICER for sublobar resection compared to SABR was $45,683/life-year gained, yielding a 46% probability that sublobar resection is cost-effective. In comparing SABR to lobectomy, 5-year total costs were $54,968 with SABR vs. $82,641 with lobectomy (P<0.001) and median survival was 3.8 years with SABR vs. 4.7 years with lobectomy (P=0.81). The ICER for lobectomy compared to SABR was $28,645/life-year gained, yielding a 78% probability that lobectomy is cost-effective.
Conclusion: SABR is less costly than surgery. While lobectomy may be cost-effective compared to SABR, sublobar resection is less likely to be cost-effective. Assessment of the relative value of SABR versus surgical therapy requires further research.
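As a rough back-of-the-envelope check using the point estimates reported above (an illustration only, not a calculation taken from the paper, whose published figure also reflects bootstrap estimation and rounding), the sublobar-resection ICER follows from the usual incremental ratio:
\[
\mathrm{ICER} = \frac{C_{\text{sublobar}} - C_{\text{SABR}}}{E_{\text{sublobar}} - E_{\text{SABR}}} = \frac{\$77{,}964 - \$55{,}120}{4.1\ \text{yr} - 3.6\ \text{yr}} \approx \$45{,}700 \text{ per life-year gained},
\]
which is close to the $45,683/life-year reported in the abstract.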
abstract_id: PUBMED:32171847
Cost-Effectiveness of Follow-Up for Subsolid Pulmonary Nodules in High-Risk Patients. Objective: To evaluate the cost-effectiveness of a number of follow-up guidelines and variants for subsolid pulmonary nodules.
Methods: We used a simulation model informed by data from the literature and the National Lung Screening Trial to simulate patients with ground-glass nodules (GGNs) detected at baseline computed tomography undergoing follow-up. The nodules were allowed to grow and develop solid components over time. We tested the guidelines generated by varying follow-up recommendations for low-risk nodules, that is, pure GGNs or those stable over time. For each guideline, we computed average US costs and quality-adjusted life-years (QALYs) gained per patient and identified the incremental cost-effectiveness ratios of those on the efficient frontier. In addition, we compared the costs and effects of the most recently released version of the Lung Computed Tomography Screening Reporting and Data System (Lung-RADS), version 1.1, with those of the previous version, 1.0. Finally, we performed sensitivity analyses of our results by varying several relevant parameters.
Results: Relative to the no follow-up scenario, the follow-up guideline system that was cost-effective at a willingness-to-pay of $100,000/QALY and had the greatest QALY assigned low-risk nodules a 2-year follow-up interval and stopped follow-up after 2 years for GGNs and after 5 years for part-solid nodules; this strategy yielded an incremental cost-effectiveness ratio of $99,970. Lung-RADS version 1.1 was found to be less costly but no less effective than Lung-RADS version 1.0. These findings were essentially stable under a range of sensitivity analyses.
Conclusions: Ceasing follow-up for low-risk subsolid nodules after 2 to 5 years of stability is more cost-effective than perpetual follow-up. Lung-RADS version 1.1 was cheaper but similarly effective to version 1.0.
abstract_id: PUBMED:11936523
Regular follow-up after curative resection of nonsmall cell lung cancer: a real benefit for patients? Even though complete resection is regarded as the only curative treatment for nonsmall cell lung cancer (NSCLC), >50% of resected patients die from a recurrence or a second primary tumour of the lung within 5 yrs. It remains unclear whether follow-up in these patients is cost-effective and whether it can improve the outcome due to early detection of recurrent tumour. The benefit of regular follow-up in a consecutive series of 563 patients, who had undergone potentially curative resection for NSCLC at the University Hospital, was analysed. The follow-up consisted of clinical visits and chest radiography according to a standard protocol for up to 10 yrs. Survival rates were estimated using the Kaplan-Meier analysis method and the cost-effectiveness of the follow-up programme was assessed. A total of 23 patients (6.4% of the group with lobectomy) underwent further operation with curative intent for a second pulmonary malignancy. The regular follow-up over a 10-yr period provided the chance for a second curative treatment to 3.8% of all patients. The calculated costs per life-yr gained were 90,000 Swiss Francs, far above those of comparable large-scale surveillance programmes. Based on these data, the intensity and duration of the follow-up were reduced.
abstract_id: PUBMED:15316214
Cost-effectiveness of early intervention: comparison between intraluminal bronchoscopic treatment and surgical resection for T1N0 lung cancer patients. Background: For patients with early-stage lung cancer (ESLC) and severe comorbidities, the cost-effectiveness of early intervention may be reduced by screening and treatment-related morbidity and mortality in addition to the risk for non-cancer-related deaths.
Objectives: The use of bronchoscopic treatment (BT) for centrally located ESLC as a minimally invasive technique has raised the question of whether this approach is more cost-effective than standard surgical resection in the above-mentioned cohort of patients.
Methods: The cost-effectiveness of BT of 32 medically inoperable patients with intraluminal tumor has been compared to a matched control group of surgically treated stage IA cancer patients.
Results: Median follow-up after BT for ESLC has been 5 years (range 2-10) versus 6.7 years (range 2-10) for the surgical group. Five patients (16%) developed subsequent primaries/local recurrences after BT versus 4 (12.5%) in the surgical group. The respective percentages of actual survival during follow-up have been 50 and 41%, non-lung-cancer-related death 22 and 31% and lung-cancer-related death 28% in both groups, respectively. So far, the average costs per individual for early management by BT have been Euro 22,638 by surgery, and total expenses have been Euro 209,492 and Euro 724,403, respectively.
Conclusions: Despite the worse initial health status of patients treated with BT, actual survival rates and costs for early intervention underscore the superior cost-effectiveness of BT as early intervention in properly selected individuals with ESLC in the central airways.
abstract_id: PUBMED:32039928
Outcomes and cost of lung cancer patients treated surgically or medically in Catalunya: cost-benefit implications for lung cancer screening programs. Lung cancer screening programs with computed tomography of the chest reduce mortality by more than 20%. Yet, they have not been implemented widely because of logistic and cost implications. Here, we sought to: (1) use real-life data to compare the outcomes and cost of lung cancer patients treated medically or surgically in our region and (2) from this data, estimate the cost-benefit ratio of a lung cancer screening program (CRIBAR) soon to be deployed in our region (Catalunya, Spain). We accessed the Catalan Health Surveillance System (CHSS) and analysed data of all patients with a first diagnosis of lung cancer between 1 January 2014 and 31 December 2016. Analysis was carried forward until 30 months (t = 30) after lung cancer diagnosis. Main results showed that: (1) surgically treated lung cancer patients have better survival, return earlier to regular home activities, use fewer healthcare-related resources and cost less taxpayer money and (2) depending on the incidence of lung cancer identified and treated in the program (1-2%), the return on investment for CRIBAR is expected to break even at 3-6 years, respectively, after its launch. Surgical treatment of lung cancer is cheaper and offers better outcomes. CRIBAR is estimated to be cost-effective soon after launch.
abstract_id: PUBMED:32278848
Analysis of Out-of-Pocket Cost of Lung Cancer Screening for Uninsured Patients Among ACR-Accredited Imaging Centers. Purpose: To determine the variability in out-of-pocket costs of lung cancer screening (LCS) for uninsured patients and assess accessibility of this information by telephone or Internet.
Methods: LCS centers from the ACR's LCS database were randomly selected. Centers were called between July and August 2019 to determine out-of-pocket cost. Telephone call variables, accessibility of cost information on screening centers' websites, screening centers' chargemasters, and publicly available facility and state insurance coverage variables were obtained. Cost information was summarized using descriptive analyses. Multiple variable linear regression analyses were conducted to evaluate effects of facility and state-level characteristics on out-of-pocket costs.
Results: Fifty-five ACR-accredited LCS centers were included, with 78% (43 of 55) willing to provide out-of-pocket cost. Average out-of-pocket cost was $583 ± $607 (mean ± standard deviation), range $49 to $2,409. Average telephone call length was 6 ± 3.8 min. Two of fifty-five screening centers' websites provided out-of-pocket cost information, and one matched the cost given over the telephone. A chargemaster was found for 30 of 55 screening centers. No statistically significant differences in out-of-pocket costs were found by geographic region, state percentages of uninsured residents, state percentages of residents with public insurance, or facility safety net hospital affiliation.
Discussion: Out-of-pocket LCS costs for uninsured patients and availability of this information is highly variable. Radiology practices should be aware of this variability that may influence participation rates among uninsured patients.
abstract_id: PUBMED:23720093
Cost-effectiveness of stereotactic body radiation therapy versus surgical resection for stage I non-small cell lung cancer. Background: The traditional treatment for clearly operable (CO) patients with stage I non-small cell lung cancer (NSCLC) is lobectomy, with wedge resection (WR) and stereotactic body radiation therapy (SBRT) serving as alternatives in marginally operable (MO) patients. Given an aging population with an increasing prevalence of screening, it is likely that progressively more people will be diagnosed with stage I NSCLC, and thus it is critical to compare the cost-effectiveness of these treatments.
Methods: A Markov model was created to compare the cost-effectiveness of SBRT with WR and lobectomy for MO and CO patients, respectively. Disease, treatment, and toxicity data were extracted from the literature and varied in sensitivity analyses. A payer (Medicare) perspective was used.
Results: In the base case, SBRT (MO cohort), SBRT (CO cohort), WR, and lobectomy were associated with mean cost and quality-adjusted life expectancies of $42,094/8.03, $40,107/8.21, $51,487/7.93, and $49,093/8.89, respectively. In MO patients, SBRT was the dominant and thus cost-effective strategy. This result was confirmed in most deterministic sensitivity analyses as well as probabilistic sensitivity analysis, in which SBRT was most likely cost-effective up to a willingness-to-pay of more than $500,000/quality-adjusted life year. For CO patients, lobectomy was the cost-effective treatment option in the base case (incremental cost-effectiveness ratio of $13,216/quality-adjusted life year) and in nearly every sensitivity analysis.
Conclusions: SBRT was nearly always the most cost-effective treatment strategy for MO patients with stage I NSCLC. In contrast, for patients with CO disease, lobectomy was the most cost-effective option.
abstract_id: PUBMED:10369639
Follow-up of patients after resection for bronchogenic carcinoma. Objective: To investigate how the members of the European Association for Cardio-Thoracic Surgery (EACTS) follow up their patients after pulmonary resection for bronchogenic carcinoma.
Methods: A questionnaire was sent to 317 EACTS members (thoracic and cardiothoracic surgeons as well as surgeons of unknown field of clinical practice). We eventually received completed questionnaires from 101 (31.9%) surgeons, who were classified into "thoracic" and "others". Their answers were analysed by the chi-square test.
Results: One out of four EACTS members does not follow up his/her patients, while the remainder follow them up with or without the collaboration of a clinical oncologist, a pneumonologist or a family physician. Among the surgeons who follow up their patients, only one out of two does so throughout the patient's remaining life. The frequency of the routine follow-up visits as well as the type and frequency of the examinations used vary significantly among the members of the Association, but generally the frequency of visits tends to decrease with time. Although 89.8% of surgeons believe that a well scheduled follow-up is beneficial to the patient, only 67% think that such a follow-up is cost-effective.
Conclusions: A great diversity was observed in the way patients operated on for lung cancer are followed up by the EACTS members. The differences were more evident between "thoracic" and "other" surgeons. However, hard data showing the effect of these differences on patients' long-term survival are not available and prospective cooperative studies on this subject are required. Taking into account that these patients are, for the rest of their lives, at high risk of development of a metachronous primary bronchogenic carcinoma or other potentially curable malignancies, we believe that a life-long follow-up is mandatory.
abstract_id: PUBMED:38325848
Evaluation of the Factors Affecting Higher Hospitalization Cost of Lung Resection for Primary Lung Cancer: A Retrospective Cohort Study. Purpose: This study aims to evaluate the factors associated with the higher hospitalization cost of lung resection for primary lung cancer to contribute to the reduction of healthcare spending.
Methods: A total of 435 consecutive primary lung cancer patients who underwent lung resection by a single surgeon at a single institution were enrolled. Baseline patient characteristics, operative procedures, postoperative complications, and postoperative courses were analyzed in relation to the hospitalization cost. Patients with higher costs (exceeding the third quartile [TQ]) were compared with patients with lower costs (less than TQ).
Results: Median and TQ medical costs for overall cases were 11177 US dollars (USD) and 12292 USD, respectively. Smoking history, history of coronary artery disease, previous thoracotomy, multiple sealant material use, transfusion, tumor factor T3 or higher, squamous cell carcinoma, postoperative complications, and longer postoperative hospital stay (>10 POD) were significant risk factors for increased hospitalization cost in multivariate analysis. The 5-year survival rate was significantly lower in the higher hospitalization cost group.
Conclusion: In addition to postoperative complications and prolonged hospitalization, patient background, histological types, and intraoperative factors were also considered as the risk factors for higher medical costs.
Answer: The follow-up of lung cancer patients after resection is a topic of debate regarding its medical necessity and cost-effectiveness. According to the study by PUBMED:8787445, frequent follow-up and extensive radiologic evaluation after operation for lung cancer may be unnecessary, as the detection of asymptomatic recurrences does not greatly affect the mode of treatment or overall survival duration. The study suggests that disease-free interval is the most important determinant of survival, and screening for asymptomatic recurrences in patients who have had lung cancer is unlikely to be cost-effective.
Another study, PUBMED:11936523, analyzed the benefit of regular follow-up in patients who had undergone potentially curative resection for NSCLC. The study found that regular follow-up over a 10-year period provided a chance for a second curative treatment to 3.8% of all patients, but the cost-effectiveness of the follow-up protocol was considered high compared to other large-scale surveillance programs, leading to a reduction in the intensity and duration of the follow-up.
PUBMED:32171847 evaluated the cost-effectiveness of follow-up guidelines for subsolid pulmonary nodules and concluded that ceasing follow-up for low-risk subsolid nodules after 2 to 5 years of stability is more cost-effective than perpetual follow-up.
In contrast, PUBMED:10369639 found that while a majority of surgeons believe that a well-scheduled follow-up is beneficial to the patient, only 67% think that such a follow-up is cost-effective. The study highlighted the diversity in follow-up practices among surgeons and the lack of hard data showing the effect of these differences on patients' long-term survival.
Lastly, PUBMED:38325848 identified factors associated with higher hospitalization costs for lung resection, including postoperative complications and prolonged hospitalization, suggesting that these factors could be targeted to reduce healthcare spending.
In summary, while some studies suggest that follow-up may not be cost-effective and does not significantly affect survival outcomes (PUBMED:8787445, PUBMED:11936523), others highlight the potential benefits of follow-up and the need for more research to determine the most cost-effective strategies (PUBMED:32171847, PUBMED:10369639). Additionally, factors contributing to higher costs post-resection have been identified, which could inform efforts to reduce healthcare spending (PUBMED:38325848).
Instruction: Radiographer technique: Does it contribute to the question of clip migration?
Abstracts:
abstract_id: PUBMED:26108860
Radiographer technique: Does it contribute to the question of clip migration? Introduction: Marker clips are commonly deployed at the site of a percutaneous breast biopsy. Studies have shown that displacement of the clip from the site of deployment is not uncommon. The objective of this study was to determine how much 'migration' could be seen with fixed structures within the breast tissue across three consecutive annual screening examinations, and therefore attempt to quantify how much of the reported clip migration could be due to radiographer technique.
Methods: Large, easily identified benign calcifications were measured by two investigators across three consecutive cycles of screening mammography. The position of the calcifications on the two standard mammographic views was measured in two planes. Other variables recorded included breast size and density, compression force used, and location of the benign calcifications within the breast.
Results: In 38% of cases, benign breast calcifications showed a mimicked movement of >15 mm in at least one plane. This was greatest in large breasts, those where fibroglandular tissue occupied less than 50% of the breast volume, and in the upper outer quadrant of the breast where mimicked movement >10 mm was noted in up to 90% of the larger breasts.
Conclusion: Fixed immobile objects in the breast can appear to move a distance of >15 mm in up to 30% of cases. Clinically, some of what has previously been called marker 'migration' may be spurious and accounted for by differences in radiographic positioning techniques.
abstract_id: PUBMED:37214331
An unusual site for breast clip migration: A case report. Clip migration following breast biopsy is a known complication. However, the migrated clip is usually found within the breast. We describe a rare case of delayed clip migration to the skin, following a magnetic resonance guided biopsy of the breast, highlighting its natural history of presentation and its treatment.
abstract_id: PUBMED:22809531
Choledochal lithiasis and stenosis secondary to the migration of a surgical clip. The migration of a clip to the common bile duct after cholecystectomy is an uncommon, usually late, complication that can lead to stone formation, stenosis, and obstruction of the bile duct. We present the case of a patient who presented with signs and symptoms of cholangitis due to clip migration one year after laparoscopic cholecystectomy; endoscopic retrograde cholangiopancreatography and biliary tract stent placement resolved the problem.
abstract_id: PUBMED:32983443
Surgical clip migration following laparoscopic cholecystectomy: A rare cause of acute cholangitis. Clip migration following laparoscopic cholecystectomy (LC) is a rare and late complication of LC. The first case of surgical clip migration after LC was reported in 1992, and since then fewer than 100 cases have been reported in the literature. We report a case of cholangitis secondary to surgical clip migration in an 83-year-old male patient, 8 years after LC. Contrast-enhanced computed tomography of the abdomen (CT) showed intra- and extrahepatic duct dilatation secondary to a hyperdense object located in the distal common bile duct (CBD). It was removed successfully from the CBD by endoscopic retrograde cholangiopancreatography after sphincterotomy. At the last follow-up, one year after his admission, the patient was symptom-free with normal liver enzymes and abdominal CT. Surgical clip migration into the CBD should be included in the differential diagnosis while treating patients with a past surgical history of LC. Early diagnosis and treatment of this complication can avoid serious sequelae.
abstract_id: PUBMED:29794362
Clip-stone and T clip-sinus: A clinical analysis of six cases on migration of clips and literature review from 1997 to 2017. Introduction: With the development of laparoscopic skills, laparoscopic common bile duct exploration (LCBDE) and laparoscopic cholecystectomy (LC) have become the standard surgical procedures for choledocholithiasis. We usually use Hem-o-lok clips to control the cystic duct and vessels, which is safe on most occasions and carries few perioperative complications such as major bleeding, wound infection, bile leakage, and biliary and bowel injury. However, post-cholecystectomy clip migration (PCCM), a rare complication, has been reported with increasing frequency year by year owing to the advancement and development of LC and CBD exploration as well as the wide use of surgical ligation clips.
Materials And Methods: We reviewed six patients in our department whose clips were found to have dropped into the CBD or to have formed a T-tube sinus after laparoscopic surgery.
Results: After LCBDE or LC, clips had dropped into the CBD (clip-stone) in three of the six patients and had formed a T-tube sinus (T clip-sinus) in the other three.
Conclusions: PCCM is a rare but severe complication of LCBDE. A pre-operative understanding of bile duct anatomy and the use of the minimum number of clips, or of the harmonic scalpel, during surgery are necessary. Clip-stone and clip-sinus should be considered in the differential diagnosis of patients presenting with biliary colic or cholangitis after LCBDE, even years after surgery; a detailed medical history and pre-operative examination are essential, especially for patients who have undergone LCBDE.
abstract_id: PUBMED:35282556
Acute Cholangitis Secondary to Surgical Clip Migration 18 Years After Cholecystectomy: A Case Report. Gallstone disease is a common condition and reason for consultation and hospitalizations. The standard of care is laparoscopic cholecystectomy. Early complications include bile duct injury and retained stone, and chronic complications include bile duct stricture and clip migration. It is important for clinicians to be aware of such complications as they can occur long after surgery. We report an interesting case of clip migration resulting in acute cholangitis, 18 years after laparoscopic cholecystectomy and review the literature on this interesting phenomenon of a commonly performed surgery. The diagnosis of clip migration in our case was suspected on abdominal radiograph and confirmed on endoscopic stone extraction.
abstract_id: PUBMED:26853991
Migration of intracranial hemostatic clip into the spinal canal: A case report and literature review. Spontaneous migration of intracranial hemostatic clip into the spinal canal is uncommon. We report a case of spontaneous migration of intracranial hemostatic clip into the lumbar spinal canal causing severely painful radiculopathy in a 55-year-old woman.
abstract_id: PUBMED:34931266
Does lateral arm technique decrease the rate of clip migration in stereotactic and tomosynthesis-guided biopsies? Background: Mammography-guided vacuum-assisted biopsies (MGVAB) can be done with stereotaxis or digital breast tomosynthesis guidance. Both methods can be performed with a conventional biopsy approach (CBA) or a lateral arm biopsy approach (LABA). Marker clip migration is relatively frequent in MGVAB (up to 44%), which in cases requiring surgery carries a risk of positive margins and re-excision. We aimed to compare the rates of clip migration and hematoma formation between the CBA and LABA techniques of prone MGVAB. Our HIPAA-compliant retrospective study included all consecutive prone MGVAB performed in a single institution over a 20-month period. The LABA approach was used with DBT guidance; CBA utilized DBT or stereotactic guidance. The tissue sampling techniques were otherwise identical.
Results: After exclusion, 389 biopsies on 356 patients were analyzed. LABA was done in 97 (25%), and CBA in 292 (75%) cases. There was no statistical difference in clip migration rate with either 1 cm or 2 cm distance cut-off [15% for CBA and 10% for LABA at the 1 cm threshold (p = 0.31); 5.8% for CBA and 3.1% for LABA at the 2 cm threshold (p = 0.43)]. There was no difference in the rate of hematoma formation (57.5% in CBA and 50.5% in LABA, p = 0.24). The rates of technical failure were similar for both techniques (1.7% for CBA and 3% for LABA) with a combined failure rate of 1%.
Conclusions: LABA and CBA had no statistical difference in clip migration or hematoma formation rates. Both techniques had similar success rates and may be helpful in different clinical situations.
abstract_id: PUBMED:31076332
Immediate Migration of Biopsy Clip Markers After Upright Digital Breast Tomosynthesis-Guided Vacuum-Assisted Core Biopsy. Rationale And Objectives: The goal of this retrospective study was to evaluate the rate of immediate post-biopsy clip migration on an upright digital tomosynthesis-guided vacuum-assisted core biopsy unit and determine if any factors were associated with immediate clip migration.
Materials And Methods: We performed a retrospective review of patients who had undergone a biopsy performed at one facility from November 1, 2014 to September 30, 2016. Post-biopsy mammograms were reviewed to assess immediate clip position relative to the targeted lesion. The effects of age, lesion type, breast density, biopsy approach, number of samples, size of the biopsy chamber, and clip type on clip migration were examined using logistic regression analysis.
Results: One hundred ninety-seven biopsies were performed on 188 patients for calcifications (n = 159), architectural distortions (n = 29), masses (n = 5), and asymmetries (n = 4). The clip migration rate was 38%, defined as displacement greater than 0.5 cm from the site of the biopsied lesion. The only independent predictor of clip migration was breast density, entered as a numeric covariate in the logistic regression model; migration was more likely with decreased breast density. The estimated odds ratio for a single-level increase in BI-RADS breast density was 0.60 (95% confidence interval: 0.40, 0.91) with p = 0.018.
Conclusion: Immediate clip migration following biopsy was more likely with decreased breast density. Radiologists should be aware of immediate clip migration as correct clip location guides preoperative localization and allows the biopsy site to be monitored for changes on future mammograms.
abstract_id: PUBMED:32939240
Titanium clip migration after cholecystectomy: original technique for laparoscopic transcystic extraction when endoscopic retrograde cholangiopancreatography fails. Surgical clip migration into the common bile duct (CBD) is a rare complication after laparoscopic cholecystectomy (LC). Few cases of surgical clip migration have been reported in the literature, and most of them have been successfully treated with endoscopic retrograde cholangiopancreatography (ERCP). We present a 71-year-old woman with 48 h of abdominal pain, jaundice and fever 6 years after laparoscopic cholecystectomy. She was diagnosed with common bile duct obstruction from surgical clip migration. After failure of ERCP, the patient was successfully treated with an innovative approach by laparoscopic transcystic extraction using endoscopic hose-type biopsy forceps. The presented technique was feasible and safe in expert hands, representing a valuable alternative that avoids the need for a choledochotomy in patients with unsuccessful ERCP.
Answer: Yes, radiographer technique does contribute to the question of clip migration. The study described in abstract PUBMED:26108860 investigated the potential for apparent clip migration due to differences in radiographic positioning techniques. The researchers measured the position of large, benign calcifications across three consecutive annual screening mammograms, using them as surrogates for fixed structures within the breast tissue. They found that in 38% of cases, these calcifications appeared to move more than 15 mm in at least one plane. This mimicked movement was most pronounced in large breasts, breasts with less than 50% fibroglandular tissue, and particularly in the upper outer quadrant of the breast. The study concluded that some of what has been reported as marker 'migration' may actually be spurious, resulting from variations in radiographic positioning rather than actual physical movement of the marker clips within the breast tissue.
Instruction: Is superfertility associated with recurrent pregnancy loss?
Abstracts:
abstract_id: PUBMED:24964397
Is superfertility associated with recurrent pregnancy loss? Problem: A recent hypothesis has implicated superfertility as a cause of recurrent pregnancy loss. Clinical support for the concept comes from one report that 40% of women experiencing recurrent miscarriages had monthly fecundity rates of 60% or greater and thus were designated as superfertile.
Methods Of Study: To confirm or refute this finding, clinical histories of 201 women with a history of recurrent pregnancy loss were reviewed and months to desired pregnancy, karyotypes of their products of conception as well as results of laboratory tests including antiphospholipid antibodies and circulating natural killer cells were recorded.
Results: The prevalence of superfertility was 32% (64/201) among recurrently aborting women compared with 3% of the general population according to the model of Tietze (P < 0.0001). Fifty-nine of the 201 (30%) study patients displayed presence of APA, LA, increased CD56(+) cells, or increased NK cytotoxicity and were designated as having an immunologic risk factor. Of the 192 karyotypes of products of conception from women with a history of recurrent miscarriage, 153 (80%) had a normal chromosome complement and 38 (20%) were abnormal. Among the normal karyotypes, 86 (56%) were 46XX and 67 (44%) were 46XY.
Conclusion: Recurrent pregnancy loss is associated with superfertility in 32%, immunologic risk factors in 30% and a 20% frequency of chromosomally abnormal pregnancy losses. Thus, implantation failure can result from too much or too little implantation.
abstract_id: PUBMED:37598542
Superfertility and subfertility in patients with recurrent pregnancy loss: A comparative analysis of clinical characteristics and etiology based on differences in fertile ability. This study aimed to elucidate the etiologies of and risk factors for recurrent pregnancy loss (RPL) according to fertile ability, focusing on the differences between superfertile and subfertile patients. This retrospective observational study included 828 women with RPL between July 2017 and February 2020. Patients were divided into three groups based on time to pregnancy (TTP): superfertile (SUP) (TTP ≤3 months for all previous pregnancies), subfertile (SUB) (previous TTP ≥12 months and use of assisted reproductive technology [ART]), and Normal (N) (TTP >3 but <12 months without ART). All patients were assessed for uterine anatomy, antiphospholipid antibodies (APAs), thyroid function, and thrombophilia. Of the 828 patients, 22%, 44%, and 34% were assigned to the SUP, SUB, and N groups, respectively. The mean ages were 33.9, 38.2, and 35.9 years in the SUP, SUB, and N groups, respectively, revealing a significant difference (P < 0.001). The anti-CL β2GPI antibody positivity rate was significantly higher in the SUP group (4.6%) than in the N group (0.8%; P = 0.016). The prevalence of APA positivity was lowest in the N group. Overall, the clinical characteristics and etiologies of RPL associated with superfertility and subfertility were strikingly similar, with comparable positivity rates after adjusting for maternal age. Further investigation including chromosomal analysis of products of conception is needed to elucidate the clinical impact of differences in fertility on patients with RPL.
abstract_id: PUBMED:26840642
Superfertility is more prevalent in obese women with recurrent early pregnancy miscarriage. Objective: To investigate the effects of obesity on superfertility.
Design: Retrospective observational study.
Setting: A tertiary referral implantation clinic.
Population: Four hundred and fourteen women attending a tertiary implantation clinic with a history of recurrent miscarriage (RMC), over a 4-year period.
Methods: Pattern of pregnancy loss and time to pregnancy intervals for each pregnancy were collected by medical staff from women with RMC. The women were categorised into normal, overweight and obese according to their body mass index (BMI). Kaplan-Meier curves were constructed estimating the cumulative probability of a spontaneous pregnancy over time. The pregnancy loss patterns were correlated with BMI and data were compared between the categories using the Kruskal-Wallis test.
Main Outcome Measures: Pregnancy loss pattern and time to pregnancy intervals.
Results: Overall, 23.2, 51.4 and 64.2% of women conceived within the first 1, 3 and 6 months, respectively. Obese women had cumulative pregnancy rates of 65.2 and 80% by 3 and 6 months, respectively, which were higher than the cumulative pregnancy rates for women with normal BMI (49.2 and 65.8%). Comparison of survival curves indicated a significant difference in time to conceive for obese women when compared with normal-weight and overweight women (P = 0.01), suggesting a higher prevalence of superfertility in obese women with RMC.
Conclusions: Our findings suggest that obese women may have a greater efficacy to achieve pregnancy, but with an increased risk of miscarriage, which may suggest the possible metabolic effects of obesity on endometrium.
abstract_id: PUBMED:32811769
Defining recurrent pregnancy loss: associated factors and prognosis in couples with two versus three or more pregnancy losses. Research Question: The definition of recurrent pregnancy loss (RPL) differs internationally. The European Society of Human Reproduction and Embryology (ESHRE) defines RPL as two or more pregnancy losses. Different definitions lead, however, to different approaches to care for couples with RPL. This study aimed to determine whether the distribution of RPL-associated factors was different in couples with two versus three or more pregnancy losses. If a similar distribution were found, couples with two pregnancy losses should be eligible for the same care pathway as couples with three pregnancy losses.
Design: This single-centre, retrospective cohort study investigated 383 couples included from 2012 to 2016 at the Leiden University Medical Center RPL clinic. Details on age, body mass index, smoking status, number of pregnancy losses, mean time to pregnancy loss and performed investigations were collected. The prevalence of uterine anomalies, antiphospholipid syndrome, hereditary thrombophilia, hyperhomocysteinaemia, chromosomal abnormalities and positive thyroid peroxidase antibodies were compared in couples with two versus three or more pregnancy losses.
Results: No associated factor was found in 71.5% of couples with RPL. This did not differ statistically between couples with two versus three or more pregnancy losses (73.6% versus 70.6%; P = 0.569). The distribution of investigated causes did not differ between the two groups.
Conclusions: As the distribution of associated factors in couples with two versus three or more pregnancy losses is equal, couples with two pregnancy losses should be eligible for the same care pathway as couples with three. This study supports ESHRE's suggestion of including two pregnancy losses in the definition of RPL.
abstract_id: PUBMED:36266986
Association of Interleukin-33 with Recurrent Pregnancy Loss in Egyptian Women. Background: A successful pregnancy requires a distinct and complex immunological state. Cytokines appear to be critical for the establishment of a tolerogenic environment towards the semi-allogenic foetus at the foeto-maternal interface, and a shift from a Th1- to a Th2-cytokine profile may be crucial. An imbalance of cytokines can be a significant factor in recurrent pregnancy loss (RPL). Interleukin-33 (IL-33) is a member of the IL-1 cytokine family, involved in both the innate and adaptive immune responses, coordinating immune cell function for a broad range of physiological and pathological processes, including the regulation of pregnancy outcome.
Objectives: The aim of this study was to investigate a possible association between IL-33 and RPL in Egyptian women.
Methods: The study was conducted on 66 Egyptian females recruited from Ain Shams University Specialized Hospital and 66 matched healthy non-pregnant females of typical childbearing age without a history of RPL. Serum IL-33 was measured in all subjects using a sandwich ELISA technique.
Results: Serum IL-33 levels were significantly higher in patients with RPL than in the healthy control group. In addition, in the patient group, there was a positive correlation between serum IL-33 level and both age and number of miscarriages and a negative correlation between serum IL-33 level and the number of deliveries.
Conclusion: In Egyptian women, serum levels of IL-33 are associated with RPL, thus IL-33 level could be a predictive biomarker for RPL in early pregnancy.
abstract_id: PUBMED:25681844
Recurrent pregnancy loss: evaluation and treatment. Recurrent pregnancy loss (RPL) is a multifactorial condition. Approximately half of patients with RPL will have no explanation for their miscarriages. De novo chromosome abnormalities are common in sporadic and recurrent pregnancy loss. Testing for embryonic abnormalities can provide an explanation for the miscarriage in many cases and prognostic information. Regardless of the cause of RPL, patients should be reassured that the prognosis for live birth with an evidence-based approach is excellent for most patients. The authors review current evidence for the evaluation and treatment of RPL and explore the proposed use of newer technology for patients with RPL.
abstract_id: PUBMED:32046416
Time-to-Pregnancy in Women with Unexplained Recurrent Pregnancy Loss: A Controlled Study. To determine whether differences are present in the time-to-pregnancy (TTP) between women with unexplained recurrent pregnancy loss (uRPL) and control women, in this case-control, retrospective study, carried out in tertiary university hospitals, the TTP, defined as the months needed to reach pregnancy from when the woman started to try to conceive, was determined in 512 women, 207 of whom were diagnosed as having uRPL and 305 of whom were normal healthy control women. The specific TTPs for each pregnancy, stratified by order of pregnancy occurrence, were also determined. Pregnancy rates by time were calculated by using the Kaplan-Meier method to construct the survival curves. The age at which the pregnancies occurred was determined. Comparisons were carried out between women with uRPL and controls. Overall, 1192 pregnancies occurred and were analyzed. Mean TTP in uRPL women was shorter than in controls (P < 0.001) when all the pregnancies were considered. Similarly, it was shorter for the first, second, third, and fifth pregnancy. The pregnancy rates of uRPL women were shorter than those of control women for the first three pregnancies, for which the numbers of subjects allowed the comparisons to be made. These findings were observed even though the maternal age of uRPL women was higher than that of control women. TTP is shorter in uRPL than in normal women. This finding clinically supports the hypothesis that women with uRPL could be, at least in the early stages of pregnancy, more fertile or receptive toward the implanting embryo than healthy women.
abstract_id: PUBMED:30683511
Uterine factor in recurrent pregnancy loss. Objective: To review the current understanding of the role the uterus plays in recurrent pregnancy loss.
Findings: Congenital and acquired uterine abnormalities are associated with recurrent pregnancy loss in the first and second trimester. Relevant congenital Mullerian tract anomalies include unicornuate, didelphys, bicornuate and septate uteri. Pregnancy loss has also been associated with acquired uterine abnormalities that distort the uterine cavity such as intrauterine adhesions and submucosal myomas. Initial evaluation of women with recurrent pregnancy loss should include a uterine assessment such as a pelvic ultrasound or sonohysterography. Uterine abnormalities such as uterine septum, intrauterine adhesions and submucosal myomas may be managed surgically with operative hysteroscopy.
Conclusion: Uterine abnormalities, both congenital and acquired, can be responsible for recurrent pregnancy loss.
abstract_id: PUBMED:28553146
Recurrent pregnancy loss: current perspectives. Recurrent pregnancy loss is an important reproductive health issue, affecting 2%-5% of couples. Common established causes include uterine anomalies, antiphospholipid syndrome, hormonal and metabolic disorders, and cytogenetic abnormalities. Other etiologies have been proposed but are still considered controversial, such as chronic endometritis, inherited thrombophilias, luteal phase deficiency, and high sperm DNA fragmentation levels. Over the years, evidence-based treatments such as surgical correction of uterine anomalies or aspirin and heparin for antiphospholipid syndrome have improved the outcomes for couples with recurrent pregnancy loss. However, almost half of the cases remain unexplained and are empirically treated using progesterone supplementation, anticoagulation, and/or immunomodulatory treatments. Regardless of the cause, the long-term prognosis of couples with recurrent pregnancy loss is good, and most eventually achieve a healthy live birth. However, multiple pregnancy losses can have a significant psychological toll on affected couples, and many efforts are being made to improve treatments and decrease the time needed to achieve a successful pregnancy. This article reviews the established and controversial etiologies, and the recommended therapeutic strategies, with a special focus on unexplained recurrent pregnancy losses and the empiric treatments used nowadays. It also discusses the current role of preimplantation genetic testing in the management of recurrent pregnancy loss.
abstract_id: PUBMED:37373766
Recurrent Pregnancy Loss Etiology, Risk Factors, Diagnosis, and Management. Fresh Look into a Full Box. Recurrent pregnancy loss is a complex health challenge with no universally accepted definition. Inconsistency in definitions involves not only the number of spontaneous abortions (two or three) that are accepted for recurrent pregnancy loss but the types of pregnancy and gestational age at miscarriage. Due to the heterogeneity of definitions and criteria applied by international guidelines for recurrent pregnancy loss, the true incidence of recurrent miscarriage, which is reported to range from 1% to 5%, is difficult to estimate. Moreover, the exact etiology of recurrent pregnancy loss remains questionable; thus, it is considered a polyetiological and multifactorial condition with many modifiable and non-modifiable factors involved. Even after thoroughly evaluating recurrent pregnancy loss etiology and risk factors, up to 75% of cases remain unexplained. This review aimed to summarize and critically analyze accumulated knowledge on the etiology, risk factors, relevant diagnostic options, and management approach to recurrent pregnancy loss. The relevance of various factors and their proposed roles in recurrent pregnancy loss pathogenesis remains a matter of discussion. The diagnostic approach and the management largely depend on the etiology and risk factors taken into consideration by a healthcare professional as a cause of recurrent miscarriage for a particular woman or couple. Underestimation of social and health consequences of recurrent pregnancy loss leads to compromised reproductive health and psychological well-being of women after miscarriage. Studies on etiology and risk factors for recurrent pregnancy loss, especially idiopathic, should be continued. The existing international guidelines require updates to assist clinical practice.
Answer: Yes, superfertility is associated with recurrent pregnancy loss (RPL). The concept of superfertility refers to the ability to conceive rapidly and repeatedly, which paradoxically may lead to an increased risk of miscarriages. A study found that the prevalence of superfertility was 32% among women with a history of recurrent miscarriage, which is significantly higher than the 3% prevalence in the general population (PUBMED:24964397). This suggests that superfertility could be a contributing factor to RPL.
Another study divided patients with RPL into groups based on their time to pregnancy (TTP) and found that 22% of the patients were superfertile, conceiving within 3 months for all previous pregnancies. The study also noted that the anti-CL β2GPI antibody positivity rate was significantly higher in the superfertile group compared to the normal group, indicating a potential immunological factor associated with superfertility and RPL (PUBMED:37598542).
Furthermore, obesity has been linked to a higher prevalence of superfertility in women with recurrent early pregnancy miscarriage. Obese women had higher cumulative pregnancy rates within the first 3 to 6 months compared to women with normal BMI, suggesting that superfertility is more prevalent in obese women with RPL (PUBMED:26840642).
Additionally, a study on time-to-pregnancy in women with unexplained recurrent pregnancy loss found that the mean TTP was shorter in women with RPL compared to controls, supporting the hypothesis that women with RPL could be more fertile or receptive toward the implanting embryo than healthy women (PUBMED:32046416).
In conclusion, superfertility is indeed associated with recurrent pregnancy loss, and this association may involve immunological factors and other etiologies that contribute to the increased risk of miscarriage in superfertile women. |
Instruction: Is kyphoplasty better than vertebroplasty in restoring normal mechanical function to an injured spine?
Abstracts:
abstract_id: PUBMED:20004264
Is kyphoplasty better than vertebroplasty in restoring normal mechanical function to an injured spine? Introduction: Kyphoplasty is gaining in popularity as a treatment for painful osteoporotic vertebral body fracture. It has the potential to restore vertebral shape and reduce spinal deformity, but the actual clinical and mechanical benefits of kyphoplasty remain unclear. In a cadaveric study, we compare the ability of vertebroplasty and kyphoplasty to restore spine mechanical function, and vertebral body shape, following vertebral fracture.
Methods: Fifteen pairs of thoracolumbar "motion segments" (two vertebrae with the intervening disc and ligaments) were obtained from cadavers aged 42-96 years. All specimens were compressed to induce vertebral body fracture. Then one of each pair underwent vertebroplasty and the other kyphoplasty, using 7 ml of polymethylmethacrylate cement. Augmented specimens were compressed for 2 hours to allow consolidation. At each stage of the experiment, motion segment stiffness was measured in bending and compression, and the distribution of loading on the vertebrae was determined by pulling a miniature pressure transducer through the intervertebral disc. Disc pressure measurements were performed in flexed and extended postures with a compressive load of 1.0-1.5 kN. They revealed the intradiscal pressure (IDP) which acts on the central vertebral body, and they enabled compressive load-bearing by the neural arch (F(N)) to be calculated. Changes in vertebral height and wedge angle were assessed from radiographs. The volume of leaked cement was determined by water displacement. Volumetric bone mineral density (BMD) of each vertebral body was calculated using DXA and water displacement.
Results: Vertebral fracture reduced motion segment compressive stiffness by 55%, and bending stiffness by 39%. IDP fell by 61-88%, depending on posture. F(N) increased from 15% to 36% in flexion and from 30% to 58% in extension (P<0.001). Fracture reduced vertebral height by an average 0.94 mm and increased vertebral wedging by 0.95 degrees (P<0.001). Vertebroplasty and kyphoplasty were equally effective in partially restoring all aspects of mechanical function (including stiffness, IDP, and F(N)), but vertebral wedging was reduced only by kyphoplasty (P<0.05). Changes in mechanical function and vertebral wedging were largely maintained after consolidation, but height restoration was not. Cement leakage was similar for both treatments.
Conclusions: Vertebroplasty and kyphoplasty were equally effective at restoring mechanical function to an injured spine. Only kyphoplasty was able to reverse minor vertebral wedging.
abstract_id: PUBMED:25450656
Is kyphoplasty better than vertebroplasty at restoring form and function after severe vertebral wedge fractures? Background Context: The vertebral augmentation procedures, vertebroplasty and kyphoplasty, can relieve pain and facilitate mobilization of patients with osteoporotic vertebral fractures. Kyphoplasty also aims to restore vertebral body height before cement injection and so may be advantageous for more severe fractures.
Purpose: The purpose of this study was to compare the ability of vertebroplasty and kyphoplasty to restore vertebral height, shape, and mechanical function after severe vertebral wedge fractures.
Study Design/setting: This is a biomechanical and radiographic study using human cadaveric spines.
Methods: Seventeen pairs of thoracolumbar "motion segments" from cadavers aged 70-98 years were injured, in a two-stage process involving flexion and compression, to create severe anterior wedge fractures. One of each pair underwent vertebroplasty and the other kyphoplasty. Specimens were then compressed at 1 kN for 1 hour to allow consolidation. Radiographs were taken before and after injury, after treatment, and after consolidation. At these same time points, motion segment compressive stiffness was assessed, and intervertebral disc "stress profiles" were obtained to characterize the distribution of compressive stress on the vertebral body and neural arch.
Results: On average, injury reduced anterior vertebral body height by 34%, increased its anterior wedge angle from 5.0° to 11.4°, reduced intradiscal (nucleus) pressure and motion segment stiffness by 96% and 44%, respectively, and increased neural arch load bearing by 57%. Kyphoplasty caused 97% of the anterior height loss to be regained immediately, although this reduced to 79% after consolidation. Equivalent gains after vertebroplasty were significantly lower: 59% and 47%, respectively (p<.001). Kyphoplasty reduced vertebral wedging more than vertebroplasty (p<.02). Intradiscal pressure, neural arch load bearing, and motion segment compressive stiffness were restored significantly toward prefracture values after both augmentation procedures, even after consolidation, but these mechanical effects were similar for kyphoplasty and vertebroplasty.
Conclusions: After severe vertebral wedge fractures, vertebroplasty and kyphoplasty were equally effective in restoring mechanical function. However, kyphoplasty was better able to restore vertebral height and reverse wedge deformity.
abstract_id: PUBMED:27842431
Vertebroplasty and Kyphoplasty in Vertebral Osteoporotic Fractures. Vertebroplasty and kyphoplasty are minimally invasive treatments and indispensable tools in the treatment of osteoporotic compression fractures. These procedures are performed under fluoroscopic or CT guidance, with access via the pedicle or the posterolateral angle of the vertebral body. Vertebroplasty requires a smaller caliber needle than kyphoplasty, so it is technically easier. Vertebroplasty uses high-pressure injection, whereas in kyphoplasty the injection is performed at low pressure, which, together with the compaction of the bone produced by the balloon, reduces the risk and rate of cement leakage. Vertebroplasty is effective in managing osteoporotic vertebral compression fractures, with improvement in pain and quality of life in the immediate postoperative period and over the medium term. Both techniques have a very low complication rate. There is no consensus on whether the emergence of new fractures in cases treated by vertebroplasty and kyphoplasty is related to the mechanical changes introduced or is a complication related to the age and evolution of the patient's osteoporosis. Even with this risk of new fractures, the improvement in quality of life obtained after vertebroplasty and kyphoplasty treatment is worthwhile. The benefits outweigh the risks.
abstract_id: PUBMED:38171283
Utilization of Vertebroplasty/Kyphoplasty in the Management of Compression Fractures: National Trends and Predictors of Vertebroplasty/Kyphoplasty. Objective: The purpose of this study is to examine the utilization of kyphoplasty/vertebroplasty procedures in the management of compression fractures. With the growing elderly population and the associated increase in rates of osteoporosis, vertebral compression fractures have become a daily encounter for spine surgeons. However, there remains a lack of consensus on the optimal management of this patient population.
Methods: A retrospective analysis of 91 million longitudinally followed patients from 2016 to 2019 was performed using the PearlDiver Patient Claims Database. Patients with compression fractures were identified using International Classification of Diseases, 10th Revision codes, and a subset of patients who received kyphoplasty/vertebroplasty were identified using Current Procedural Terminology codes. Baseline demographic and clinical data between groups were acquired. Multivariable regression analysis was performed to determine predictors of receiving kyphoplasty/vertebroplasty.
Results: A total of 348,457 patients with compression fractures were identified, with 9.2% of patients receiving kyphoplasty/vertebroplasty as their initial treatment. Of these patients, 43.5% underwent additional kyphoplasty/vertebroplasty 30 days after the initial intervention. Patients receiving kyphoplasty/vertebroplasty were significantly older (72.2 vs. 67.9 years, p < 0.05), and more likely to be female, obese, and active smokers, and they had higher Elixhauser Comorbidity Index scores. Multivariable analysis demonstrated that female sex, smoking status, and obesity were the 3 strongest predictors of receiving kyphoplasty/vertebroplasty (odds ratio, 1.27, 1.24, and 1.14, respectively). The annual rate of kyphoplasty/vertebroplasty did not change significantly (range, 8%-11%).
Conclusion: The majority of vertebral compression fractures are managed nonoperatively. However, certain patient factors such as smoking status, obesity, female sex, older age, osteoporosis, and greater comorbidities are predictors of undergoing kyphoplasty/vertebroplasty.
abstract_id: PUBMED:25964376
Outcomes of vertebroplasty compared with kyphoplasty: a systematic review and meta-analysis. Background And Purpose: Many studies demonstrate that both kyphoplasty and vertebroplasty are superior to conservative therapy in the treatment of osteoporotic vertebral body compression fractures. We performed a systematic review and meta-analysis of studies comparing the outcomes of vertebroplasty and kyphoplasty, which included prospective non-randomized, retrospective comparative, and randomized studies.
Materials And Methods: We searched MEDLINE, EMBASE, and the Web of Science databases for studies of kyphoplasty versus vertebroplasty from 1 January 1990 to 30 November 2014 and compared the following outcomes: procedure characteristics, pain and disability improvement, complications and anatomic outcomes. A subgroup analysis was performed comparing pain outcomes based on the risk of bias.
Results: 29 studies enrolling 2838 patients (1384 kyphoplasty and 1454 vertebroplasty) were included, comprising 16 prospective non-randomized studies, 10 retrospective comparative studies, and 3 randomized controlled studies. No significant differences were found in mean pain scores between the two groups postoperatively (2.9±1.5 kyphoplasty vs 2.9±1.7 vertebroplasty, p=0.39) or at 12 months (2.7±1.8 kyphoplasty vs 3.2±1.8 vertebroplasty, p=0.64). No significant differences were found in disability postoperatively (34.7±7.1 kyphoplasty group vs 36.3±7.8 vertebroplasty group, p=0.74) or at 12 months (28.3±16 kyphoplasty group vs 29.6±13.9 vertebroplasty group, p=0.70). Kyphoplasty was associated with lower odds of new fractures (p=0.06), less extraosseous cement leakage (p<0.01), and greater reduction in kyphotic angle (p<0.01).
Conclusions: No significant difference was found between vertebroplasty and kyphoplasty in short- and long-term pain and disability outcomes. Further studies are needed to better determine if any particular subgroups of patients would benefit more from vertebroplasty or kyphoplasty in the treatment of vertebral body compression fractures.
abstract_id: PUBMED:24596602
Comparative analysis of vertebroplasty and kyphoplasty for osteoporotic vertebral compression fractures. Study Design: A retrospective study.
Purpose: The aim of this study is to compare the efficacy and outcomes of vertebroplasty with those of unipedicular and bipedicular kyphoplasty for the treatment of osteoporotic vertebral compression fractures in terms of pain, functional capacity and height restoration rates.
Overview Of Literature: The vertebroplasty procedure was first performed in 1984 for the treatment of a hemangioma at the C2 vertebra. Kyphoplasty was first performed in 1998 and includes vertebral height restoration in addition to the use of inflation balloons and high-viscosity cement. Both are efficacious, safe and long-lasting procedures. However, controversy still exists about pain relief, improvement in functional capacity, quality of life and height restoration; the superiority of one procedure over the other and the appropriate, specific indications for each remain undefined.
Methods: Between 2004 and 2011, 296 patients suffering from osteoporotic vertebral compression fracture underwent 433 vertebroplasty and kyphoplasty procedures. Visual analogue scale (VAS), the Oswestry Disability Index (ODI) and height restoration rates were used to evaluate the results.
Results: Mean height restoration rate was 24.16%±1.27% in the vertebroplasty group, 24.25%±1.28% in the unipedicular kyphoplasty group and 37.05%±1.21% in the bipedicular kyphoplasty group. VAS and ODI scores improved in all of the groups.
Conclusions: Vertebroplasty and kyphoplasty are both effective in providing pain relief and improvement in functional capacity and quality of life after the procedure, but the bipedicular kyphoplasty procedure has a further advantage in terms of height restoration when compared to unipedicular kyphoplasty and vertebroplasty procedures.
abstract_id: PUBMED:35838762
Value of routine transpedicular biopsies in kyphoplasty and vertebroplasty for vertebral compression fractures : A survey among 250 spine surgeons Background: Transpedicular cement augmentation is an established therapeutic option in the treatment of pathologic compression fractures of the spine. In addition to osteoporosis, underlying metastatic diseases or, more rarely, a primary bone tumor are recurrent causes of vertebral compression fractures without adequate trauma.
Objective: To obtain a current opinion among spine surgeons in Germany, Switzerland, and Austria on the value of transpedicular biopsy during kyphoplasty and vertebroplasty of vertebral body fractures.
Material And Methods: A web-based (UmfrageOnline®) questionnaire with 11 questions was created and sent to the email distribution lists of the German Spine Society (DWG), the Austrian Society for Spine Surgery (spine.at), and the Swiss Society for Spinal Surgery (SGS), as well as to the email distribution list of the Spine Section of the German Society for Orthopedics and Trauma Surgery (DGOU).
Results: Of a total of 2675 spine surgeons contacted, 250 (9.3%) responded to the survey. Approximately one third (29.8%) of respondents regularly perform a transpedicular biopsy with each kyphoplasty or vertebroplasty. Reasons cited for biopsy were image morphology (79.7%) or a history of suspected (66.0%) or present (71.4%) tumor disease. Reasons cited against routine biopsy were the associated costs and the limited informative value of the biopsies obtained.
Discussion: Nearly one third of the spine surgeons surveyed regularly perform a transpedicular biopsy with each kyphoplasty or vertebroplasty. Almost all respondents perform biopsies at least when there is an imaging morphologic suspicion of tumor disease or tumor disease is known or suspected based on risk factors. Future studies need to further clarify the cost-effectiveness of transpedicular biopsy.
abstract_id: PUBMED:24976934
Comparative review of vertebroplasty and kyphoplasty. The aim of this review is to compare the effectiveness of percutaneous vertebroplasty and kyphoplasty to treat pain and improve functional outcome from vertebral fractures secondary to osteoporosis and tumor conditions. In 2009, two open randomized controlled trials published in the New England Journal of Medicine questioned the value of vertebroplasty in treating vertebral compression fractures. Nevertheless, the practice of physicians treating these conditions has barely changed. The objective of this review is to try to clarify the most important issues, based on our own experience and the reported evidence about both techniques, and to guide towards the most appropriate choice of treatment of vertebral fractures, although many questions still remain unanswered.
abstract_id: PUBMED:21326593
Vertebroplasty and kyphoplasty for the palliation of pain. Vertebroplasty and kyphoplasty are percutaneous techniques developed over the past 20 years to treat vertebral hemangiomas, osteoporotic compression fractures, and osteolytic tumors of the spine. In carefully selected patients, these procedures have led to the cessation or significant reduction in pain in 80 to 90% of patients. In this article, we review the indications and contraindications of these procedures, appropriate patient selection and evaluation, the technique, outcomes, and the potential complications of this form of therapy when performed for the alleviation of pain for osteolytic tumors of the spine.
abstract_id: PUBMED:25694904
Kyphoplasty versus vertebroplasty for painful osteoporotic vertebral compression fractures-which one is better? A systematic review and meta-analysis. Background: Whether kyphoplasty or vertebroplasty is better for painful osteoporotic vertebral compression fracture is a widely debated issue. Studies on the comparison of the 2 approaches are relative limited and a wide variation exists in the patient population, study design, and results. These factors make it difficult for workers in this field to know the exact value of the 2 approaches.
Objective: To perform a systematic review and meta-analysis to compare the clinical outcomes and complications of kyphoplasty versus vertebroplasty for painful osteoporotic vertebral compression fractures (OVCF).
Study Design: A systematic review and meta-analysis.
Methods: MEDLINE, EMBASE, and other databases were searched for all the relevant original articles published from January 1987 to September 2012 comparing kyphoplasty with vertebroplasty for painful OVCF. The following outcomes were mainly evaluated: visual analog scale (VAS), vertebral height, kyphosis angle, new vertebral fractures, and cement leakage.
Results: A total of 15 articles fulfilled all the inclusion criteria. The baseline characteristics such as sex, age, and number of prevalent fractures were comparable for both groups (P > .05). VAS score for the kyphoplasty group was significantly more than that for the vertebroplasty group at 1-3 days, 3 months, 6 months, 1 year, and 2 years after surgery (P < .05). Vertebral height in the kyphoplasty group was significantly higher than the vertebroplasty group at 3 months, 6 months, and 2 years (P < 0.05). Kyphosis angle in the kyphoplasty group was significantly lower at 3 months, 6 months, and 2 years (P < 0.05). The occurrence of new vertebral fractures in the kyphoplasty group had no significant difference with the vertebroplasty group at 3 months, 6 months, and 2 years (P > 0.05). The occurrence of cement leakage was significantly lower in the vertebroplasty group (P < 0.05).
Limitations: The main limitations of this review are that the demographics and comorbidities of study participants were not reported. These possible sources of heterogeneity could not be examined.
Conclusions: Percutaneous kyphoplasty is better than vertebroplasty in the treatment of painful OVCF. Kyphoplasty showed better improvement in VAS score, vertebral height, and kyphosis angle, with a lower occurrence of cement leakage.
Answer: Based on the provided abstracts, kyphoplasty appears to be better than vertebroplasty in certain aspects of restoring normal mechanical function to an injured spine, particularly in terms of restoring vertebral height and reducing vertebral wedging after severe vertebral wedge fractures (PUBMED:25450656). Both kyphoplasty and vertebroplasty are effective in partially restoring mechanical function, including stiffness, intradiscal pressure (IDP), and neural arch load-bearing (F(N)), but kyphoplasty is able to reverse minor vertebral wedging, which vertebroplasty does not (PUBMED:20004264).
Kyphoplasty has also been shown to cause a significant regain in anterior height loss immediately after the procedure, although some of this height restoration is lost after consolidation. However, the gains in height restoration are still significantly higher with kyphoplasty compared to vertebroplasty (PUBMED:25450656). Additionally, kyphoplasty is associated with less extraosseous cement leakage and a greater reduction in kyphotic angle compared to vertebroplasty (PUBMED:25964376).
In terms of pain relief and improvement in functional capacity, both procedures are effective, but bipedicular kyphoplasty has an advantage in terms of height restoration when compared to unipedicular kyphoplasty and vertebroplasty (PUBMED:24596602). Moreover, kyphoplasty has been found to have better improvement in visual analog scale (VAS) scores, vertebral height, and kyphosis angle with a lower occurrence of cement leakage compared to vertebroplasty (PUBMED:25694904).
In conclusion, while both kyphoplasty and vertebroplasty are effective in restoring mechanical function to an injured spine, kyphoplasty seems to offer additional benefits in terms of restoring vertebral height and reducing wedge deformity, which may contribute to a closer approximation of normal mechanical function. |
Instruction: Child maltreatment severity and adult trauma symptoms: does perceived social support play a buffering role?
Abstracts:
abstract_id: PUBMED:23623620
Child maltreatment severity and adult trauma symptoms: does perceived social support play a buffering role? Objectives: The current study investigates the moderating effect of perceived social support on associations between child maltreatment severity and adult trauma symptoms. We extend the existing literature by examining the roles of severity of multiple maltreatment types (i.e., sexual, physical, and emotional abuse; physical and emotional neglect) and gender in this process.
Methods: The sample included 372 newlywed individuals recruited from marriage license records. Participants completed a number of self-report questionnaires measuring the nature and severity of child maltreatment history, perceived social support from friends and family, and trauma-related symptoms. These questionnaires were part of a larger study, investigating marital and intrapersonal functioning. We conducted separate, two-step hierarchical multiple regression models for perceived social support from family and perceived social support from friends. In each of these models, total trauma symptomatology was predicted from each child maltreatment severity variable, perceived social support, and the product of the two variables. In order to examine the role of gender, we conducted separate analyses for women and men.
Results: As hypothesized, increased severity of several maltreatment types (sexual abuse, emotional abuse, emotional neglect, and physical neglect) predicted greater trauma symptoms for both women and men, and increased physical abuse severity predicted greater trauma symptoms for women. Perceived social support from both family and friends predicted lower trauma symptoms across all levels of maltreatment for men. For women, greater perceived social support from friends, but not from family, predicted decreased trauma symptoms. Finally, among women, perceived social support from family interacted with child maltreatment such that, as the severity of maltreatment (physical and emotional abuse, emotional neglect) increased, the buffering effect of perceived social support from family on trauma symptoms diminished.
Conclusions: The results of the current study shed new light on the potential for social support to shield individuals against long-term trauma symptoms, and suggest the importance of strengthening perceptions of available social support when working with adult survivors of child maltreatment.
abstract_id: PUBMED:28282596
Childhood maltreatment, postnatal distress and the protective role of social support. The postpartum period is a vulnerable period for women with a history of childhood maltreatment. This study investigated the association between childhood maltreatment and postnatal distress three months postpartum and examined the role of social support provided by different sources (intimate partner, parents, parents-in-law, and friends). Analyses are based on N=66 women, who were screened for maltreatment experiences shortly after parturition with the Childhood Trauma Questionnaire. Their levels of postnatal distress (symptoms of depression, anxiety, and stress; assessed with the Hospital Anxiety and Depression Scale and the 4-Item version of the Perceived Stress Scale) and postpartum social support (measured with the Postpartum Social Support Questionnaire) were assessed three months postpartum. Adjusting for educational level and the experience of a recent stressful event, childhood maltreatment was directly associated with higher levels of postnatal distress. Social support provided by friends moderated this association in a heteroscedastic regression analysis. No moderating effect was observed for support provided by the own parents, the intimate partner, or parents-in-law. The association between childhood maltreatment and postnatal distress was not mediated by social support. Additional analyses revealed no main, moderating, or mediating effects of satisfaction with support. Results suggest that support provided by friends may promote resilience during the postpartum period in women with a history of childhood maltreatment. Efforts to better understand the role of postpartum support and mechanisms that may enhance a mother's ability to develop and maintain supportive friendships may be promising for guiding preventive interventions.
abstract_id: PUBMED:28028663
Family Social Support Modifies the Relationships Between Childhood Maltreatment Severity, Economic Adversity and Postpartum Depressive Symptoms. Objectives: This study examines the main and moderating effects of childhood abuse or neglect severity, income, and family social support on the presence of postpartum depressive symptoms (PDS).
Methods: Participants included 183 postpartum mothers who endorsed a history of childhood maltreatment (CM) and enrolled in a longitudinal study of mother and child outcomes. Participants completed questionnaires to assess CM severity, associated societal and maternal characteristics, and depressive symptom severity.
Results: The results confirm previously identified links between CM severity and PDS. Further, hierarchical linear regression analyses indicate the interaction of household income and interpersonal support from the family attenuates the relationship between CM severity and PDS. The final model accounted for 29% of the variance of PDS scores, a large effect size.
Conclusions: This study is the first to demonstrate interrelationships between income and social support on resilience to postpartum psychopathology in childhood trauma-surviving women. Social support appeared to protect against PDS for all mothers in this study while income only conferred a protective effect when accompanied by family support. For clinicians, this implies the need to focus on improving family and other relationships, especially for at-risk mothers.
abstract_id: PUBMED:29448910
Assessing the Mediating Role of Social Support in Childhood Maltreatment and Psychopathology Among College Students in Northern Ireland. The detrimental impact of early trauma, particularly childhood maltreatment, on mental health is well documented. Although it is understood that social support can act as a protective factor toward mental health for children who experience such adversity, few studies have addressed the experience of childhood maltreatment and the important function of social support in adulthood. The current study aimed to assess the mediating role of social support in the relationship between childhood experiences of maltreatment and mental health outcomes including anxiety, depression, posttraumatic stress disorder (PTSD), and problematic alcohol use in a sample of university students (N = 640) from Northern Ireland. Results of binary logistic regression analyses indicated that those reporting experiences of childhood maltreatment were at increased odds of mental health outcomes of PTSD, anxiety, and depression, but not alcohol use. Those reporting greater social support were significantly less likely to report on these mental health outcomes. In addition, the indirect paths from childhood maltreatment through social support to PTSD, depression, and anxiety were all significant, suggesting that social support, particularly family support, is a significant mediator of these relationships. Such findings have important implications for the social care response to children experiencing maltreatment and future support for such children as they transition to adolescence and adulthood.
abstract_id: PUBMED:38070306
The effects of childhood maltreatment on social support, inflammation, and depressive symptoms in adulthood. Rationale: Social Safety Theory (SST) suggests that social threats increase inflammation, exacerbating health risks, but that social support may decrease inflammatory signaling. One of the key health problems affected by both social forces and inflammation is major depression.
Objective: The present study sought to test aspects of the SST, to understand how social support and inflammation may mediate the effects of childhood maltreatment on depressive symptoms in adulthood.
Methods: This study utilized data from the national Midlife Development in the United States study (n = 1969; mean age 53; 77.2% White; 53.6% female) to model the effects of childhood maltreatment on depressive symptoms in adulthood and the potential serial mediating effects of social support and inflammation. Analyses were conducted via structural equation modeling, using the four subscales of the Center for Epidemiologic Studies Depression Scale to indicate depressive symptoms, the five subscales of the Childhood Trauma Questionnaire to indicate childhood maltreatment, and the Positive Relations Scale and a network level measure of support as indicators of social support. Inflammation was indexed using C-reactive protein (CRP). The model was estimated via maximum likelihood with robust standard errors and significance of indirect effects were assessed via a Sobel test.
Results: Childhood maltreatment was associated with increased depressive symptoms and CRP but decreased social support. Social support was associated with decreased depressive symptoms while CRP was associated with increased depressive symptoms. Assessing indirect effects yielded no serial mediation effect; however, a significant indirect effect from childhood maltreatment to depressive symptoms through social support was identified.
Conclusions: Analyses indicate mixed support for the SST with respect to depressive symptoms. Results highlight the role of social support in mitigating the effects depressive symptoms in adulthood; although, alternative strategies may be needed to decrease the effects of childhood maltreatment on inflammation as indexed by CRP.
abstract_id: PUBMED:26260146
From Childhood Maltreatment to Allostatic Load in Adulthood: The Role of Social Support. Although previous research has documented that social support acts as a protective factor for individuals exposed to trauma, most research relies on assessments of social support at one point in time. The present study used data from a prospective cohort design study to examine the stability of social support from childhood through middle adulthood in individuals with documented histories of childhood abuse and neglect and matched controls (aged 0-11) and assessed the impact of social support on allostatic load, a composite measure of physiological stress response assessed through blood tests and physical measurements, in middle adulthood. Maltreated children are more likely to have unstable social support across the life span, compared to matched controls. Social support across the life span partially mediated the relationship between child maltreatment and allostatic load in adulthood, although there were differences by race and sex. These findings have implications for interventions to prevent the negative consequences of child maltreatment.
abstract_id: PUBMED:32884627
Childhood Exposure to Family Violence and Adult Trauma Symptoms: The Importance of Social Support from a Spouse. This study examines the roles of both positive and negative social support from a spouse as potential moderators of associations between experiences of physical abuse and exposure to intimate partner violence (IPV) as a child and adult trauma symptoms. We hypothesized that positive social support received from a spouse would have a buffering effect on trauma symptoms, while negative social support from a spouse would have a potentiating effect. Participants were 193 newlywed couples (total N = 386) randomly recruited from a marriage license database. Participants completed self-report questionnaires measuring the nature and severity of child maltreatment and trauma symptoms, and engaged in a brief video-taped task in which they discussed a personal problem with their partner. Positive and negative support behaviors exhibited during the recorded task were then coded. Results of a dyadic data analysis (actor partner interdependence model) indicated that positive social support from a spouse buffered against trauma symptoms among men who were exposed to IPV during childhood, while negative social support from a spouse potentiated trauma symptoms among men who were exposed either to IPV or child physical abuse (CPA). The buffering and potentiating effects of spousal support were reduced among men who were exposed to increasingly severe levels of both IPV and CPA. By contrast, women's trauma symptoms were unrelated to either positive or negative support from a spouse. These findings extend prior research by suggesting that, for men, day-to-day provisions of support from a spouse may play a key role in posttraumatic recovery.
abstract_id: PUBMED:36430105
Social Determinants of Health and Child Maltreatment Prevention: The Family Success Network Pilot. Child maltreatment is a highly prevalent public health concern that contributes to morbidity and mortality in childhood and short- and long-term health consequences that persist into adulthood. Past research suggests that social determinants of health such as socioeconomic status and intergenerational trauma are highly correlated with child maltreatment. With support from the U.S. Children's Bureau, the Ohio Children's Trust Fund is currently piloting the Family Success Network, a primary child maltreatment prevention strategy in Northeast Ohio that seeks to address these social determinants through pillars of service that include family coaching, financial assistance, financial education, parenting education, and basic life skills training. This study highlights the initial development phase of a pilot study. Plans for in-depth process and outcome evaluations are discussed. The project seeks to improve family functioning and reduce child protective services involvement and foster care entry in an economically disadvantaged region.
abstract_id: PUBMED:31316399
Childhood Maltreatment Influences Mental Symptoms: The Mediating Roles of Emotional Intelligence and Social Support. Childhood maltreatment and its influence on mental health are key concerns around the world. Previous studies have found that childhood maltreatment is a positive predictor of mental symptoms, but few studies have been done to explore the specific mediating mechanisms between these two variables. Previous studies have found that there is a negative correlation between childhood maltreatment and emotional intelligence and between childhood maltreatment and social support, both of which are strong indicators of mental symptoms. Therefore, in this study, we took emotional intelligence and social support as mediating variables, exploring their mediating effects between childhood maltreatment and mental symptoms via the structural equation modeling method. We recruited 811 Chinese college students to complete the Childhood Trauma Questionnaire (CTQ), the Symptom Checklist 90 Scale (SCL-90), the Wong Law Emotional Intelligence Scale (WLEIS), and the Perceived Social Support Scale (PSSS). The results showed a significant and positive correlation between childhood maltreatment and mental symptoms (β = 0.26, P < 0.001); meanwhile, social support played a significant mediating role in the influence of childhood maltreatment on emotional intelligence [95% confidence intervals, (-0.594 to -0.327)]; and emotional intelligence likewise played a significant mediating role in the effect of social support on mental symptoms [95% confidence intervals, (-0.224 to -0.105)]. These results indicated that childhood maltreatment not only directly increases the likelihood of developing mental symptoms, but also affects emotional intelligence through influencing social support and then indirectly increasing the likelihood of developing mental symptoms. This study provided a theoretical basis for ameliorating adverse effects of childhood maltreatment on mental symptoms by enhancing emotional intelligence and social support.
abstract_id: PUBMED:31293398
Infant Trauma Alters Social Buffering of Threat Learning: Emerging Role of Prefrontal Cortex in Preadolescence. Within the infant-caregiver attachment system, the primary caregiver holds potent reward value to the infant, exhibited by infants' strong preference for approach responses and proximity-seeking towards the mother. A less well-understood feature of the attachment figure is the caregiver's ability to reduce fear via social buffering, commonly associated with the notion of a "safe haven" in the developmental literature. Evidence suggests this infant system overlaps with the neural network supporting social buffering (attenuation) of fear in the adults of many species, a network known to involve the prefrontal cortex (PFC). Here, using odor-shock conditioning in young developing rats, we assessed when the infant system transitions to the adult-like PFC-dependent social buffering of threat system. Rat pups were odor-shock conditioned (0.55 mA-0.6 mA) at either postnatal day (PN18; dependent on mother) or 28 (newly independent, weaned at PN23). Within each age group, the mother was present or absent during conditioning, with PFC assessment following acquisition using 14C 2-DG autoradiography and cue testing the following day. Since the human literature suggests poor attachment attenuates the mother's ability to socially buffer the infants, half of the pups at each age were reared with an abusive mother from PN8-12. The results showed that for typical control rearing, the mother attenuated fear in both PN18 and PN28 pups, although the PFC [infralimbic (IL) and ventral prelimbic (vPL) cortices] was only engaged at PN28. Abuse rearing completely disrupted social buffering of pups by the mother at PN18. The results from PN28 pups showed that while the mother modulated learning in both control and abuse-reared pups, the behavioral and PFC effects were attenuated after maltreatment. Our data suggest that pups transition to the adult-like PFC social support circuit after independence from the mother (PN28), and this circuit remains functional after early-life trauma, although its effectiveness appears reduced. This is in sharp contrast to the effects of early life trauma during infancy, where social buffering of the infant is more robustly impacted. We suggest that the infant social buffering circuit is disengaged by early-life trauma, while the adolescent PFC-dependent social buffering circuit may use a safety signal with unreliable safety value.
Answer: Yes, perceived social support plays a buffering role in the relationship between child maltreatment severity and adult trauma symptoms. Studies have shown that increased severity of various types of child maltreatment, such as sexual, physical, and emotional abuse, as well as physical and emotional neglect, predicts greater trauma symptoms in both women and men. However, perceived social support from family and friends can predict lower trauma symptoms across all levels of maltreatment for men, and for women, greater perceived social support from friends (but not family) is associated with decreased trauma symptoms (PUBMED:23623620).
Moreover, social support provided by friends has been found to moderate the association between childhood maltreatment and postnatal distress, suggesting that friends' support may promote resilience during the postpartum period in women with a history of childhood maltreatment (PUBMED:28282596). Additionally, the interaction of household income and interpersonal support from the family can attenuate the relationship between childhood maltreatment severity and postpartum depressive symptoms (PUBMED:28028663).
The mediating role of social support in the relationship between childhood experiences of maltreatment and mental health outcomes, including PTSD, anxiety, and depression, has also been documented. Social support, particularly from the family, is a significant mediator of these relationships (PUBMED:29448910). Furthermore, social support has been shown to mitigate the effects of childhood maltreatment on depressive symptoms in adulthood, although it may not decrease the effects of childhood maltreatment on inflammation as indexed by C-reactive protein (PUBMED:38070306).
In the context of spousal relationships, positive social support from a spouse can buffer against trauma symptoms among men who were exposed to intimate partner violence (IPV) or child physical abuse during childhood, while negative social support from a spouse can potentiate trauma symptoms (PUBMED:32884627).
Overall, these findings underscore the importance of strengthening perceptions of available social support when working with adult survivors of child maltreatment and suggest that social support is a crucial factor in mitigating long-term trauma symptoms resulting from childhood maltreatment. |
Instruction: Clips versus suture technique: is there a difference?
Abstracts:
abstract_id: PUBMED:26806237
Comparison of Nonpenetrating Titanium Clips versus Continuous Polypropylene Suture in Dialysis Access Creation. Background: Nonpenetrating titanium surgical clips (clips) offer a theoretical advantage of inducing less intimal hyperplasia at an anastomosis because of less endothelial injury. Whether this translates into improved outcomes when used in the creation of arteriovenous fistulas (AVFs) remains unclear. We sought to compare the maturation, patency, and failure rates of anastomoses created using traditional continuous polypropylene suture and clips.
Methods: All primary AVF created at a single Veterans Administration Medical Center were reviewed over a 6-year period. Anastomoses were created with either clips or suture based on surgeon preference. Patient characteristics and surgical outcomes were collected. Comparisons were made between the 2 groups.
Results: Over a 6-year period, 334 fistulas were created (29% suture and 71% clips) in 326 patients. The mean age was 64.8 ± 11 years with 98% males. Comorbidities included diabetes (70%), hypertension (96.1%), and tobacco use (52.9% previous or current). Approximately half the patients were predialysis. Comparison of patient characteristics showed no differences between the suture and clip groups. There was no significant difference in maturation rate (suture 79% versus clips 72%, P = 0.25), median time to maturation (suture 62 ± 35 versus clips 71 ± 13 days, P = 0.07), 1-year primary patency rate (suture 37.4% versus clips 39.6%, P = 0.72), 1-year assisted primary patency rate (suture 82.4% versus clips 76.3%, P = 0.31), or overall failure rates (suture 62% versus clips 58%, P = 0.56). Median time to initial failure or reintervention was not significantly different between the groups (suture 615 [range, 239-991] versus clips 812 [range, 635-989] days, P = 0.72).
Conclusions: Compared to traditional polypropylene suture creation of upper extremity AVFs, nonpenetrating clips had equivalent maturation, 1-year patency, and overall failure rates. Neither clips nor suture offers any clear advantage in the creation of AVF.
abstract_id: PUBMED:22322376
Hydrostatic comparison of nonpenetrating titanium clips versus conventional suture for repair of spinal durotomies. Study Design: Biomechanics.
Objective: To compare the hydrostatic strength of suture and nonpenetrating titanium clip repairs of standard spinal durotomies.
Summary Of Background Data: Dural tears are a frequent complication of spine surgery and can be associated with significant morbidity. Primary repair of durotomies with suture typically is attempted, but a true watertight closure can be difficult to obtain because of leakage through suture tracts. Nonpenetrating titanium clips have been developed for vascular anastomoses and provide a close apposition of the tissues without the creation of a suture tract.
Methods: Twenty-four calf spines were prepared with laminectomies and the spinal cord was evacuated leaving an intact dura. After Foley catheters were inserted from each end and inflated adjacent to a planned dural defect, the basal flow rate was measured and a 1-cm longitudinal durotomy was made with a scalpel. Eight repairs were performed for each material, which included monofilament suture, braided suture, and nonpenetrating titanium clips. The flow rate at 30, 60, and 90 cm of water and the time needed for each closure were measured.
Results: There was no statistically significant difference in the baseline leak rate for all 3 groups. There was no difference in the leakage rate of durotomies repaired with clips and intact specimens at any pressure. Monofilament and braided suture repairs allowed significantly more leakage than both intact and clip-repaired specimens at all pressures. The difference in leak rate increased as the pressure increased. Closing the durotomy with clips took less than half the time of closure with suture.
Conclusion: Nonpenetrating titanium clips provide a durotomy closure with immediate hydrostatic strength similar to intact dura, whereas repair with either suture type was significantly less robust. The use of titanium clips was also more rapid than suture repair.
abstract_id: PUBMED:28755123
Closure of hip wound, clips or subcuticular sutures: does it make a difference? The purpose of this study was to investigate wound healing and complications following surgery for fractured neck of femur. Seventy-one patients were prospectively divided into two groups, according to the method of skin closure: group A had clips; group B had subcuticular vicryl® sutures. There were 41 patients in group A, and 30 patients in group B. There were 13 males and 58 females with an average age of 84.3 years (range 67-100 years). Thirty-seven patients underwent fixation with a dynamic hip screw, while 34 underwent either a hemi- or total hip arthroplasty. The wounds were inspected on days 2, 5, 7, 10 and 14 for discharge, redness and infection. There was a statistically significantly greater amount of wound discharge (P<0.002) and redness (P<0.009) in group A (clips) as compared to group B (vicryl). There were three cases of infection, all in patients in whom clips had been used for skin closure. We concluded that subcuticular vicryl sutures were significantly better than clips in terms of wound healing as well as cost. Except for some decrease in operative time, there does not seem to be any advantage in the use of clips for wound closure.
abstract_id: PUBMED:18608996
The use of superelastic suture clips in laparoscopic gastric banding. Nickel-titanium suture clips have been developed to enhance suturing in cardiovascular surgery (U-CLIP, Medtronic, Minneapolis, MN, USA). The first applications of superelastic suture clips in bariatric surgery were reported by Barba and Kane in 2004. No other experiences in this field have been reported or published. Our experience with this newly developed suture clip, used for suturing the anterior wrap in laparoscopic gastric banding, started in 2007. The U-Clip technology and the surgical technique are described and discussed in this article.
abstract_id: PUBMED:33249206
Economic evaluation of suture versus clip anastomosis in arteriovenous fistula creation. Objective: Techniques such as the use of nonpenetrating vascular clips for arteriovenous fistula (AVF) anastomotic creation have been developed in an effort to reduce fistula-related complications. However, the outcomes data for the use of clips have remained equivocal, and the cost evaluations to support their use have been largely theoretical. Therefore, the present study aimed to determine both the clinical and the cost outcomes of AVFs created with nonpenetrating vascular clips compared with the continuous suture technique during a 10-year period at a single institution.
Methods: All patients undergoing AVF creation in the upper extremity from 2009 through 2018 were retrospectively analyzed. The patient demographics and AVF outcomes were collected and compared stratified by the surgical technique used. A cost analysis was performed of a subgroup of patients from 2013 to 2018.
Results: During the 10-year study period, 916 AVFs were created (79% using the continuous suture technique and 21% using nonpenetrating vascular clips). Patient demographics and comorbid conditions did not differ between the two groups, and no differences were present in maturation, primary patency, assisted primary patency, or complication rates between the two groups at 1 year. The suture group had a shorter time to maturation (4.3 months vs 5.5 months; P < .01) and improved secondary patency compared with the clip group (77.13% vs 69.59%; P = .03). The cost analysis of the procedures revealed a significant difference in direct costs (suture, $1389.26 vs clip, $1716.51; P < .01) and contribution margin (suture, $1770.19 vs clip, $1128.36; P < .01) for the two groups.
Conclusions: Both suture and clip techniques in AVF creation demonstrated equivalent rates of maturation, primary patency, assisted primary patency, and complications at 1 year with higher expense associated with the use of clips. Thus, in an effort to reduce the economic burden of healthcare in the United States, the findings from the present study support the preferential use of the standard polypropylene suture technique when creating upper extremity AVFs.
abstract_id: PUBMED:9244079
Randomised trial of subcuticular suture versus metal clips for wound closure after thyroid and parathyroid surgery. A randomised trial was conducted to compare the results of neck wound closure using metal (Michel) clips or subcuticular suture. All operations were performed using a standardised technique, which included wound infiltration with 10 ml bupivacaine and adrenaline solution, no strap muscle division and the use of suction drains. All the collar incisions and wound closures were performed by the same surgeon. At the end of each operation patients were randomised to wound closure by either metal clips (n = 38) or a continuous 3/0 prolene subcuticular suture (n = 42). Daily postoperative pain scores and the discomfort caused by clip/suture removal were recorded. The cosmetic appearance of each wound was scored by the patient, the surgeon, and an independent observer using verbal response and linear analogue scales. The two study groups were well matched for age, sex, indication for surgery and operation performed. There were no differences in postoperative pain scores between clips and sutures. Removal of subcuticular sutures was performed more quickly (P < 0.0001) and caused less pain (P < 0.0001, visual analogue scale; P = 0.0042, verbal response scale) than the removal of clips. At the time of discharge, the cosmetic appearance scores generated by the surgeon, patient and independent observer were higher for suture closed wounds than clips. However, by 3 and 6 months follow-up there were no differences in cosmetic appearance between the two methods of closure. Only very short-term cosmetic results are influenced by the type of wound closure in thyroid and parathyroid surgery, but sutures are quicker and less painful to remove than Michel clips.
abstract_id: PUBMED:11240530
Can minimal arterial wall trauma using non-penetrating mechanical clip closure prevent myointimal hyperplasia? Preliminary results. Subject: Vascular anastomosis is still associated with a significant rate of early (stenosis, thrombosis) and delayed (intimal hyperplasia) complications. Even though suture closure remains the most widespread standard procedure, many mechanical systems have been developed, mostly using non-penetrating clips, aiming to make the anastomosis easier, to reduce operating time, and to reduce scarring of the arterial wall. We investigated the usefulness of non-penetrating titanium Vascular Closure Staples (VCS), developed for peripheral blood vessel anastomosis, in a study of 20 rabbits using the small VCS system.
Material And Methods: In 20 rabbits, 9 aortic closures were performed with VCS clips and 11 with standard suture closure.
Results: We found a significant improvement in the operating time of the closure (9 ± 2 minutes versus 14 ± 4 minutes), in early and delayed (10 weeks) patency, in preservation of the aortic diameter (0.248 ± 0.01 cm versus 0.246 ± 0.039 cm), and in loss of surface area (40.3 ± 5.59% versus 45.6 ± 6.34%). The main improvement was the reduced intimal hyperplasia (0.128 ± 0.05 mm versus 0.198 ± 0.032 mm; P=0.012).
Conclusion: Arterial closure can be performed more rapidly with VCS clips than with suture closure, and with a markedly reduced intimal hyperplastic reaction. These findings warrant continued experimental studies to evaluate VCS closure in the medium and long term.
abstract_id: PUBMED:30392266
Application of purse-string suture with Harmonious Clips and an Olympus endoloop in single-channel endoscopy for large gastric antrum mucosal defects. Objective: To evaluate the feasibility and safety of purse-string suturing with Harmonious Clips and an Olympus endoloop in single-channel endoscopy for large gastric antrum mucosal defects. Methods: A total of 33 patients who underwent single-channel endoscopic submucosal dissection (ESD) of the gastric antrum at the First People's Hospital of Wujiang District from January 2015 to April 2018 were retrospectively analyzed. Each patient had one lesion, and all lesion diameters exceeded 3 cm. After resection and hemostasis, the defect was closed with a purse-string suture using Harmonious Clips and an Olympus endoloop in the study group (n=16), while no suture was placed in the control group (n=17). The degree of abdominal pain, postoperative gastrointestinal decompression time, incidence of delayed hemorrhage, postoperative hospital stay and healing rate were observed and compared. Results: Resection was completed successfully in all patients, no perforation occurred, and all lesions were completely resected in a single procedure. All patients in the study group were sutured successfully. The abdominal pain score on the first postoperative day was 2.7±0.7 in the study group and 3.6±0.8 in the control group (t=3.686, P=0.001). The mean postoperative gastrointestinal decompression time was 1.6±0.5 days in the study group versus 2.4±0.7 days in the control group (t=3.675, P=0.001). No delayed bleeding occurred in the study group, whereas 5 cases in the control group had delayed bleeding, a delayed hemorrhage rate of 29.4%; 4 of these cases achieved hemostasis with endoscopic therapy, and 1 required surgical intervention after endoscopic hemostasis failed (P=0.044). The average postoperative hospital stays were 6.2±1.1 days and 5.9±2.0 days, respectively (t=0.423, P=0.675). At follow-up gastroscopy two months after the operation, all wounds in the study group had healed completely (healing rate 100%), whereas 6 cases in the control group showed incomplete wound healing (healing rate 64.7%) (P=0.018). No recurrence was found on gastroscopy at 6 months after the operation. Conclusion: Purse-string suturing with Harmonious Clips and an Olympus endoloop in single-channel endoscopy is feasible, effective, safe and reliable for closing large gastric antrum mucosal defects and is worth wider adoption.
abstract_id: PUBMED:11761863
Comparative study of microvascular anastomotic clips and suture in small vessel anastomosis Objective: To explore an ideal way of small vessel anastomosis for microsurgery.
Methods: Anastomoses of both carotid arteries were performed in 20 rabbits. The artery on one side was anastomosed with anastomotic clips; for comparison, the artery on the other side was anastomosed with suture. The vessels were harvested on the 1st and 14th days after operation and evaluated under the operating microscope, light microscope and electron microscope.
Results: The average anastomotic time with suture was about 15 minutes, whereas with clips it was 2 to 5 minutes. There was no difference in patency between the two techniques. Endothelialization at the anastomotic sites was complete 14 days postoperatively with both techniques. However, with the anastomotic clips there was no endothelial damage or foreign-body formation inside the vessels.
Conclusion: This experiment confirmed that clip anastomosis provides a very safe and easy way to perform small vessel anastomosis and reduces the incidence of thrombosis.
abstract_id: PUBMED:11109037
Clips versus suture technique: is there a difference? Introduction: Coronary artery bypass grafting (CABG) is one of the most common procedures performed today, and wound complications are a major source of morbidity and cost.
Objective: To determine whether there is any difference in wound outcome (including cost in a Canadian context) between a subcuticular suture technique and skin stapling technique for closure of sternal and leg incisions in CABG patients.
Patients And Methods: One hundred and sixty-two patients undergoing CABG were prospectively randomized to have their sternal and leg incisions closed with either a subcuticular suture technique or skin clips. Data were obtained through chart review, in-hospital assessments and follow-up visits. Nonblinded assessments were made regarding wound leakage, inflammation, infection, necrosis, swelling, dehiscence and cosmesis. Each of the parameters was graded on a scale from 1 to 4. The cost was evaluated in Canadian dollars.
Results: There were trends toward increased rates of in-hospital sternal (P=0.09) and leg (P=0.17) incision inflammation when the wounds were closed with skin clips. There was a significantly greater (P=0.05) rate of sternal wound infection with clips, as well as a tendency (P=0.15) toward a greater rate of mediastinitis at follow-up assessment. Cosmetic outcome was similar for both groups. The cost incurred was significantly greater when skin clips were used for closure. There was a greater than threefold difference, which translates to a greater than $10,000 difference over one year.
Conclusions: Closure with a subcuticular technique achieves better outcomes than the use of skin clips. When factoring in the increased cost incurred by using clips, as well as other intangible factors such as surgical skill acquisition, subcuticular suture closure appears to be a favourable method of wound closure in CABG patients compared with the use of skin stapling techniques.
Answer: The comparison between clips and suture techniques for various surgical procedures has been investigated in multiple studies, and the results suggest that there are both similarities and differences depending on the specific context and outcomes measured.
For arteriovenous fistula (AVF) creation, nonpenetrating titanium clips were found to have equivalent maturation, 1-year patency, and overall failure rates compared to traditional polypropylene suture, with no clear advantage for either method (PUBMED:26806237). However, an economic evaluation indicated that while both suture and clip techniques demonstrated equivalent rates of maturation, primary patency, assisted primary patency, and complications at 1 year, the use of clips was associated with higher expenses, suggesting that sutures may be more cost-effective (PUBMED:33249206).
In spinal durotomy repair, nonpenetrating titanium clips provided a closure with immediate hydrostatic strength similar to intact dura, whereas suture repair was significantly less robust. Additionally, the use of clips was faster than suture repair (PUBMED:22322376).
For skin closure after hip surgery, subcuticular vicryl sutures resulted in significantly better wound healing and fewer complications compared to clips, with no advantage for clips except for a decrease in operative time (PUBMED:28755123).
In thyroid and parathyroid surgery, subcuticular sutures were quicker and less painful to remove than metal clips, and while there was a short-term cosmetic advantage for sutures, there were no long-term differences in cosmetic outcomes between the two methods (PUBMED:9244079).
In microvascular anastomosis, clips significantly reduced anastomotic time compared to sutures without compromising patency, and there was no endothelial damage or foreign body formation inside the vessels with clips (PUBMED:11761863).
For coronary artery bypass grafting (CABG) patients, a subcuticular suture technique resulted in better outcomes and was more cost-effective than skin stapling with clips, which had higher rates of sternal wound infection and mediastinitis (PUBMED:11109037).
Overall, the choice between clips and sutures may depend on the specific surgical context, desired outcomes, and cost considerations. While some studies show equivalent clinical outcomes, others suggest differences in terms of cost, speed, wound healing, and complication rates. |
Instruction: Is posttraumatic stress in youth a culture-bound phenomenon?
Abstracts:
abstract_id: PUBMED:38395506
Posttraumatic Stress Disorder in Our Migrant Youth. There is an ongoing diagnostic and treatment challenge for migrant youth with posttraumatic stress disorder (PTSD) that many clinicians face. Current studies have helped clinicians to develop a better understanding of the migrant youth's journey including potentially traumatic and adverse events they encounter. This includes determining if premigration, migration, and postmigration stressors have had an impact on the individual. This has also helped clinicians, educators, and legal advocates to use a collaborative approach to address the migrant youth's needs for managing the severity of PTSD symptoms.
abstract_id: PUBMED:37745625
Youth Dually-Involved in the Child Welfare and Juvenile Justice Systems: Varying Definitions and Their Associations with Trauma Exposure, Posttraumatic Stress, & Offending. Recently, scholars have placed increasing effort on better understanding the unique needs of youth involved in both the child welfare and juvenile justice systems. This study drew from the Developmental Cascade of Multisystem Involvement Framework to examine group differences in trauma exposure, posttraumatic stress symptoms, and offending among youth solely involved in the juvenile justice system and youth with varying degrees of dual-system involvement, including crossover youth (i.e., youth with a history of maltreatment and offending regardless of system involvement), dual-contact youth (i.e., youth who had a history of a substantiated CW maltreatment petition prior to their involvement in the current study), and dually-involved youth (i.e., youth under the care and custody of the state's child welfare system at the time of study participation). Four hundred adolescents (25% girls, Mage = 15.97) were recruited from a detention center and completed self-report measures assessing trauma exposure, posttraumatic stress, and offending. Juvenile justice and child welfare records also were collected. Results indicated that, compared to youth solely involved in the juvenile justice system, crossover youth reported significantly more exposure to traumatic events, more severe posttraumatic stress symptoms, and more self-reported offending. In contrast, results indicated few differences between dual-contact youth and youth solely involved in the juvenile justice system; these groups only differed in age and in recidivism charges. There also were few differences between dually-involved youth and youth solely involved in the juvenile justice system; these groups only differed in age and exposure to non-Criterion A traumatic events. The current results suggest that categorizing youth as crossover youth based on their own self-reported history of child maltreatment exposure resulted in more observed differences between dual-system youth and youth solely involved in juvenile justice. The present results have valuable implications for how we operationalize youth's system involvement and highlight the importance of examining child maltreatment as a point of prevention and intervention efforts for these youth.
abstract_id: PUBMED:30408700
Changes in posttraumatic stress symptoms, cognitions, and depression during treatment of traumatized youth. Objective: Although there is compelling evidence that trauma-focused cognitive behavioral therapy (TF-CBT) is an effective treatment for traumatized youth, we know less about the mechanisms contributing to symptom reduction. To improve the understanding of change mechanisms in TF-CBT, this paper investigates the possible bi-directional longitudinal relationship between changes in posttraumatic stress symptoms (PTSS), cognitions and depression in a clinical sample of traumatized youth.
Methods: The study includes 79 youth (M age = 15.0 years, SD = 2.2, 74.7% girls) who received TF-CBT. The youth were assessed for PTSS, posttraumatic cognitions, and depressive symptoms at baseline, mid-treatment, post-treatment, 12 months after baseline, and 18 months after post-treatment.
Results: Growth curve analyses showed that PTSS, posttraumatic cognitions and depressive symptoms decreased over time. Cross-lagged mediation analyses demonstrated that reduction in posttraumatic cognitions predicted reduction in both PTSS and depression at the subsequent measurement wave, but we did not find a clear pattern in the longitudinal relationship between PTSS and depression.
Conclusions: Changes in posttraumatic cognitions mediate the therapeutic effects of TF-CBT on symptoms of posttraumatic stress and depression. Future studies should seek to tease out how clinicians can best proceed to help youth reduce their posttraumatic cognitions and thereby improve treatment outcome.
abstract_id: PUBMED:15741471
Is posttraumatic stress in youth a culture-bound phenomenon? A comparison of symptom trends in selected U.S. and Russian communities. Objective: The cross-cultural applicability of the concept of posttraumatic stress was investigated by assessing symptom frequency and levels of comorbid psychopathology in adolescents from the United States and Russia.
Method: A self-report survey was conducted in representative samples of 2,157 adolescents 14 to 17 years old from urban communities of the United States (N=1,212) and Russia (N=945).
Results: In both countries, the levels of all three major clusters of posttraumatic symptoms (reexperiencing, avoidance, and arousal), as well as of internalizing psychopathology, increased along with the level of posttraumatic stress. Expectations about the future had a tendency to decrease with increasing posttraumatic stress. No differences between countries in significant interaction effects for symptom levels were found.
Conclusions: The current findings suggest that posttraumatic symptoms and their associations with other adolescent mental health problems are not culture bound and that the psychological consequences of trauma follow similar dynamics cross-culturally.
abstract_id: PUBMED:31024363
The Contribution of Posttraumatic Stress Disorder and Depression to Insomnia in North Korean Refugee Youth. Refugees are exposed to multiple traumatic and stressful events and thereby are at higher risk for developing a variety of psychological sequelae including posttraumatic stress disorder (PTSD). However, the relation of PTSD to other mental health conditions has not been fully revealed in refugee populations. The present study investigated relationships among trauma exposure, PTSD, depression, and insomnia in North Korean refugee youth. Seventy-four refugee youth were assessed for exposure to traumatic events, PTSD, depression, and insomnia symptoms. The results showed high rates of multiple trauma exposures among the refugee youth and high incidences of co-occurring symptoms of PTSD and insomnia in those who have multiple trauma. Furthermore, the overall symptoms and four cluster symptoms of PTSD were strongly correlated with insomnia in addition to depression. In the path model to predict insomnia, PTSD affected insomnia only through depression, indicating that the greater the levels of PTSD suffered, the greater the likelihood for developing sleep problems via depression. The present study indicates how sleep problems relate to trauma-related symptoms, i.e., PTSD and depression in refugee populations, and highlights the need for further investigation of the specific relation between sleep problems and trauma-related symptoms for effective evaluation and intervention.
abstract_id: PUBMED:28285446
Executive function as a mediator in the link between single or complex trauma and posttraumatic stress in children and adolescents. Purpose: In this study, we examined whether there is a mediating role of executive function (EF) in the relationship between trauma exposure and posttraumatic stress in youth.
Methods: Children and adolescents exposed to trauma were recruited at an academic center for child psychiatry in The Netherlands. The total sample consisted of 119 children from 9 to 17 years old (M = 13.65, SD = 2.45). Based on retrospective life event information, the sample was divided into three groups: a single trauma group (n = 41), a complex trauma group (n = 38), and a control group that was not exposed to traumatic events (n = 40).
Results: Our findings revealed that youth exposed to complex trauma had more deficits in EF compared to youth in the single trauma and control groups. EF was found to partly mediate posttraumatic stress symptoms for youth exposed to complex trauma, but not for youth exposed to single trauma. Youth exposed to complex trauma showed more deficits in EF, which was in turn associated with higher levels of posttraumatic stress symptoms.
Conclusions: Our findings provide partial support for the role of EF in mediating posttraumatic stress outcomes for youth exposed to complex trauma. This points to the important role of EF in the etiology and treatment of complexly traumatized youth.
abstract_id: PUBMED:34957916
Relationship of parent-rated and objectively evaluated executive function to symptoms of posttraumatic stress and attention-deficit/hyperactivity disorder in homeless youth. Compared to their stably housed peers, homeless and highly mobile (HHM) youth experience disproportionately greater adversity and risk leading to a wide variety of poor developmental outcomes, and targeted interventions have the potential to mitigate such outcomes. A growing literature highlights the need for accurate diagnosis in high-risk populations given the considerable overlap between posttraumatic symptomology and behaviorally based disorders such as ADHD. Objective testing inferring neurobiological and circuit-based abnormalities in posttraumatic stress disorder (PTSD) and ADHD may provide a useful clinical tool to aid accurate diagnosis and treatment recommendations. This novel, exploratory study examined the relation of executive function (EF), as measured by objective testing and parent ratings, to symptoms of posttraumatic stress and ADHD in 86 children (age 9 to 11) living in emergency homeless shelters. Parent-rated EF problems suggested broad impairment associated with ADHD symptoms but specific impairment in emotional/behavioral function associated with posttraumatic stress symptoms. While measures of inhibition and shifting EF were strongly associated with symptomology in bivariate correlations, they explained minimal variance in regression models. Internalizing behavior problems were associated with posttraumatic stress symptoms, while externalizing behavior problems were associated with ADHD symptoms. Implications for clinical practice and future research are discussed.
abstract_id: PUBMED:30829044
Depression, anxiety, and posttraumatic stress as predictors of immune functioning: differences between youth with behaviorally and perinatally acquired HIV. Youth living with HIV (YLWH) face significant mental health problems, namely depression, anxiety, and PTSD with rates of these disorders higher than in the general population. This study explored the relationship between symptoms of depression, anxiety, and PTSD and biological markers among a sample of 145 YLWH ages 13-25 years. Participants completed the Center for Epidemiologic Studies Depression Scale (CES-D), Generalized Anxiety Disorder-7 Item Scale (GAD-7), and Primary Care-Posttraumatic Stress Disorder Screen (PC-PTSD). Biological markers included CD4 count and viral load (VL) abstracted from medical records. Findings revealed a relationship between depression and anxiety and CD4 count as well as anxiety and VL. The relationship between depression and anxiety and CD4 count and anxiety and VL was moderated by transmission mode (i.e., behavioral versus perinatal). For youth perinatally infected, greater psychological symptoms of depression and anxiety were associated with a decline in CD4 count and increase in VL, but this was not true for youth with behaviorally acquired HIV. These findings point to the need for individualized mental health prevention and intervention services for YLWH.
abstract_id: PUBMED:23047596
Family functioning and mental health in runaway youth: association with posttraumatic stress symptoms. This study examined the direct effects of physical and sexual abuse, neglect, poor family communication and worries concerning family relationships, depression, anxiety, and dissociation on posttraumatic stress symptoms. Runaway youth were recruited from emergency youth shelters in New York and Texas. Interviews were completed with 350 youth who averaged 15 years of age. Structural equation modeling was used to examine family functioning, maltreatment, depression, dissociation, and anxiety in relation to posttraumatic stress symptoms. Results indicated that direct effects of family relationship worry to dissociation, β = .77, p < .001; depression, β = .85, p < .001; and anxiety, β = .90, p < .001 were significant, as were relationships between family communication and youth dissociation, β = .42, p < .001; depression, β = .46, p < .001; and anxiety, β = .32, p < .001. No significant effects of physical/sexual abuse or neglect were found. Higher levels of dissociation, β = .21, p < .001 and anxiety symptoms, β = .34, p = .01 were positively and significantly associated with posttraumatic stress symptoms, but depression was not. Findings underscore the critical role of family relationships in mental health symptoms experienced by runaway adolescents.
abstract_id: PUBMED:32515294
Beyond Trauma Exposure: Discrimination and Posttraumatic Stress, Internalizing, and Externalizing Problems Among Detained Youth. According to the race-based traumatic stress model, racial discrimination is proposed to comprise a traumatic experience that results in posttraumatic stress symptoms, as well as internalizing and externalizing problems among youth. Accordingly, a significant body of research has emerged that supports the associations among these constructs. However, the majority of these empirical studies have not accounted for the potential role of traditionally defined traumatic events in these associations. This study investigated whether self-reported racial discrimination was related to posttraumatic stress symptoms, internalizing, and externalizing symptoms above and beyond the impact of other trauma exposures in a sample of 266 detained youth (79% boys, 60% identified as an ethnic minority). Results of hierarchical linear regressions demonstrated that, after accounting for youths' other trauma exposures, racial discrimination accounted for significant variance in the models predicting delinquency and risk-taking but no other externalizing and internalizing problems, or posttraumatic stress symptoms. These findings indicate that racial discrimination may be particularly important for understanding offending behavior among detained youth.
Answer: No, posttraumatic stress in youth is not a culture-bound phenomenon. A study comparing symptom frequency and levels of comorbid psychopathology in adolescents from the United States and Russia found that the levels of all three major clusters of posttraumatic symptoms (reexperiencing, avoidance, and arousal), as well as of internalizing psychopathology, increased along with the level of posttraumatic stress in both countries. There were no differences between the countries in significant interaction effects for symptom levels, suggesting that posttraumatic symptoms and their associations with other adolescent mental health problems are not culture-bound and that the psychological consequences of trauma follow similar dynamics cross-culturally (PUBMED:15741471). |
Instruction: Sex hormones in Malay and Chinese men in Malaysia: are there age and race differences?
Abstracts:
abstract_id: PUBMED:23525310
Sex hormones in Malay and Chinese men in Malaysia: are there age and race differences? Objectives: Variations in the prevalence of sex-hormone-related diseases have been observed between Asian ethnic groups living in the same country; however, available data concerning their sex hormone levels are limited. The present study aimed to determine the influence of ethnicity and age on the sex hormone levels of Malay and Chinese men in Malaysia.
Methods: A total of 547 males of Malay and Chinese ethnicity residing in the Klang Valley Malaysia underwent a detailed screening, and their blood was collected for sex hormones analyses.
Results: Testosterone levels were normally distributed in the men (total, free and non-sex hormone-binding globulin (SHBG) bound fractions), and significant ethnic differences were observed (p<0.05); however, the effect size was small. In general, testosterone levels in males began to decline significantly after age 50. Significant ethnic differences in total, free and non-SHBG bound fraction estradiol levels were observed in the 20-29 and 50-59 age groups (p<0.05). The estradiol levels of Malay men decreased as they aged, but they increased for Chinese men starting at age 40.
Conclusions: Small but significant differences in testosterone levels existed between Malay and Chinese males. Significant age and race differences existed in estradiol levels. These differences might contribute to the ethnic group differences in diseases related to sex hormones, which other studies have found in Malaysia.
abstract_id: PUBMED:33008519
Sex differences in cognition and aging and the influence of sex hormones. Sex differences in cognitive functioning have been consistently reported in some cognitive tasks, with varying effect sizes. The most consistent findings in healthy adults are sex differences in the areas of mental rotation and aspects of attention and verbal memory. Sex differences in the vulnerability and manifestation of several psychiatric and neurologic diseases that involve cognitive disruption provide strong justification to continue investigating the social and biologic influences that underpin sex differences in cognitive functioning across health and disease. The biologic influences are thought to include genetic and epigenetic factors, sex chromosomes, and sex hormones. Sex steroid hormones that regulate reproductive function have multiple effects on the development, maintenance, and function of the brain, including significant effects on cognitive functioning. The aim of the current chapter is to provide a theoretical review of sex differences across different cognitive domains in adulthood and aging, as well as provide an overview on the role of sex hormones in cognitive function and cognitive decline.
abstract_id: PUBMED:35107834
Ethnic differences in serum testosterone concentration among Malay, Chinese and Indian men: A cross-sectional study. Objective: To investigate non-urological patients with multiple comorbidities for factors contributing towards differences in testosterone concentration in multiethnic Malaysian men.
Design: An observational study.
Patients: Sexually active men, ≥40 years, with no known urological problems, were recruited at the phlebotomy clinic at our centre.
Measurements: A brief history along with latest fasting lipid profile and plasma glucose levels were obtained. An Aging Male Symptoms questionnaire was administered; waist circumference (WC) and serum testosterone concentration were measured.
Statistical Analysis: Analysis of testosterone concentration between Malay, Indian and Chinese men was performed. Statistical tests such as analysis of variance, χ2 test, univariate and multivariable regression were performed. A p < .05 was considered statistically significant.
Results: Among the 604 participants analysed, mean testosterone concentration was significantly lower in Malays (15.1 ± 5.9 nmol/L) compared to the Chinese (17.0 ± 5.9 nmol/L) and Indian (16.1 ± 6.5 nmol/L) participants. The mean WC was also found to be higher among the Malays (96.1 ± 10.9 cm) compared to Chinese (92.6 ± 9.6 cm) and Indians (95.6 ± 9.9 cm). Testosterone concentration tended to be lower with higher age, but this was not statistically significant (p > .05). In the multivariable analysis only Malay ethnicity, WC ≥ 90 cm and low high-density lipoprotein (HDL) were associated with lower testosterone concentration.
Conclusion: In this study, Malaysian men of Malay origin had lower testosterone concentration compared with Indian and Chinese men. WC and low HDL were also associated with lower testosterone concentrations.
abstract_id: PUBMED:30154388
Sex Differences and the Influence of Sex Hormones on Cognition through Adulthood and the Aging Process. Hormones of the hypothalamic-pituitary-gonadal (HPG) axis that regulate reproductive function have multiple effects on the development, maintenance and function of the brain. Sex differences in cognitive functioning have been reported in both health and disease, which may be partly attributed to sex hormones. The aim of the current paper was to provide a theoretical review of how sex hormones influence cognitive functioning across the lifespan as well as provide an overview of the literature on sex differences and the role of sex hormones in cognitive decline, specifically in relation to Alzheimer's disease (AD). A summary of current hormone and sex-based interventions for enhancing cognitive functioning and/or reducing the risk of Alzheimer's disease is also provided.
abstract_id: PUBMED:28848613
Racial/Ethnic Differences in the Associations of Overall and Central Body Fatness with Circulating Hormones and Metabolic Factors in US Men. Background: Racial/ethnic disparities in the associations of body fatness with hormones and metabolic factors remain poorly understood. Therefore, we evaluated whether the associations of overall and central body fatness with circulating sex steroid hormones and metabolic factors differ by race/ethnicity.
Methods: Data from 1,243 non-Hispanic white (NHW), non-Hispanic black (NHB) and Mexican-American (MA) adult men in the third national health and nutrition examination survey (NHANES III) were analyzed. Waist circumference (central body fatness) was measured during the physical examination. Percent body fat (overall body fatness) was calculated from bioelectrical impedance. Associations were estimated by using weighted linear regression models to adjust the two measures of body fatness for each other.
Results: Waist circumference, but not percent body fat was inversely associated with total testosterone and SHBG in all three racial/ethnic groups after their mutual adjustment (all P < 0.0001). Percent body fat (P = 0.02), but not waist circumference was positively associated with total estradiol in NHB men; no association was present in NHW and MA men (P-interaction = 0.04). Waist circumference, but not body fat was strongly positively associated with fasting insulin (all P < 0.0001) and inversely associated with HDL cholesterol (all P ≤ 0.003) in all three racial/ethnic groups. Both percent body fat and waist circumference were positively associated with leptin (all P < 0.0001) in all three racial/ethnic groups.
Conclusions: There was no strong evidence of racial/ethnic differences in the associations of sex hormones and metabolic factors with body fatness. These findings should be further explored in prospective studies to determine their relevance to racial/ethnic disparities in chronic diseases.
abstract_id: PUBMED:29058253
Why We Still Need To Speak About Sex Differences and Sex Hormones in Pain. In the world of pain, we must always consider the presence of gender. In nociception, as well as in pain, women are different from men in many, if not all, aspects of the system. Nociception is the sum of several events that occur from the periphery to the CNS, and there is much evidence that female nociception differs from male nociception. Moreover, it has to be considered that pain results from a male or a female cortex. Genetic, anatomical, physiological, hormonal, psychological, and social factors have been considered to explain the differences present at both levels. Notwithstanding all the evidence, it is still difficult to observe the application of this knowledge to the treatment of pain. Drugs are still given per kilogram, and clinical studies, albeit including women, often mix data from both sexes. Moreover, reports on these studies often fail to mention the women's age and reproductive status, i.e., sex hormones. Hormone levels vary from hour to hour, from day to day, and, as repeatedly confirmed, are affected by several pain killers commonly used in pain therapy. All the data confirm the urgent need to include sex differences and sex hormones among the key factors that play an important role in pain and pain treatment.
abstract_id: PUBMED:25636047
Association between sex hormones and adiposity: qualitative differences in women and men in the multi-ethnic study of atherosclerosis. Context: Sex hormones may influence adipose tissue deposition, possibly contributing to sex disparities in cardiovascular disease risk.
Objective: We hypothesized that associations of sex hormone levels with visceral and subcutaneous fat differ by sex.
Design, Setting, And Participants: Participants were from the Multi-Ethnic Study of Atherosclerosis with sex hormone levels at baseline and visceral and subcutaneous fat measurements from computed tomography at visit 2 or 3 (n = 1835).
Main Outcome Measures: Multivariable linear regression was used to investigate the relationships between sex hormones and adiposity. Testing for interaction by sex, race/ethnicity, and age was conducted.
Results: In adjusted models, there was a modest significant positive association between estradiol and visceral fat in both sexes (percent difference in visceral fat for 1% difference in hormone [95% confidence interval] in women, 5.44 [1.82, 9.09]; and in men, 8.22 [0.61, 16.18]). Higher bioavailable T was significantly associated with higher visceral and subcutaneous fat in women and with the reverse in men (women, 14.38 [10.23, 18.69]; men, -7.69 [-13.06, -1.00]). Higher dehydroepiandrosterone was associated with higher visceral fat in women (7.57 [1.71, 13.88]), but not in men (-2.47 [-8.88, 4.29]). Higher SHBG was associated with significantly lower levels of adiposity in both sexes (women, -24.42 [-28.11, -20.55]; men, -27.39 [-32.97, -21.34]). There was no significant interaction by race/ethnicity or age.
Conclusion: Sex hormones are significantly associated with adiposity, and the associations of androgens differ qualitatively by sex. This heterogeneity may help explain the complexity of the contribution of sex hormones to sex differences in cardiovascular disease.
abstract_id: PUBMED:24117402
Race and sex differences in small-molecule metabolites and metabolic hormones in overweight and obese adults. In overweight/obese individuals, cardiometabolic risk factors differ by race and sex categories. Small-molecule metabolites and metabolic hormone levels might also differ across these categories and contribute to risk factor heterogeneity. To explore this possibility, we performed a cross-sectional analysis of fasting plasma levels of 69 small-molecule metabolites and 13 metabolic hormones in 500 overweight/obese adults who participated in the Weight Loss Maintenance trial. Principal-components analysis (PCA) was used for reduction of metabolite data. Race and sex-stratified comparisons of metabolite factors and metabolic hormones were performed. African Americans represented 37.4% of the study participants, and females 63.0%. Of thirteen metabolite factors identified, three differed by race and sex: levels of factor 3 (branched-chain amino acids and related metabolites, p<0.0001), factor 6 (long-chain acylcarnitines, p<0.01), and factor 2 (medium-chain dicarboxylated acylcarnitines, p<0.0001) were higher in males vs. females; factor 6 levels were higher in Caucasians vs. African Americans (p<0.0001). Significant differences were also observed in hormones regulating body weight homeostasis. Among overweight/obese adults, there are significant race and sex differences in small-molecule metabolites and metabolic hormones; these differences may contribute to risk factor heterogeneity across race and sex subgroups and should be considered in future investigations with circulating metabolites and metabolic hormones.
abstract_id: PUBMED:28669217
Risky sexual behaviors among Malay adolescents: a comparison with Chinese adolescents in Singapore. Objective: Malays, the majority of whom are Muslim, form the largest ethnic group in Southeast Asia. This region is experiencing a rising incidence of HIV infections. Because of circumcision and the prohibition of sex outside marriage, being Muslim has been argued to be a protective factor against sexually transmitted infections (STIs) and human immunodeficiency virus (HIV). However, Malay adolescents were found to be more likely to contract chlamydia and gonorrhea than non-Malay adolescents in Singapore.
Design: Using a cross-sectional survey, we examined and compared safer sex knowledge, attitudes and self-efficacy, and sexual behaviors of 248 sexually active Malay adolescents with 384 Chinese adolescents aged 16-19 years in Singapore. Poisson regression, adjusted for socio-demographic characteristics, was used for modeling each dependent variable. Adjusted prevalence ratios (aPR) with 95% confidence intervals (CI) were obtained.
Results: On multivariate analysis, Malay adolescents were more likely to report marginally unfavorable attitude towards condom use (aPR 1.21 CI 1.00-1.48) and significantly lower confidence in using condoms correctly (aPR 1.24 CI 1.05-1.47) than Chinese adolescents. They were also more likely to report significantly younger first sex age (aPR 0.98 CI 0.96-1.00), never use of condoms for vaginal sex (aPR 1.32 CI 1.16-1.49) and anal sex (aPR 1.75 CI 1.11-2.76) and non-use of contraceptives at last sex (aPR 1.30 CI 1.17-1.45) than Chinese respondents. Malay males were less likely to buy sex (aPR 0.56 CI 0.37-0.85), but they reported higher likelihood of inconsistent condom use with female sex workers (aPR 2.24 CI 1.30-3.87).
Conclusion: Malay ethnicity was associated with unfavorable condom use attitude and lower self-efficacy in using condoms, which was consistent with risky sexual behaviors such as non-use of condoms. Future research should use mixed methods to explore and identify cultural influences to these behaviors.
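Illustrative note: adjusted prevalence ratios of the kind reported above are commonly obtained from a modified Poisson regression with robust standard errors. The statsmodels sketch below is a hypothetical illustration only; the variable names, the simulated data, and the HC0 robust-covariance choice are assumptions, not details taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative data frame: one row per adolescent (values are stand-ins).
rng = np.random.default_rng(1)
n = 632
df = pd.DataFrame({
    "never_condom_vaginal": rng.integers(0, 2, n),  # binary outcome
    "malay": rng.integers(0, 2, n),                 # 1 = Malay, 0 = Chinese
    "age": rng.integers(16, 20, n),
    "male": rng.integers(0, 2, n),
})

# Modified Poisson regression (robust SEs) for adjusted prevalence ratios.
model = smf.glm("never_condom_vaginal ~ malay + age + male",
                data=df, family=sm.families.Poisson())
fit = model.fit(cov_type="HC0")      # robust variance for a binary outcome
apr = np.exp(fit.params)             # adjusted prevalence ratios
ci = np.exp(fit.conf_int())          # 95% confidence intervals
print(pd.concat([apr.rename("aPR"), ci], axis=1))
```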
abstract_id: PUBMED:31338015
Beyond Biological Sex: Interactive Effects of Gender Role and Sex Hormones on Spatial Abilities. Sex differences in spatial abilities are well documented, even though their underlying causes are poorly understood. Some studies assume a biological basis of these differences and study the relationship of sex hormone levels to spatial abilities. Other studies assume social influences and study the relationship of gender role (masculinity/femininity) to spatial abilities. Contemporary theories postulate a psychobiosocial model of sex differences in spatial abilities, in which both biological (e.g., hormonal) and psychosocial (e.g., gender role) variables interactively modulate spatial abilities. However, few studies have addressed both aspects simultaneously. Accordingly, the present study explores potential interactive effects between gender role and sex hormones on spatial performance. 41 men and 41 women completed a mental rotation and a virtual navigation task. Sex hormone levels and gender role were assessed in all participants. Sex differences favoring men were observed in both tasks. We found that neither sex hormones nor gender role alone emerged as mediators of these sex differences. However, several interactive effects between gender role and sex hormones were identified. Combined effects of masculinity and testosterone were observed for those variables that displayed sex differences. Participants with both high masculinity and high testosterone showed the best performance. However, this association was further modulated by biological sex and progesterone levels. Furthermore, we observed an interactive effect of femininity, estradiol and testosterone on response times in both tasks. Consistent across both tasks and irrespective of biological sex, testosterone was related to response times in participants with low estradiol levels, depending on their femininity. In participants with low femininity, testosterone was related to slower reaction times, while in participants with higher femininity, testosterone was related to faster reaction times.
Answer: Yes, there are age and race differences in sex hormone levels among Malay and Chinese men in Malaysia. A study found that testosterone levels were normally distributed among the men and significant ethnic differences were observed, although the effect size was small. Testosterone levels in males generally began to decline significantly after age 50. There were also significant ethnic differences in total, free, and non-SHBG bound fraction estradiol levels in the 20-29 and 50-59 age groups. Malay men's estradiol levels decreased with age, while Chinese men's levels increased starting at age 40 (PUBMED:23525310). Another study confirmed that Malaysian men of Malay origin had lower testosterone concentrations compared with Indian and Chinese men. Waist circumference and low HDL were also associated with lower testosterone concentrations (PUBMED:35107834). These differences might contribute to ethnic group differences in diseases related to sex hormones, as observed in Malaysia (PUBMED:23525310). |
Instruction: Impact of inverse ratios on patients with exsanguinating vascular injuries: should more be the new paradigm?
Abstracts:
abstract_id: PUBMED:23354231
Impact of inverse ratios on patients with exsanguinating vascular injuries: should more be the new paradigm? Background: Resuscitation strategies in patients with severe hemorrhage have evolved throughout the years. Optimal resuscitation ratios for civilian exsanguinating vascular injuries have not been determined. We hypothesize improved outcomes in patients with exsanguinating vascular injuries when aggressive hemostatic resuscitation is used with an inverse ratio of fresh frozen plasma (FFP) to packed red blood cells (PRBC).
Methods: This is a 5-year retrospective analysis of vascular injuries requiring hemostatic resuscitation. Resuscitation groups by FFP/PRBC ratio were inverse (>1:1), high (1:1 to 1:2), and low (<1:2). Patients who received 10 or more units of PRBC (massively transfused patients) were evaluated in each of the resuscitation groups. Demographics and complications throughout the hospital length of stay were compared between the resuscitation groups. Kaplan-Meier survivability curves were generated at 6 hours and 5 days.
Results: A total of 258 patients with vascular injuries required component therapy resuscitation (low, n = 78; high, n = 156; inverse, n = 24). Massively transfused patients (n = 162, 62.7%) showed a significant Kaplan-Meier survivability difference at 6 hours (low, 65.0% vs. high, 75.0% vs. inverse, 100%, p = 0.024) and at 5 days (low, 52.5% vs. high, 62.0% vs. inverse, 100%, p = 0.008). Moreover, for massively transfused patients with extremity vascular injuries (n = 65, 39%), a relationship between resuscitation ratio and amputations was significant (low vs. high vs. inverse was 36.8% vs. 12.8% vs. 0%, respectively; p = 0.033).
Conclusion: This is the first study that highlights the potential outcomes benefits of an inverse ratio of FFP-PRBC in patients with exsanguinating vascular injuries. Multi-institutional prospective analysis is needed to potentially elucidate the cytoprotective effect of FFP to validate these results.
Level Of Evidence: Therapeutic study, level IV; diagnostic study, level III.
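Illustrative note: the Kaplan-Meier comparison of the three resuscitation groups described in the Results could be sketched as follows with the lifelines package. The toy durations, event flags, and group labels below are invented for illustration and do not reproduce the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Illustrative records: hours survived (censored at 120 h = 5 days), death flag,
# and FFP:PRBC resuscitation group (low <1:2, high 1:1 to 1:2, inverse >1:1).
df = pd.DataFrame({
    "hours": [4, 120, 36, 120, 2, 120, 120, 88, 120, 120, 6, 120],
    "died":  [1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
    "group": ["low", "low", "low", "low", "high", "high", "high", "high",
              "inverse", "inverse", "inverse", "inverse"],
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["hours"], event_observed=sub["died"], label=name)
    print(name, "5-day survival:", kmf.survival_function_.iloc[-1, 0])

# Log-rank test across the three resuscitation groups.
result = multivariate_logrank_test(df["hours"], df["group"], df["died"])
print("log-rank p-value:", result.p_value)
```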
abstract_id: PUBMED:36241045
Association of the severity of vascular damage with discordance between the fractional flow reserve and non-hyperemic pressure ratios. Background: While there is a discordance between fractional flow reserve (FFR) and non-hyperemic pressure ratios (NHPRs) in some cases, the mechanisms underlying these discordances have not yet been fully clarified. We examined whether vascular damage as assessed by measurement of the brachial-ankle pulse wave velocity (baPWV), a marker of arterial stiffness, or ankle brachial pressure index (ABI), a marker of atherosclerotic arterial stenosis, might be associated with their discordances.
Methods: FFR and NHPRs were measured in 283 consecutive patients (69 ± 10 years old). Based on previously established cut-off values of the two markers (FFR: + if ≤0.80, - if >0.80; NHPRs: + if ≤0.89, - if >0.89), the study participants were divided into four groups (the + and - signs denoting "predictive of significant stenosis" and "not predictive of significant stenosis," respectively): the FFR+/NHPRs+ group (n = 124), FFR-/NHPRs+ group (n = 16), FFR+/NHPRs- group (n = 65), and FFR-/NHPRs- group (n = 78). The baPWV and ABI were also measured in all participants, and baPWV <2000 cm/s and ABI ≥1.00 were considered to represent relatively less advanced atherosclerotic systemic vascular damage.
Results: The prevalence of subjects with ABI ≥1.00 was higher in the FFR+/NHPRs- group than in the FFR-/NHPRs- group (p < 0.05). When the study subjects were divided into 2 groups, namely, the FFR+/NHPRs- group and the combined group, the prevalence of ABI ≥1.00 and that of baPWV <2000 cm/s were higher in the FFR+/NHPRs- group as compared with those in the combined group (p < 0.05). The results of binary logistic regression analysis demonstrated that ABI ≥1.00 was associated with a significant odds ratio (2.34, p < 0.05) for the FFR+/NHPRs- discordance.
Conclusion: The FFR+/NHPRs- discordance appears to be observed in patients with relatively less advanced atherosclerotic systemic vascular damage. Thus, ABI ≥1.00 may be a marker of the presence of the FFR+/NHPRs- discordance.
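Illustrative note: the four concordance/discordance groups follow mechanically from the two cut-off values quoted in the Methods. A minimal classification sketch is shown below; it is not the study's analysis code, and boundary handling at exactly 0.80 and 0.89 follows the "≤" convention stated above.

```python
def classify_discordance(ffr: float, nhpr: float) -> str:
    """Classify a lesion using the cut-offs quoted in the Methods:
    FFR 'positive' if <= 0.80, NHPR 'positive' if <= 0.89."""
    ffr_pos = ffr <= 0.80
    nhpr_pos = nhpr <= 0.89
    if ffr_pos and nhpr_pos:
        return "FFR+/NHPRs+"
    if ffr_pos and not nhpr_pos:
        return "FFR+/NHPRs-"   # the discordant group highlighted in the abstract
    if not ffr_pos and nhpr_pos:
        return "FFR-/NHPRs+"
    return "FFR-/NHPRs-"

# Example: a lesion with FFR 0.78 but NHPR 0.91 is the FFR+/NHPRs- discordance.
print(classify_discordance(0.78, 0.91))
```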
abstract_id: PUBMED:25013345
Unilateral atlanto-axial fractures in near side impact collisions: An under recognized entity in cervical trauma. Objective: Nearside impact collisions presenting with lateral mass fractures of atlanto-axial vertebrae contralateral to the impact site represents a rare fracture pattern that does not correlate with previously described injury mechanism. We describe our clinical experience with such fractures and propose a novel description of biomechanical forces involved in this unique injury pattern. The findings serve to alert clinicians to potentially serious consequences of associated unrecognized and untreated vertebral artery injury.
Material And Methods: In addition to describing our clinical experience with three of these fractures, a review of Crash Injury Research and Engineering Network (CIREN) database was conducted to further characterize such fractures. A descriptive analysis of three recent lateral mass fractures of the atlanto-axial segment is coupled with a review of the CIREN database. A total of 4047 collisions were screened for unilateral fractures of atlas or axis. Information was screened for side of impact and data regarding impact velocity, occupant injuries and use of restraints.
Results: Following screening of unilateral fractures of atlas and axis for direct side impacts, 41 fractures were identified. Cross referencing these cases for occurrence contralateral to side of impact identified four such fractures. Including our recent clinical experience, seven injuries were identified: Five C1 and two C2 fractures. Velocity ranged from 14 to 43 km/h. Two associated vertebral artery injuries were identified.
Conclusions: The complexity of the atlanto-axial complex is responsible for a sequence of events that defines load application in side impacts. This study demonstrates the vulnerability of the vertebral artery to injury under unique translational forces and supports the use of routine screening for vascular injury. The diminished sensitivity of plain radiography in identifying these injuries suggests that computerized tomography should be used in all patients in whom a similar pattern of injury is suspected.
abstract_id: PUBMED:19659617
Atrial fibrillation ablation in patients with therapeutic international normalized ratios. Aims: Pulmonary vein antrum isolation (PVAI) plays a pivotal role in the comprehensive treatment of atrial fibrillation (AF). The need for effective anticoagulation bridging following PVAI is associated with significant vascular complication rates and increased costs. We investigated the safety of PVAI in patients with therapeutic international normalized ratios (INR) the day of the procedure.
Methods: A case-control analysis was performed on patients who underwent PVAI with therapeutic INR (>2). Patients with normal preprocedure INR served as controls. The incidence of major and minor hematomas, fistulas, vascular injury, and cardiac perforation or tamponade were catalogued. PVAI was performed under fluoroscopic, electro-anatomical, and intracardiac echocardiographic guidance, with an open irrigation ablation technique.
Results: A total of 194 patients (mean age 64 +/- 12) were included; 87 patients underwent PVAI with therapeutic INR (cases) and 107 with normal INR (controls). Persistent AF was more prevalent than paroxysmal AF in the therapeutic INR group. The mean INR for cases was 2.8 +/- 0.7 compared to 1.4 +/- 0.3 in the control group (P < 0.01). All procedures were completed without acute complications. Two major adverse events were observed, one in each arm. No significant difference in terms of minor (6.5% vs. 5.7%, P = 0.23) or major (0.93% vs. 1.15%, P = 0.49) vascular events or bleeding was detected between the therapeutic INR and the control group. The combined endpoint of major and minor complications did not differ among groups (9.35% vs. 8.05%, P = 0.19).
Conclusion: Atrial fibrillation ablation in patients with therapeutic INR on the day of a procedure appears to be safe and feasible. Expensive outpatient anti-coagulation bridging may be safely avoided in this type of population.
abstract_id: PUBMED:23842084
Impact of hypertension on vascular remodeling in patients with psoriatic arthritis. We studied the impact of hypertension along with traditional and new cardiovascular risk factors on the structural and functional properties of arteries in psoriatic arthritis (PsA) patients. We examined 42 PsA subjects (aged 51±9 years) stratified according to hypertensive status (19 normotensive, PsA-NT and 23 hypertensives, PsA-HT). Thirty-eight normotensive subjects (C-NT) and 23 hypertensives (C-HT) comparable by age and sex served as controls. Mean carotid intima-media thickness (mean-IMT) and mean of the maximum IMT (M-Max) were evaluated by ultrasound in carotid artery segment bilaterally. Post-occlusion flow-mediated dilation (FMD) of the brachial artery was evaluated by ultrasonography. These parameters were correlated with risk factors, markers of inflammation and disease activity. Values of mean-IMT were higher in both groups of PsA patients compared with C-NT (0.68 mm in PsA-NT and 0.75 mm in PsA-HT versus 0.61 mm in C-NT). PsA-HT displayed higher M-Max (0.95 mm) versus both C-HT (0.71 mm) and PsA-NT (0.79 mm). FMD was impaired in PsA subjects compared with C-NT (5.7% in PsA-NT and 6.0% PsA-HT versus 9.3% in C-NT), whereas there was no difference among PsA-HT, PsA-NT, and C-HT groups. Values of carotid IMT were directly related to tumor necrosis factor (TNF)-α, osteoprotegerin (OPG), blood pressure and lipid profile levels. FMD showed an inverse relationship with TNF-α and blood pressure, but no correlation with lipids. In conclusion, PsA per se implies a pro-atherogenic remodeling, which is enhanced by the hypertensive status. TNF-α and OPG may have an independent role in the development of such vascular damage.
abstract_id: PUBMED:35389063
Diagnostic value of chest radiography in the early management of severely injured patients with mediastinal vascular injury. Introduction: Time is of the essence in the management of severely injured patients. This is especially true in patients with mediastinal vascular injury (MVI). This rare, yet life threatening injury needs early detection and immediate decision making. According to the ATLS guidelines [American College of Surgeon Committee on Trauma in Advanced Trauma Life Support (ATLS®), 10th edn, 2018], chest radiography (CXR) is one of the first-line imaging examinations in the Trauma Resuscitation Unit (TRU), especially in patients with MVI. Yet thorough interpretation and the competence of identifying pathological findings are essential for accurate diagnosis and drawing appropriate conclusion for further management. The present study evaluates the role of CXR in detecting MVI in the early management of severely injured patients.
Method: We addressed the question in two ways. (1) We performed a retrospective, observational, single-center study and included all primary blunt trauma patients over a period of 2 years that had been admitted to the TRU of a Level-I Trauma Center. Mediastinal/chest (M/C) ratio measurements were calculated from CXRs at three different levels of the mediastinum to identify MVI. Two groups were built: with MVI (VThx) and without MVI (control). The accuracy of the CXR findings were compared with the results of whole-body computed tomography scans (WBCT). (2) We performed another retrospective study and evaluated the usage of sonography, CXR and WBCT over 15 years (2005-2019) in level-I-III Trauma Centers in Germany as documented in the TraumaRegister DGU® (TR-DGU).
Results: Study I showed that in 2 years 267 patients suffered from a significant blunt thoracic trauma (AIS ≥ 3) and met the inclusion criteria. 27 (10%) of them suffered MVI (VThx). Through the initial CXR in a supine position, MVI was detected in 56-92.6% at aortic arch level and in 44.4-100% at valve level, depending on different M/C-ratios (2.0-3.0). The specificity at different thresholds of M/C ratio was 63.3-2.9% at aortic arch level and 52.9-0.4% at valve level. The ROC curve showed a statistically random process. No significant differences of the cardiac silhouette were observed between VThx and Control (mean cardiac width was 136.5 mm, p = 0.44). Study II included 251,095 patients from the TR-DGU. A continuous reduction of the usage of CXR in the TRU could be observed from 75% in 2005 to 25% in 2019. WBCT usage increased from 35% in 2005 to 80% in 2019. This development was observed in all trauma centers independently from their designated level of care.
Conclusion: According to the TRU management guidelines (American College of Surgeons Committee on Trauma in Advanced Trauma Life Support (ATLS®), 10th edn, 2018; Reissig and Kroegel in Eur J Radiol 53:463-470, 2005), CXR in the supine position is performed to detect pneumothorax, hemothorax, and MVI. Our study showed that the sensitivity and specificity of CXR in detecting MVI were neither statistically nor clinically reliable. Previous studies have already shown that CXR is inferior to sonography in detecting pneumothorax and hemothorax. Therefore, we challenge the guidelines and suggest that the use of CXR in the early management of severely injured patients should be individualized. If sonography and WBCT are available and reasonable, CXR is unnecessary and time consuming. The clinical reality reflected in the usage of CXR and WBCT over time, as documented in the TR-DGU, seems to support our statement.
abstract_id: PUBMED:16942931
Impact of oxidative stress on arterial elasticity in patients with atherosclerosis. Background: Alterations in the elastic behavior of arteries is an early sign of vascular damage in atherogenesis and may be promoted by oxidative stress (OxS). However, studies designed for simultaneous assessment of arterial elasticity and OxS status in patients with peripheral arterial disease (PAD) are absent. The purpose of this study was to assess large (C1) and small artery elasticity (C2) and indices of OxS in patients with PAD as well as to investigate possible relationships between these parameters.
Methods: Arterial elasticity was assessed noninvasively by pulse wave analysis (PWA) and biochemical measurements were taken from 38 patients with PAD and from 28 matched control subjects. The elasticity indices of the arteries were derived from PWA based on the modified Windkessel model and the OxS status was measured using urinary 8-iso-prostaglandin F2alpha (F2-IsoPs) and plasma baseline diene conjugates of low-density lipoproteins (LDL-BDC).
Results: Patients with PAD showed significantly reduced C1 and C2 and increased values of F2-IsoPs and LDL-BDC. There was an inverse association between C1 and F2-IsoPs, as well as between C2 and F2-IsoPs (R=-.3, P=.04; R=-.49, P=.002, respectively) in the patient group, but not in the controls. After controlling for potential confounders in a multiple regression model, the associations between C2 and F2-IsoPs remained significant in the patient group (P<.001).
Conclusions: The possible link between arterial elasticity and F2-IsoPs in patients with PAD suggests that oxidative modifications may be involved in alterations of arterial elastic properties in atherosclerosis.
abstract_id: PUBMED:34797785
Circulatory Trauma: A Paradigm for Understanding the Role of Endovascular Therapy in Hemorrhage Control. Abstract: The pathophysiology of traumatic hemorrhage is a phenomenon of vascular disruption and the symptom of bleeding represents one or more vascular injuries. In the Circulatory Trauma paradigm traumatic hemorrhage is viewed as injury to the circulatory system and suggests the underlying basis for endovascular hemorrhage control techniques. The question "Where is the patient bleeding?" is replaced by "Which blood vessels are disrupted?" and stopping bleeding becomes a matter of selective vessel access and vascular flow control. Control of traumatic hemorrhage has traditionally been performed via external access to the end organ that is bleeding followed by the application of direct pressure, packing, or clamping and repair of directly affected blood vessels. In the circulatory trauma paradigm, bleeding is seen as disruption to vessels which may be accessed internally, from within the vascular system. A variety of endovascular treatments such as balloon occlusion, embolization, or stent grafting can be used to control hemorrhage throughout the body. This narrative review presents a brief overview of the current role of endovascular therapy in the management of circulatory trauma. The authors draw on their personal experience combined with the last decade of published experiences with the use of endovascular techniques in trauma and present general recommendations for their evolving use. The focus of the review is on the use of endovascular techniques as specific vascular treatments using the circulatory trauma paradigm.
abstract_id: PUBMED:19851079
Soluble e-selectin is an inverse and independent predictor of left ventricular wall thickness in end-stage renal disease patients. Background: E-selectin is a specific endothelial cell product involved in leukocyte recruitment on the endothelium, which is an important early step in the reparative process following vascular damage. In end-stage renal disease (ESRD), the relationship of E-selectin with left ventricular function has been so far neglected.
Methods: We studied 237 patients on chronic dialysis (200 on hemodialysis, 37 on continuous ambulatory peritoneal dialysis) for at least 6 months, without clinical evidence of heart failure. On a mid-week non-dialysis day, fasting blood sampling and echocardiography were performed.
Results: Left ventricular mass index (LVMI, corrected for height) was inversely related to E-selectin levels, increasing from 56.8 +/- 18.9 (>75th percentile E-selectin tertile) to 66.7 +/- 20.1 g/m(2.7) (<50th percentile E-selectin tertile) (p = 0.002). However, in multiple regression models, including traditional (age, sex, smoking, diabetes, systolic blood pressure, hemoglobin, albumin, previous cardiovascular events) and emerging (asymmetric dimethylarginine, interleukin-6) risk factors associated with ESRD, soluble E-selectin has proved to be a significant inverse and independent predictor of mean wall thickness, but not of LVMI.
Conclusion: This study demonstrates that soluble E-selectin is inversely associated with the muscular component of the left ventricle, thereby suggesting that the lack of such a reparative factor may be associated with cardiac remodeling in ESRD patients.
abstract_id: PUBMED:29196693
Relationship between erectile dysfunction and the neutrophil to lymphocyte and platelet to lymphocyte ratios. The most important cause of erectile dysfunction (ED) among aging men is organic disease due to vascular disturbance that is often caused by atherosclerosis. Recently, studies have shown that atherosclerosis can manifest as an active inflammatory process rather than as passive vascular injury caused by lipid infiltration. Our study aimed to examine the association of ED with the neutrophil/lymphocyte ratio (NLR) and the platelet/lymphocyte ratio (PLR), both of which are markers of inflammation. Between December 2014 and May 2015, 101 male patients aged 40-70 years who were seen at our institute due to ED were included in this study. Thirty-one sexually active men with similar clinical and demographic characteristics without ED were included in our study as a control group. The control and patient groups were compared with respect to their NLR and PLR values as well as other hormonal, biochemical, hematological parameters. The median ages of the patient and control groups were 49 (40-69) and 48 (43-65) years old, respectively. Comorbidities such as hypertension, diabetes, chronic obstructive pulmonary disease (COPD), and coronary artery disease were not significantly different between the groups (p > 0.05). The neutrophil-to-lymphocyte and platelet-to-lymphocyte ratios were significantly higher in the patient group than in the control group (p < 0.05). Furthermore, the detected CRP levels were also significantly higher in the patient group than in the control group (p < 0.001). In the correlation analysis, the NLR, PLR, and CRP levels were negatively correlated with the IIEF-5 scores. A multivariate analysis was performed to determine the independent predictors of ED. PLR was identified as an independent predictor for ED. The neutrophil-to-lymphocyte and especially platelet-to-lymphocyte ratios are correlated with a diagnosis of ED, and these ratios could serve as practical parameters that will not elicit additional costs.
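Illustrative note: the two inflammatory indices referred to above are simple ratios of absolute cell counts. A minimal sketch follows; the units and example values are illustrative assumptions only.

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio from an absolute differential count."""
    return neutrophils / lymphocytes

def plr(platelets: float, lymphocytes: float) -> float:
    """Platelet-to-lymphocyte ratio from an absolute differential count."""
    return platelets / lymphocytes

# Example: counts in 10^3 cells/uL (illustrative values only).
print(nlr(4.2, 1.8), plr(260, 1.8))
```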
Answer: The study presented in abstract PUBMED:23354231 suggests that an inverse ratio of fresh frozen plasma (FFP) to packed red blood cells (PRBC) may lead to improved outcomes in patients with exsanguinating vascular injuries. This retrospective analysis of vascular injuries requiring hemostatic resuscitation found that massively transfused patients (those receiving 10 or more units of PRBC) had significantly better survivability at 6 hours and 5 days when treated with an inverse ratio of FFP to PRBC compared to those treated with lower ratios. Additionally, the study observed a significant relationship between resuscitation ratio and the rate of amputations in patients with extremity vascular injuries, with no amputations occurring in the group receiving an inverse ratio. These findings suggest that an inverse ratio of FFP to PRBC could potentially offer outcomes benefits for patients with exsanguinating vascular injuries, although the authors call for multi-institutional prospective analysis to validate these results and potentially elucidate the cytoprotective effect of FFP. Therefore, based on this study, adopting a more aggressive hemostatic resuscitation strategy with an inverse ratio of FFP to PRBC could be considered a new paradigm for the treatment of severe hemorrhage due to vascular injuries. |
Instruction: Are characteristics of the medical home associated with diabetes care costs?
Abstracts:
abstract_id: PUBMED:22710277
Are characteristics of the medical home associated with diabetes care costs? Objective: To examine the relationship between primary care medical home clinical practice systems corresponding to the domains of the Chronic Care Model and annual diabetes-related health care costs incurred by members of a health plan with type-2 diabetes and receiving care at one of 27 Minnesota-based medical groups.
Study Design: Cross-sectional analysis of the relation between patient-level costs and Patient-Centered Medical Home (PCMH) practice systems as measured by the Physician Practice Connections Readiness Survey.
Methods: Multivariate regressions adjusting for patient demographics, health status, and comorbidities estimated the relationship between the use of PCMH clinical practice systems and 3 annual cost outcomes: total costs of diabetes-related care, outpatient medical costs of diabetes-related care, and inpatient costs of diabetes-related care (ie, inpatient and emergency care).
Results: Overall PCMH scores were not significantly related to any annual cost outcome; however, 2 of 5 subdomains were related. Health Care Organization scores were related to significantly lower total (P=0.04) and inpatient costs (P=0.03). Clinical Decision Support was marginally related to a lower total cost (P=0.06) and significantly related to lower inpatient costs (P=0.02). A detailed analysis of the Health Care Organization domain showed that compared with medical groups with only quality improvement, those with performance measurement and individual provider feedback averaged $245/patient less. Medical groups with clinical reminders for counseling averaged $338/patient less.
Conclusions: Certain PCMH practice systems were related to lower costs, but these effects are small compared with total costs. Further research about how these and other PCMH domains affect costs over time is needed.
abstract_id: PUBMED:24570245
Association between participation in a multipayer medical home intervention and changes in quality, utilization, and costs of care. Importance: Interventions to transform primary care practices into medical homes are increasingly common, but their effectiveness in improving quality and containing costs is unclear.
Objective: To measure associations between participation in the Southeastern Pennsylvania Chronic Care Initiative, one of the earliest and largest multipayer medical home pilots conducted in the United States, and changes in the quality, utilization, and costs of care.
Design, Setting, And Participants: Thirty-two volunteering primary care practices participated in the pilot (conducted from June 1, 2008, to May 31, 2011). We surveyed pilot practices to compare their structural capabilities at the pilot's beginning and end. Using claims data from 4 participating health plans, we compared changes (in each year, relative to before the intervention) in the quality, utilization, and costs of care delivered to 64,243 patients who were attributed to pilot practices and 55,959 patients attributed to 29 comparison practices (selected for size, specialty, and location similar to pilot practices) using a difference-in-differences design.
Exposures: Pilot practices received disease registries and technical assistance and could earn bonus payments for achieving patient-centered medical home recognition by the National Committee for Quality Assurance (NCQA).
Main Outcomes And Measures: Practice structural capabilities; performance on 11 quality measures for diabetes, asthma, and preventive care; utilization of hospital, emergency department, and ambulatory care; standardized costs of care.
Results: Pilot practices successfully achieved NCQA recognition and adopted new structural capabilities such as registries to identify patients overdue for chronic disease services. Pilot participation was associated with statistically significantly greater performance improvement, relative to comparison practices, on 1 of 11 investigated quality measures: nephropathy screening in diabetes (adjusted performance of 82.7% vs 71.7% by year 3, P < .001). Pilot participation was not associated with statistically significant changes in utilization or costs of care. Pilot practices accumulated average bonuses of $92,000 per primary care physician during the 3-year intervention.
Conclusions And Relevance: A multipayer medical home pilot, in which participating practices adopted new structural capabilities and received NCQA certification, was associated with limited improvements in quality and was not associated with reductions in utilization of hospital, emergency department, or ambulatory care services or total costs over 3 years. These findings suggest that medical home interventions may need further refinement.
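Illustrative note: the difference-in-differences design used in this pilot evaluation can be illustrated with a small regression sketch in which the interaction term carries the estimated intervention effect. The variable names and the simulated outcome below are hypothetical, and the actual study adjusted for far more than is shown here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative patient-year panel (values are stand-ins, not the pilot's data).
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "pilot": rng.integers(0, 2, n),   # 1 = attributed to a pilot practice
    "post":  rng.integers(0, 2, n),   # 1 = year after the intervention began
    "age":   rng.integers(18, 90, n),
})
df["quality"] = (0.70 + 0.02 * df["pilot"] + 0.03 * df["post"]
                 + 0.05 * df["pilot"] * df["post"]
                 + rng.normal(0, 0.1, n))  # simulated screening-rate outcome

# Difference-in-differences: the pilot:post coefficient is the intervention effect.
did = smf.ols("quality ~ pilot * post + age", data=df).fit(cov_type="HC1")
print(did.params["pilot:post"], did.pvalues["pilot:post"])
```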
abstract_id: PUBMED:16608587
Paediatric home care: a systematic review of randomized trials on costs and effectiveness. Objective: To review systematically randomized trials (RCTs) on the effectiveness and costs of paediatric home care.
Methods: National Health Service (NHS) Centre for Reviews and Dissemination guidelines were followed. In all, 20 electronic and other sources were searched, using specially designed strategies. Economic studies and other selected designs were included, but only RCT findings--on service use, clinical outcomes, costs, and impact on the family--are reported here. Analysis is descriptive, with pooled standard mean differences used where meta-analysis was possible.
Results: About 1730 identified records up to August 2001 were potentially relevant. In all, 10 RCTs (24 papers) were finally included, covering five types of paediatric home care--for very low birth weight or medically 'fragile' infants, for asthma or diabetes, for technology-dependent children, for mental health, and generic home care. Paediatric home care may enhance physical and mental development for very low birth weight infants and may be cheaper but the evidence is not strong. Home care for diabetes or asthma may reduce parents' costs with some clinical but no social differences noticeable. No randomized trials for technologically dependent children were found. Home care for mental health may increase parental satisfaction with services and reduce some health service and residential care costs. Generic home care showed no clinical effects at early follow-up. Partial follow-up after five years suggested improved psychological adjustment. No cost data were available for this care model.
Conclusions: Despite recent expansion, research evidence from randomized trials for paediatric home care is slight, and methods used are weak in places. Paediatric home care poses practical and ethical questions that cannot be addressed by RCTs.
abstract_id: PUBMED:27659297
Medical Home Characteristics and Quality of Diabetes Care in Safety Net Clinics. We examined associations between patient-centered medical home (PCMH) characteristics and quality of diabetes care in 15 safety net clinics in five states. Surveys among clinic directors assessed PCMH characteristics using the Safety Net Medical Home Scale. Chart audits among 864 patients assessed diabetes process and outcome measures. We modeled the odds of the patient receiving performance measures as a function of total PCMH score and of PCMH subscales and covariates. PCMH characteristics had mixed, inconsistent associations with the quality of diabetes care. The PCMH model may require refinement in design and implementation to improve diabetes care among vulnerable populations.
abstract_id: PUBMED:21211599
Patient characteristics and medical care costs associated with hypertriglyceridemia. Hypertriglyceridemia is a lipid abnormality prevalent in 1/3 of the United States adult population. Our objective was to describe the independent contribution of hypertriglyceridemia to medical care costs. Using an observational cohort of 108,324 members of Kaiser Permanente Northwest, we analyzed the electronic medical records of those patients aged ≥18 years who had triglyceride (TG) measurements in 2008 and had been members of Kaiser Permanente Northwest for the entire year. After assigning patients to TG categories of <150, 150 to 199, 200 to 499, and ≥500 mg/dl, we compared the annual direct medical costs. To isolate the independent contribution of the TG levels, we adjusted the costs for age, gender, body mass index, blood pressure, smoking history, low-density lipoprotein and high-density lipoprotein cholesterol, and health conditions such as cardiovascular disease, diabetes, and renal disease. Of the 108,324 study subjects, 64.1% had normal TG levels (<150 mg/dl), 16.4% had borderline high levels (150 to 199 mg/dl), 18.0% had high TG levels (200 to 499 mg/dl), and 1.5% had very high TG levels (≥500 mg/dl). After adjustment, the patients with TG levels ≥500 mg/dl (severe hypertriglyceridemia) had significantly greater mean total costs ($8,567, 99% confidence interval $7,034 to $10,100) than those with levels <150 mg/dl ($6,186, 99% confidence interval $6,058 to $6,314), 150 to 199 mg/dl ($6,449, 99% confidence interval $6,196 to $6,702), or 200 to 499 mg/dl ($6,376, 99% confidence interval $6,118 to $6,634). The differences were driven by both outpatient and pharmaceutical costs. The inpatient costs were also greater for those with TG levels ≥500 mg/dl, but the difference did not reach statistical significance. In conclusion, severe hypertriglyceridemia was associated with 33% to 38% greater medical costs per annum, independent of resource-intensive conditions such as cardiovascular disease, heart failure, hypertension, and diabetes.
abstract_id: PUBMED:30058505
COSTS OF HOME-BASED TELEMEDICINE PROGRAMS: A SYSTEMATIC REVIEW. Objectives: The aim of this study was to systematically investigate existing literature on the costs of home-based telemedicine programs, and to further summarize how the costs of these telemedicine programs vary by equipment and services provided.
Methods: We undertook a systematic review of related literature by searching electronic bibliographic databases and identifying studies published from January 1, 2000, to November 30, 2017. The search was restricted to studies published in English, results from adult patients, and evaluation of home telemedicine programs implemented in the United States. Summarized telemedicine costs per unit of outcome measures were reported.
Results: Twelve studies were eligible for our review. The overall annual cost of providing home-based telemedicine varied substantially depending on specific chronic conditions, ranging from USD1,352 for heart failure to USD206,718 for congestive heart failure (CHF), chronic obstructive pulmonary disease (COPD), and diabetes as a whole. The estimated cost per-patient-visit ranged from USD24 for cancer to USD39 for CHF, COPD, or chronic wound care.
Conclusions: The costs of home-based telemedicine programs varied substantially by program components, disease type, equipment used, and services provided. All the selected studies indicated that home telemedicine programs reduced care costs, although detailed cost data were either incomplete or not presented in detail. A comprehensive analysis of the cost of home-based telemedicine programs and their determinants is still required before the cost efficiency of these programs can be better understood, which becomes crucial for these programs to be more widely adopted and reimbursed.
abstract_id: PUBMED:15319047
Home telehealth reduces healthcare costs. The aim of this study was to determine whether home telehealth, when integrated with the health facility's electronic medical record system, reduces healthcare costs and improves quality-of-life outcomes relative to usual home healthcare services for elderly high resource users with complex co-morbidities. Study patients were identified through the medical center's database. Intervention patients received home telehealth units that used standard phone lines to communicate with the hospital. FDA-approved peripheral devices monitored vital signs and valid questionnaires were used to evaluate quality-of-life outcomes. Out-of-range data triggered electronic alerts to nurse case managers. (No live video or audio was incorporated in either direction.) Templated progress notes facilitated seamless data entry into the patient's electronic medical record. Participants (n = 104) with complex heart failure, chronic lung disease, and/or diabetes mellitus were randomly assigned to an intervention or control group for 6-12 months. Parametric and nonparametric analyses were performed to compare outcomes for (1) subjective and objective quality-of-life measures, (2) health resource use, and (3) costs. In contrast to the control group, scores for home telehealth subjects showed a statistically significant decrease at 6 months for bed-days-of-care (p < 0.0001), urgent clinic/emergency room visits (p = 0.023), and A1C levels (p < 0.0001); at 12 months for cognitive status (p < 0.028); and at 3 months for patient satisfaction (p < 0.001). Functional levels and patient-rated health status did not show a significant difference for either group. Integrating home telehealth with the healthcare institution's electronic database significantly reduces resource use and improves cognitive status, treatment compliance, and stability of chronic disease for homebound elderly with common complex co-morbidities.
abstract_id: PUBMED:26465125
Impact of the Rochester Medical Home Initiative on Primary Care Practices, Quality, Utilization, and Costs. Background: Patient-centered medical homes (PCMH) may improve the quality of primary care while reducing costs and utilization. Early evidence on the effectiveness of PCMH has been mixed.
Objectives: We analyze the impact of a PCMH intervention in Rochester NY on costs, utilization, and quality of care.
Research Design: A propensity score-matched difference-in-differences analysis of the effect of the PCMH intervention relative to a comparison group of practices. Qualitative interviews with PCMH practice managers on their experiences and challenges with PCMH practice transformation.
Subjects: Seven pilot practices and 61 comparison practices (average of 36,531 and 30,192 attributed member months per practice, respectively). Interviews with practice leaders at all pilot sites.
Measures: Individual HEDIS quality measures of preventive care, diabetes care, and care for coronary artery disease. Utilization measures of hospital use, office visits, imaging and laboratory tests, and prescription drug use. Cost measures are inpatient, prescription drug, and total spending.
Results: After 3 years, PCMH practices reported decreased ambulatory care sensitive emergency room visits and use of imaging tests, and increased primary care visits and laboratory tests. Utilization of prescription drugs increased but drug spending decreased. PCMH practices reported increased rates of breast cancer screening and low-density lipid screening for diabetes patients, and decreased rates of any prevention quality indicator.
Conclusions: The PCMH model leads to significant changes in patient care, with reductions in some services and increases in others. This study joins a growing body of work that finds no effect of PCMH transformation on total health care spending.
abstract_id: PUBMED:971966
A comparative study of the economics of home care. The costs of home care and treatment solely in hospital for patients in a variety of short-term diagnostic categories are compared. Five hundred and eighty-three patients included in an experimental home care program were randomly assigned either to a group which received home care as part of their treatment, or to a control group that remained in hospital the traditional length of time. It is argued that the only costs relevant in an economic comparison of the two modes of treatment are those attributable to the direct care of the patient. A technique is presented whereby changes in the daily amount of nursing service provided can be costed. The economic analysis shows that, when similar diagnoses are compared for an episode of illness, there is basically no difference in cost between home care and treatment in hospital.
abstract_id: PUBMED:23781894
Medical costs associated with type 2 diabetes complications and comorbidities. Objectives: To estimate the direct medical costs associated with type 2 diabetes, its complications, and its comorbidities among U.S. managed care patients.
Study Design: Data were from patient surveys, chart reviews, and health insurance claims for 7109 people with type 2 diabetes from 8 health plans participating in the Translating Research Into Action for Diabetes (TRIAD) study between 1999 and 2002.
Methods: A generalized linear regression model was developed to estimate the association of patients' demographic characteristics, tobacco use status, treatments, related complications, and comorbidities with medical costs.
Results: The mean annualized direct medical cost was $2465 for a white man with type 2 diabetes who had been diagnosed fewer than 15 years earlier, was treated with oral medication or diet alone, and had no complications or comorbidities. We found annualized medical costs to be 10% to 50% higher for women and for patients whose diabetes had been diagnosed 15 or more years earlier, who used tobacco, who were being treated with insulin, or who had several other complications. Coronary heart disease, congestive heart failure, hemiplegia, and amputation were each associated with 70% to 150% higher costs. Costs were approximately 300% higher for end-stage renal disease treated with dialysis and approximately 500% higher for end-stage renal disease with kidney transplantation.
Conclusions: Most medical costs incurred by patients with type 2 diabetes are related to complications and comorbidities. Our cost estimates can help when determining the most cost-effective interventions to prevent complications and comorbidities.
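Illustrative note: cost models of this kind are often fit as generalized linear models with a log link, so that exponentiated coefficients read directly as relative cost differences (for example, exp(beta) = 1.5 corresponds to roughly 50% higher annual cost). The gamma-GLM sketch below is a hypothetical illustration, not the TRIAD study's actual specification; the variable names and simulated values are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative cost data (stand-in values, not TRIAD data).
rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "insulin": rng.integers(0, 2, n),
    "chd": rng.integers(0, 2, n),            # coronary heart disease
    "esrd_dialysis": rng.integers(0, 2, n),
})
linpred = (np.log(2465) + 0.3 * df["female"] + 0.4 * df["insulin"]
           + 0.9 * df["chd"] + 1.4 * df["esrd_dialysis"])
# Gamma draws with mean exp(linpred): shape 2, scale = mean / shape.
df["annual_cost"] = rng.gamma(shape=2.0, scale=np.exp(linpred.to_numpy()) / 2.0)

# Gamma GLM with log link: exponentiated coefficients are cost ratios.
glm = smf.glm("annual_cost ~ female + insulin + chd + esrd_dialysis",
              data=df, family=sm.families.Gamma(link=sm.families.links.Log()))
fit = glm.fit()
print(np.exp(fit.params))
```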
Answer: Yes, characteristics of the medical home are associated with diabetes care costs. A study examining the relationship between primary care medical home clinical practice systems and annual diabetes-related health care costs found that certain Patient-Centered Medical Home (PCMH) practice systems were related to lower costs. Specifically, Health Care Organization scores were related to significantly lower total and inpatient costs. Clinical Decision Support was marginally related to a lower total cost and significantly related to lower inpatient costs. Medical groups with performance measurement and individual provider feedback, as well as those with clinical reminders for counseling, averaged lower costs per patient. However, the overall PCMH scores were not significantly related to any annual cost outcome, indicating that the effects of PCMH practice systems on costs are small compared to total costs (PUBMED:22710277).
Another study on the Southeastern Pennsylvania Chronic Care Initiative, a multipayer medical home pilot, found limited improvements in quality and no significant changes in utilization or costs of care over three years. This suggests that medical home interventions may need further refinement to be effective in improving quality and containing costs (PUBMED:24570245).
In safety net clinics, PCMH characteristics had mixed, inconsistent associations with the quality of diabetes care, indicating that the PCMH model may require refinement in design and implementation to improve diabetes care among vulnerable populations (PUBMED:27659297).
Overall, while certain characteristics of the medical home are associated with diabetes care costs, the evidence suggests that the impact may be modest and that further research and refinement of the PCMH model are needed to achieve more substantial cost savings and quality improvements in diabetes care. |
Instruction: Do echocardiographic parameters predict mortality in patients with end-stage renal disease?
Abstracts:
abstract_id: PUBMED:25552882
Predictive value of echocardiographic parameters for clinical events in patients starting hemodialysis. Echocardiographic parameters can predict cardiovascular events in several clinical settings. However, which echocardiographic parameter is most predictive of each cardiovascular or non-cardiovascular event in patients starting hemodialysis remains unresolved. Echocardiography was performed in 189 patients at the time of starting hemodialysis. We established primary outcomes as follows: cardiovascular events (ischemic heart disease, cerebrovascular disease, peripheral artery disease, and acute heart failure), fatal non-cardiovascular events, all-cause mortality, and all combined events. The most predictive echocardiographic parameter was determined using a Cox proportional hazards model with backward selection after adjustment for multiple covariates. Among several echocardiographic parameters, the E/e' ratio and the left ventricular end-diastolic volume (LVEDV) were the strongest predictors of cardiovascular and non-cardiovascular events, respectively. After adjustment for clinical and biochemical covariates, the predictability of E/e' remained consistent, but that of LVEDV did not. When clinical events were further analyzed, the significant echocardiographic parameters were as follows: s' for ischemic heart disease and peripheral artery disease, LVEDV and E/e' for acute heart failure, and E/e' for all-cause mortality and all combined events. However, no echocardiographic parameter independently predicted cerebrovascular disease or non-cardiovascular events. In conclusion, E/e', s', and LVEDV have independent predictive value for several cardiovascular and mortality events.
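Illustrative note: the "Cox model with backward selection" described above can be sketched with the lifelines package. The crude elimination loop, the simulated covariates, and the 0.05 retention threshold below are illustrative assumptions and not the authors' actual modeling code; with purely random stand-in data, the procedure may eliminate every covariate.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative dataset: follow-up time, event flag, and echocardiographic indices
# (random stand-in values, not the study cohort).
rng = np.random.default_rng(4)
n = 189
df = pd.DataFrame({
    "time": rng.exponential(36, n),      # months of follow-up
    "event": rng.integers(0, 2, n),      # all-cause death / combined event
    "E_over_e": rng.normal(12, 4, n),    # E/e' ratio
    "LVEDV": rng.normal(120, 30, n),
    "s_prime": rng.normal(7, 2, n),
    "age": rng.integers(30, 85, n),
})

def backward_select(data, duration_col, event_col, alpha=0.05):
    """Crude backward elimination: drop the least significant covariate until
    every remaining covariate has p < alpha (returns None if none survive)."""
    covariates = [c for c in data.columns if c not in (duration_col, event_col)]
    while covariates:
        cph = CoxPHFitter().fit(data[[duration_col, event_col] + covariates],
                                duration_col=duration_col, event_col=event_col)
        worst = cph.summary["p"].idxmax()
        if cph.summary.loc[worst, "p"] < alpha:
            return cph
        covariates.remove(worst)
    return None

model = backward_select(df, "time", "event")
if model is not None:
    print(model.summary[["exp(coef)", "p"]])
```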
abstract_id: PUBMED:23542473
Do echocardiographic parameters predict mortality in patients with end-stage renal disease? Background: Left ventricular function predicts cardiovascular mortality both in the general population and those with end-stage renal disease. Echocardiography is commonly undertaken as a screening test before kidney transplantation; however, there are little data on its predictive power.
Methods: This was a retrospective review of patients assessed for renal transplantation from 2000 to 2009. A survival analysis using demographic and echocardiographic variables was undertaken using the Cox proportional hazards regression model.
Results: Of 862 patients assessed for transplantation, 739 had an echocardiogram and 217 of 739 (29%) died during a mean follow-up of 4.2 years. In a multivariate survival analysis, increased age (P<0.0001), diabetes (P<0.0001), transplant listing status (P<0.0001), severely impaired left ventricular function (P<0.01), pulmonary hypertension and/or right ventricular dysfunction (P=0.01), and regional wall motion abnormalities (P<0.01) were associated with all-cause mortality. Combined in a score where one point was given for the presence of each of the parameters above, these factors were strongly predictive of increased mortality with a hazard ratio of 3.57, 6.80, and 44.47 for the presence of one, two, or more factors, respectively, compared with the absence of any of these factors.
Conclusions: In patients with end-stage renal disease, multiple easily determined echocardiographic parameters, including regional wall motion abnormalities and pulmonary hypertension and/or right ventricular dysfunction, were independently associated with all-cause and cardiovascular mortality. Combining these factors in a simple score may further assist in risk stratifying patients being considered for renal transplantation.
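Illustrative note: the additive score described in the Results lends itself to a small worked sketch. How each component (for example, "increased age" or transplant listing status) is dichotomized is not specified in the abstract, so the binary inputs and their names below are illustrative assumptions; only the hazard ratios 3.57, 6.80, and 44.47 are taken from the text.

```python
# Hazard ratios quoted above, relative to patients with none of the factors.
HR_BY_FACTOR_COUNT = {0: 1.0, 1: 3.57, 2: 6.80}   # three or more factors: 44.47

def esrd_risk_score(factors: dict) -> int:
    """One point for each risk factor present (age, diabetes, listing status,
    severe LV dysfunction, pulmonary hypertension/RV dysfunction, RWMA)."""
    return sum(bool(present) for present in factors.values())

def approximate_hazard_ratio(score: int) -> float:
    """Map the score onto the hazard ratios reported in the abstract."""
    return HR_BY_FACTOR_COUNT.get(score, 44.47)

# Hypothetical patient with two of the six factors present (labels are assumed).
patient = {
    "older_age": True,
    "diabetes": False,
    "adverse_listing_status": False,
    "severe_lv_dysfunction": True,
    "pulm_htn_or_rv_dysfunction": False,
    "regional_wall_motion_abnormality": False,
}
score = esrd_risk_score(patient)
print(score, approximate_hazard_ratio(score))   # -> 2 6.8
```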
abstract_id: PUBMED:32998138
Serum Prealbumin and Echocardiography Parameters Predict Mortality in Peritoneal Dialysis Patients. Aim: Protein-energy malnutrition and cardiovascular (CV) disease predisposes patients with end-stage renal disease (ESRD) on dialysis to a high risk of early death, but the prognostic value of prealbumin (PAB) and echocardiographic indices in ESRD patients treated with maintenance peritoneal dialysis (PD) remains unclear.
Methods: A total of 211 PD patients (mean age 49.2 ± 15.4 years, 51.7% male) were prospectively studied. PAB and echocardiography parameters were recorded at baseline. Follow-up (mean ± SD: 33.7 ± 17.3 months) was conducted based on hospital records, clinic visits, and telephone reviews, to record death events and their causes.
Results: In the Cox proportional hazards model, PAB and the echocardiographic parameters listed below were found to be optimal predictors of all-cause mortality: PAB (p = 0.003), aortic root diameter (ARD) (p = 0.004), interventricular septum end-diastolic thickness (IVSd) (p = 0.046), and left ventricular end-diastolic diameter index (LVEDDI) (p = 0.029). Of the above-mentioned factors, PAB (p = 0.018), ARD (p = 0.031), and IVSd (p = 0.037) were independent predictors of CV mortality in PD patients. Of note, malnutrition, degradation of the aorta, and myocardial hypertrophy are also known death risk factors in the general population. The all-cause mortality and CV death rate significantly increased as the number of risk factors increased, reaching values as high as 40 and 22% in patients who had all of the risk factors, i.e., abnormal PAB, ARD, and IVSd (p < 0.001 and p = 0.011).
Conclusion: In PD patients, low serum PAB and abnormal echocardiographic parameters together were significantly associated with all-cause mortality and CV death, independently of other risk factors. These risk factors for death in PD are similar to those in the general population. Notably, the combination of echocardiographic parameters and PAB could provide additional predictive value for mortality in PD patients. In light of these findings, further studies of an optimal model combining PAB and echocardiographic parameters for the prediction of outcomes in ESRD are required.
abstract_id: PUBMED:29996910
Echocardiographic parameters and renal outcomes in patients with preserved renal function, and mild- moderate CKD. Background: Echocardiographic characteristics across the spectrum of chronic kidney disease (CKD) have not been well described. We assessed the echocardiographic characteristics of patients with preserved renal function and mild or moderate CKD referred for echocardiography and determined whether echocardiographic parameters of left ventricular (LV) and right ventricular (RV) structure and function were associated with changes in renal function and mortality.
Methods: This retrospective cohort study enrolled all adult patients who had at least one trans-thoracic echocardiography between 2004 and 2014 in our institution. The composite outcome of doubling of serum creatinine or initiation of maintenance dialysis or kidney transplantation was the primary outcome. Mortality was the secondary outcome.
Results: 29,219 patients were included. Patients with worse renal function had higher prevalence of structural and functional LV and RV abnormalities. Higher estimated glomerular filtration rate (eGFR) was independently associated with preserved LV ejection fraction, preserved RV systolic function, and lower LV mass, left atrial diameter, pulmonary artery pressure, and right atrial pressure, as well as normal RV structure. 1041 composite renal events were observed. 8780 patients died during the follow-up. Pulmonary artery pressure and the RV, but not the LV, echocardiographic parameters were independently associated with the composite renal outcome. In contrast, RV systolic function, RV dilation or hypertrophy, LV ejection fraction group, LV diameter quartile, and pulmonary artery pressure quartile were independently associated with all-cause mortality.
Conclusions: Echocardiographic abnormalities are frequent even in early CKD. Echocardiographic assessment particularly of the RV may provide useful information for the care of patients with CKD.
abstract_id: PUBMED:38038004
Evaluation of Clinical, Echocardiographic, and Therapeutic Characteristics, and Prognostic Outcomes of Coexisting Heart Failure among Patients with Atrial Fibrillation: The Jordan Atrial Fibrillation (JoFib) Study. Background: Atrial fibrillation (AF) is the most commonly encountered cardiac arrhythmia in clinical practice. Heart failure (HF) can occur concurrently with AF.
Aim: We compared different demographic, clinical, and echocardiographic characteristics between patients with AF+HF and patients with AF only. Furthermore, we explored whether concurrent HF independently predicts several outcomes (all-cause mortality, cardiovascular mortality, ischemic stroke/systemic embolism (IS/SE), major bleeding, and clinically relevant non-major bleeding (CRNMB)).
Materials And Methods: Comparisons between the AF+HF and the AF-only group were carried out. Multivariable Cox proportional hazard models were constructed for each outcome to assess whether HF was predictive of any of them while controlling for possible confounding factors.
Results: A total of 2020 patients were included in this study: 481 had AF+HF; 1539 had AF only. AF+HF patients were older, more commonly males, and had a higher prevalence of diabetes mellitus, dyslipidemia, coronary artery disease, and chronic kidney disease (p≤0.05). Furthermore, AF+HF patients more commonly had pulmonary hypertension and low ejection fraction (p≤0.001). Finally, HF was independently predictive of all-cause mortality (adjusted HR 2.17, 95% CI 1.66-2.85) and cardiovascular mortality (adjusted HR 2.37, 95% CI 1.68-3.36).
Conclusion: Coexisting AF+HF was associated with a more labile and higher-risk population among Jordanian patients. Furthermore, coexisting HF independently predicted higher all-cause mortality and cardiovascular mortality. Efforts should be made to efficiently identify such cases early and treat them aggressively.
abstract_id: PUBMED:26788816
Proof of concept study: Improvement of echocardiographic parameters after renal sympathetic denervation in CKD refractory hypertensive patients. Aim: To evaluate the effectiveness of renal sympathetic denervation (RSD) in reducing damage to target organs such as the heart and kidneys in resistant hypertensive CKD patients.
Methods And Results: Forty-five patients were included and treated with an ablation catheter with an open irrigated tip. RSD was performed by a single operator following the standard technique. Included patients had CKD stage 2 (n=22), 3 (n=16), or 4 (n=7). Data were obtained at baseline and monthly until the 6th month of follow-up. Twenty-six of the 45 patients had LVH and nineteen did not. The LV mass index decreased from 123.70±38.44g/m(2) at baseline to 106.50±31.88g/m(2) at the 6th month after RSD, P<0.0001. The end-diastolic left ventricular internal dimension (LVIDd) decreased from 53.02±6.59mm at baseline to 51.11±5.85mm 6 months post procedure, P<0.0001. The left ventricular end-diastolic posterior wall thickness (PWTd) decreased from 10.58±1.39mm at baseline to 9.82±1.15mm at the 6th month of follow-up, P<0.0001. The end-diastolic interventricular septum thickness (IVSTd) also decreased from 10.58±1.39mm at baseline to 9.82±1.15mm 6 months post procedure, P<0.0001. The left ventricular ejection fraction (LVEF) improved from 58.90±10.48% at baseline to 62.24±10.50% at the 6th month of follow-up, P<0.0001. When the change (Δ) between baseline and the 6th month post RSD was compared between LVH and non-LVH patients for these parameters, no significant difference was found.
Conclusions: RSD appeared feasible, effective, and safe, resulting in an improvement of echocardiographic parameters in both LVH and non-LVH CKD refractory hypertensive patients.
abstract_id: PUBMED:35051925
Can Renal Parameters Predict the Mortality of Hospitalized COVID-19 Patients? Introduction: Our study aimed to analyze whether renal parameters can predict mortality from COVID-19 disease in hospitalized patients.
Methods: This retrospective cohort includes all adult patients with confirmed COVID-19 who were consecutively admitted to the tertiary hospital during the 4-month period (September 1 to December 31, 2020). We analyzed their basic laboratory values, urinalysis, comorbidities, length of hospitalization, and survival. The RIFLE and KDIGO criteria were used for AKI and CKD grading, respectively. To display renal function evolution and the severity of renal damage, we subdivided patients further into 6 groups as follows: group 1 (normal renal function), group 2 (CKD grades 2 + 3a), group 3 (AKI-DROP, defined as patients whose s-Cr level dropped by >33.3% during hospitalization), group 4 (CKD 3b), group 5 (CKD 4 + 5), and group 6 (AKI-RISE, defined as patients whose s-Cr level was elevated by ≥50% within 7 days or by ≥26.5 μmol/L within 48 h during hospitalization). We then used eGFR on admission, independently of renal damage, to check whether it can predict mortality. Only 4 groups were used: group I - normal renal function (eGFR > 1.5 mL/s), group II - mild renal involvement (eGFR 0.75-1.5), group III - moderate (eGFR 0.5-0.75), and group IV - severe (eGFR <0.5).
Results: A total of 680 patients were included in our cohort; among them, 244 patients displayed normal renal function, 207 patients fulfilled AKI criteria, and 229 patients suffered from CKD. In total, a significantly higher mortality rate was found in the AKI and CKD groups versus normal renal function - 37.2% and 32.3% versus 9.4%, respectively (p < 0.001). In addition, groups 1-6, divided by severity of renal damage, had mortality rates of 9.4%, 21.2%, 24.1%, 48.7%, 62.8%, and 55.1%, respectively (p < 0.001). The mean hospitalization duration of surviving patients with normal renal findings was 9.5 days, while it was 12.1 days in patients with any renal damage (p < 0.001). When all patients were compared according to eGFR on admission, mortality was as follows: group I (normal) 9.8%, group II (mild) 22.1%, group III (moderate) 40.9%, and group IV (severe) 50.5% (p < 0.001). eGFR on admission was a significantly better mortality predictor than CRP on admission (AUC 0.7053 vs. 0.6053).
Conclusions: Mortality in patients with abnormal renal function was 3 times higher compared to patients with normal renal function. Also, patients with renal damage had a worse and longer hospitalization course. Lastly, eGFR on admission, independently of renal damage type, was an excellent tool for predicting mortality. Further, the change in s-Cr levels during hospitalization reflected the mortality prognosis.
abstract_id: PUBMED:35924178
Evaluation of the Charlson Comorbidity Index and Laboratory Parameters as Independent Early Mortality Predictors in Covid 19 Patients. Purpose: Various parameters have been proposed to predict the outcome of patients with coronavirus disease. The aim of this study was to evaluate the utility of the age-adjusted CCI score and biochemical parameters for predicting outcomes for COVID-19 patients on admission.
Patients And Methods: A total of 511 patients were included in the study. Only patients with positive swab or serological tests were included. The clinical characteristics of the patients were compared between surviving and non-surviving COVID-19 inpatients. Hemoglobin, platelets, sedimentation rate, creatinine, AST, ALT, LDH, CK, albumin, ferritin, lymphocytes, neutrophils, CRP (1-5; 5-10; 10-20 × upper limit), procalcitonin (5-10; 10-20; > 20 × upper limit), D-dimer (> 2 × upper limit), age, gender, chronic diseases, and CCI scores were compared between the two groups.
Results: 68 patients died and 443 patients survived. Mean age was 74.3±7.3 years in the survivor group and 76.7±8.0 years in the non-survivor group. Age, male sex, ischemic heart disease (CHD), chronic kidney disease, and active malignancy were significantly higher in the non-survivor group. The biochemical parameters were compared between the survivor and non-survivor groups. CCI score, AST, LDH, CK, ferritin, and CRP were significantly higher, and albumin and lymphocyte levels were significantly lower, in the non-survivor group. D-dimer and procalcitonin levels were also significantly higher in the non-survivor group. CCI score was correlated with elevations in neutrophils, creatinine, ALT, AST, D-dimer, and procalcitonin. Low albumin and lymphocyte levels were correlated with the CCI score. There was no significant correlation between ferritin, sedimentation rate, or CRP levels and CCI score. A multivariate logistic regression analysis indicated that anaemia, elevated CRP (> 10-20 × upper limit), procalcitonin (> 5-10 × upper limit), ALT, and AST levels and a higher CCI score were independent risk factors for mortality in COVID-19 patients.
Conclusion: Anaemia, elevated CRP, procalcitonin, ALT, and AST levels, and a higher CCI score were found to be independent risk factors for mortality in COVID-19 patients.
abstract_id: PUBMED:22766916
Echocardiographic parameters as cardiovascular event predictors in hemodialysis patients. Background: Patients with chronic kidney disease (CKD) on hemodialysis have high rates of cardiovascular morbidity and mortality. Although structural and functional echocardiographic alterations in patients undergoing hemodialysis have been the subject of several survival analysis studies, the prognostic value of these alterations is not well established in the literature.
Objective: To determine the prognostic value of echocardiographic parameters in patients with CKD on hemodialysis.
Methods: Sixty consecutive patients with CKD on hemodialysis were clinically evaluated and underwent Doppler echocardiography, being followed for 19 ± 6 months. The outcome measures were fatal and nonfatal cardiovascular events and overall mortality. The predictive value of echocardiographic variables was evaluated by Cox regression model and survival curves were constructed using the Kaplan-Meier method and log rank test to compare them.
Results: Two-year rates of survival free of cardiovascular events, survival free of cardiovascular mortality, and overall survival were 79.4%, 88.5%, and 83%, respectively. Diabetes, a previous diagnosis of cardiovascular disease (CVD), ejection fraction, fractional shortening, left ventricular systolic diameter, and the E/e' ratio were predictors of cardiovascular outcome in univariate analysis. In the multivariate analysis, previous history of CVD (HR = 6.17, 95%CI: 1.7 - 22.2, p = 0.005) and moderate to severe diastolic dysfunction (HR = 3.76, 95%CI: 1.05 - 13.4, p = 0.042) were independent risk factors for cardiovascular events.
Conclusion: Moderate to severe diastolic dysfunction is an independent predictor of cardiovascular events in hemodialysis patients.
abstract_id: PUBMED:23751144
Effect of hemoglobin variability on mortality and some cardiovascular parameters in hemodialysis patients. Background And Objectives: Most hemodialysis patients show hemoglobin fluctuations between low-normal and high levels. This hemoglobin variability may cause left ventricular hypertrophy and may also increase mortality. Recently, many studies have been designed to evaluate the effect of hemoglobin variability on mortality, but the results have been conflicting. We aimed to investigate the effect of hemoglobin variability on mortality and some cardiovascular parameters in the hemodialysis population.
Design, Setting, Participants And Measurements: One hundred and seventy-five prevalent hemodialysis patients were classified into three hemoglobin variability groups according to their hemoglobin levels throughout a 24-month observation period: Low-Normal, Low-High, and Normal-High. Groups were compared in terms of laboratory and demographic data, mortality rates, and echocardiographic data at baseline and at the end of the 24 months. Baseline and final echocardiographic data were compared within groups in terms of left ventricular mass index increase.
Results: Mortality rates and cardiovascular risk factors that may affect mortality, such as coronary heart disease, diabetes mellitus, and hypertension, were similar across the three groups. There was no significant difference between the three groups in terms of echocardiographic and laboratory parameters. Only the Low-High group showed a significant increase in left ventricular mass index when baseline and final echocardiographic parameters were compared.
Conclusions: Consistent with previous studies, we found that most patients exhibited hemoglobin variability, and, in line with several earlier studies, we found no relationship between hemoglobin variability and mortality. Based on objective data, however, this study showed for the first time that hemoglobin variability has an adverse effect on left ventricular geometry independent of anemia.
Answer: Yes, echocardiographic parameters have been shown to predict mortality in patients with end-stage renal disease (ESRD). Several studies have identified various echocardiographic parameters as independent predictors of mortality in this patient population.
For instance, one study found that in patients with ESRD assessed for renal transplantation, echocardiographic parameters such as severely impaired left ventricular function, pulmonary hypertension and/or right ventricular dysfunction, and regional wall motion abnormalities were associated with all-cause mortality. Combining these factors into a simple score strongly predicted increased mortality (PUBMED:23542473).
Another study in peritoneal dialysis patients showed that serum prealbumin and echocardiographic parameters such as aortic root diameter, interventricular septum end-diastolic thickness, and left ventricular end-diastolic diameter index were optimal predictors of all-cause mortality. These factors, along with malnutrition and myocardial hypertrophy, were also independent predictors of cardiovascular mortality (PUBMED:32998138).
In patients starting hemodialysis, echocardiographic parameters like the E/e' ratio, left ventricular end-diastolic volume, and s' were found to have independent predictive values for several cardiovascular and mortality events. However, not all cardiovascular events could be predicted by echocardiographic parameters, as no parameter independently predicted cerebrovascular disease or non-cardiovascular events (PUBMED:25552882).
Furthermore, in patients with preserved renal function and mild to moderate chronic kidney disease (CKD), echocardiographic abnormalities were frequent, and certain parameters, particularly those assessing right ventricular function, were independently associated with changes in renal function and mortality (PUBMED:29996910).
These findings suggest that echocardiographic assessment, especially of ventricular function and structure, can be a valuable tool in predicting mortality and cardiovascular events in patients with ESRD and those with varying stages of CKD. |
Instruction: Do dental students have a neutral working posture?
Abstracts:
abstract_id: PUBMED:33259658
Changes in dental hygiene students' working posture following digital sound feedback. Introduction: This study aimed to observe changes in working posture by measuring the REBA (Rapid Entire Body Assessment) scores of dental hygiene students in response to digital sound feedback linked with a smartphone application.
Methods: This study was conducted on 28 fourth-year dental hygiene students who had received theoretical and practical training on dental posture in the second year and had then practised on mannequins and patients for about four semesters. Periodontal instrumentation was performed freely while digital sound notification feedback was applied for four weeks after baseline, 30 minutes per week. REBA was measured during the last 1-2 minutes of instrumentation, which were performed without digital sound notification feedback. Follow-up was conducted the same way 2-3 weeks after the intervention period.
Results: The REBA score for total, neck and trunk of all subjects showed statistically significant decreases post-intervention compared with the baseline scores (total p < .001, neck p < .001 and trunk p = .042).
Conclusions: A digital sound feedback system was shown to be effective in encouraging correct working posture in dental hygiene students by helping them improve their REBA scores.
abstract_id: PUBMED:27197705
Do dental students have a neutral working posture? Background: Dentists are susceptible to musculoskeletal disorders (MSDs) due to prolonged static postures. To prevent MSDs, the working postures of dental students should be assessed and corrected early in their careers.
Objective: This study estimated the risk of developing musculoskeletal disorders in dental students using Rapid Upper Limb Assessment (RULA) tool.
Methods: A total of 103 undergraduate dental students from the fourth and fifth academic years participated. The postures of these students were assessed using the RULA tool while they worked in the dental clinic. They also answered a questionnaire regarding their knowledge of postural dental ergonomic principles.
Results: The majority of the students (66%) were at intermediate and high risk levels to develop MSDs and their postures needed to be corrected. There was no significant correlation between RULA score and gender, academic year and different wards of dental clinics. There was no significant correlation between knowledge and RULA scores.
Conclusions: Dental students did not have favorable working postures. They were at intermediate to high risk of developing MSDs, which calls for a change in their working postures. Students should therefore be trained in ergonomic principles, and to achieve the best results, ergonomic lessons should be accompanied by practice and periodic evaluations.
abstract_id: PUBMED:35291489
Effect of magnification factor by Galilean loupes on working posture of dental students in simulated clinical procedures: associations between direct and observational measurements. Objectives: To determine the effect of different levels of Galilean loupe magnification on working posture as measured by compliance with ergonomic posture positions, angular deviation from the neutral position of the neck, and muscle activation in the neck and upper back region during simulated clinical conditions.
Methods: An experimental laboratory study was performed in which the dependent variables were compliance with ergonomic posture requirements while performing simulated restorative procedures in Restorative Dentistry, angular deviation from the neutral position of the neck, and muscle activation in the neck and upper back. The independent variable was the level of Galilean loupe magnification, which was tested at four levels (naked eye, 2.5× magnification, 3.0× magnification, and 3.5× magnification). The cavity preparations and Class I composite resin restorations were performed on artificial first molars on a mannequin in a dental chair. Working posture was recorded on video over the course of the procedure, with participants filmed from three different angles. The Compliance Assessment of Dental Ergonomic Posture Requirements (CADEP) was used to assess compliance with ergonomic posture requirements, locally produced posture assessment software analyzed angular deviation from the neutral position of the neck, and surface electromyography bilaterally assessed activation of the sternocleidomastoid, descending trapezius, and ascending trapezius muscles. Two-factor analysis of variance (ANOVA) and either Tukey's post-hoc test or the Games-Howell post-hoc test were performed (α = 0.05).
Results: During the cavity preparations and restorations, the use of Galilean loupes at all magnifications positively influenced working posture as measured by participants' compliance with ergonomic posture positions (p < 0.01) and neck angulation (p < 0.01); the use of these loupes did not affect muscle activation in the regions evaluated (p > 0.05).
Conclusion: The use of Galilean loupes had a positive effect on dental students' working posture during the restoration procedures performed.
abstract_id: PUBMED:33180967
Comparative analysis of preclinical dental students' working postures using dental loupes and dental operating microscope. Background: Dentists are susceptible to musculoskeletal disorders due to prolonged static postures during dental treatments. Using a magnification tool like dental operating microscope (DOM) or the dental loupes may correct the operator's posture. Up until now, few studies have focused on preclinical dental students' posture when working with the DOM, while most of them have focused on the loupes. The aim of this study was to comparatively analyse the working posture of preclinical students during a dental restoration procedure, working with two different magnification methods.
Materials And Methods: This study used a randomised cross-over design in which seventeen third-year students were randomly divided into three groups. The exclusion criteria were previous contact with magnification systems and previous clinical working experience. Each student prepared 3 Black class 1 cavities on artificial lower molars: first with no magnification, then with dental loupes, and finally with the DOM. They were video-recorded throughout the preparation. Trunk, neck, and upper-arm positions were evaluated using the Posture Assessment Instrument. Students completed a questionnaire on their subjective perception of the two magnification systems.
Results: The statistical analysis showed significant improvement of the working posture using magnification systems compared to direct vision. The biggest improvement was obtained through the use of DOM, followed by the dental loupes. Students perceived dental loupes as being the most comfortable and easy to adapt to. They reported being more focused when using DOM.
Conclusions: Both magnification systems had a positive impact on working posture, with the DOM producing the best results. The loupes showed better adaptability, while the DOM promoted better concentration.
abstract_id: PUBMED:35238116
Preclinical dental training: Association between fine motor skills and compliance with ergonomic posture techniques. Objective: The aim of this study was to evaluate the dental students' fine motor skills and their compliance with ergonomic posture techniques over the course of a preclinical training year. The correlation between fine motor skills and compliance was also assessed.
Methods: The ergonomic posture of students enrolled in the second year of a five-year undergraduate dental degree programme (n = 62) was assessed using the Compliance Assessment of Dental Ergonomic Posture Requirements (CADEP). This assessment relied on photographs of the students performing preclinical laboratory procedures during the school year. The photographs of each procedure received a score, and the final score obtained (0 to 10) corresponded to the extent of the student's compliance with ergonomic posture techniques. Initial compliance was calculated during the first two months of the training programme, whilst final compliance was calculated during the last two months. Fine motor skills were evaluated using the modified Dental Manual Dexterity Assessment (DMDA), which was also applied at the beginning and the end of the school year. Data were assessed statistically by Student's paired t test, and the correlation between fine motor skills and compliance with ergonomic posture techniques was estimated by Pearson's correlation coefficient (r) and Student's t test (α = 0.05).
Results: The compliance scores were higher at the end of the academic year than at the beginning of the year (p < 0.001; t = -5.300). Fine motor skills improved significantly with time (p < 0.001; t = -10.975). Non-significant correlations were found between students' fine motor skills and their ergonomic posture compliance both at the beginning (r = -0.197; p = 0.126) and at the end of the training year (r = 0.226; p = 0.078).
Conclusion: The students' manual dexterity and compliance with ergonomic posture techniques increased over the course of the preclinical training year, and the correlation between students' fine motor skills and their ergonomic posture compliance was not significant.
abstract_id: PUBMED:30745350
The Effect of Magnification Loupes on Spontaneous Posture Change of Dental Students During Preclinical Restorative Training. Scientific evidence validating the beneficial effect of loupes in preventing musculoskeletal disorders is very scarce. The aim of this study was to assess the impact of dental loupes on dental students' posture during a preclinical restorative dentistry course. Using a randomized crossover design, this study was conducted at the School of Dentistry, University of Nantes, France, in 2017. Forty students in their second year of dental study were randomly divided into two groups of 20 each: group A used loupes, whereas group B did not. The week after, students reversed configurations (each subject served as his or her own treatment and control group). Students were video-recorded during cavity preparation. Trunk, head and neck, and upper arm positions were analyzed using continuous scores based on the modified Posture Assessment Instrument. Additionally, cavities were rated, and students completed a questionnaire on their perceptions of the loupes. On a scale on which lower scores indicated better posture, the results showed significantly higher posture ergonomic scores per minute for students without loupes (146.3±6.64 points/min) than with loupes (123.2±6.77 points/min; p<0.05). The majority of the students (32/39, 82%) showed improvements in ergonomic postures with the use of loupes. Trunk, head, and neck were positively impacted by the use of loupes, but not the upper arms. Cavity preparations were not improved by the use of loupes. The questionnaire revealed negative aspects (pain and difficulty adapting) but underlined the perceived positive impact on posture. This study documented the ergonomic advantages and challenges of introducing magnification near the beginning of the dental training program.
abstract_id: PUBMED:33527639
Development and assessment of an indirect vision training programme for operatory dentistry: Effects on working posture. Objectives: Students experience difficulty working with indirect vision and often adopt inadequate working postures because of it. This study created and then assessed the effects of an indirect vision preclinical training programme on dental students' working posture.
Methods: The study enrolled students in the third year of the 5-year undergraduate programme in dentistry in the School of Dentistry of São Paulo State University (UNESP), Araraquara (N = 54). The programme consisted of four training sessions in which students performed different types of activities in which only a mirror was used to see the procedure they were performing. To evaluate posture, students were asked to perform class III cavity preparations (distal-palatal and mesial-palatal preparations) on upper central and lateral incisors in a dental mannequin (tooth numbers 11, 12, 21 and 22) both before and after the indirect vision training programme. Photographs were taken of the students' working postures. The photographs were assessed by a duly trained researcher using the Compliance Assessment of Dental Ergonomic Posture Requirements. A descriptive statistical analysis was performed, and the assumptions of normality were verified. Student's paired t test was also performed. The significance level adopted was 5%.
Results: A significant difference was found between the percentages of correct ergonomic postures adopted before and after the training programme (p = 0.039).
Conclusions: The preclinical training programme for indirect vision was found to have a positive effect on the working postures of the students evaluated herein.
abstract_id: PUBMED:27417601
Musculoskeletal Disorders and Working Posture among Dental and Oral Health Students. The prevalence of musculoskeletal disorders (MSD) in the dental professions has been well established, and can have detrimental effects on the industry, including lower productivity and early retirement. There is increasing evidence that these problems commence during undergraduate training; however, there are still very few studies that investigate the prevalence of MSD or postural risk in these student groups. Thus, the aim of this study was to determine the prevalence of MSD and conduct postural assessments of students studying oral health and dentistry. A previously validated self-reporting questionnaire measuring MSD prevalence, derived from the Standardised Nordic Questionnaire, was distributed to students. Posture assessments were also conducted using a validated Posture Assessment Instrument. MSD was highly prevalent in all student groups, with 85% reporting MSD in at least one body region. The neck and lower back were the most commonly reported. The final year dental students had the highest percentage with poor posture (68%), while the majority of students from other cohorts had acceptable posture. This study supports the increasing evidence that MSD may develop in students before the beginning of a professional career. The prevalence of poor posture further highlights the need to place greater emphasis on ergonomic education.
abstract_id: PUBMED:18037853
Assessment of dental student posture in two seating conditions using RULA methodology - a pilot study. Objectives: To assess dental students' posture on two different seats in order to determine if one seat predisposes to a difference in working posture.
Design: A between-subject experimental design was selected.
Setting: The study was undertaken at the University of Birmingham School of Dentistry in 2006. Subjects (Materials) And Methods: Sixty second-year dental students at the University of Birmingham who were attending their first classes in the phantom head laboratory were randomly selected and allocated to two different seats (30 Bambach Saddle Seats and 30 conventional seats). Students were trained in the use of the seats. After ten weeks, the students were observed, photographs were taken by the researcher and these were assessed using Rapid Upper Limb Assessment (RULA).
Main Outcome Measures: The posture of the students was assessed using the RULA. Each student was given a risk score. A Mann-Whitney test was used for statistical analysis.
Results: The results indicated that the students using the conventional seat recorded significantly higher risk scores (p <0.05) when compared with the students using the Bambach Saddle Seat, suggesting an improvement in posture when using the Bambach Saddle Seat.
Conclusion: RULA has identified that dental students using a Bambach Saddle Seat were able to maintain an acceptable working posture during simulated dental treatment and this seating may reduce the development of work-related musculoskeletal disorders.
abstract_id: PUBMED:28295943
Wrong postural hygiene and ergonomics in dental students of the University of Valencia (Spain) (part I). Introduction: Failure to adopt a correct working posture can lead to occupational diseases.
Objectives: To evaluate knowledge of ergonomics in relation to the BHOP concept and its application to routine clinical practice among undergraduate and postgraduate dental students at the University of Valencia (Valencia, Spain).
Material And Methods: A study based on interviews of undergraduate and postgraduate dental students in the University of Valencia (Valencia, Spain) was carried out. The information from a total of 336 interviews was used for the statistical analysis, differentiating according to gender and academic year: knowledge of ergonomics, pain prevalence and antecedents, assessment of the possible necessity for improved training in ergonomics, and evaluation of postural hygiene.
Results: Only 28.6% of the students were found to sit correctly in the dentist chair. Furthermore, in the opinion of the students, very few subjects in the curriculum provided adequate teaching on ergonomics and working posture. The analysis of postural hygiene showed great variability. There were no significant differences in posture between males and females, although some incorrect postures appeared to be associated with the academic year (P<.05). However, no significant improvements in postural hygiene were noted on progressing from one academic year to the next.
Conclusions: The students in our study were not familiar with the principles of ergonomics and did not sit correctly in the dentist chair. Improved training in this field is required in dental school.
Answer: No, dental students do not typically have a neutral working posture. Studies have shown that the majority of dental students (66%) were at intermediate and high risk levels for developing musculoskeletal disorders (MSDs) and their postures needed to be corrected (PUBMED:27197705). Additionally, the assessment of dental student posture in two seating conditions using RULA methodology indicated that students using conventional seats recorded significantly higher risk scores, suggesting that their posture was not neutral and could benefit from improved seating (PUBMED:18037853). Furthermore, another study found that only 28.6% of the students sat correctly in the dentist chair, indicating that a large proportion of students do not maintain a neutral working posture (PUBMED:28295943). |
Instruction: Does problem-solving treatment work through resolving problems?
Abstracts:
abstract_id: PUBMED:12420901
Does problem-solving treatment work through resolving problems? Background: A randomized controlled trial of problem-solving treatment, antidepressant medication and the combination of the two treatments found no difference in treatment efficacy for major depressive disorders in primary care. In addition to treatment outcome, the trial sought to determine possible mechanisms of action of the problem-solving intervention.
Method: Two potential mechanisms of action of problem-solving treatment were evaluated by comparison with drug treatment. First, did problem-solving treatment work by achieving problem resolution and secondly, did problem-solving treatment work by increasing the patients' sense of mastery and self-control?
Results: Problem-solving treatment did not achieve a greater resolution in the patients' perception of their problem severity by comparison with drug treatment, neither did problem-solving treatment result in a greater sense of mastery or self-control.
Conclusions: The results from this study did not support the hypotheses that for patients with major depression, by comparison with antidepressant medication: problem-solving treatment would result in better problem resolution; or that problem-solving treatment would increase the patients' sense of mastery and self-control.
abstract_id: PUBMED:35592169
Developing Problem-Solving Expertise for Word Problems. Studying worked examples imposes relatively low cognitive load because learners' attention is directed to learning the schema embedded in the worked examples. That schema encompasses both conceptual knowledge and procedural knowledge. It is well documented that worked examples are effective in facilitating the acquisition of problem-solving skills. However, less is known about the use of worked examples to develop problem-solving expertise. Typically, experts demonstrate an efficient way to solve problems that is quicker and involves fewer solution steps. We reviewed five studies to validate the benefit of worked examples for developing problem-solving expertise for word problems. Overall, a diagram portraying the problem structure, coupled with either studying worked examples or completing multiple example-problem pairs, facilitates the formation of an equation to solve word problems efficiently. Hence, an in-depth understanding of conceptual knowledge (i.e., problem structure) might contribute to superior performance of procedural knowledge, manifested in reduced solution steps.
abstract_id: PUBMED:32468563
Transitioning Knowledge Levels Through Problem Solving Methods. Problem solving is one of the most important goals of mathematics teaching. Several studies, such as the one attached to this dissertation, demonstrate that mathematical problems and their resolution are ultimately a difficulty for the majority of students. The very low performance of 15-year-old pupils in our country in the PISA international assessment, coupled with the findings of problem-solving research, demonstrates the weakness of our country's education system in bridging the gap between school reality and everyday or original problems. In contrast to these discouraging results, several theories in the science of mathematics teaching describe problem-solving methods, such as Polya's method of questioning, which help students develop their thinking skills and metacognitive abilities and, above all, raise their level of knowledge relative to where it was before they dealt with problem solving. Key findings of the research are that there is no absolute link between pupils' skills in solving original problems and their answers to problems of school reality. However, the questionnaire method we applied in the second phase of the survey proved effective: the students found value in the method for solving a problem by correcting errors or shortcomings and eventually answering correctly. These methods, and research on them, point the way to acquiring mathematical knowledge through problems. For firmer conclusions, the learning community awaits the results of research in cognitive science and neuroscience on problem solving. These two areas of educational research, cognitive science and neuro-education, are expected to provide answers about the transition of knowledge from one level to another.
abstract_id: PUBMED:24324454
Noticing relevant problem features: activating prior knowledge affects problem solving by guiding encoding. This study investigated whether activating elements of prior knowledge can influence how problem solvers encode and solve simple mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __). Past work has shown that such problems are difficult for elementary school students (McNeil and Alibali, 2000). One possible reason is that children's experiences in math classes may encourage them to think about equations in ways that are ultimately detrimental. Specifically, children learn a set of patterns that are potentially problematic (McNeil and Alibali, 2005a): the perceptual pattern that all equations follow an "operations = answer" format, the conceptual pattern that the equal sign means "calculate the total", and the procedural pattern that the correct way to solve an equation is to perform all of the given operations on all of the given numbers. Upon viewing an equivalence problem, knowledge of these patterns may be reactivated, leading to incorrect problem solving. We hypothesized that these patterns may negatively affect problem solving by influencing what people encode about a problem. To test this hypothesis in children would require strengthening their misconceptions, and this could be detrimental to their mathematical development. Therefore, we tested this hypothesis in undergraduate participants. Participants completed either control tasks or tasks that activated their knowledge of the three patterns, and were then asked to reconstruct and solve a set of equivalence problems. Participants in the knowledge activation condition encoded the problems less well than control participants. They also made more errors in solving the problems, and their errors resembled the errors children make when solving equivalence problems. Moreover, encoding performance mediated the effect of knowledge activation on equivalence problem solving. Thus, one way in which experience may affect equivalence problem solving is by influencing what students encode about the equations.
abstract_id: PUBMED:26078309
The effectiveness of return-to-work interventions that incorporate work-focused problem-solving skills for workers with sickness absences related to mental disorders: a systematic literature review. Objectives: This paper reviews the current state of the published peer-reviewed literature related to return-to-work (RTW) interventions that incorporate work-related problem-solving skills for workers with sickness absences related to mental disorders. It addresses the question: What is the evidence for the effectiveness of these RTW interventions?
Design: Using a multiphase screening process, this systematic literature review was based on publically available peer-reviewed studies. Five electronic databases were searched: (1) Medline Current, (2) Medline In-process, (3) PsycINFO, (4) Econlit and (5) Web of Science.
Setting: The focus was on RTW interventions for workers with medically certified sickness absences related to mental disorders.
Participants: Workers with medically certified sickness absences related to mental disorders.
Interventions: RTW intervention included work-focused problem-solving skills.
Primary And Secondary Outcome Measures: RTW rates and length of sickness absences.
Results: There were 4709 unique citations identified. Of these, eight articles representing a total of six studies were included in the review. In terms of bias avoidance, two of the six studies were rated as excellent, two as good and two as weak. Five studies were from the Netherlands; one was from Norway. There was variability among the studies with regard to RTW findings. Two of three studies reported significant differences in RTW rates between the intervention and control groups. One of six studies observed a significant difference in sickness absence duration between intervention and control groups.
Conclusions: There is limited evidence that combinations of interventions that include work-related problem-solving skills are effective in RTW outcomes. The evidence could be strengthened if future studies included more detailed examinations of intervention adherence and changes in problem-solving skills. Future studies should also examine the long-term effects of problem-solving skills on sickness absence recurrence and work productivity.
abstract_id: PUBMED:28481739
The effects of monitoring environment on problem-solving performance. While effective and efficient solving of everyday problems is important in business domains, little is known about the effects of workplace monitoring on problem-solving performance. In a laboratory experiment, we explored the monitoring environment's effects on an individual's propensity to (1) establish pattern solutions to problems, (2) recognize when pattern solutions are no longer efficient, and (3) solve complex problems. Under three work monitoring regimes (no monitoring, human monitoring, and electronic monitoring), 114 participants solved puzzles for monetary rewards. Based on research related to worker autonomy and the theory of social facilitation, we hypothesized that monitored (versus non-monitored) participants would (1) have more difficulty finding a pattern solution, (2) more often fail to recognize when the pattern solution is no longer efficient, and (3) solve fewer complex problems. Our results support the first two hypotheses, but in complex problem solving, an interaction was found between self-assessed ability and the monitoring environment.
abstract_id: PUBMED:35165840
Improving Our Understanding of Impaired Social Problem-Solving in Children and Adolescents with Conduct Problems: Implications for Cognitive Behavioral Therapy. In cognitive behavioral therapy (CBT) children and adolescents with conduct problems learn social problem-solving skills that enable them to behave in more independent and situation appropriate ways. Empirical studies on psychological functions show that the effectiveness of CBT may be further improved by putting more emphasis on (1) recognition of the type of social situations that are problematic, (2) recognition of facial expressions in view of initiating social problem-solving, (3) effortful emotion regulation and emotion awareness, (4) behavioral inhibition and working memory, (5) interpretation of the social problem, (6) affective empathy, (7) generation of appropriate solutions, (8) outcome expectations and moral beliefs, and (9) decision-making. To improve effectiveness, CBT could be tailored to the individual child's or adolescent's impairments of these psychological functions which may depend on the type of conduct problems and their associated problems.
abstract_id: PUBMED:35771846
In the here and now: Future thinking and social problem-solving in depression. This research investigates whether thinking about the consequences of a problem being resolved can improve social problem-solving in clinical depression. We also explore whether impaired social problem solving is related to inhibitory control. Thirty-six depressed and 43 non-depressed participants were presented with six social problems and were asked to generate consequences for the problems being resolved or remaining unresolved. Participants were then asked to solve the problems and recall all the consequences initially generated. Participants also completed the Emotional Stroop and Flanker tasks. We found that whilst depressed participants were impaired at social problem-solving after generating unresolved consequences, they were successful at generating solutions for problems for which they previously generated resolved consequences. Depressed participants were also impaired on the Stroop task, providing support for an impaired inhibitory control account of social problem-solving. These findings advance our understanding of the mechanisms underpinning social problem-solving in depression and may contribute to the development of new therapeutic interventions to improve social-problem solving in depression.
abstract_id: PUBMED:35250689
A Mixed-Methods Study of Creative Problem Solving and Psychosocial Safety Climate: Preparing Engineers for the Future of Work. The future of work is forcing the world to adjust to a new paradigm of working. New skills will be required to create and adopt new technology and working methods. Additionally, cognitive skills, particularly creative problem-solving, will be highly sought after. The future of work paradigm has threatened many occupations but bolstered others, such as engineering. Engineers must keep up to date with the technological and cognitive demands brought on by the future of work. Using an exploratory mixed-methods approach, our study sought to make sense of how engineers understand and use creative problem solving. We found significant associations between engineers' implicit knowledge of creativity, exemplified creative problem solving, and the perceived value of creativity. We considered the work environment a potential facilitator of creative problem-solving. Using an innovative exceptional-cases analysis, we found that the highest-functioning engineers, in terms of knowledge, skills, and perceived value of creativity, also reported working in psychosocially safe environments that support creativity. We propose a new theoretical framework for a creative environment by integrating the Four Ps (Person, Process, Product, and Press) and psychosocial safety climate theory, which management could apply to facilitate creative problem solving. Beyond acquiring the knowledge to engage in creative problem solving as individuals or as a team, engineers must perceive creativity as valuable if it is to benefit the engineering role. The future of work paradigm requires that organisations provide an environment, a psychosocially safe climate, in which engineers can grow and hone the sought-after skills that artificial technologies cannot currently replace.
abstract_id: PUBMED:34484617
Teaching Individuals with Autism Problem-Solving Skills for Resolving Social Conflicts. Resolving social conflicts is a complex skill that involves consideration of the group when selecting conflict solutions. Individuals with autism spectrum disorder (ASD) often have difficulty resolving social conflicts, yet this skill is important for successful social interaction, maintenance of relationships, and functional integration into society. This study used a nonconcurrent multiple baseline across participants design to assess the efficacy of a problem-solving training and generalization of problem solving to naturally occurring untrained social conflicts. Three male participants with ASD were taught to use a worksheet as a problem-solving tool using multiple exemplar training, error correction, rules, and reinforcement. The results showed that using the worksheet was successful in bringing about a solution to social conflicts occurring in the natural environment. In addition, the results showed that participants resolved untrained social conflicts in the absence of the worksheet during natural environment probe sessions.
Answer: Based on the abstract from PUBMED:12420901, problem-solving treatment did not achieve a greater resolution in the patients' perception of their problem severity by comparison with drug treatment, nor did it result in a greater sense of mastery or self-control. Therefore, the results from this study did not support the hypothesis that problem-solving treatment would result in better problem resolution or increase the patients' sense of mastery and self-control for those with major depression, in comparison with antidepressant medication. |
Instruction: TV parenting practices: is the same scale appropriate for parents of children of different ages?
Abstracts:
abstract_id: PUBMED:23548115
TV parenting practices: is the same scale appropriate for parents of children of different ages? Purposes: Use multidimensional polytomous item response modeling (MPIRM) to evaluate the psychometric properties of a television (TV) parenting practices (PP) instrument. Perform differential item functioning (DIF) analysis to test whether item parameter estimates differed across education, language, or age groups.
Methods: Secondary analyses of data from three studies that included 358 children between the ages of 3 and 12 years in Houston, Texas. The TV PP instrument included 15 items with three subscales: social co-viewing, instructive parental mediation, and restrictive parenting. The multidimensional partial credit model was used to assess scale performance. DIF analysis was used to investigate differences in psychometric properties across subgroups.
Results: Classical test theory analyses revealed acceptable internal consistency reliability (Cronbach's α: 0.72 to 0.83). More items displaying significant DIF were found across children's age groups than parental education or language groups. A Wright map revealed that items covered only a restricted range of the distribution, at the easier to respond end of the trait.
Conclusions: TV PP scales functioned differently on the basis of parental education, parental language, and child age, with the highest DIF among the latter. Additional research is needed to modify the scales to minimize these moderating influences. Some items may be age specific.
abstract_id: PUBMED:35284952
Emotional competence, attachment, and parenting styles in children and parents. The goal of this study was to examine whether emotional competence correlates with attachment styles and parenting styles in children and their parents. The study was conducted with fifty children (9-11 years old) and their parents, in both of whom emotional competence (EKF) and parenting style (PAQ) were measured. The attachment styles of parents and children were measured with the Adult Attachment Scale (AAS) and the Bochumer Bindungstest (BoBiTe), respectively. The findings provide initial support for the assumption that attachment is related to emotional competence in parents. This relationship, however, was not significant in children. In addition, authoritative parenting and permissive parenting were significantly associated with emotional competence in parents. Emotional competence in children was associated with an authoritative parenting style.
abstract_id: PUBMED:37955265
Parenting Behaviors and Parental self-efficacy: A comparative study of parents of children with intellectual disabilities and typically developing children. Purpose: The aim of this study was to investigate the parenting behaviors and parental self-efficacy of parents of typically developing children and of children with an intellectual disability, considering whether the child had an intellectual disability and other relevant variables. The study involved 1194 parents with children aged 3-6 years, of whom 521 had children with intellectual disabilities and the remaining 673 had typically developing children. Method: The data collection instruments used in this study were the Parental Behavior Scale Short Form and the Parental Efficacy Scale. A t-test was used to compare parenting behavior and parental efficacy according to whether or not the child had an intellectual disability. In addition, MANOVA was used to compare parenting behavior and parental efficacy in relation to parents' level of education and to examine the possible interaction effect between these two independent variables. Results: The findings indicate that parents of typically developing children exhibit more positive parenting behaviors than parents of children with intellectual disabilities. However, the negative parenting behaviors of both groups were similar. In terms of parenting self-efficacy, parents of children with intellectual disabilities display higher self-efficacy than parents of typically developing children. The study also investigated whether there was an interaction effect between the child's disability status and parental education level, but no interaction effect was observed. Conclusion: Positive parenting behaviors and parental self-efficacy differed according to whether or not the child had an intellectual disability.
abstract_id: PUBMED:33353165
Mental Health of Parents of Special Needs Children in China during the COVID-19 Pandemic. We assessed the mental health of parents (N = 1450, mean age = 40.76 years) of special needs children during the COVID-19 pandemic. We conducted an online survey comprising items on demographic data; two self-designed questionnaires (children's behavioral problems/psychological demand of parents during COVID-19); and four standardized questionnaires, including the General Health Questionnaire, Perceived Social Support, Parenting Stress Index, and Neuroticism Extraversion Openness Five Factor Inventory. The results showed that there were significant differences among parents of children with different challenges. Parents of children with autism spectrum disorder were more likely to have mental health problems compared to parents whose children had an intellectual disability or a visual or hearing impairment. Behavioral problems of children and psychological demands of parents were common factors predicting the mental health of all parents. Parent-child dysfunctional interactions and parenting distress were associated with parents of children with autism spectrum disorder. Family support, having a difficult child, and parenting distress were associated with having children with an intellectual disability. It is necessary to pay attention to the parents' mental health, provide more social and family support, and reduce parenting pressures.
abstract_id: PUBMED:36093720
Parenting styles and dimensions in parents of children with developmental disabilities. Parenting influences child development. There is limited research, however, related to parenting children who have developmental disabilities. The aims of this study were to: (1) describe the parenting styles and dimensions of parents of children with developmental disabilities and (2) assess differences in parenting styles and dimensions among parents of children with autism spectrum disorder (ASD), Down syndrome (DS), and spina bifida (SB). Secondary data analysis was conducted from a nationwide cross-sectional study of 496 parents of children aged 5-16 years with ASD (n = 180), DS (n = 156), or SB (n = 160). Parent scores indicated high use of the authoritative parenting style and associated parenting dimensions, mid-to-low use of the permissive parenting style, and low use of the authoritarian parenting style and associated dimensions. Variation in parenting styles and dimensions among parents was primarily related to the child's diagnosis and family income. An unanticipated but positive finding was that parents with lower family incomes had significantly higher scores for the authoritative parenting style. Results from this study can inform future research that might inform clinical practice.
abstract_id: PUBMED:26335730
Parenting stress and affective symptoms in parents of autistic children. We examined parenting stress and mental health status in parents of autistic children and assessed factors associated with such stress. Participants were parents of 188 autistic children diagnosed according to DSM-IV criteria and parents of 144 normally developing children. Parents of autistic children reported higher levels of stress, depression, and anxiety than parents of normally developing children. Mothers of autistic children had a higher risk of depression and anxiety than did parents of normally developing children. Among parents of autistic children, mothers were more vulnerable to depression than fathers. Age, behavior problems of autistic children, and mothers' anxiety were significantly associated with parenting stress.
abstract_id: PUBMED:34682659
A Systematic Review on Foster Parents' Psychological Adjustment and Parenting Style-An Evaluation of Foster Parents and Foster Children Variables. The current systematic review aimed to evaluate the variables influencing foster parents' parenting stress, distress and parenting style, thereby supporting their adjustment and well-being as well as that of foster children. A PRISMA-guided search was conducted in three databases. Observational studies examining parenting stress, parenting distress (subsuming anxiety, depression and stress symptoms) and parenting style, all assessed through validated tools, were considered. A total of 16 studies were included, comprising N = 1794 non-relative foster parents (age range = 30-67 years). Results showed heightened parenting stress over time, both overall and compared to parents at large. Neither foster parents' nor foster children's socio-demographic characteristics significantly contributed to the increase in parenting stress; yet child-related stress and children's externalizing problems were its main predictors. Foster parents' couple cooperation was associated with reduced parenting stress. Moreover, the authoritative parenting style was associated with parental warmth, while the authoritarian style was associated with foster parents' greater perceived burden, greater criticism and rejection toward the foster child. Evidence supports the mutual influence between foster parents and children. Foster care services should support foster parents' needs within a concentric modular system, to ultimately provide better care for both foster parents and children.
abstract_id: PUBMED:34111068
Perceived parenting styles and primary attachment styles of single and children living with both parents. Objective: To investigate the association between perceived primary parenting styles and attachment styles between single-parent children and children living with both parents.
Methods: The correlational study was conducted at the Lahore Garrison University, Lahore, Pakistan, from September 2017 to March 2018, and comprised an equal number of children from single-parent families and those living with both the parents. Data was collected using the parental authority questionnaire and the Urdu version of the inventory of parental and peer attachment. Data was analysed using SPSS 21.
Results: Of the 200 children, 100 (50%) were in each of the two groups, and both groups had 50 (50%) girls and boys each. The overall mean age of the sample was 14.56±3.03 years (range: 11-18 years). There was a significant negative correlation between the permissive parenting style and mother's communication (p<0.05); the authoritarian parenting style had a negative correlation with parental communication and trust (p<0.001). Authoritative parenting had a significant positive relationship with trust (p<0.001) and communication with parents (p<0.001), and a negative relationship with feeling alienated from parents (p<0.01). Single-parent children perceived their parents as authoritarian (p<0.001) and had a more alienated attachment to their parents (p<0.001), whereas children living with both parents had more trust (p<0.001) and better communication with their parents (p<0.001).
Conclusions: It is important to understand the role of parents and of different parenting styles in building strong parent-child attachment.
abstract_id: PUBMED:33123063
Parents and Children During the COVID-19 Lockdown: The Influence of Parenting Distress and Parenting Self-Efficacy on Children's Emotional Well-Being. On March 10, 2020, Italy went into lockdown due to the Coronavirus Disease-19 (COVID-19) pandemic. The World Health Organization highlighted how the lockdown had negative consequences on psychological well-being, especially for children. The present study aimed to investigate parental correlates of children's emotion regulation during the COVID-19 lockdown. Within the Social Cognitive Theory framework, a path model in which parenting self-efficacy and parental regulatory emotional self-efficacy mediated the relationship between parents' psychological distress and both children's emotional regulation, and children's lability/negativity, was investigated. A total of 277 parents of children aged from 6 to 13 years completed an online survey that assessed their psychological distress, regulatory emotional self-efficacy, and parenting self-efficacy. Parents reported also children's emotional regulation and lability/negativity. A structural equation model (SEM) using MPLUS 8.3 was tested. Results showed that the hypothesized model exhibited excellent fit, chi-square (83) = 140.40, p < 0.01, RMSEA = 0.05, CFI = 0.97, TLI = 0.96, SRMR = 0.04. The influences of parents' psychological distress and parents' regulatory emotional self-efficacy on children's emotional regulation and lability/negativity were mediated by parenting self-efficacy. The mediation model was invariant across children's biological sex and age, and geographical residence area (high risk vs. low risk for COVID-19). Results suggested how parents' beliefs to be competent in managing parental tasks might be a protective factor for their children's emotional well-being. Implications for intervention programs are discussed.
abstract_id: PUBMED:26013560
The association of parent's outcome expectations for child TV viewing with parenting practices and child TV viewing: an examination using path analysis. Background: Television (TV) viewing has been associated with many undesirable outcomes for children, such as increased risk of obesity, but TV viewing can also have benefits. Although restrictive parenting practices are effective in reducing children's TV viewing, not all parents use them and it is currently unclear why. The current study examined parenting practices related to TV viewing in the context of social-cognitive theory. Specifically, we hypothesized that positive and negative Parental Outcome Expectations for child's TV Viewing (POETV) would be associated with social co-viewing and restrictive parenting practices, and that POETV and parenting practices influence the amount of TV viewed by the child.
Method: Data were collected from an internet survey of 287 multi-ethnic parents and their 6-12 year old children on participants' sociodemographic information, parenting practices related to TV use, POETV, and parent and child TV viewing. Path analysis was used to examine the relationship amongst variables in separate models for weekday and weekend TV viewing, controlling for child age, household education, and parental TV viewing.
Results: The results provided partial support for the hypotheses, with notable differences between weekday and weekend viewing. The models explained 13.6% and 23.4% of the variance in children's TV viewing on weekdays and weekends, respectively. Neither positive nor negative POETV was associated with restrictive TV parenting in either model. One subscale each from positive and negative POETV was associated with social co-viewing parenting on both weekends and weekdays in the expected direction. Restrictive parenting practices were directly negatively associated with children's TV viewing on weekdays, but not weekends. Social co-viewing parenting was directly positively associated with children's TV viewing on weekends, but not weekdays. The strongest influence on children's TV viewing was having a TV in the child's bedroom. Negative POETV was weakly associated with having a TV in the child's room.
Conclusions: These findings suggest that POETV and parenting may have a greater impact on weekend TV viewing, when children tend to watch more TV, than weekday. The models suggest that POETV, parenting and especially removing the TV from children's rooms may be promising targets for interventions.
Answer: The scale for TV parenting practices may not be appropriate for parents of children of different ages. According to the study by PUBMED:23548115, the TV parenting practices (PP) instrument, which includes items on social co-viewing, instructive parental mediation, and restrictive parenting, showed differential item functioning (DIF) across children's age groups. This suggests that the psychometric properties of the TV PP scales functioned differently based on the age of the children, indicating that some items may be age-specific. The study found more items displaying significant DIF across children's age groups than across parental education or language groups. Therefore, additional research is needed to modify the scales to minimize these moderating influences and ensure that the scale is appropriate for parents of children at different developmental stages. |
Instruction: Is there a language divide in pap test use?
Abstracts:
abstract_id: PUBMED:17063131
Is there a language divide in pap test use? Objective: We sought to determine whether primary language use, measured by language of interview, is associated with disparities in cervical cancer screening.
Data Sources: We undertook a secondary data analysis of a pooled sample of the 2001 and 2003 California Health Interview Surveys. The surveys were conducted in English, Spanish, Cantonese, Mandarin, Korean, and Vietnamese.
Study Design: The study was a cross-sectional analysis of 3-year Pap test use among women ages 18 to 64, with no reported cervical cancer diagnosis or hysterectomy (n = 38,931). In addition to language of interview, other factors studied included race/ethnicity, marital status, income, educational attainment, years lived in the United States, insurance status, usual source of care, smoking status, area of residence, and self-rated health status.
Data Collection/extraction Methods: We fit weighted multivariate logit models predicting 3-year Pap test use as a function of language of interview, adjusting for the effects of specified covariates.
Principal Findings: Compared with the referent English interview group, women who interviewed in Spanish were 1.65 times more likely to receive a Pap test in the past 3 years. In contrast, we observed a significantly reduced risk of screening among women who interviewed in Vietnamese (odds ratio [OR] 0.67; confidence interval [CI] 0.48-0.93), Cantonese (OR 0.44; 95% CI 0.30-0.66), Mandarin (OR 0.48; 95% CI 0.33-0.72), and Korean (OR 0.62; 0.40-0.98).
Conclusions: Improved language access could reduce cancer screening disparities, especially in the Asian immigrant community.
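Note on the analysis described in this abstract: the following is a minimal, illustrative Python sketch (using statsmodels) of how a weighted multivariate logit of 3-year Pap test use on interview language and covariates might be set up. It is not the authors' code; the variable names and synthetic data are invented for illustration, and freq_weights is used only as a rough stand-in for the CHIS survey weights (a proper design-based survey estimator would be needed to reproduce the published odds ratios and confidence intervals).

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Hypothetical respondent-level data; names are illustrative, not actual CHIS fields.
df = pd.DataFrame({
    "pap_3yr": rng.binomial(1, 0.8, n),           # 1 = Pap test within the past 3 years
    "lang_spanish": rng.binomial(1, 0.20, n),     # interview-language dummies,
    "lang_vietnamese": rng.binomial(1, 0.05, n),  # with English as the reference group
    "age": rng.integers(18, 65, n),
    "insured": rng.binomial(1, 0.85, n),
    "weight": rng.uniform(0.5, 2.0, n),           # sampling weight (stand-in)
})

X = sm.add_constant(df[["lang_spanish", "lang_vietnamese", "age", "insured"]].astype(float))
fit = sm.GLM(df["pap_3yr"], X, family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()

# Exponentiated coefficients give odds ratios with 95% confidence intervals,
# the same type of quantities reported in the Principal Findings above.
print(pd.concat([np.exp(fit.params).rename("OR"), np.exp(fit.conf_int())], axis=1))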
abstract_id: PUBMED:34250852
Science, Maddá, and 'Ilm: The language divide in scientific information available to Internet users. The Internet has potential to alleviate inequality in general and specifically with respect to science literacy. Nevertheless, digital divides persist in online access and use, as well as in subsequent social outcomes. Among these, the "language divide" partly determines how successful users are in their Internet use depending on their proficiency in languages, and especially in English. To examine whether the quality of online scientific information varies between languages when conducting searches from the same country, we compared online search results regarding scientific terms in English, Hebrew, and Arabic. Findings indicate that searches in English yielded overall higher quality results, compared with Hebrew and Arabic, but mostly in pedagogical aspects, rather than scientific ones. Clustering the results by language yielded better separation than clustering by scientific field, pointing to a "language divide" in access to online science content. We argue that scientific communities and institutions should mitigate this language divide.
abstract_id: PUBMED:7885615
Pap-smear test today. Recalling the publication of Papanicolau and Traut's monograph in 1943, the authors review the use and the spread of the Pap smear up to the present. They have analyzed the different risks of cervical carcinoma. They believe that, to obtain a further reduction in the morbidity and death rate of this neoplasia, a Pap-smear examination should be performed once a year for women with HPV infection or other risk factors. In this way, it is possible to improve results to as much as 90%.
abstract_id: PUBMED:8332276
The Pap test in menopause. In a study covering 156 Pap tests in menopause, the authors observed a reduction in infectious phlogosis, in dysplasias and in neoplasias. They suggested that the Pap test should be conducted even during menopause to monitor VIN and the physiological senescence of the female genitalia.
abstract_id: PUBMED:33773556
Is Pap Test Awareness Critical to Pap Test Uptake in Women Living in Rural Vietnam? Introduction: Cervical cancer is the second leading cause of cancer death among Vietnamese females. By detecting precancerous cells, Pap test screening plays a critical role in the fight against cervical cancer. The present study aims to investigate health-related factors associated with receipt of a Pap test among Vietnamese females living in rural Vietnam, particularly examining the correlation between the level of Pap test awareness and receipt of a Pap test.
Methods: Anderson's Behavioral Model of Health Services Use was utilized as the present study's theoretical framework. A self-administered questionnaire was completed by 193 females residing in Quantri City, Vietnam.
Results: Only 15.5% (N=30) of participants in our sample had had a Pap test. Pap test awareness (OR = 18.38, p <.001) was a strong predictor of Pap test receipt. Participants who had heard about the Pap test were 18.38 times more likely to take a Pap test compared with those who had no prior knowledge. Besides awareness, variables including employment (OR = .18, p <.05) and health insurance coverage (OR = 10.75, p <.05) were significantly associated with Pap test uptake.
Conclusion: Findings from the present study suggest that interventions should be provided through public health efforts to enhance awareness of the Pap test, aiming to increase primary prevention of cervical cancer, especially among Vietnamese women living in rural areas, in order to reduce cancer health disparities.
abstract_id: PUBMED:24082913
Cytomorphology of unusual primary tumors in the Pap test. Rare entities in the Pap test, which include neoplastic and non-neoplastic conditions, pose challenges due to the infrequent occurrence of many of these entities in the daily practice of cytology. Furthermore, these conditions give rise to important diagnostic pitfalls to be aware of in the Pap test. For example, cases with adenoma malignum (AM) have been called benign. Recognition of these conditions can help correctly interpret Pap tests as abnormal and thereby ensure that patients get appropriately diagnosed. In this paper, we illustrate and discuss selected uncommon primary neoplastic lesions of the cervix and the vagina that may be seen in Pap test, with a focus on cytomorphology, differential diagnosis and the role of possible ancillary studies. These cases include high-grade squamous intraepithelial lesion cells with small cell morphology; small cell carcinoma; large neuroendocrine carcinoma; glassy cell carcinoma; AM; malignant mixed Müllerian tumor; clear cell carcinoma and primary malignant melanoma. Recognition of these rare variants/neoplasms is important so that involved Pap tests are not diagnosed as benign and that patients with these conditions get additional follow-up.
abstract_id: PUBMED:37575430
The relevance of words and the language/communication divide. First, the wide applicability of the relevance-theoretic pragmatic account of how new (ad hoc) senses of words and new (ad hoc) words arise spontaneously in communication/comprehension is demonstrated. The lexical pragmatic processes of meaning modulation and metonymy are shown to apply equally to simple words, noun to verb 'conversions', and morphologically complex cases with non-compositional (atomic) meanings. Second, this pragmatic account is situated within a specific view of the cognitive architecture of language and communication, with the formal side of language, its recursive combinatorial system, argued to have different developmental, evolutionary and cognitive characteristics from the meaning side of language, which is essentially pragmatic/communicative. Words straddle the form/meaning (syntax/pragmatics) divide: on the one hand, they are phrasal structures, consisting of a root and variable numbers of functors, with no privileged status in the syntax; on the other hand, they are salient to language users as basic units of communication and are stored as such, in a communication lexicon, together with their families of related senses, which originated as cases of pragmatically derived (ad hoc) senses but have become established, due to their communicative efficacy and frequency of use. Third, in an attempt to find empirical evidence for the proposed linguistic form-meaning divide, two very different cases of atypical linguistic and communicative development are considered: autistic children and deaf children who develop Homesign. The morpho-syntax (the formal side of language) appears to unfold in much the same way in both cases and is often not much different from that of typically developing children, but they diverge markedly from each other in their communication/pragmatics and their development of a system (a lexicon) of meaningful words/signs.
abstract_id: PUBMED:15333062
Impact of English language proficiency on receipt of pap smears among Hispanics. Our aim was to assess the impact of English language proficiency on Pap smear use among Hispanics. We performed a cross-sectional study using 2000 National Health Interview Survey data and included 2,331 Hispanic women aged ≥18 without a hysterectomy. After adjusting for sociodemographic and access factors, highly proficient English speakers were more likely to report a Pap smear in the past 3 years compared with speakers of low proficiency (adjusted prevalence ratio, 1.16; 95% confidence interval, 1.08 to 1.22). Also associated with Pap smear use were income, usual source of care, and health insurance. Our finding suggests that low English language proficiency is a barrier to receiving recent Pap smears among Hispanics.
abstract_id: PUBMED:11785116
The Pap test in HIV-positive women The aim of this work was to estimate the frequency of abnormal Papanicolau (Pap) smears in a group of HIV-infected women undergoing cervical screening. We re-examined 162 Pap smears from 118 patients infected with HIV. The patients were aged 23-55 years. A total of 108 smears (66.6%) from 80 patients were negative; 14 smears (8.6%) from 14 patients showed inflammation; 3 smears (1.8%) from 3 patients had atypical squamous cells of undetermined significance (ASCUS); 20 smears (13.5%) from 16 patients were abnormal for human papillomavirus (HPV); 13 smears (8.0%) from 9 patients revealed low-grade, squamous intraepithelial lesions; 10 smears (6.2%) from 7 patients were SIL-HG; the diagnosis of carcinoma was made in 3 cases (1.8%) and 2 smears from 2 patients were unsatisfactory. HIV-infected women have an increased rate of abnormal Pap smears for both HPV infections and cervical dysplasia. These results confirm the validity of cervical screening by Pap test.
abstract_id: PUBMED:25462509
Sociodemographic characteristics and health-related factors affecting the use of Pap smear screening among women with mental disabilities in Taiwan. This study examined the use of the Pap cervical cancer screening test among women with mental disabilities in Taiwan and analyzed factors related thereto. Data were obtained from three national databases in Taiwan: the 2008 database of physically and mentally disabled persons from the Ministry of the Interior, 2007-2008 Pap smear test data from the Health Promotion Administration, and claims data from the National Health Insurance Research Database. The study subjects included 49,642 Taiwanese women aged ≥30 years with mental disabilities. Besides descriptive and bivariate analyses, logistic regression analysis was also performed to examine factors affecting Pap smear use. In 2007-2008, Taiwanese women with mental disabilities had a Pap screening rate of 11.05%. Age, income, education, marital status, catastrophic illness/injury, relevant chronic illnesses, and severity of disability were identified as factors affecting their Pap smear use. Age and severity of disability were negatively correlated with Pap screening, with the odds of screening being 0.37 times as high in ≥70-year-olds as in 30-39-year-olds and 0.49 times as high for very severe disability as for mild disability. Income was positively correlated with Pap screening. Being married (OR=2.55) or divorced or widowed (OR=2.40) relative to being unmarried, and having a catastrophic illness/injury (OR=1.13), cancer (OR=1.47), or diabetes (OR=1.25), were associated with greater odds of screening. In Taiwan, women with mental disabilities receive Pap smears at a far lower rate than women in general.
Answer: Yes, there is evidence of a language divide in Pap test use. A study analyzing data from the California Health Interview Surveys found that primary language use, measured by the language of the interview, is associated with disparities in cervical cancer screening. Women who interviewed in Spanish were more likely to have received a Pap test in the past three years compared to the English interview group. In contrast, women who interviewed in Vietnamese, Cantonese, Mandarin, and Korean had a significantly reduced likelihood of having been screened (PUBMED:17063131). Another study highlighted the impact of English language proficiency on the receipt of Pap smears among Hispanics, with highly proficient English speakers more likely to report a Pap smear in the past three years compared to those with low proficiency (PUBMED:15333062). These findings suggest that language barriers can contribute to disparities in cervical cancer screening and that improved language access could help reduce these disparities, especially in the Asian immigrant community (PUBMED:17063131). |
Instruction: Does knowledge about the genetics of breast cancer differ between nongeneticist physicians who do or do not discuss or order BRCA testing?
Abstracts:
abstract_id: PUBMED:12644779
Does knowledge about the genetics of breast cancer differ between nongeneticist physicians who do or do not discuss or order BRCA testing? Purpose: To assess nongeneticist physicians' knowledge and experience with BRCA1/2 testing.
Methods: In 1998, 2250 internists, obstetrician-gynecologists (Ob-Gyns), and oncologists practicing in Pennsylvania, Maryland, Massachusetts, New York, or New Jersey were surveyed.
Results: Forty percent responded. Only 13% of internists, 21% of Ob-Gyns, and 40% of oncologists correctly answered all four knowledge questions about genetic aspects of breast cancer and testing for it. Knowledge was associated with discussing or ordering only among oncologists.
Conclusion: Despite deficiencies in their knowledge about the genetic aspects of breast cancer, many nongeneticist physicians have discussed testing and some have ordered testing.
abstract_id: PUBMED:25234477
Physician Risk Assessment Knowledge Regarding BRCA Genetics Testing. The study aim was to evaluate the association between genetics referrals, training in medical school, residency, or continuing medical education and physician knowledge of hereditary breast and ovarian cancer (HBOC). A survey of 55 questions was administered to 140 physicians evaluating knowledge and practice patterns regarding HBOC. Physicians with genetics training during residency were more likely to recognize that most instances of ovarian cancer are not hereditary (odds ratio (OR) = 3.16; 95 % confidence interval (CI) 1.32, 7.58). Physicians with continuing medical education (CME) training on genetics were more likely to identify that screening can be improved for those with a hereditary mutation (OR = 4.28; 95 % CI 1.32, 13.90). Primary care physicians who frequently referred for genetics were more likely to recognize that maternal history is not more important than paternal history (OR = 2.51; 95 % CI 1.11, 5.66), that screening can be improved for those with hereditary risk (OR = 4.06; 95 % CI 1.08, 15.22), and that females with a hereditary breast cancer risk would have different recommendations for screening than someone without this risk (OR = 4.91; 95 % CI 1.04, 23.25). Our data suggest that training and frequency of genetics referrals may be associated with knowledge of general risk assessment for HBOC.
abstract_id: PUBMED:36834079
Chinese American and Non-Hispanic White Breast Cancer Patients' Knowledge and Use of BRCA Testing. Breast cancer is the most commonly diagnosed cancer among Chinese American women. Knowing the BRCA1 and BRCA2 (BRCA1/2) gene mutation status can improve breast cancer patients' health outcomes by guiding targeted treatment towards preventing breast cancer recurrence and other BRCA-related cancers. Nevertheless, it is unclear if there is a disparity in knowledge and use of BRCA testing among Chinese American breast cancer patients. This cross-sectional study investigated the possible presence of differences in the knowledge and the use of BRCA testing between Chinese American and Non-Hispanic White (NHW) breast cancer patients. We surveyed 45 Chinese American and 48 NHW adult breast cancer patients who had been diagnosed with breast cancer within the previous two years through telephone interviews. The results showed that race was not statistically related to the use of BRCA testing. BRCA testing utilization was associated with family history (p < 0.05) and age (p < 0.05). However, Chinese American participants' understanding of BRCA testing was significantly lower than that of NHW participants (p = 0.030). Our findings suggest that a disparity exists in BRCA testing knowledge between Chinese American and NHW breast cancer patients. Genetic education and counseling are needed to improve BRCA testing knowledge and uptake among Chinese American breast cancer patients.
abstract_id: PUBMED:37497287
Malaysian nurses' knowledge and attitudes regarding BRCA genetic testing. Background: Breast cancer genetic (BRCA) testing for cancer susceptibility is an emerging technology in medicine.
Objective: This study assessed the knowledge and attitude of nurses regarding BRCA genetic testing in a tertiary teaching hospital in Malaysia.
Methods: A descriptive cross-sectional study was conducted among 150 nurses using a simple random sampling technique in a tertiary teaching hospital in northeast peninsular Malaysia. Data were collected using a self-administered questionnaire consisting of socio-demographic data, assessing nurses' knowledge and attitude regarding BRCA genetic testing. Fisher exact test analysis was used to determine the association between socio-demographic characteristics with knowledge and attitude level. In addition, the overall knowledge and attitude were analysed using the sum score of each outcome based on Bloom's cut-off point.
Results: Of the 150 nurses, 66.7% had a high level of knowledge about BRCA genetic testing, and 58% were positive towards genetic testing. The participants' mean age was 28.9 years (SD = 6.70). Years of working experience (p = 0.014) significantly influenced the level of knowledge of BRCA genetic testing, whereas speciality working experience (p <0.001) significantly influenced attitudes towards BRCA genetic testing.
Conclusions: The results show that most nurses have adequate knowledge of BRCA genetic testing. However, their attitude could be termed negative. Therefore, targeted education programs on BRCA genetic testing and risk are needed to improve the knowledge and attitude of nurses and, ultimately, can educate the women and increase health-seeking behaviour among eligible women.
abstract_id: PUBMED:29260484
Breast Cancer Genetics Knowledge and Testing Intentions among Nigerian Professional Women. Genetic testing services for breast cancer are well established in developed countries compared to African populations that bear a disproportionate burden of breast cancer (BC). The objective of this study is to examine the knowledge of professional Nigerian women about BC genetics and their intentions to utilize genetic testing services when it is made available in Nigeria. In this study, 165 lecturers and 189 bankers were recruited and studied using a validated self-administered questionnaire. The respondents' mean age was 34.9 years (SD = 10.9), 6.5% had family history of BC, and 84.7% had limited knowledge of breast cancer genetics. The proportion of women with genetic testing intentions for breast cancer was 87.3%. Health care access (OR = 2.35, 95% CI, 1.07-5.13), religion (OR = 3.51, 95% CI, 1.03-11.92), and perceived personal risk if a close relative had breast cancer (OR = 2.31, 95% CI, 1.05-5.08) independently predicted testing intentions. The genetic testing intentions for BC were high despite limited knowledge about breast cancer genetics. Promotion of BC genetics education as well as efforts to make BC genetic testing services available in Nigeria at reduced cost remains essential.
abstract_id: PUBMED:14707526
Swiss primary care physicians' knowledge, attitudes and perception towards genetic testing for hereditary breast cancer. Purpose: The Swiss Institute for Applied Cancer Research's (SIAK) Network for Cancer Predisposition Testing and Counseling was established in 1999. To define its role in the care of individuals with inherited cancer predisposition, attitudes, knowledge and perception of primary care physicians towards genetic counseling and testing for hereditary breast cancer were examined.
Methods: A questionnaire was sent to 1391 primary care physicians in private practice in the German-speaking Canton of Zürich.
Results: 628 (45%) questionnaires were returned: 319/778 (41%) general practitioners, 156/367 (43%) internists, 118/218 (54%) obstetrician-gynecologists and 22/28 (76%) oncologists answered. Socio-demographic characteristics were: 74% males and 26% females with a mean age of 51 and a mean number of 14 years in private practice. Fifty-two percent of responding physicians approved of genetic susceptibility testing and seventy-seven percent would recommend it to individuals at risk if asked for it. Of the responding physicians, 47% wanted to disclose test results and discuss its consequences and 79% wanted to provide long term care and support, whereas only 36% and 9%, respectively, assigned these tasks to specialized cancer genetics services. Eight knowledge questions had to be answered: 290 (46%) gave 0-2 correct answers, 284 (45%) gave 3-5 and 54 (9%) gave 6-8 correct answers.
Conclusions: Our findings demonstrate that the majority of responding primary care physicians in the Canton of Zürich approve of genetic testing for hereditary breast cancer and want to play a central role in the management of these families, but lack the knowledge to do so efficiently. Our findings underline the importance of educational programs in cancer genetics.
abstract_id: PUBMED:23448386
A statewide survey of practitioners to assess knowledge and clinical practices regarding hereditary breast and ovarian cancer. Purpose: This study describes practitioner knowledge and practices related to BRCA testing and management and explores how training may contribute to practice patterns.
Methods: A survey was mailed to all BRCA testing providers in Florida listed in a publicly available directory. Descriptive statistics characterized participants and their responses.
Results: Of the 87 respondents, most were community-based physicians or nurse practitioners. Regarding BRCA mutations, the majority (96%) recognized paternal inheritance and 61% accurately estimated mutation prevalence. For a 35-year-old unaffected BRCA mutation carrier, the majority followed national management guidelines. However, 65% also recommended breast ultrasonography. Fewer than 40% recognized the need for comprehensive rearrangement testing when BRACAnalysis(®) was negative in a woman at 30% risk. Finally, fewer than 15% recognized appropriate testing for a BRCA variant of uncertain significance. Responses appeared to be positively impacted by presence and type of cancer genetics training.
Conclusions: In our sample of providers who order BRCA testing, knowledge gaps in BRCA prevalence estimates and appropriate screening, testing, and results interpretation were identified. Our data suggest the need to increase regulation and oversight of genetic testing services at a policy level, and are consistent with case reports that reveal liability risks when genetic testing is conducted without adequate knowledge and training.
abstract_id: PUBMED:21146769
Awareness and utilization of BRCA1/2 testing among U.S. primary care physicians. Background: Testing for mutations in the breast and ovarian cancer susceptibility genes BRCA1 and BRCA2 (BRCA) has been commercially available since 1996.
Purpose: This study sought to determine, among U.S. primary care physicians, the level of awareness and utilization of BRCA testing and the 2005 U.S. Preventive Services Task Force (USPSTF) recommendations.
Methods: In 2009, data were analyzed on 1500 physician respondents to the 2007 DocStyles national survey (515 family practitioners, 485 internists, 250 pediatricians, and 250 obstetricians/gynecologists).
Results: Overall, 87% of physicians were aware of BRCA testing, and 25% reported having ordered testing for at least one patient in the past year. Ordering tests was most prevalent among obstetricians/gynecologists in practice for more than 10 years, with more affluent patients. Physicians were asked to select indications for BRCA testing from seven different clinical scenarios representing increased (4) or low-risk (3) situations consistent with the USPSTF guidelines. Among ordering physicians (pediatricians excluded), 45% chose at least one low-risk scenario as an indication for BRCA testing. Only 19% correctly selected all of the increased-risk and none of the low-risk scenarios.
Conclusions: A substantial majority of primary care physicians are aware of BRCA testing and many report having ordered at least one test within the past year. A minority, however, appear to consistently recognize the family history patterns identified by the USPSTF as appropriate indications for BRCA evaluation. These results suggest the need to improve providers' knowledge about existing recommendations-particularly in this era of increased BRCA direct-to-consumer marketing.
abstract_id: PUBMED:25983051
Experiences of Women Who Underwent Predictive BRCA 1/2 Mutation Testing Before the Age of 30. This qualitative interview study focuses on the experiences of a sample of British female BRCA 1/2 carriers who had predictive testing before the age of 30, which is the minimum age for breast screening in the UK. Following appropriate informed consent procedures participants were recruited through the Cancer Genetics Service for Wales. Semi-structured interviews were conducted face-to-face with seven participants, transcribed in full and analyzed using thematic analysis. The motives for testing and perceived advantages described by participants were similar to those identified in previous studies with older participants, such as increased awareness and knowledge and feeling more in control. However some of the perceived disadvantages were specific to younger women, including feeling pressured to make important life decisions earlier than they would have liked, such as about family planning and risk reducing surgery. Participants also reported feeling abandoned or forgotten because of lack of ongoing clinical contact, or feeling "stuck waiting" for screening to begin. However, none felt that these disadvantages were a reason to regret having testing. Findings in this small study suggest that having BRCA 1/2 predictive testing can have positive outcomes for young women even though they may be unable to access interventions such as breast screening. However it may be helpful to encourage young women during pre-test counseling to explore the decisions and choices they may face. These young women could benefit from ongoing support and follow up and increased interaction with healthcare professionals.
abstract_id: PUBMED:27525501
Evolution of Hereditary Breast Cancer Genetic Services: Are Changes Reflected in the Knowledge and Clinical Practices of Florida Providers? Aims: We describe practitioner knowledge and practices related to hereditary breast and ovarian cancer (HBOC) in an evolving landscape of genetic testing.
Methods: A survey was mailed in late 2013 to Florida providers who order HBOC testing. Descriptive statistics were conducted to characterize participants' responses.
Results: Of 101 respondents, 66% indicated either no genetics education or education through a commercial laboratory. Although 79% of respondents were aware of the Supreme Court ruling resulting in the loss of Myriad Genetics' BRCA gene patent, only 19% had ordered testing from a different laboratory. With regard to pretest counseling, 78% of respondents indicated they usually discuss 11 of 14 nationally recommended elements for informed consent. Pretest discussion times varied from 3 to 120 min, with approximately half spending <20 min. Elements not routinely covered by >40% of respondents included (1) possibility of a variant of uncertain significance (VUS) and (2) issues related to life/disability insurance. With regard to genetic testing for HBOC, 88% would test an unaffected sister of a breast cancer patient identified with a BRCA VUS.
Conclusions: Results highlight the need to identify whether variability in hereditary cancer service delivery impacts patient outcomes. Findings also reveal opportunities to facilitate ongoing outreach and education.
Answer: Yes, knowledge about the genetics of breast cancer does differ between nongeneticist physicians who do or do not discuss or order BRCA testing. A survey conducted in 1998 found that only a small percentage of internists, obstetrician-gynecologists, and oncologists correctly answered all knowledge questions about genetic aspects of breast cancer and testing for it. Knowledge was associated with discussing or ordering BRCA testing only among oncologists (PUBMED:12644779). Additionally, physicians with genetics training during residency or continuing medical education were more likely to have accurate knowledge regarding hereditary breast and ovarian cancer (HBOC), and primary care physicians who frequently referred for genetics were more likely to recognize important aspects of HBOC risk assessment (PUBMED:25234477). These findings suggest that training and frequency of genetics referrals may be associated with knowledge of general risk assessment for HBOC. |
Instruction: Intraoperative bleeding in stereotactic biopsies and its implication on postoperative management: can we predict CT findings?
Abstracts:
abstract_id: PUBMED:24481033
Intraoperative bleeding in stereotactic biopsies and its implication on postoperative management: can we predict CT findings? Background: Stereotactic biopsies are procedures with a high diagnostic yield and a low but serious risk of hemorrhage. Postoperative management remains controversial.
Objectives: To evaluate the predictive value of intraoperative bleeding and its implication on postoperative management.
Methods: Cases of intraoperative bleeding were prospectively documented in a consecutive series comprising 303 patients. Categories were as follows: no bleeding, single drop, ≤10 drops and >10 drops. Incidence, size of hemorrhage and neurological deterioration were noted. Hemorrhage on routine postoperative CT scans was correlated with intraoperative findings, sample size, location and pathology.
Results: A total of 93 patients (30.7%) showed intraoperative bleeding and 68 (22.4%) showed blood on postoperative CT. In 13 patients (4.3%) the diameter was >1 cm; 19 patients (6.3%) experienced neurological worsening, 9 (3.0%) having postoperative hemorrhage and 3 (1.0%) permanent neurological deficits. Bleeding was associated with postoperative hemorrhage (p < 0.0001). The negative predictive values to rule out any postoperative hemorrhage or hemorrhages >1 cm were 92 and 100%, respectively. Number of samples, location and pathology had no significant influence on postoperative hemorrhage.
Conclusion: Stereotactic biopsies have a low risk of symptomatic hemorrhages. Intraoperative bleeding is a surveillance parameter of hemorrhage on CT. Therefore, routine postoperative CT may be restricted to patients who show intraoperative bleeding.
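A brief worked example of the negative predictive value (NPV) reported above: the NPV is the proportion of patients without intraoperative bleeding who also show no hemorrhage on postoperative CT, i.e. NPV = TN / (TN + FN). The Python snippet below is illustrative only; the counts are hypothetical, chosen to be consistent with the 210 patients without intraoperative bleeding and the reported 92% NPV, and are not taken from the paper's actual table.

def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    # NPV = TN / (TN + FN): probability that no postoperative hemorrhage is present
    # when no intraoperative bleeding was observed.
    return true_negatives / (true_negatives + false_negatives)

tn = 193  # no intraoperative bleeding, no hemorrhage on CT (hypothetical count)
fn = 17   # no intraoperative bleeding, hemorrhage on CT (hypothetical count)
print(f"NPV = {negative_predictive_value(tn, fn):.2f}")  # ~0.92, matching the reported 92%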
abstract_id: PUBMED:30261475
Prognostic risk factors for postoperative hemorrhage in stereotactic biopsies of lesions in the basal ganglia. Objective: The risk of hemorrhage after stereotactic biopsy is known to be low. Nevertheless, hemorrhages in eloquent areas result in neurological deficits for the patients. Since the basal ganglia represent a particularly highly vascularized and eloquent location, which is often the source of hypertensive hemorrhages, we aimed to analyse possible risk factors for hemorrhage after stereotactic biopsy in this region.
Patients And Methods: We performed a retrospective analysis including patients who underwent stereotactic biopsies of lesions in the basal ganglia between January 2012 and January 2017. A total of 63 patients were included in this study. We assessed age, gender, histopathological diagnosis, hypertension, intraoperative blood pressure, anticoagulative medication and postoperative hemorrhage.
Results: Fisher's exact test revealed no significant association of anticoagulative therapy, gender, smoking or hypertension with postoperative hemorrhage. The Wilcoxon-Mann-Whitney test showed no significant correlation of intraoperative systolic blood pressure, number of tissue samples or age with hemorrhage. A trend towards postoperative hemorrhage was observed in patients with lymphoma (Wilcoxon-Mann-Whitney test).
Conclusion: Stereotactic biopsies, even in eloquent areas such as the basal ganglia, are a safe procedure, even in patients with hypertension or in smokers. None of the risk factors examined here showed a significant correlation with postoperative hemorrhage. Accessing tumor tissue for histopathological diagnosis is mandatory for adequate therapy.
abstract_id: PUBMED:25380111
Frameless robotic stereotactic biopsies: a consecutive series of 100 cases. Object: Stereotactic biopsy procedures are an everyday part of neurosurgery. The procedure provides an accurate histological diagnosis with the least possible morbidity. Robotic stereotactic biopsy needs to be an accurate, safe, frameless, and rapid technique. This article reports the clinical results of a series of 100 frameless robotic biopsies using a Medtech ROSA device.
Methods: The authors retrospectively analyzed their first 100 frameless stereotactic biopsies performed with the robotic ROSA device: 84 biopsies were performed by frameless robotic surface registration, 7 were performed by robotic bone fiducial marker registration, and 9 were performed by scalp fiducial marker registration. Intraoperative flat-panel CT scanning was performed concomitantly in 25 cases. The operative details of the robotic biopsies, the diagnostic yield, and mortality and morbidity data observed in this series are reported.
Results: A histological diagnosis was established in 97 patients. No deaths or permanent morbidity related to surgery were observed. Six patients experienced transient neurological worsening. Six cases of bleeding within the lesion or along the biopsy trajectory were observed on postoperative CT scans but were associated with transient clinical symptoms in only 2 cases. Stereotactic surgery was performed with patients in the supine position in 93 cases and in the prone position in 7 cases. The use of fiducial markers was reserved for posterior fossa biopsy via a transcerebellar approach, via an occipital approach, or for pediatric biopsy.
Conclusions: ROSA frameless stereotactic biopsies appear to be accurate and safe robotized frameless procedures.
abstract_id: PUBMED:32956884
Stereotactic Brain Biopsy Hemorrhage Risk Factors and Implications for Postoperative Care at a Single Institution: An Argument For Postoperative Imaging. Objective: To determine preoperative factors contributing to postoperative hemorrhage after stereotactic brain biopsy (STB), clinical implications of postoperative hemorrhage, and the role of postoperative imaging in clinical management.
Methods: Retrospective review of STB (2005-2018) across 2 institutions including patients aged >18 years undergoing first STB. Patients with prior craniotomy, open biopsy, or prior STB were excluded. Preoperative variables included age, sex, neurosurgeon seniority, STB method. Postoperative variables included pathology, postoperative hemorrhage on computed tomography, immediate and 30-day postoperative seizure, infection, postoperative hospital stay duration, and 30-day return to operating room (OR). Analysis used the Fisher exact tests for categorical variables.
Results: Overall, 410 patients were included. Average age was 56.5 (±16.5) years; 60% (n = 248) were men. The majority of biopsies were performed by senior neurosurgeons (66%, n = 270); frontal lobe (42%, n = 182) and glioblastoma (45%, n = 186) were the most common location and pathology. Postoperative hemorrhage occurred in 28% (114) of patients with 20% <0.05 cm3 and 8% >0.05 cm3. Postoperative hemorrhage of any size was associated with increased rate of postoperative deficit within both 24 hours and 30 days, postoperative seizure, and length of hospital stay when controlling for pathology. Hemorrhages >0.05 cm3 had a 16% higher rate of return to the OR for evacuation, due to clinical deterioration as opposed to radiographic progression.
Conclusions: Postbiopsy hemorrhage was associated with higher risk of immediate and delayed postoperative deficit and seizure. Postoperative computed tomography should be used to determine whether STB patients can be discharged same day or admitted for observation; clinical evaluation should determine return to OR for evacuation.
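The Methods above note that Fisher exact tests were used for categorical variables. As a minimal illustration (not the authors' analysis), the SciPy sketch below runs such a test on a hypothetical 2x2 table of neurosurgeon seniority by postoperative hemorrhage; the counts are invented for demonstration only.

from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = senior vs. junior neurosurgeon,
# columns = postoperative hemorrhage vs. no hemorrhage (illustrative counts only)
table = [[75, 195],
         [39, 101]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")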
abstract_id: PUBMED:29069919
Stereotactic biopsy of cerebellar lesions: straight versus oblique frame positioning. Objective: Biopsies of brain lesions of unknown entity are an everyday procedure in many neurosurgical departments. Biopsies can be performed frame-guided or frameless. However, cerebellar lesions are a special entity with a more complex approach. All biopsies in this study were performed stereotactically frame-guided, and only biopsies of cerebellar lesions were included. We compared whether the frame was attached straight versus oblique, focusing on diagnostic yield and complication rate.
Methods: We evaluated 20 patients who underwent the procedure between 2009 and 2017. Median age was 56.5 years. Twelve patients (60%) had a left-sided lesion, 6 (30%) a lesion in the right cerebellum, and 2 (10%) a midline lesion.
Results: The stereotactic frame was mounted oblique in 12 (60%) patients and straight in 8 (40%) patients. Postoperative CT scans showed a small, clinically silent blood collection in two (10%) of the patients; one (5%) patient showed a haemorrhage that caused a hydrocephalus and received an external ventricular drain. In both patients with a small haemorrhage the frame was positioned straight, while in the patient with the larger haemorrhage the frame was mounted oblique. In all patients a final histopathological diagnosis was established.
Conclusion: Cerebellar lesions of unknown entity can be accessed via a transcerebellar approach with the stereotactic frame mounted either straight or oblique. Also for cerebellar lesions, the procedure shows a high diagnostic yield with a low rate of severe complications requiring further treatment.
abstract_id: PUBMED:31962324
Utilization of the Intraoperative Mobile AIRO® CT Scanner in Stereotactic Surgery: Workflow and Effectiveness. Background: In frame-based stereotactic surgery, intraoperative imaging is crucial. It generally follows a workflow including preoperative MRI and intraoperative frame-based CT. The intraoperative transport of the anesthetized and intubated patient to and from the CT unit can be time-consuming and cumbersome. Here, we report the first 50 patients who underwent stereotactic biopsies using the mobile AIRO® intraoperative CT (iCT) scanner.
Methods: A conventional stereotactic frame was mounted to the AIRO® carbon table via a carbon adapter. A 0° gantry thin-slice iCT was performed. The imaging data were transferred to a conventional stereotaxy working unit. After fusion of the preoperative MRI and AIRO® iCT, the stereotactic system was built based on the iCT, and trajectories were calculated accordingly.
Results: The frame-based stereotactic iCT was easy to implement and successfully accomplished in all patients. The MRI/iCT image fusion was feasible in all of the studies. A conclusive histological result was obtained in 46 of the 50 cases included. There was no bleeding complication. Net surgery time was reduced by 38 min, on average.
Conclusion: We conclude that the AIRO® system is a safe, easy-to-use, and sufficiently accurate iCT for CT frame-based stereotactic biopsy planning that results in a considerable reduction of surgery time. In the future, it remains to be evaluated if the accuracy rates and intraoperative workflow will permit its application in deep brain stimulation and other functional procedures as well.
abstract_id: PUBMED:32827050
Endoscopic versus stereotactic biopsies of intracranial lesions involving the ventricles. Stereotactic biopsies of ventricular lesions may be less safe and less accurate than biopsies of superficial lesions. Accordingly, endoscopic biopsies have been increasingly used for these lesions. Except for pineal tumors, the literature lacks clear, reliable comparisons of these two methods. All 1581 adults undergoing brain tumor biopsy from 2007 to 2018 were retrospectively assessed. We selected 119 patients with intraventricular or paraventricular lesions considered suitable for both stereotactic and endoscopic biopsies. A total of 85 stereotactic and 38 endoscopic biopsies were performed. Extra procedures, including endoscopic third ventriculostomy and tumor cyst aspiration, were performed simultaneously in 5 stereotactic and 35 endoscopic cases. In 9 cases (5 stereotactic, 4 endoscopic), the biopsies were nondiagnostic (samples were nondiagnostic or the results differed from those obtained from the resected lesions). Three people died: 2 (1 stereotactic, 1 endoscopic) from delayed intraventricular bleeding and 1 (stereotactic) from brain edema. No permanent morbidity occurred. In 6 cases (all stereotactic), additional surgery was required for hydrocephalus within the first month postbiopsy. Rates of nondiagnostic biopsies, serious complications, and additional operations were not significantly different between groups. Mortality was higher after biopsy of lesions involving the ventricles, compared with intracranial lesions in any location (2.4% vs 0.3%, p = 0.016). Rates of nondiagnostic biopsies and complications were similar after endoscopic or stereotactic biopsies. Ventricular area biopsies were associated with higher mortality than biopsies in any brain area.
abstract_id: PUBMED:12777081
Postoperative management of patients after stereotactic biopsy: results of a survey of the AANS/CNS section on tumors and a single institution study. As little consensus exists on the postoperative care of patients undergoing stereotactic biopsy, we sought to establish a new algorithm for their postoperative management. First, we surveyed active members of the AANS/CNS Section on Tumors to determine national practice patterns for patients after stereotactic biopsy. Second, we retrospectively reviewed 84 consecutive stereotactic biopsy procedures at our institution to assess the potential benefit of routine computed tomography (CT) scanning and intensive care unit (ICU) monitoring. Finally, we prospectively applied this new algorithm in 54 patients to assess its validity. Of 629 surgeons, 263 (42%) responded; they were experienced neurosurgeons (mean 15 years in practice) who performed more than 10 stereotactic biopsies per year. Most surgeons (59%) routinely ordered postoperative CT scans, and the remainder ordered scans based on specific indications. Patients were transferred from the recovery room to a special care unit (47%), regular room (47%), or home (6%). In our retrospective review, 81 patients underwent 84 stereotactic biopsy procedures; 79 underwent postoperative CT scanning and all 81 were monitored overnight in the ICU. Among five (6%) patients who experienced intraoperative hemorrhage, two (2%) underwent craniotomy to control arterial bleeding. Three (4%) patients developed new neurological deficits, which occurred within 2 h of surgery. In both groups, CT scans were helpful in excluding hemorrhage that would require re-operation. In the remaining patients (90%), findings on routine postoperative CT did not alter patient management and ICU monitoring appeared unnecessary because neurological complications occurred within 2 h postoperatively. We confirmed these results in the prospective study of 54 patients undergoing stereotactic biopsy without routine postoperative CT scanning or ICU monitoring. In contrast with national practice patterns reported, we recommend that CT scanning and ICU monitoring be reserved for patients who have intraoperative hemorrhage or new deficits after surgery. All other patients can be monitored for 2 h in the recovery room and transferred to a regular hospital room without a postoperative CT scan.
abstract_id: PUBMED:34421791
A Bulk Retrospective Study of Robot-Assisted Stereotactic Biopsies of Intracranial Lesions Guided by Videometric Tracker. Background: Biopsies play an important role in the diagnosis of intracranial lesions, and robot-assisted procedures are increasingly common in neurosurgery centers. This research investigates the diagnoses, complications, and technology yield of 700 robotic frameless intracranial stereotactic biopsies conducted with the Remebot system. Method: This research considered 700 robotic biopsies performed between 2016 and 2020 by surgeons from the Department of Functional Neurosurgery in Beijing's Tiantan Hospital. The data collected included histological diagnoses, postoperative complications, operation times, and the accuracy of robotic manipulation. Results: Among the 700 surgeries, the positive rate of the biopsies was 98.2%. The most common histological diagnoses were gliomas, which accounted for 62.7% of cases (439/700), followed by lymphoma and germinoma, which accounted for 18.7% (131/700) and 7.6% (53/700). Bleeding was found in 14 patients (2%) by postoperative computed tomography scans. A total of 29 (4.14%) patients had clinical impairments after the operation, and 9 (1.29%) experienced epilepsy during the operation. The post-biopsy mortality rate was 0.43%. Operation time, from marking the cranial point to suturing the skin, was 16.78 ± 3.31 min (range 12-26 min). The target error was 1.13 ± 0.30 mm, and the entry point error was 0.99 ± 0.24 mm. Conclusion: A robot-assisted frameless intracranial stereotactic biopsy guided by a videometric tracker is an efficient, safe, and accurate method for biopsies.
abstract_id: PUBMED:27287513
Stereotactic Biopsy of Pineal Lesions. Background: Stereotactic biopsies are carried out to obtain tumor tissue from cerebral lesions of unknown entity. Thus, tumor tissue can be examined, resulting in correct diagnosis and treatment. These procedures are associated with high accuracy, high diagnostic yield, and low complication rates.
Objectives: The aim of this study is to evaluate the results and complications of stereotactic biopsies carried out for lesions of the pineal region.
Methods: We performed a retrospective analysis of our prospective database and included 14 patients who underwent stereotactic biopsy of a pineal lesion between 2006 and February 2016. The Leksell stereotactic frame was used in all patients, and entry and target points were calculated using the BrainLab stereotactic system. We evaluated histopathologic results and postoperative complications such as hydrocephalus and hemorrhage.
Results: Histopathologic diagnosis was established in all patients. On postoperative computed tomography, 7 patients (50%) showed a circumscribed blood collection at the site of biopsy. Six patients (42.8%) required a cerebrospinal fluid drain preoperatively. Two patients needed external ventricular drain postoperatively (14.2%). One of these patients showed a small hemorrhage area that caused the hydrocephalus. The other patient showed only postoperative swelling. Preoperatively, 2 patients (14.2%) already had a ventriculoperitoneal (VP) shunt, whereas postoperatively, another 4 patients (28.5%) needed a VP shunt. Thus, of the 14 patients, 6 (42.8%) required a VP shunt.
Conclusions: Because of their localization, pineal lesions may result in the development of hydrocephalus. Stereotactic biopsies in this area may increase this effect and, thus, VP shunts are necessary in some patients.
Answer: Intraoperative bleeding during stereotactic biopsies has been evaluated for its predictive value regarding postoperative hemorrhage and its implications on postoperative management. A study documented intraoperative bleeding in a series of 303 patients undergoing stereotactic biopsies, categorizing the bleeding as no bleeding, single drop, ≤10 drops, and >10 drops. The study found that intraoperative bleeding was associated with postoperative hemorrhage on CT scans (p < 0.0001). The negative predictive values to rule out any postoperative hemorrhage or hemorrhages >1 cm were 92% and 100%, respectively. The study concluded that routine postoperative CT may be restricted to patients who show intraoperative bleeding, as stereotactic biopsies have a low risk of symptomatic hemorrhages and intraoperative bleeding serves as a surveillance parameter for hemorrhage on CT (PUBMED:24481033).
Another study focusing on stereotactic biopsies of lesions in the basal ganglia did not find significant correlations between examined risk factors (such as anticoagulative therapy, gender, smoking, hypertension, systolic blood pressure intraoperative, number of tissue samples, and age) and postoperative hemorrhage. This suggests that stereotactic biopsies, even in highly vascularized and eloquent areas like the basal ganglia, are safe procedures and that the risk factors examined do not significantly predict postoperative hemorrhage (PUBMED:30261475).
In summary, intraoperative bleeding during stereotactic biopsies can be predictive of postoperative hemorrhage, and its presence may guide the decision to perform routine postoperative CT scans. However, the risk factors for postoperative hemorrhage may not be significantly correlated with the occurrence of hemorrhage, indicating that the predictive ability of intraoperative bleeding may be more reliable than other potential risk factors. |
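To make the reported negative predictive values concrete, the short sketch below shows how an NPV is derived from a 2x2 table of intraoperative bleeding versus postoperative hemorrhage on CT. This is a minimal illustration: the function name and the counts are hypothetical placeholders, not the figures from the cited series.

```python
def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    """NPV = TN / (TN + FN): the probability that no postoperative hemorrhage
    is present when no intraoperative bleeding was observed."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical 2x2 counts (not the published data):
tn = 180  # no intraoperative bleeding, no hemorrhage on CT
fn = 15   # no intraoperative bleeding, but hemorrhage on CT
print(f"NPV = {negative_predictive_value(tn, fn):.2f}")
```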
Instruction: Orthostatic hypotension in acute geriatric ward: is it a consistent finding?
Abstracts:
abstract_id: PUBMED:35590276
Predictivity of the comorbidity indices for geriatric syndromes. Background: The aging population and the rise in chronic diseases place a tremendous burden on the health care system. The study evaluated the relationship between comorbidity indices and common geriatric syndromes.
Methods: A total of 366 patients who were hospitalized in a university geriatric inpatient service were included in the study. Sociodemographic characteristics, laboratory findings, and comprehensive geriatric assessment (CGA) parameters were recorded. Malnutrition, urinary incontinence, frailty, polypharmacy, falls, orthostatic hypotension, depression, and cognitive performance were evaluated. Comorbidities were ranked using the Charlson Comorbidity Index (CCI), Elixhauser Comorbidity Index (ECM), Geriatric Index of Comorbidity (GIC), and Medicine Comorbidity Index (MCI). Because the CCI is a valid and reliable tool used in different clinical settings and diseases, patients with a CCI score higher than four were accepted as multimorbid. Additionally, the relationship between geriatric syndromes and comorbidity indices was assessed with regression analysis.
Results: Patients' mean age was 76.2 ± 7.25 years (67.8% female). The age and sex of multimorbid patients according to the CCI did not differ from those of the other patients. The multimorbid group had a higher rate of dementia and polypharmacy among geriatric syndromes. All four indices were associated with frailty and polypharmacy (p < 0.05). CCI and ECM scores were related to dementia, polypharmacy, and frailty. Moreover, the CCI was also separately associated with slow walking speed and low muscle strength. On the other hand, unlike the CCI, the ECM was associated with malnutrition.
Conclusions: In this study comparing the four comorbidity indices, none of the indices proved sufficient for use alone in geriatric practice. New indices should be developed considering the complexity of geriatric cases and the limitations of the existing indices.
abstract_id: PUBMED:11207845
Validation of dizziness as a possible geriatric syndrome. Objective: While dizziness has traditionally been considered solely as a symptom of discrete diseases, recent findings from population-based studies of older persons suggest that it may often be a geriatric syndrome with multiple predisposing risk factors, representing impairments in diverse systems. To validate these findings, we identified predisposing risk factors for dizziness in a clinic-based population.
Design: Cross-sectional study.
Setting: Geriatric assessment center.
Participants: 262 consecutive, eligible patients.
Measurements: Medical history and physical examination data were ascertained and characteristics of patients with and without a report of dizziness were compared.
Results: Seven factors were independently associated with a report of dizziness, namely depressive symptoms, cataracts, abnormal balance or gait, postural hypotension, diabetes, past myocardial infarction, and the use of three or more medications. Of patients with none of these risk factors, none reported dizziness. This proportion rose from 6% among patients with one factor, to 12%, 26%, and 51% among patients with two, three, and four or more factors, respectively.
Conclusions: The finding of similar factors associated with dizziness in previous community-based cohorts and the present clinic-based cohort supports the possibility of a multifactorial etiology of dizziness in many older persons. A multifactorial intervention targeting the factors identified in these studies may be effective at reducing the frequency or severity of dizziness in older patients.
abstract_id: PUBMED:33604797
Associations between mild hyponatremia and geriatric syndromes in outpatient settings. Purpose: The impact of mild hyponatremia on geriatric syndromes is not clear. Our aim was to determine associations between mild hyponatremia and results of comprehensive geriatric assessment tools in outpatient settings.
Methods: We reviewed medical records of 1255 consecutive outpatient elderly subjects and compared results of comprehensive geriatric assessment measures among patients with mild hyponatremia (serum Na+ 130-135 mEq/L) versus normonatremia (serum Na+ 136-145 mEq/L). The comprehensive geriatric assessment measures included the Basic and Instrumental Activities of Daily Living, Mini Mental State Examination, Geriatric Depression Score, Tinetti Mobility Test, the Timed Up&Go Test, the Mini Nutritional Assessment, the handgrip test, the Insomnia Severity Index, polypharmacy, recurrent falls, urinary incontinence, orthostatic hypotension, and nocturia.
Results: Of the 1255 patients, 855 were female (68.1%), and the mean age was 73.7 ± 8.3 years. Mild hyponatremia was detected in 108 patients (8.6%). The median serum sodium was 140.5 [interquartile range (IQR) 138.4-141.8] versus 133.8 [IQR, 132.3-134.2] in the normonatremia and mild hyponatremia groups, respectively (p < 0.001). The only significant difference in comorbidities between the normonatremia and mild hyponatremia groups was the frequency of hypertension (66.9% versus 76.7%, respectively; p = 0.041). None of the comprehensive geriatric assessment tools showed a significant association with mild hyponatremia. Of the 1061 subjects with available survival data, 96 (9.0%) died within 3-4 years of follow-up (p = 0.742). Hyponatremia as an independent variable did not have a significant effect on mortality in univariate logistic regression analysis (OR 1.13, 95% CI 0.55-2.33, p = 0.742).
Conclusion: Mild hyponatremia does not apparently affect results of geriatric assessments significantly. Whether particular causes of hyponatremia may have different impacts should be tested in further studies.
abstract_id: PUBMED:24629235
Dizziness in geriatric patients. Dizziness is a common complaint in geriatric patients. Age-related changes in the organs of balance control make the elderly more susceptible to diseases affecting the same system, causing symptoms such as dizziness, balance disturbance, falls and syncope. Work-up should be multifactorial and is feasible in geriatric outpatient clinics. Evidence-based interventions are available. New studies have found a high frequency of vestibular dysfunction among older patients who fall and suggest an association between vestibular dysfunction and orthostatic hypotension. Further research in this area is needed.
abstract_id: PUBMED:12418952
Orthostatic hypotension in acute geriatric ward: is it a consistent finding? Background: Orthostatic hypotension (OH) is a common finding among older patients. We designed a study to examine the prevalence and consistency of OH during the day.
Methods: A total of 502 inpatients (241 men and 261 women) with a mean age of 81.6 years were included in the study. Orthostatic tests were performed 3 times during the day, 30 minutes after meals. In 13 patients only 2 sets of measurements were obtained, and they were omitted from some of the calculations. Orthostatic hypotension was defined as a fall of at least 20 mm Hg in systolic blood pressure and/or 10 mm Hg in diastolic blood pressure on assuming an upright posture.
Results: Three hundred thirty-two (67.9%) of 489 patients experienced OH at least once during the day. Of these, 170 patients (34.8% of the 489) had OH at least twice (persistent OH) and 162 patients (33.1%) experienced OH only once (variable OH). Diastolic OH was more prevalent than systolic OH (57.3% vs 43.4%; P<.001). The intraindividual consistency of OH was low (kappa = 0.2). Orthostatic hypotension was observed less frequently during the evening than during the morning and afternoon (P<.05 vs morning and P =.003 vs afternoon). The difference between meals' constituents (light vs heavy meals) did not affect the prevalence of OH.
Conclusions: Orthostatic hypotension is very common in the elderly, and diastolic OH is more common than systolic OH. The prevalence of OH is the lowest during the evening, and meals do not increase the prevalence of OH. The intraindividual consistency of OH during the day is poor. Thus, in elderly patients, more attention should be paid to diastolic OH and the diagnosis should be based on repeated measurements.
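For illustration, the diagnostic criterion used in this study can be written as a short rule. In the sketch below, the 20/10 mm Hg thresholds come directly from the abstract, while the function name and the example blood pressure readings are assumptions made for demonstration only.

```python
def orthostatic_hypotension(supine_sbp, supine_dbp, standing_sbp, standing_dbp,
                            sbp_drop=20, dbp_drop=10):
    """Return (any_oh, systolic_oh, diastolic_oh) for one orthostatic test,
    using a fall of >=20 mm Hg systolic and/or >=10 mm Hg diastolic on standing."""
    systolic_oh = (supine_sbp - standing_sbp) >= sbp_drop
    diastolic_oh = (supine_dbp - standing_dbp) >= dbp_drop
    return systolic_oh or diastolic_oh, systolic_oh, diastolic_oh

# Illustrative readings in mm Hg (not patient data):
print(orthostatic_hypotension(150, 85, 128, 80))  # (True, True, False) -> systolic OH only
```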
abstract_id: PUBMED:38472505
The prevalence and co-existence of geriatric syndromes in older patients with dementia compared to those without dementia. Background: This study aims to compare frequency and coexistence of geriatric syndromes in older patients with dementia to those without dementia.
Methods: 1392 patients admitted to geriatric outpatient clinics were evaluated. Evaluations for eleven geriatric syndromes including polypharmacy, malnutrition, frailty, sarcopenia, dysphagia, urinary incontinence, fear of falling, falls, insomnia, excessive daytime sleepiness, and orthostatic hypotension (OH) were carried out in consultation with the patient and the caregiver. Two groups with and without dementia were matched according to age and gender using the propensity score matching method.
Results: A total of 738 patients, 369 with dementia and 369 without dementia were included, of whom 70.1% were female and the mean age was 80.5 ± 6.8. Polypharmacy, malnutrition, frailty, sarcopenia, dysphagia, fear of falling, and excessive daytime sleepiness were significantly higher in patients with dementia (p < 0.05). There was no difference between OH, urinary incontinence and insomnia between groups (p > 0.05). The co-existence of 0, 1, 2, 3, 4 and ≥ 5 geriatric syndromes in the same patient was 4.3%, 10.2%, 11.8%, 16.8%, 13.4% and 43.7% in non-dementia patients, respectively; 2.4%, 7.2%, 9.6%, 8.3%, 10.4% and 62.1% in those with dementia, respectively (p < 0.05).
Conclusion: The presence and co-existence of geriatric syndromes is common in patients with dementia. These geriatric syndromes should be examined by clinicians and healthcare professionals who work with the demented population, so that more successful management of dementia patients may be achieved.
abstract_id: PUBMED:36286917
The prevalence of anemia and its associations with other geriatric syndromes in subjects over 65 years old: data of Russian epidemiological study EVKALIPT Background: A low hemoglobin level in older adults impairs cognitive ability and functional status and associates with risk of falls and fractures, sarcopenia, malnutrition, depression, frailty, and decreased autonomy. Epidemiological data on the anemia prevalence in the geriatric population in our country is not available.
Aim: To assess the prevalence of anemia and analyze its associations with geriatric syndromes (GS) in subjects aged ≥65 years.
Materials And Methods: 4308 subjects (30% men) aged 65-107 years, living in 11 regions of the Russian Federation, were examined and divided into age groups (65-74 years, 75-84 years and ≥85 years). All the participants underwent a comprehensive geriatric assessment and had their hemoglobin level determined.
Results: The anemia prevalence in older adults was 23.9%. With each 1-year increase in age, the risk of detecting anemia increased by 4%. The incidence of anemia was higher in males than females (28.1% versus 22.1%; p < 0.001). In most cases, anemia was mild. The results of the comprehensive geriatric assessment show that patients with anemia had lower hand grip force and lower scores on the Barthel Index, the Lawton instrumental activities of daily living scale, the Mini Nutritional Assessment scale and the Mini-Cog test, and higher scores on the Geriatric Depression Scale (GDS-15) and the Age Is No Barrier scale. Patients with anemia were more likely to use hearing aids, absorbent underwear, and assistive devices during movement. Patients with anemia had a higher incidence of all GS, except for orthostatic hypotension and chronic pain syndrome. The presence of GS was associated with a 1.3- to 3.4-fold increased risk of anemia.
Conclusion: EVKALIPT study obtained domestic data on the prevalence of anemia in older patients and examined its associations with other GS.
abstract_id: PUBMED:29498476
Orthostatic hypotension and overall mortality in 1050 older patients of the outpatient comprehensive geriatric assessment unit. Aim: Orthostatic hypotension is a common problem in individuals aged ≥65 years. Its association with mortality is not clear. The aim of the present study was to evaluate associations between orthostatic hypotension and overall mortality in a sample of individuals aged ≥65 years who were seen at the Outpatient Comprehensive Geriatric Assessment Unit, Clalit Health Services, Beer-Sheva, Israel.
Methods: Individuals who were evaluated in the Outpatient Comprehensive Geriatric Assessment Unit between January 2005 and December 2015, and who had data on orthostatic hypotension, were included in the study. The database included sociodemographic characteristics, body mass index, functional and cognitive state, geriatric syndromes identified over the course of the assessment, and comorbidity. Data on mortality were also collected.
Results: The study sample included 1050 people, of whom 626 underwent comprehensive geriatric assessment and 424 underwent geriatric consultation. The mean age was 77.3 ± 5.4 years and 35.7% were men. Orthostatic hypotension was diagnosed in 294 patients (28.0%). In univariate analysis, orthostatic hypotension was associated with overall mortality only in patients aged 65-75 years (HR 1.5, 95% CI 1.07-2.2), but in the multivariate model this association disappeared.
Conclusions: In older frail patients, orthostatic hypotension was not an independent risk factor for overall mortality. Geriatr Gerontol Int 2018; 18: 1009-1017.
abstract_id: PUBMED:16808743
Influence of orthostatic hypotension on mortality among patients discharged from an acute geriatric ward. Background: Orthostatic hypotension (OH) is a common finding among older patients. The impact of OH on mortality is unknown.
Objective: To study the long-term effect of OH on total and cardiovascular mortality.
Patients And Methods: A total of 471 inpatients (227 males and 244 females), with a mean age of 81.5 years who were hospitalized in an acute geriatric ward between the years 1999 and 2000 were included in the study. Orthostatic tests were performed 3 times during the day on all patients near the time of discharge. Orthostatic hypotension was defined as a fall of at least 20 mmHg in systolic blood pressure (BP) and/or 10 mmHg in diastolic BP upon assuming an upright posture at least twice during the day. Patients were followed until August 31, 2004. Mortality data were taken from death certificates.
Results: One hundred and sixty-one patients (34.2%) experienced OH at least twice. Orthostatic hypotension had no effect on all-cause and cause-specific mortality. Over a follow-up of 3.47 ± 1.87 years, 249 patients (52.8%) had died, 83 of whom (33.3%) had OH. Age-adjusted mortality rates in those with and without OH were 13.4 and 15.7 per 100 person-years, respectively. Cox proportional hazards model analysis demonstrated that male gender, age, diabetes mellitus, and congestive heart failure increased and high body mass index decreased total mortality.
Conclusions: Orthostatic hypotension is relatively common in elderly patients discharged from acute geriatric wards, but has no impact on vascular and nonvascular mortality.
abstract_id: PUBMED:31781734
Prevalence and Risk Factors of Postprandial Hypotension among Elderly People Admitted in a Geriatric Evaluation and Management Unit: An Observational Study. Objectives: To explore the prevalence and potential risk factors of postprandial hypotension (PPH) among elderly patients in an acute geriatric ward.
Design: A prospective observational study.
Setting: Geriatric Unit in a Belgian tertiary-care University Hospital.
Participants: Seventy-six hospitalized elderly patients after stabilization of their acute conditions.
Measurements: PPH and orthostatic hypotension (OH) measured by a non-invasive automated blood pressure device, demographic data, Katz's Basic Activities of Daily Living (ADL) and Lawton's instrumental ADL, Short Physical Performance Battery, Charlson Comorbidity Index, Mini Nutritional Assessment-Short Form, Timed Up and Go test, Get-up Early test, grip strength and 7 classes of drugs.
Results: Overall, the prevalence of PPH was 46% (n=35/76), and it was symptomatic in 31% of the patients (n=11/35). PPH was associated with OH in one-third of the cases (n=12/35). Two-thirds of the patients with PPH had a significant drop in systolic blood pressure within the first 75 minutes after a meal. In univariate analyses, risk factors of PPH were nursing home residence, alpha-blocker consumption, help needed for eating and a good level of global functional status. However, patients with a good functional status were at increased risk of alpha-blocker exposure. In multivariate analyses, only alpha-blocker consumption and help needed for eating remained statistically significant.
Conclusion: PPH is frequent among hospitalized elderly people in a Geriatric Evaluation and Management Unit, affecting nearly one out of two people. The use of alpha-blockers is an important risk factor and may alert clinicians to the risk of PPH.
Answer: Orthostatic hypotension (OH) in acute geriatric wards is a common finding, but it is not consistent throughout the day. A study designed to examine the prevalence and consistency of OH during the day found that while 67.9% of 489 patients experienced OH at least once during the day, only 34.8% had OH at least twice (persistent OH), and 33.1% experienced OH only once (variable OH) (PUBMED:12418952). The intraindividual consistency of OH was low (kappa = 0.2), and OH was observed less frequently in the evening than in the morning and afternoon. The study concluded that the prevalence of OH is the lowest during the evening, and meals do not increase the prevalence of OH. Therefore, in elderly patients, more attention should be paid to diastolic OH, and the diagnosis should be based on repeated measurements (PUBMED:12418952).
Additionally, another study found that OH was not an independent risk factor for overall mortality in older frail patients (PUBMED:29498476). Furthermore, a study that followed patients discharged from an acute geriatric ward found that OH had no impact on vascular and nonvascular mortality (PUBMED:16808743).
These findings suggest that while OH is a common occurrence in the elderly, especially in acute geriatric wards, its presence can vary within the same individual over the course of a day, and it may not consistently predict adverse outcomes such as mortality. Clinicians should consider multiple measurements and the overall clinical context when evaluating OH in geriatric patients. |
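As a side note on the consistency figure cited above (kappa = 0.2), chance-corrected agreement between repeated orthostatic tests can be quantified with Cohen's kappa. The sketch below computes a pairwise kappa for two of the three daily tests as an illustration; the abstract does not state which kappa variant was used across three measurements, so this pairwise form and the example data are assumptions.

```python
def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two sets of binary OH results
    (1 = OH present, 0 = OH absent) for the same patients."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_a1 = sum(ratings_a) / n
    p_b1 = sum(ratings_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical morning vs. afternoon results for ten patients (not study data):
morning   = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
afternoon = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohen_kappa(morning, afternoon), 2))  # 0.2, i.e. poor intraindividual consistency
```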
Instruction: Should all Pima Indians with type 2 diabetes mellitus be prescribed routine angiotensin-converting enzyme inhibition therapy to prevent renal failure?
Abstracts:
abstract_id: PUBMED:10377929
Should all Pima Indians with type 2 diabetes mellitus be prescribed routine angiotensin-converting enzyme inhibition therapy to prevent renal failure? Objective: To determine how effective angiotensin-converting enzyme (ACE) inhibitors must be in preventing diabetic nephropathy to warrant early and routine therapy in all Pima Indians with type 2 diabetes mellitus.
Design: A computerized medical decision analysis model was used to compare strategy 1, screening for microalbuminuria and treatment of incipient nephropathy as currently recommended with ACE inhibitor therapy, with strategy 2, a protocol wherein all patients were routinely administered an ACE inhibitor 1 year after diagnosis of type 2 diabetes mellitus. The model assumed that ACE inhibitors can block, at least in part, the pathogenic mechanisms responsible for early diabetic nephropathy (microalbuminuria).
Results: The model predicted that strategy 2 would produce more life-years at less cost than strategy 1, if routine drug therapy reduced the rate of development of microalbuminuria by 21% in all patients. Only a 9% reduction in the rate of development of microalbuminuria was cost-effective at $15,000 per additional life-year gained, and only a 2.4% reduction was cost-effective at $75,000 per additional life-year gained for strategy 2 over strategy 1.
Conclusions: Routine ACE inhibitor therapy in Pima Indians with type 2 diabetes mellitus could prove more effective and even cost saving than the currently recommended approach of microalbuminuria screening. A prospective trial examining this goal should be considered.
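The cost-effectiveness thresholds quoted above follow from a standard incremental cost-effectiveness calculation. The sketch below shows that arithmetic using wholly hypothetical per-patient costs and life-years; only the willingness-to-pay thresholds ($15,000 and $75,000 per additional life-year) are taken from the abstract, and the variable names are illustrative.

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per additional life-year.
    A negative numerator with a positive denominator means the new strategy dominates
    (cheaper and more effective), as the model predicted at a 21% risk reduction."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-patient figures (not outputs of the published model):
screening = {"cost": 42_000, "life_years": 11.0}   # strategy 1: microalbuminuria screening
routine   = {"cost": 43_500, "life_years": 11.2}   # strategy 2: routine ACE inhibition

ratio = icer(routine["cost"], screening["cost"],
             routine["life_years"], screening["life_years"])
for threshold in (15_000, 75_000):
    print(f"cost-effective at ${threshold:,}/life-year: {ratio <= threshold}")
```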
abstract_id: PUBMED:18853664
Clinical state of a patient with nephrotic proteinuria successfully treated with combined therapy with angiotensin II receptor antagonists and angiotensin II converting enzyme inhibitors and pentoxifylline. Pharmacological inhibition of the renin-angiotensin-aldosterone system (RAAS) constitutes a cornerstone strategy in the management of patients with chronic nephropathies and proteinuria. Angiotensin converting enzyme inhibitors (ACEI) as well as angiotensin II subtype 1 receptor antagonists (ARA) have been shown to decrease proteinuria, reduce local renal inflammatory processes and slow the progression of renal insufficiency. Despite recent progress, there is still no optimal therapy that inhibits the progression of renal disease. It is possible that pentoxifylline (PTX), an old medication still used to treat peripheral vascular disease, will become a new adjunct to RAAS blockade. In addition, PTX has been shown to decrease the production of pro-inflammatory cytokines and reactive oxygen species. A 61-year-old man with nephrotic proteinuria, type 2 diabetes, hypertension, chronic viral hepatitis C and slightly impaired renal function is described. Proteinuria (as daily urine protein excretion, DPE), serum creatinine and eGFR (MDRD formula) were measured. Proteinuria was diagnosed in 2003 (DPE 3.5 g), with creatinine 1.3-1.5 mg/dl. The patient had been examined in a Department of Nephrology, but the exact cause of the nephrotic syndrome was not established because he refused kidney biopsy. Therapy was started with an ACEI, with a temporary effect: DPE decreased to 0.2 g, eGFR was about 60 ml/min and serum creatinine remained in the normal range. In 2004, DPE increased to 8.98-9.42 g, with serum creatinine 1.1-1.3 mg/dl. The dose of the ACEI was increased and an ARA was then added. After one month of combined therapy, DPE fell to 7.7 g; the doses of both drugs were then increased to the maximum (losartan 100 mg and lisinopril 40 mg) and DPE fell to 6.8 g, serum creatinine was 1.4 mg/dl and potassium 5.4 mmol/l. Next, PTX 800 mg/day was added and DPE fell to 0.55 g after 2 months of therapy. DPE was similar after 6 months (0.53 g) of the combined ACEI, ARA and PTX regimen.
abstract_id: PUBMED:7744980
The changing management of diabetic nephropathy. The outlook used to be grim: Dipstick-positive proteinuria usually meant that renal failure was inevitable. But now the diagnosis can be made early with the triad of increased kidney size, elevated GFR, and microalbuminuria. Moreover, management that emphasizes strict glycemic control, control of elevated blood pressure, and ACE inhibition can prevent or retard the process.
abstract_id: PUBMED:14684664
Optimizing therapy in the diabetic patient with renal disease: antihypertensive treatment. Hypertension, impaired renal function, and proteinuria are commonly associated to the presence of diabetes. They play a major role in the development of cardiovascular and renal damage. Effective antihypertensive treatment reduces the progression of diabetic nephropathy and improves cardiovascular prognosis. Accordingly, tight BP control (<130/80 mmHg) is currently recommended in diabetic patients. Achieving BP targets represents the most important determinant of cardiovascular and renal protection. However, it has been suggested that specific classes of antihypertensive drugs may exert additional organ protection beyond their BP control. The pharmacologic blockade of the renin-angiotensin-aldosterone system has been shown to convey greater renal and cardiovascular protection compared with other classes of drugs. In particular, studies focusing on renal end point suggest that angiotensin-converting enzyme inhibitors (ACEI) are the first-choice drugs in type 1 diabetes. Both ACEI and angiotensin II receptor blockers prevent the progression from microalbuminuria to clinical proteinuria in type 2 diabetes, but angiotensin blockers provide better renoprotection in patients with overt nephropathy. Regarding cardiovascular protection, several studies (but not all) have shown that ACEI exert a protective effect on diabetic patients. Recently, interesting results in favor of angiotensin receptor blockers have been reported in the IDNT, RENAAL, and LIFE studies. It should be noted that to achieve maximal renal and cardiovascular protection, most diabetic patients require integrated therapeutic intervention, including not only several antihypertensive drugs, but statins and antiplatelet therapy as well.
abstract_id: PUBMED:1382163
Angiotensin-converting enzyme inhibition and diabetic nephropathy. Hypertension and diabetes mellitus are strongly associated conditions from epidemiologic, genetic, and pathophysiologic points of view. The prevalence of hypertension is high in patients with diabetes, and, conversely, many patients with essential hypertension are glucose intolerant. Proteinuria appears in 40-50% of patients with insulin-dependent diabetes mellitus and 20-30% of patients with non-insulin-dependent diabetes mellitus. Progressive renal failure occurs in 30-40 and 3-8% of patients, respectively, hypertension being a leading factor in its rate of progression. In various animal experiments, ACE inhibitors are able to prevent proteinuria and glomerular sclerosis, presumably by lowering transglomerular capillary pressure. In the diabetic human, ACE inhibitors are powerful antihypertensive drugs, devoid of metabolic side effects. Clinical studies indicate that ACE inhibitors reduce proteinuria and possibly slow the rate of decline in renal function. Such an effect is not observed with beta-blockers. Large-scale studies are needed to confirm this very important hypothesis.
abstract_id: PUBMED:7973905
Reversible hyperkalemia during antihypertensive therapy in a hypertensive diabetic patient with latent hypoaldosteronism and mild renal failure. A 66-year-old hypertensive diabetic patient with latent hypoaldosteronism and mild renal failure was treated by adding enalapril, an angiotensin converting enzyme inhibitor, to the furosemide and nifedipine regimen because of an insufficient antihypertensive response for 1 month. Seven days after enalapril addition, the blood pressure was significantly reduced, but frank hyperkalemia occurred with a marked rise in BUN and a slight increase in serum creatinine. Plasma renin activity (PRA) and plasma aldosterone (PA) values remained low before and during enalapril therapy. Transient treatment with sodium polystyrene sulfate after enalapril withdrawal improved the hyperkalemia and renal function, but PRA and PA levels were low. PA and its precursor steroids also responded poorly to graded angiotensin II infusion and rapid ACTH injection. Latent hypoaldosteronism probably predisposed this patient to frank hyperkalemia with progressive dehydration and slightly reduced renal function during antihypertensive therapy.
abstract_id: PUBMED:20440277
The RAAS in the pathogenesis and treatment of diabetic nephropathy. Angiotensin II and other components of the renin-angiotensin-aldosterone system (RAAS) have a central role in the pathogenesis and progression of diabetic renal disease. A study in patients with type 1 diabetes and overt nephropathy found that RAAS inhibition with angiotensin-converting-enzyme (ACE) inhibitors was associated with a reduced risk of progression to end-stage renal disease and mortality compared with non-RAAS-inhibiting drugs. Blood-pressure control was similar between groups and proteinuria reduction was responsible for a large part of the renoprotective and cardioprotective effect. ACE inhibitors can also prevent microalbuminuria in patients with type 2 diabetes who are hypertensive and normoalbuminuric; in addition, ACE inhibitors are cardioprotective even in the early stages of diabetic renal disease. Angiotensin-II-receptor blockers (ARBs) are renoprotective (but not cardioprotective) in patients with type 2 diabetes and overt nephropathy or microalbuminuria. Studies have evaluated the renoprotective effect of other RAAS inhibitors, such as aldosterone antagonists and renin inhibitors, administered either alone or in combination with ACE inhibitors or ARBs. An important task for the future will be identifying which combination of agents achieves the best renoprotection (and cardioprotection) at the lowest cost. Such findings will have major implications, particularly in settings where money and facilities are limited and in settings where renal replacement therapy is not available and the prevention of kidney failure is life saving.
abstract_id: PUBMED:37468332
Finerenone. In developed countries, diabetes mellitus (DM) is one of the main causes of end stage renal disease (ESRD). In addition, the development of chronic kidney disease (CKD) further increases the already significantly increased cardiovascular (CV) risk in patients with diabetes. Both albuminuria and impaired renal function predict CV disease-related morbidity. The multifactorial pathogenesis of DM-related CKD involves structural, physiological, hemodynamic, and inflammatory processes. Instead of a so-called glucocentric approach, current evidence suggests that a multimodal, interdisciplinary treatment approach is needed to also prevent further progression of CKD and reduce the risk of cardiovascular events. Combined antihypertensive, antihyperglycemic and hypolipidemic therapy is the basis of a comprehensive approach to prevent the progression of diabetic kidney disease. According to recent evidence, adjunctive therapy with the non-steroidal mineralocorticoid receptor antagonist (MRA) finerenone - in addition to the use of an ACE (angiotensin converting enzyme) or AT1 (angiotensin II receptor subtype 1) blocker and an SGLT2 (sodium-glucose cotransporter-2) inhibitor - represents an effective therapeutic tool to improve nephroprotection in CKD. The aim of this review is to provide brief information on this promising pharmacotherapeutic approach to the treatment of diabetic kidney disease.
abstract_id: PUBMED:18653044
The therapy of aged diabetic hypertensives. The high prevalence and incidence of non-insulin-dependent diabetes mellitus (NIDDM) and hypertension in the elderly pose several therapeutic problems. Due to age-related changes in glucose handling and cardiovascular function, it is necessary to treat aged diabetic hypertensive patients with drugs that lower arterial blood pressure without side effects on glucose metabolism. Non-pharmacological and pharmacological protocols can be taken into account. With regard to non-pharmacological therapy, a decline in body fatness, an increase in body fitness and an appropriate dietary intake of sodium, potassium, calcium and magnesium are the most important measures. As for the pharmacological approach, calcium channel blockers and angiotensin converting enzyme (ACE) inhibitors seem to be particularly useful in the treatment of aged diabetic hypertensive patients. Calcium channel blockers have no effect on glucose tolerance while they are very effective on heart rate and arterial blood pressure. ACE inhibitors lower arterial blood pressure, delay the progression of diabetic nephropathy to renal failure and have neutral or beneficial effects on glucose handling. In conclusion, in aged diabetic hypertensive patients non-pharmacological therapy should be combined with administration of calcium channel blockers and ACE inhibitors.
abstract_id: PUBMED:12243375
Inhibitors of the renin-angiotensin system reduce the rate of GFR decline and end-stage renal disease in patients with severe renal insufficiency. Drugs that inhibit the renin-angiotensin system (RAS) are of proven benefit in the treatment of hypertension, congestive heart failure, or acute myocardial infarction. In the last decade, several clinical trials have shown that RAS inhibitors also offer significant renoprotection in both diabetic and non-diabetic nephropathy. However, patients with advanced renal insufficiency did not take part in these trials because of the risk of acute renal failure (ARF) and hyperkalemia, and, for the same reason, most physicians do not offer these drugs to patients with impaired renal function. Recently, a post-hoc analysis of the Ramipril Efficacy In Nephropathy (REIN) study, which included patients with severe renal insufficiency, showed that RAS inhibition slows glomerular filtration rate (GFR) decline over time and progression to end-stage renal disease (ESRD) in a safe way in patients quite close to ESRD (basal GFR, 10 to 30 ml/min/1.73m2). These beneficial effects have also been shown in the Reduction of Endpoints in NIDDM with the Angiotensin II Antagonist Losartan (RENAAL) study, in patients with type 2 diabetes mellitus, clinical proteinuria, and renal insufficiency, where RAS inhibition therapy significantly reduced the risk of ESRD once doubling of baseline serum creatinine levels had been achieved as compared to non-RAS anti-hypertensive treatment. Thus, these data suggest that RAS inhibition therapy should be given to all patients with proteinuric chronic nephropathy, independently of the level of renal function.
Answer: The question of whether all Pima Indians with type 2 diabetes mellitus should be prescribed routine angiotensin-converting enzyme (ACE) inhibition therapy to prevent renal failure is addressed in the study by PUBMED:10377929. The study used a computerized medical decision analysis model to compare two strategies: the first being screening for microalbuminuria and treating incipient nephropathy with ACE inhibitor therapy as currently recommended, and the second being a protocol where all patients were routinely administered an ACE inhibitor 1 year after diagnosis of type 2 diabetes mellitus. The model assumed that ACE inhibitors could block the pathogenic mechanisms responsible for early diabetic nephropathy. The results suggested that the second strategy would produce more life-years at less cost than the first strategy if routine drug therapy reduced the rate of development of microalbuminuria by 21% in all patients. Even a 9% reduction in the rate of development of microalbuminuria was cost-effective at $15,000 per additional life-year gained, and only a 2.4% reduction was cost-effective at $75,000 per additional life-year gained for the second strategy over the first. The conclusion was that routine ACE inhibitor therapy in Pima Indians with type 2 diabetes mellitus could be more effective and even cost-saving than the currently recommended approach of microalbuminuria screening, and that a prospective trial examining this should be considered.
This conclusion is supported by other studies that highlight the benefits of ACE inhibitors and angiotensin II receptor blockers (ARBs) in reducing proteinuria, slowing the progression of renal insufficiency (PUBMED:18853664), and preventing or retarding the progression of diabetic nephropathy (PUBMED:7744980). Furthermore, the RAAS has been identified as having a central role in the pathogenesis and progression of diabetic renal disease, and RAAS inhibition with ACE inhibitors has been associated with a reduced risk of progression to end-stage renal disease and mortality (PUBMED:20440277).
In summary, the evidence suggests that routine ACE inhibitor therapy could be beneficial for Pima Indians with type 2 diabetes mellitus to prevent renal failure, but a prospective trial is recommended to confirm this approach. |
Instruction: Massive mycobacterial choroiditis during highly active antiretroviral therapy: another immune-recovery uveitis?
Abstracts:
abstract_id: PUBMED:12414430
Massive mycobacterial choroiditis during highly active antiretroviral therapy: another immune-recovery uveitis? Purpose: To describe the ocular presentation of disseminated mycobacterial disease occurring during immune-recovery in a patient with acquired immune deficiency syndrome (AIDS).
Study Design: Case report and literature review.
Participants: A 41-year-old AIDS patient with a prior diagnosis of cytomegalovirus retinitis.
Methods: The patient developed progressive, bilateral multifocal choroiditis with panuveitis 2 months after beginning and responding to highly active antiretroviral therapy. His left eye became blind and painful and was enucleated. Pathologic examination revealed massive choroiditis with well-formed, discrete granulomas and multiple intracellular and extracellular acid-fast organisms within the choroidal granulomas. Culture and polymerase chain reaction of vitreous specimens revealed Mycobacterium avium complex (MAC).
Results: Empiric, and later sensitivity-guided, local and systemic antibiotic therapy was used to treat the remaining right eye, but it continued to deteriorate. Despite medical therapy, three vitrectomies and repeated intravitreal injections of amikacin, a total retinal detachment ensued. One week after the third vitrectomy, the patient died from mesenteric artery thrombosis in the setting of disseminated mycobacterial disease.
Conclusions: This is the first report of ocular inflammation as the presenting finding in the recently recognized syndrome of immune-recovery MAC disease. Pathogenesis of this entity is related to an enhanced immune response to a prior, subclinical, disseminated infection. The formation of discrete granulomas, normally absent in MAC infections in AIDS, reflects this mechanism.
abstract_id: PUBMED:16077362
HIV-associated retinopathy in the HAART era. Background: The effectiveness of highly active antiretroviral therapy (HAART) in restoring immune function in patients with acquired immunodeficiency syndrome (AIDS) has led to changes in the incidence, natural history, management, and sequelae of human immunodeficiency virus (HIV)-associated retinopathies, especially cytomegalovirus (CMV) retinitis.
Methods: The medical literature pertaining to HIV-associated retinopathies was reviewed with special attention to the differences in incidence, management strategies, and complications of these conditions in the eras both before and after the widespread use of HAART.
Results: In the pre-HAART era, CMV retinitis was the most common HIV-associated retinopathy, occurring in 20%-40% of patients. Median time to progression was 47 to 104 days, mean survival after diagnosis was 6 to 10 months, and indefinite intravenous maintenance therapy was mandatory. Retinal detachment occurred in 24%-50% of patients annually. Herpetic retinopathy and toxoplasmosis retinochoroiditis occurred in 1%-3% of patients and Pneumocystis carinii choroiditis, syphilitic retinitis, tuberculous choroiditis, cryptococcal choroiditis, and intraocular lymphoma occurred infrequently. In the HAART era the incidence of CMV retinitis has declined 80% and survival after diagnosis has increased to over 1 year. Immune recovery in patients on HAART has allowed safe discontinuation of maintenance therapy in patients with regressed CMV retinitis and other HIV-associated retinopathies. Immune recovery uveitis (IRU) is a HAART dependent inflammatory response that may occur in up to 63% of patients with regressed CMV retinitis and elevated CD4 counts and is associated with vision loss from epiretinal membrane, cataract, and cystoid macular edema.
Conclusions: The incidence, visual morbidity, and mortality of CMV retinitis and other HIV-associated retinopathies have decreased in the era of HAART and lifelong maintenance therapy may safely be discontinued in patients with restored immune function. Patients with regressed CMV retinitis, however, may still lose vision from epiretinal membrane, cystoid macular edema, and cataract secondary to IRU.
abstract_id: PUBMED:18780260
Ophthalmic manifestations of HIV infections in India in the era of HAART: analysis of 100 consecutive patients evaluated at a tertiary eye care center in India. Purpose: To evaluate ophthalmic manifestations in patients with Human Immunodeficiency Virus (HIV) infection in the era of highly active antiretroviral therapy (HAART) at the apex institute for eye healthcare in India.
Method: This prospective study was undertaken between October 2004 and December 2005. A complete ophthalmological and systemic examination was performed on each patient. Relevant investigations were carried out in selected patients.
Results: One hundred consecutive HIV-infected patients (199 eyes) were examined for ophthalmic manifestations. Of these, 17% (17/100) had Category A HIV infection (asymptomatic or acute HIV or persistent generalized lymphadenopathy), 23% (23/100) had Category B HIV infection (symptomatic, not A or C), and 60% (60/100) had Category C HIV infection (AIDS indicator condition). 76% (70/100) were male and 24% (24/100) were female. The median age of patients was 34 years and 52% (52/100) were in the fourth decade. 68% (68/100) of patients were on HAART. 45% (45/100) of patients had ophthalmic manifestations, the most common being cytomegalovirus (CMV) retinitis (20%; 20/100). Retinal detachment was seen in 70% (14/20) of CMV retinitis patients. HIV vasculopathy was seen in 11% (11/100) of patients. Other lesions included immune recovery uveitis (IRU) (5%; 5/100), acute retinal necrosis (ARN) (3%; 3/100), choroiditis (2%; 2/100), neuro-ophthalmic manifestations (12%; 12/100), complicated cataract (6%; 6/100), keratouveitis (1%; 1/100) and corneal ulcer (1%; 1/100). 7% (7/100) of patients presented to us with an ophthalmic manifestation as the only presenting sign of HIV infection. Among those who had ophthalmic involvement, about 50% (19/40) of patients had a CD4 count below 100 cells/microliter and 70% (28/40) had a CD4 count below 200 cells/microliter.
Conclusions: CMV retinitis (20%; 20/100) is still the most common manifestation of HIV infection in this series, even in the era of HAART, and is more common than HIV vasculopathy. Immune recovery uveitis appears to be more common with the introduction of HAART in the absence of affordable anti-CMV therapy in India. 7% (7/100) of patients presented with ophthalmological features as the initial manifestation of HIV. As before, most (70%; 28/40) of the ophthalmic manifestations of HIV infection are present when the CD4 count is less than 200 cells/microliter.
abstract_id: PUBMED:32547555
Interleukin 35-Producing Exosomes Suppress Neuroinflammation and Autoimmune Uveitis. Corticosteroids are effective therapy for autoimmune diseases but serious adverse effects preclude their prolonged use. However, immune-suppressive biologics that inhibit lymphoid proliferation are now in use as corticosteroid sparing-agents but with variable success; thus, the need to develop alternative immune-suppressive approaches including cell-based therapies. Efficacy of ex-vivo-generated IL-35-producing regulatory B-cells (i35-Bregs) in suppressing/ameliorating encephalomyelitis or uveitis in mouse models of multiple sclerosis or uveitis, respectively, is therefore a promising therapeutic approach for CNS autoimmune diseases. However, i35-Breg therapy in human uveitis would require producing autologous Bregs from each patient to avoid immune-rejection. Because exosomes exhibit minimal toxicity and immunogenicity, we investigated whether i35-Bregs release exosomes that can be exploited therapeutically. Here, we demonstrate that i35-Bregs release exosomes that contain IL-35 (i35-Exosomes). In this proof-of-concept study, we induced experimental autoimmune uveitis (EAU), monitored EAU progression by fundoscopy, histology, optical coherence tomography and electroretinography, and investigated whether i35-Exosomes treatment would suppress uveitis. Mice treated with i35-Exosomes developed mild EAU with low EAU scores and disease protection correlated with expansion of IL-10 and IL-35 secreting Treg cells with concomitant suppression of Th17 responses. In contrast, significant increase of Th17 cells in vitreous and retina of control mouse eyes was accompanied by severe choroiditis, massive retinal-folds, and photoreceptor cell damage. These hallmark features of severe uveitis were absent in exosome-treated mice and visual impairment detected by ERG was modest compared to control mice. Absence of toxicity or alloreactivity associated with exosomes thus makes i35-Exosomes attractive therapeutic option for delivering IL-35 into CNS tissues.
abstract_id: PUBMED:38110833
Tuberculosis reactivation demonstrated by choroiditis and inflammatory choroidal neovascular membrane in a patient treated with immune checkpoint inhibitors for malignant mucosal melanoma. Purpose: To describe a complex case of ocular tuberculosis reactivation with anterior uveitis, choroiditis and inflammatory choroidal neovascular membrane (CNVM) following immune checkpoint inhibitor (ICPI) treatment of malignant mucosal melanoma.
Methods: A retrospective collection of medical history, clinical findings and multimodal imaging with literature review of the topic was conducted.
Results: A 52-year-old Romanian female developed reduced vision and photophobia after three cycles of ICPI therapy comprised of ipilimumab and nivolumab. Bilateral anterior uveitis, multiple left eye choroidal lesions and a CNVM were confirmed using slit-lamp examination with ancillary multimodal imaging. Retinal changes in the right eye as well as a history of previously treated posterior uveitis and high-risk ethnicity increased clinical suspicion for ocular tuberculosis (TB) reactivation. The diagnosis was confirmed by TB positivity on polymerase chain reaction (PCR) analysis of lung aspirate followed by significant clinical improvement on systemic anti-tubercular therapy (ATT), systemic steroids and anti-vascular endothelial growth factor (VEGF) therapy.
Conclusions: ICPIs can cause a myriad of ocular issues, both by primary immunomodulatory effects as well as secondary reactivation of latent disease.
abstract_id: PUBMED:33553805
Swept source OCTA reveals a link between choriocapillaris blood flow and vision loss in a case of tubercular serpiginous-like choroiditis. Optical coherence tomography angiography (OCTA) is a non-invasive technique that is useful in the diagnosis and management of patients with posterior uveitis. Here we report the use of swept source OCTA (SS-OCTA) in a patient with tuberculosis (TB) associated serpiginous like choroiditis (TB-SLC) that made a full visual recovery following treatment with ATT, local and systemic corticosteroids, and systemic immune modulation. By comparing en face images of choriocapillaris (CC) blood flow before and after treatment, we conclude that the patient's visual recovery was associated with resolution of extensive CC flow deficits. This case highlights the utility of SS-OCTA in the multimodal evaluation of patients with choroidal inflammation, and the potential for good visual recovery in patients treated for TB-SLC.
abstract_id: PUBMED:32823477
Simultaneous mutually exclusive active tubercular posterior uveitis. Ocular tuberculosis (TB) is a form of extra-pulmonary TB, which can involve almost any intraocular structure or ocular adnexa. Posterior uveitis, the commonest form of intraocular TB, manifests as choroidal tubercles, choroidal tuberculoma, subretinal abscess, neuroretinitis, or serpiginous-like choroiditis. These forms of posterior tubercular lesions can be broadly classified into two groups based on their pathophysiology and morphology. One group of lesions is related to the direct invasion and reactivation of the bacilli in the choroidal tissue, whereas the other is a result of a hypersensitivity reaction to the bacilli. Simultaneous bilateral active posterior uveitis with such varying morphology and pathophysiology in either eye of the same patient is an extremely rare presentation. We report a patient with pulmonary TB on anti-tubercular therapy (ATT) who presented to us with two mutually exclusive and distinctly different forms of tubercular posterior uveitis, one in each eye, simultaneously. Both lesions were active at the time of presentation.
abstract_id: PUBMED:2557167
Acute bilateral glaucoma in an LAV-positive subject. A bilateral angle-closure glaucoma is described in a homosexual man with positive HIV serology. The angle closure is secondary to an anterior rotation of the ciliary body at the scleral spur following development of an inflammatory cilio-choroidal detachment, which is itself connected with a massive uveal-retinal effusion. The cause of this uveal effusion appears to be cytomegalovirus. This unusual syndrome was the initial manifestation of AIDS (acquired immune deficiency syndrome). Symptomatic treatment with cycloplegics and prednisolone, together with specific treatment with DHPG (9-(1,3-dihydroxy-2-propoxymethyl)guanine), resulted in rapid regression of the local lesions; his overall clinical condition, however, deteriorated rapidly over the two months following onset of the syndrome.
abstract_id: PUBMED:6685790
Circulating immune complexes in uveitis. The serum of 63 patients suffering from various clinical types of endogenous uveitis with unknown etiology was investigated for the presence of circulating immune complexes (CIC). None of the patients was suffering from a systemic disease usually associated with CIC. Significantly elevated CIC levels were detected in 2 women with chronic cyclitis. One of them was also suffering from paresis of the facial nerve. The investigation reported here indicates that C1q-binding CIC are of no importance in the pathogenesis of endogenous uveitis. On the one hand, one should consider the possibility of local IC formation; on the other hand, one should be aware of the fact that IC does not always represent a pathogenic factor but that it may also occur as a secondary phenomenon after non-immunologic tissue destruction.
abstract_id: PUBMED:3575890
Bilateral toxoplasma retinochoroiditis in a patient with acquired immune deficiency syndrome. A 32-year-old patient with acquired immune deficiency syndrome (AIDS) was evaluated for bilateral visual loss accompanied by uveitis, vitritis and retinochoroiditis. Diagnostic vitrectomy was performed on the right eye, and the diagnosis of ocular toxoplasmosis made. Central nervous system involvement was suggested by ring enhancing lesions on CT scan. The patient improved on a pyrimethamine, sulfadiazine and clindamycin, but succumbed to disseminated toxoplasmosis when treatment was discontinued.
Answer: The case described in PUBMED:12414430 suggests that massive mycobacterial choroiditis can indeed be considered a form of immune recovery uveitis (IRU). The patient, who had AIDS and a prior diagnosis of cytomegalovirus retinitis, developed progressive, bilateral multifocal choroiditis with panuveitis after starting and responding to highly active antiretroviral therapy (HAART). The pathology revealed massive choroiditis with well-formed granulomas and acid-fast organisms within the choroidal granulomas, indicating Mycobacterium avium complex (MAC) infection. The report concludes that this is the first instance of ocular inflammation presenting as part of the syndrome of immune-recovery MAC disease, which is related to an enhanced immune response to a prior subclinical disseminated infection. The formation of discrete granulomas, which are normally absent in MAC infections in AIDS, reflects this immune recovery mechanism.
This aligns with the broader changes in HIV-associated retinopathies in the HAART era, where the incidence of CMV retinitis has declined, and survival after diagnosis has increased. However, immune recovery in patients on HAART has led to the development of IRU in some patients with regressed CMV retinitis and elevated CD4 counts, which can be associated with vision loss from epiretinal membrane, cataract, and cystoid macular edema (PUBMED:16077362).
The phenomenon of IRU and other ocular manifestations during immune recovery on HAART is further supported by the findings in India, where CMV retinitis remains the most common manifestation of HIV infection even in the HAART era, and IRU appears to be more common with the introduction of HAART in the absence of affordable anti-CMV therapy (PUBMED:18780260).
In summary, the case of massive mycobacterial choroiditis during HAART can be considered another manifestation of immune recovery uveitis, which is a consequence of the restored immune function leading to an inflammatory response against previously subclinical infections. |
Instruction: Can lymph node ratio take the place of pN categories in the UICC/AJCC TNM classification system for colorectal cancer?
Abstracts:
abstract_id: PUBMED:21455596
Can lymph node ratio take the place of pN categories in the UICC/AJCC TNM classification system for colorectal cancer? Background: Lymph node ratio (LNR) has been reported to represent a powerful independent prognostic value in some malignancies. The significance of LNR in colorectal cancer is still under debate.
Methods: A total of 505 patients with stage III colorectal cancer were reviewed. Using running log-rank statistics, we calculated the best cutoff values for LNRs and proposed a novel rN category: rN1, 0% < LNR ≤ 35%; rN2, 35% < LNR ≤ 69%; and rN3, LNR > 69%. A Spearman's correlation coefficient test was used to assess the correlation between the number of retrieved nodes and the number of metastatic nodes, as well as the number of retrieved nodes and the LNRs. Univariate and two-step multivariate analyses were performed, respectively, to identify the significant prognostic clinicopathologic factors.
Results: The 5-year overall survival rate decreased significantly with increasing LNRs: rN(1) = 61% survival rate, rN(2) = 30.3% survival rate, and rN(3) = 11.2% survival rate (P < 0.001). Univariate and two-step multivariate analyses identified the rN category as a significant prognostic factor no matter whether the minimum number of LNs retrieved was met. There was a significant prognostic difference among different rN categories for any pN category, but no apparent prognostic difference was seen between different pN categories in any rN category. Moreover, marked heterogeneity could be seen within III(a-c) substages when survival was compared among rN(1-3) categories but not between pN(1-2) categories.
Conclusions: rN categories have more potential for predicting patient outcomes and are superior to the UICC/AJCC pN categories. We recommend rN categories for prognostic assessment and rN categories should be reported routinely in histopathological reports.
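Because the proposed rN categories are defined purely by LNR cutoffs, they can be expressed as a short classification rule. In the sketch below, the cutoff values (0.35 and 0.69) come directly from the abstract; the function names and the example patient are illustrative assumptions.

```python
def lnr(positive_nodes: int, examined_nodes: int) -> float:
    """Lymph node ratio: metastatic nodes divided by nodes retrieved."""
    return positive_nodes / examined_nodes

def rn_category(ratio: float) -> str:
    """Map an LNR to the rN categories proposed for stage III colorectal cancer."""
    if ratio <= 0.35:
        return "rN1"   # 0% < LNR <= 35%
    if ratio <= 0.69:
        return "rN2"   # 35% < LNR <= 69%
    return "rN3"       # LNR > 69%

# Example: 5 positive nodes out of 18 retrieved (illustrative values)
ratio = lnr(5, 18)
print(f"LNR = {ratio:.2f} -> {rn_category(ratio)}")   # LNR = 0.28 -> rN1
```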
abstract_id: PUBMED:33626938
Metastatic lymph node ratio as a better prognostic tool than the TNM system in colorectal cancer. Background: The minimum number of lymph nodes that should be evaluated in colon cancer to adequately categorize lymph node status is still controversial. The lymph node ratio (LNR) may be a better prognostic indicator. Materials & methods: We studied 1065 patients treated from 1 January 2000 to 31 August 2012. Results: Significant differences in survival were detected according to regional lymph nodes (pN) (p < 0.001) and LNR (p < 0.001). LNR and pN are independent prognostic factors. Spearman correlation analysis showed a significant correlation between the total number of dissected lymph nodes and pN (rs = 0.167; p < 0.001), but the total number of dissected lymph nodes was not significantly correlated with LNR (rs = -0.019; p = 0.550). Interpretation: In this study, LNR seems to demonstrate a superior prognostic value compared with the pN categories, in part due to its greater independence regarding the extent of lymphadenectomy.
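The correlation argument in this abstract (that pN tracks the extent of dissection while LNR does not) can be reproduced on any per-patient dataset with a few lines. The sketch below uses SciPy's Spearman test on made-up values; the variable names and the synthetic data are assumptions, not the study's records.

```python
from scipy.stats import spearmanr

# Synthetic per-patient data (illustrative only): nodes dissected and nodes positive
dissected = [10, 14, 18, 22, 26, 30, 34, 38, 42, 46]
positive  = [ 1,  2,  2,  3,  4,  4,  5,  5,  6,  7]

lnr_values = [p / d for p, d in zip(positive, dissected)]

rho_pn,  p_pn  = spearmanr(dissected, positive)    # positive-node count grows with dissection extent
rho_lnr, p_lnr = spearmanr(dissected, lnr_values)  # the ratio is expected to be less dependent on it

print(f"dissected vs positive nodes: rho={rho_pn:.2f} (p={p_pn:.3f})")
print(f"dissected vs LNR:            rho={rho_lnr:.2f} (p={p_lnr:.3f})")
```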
abstract_id: PUBMED:31346549
Lymph Node Ratio Versus TNM System As Prognostic Factor in Colorectal Cancer Staging. a Single Center Experience. Objective: This study aims to establish the actual validity of the lymph node ratio (LNR) as a prognostic factor for colorectal cancer patients, and to verify differences of survival and disease-free interval.
Methods: Patients referred with colorectal cancer who underwent potentially curative surgery between January 1997 and December 2011 were included. Lymph node ratio, TNM staging and survival were extracted from surgical, histological and follow-up records.
Results: Two hundred eighty-six patients with different stages of colorectal cancer underwent surgery, with comparison of survival prediction based on lymph node ratio and TNM staging. The overall survival rate was 78.3%, the recurrence rate was 11.9% and the mortality rate was estimated as 21.7%. Univariate analysis in relation to survival was significant for the following variables: serum level of CEA, CA 19.9 value, degree of histological differentiation, and tumor growth. There were no statistically significant differences for the LNR (LNR < vs. ≥ 0.16: p = 0.116). The TNM system was effective both in discriminating between survival stages (Stage II vs. Stage III: p = 0.05) and in differentiating sub-groups (p = 0.05).
Conclusions: LNR alone could not be considered a better prognostic factor than the TNM system. However, future studies are needed in a larger number of patients with a standardized surgical, pathological and medical protocol.
abstract_id: PUBMED:23521843
Is the lymph node ratio superior to the Union for International Cancer Control (UICC) TNM system in prognosis of colon cancer? Background: Decision making for adjuvant chemotherapy in stage III colon cancer is based on the TNM system. It is well known that prognosis worsens with higher pN classification, and several recent studies propose superiority of the lymph node ratio (ln ratio) to the TNM system. Therefore, we compared the prognosis of ln ratio to TNM system in our stage III colon cancer patients.
Methods: A total of 939 patients underwent radical surgery for colorectal cancer between January 2000 and December 2009. From this pool of patients, 142 colon cancer stage III patients were identified and taken for this analysis. Using martingale residuals, this cohort could be separated into a group with a low ln ratio and one with a high ln ratio. These groups were compared to pN1 and pN2 of the TNM system.
Results: For the ln ratio, the cutoff was calculated at 0.2. Both the N category of the TNM system and the lymph node ratio showed good prognostic discrimination for disease-free and cancer-related survival. There was no statistical difference between using the N-category of the TNM system and the ln ratio.
Conclusions: There might not be a benefit in using the lymph node ratio rather than the N category of the TNM system as long as the number of subgroups is not increased. In our consideration, there is no need to change the N categorization of the TNM system to the ln ratio.
abstract_id: PUBMED:24057646
Comparison of metastatic lymph node ratio staging system with the 7th AJCC system for colorectal cancer. Purpose: To evaluate the prognostic value and staging accuracy of the metastatic lymph node ratio (rN) staging system for colorectal cancer.
Methods: A total of 1,127 patients with colorectal cancer who underwent curative surgery between 2000 and 2011 at our institute were analyzed. Lymph node status was assigned according to the American Joint Committee on Cancer (AJCC) pN system and the rN system. Patients with colon cancer (group 1, n = 652) and rectal cancer (group 2, n = 475) were analyzed separately.
Results: The rN staging system was generated using 0.2 and 0.6 as the cutoff values of the lymph node ratio and then compared with the AJCC pN stages. A linear regression model revealed that the number of retrieved lymph nodes was related to the number of metastatic lymph nodes. After a median follow-up of 46 months, the 5-year survival rates of patients with more than 12 lymph nodes (LNs) retrieved were better than those of cases with fewer than 12 LNs, whereas the differences were not obvious in the rN classification.
Conclusions: The rN category is a better prognostic tool than the AJCC pN category for colorectal cancer patients after curative surgery.
abstract_id: PUBMED:24379568
Lymph node staging in colorectal cancer: old controversies and recent advances. Outcome prediction based on tumor stage reflected by the American Joint Committee on Cancer (AJCC)/Union for International Cancer Control (UICC) tumor node metastasis (TNM) system is currently regarded as the strongest prognostic parameter for patients with colorectal cancer. For affected patients, the indication for adjuvant therapy is mainly guided by the presence of regional lymph node metastasis. In addition to the extent of surgical lymph node removal and the thoroughness of the pathologist in dissecting the resection specimen, several parameters that are related to the pathological work-up of the dissected nodes may affect the clinical significance of lymph node staging. These include changing definitions of lymph nodes, involved lymph nodes, and tumor deposits in different editions of the AJCC/UICC TNM system as well as the minimum number of nodes to be dissected. Methods to increase the lymph node yield in the fatty tissue include methylene blue injection and acetone compression. Outcome prediction based on the lymph node ratio, defined as the number of positive lymph nodes divided by the total number of retrieved nodes, may be superior to the absolute numbers of involved nodes. Extracapsular invasion has been identified as additional prognostic factor. Adding step sectioning and immunohistochemistry to the pathological work-up may result in higher accuracy of histological diagnosis. The clinical value of more recent technical advances, such as sentinel lymph node biopsy and molecular analysis of lymph nodes tissue still remains to be defined.
abstract_id: PUBMED:30514228
The lymph node status as a prognostic factor in colon cancer: comparative population study of classifications using the logarithm of the ratio between metastatic and nonmetastatic nodes (LODDS) versus the pN-TNM classification and ganglion ratio systems. Background: pN stage in the TNM classification has been the "gold standard" for lymph node staging of colorectal carcinomas, but this system recommends collecting at least 12 lymph nodes for the staging to be reliable. However, new prognostic staging systems have been devised, such as the ganglion quotients or lymph node ratios and natural logarithms of the lymph node odds methods. The aim of this study was to establish and validate the predictive and prognostic ability of the lymph node ratios and natural logarithms of the lymph node odds staging systems and to compare them to the pN nodal classification of the TNM system in a population sample of patients with colon cancer.
Methods: A multicentric population study between January 2004 and December 2007. The inclusion criteria were that the patients were: diagnosed with colon cancer, undergoing surgery with curative intent, and had a complete anatomopathological report. We excluded patients with cancer of the rectum or caecal appendix with metastases at diagnosis. Survival analysis was performed using the Kaplan-Meier actuarial method and the Log-Rank test was implemented to estimate the differences between groups in terms of overall survival and disease-free survival. Multivariate survival analysis was performed using Cox regression.
Results: We analysed 548 patients. For the overall survival, the lymph node ratios and natural logarithms of the lymph node odds curves were easier to discriminate because their separation was clearer and more balanced. For disease-free survival, the discrimination between the pN0 and pN1 groups was poor, but this phenomenon was adequately corrected for the lymph node ratios and natural logarithms of the lymph node odds curves which could be sufficiently discriminated to be able to estimate the survival prognosis.
Conclusions: Lymph node ratios and natural logarithms of the lymph node odds techniques can more precisely differentiate risk subgroups from within the pN groups. Of the three methods tested in this study, the natural logarithm of the lymph node odds was the most accurate for staging non-metastatic colon cancer, thus helping to more precisely adjust and individualise the indication for adjuvant treatments in these patients.
abstract_id: PUBMED:22461900
Can the tumor deposits be counted as metastatic lymph nodes in the UICC TNM staging system for colorectal cancer? Objective: The 7th edition of AJCC staging manual implicitly states that only T1 and T2 lesions that lack regional lymph node metastasis but have tumor deposit(s) will be classified in addition as N1c, though it is not consistent in that pN1c is also an option for pT3/T4a tumors in the staging table. Nevertheless, in this TNM classification, how to classify tumor deposits (TDs) in colorectal cancer patients with lymph node metastasis (LNM) and TDs simultaneously is still not clear. The aim of this study is to investigate the possibility of counting TDs as metastatic lymph nodes in TNM classification and to identify its prognostic value for colorectal cancer patients.
Methods And Results: In this retrospective study, 513 cases of colorectal cancer with LNM were reviewed. We proposed a novel pN (npN) category in which TDs were counted as metastatic lymph nodes in the TNM classification. Cancer-specific survival according to the npN or pN category was analyzed using Kaplan-Meier survival curves. Univariate and multivariate analyses were performed to identify significant prognostic factors. Harrell's C statistic was used to test the predictive capacity of the prognostic models. The results revealed that the TD was a significant prognostic factor in colorectal cancer. Univariate and multivariate analyses uniformly indicated that the npN category was significantly correlated with prognosis. The results of Harrell's C statistical analysis demonstrated that the npN category exhibited a superior predictive capacity compared to the pN category of the 7th edition TNM classification. Moreover, we also found no significant prognostic differences in patients with or without TD in the same npN categories.
Conclusions: The counting of TDs as metastatic lymph nodes in the TNM classification system is potentially superior to the classification in the 7th edition of the TNM staging system to assess prognosis and survival for colorectal cancer patients.
abstract_id: PUBMED:28529752
Log odds of positive lymph nodes is superior to the number- and ratio-based lymph node classification systems for colorectal cancer patients undergoing curative (R0) resection. The metastatic lymph node status (N classification) is an important prognostic factor for patients with colorectal cancer (CRC). The aim of the present study was to evaluate and compare the prognostic assessment of three different lymph node staging methods, namely standard lymph node (pN) staging, metastatic lymph node ratio (LNR) and log odds of positive lymph nodes (LODDS) in CRC patients who undergo curative resection (R0). Data were retrospectively collected from 192 patients who had undergone R0 resection. Kaplan-Meier survival curves, Cox proportional hazards model and accuracy of the three methods (pN, LNR and LODDS) were compared to evaluate the prognostic effect. Univariate analysis demonstrated that pN, LNR and LODDS were all significantly correlated with survival (P=0.001, P<0.001 and P<0.001, respectively). The final result of the 3-step multivariate analysis demonstrated that LODDS was superior to the other two N categories. Patients in the same pN or LNR classifications may be classified into different LODDS stages with different prognoses. Thus, LODDS may be a meaningful prognostic indicator and superior to the pN and LNR classifications in CRC patients who undergo curative (R0) resection.
abstract_id: PUBMED:28770104
Immunoscore in mismatch repair-proficient and -deficient colon cancer. The aim of this study was to investigate immune response and its prognostic significance in colon carcinomas using the previously described Immunoscore (IS). A population-based series of 779 colorectal cancers, operated on between 2000 and 2010, were classified according to tumour, node, metastasis (TNM) status, mismatch repair (MMR), and BRAF mutation status. Rectal cancer cases (n = 203) were excluded as a high proportion of these patients received preoperative neoadjuvant chemoradiotherapy. Tissue microarray (TMA) samples collected from the tumour centre and invasive front were immunostained for CD3 and CD8. Lymphocytes were then digitally calculated to categorize IS from grade 0 to 4. Samples adequate for IS were available from 510 tumours. IS was significantly associated with AJCC/UICC stage, T stage, lymph node and distant metastases, perineural and lymphovascular invasion, MMR status, and BRAF mutation status. For IS0, IS1, IS2, IS3 and IS4, respectively, the 5-year disease-free survival (DFS) rates were 59, 68, 78, 83 and 94% (p < 0.001); 5-year disease-specific survival (DSS) rates were 47, 55, 75, 80, and 89% (p < 0.001); and 5-year overall survival (OS) rates were 40, 44, 66, 61, and 76% (p < 0.001). IS was also prognostic for DFS, DSS, and OS within subsets of microsatellite-stable (MSS) and microsatellite-instable (MSI) disease. Multivariable analysis showed that IS, AJCC/UICC stage, lymphovascular invasion, and lymph node ratio in AJCC/UICC stage III disease were independent prognostic factors for DFS, DSS, and OS. Age was an independent prognostic factor for DSS and OS. Gender and BRAF mutation were independent prognostic factors for OS. In conclusion, IS differentiated patients with poor versus improved prognosis in MSS and MSI disease and across AJCC/UICC stages. IS, AJCC/UICC stage, lymphovascular invasion, and lymph node ratio in AJCC/UICC stage III disease were independent prognostic factors for DFS, DSS, and OS.
Answer: The lymph node ratio (LNR), which is the ratio of metastatic to examined lymph nodes, has been proposed as a potentially superior prognostic tool compared to the traditional pN categories in the UICC/AJCC TNM classification system for colorectal cancer. Several studies have investigated the prognostic value of LNR in comparison to pN categories.
One study found that the LNR categories (rN) had more potential for predicting patient outcomes and were superior to the UICC/AJCC pN categories, suggesting that rN categories should be reported routinely in histopathological reports (PUBMED:21455596). Another study supported the superior prognostic value of LNR compared to pN categories, partly due to its greater independence from the extent of lymphadenectomy (PUBMED:33626938).
However, not all studies agree on the superiority of LNR. One study concluded that LNR alone could not be considered a better prognostic factor than the TNM system, indicating that further research is needed with a standardized approach (PUBMED:31346549). Similarly, another study found no statistical difference between using the N-category of the TNM system and the LNR, suggesting there might not be a benefit in changing the N categorization of the TNM system to LNR (PUBMED:23521843).
Despite these conflicting results, other studies have also found that the rN category is a better prognostic tool than the AJCC pN category for colorectal cancer patients after curative surgery (PUBMED:24057646), and that ratio- and odds-based metrics such as the log odds of positive lymph nodes (LODDS) may differentiate risk subgroups within the pN groups more precisely (PUBMED:30514228), with one study reporting LODDS to be superior to both the pN and LNR classifications in patients undergoing curative (R0) resection (PUBMED:28529752).
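To make the ratio- and odds-based metrics discussed above concrete, the short sketch below shows, under stated assumptions, how a lymph node ratio and a LODDS value could be computed for a single resection specimen and how the rN category proposed in PUBMED:21455596 would be assigned. The 35% and 69% cutoffs are taken from that abstract; the 0.5 continuity correction in the LODDS term is a common convention assumed here rather than something specified in the cited studies, and all function names are purely illustrative.

```python
# Illustrative sketch only; not taken from any of the cited studies' code.
import math

def lymph_node_ratio(positive_nodes: int, retrieved_nodes: int) -> float:
    """LNR: number of metastatic nodes divided by the number of examined nodes."""
    return positive_nodes / retrieved_nodes

def lodds(positive_nodes: int, retrieved_nodes: int) -> float:
    """Log odds of positive nodes; the 0.5 correction (assumed) avoids log(0)."""
    negative_nodes = retrieved_nodes - positive_nodes
    return math.log((positive_nodes + 0.5) / (negative_nodes + 0.5))

def rn_category(ratio: float) -> str:
    """rN category using the cutoffs reported in PUBMED:21455596."""
    if ratio <= 0.35:
        return "rN1"   # 0% < LNR <= 35%
    if ratio <= 0.69:
        return "rN2"   # 35% < LNR <= 69%
    return "rN3"       # LNR > 69%

if __name__ == "__main__":
    positive, retrieved = 5, 18   # hypothetical specimen: 5 of 18 nodes involved
    ratio = lymph_node_ratio(positive, retrieved)
    print(f"LNR = {ratio:.3f} ({rn_category(ratio)}), LODDS = {lodds(positive, retrieved):.3f}")
```

For the hypothetical specimen above the LNR is about 0.28, which falls in rN1, and the LODDS is about -0.90; the studies cited here differ mainly in where such cutoffs are drawn and in whether the ratio, the log odds, or the raw count of positive nodes is used for staging.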
In conclusion, while there is evidence to suggest that LNR and related metrics like LODDS may provide better prognostic information than traditional pN categories, the research is not entirely consistent. Some studies support the use of LNR as a replacement for pN categories, while others suggest that further validation is needed before making changes to the current TNM classification system. Therefore, it may be premature to state definitively that LNR can take the place of pN categories in the UICC/AJCC TNM classification system for colorectal cancer without further consensus in the scientific community. |
Instruction: Management of symptoms of gastroesophageal reflux disease: does endoscopy influence medical management?
Abstracts:
abstract_id: PUBMED:34581222
The Medical Management of Gastroesophageal Reflux Disease: A Narrative Review. Objective: The medical management of gastroesophageal reflux disease (GERD) continues to evolve. Our aim was to systematically assess the literature to provide an updated review of the evidence on lifestyle modifications and pharmacological therapy for the management of GERD.
Background: The cornerstones of GERD medical management consist of lifestyle modifications and pharmacologic agents. Most recently, evidence has emerged linking anti-reflux pharmacologic therapy to adverse events, such as kidney injury, metabolic bone disease, myocardial infarction, and even dementia, among others.
Methods: A systematic search of the databases of PubMed/MEDLINE, Embase, and Cochrane Library was performed for articles on the medical management of GERD between inception and March 1, 2021.
Conclusion: Although pharmacological therapy has been associated with potential adverse events, further research is needed to determine whether this association exists. For this reason, lifestyle modifications should be considered first-line, while pharmacologic therapy can be considered in patients in whom lifestyle modifications have proven ineffective in controlling their symptoms or who cannot institute them. Naturally, extra-esophageal causes of GERD-like symptoms must be considered in suspected high-risk patients and excluded before considering treatment for GERD.
abstract_id: PUBMED:12230315
Medical management of obesity. Obesity is the most prevalent and serious nutritional disease among Western countries and is rapidly replacing undernutrition as the most common form of malnutrition in the world. Approximately 300,000 deaths a year are currently associated with overweight and obesity, second only to cigarette smoking as a leading cause of preventable death in the United States. Obesity affects 9 organ systems and is a risk factor for gastroesophageal reflux disease, nonalcoholic fatty liver disease, cholelithiasis, and colon cancer. Evidence-based guidelines on the identification, evaluation, and treatment of overweight and obesity have recently been developed by the National Institutes of Health to help practitioners effectively manage their patients. The body mass index is used to classify weight status and risk of disease. Treatment for obesity includes lifestyle management, consisting of diet therapy, physical activity, and behavioral modification, and may include pharmacotherapy or surgery based on level of risk. Currently only 2 medications, sibutramine and orlistat, are approved for long-term use. An initial weight loss of 10% of body weight achieved over 6 months is a recommended target. This article reviews the evaluation and management of the adult obese patient.
abstract_id: PUBMED:9452932
Rett syndrome: habilitation and management reviewed. Experience over the past 10 years in the diagnosis and comprehensive management of females with Rett syndrome has given us a better understanding of the potential skills and abilities which need to be identified. This condition is unique in that after a period of early regression of development there appears to be stabilization with some improvement. The potential for these girls to achieve some functional skills and maintain them presents a challenge, but one that needs to be addressed. Medical management should include stabilization of uncontrolled seizures. Developing a comprehensive plan for feeding disorders is required so that resulting nutritional problems and constipation can be corrected. Recognition of gastroesophageal reflux and its proper management may prevent respiratory complications. Appropriate intervention strategies using different therapeutic techniques are described which have been effective in facilitating communication, maintaining hand function and ambulation, and preventing deformities. Progression of scoliosis can be managed with intensive physical therapy. Management encompasses a comprehensive medical, therapeutic, educational, and psychosocial approach, which is best provided by a team in collaboration with community agencies that serve children with special needs and their families.
abstract_id: PUBMED:25624596
Chronic dry cough: Diagnostic and management approaches. Cough is the most common symptom for which medical treatment is sought in the outpatient setting. Chronic dry cough poses a great diagnostic and management challenge due to its myriad etiologies. Chronic cough has been commonly considered to be caused by gastroesophageal reflux, post-nasal drip or asthma. However, recent evidence suggests that many patients with these conditions do not have cough, and in those with cough, the response to specific treatments is unpredictable at best. This raises questions about the concept of a triad of treatable causes for chronic cough. This article discusses the mechanism and etiology of cough, along with recent advances in the field of cough, highlighting some of the diagnostic and management challenges.
abstract_id: PUBMED:31750430
The management of hiatal hernia: an update on diagnosis and treatment. Background And Aim: Hiatal hernia (HH) occurs quite frequently in the general population and is characterized by a wide range of non-specific symptoms, most of them related to gastroesophageal reflux disease. Treatment can be challenging at times, depending on the existence of complications. The most recent guideline regarding the management of hiatal hernia was released by the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) in the year 2013. This review aims to present the most recent updates on the diagnosis and management of hiatal hernia for clinical practitioners.
Methods: The PubMed database was screened for publications using the terms: "hiatal hernia", "paraesophageal hernia", "management", "treatment", "hiatal repair". A literature review of contemporary and latest studies was completed. The studies that we looked into include prospective, randomized trials, systematic reviews, clinical reviews and original articles. The information was compiled in narrative review format.
Results: This narrative review presents new data on the diagnosis and management of hiatal hernia. While the diagnostic pathway has remained virtually unchanged, new data have come to light regarding the surgical treatment of hiatal hernia. We present the imaging methods used for its diagnosis, as well as the medical and surgical treatment currently available.
Conclusion: In the last five years, there has been vast research in the field of hiatal hernia management, especially regarding the surgical treatment. However, unanswered questions still remain and solid updates on the guidelines have yet to be formulated. To address this, more randomized studies need to be done on subsets of patients, stratified by age, gender, symptoms and comorbidities.
abstract_id: PUBMED:10836161
Medical management of gastroesophageal reflux. Gastroesophageal reflux (GER) is a common problem which can manifest as vomiting, failure to thrive, recurrent pneumonias, asthma, sinusitis, or subglottic stenosis. The medical management plan should be individualized. A "happy spitter" who has no complications of GER may respond well to conservative management, including positioning and thickening of feedings. A child with complications may require treatment with H-2 antagonists or proton pump inhibitors in conjunction with prokinetic agents. Children with gastrointestinal symptoms suggestive of GER who do not respond to antireflux management may need to be treated for eosinophilic esophagitis. Recent studies that assess the effect of medications on recognized complications of GER are reviewed.
abstract_id: PUBMED:34807007
ACG Clinical Guideline for the Diagnosis and Management of Gastroesophageal Reflux Disease. Gastroesophageal reflux disease (GERD) continues to be among the most common diseases seen by gastroenterologists, surgeons, and primary care physicians. Our understanding of the varied presentations of GERD, enhancements in diagnostic testing, and approach to patient management have evolved. During this time, scrutiny of proton pump inhibitors (PPIs) has increased considerably. Although PPIs remain the medical treatment of choice for GERD, multiple publications have raised questions about adverse events, raising doubts about the safety of long-term use and increasing concern about overprescribing of PPIs. New data regarding the potential for surgical and endoscopic interventions have emerged. In this new document, we provide updated, evidence-based recommendations and practical guidance for the evaluation and management of GERD, including pharmacologic, lifestyle, surgical, and endoscopic management. The Grading of Recommendations, Assessment, Development, and Evaluation system was used to evaluate the evidence and the strength of recommendations. Key concepts and suggestions that as of this writing do not have sufficient evidence to grade are also provided.
abstract_id: PUBMED:29861323
Management of Atrial Fibrillation in the Athlete. Atrial fibrillation (AF) is a recognised arrhythmic risk of endurance sports participation, predominantly affecting middle-aged men who are lifelong athletes. Affected athletes were historically included in the category of lone AF, although specific pathophysiological processes apply to this condition, referred to as exercise-related AF. Younger non-endurance athletes may also present with AF, particularly when associated with co-existing cardiomyopathy or arrhythmia syndrome. Management of exercise-related AF is largely based on evidence from randomised trials in non-athletes. Cornerstones of treatment are, thus, thromboembolic risk reduction and risk factor modification. Rhythm control is generally preferred over rate control due to frequent presentation with symptomatic AF during the paroxysmal phase. Many therapies specific to athletes are based on expert consensus alongside observational data in athletic populations. These include: recommendations to detrain; treatment of symptomatic oesophageal reflux; and preferential use of anticholinergic antidysrhythmic agents to address the predominance of "vagal" AF in athletes. Ongoing research involving cardiac ion channel remodelling and systemic inflammation as mediators of AF genesis may provide future novel therapeutic targets for exercise-related AF. Ablative therapy shows promise in the athletic population with AF, although evidence remains limited. International consensus guidance for disqualification from competitive sports exists to guide medical management alongside athletes' preferences to continue to participate. This review focusses on isolated exercise-related AF and reviews the evidence supporting postulated management recommendations of this unique patient population.
abstract_id: PUBMED:7502531
Medical management of esophageal reflux. Gastroesophageal reflux of varying severity is a common disorder for which medical attention is sought at all levels, from pharmacists to specialist physicians and surgeons. This brief overview represents my current understanding of reflux, its effects on the esophagus and my personal approach to treatment of these disorders. Of necessity, because the literature is so extensive (a Medline search on reflux from 1966 to 1993 yielded over 1500 papers), I have relied in places on the extensive review by Marks and Richter [1]. My paper emphasizes the evaluation and treatment of patients with symptomatic reflux, esophagitis and its complications. It describes why it is important to grade the disorders so that the treatment used is appropriate to the severity of the disease. The more severe the disease, the more specific the diagnostic information needed and the more exacting the treatment. Various treatments and outcomes of therapy are discussed, and a role for surgery is defined. The essence of effective medical treatment of esophagitis is to reduce acidity of the refluxate to a level outside the optimum proteolytic pH range of pepsin, i.e. greater than pH 3.5.
abstract_id: PUBMED:6514361
Medical management of organic laryngeal disorders. Medical management of organic laryngeal disorders always involves evaluation and advice; it frequently includes reassurance and sometimes voice therapy. Occasionally medication is also necessary.
Answer: The management of symptoms of gastroesophageal reflux disease (GERD) can be influenced by endoscopy, as it is a diagnostic tool that can help determine the severity of the disease and identify any complications that may require specific treatments. Endoscopy can reveal the presence of esophagitis, strictures, Barrett's esophagus, or other complications that might alter the medical management plan (PUBMED:7502531). For instance, the presence of esophagitis may necessitate a more aggressive treatment approach, such as the use of proton pump inhibitors (PPIs), to reduce the acidity of the refluxate and promote healing of the esophageal mucosa (PUBMED:7502531).
The ACG Clinical Guideline for the Diagnosis and Management of Gastroesophageal Reflux Disease highlights that our understanding of GERD and its varied presentations has evolved, and that endoscopic findings can play a role in guiding management decisions (PUBMED:34807007). For example, if endoscopy reveals minimal or no mucosal damage, the management may focus on lifestyle modifications and pharmacological therapy with PPIs or H2 antagonists, while avoiding overtreatment (PUBMED:34581222; PUBMED:34807007).
Furthermore, in cases where endoscopy identifies a hiatal hernia, which is frequently associated with GERD, the management may include both medical and surgical treatment options, depending on the existence of complications and the severity of symptoms (PUBMED:31750430). In such cases, endoscopy not only influences the medical management but may also lead to consideration of surgical interventions.
In summary, endoscopy can significantly influence the medical management of GERD by providing detailed information about the esophageal mucosa and related structures, which helps clinicians tailor the treatment plan to the individual patient's needs and the specific findings observed during the procedure. |
Instruction: Topical negative pressure in managing severe peritonitis: a positive contribution?
Abstracts:
abstract_id: PUBMED:19610140
Topical negative pressure in managing severe peritonitis: a positive contribution? Aim: To assess the use of topical negative pressure (TNP) in the management of severe peritonitis.
Methods: This is a four-year prospective analysis from January 2005 to December 2008 of 20 patients requiring TNP following laparotomy for severe peritonitis.
Results: There were 11 males with an average age of 59.3 ± 3.95 years. Nine had a perforated viscus, five had anastomotic leaks, three had iatrogenic bowel injury, and a further three had severe pelvic inflammatory disease. TNP and the VAC® Abdominal Dressing System were initially used. These were changed every two to three days. Abdominal closure was achieved in 15/20 patients within 4.53 ± 1.64 days. One patient required relaparotomy due to residual sepsis. Two patients with severe faecal peritonitis due to perforated diverticular disease received primary anastomosis at second-look laparotomy, as sepsis and their general condition improved. In the remaining 5/20 cases, the abdomen was left open due to bowel oedema and/or abdominal wall oedema. Dressing was switched to TNP and VAC GranuFoam. Three of the five patients returned a few months later for abdominal wall reconstruction and restoration of intestinal continuity. Two patients developed intestinal fistulae. All 20 patients survived.
Conclusion: The use of TNP is safe. Further studies are needed to assess its value in managing these difficult cases.
abstract_id: PUBMED:23917343
Laparostomy with topical negative pressure for treating severe peritonitis. Preliminary experience with 16 cases and review of the literature. Introduction: The aim of this study was to assess the authors' initial experience with laparostomy and intraperitoneal topical negative pressure (TNP) in patients with severe peritonitis. The authors also reviewed the recent literature on the effectiveness and safety of abdominal TNP.
Patients And Methods: Sixteen patients (10 male, 6 female, mean age 55 years), suffering from severe peritonitis, underwent emergency laparotomy and laparostomy with TNP. Abdominal sepsis originated from the small intestine (n = 7), large intestine (n = 6), biliary tract (n = 2), and pancreas (n = 1). In 2 patients abdominal wall mesh infection and soft tissue gangrene were observed.
Results: The mortality rate was 31.2%. The main complications probably related to TNP were enteric fistulae (25%), bleeding (25%), abdominal abscesses (12.5%), bowel ischemia (6.2%). Delayed primary closure was performed in 8 patients (57.1%) whereas in 6 cases a parietal graft was necessary, and one patient underwent an autologous skin graft.
Conclusions: Laparostomy with intraperitoneal TNP is a safe and effective method for managing patients with severe peritonitis. Morbidity can be reduced through individualized application of the laparostomy dressing and pressure gradient. The abdominal wall should be managed in such a way as to make possible delayed primary closure.
abstract_id: PUBMED:35514120
Negative pressure wound therapy for managing the open abdomen in non-trauma patients. Background: Management of the open abdomen is a considerable burden for patients and healthcare professionals. Various temporary abdominal closure techniques have been suggested for managing the open abdomen. In recent years, negative pressure wound therapy (NPWT) has been used in some centres for the treatment of non-trauma patients with an open abdomen; however, its effectiveness is uncertain.
Objectives: To assess the effects of negative pressure wound therapy (NPWT) on primary fascial closure for managing the open abdomen in non-trauma patients in any care setting.
Search Methods: In October 2021 we searched the Cochrane Wounds Specialised Register, CENTRAL, MEDLINE, Embase, and CINAHL EBSCO Plus. To identify additional studies, we also searched clinical trials registries for ongoing and unpublished studies, and scanned reference lists of relevant included studies as well as reviews, meta-analyses, and health technology reports. There were no restrictions with respect to language, date of publication, or study setting.
Selection Criteria: We included all randomised controlled trials (RCTs) that compared NPWT with any other type of temporary abdominal closure (e.g. Bogota bag, Wittmann patch) in non-trauma patients with open abdomen in any care setting. We also included RCTs that compared different types of NPWT systems for managing the open abdomen in non-trauma patients.
Data Collection And Analysis: Two review authors independently performed the study selection process, risk of bias assessment, data extraction, and GRADE assessment of the certainty of evidence.
Main Results: We included two studies, involving 74 adults with open abdomen associated with various conditions, predominantly severe peritonitis (N = 55). The mean age of the participants was 52.8 years; the mean proportion of women was 39.2%. Both RCTs were carried out in single centres and were at high risk of bias. Negative pressure wound therapy versus Bogota bag We included one study (40 participants) comparing NPWT with Bogota bag. We are uncertain whether NPWT reduces time to primary fascial closure of the abdomen (NPWT: 16.9 days versus Bogota bag: 20.5 days (mean difference (MD) -3.60 days, 95% confidence interval (CI) -8.16 to 0.96); very low-certainty evidence) or adverse events (fistulae formation, NPWT: 10% versus Bogota: 5% (risk ratio (RR) 2.00, 95% CI 0.20 to 20.33); very low-certainty evidence) compared with the Bogota bag. We are also uncertain whether NPWT reduces all-cause mortality (NPWT: 25% versus Bogota bag: 35% (RR 0.71, 95% CI 0.27 to 1.88); very low-certainty evidence) or length of hospital stay compared with the Bogota bag (NPWT mean: 28.5 days versus Bogota bag mean: 27.4 days (MD 1.10 days, 95% CI -13.39 to 15.59); very low-certainty evidence). The study did not report the proportion of participants with successful primary fascial closure of the abdomen, participant health-related quality of life, reoperation rate, wound infection, or pain. Negative pressure wound therapy versus any other type of temporary abdominal closure There were no randomised controlled trials comparing NPWT with any other type of temporary abdominal closure. Comparison of different negative pressure wound therapy devices We included one study (34 participants) comparing different types of NPWT systems (Suprasorb CNP system versus ABThera system). We are uncertain whether the Suprasorb CNP system increases the proportion of participants with successful primary fascial closure of the abdomen compared with the ABThera system (Suprasorb CNP system: 88.2% versus ABThera system: 70.6% (RR 0.80, 95% CI 0.56 to 1.14); very low-certainty evidence). We are also uncertain whether the Suprasorb CNP system reduces adverse events (fistulae formation, Suprasorb CNP system: 0% versus ABThera system: 23.5% (RR 0.11, 95% CI 0.01 to 1.92); very low-certainty evidence), all-cause mortality (Suprasorb CNP system: 5.9% versus ABThera system: 17.6% (RR 0.33, 95% CI 0.04 to 2.89); very low-certainty evidence), or reoperation rate compared with the ABThera system (Suprasorb CNP system: 100% versus ABThera system: 100% (RR 1.00, 95% CI 0.90 to 1.12); very low-certainty evidence). The study did not report the time to primary fascial closure of the abdomen, participant health-related quality of life, length of hospital stay, wound infection, or pain.
Authors' Conclusions: Based on the available trial data, we are uncertain whether NPWT has any benefit in primary fascial closure of the abdomen, adverse events (fistulae formation), all-cause mortality, or length of hospital stay compared with the Bogota bag. We are also uncertain whether the Suprasorb CNP system has any benefit in primary fascial closure of the abdomen, adverse events, all-cause mortality, or reoperation rate compared with the ABThera system. Further research evaluating these outcomes as well as participant health-related quality of life, wound infection, and pain outcomes is required. We will update this review when data from the large studies that are currently ongoing are available.
abstract_id: PUBMED:21601824
Treatment of small-bowel fistulae in the open abdomen with topical negative-pressure therapy. Background: An open abdomen (OA) can result from surgical management of trauma, severe peritonitis, abdominal compartment syndrome, and other abdominal emergencies. Enteroatmospheric fistulae (EAF) occur in 25% of patients with an OA and are associated with high mortality.
Methods: We report our experience with topical negative pressure (TNP) therapy in the management of EAF in an OA using the VAC (vacuum-assisted closure) device (KCI Medical, San Antonio, TX). Nine patients with 17 EAF in an OA were treated with TNP therapy from January 2006 to January 2009. Surgery with enterectomy and abdominal closure was planned 6 to 10 weeks later.
Results: Three EAF closed spontaneously. The median time from the onset of fistulization to elective surgical management was 51 days. No additional fistulae occurred during VAC therapy. One patient with a short bowel died as a result of persistent leakage after surgery.
Conclusions: Although previously considered a contraindication to TNP therapy, EAF can be managed successfully with TNP therapy. Surgical closure of EAFs is possible after several weeks.
abstract_id: PUBMED:29384913
Combing a novel device and negative pressure wound therapy for managing the wound around a colostomy in the open abdomen: A case report. Rationale: An open abdomen complicated with small-bowel fistulae becomes a complex wound for local infection, systemic sepsis and persistent soiling irritation by intestinal content. While controlling the fistulae drainage, protecting surrounding skin, healing the wound maybe a challenge.
Patient Concerns: In this paper we describe a 68-year-old female who was admitted as an emergency to the general surgery department with severe abdominal pain. Resection of part of the injured small bowel, drainage of the intra-abdominal abscess, and fashioning of a colostomy were performed.
Diagnoses: She failed to improve, and ultimately there was tenderness and a large amount of pus under the skin around the fistulae. The wound started as a 3-cm lesion and progressed to a 6 × 13 cm (78 cm²) area around the stoma.
Interventions: In our case we present a novel device for managing the colostomy wound in combination with negative pressure wound therapy.
Outcomes: This tube allows for effective drainage of small-bowel secretions and a safe build-up of granulation tissue. It can also act as a barrier between the bowel suction point and the foam.
Lessons: Management of an open abdomen wound involves initial dressing changes, antibiotic use and cutaneous closure. When compared with traditional dressing changes, NPWT offers several advantages including increased granulation tissue formation, reduction in bacterial colonization, decreased bowel edema and wound size, and enhanced neovascularization.
abstract_id: PUBMED:19438884
Use of topical negative pressure in assisted abdominal closure does not lead to high incidence of enteric fistulae. Aim: Reports suggested an increase in enterocutaneous fistulae with topical negative pressure (TNP) use in the open abdomen. The purpose of this study was to establish if our experience raises similar concerns.
Method: This is a 5-year prospective analysis, from January 2004 to December 2008, of 42 patients who developed deep wound dehiscence or whose abdomen was left open at laparotomy, requiring TNP to assist in their management. The decision to use TNP was taken if it was felt unwise or not feasible to close the abdomen.
Results: There were 22 men; the median age was 68 (range 21-88) years. Twenty of 42 patients had peritonitis, 5/42 had oedematous bowel, 5/42 had ischaemic gut, one had a large abdominal wall defect following debridement due to methicillin-resistant Staphylococcus aureus (MRSA) infection, and 11/42 developed deep wound dehiscence. In 30/42, the VAC abdominal dressing system and TNP were applied. In 12/42, VAC GranuFoam and TNP were used; of these, five patients required a mesh to control the oedematous bowel. Four of 42 patients died. A total of 34 patients had anastomotic lines, 2/42 developed enteric fistulae, and both survived.
Conclusion: This study does not support the reports suggesting a higher fistulae rate with TNP. In our opinion, its use in the open abdomen is safe.
abstract_id: PUBMED:31641785
Effects of negative-pressure therapy with and without ropivacaine instillation in the early evolution of severe peritonitis in pigs. Purpose: The abdomen is the second most common source of sepsis and secondary peritonitis, which likely lead to death. In the present study, we hypothesized that instillation of local anesthetics into the peritoneum might mitigate the systemic inflammatory response syndrome (SIRS) in the open abdomen when combined with negative-pressure therapy (NPT) to treat severe peritonitis.
Methods: We performed a study in 21 pigs applying a model of sepsis based on ischemia/reperfusion and fecal spread into the peritoneum. The pigs were randomized into three groups, and treated for 6 h as follows: Group A: temporary abdominal closure with ABTHERA™ Open Abdomen Negative-Pressure Therapy; Group B: temporary abdominal closure with ABTHERA™ Open Abdomen Negative-Pressure Therapy plus abdominal instillation with physiological saline solution (PSS); and Group C: temporary abdominal closure with ABTHERA™ Open Abdomen Negative-Pressure Therapy plus peritoneal instillation with a solution of ropivacaine in PSS.
Results: A comparison between the three groups revealed no statistically significant difference for any of the parameters registered (p > 0.05), i.e., intra-abdominal pressure, blood pressure, heart rate, O2 saturation, diuresis, body temperature, and blood levels of interleukin 6 (IL-6), tumor necrosis factor alpha (TNFα), and c-reactive protein (CRP). In addition, histological studies of the liver, ileum, kidney and lung showed no difference between groups.
Conclusions: The use of abdominal instillation (with or without ropivacaine) did not change the effect of 6 h of NPT after sepsis in animals with open abdomen. The absence of adverse effects suggests that longer treatments should be tested.
abstract_id: PUBMED:22484569
Open abdomen treatment with dynamic sutures and topical negative pressure resulting in a high primary fascia closure rate. Background: Open abdomen (OA) treatment with negative-pressure therapy is a novel treatment option for a variety of abdominal conditions. We here present a cohort of 160 consecutive OA patients treated with negative pressure and a modified adaptation technique for dynamic retention sutures.
Methods: From May 2005 to October 2010, a total of 160 patients (58 women, 36%; median age 66 years, range 21-88; median Mannheim peritonitis index 25, range 5-43) underwent emergent laparotomy for diverse abdominal conditions (abdominal sepsis 78%, ischemia 16%, other 6%).
Results: Hospital mortality was 21 % (13 % died during OA treatment); delayed primary fascia closure was 76 % in the intent-to-treat population and 87 % in surviving patients. Six patients required reoperation for abdominal abscess and five patients for anastomotic leakage; enteric fistulas were observed in five (3 %) patients. In a multivariate analysis, factors correlating significantly with high fascia closure rate were limited surgery at the emergency operation and a Björk index of 1 or 2; factors correlating significantly with low fascia closure rate were male sex and generalized peritonitis.
Conclusions: With the aid of initially placed dynamic retention sutures, OA treatment with negative pressure results in high rates of delayed primary fascia closure. OA therapy with the technical modifications described is thus considered a suitable treatment option in various abdominal emergencies.
abstract_id: PUBMED:24858984
Management of fistula of ileal conduit in open abdomen by intra-conduit negative pressure system. Introduction: We aimed to present the management of a patient with a fistula of the ileal conduit in an open abdomen by intra-conduit negative pressure in conjunction with VAC Therapy and a dynamic wound closure system (ABRA).
Presentation Of Case: A 65-year-old man with bladder cancer underwent radical cystectomy and ileal conduit operation. A fistula from the uretero-ileostomy anastomosis and ileus occurred. The APACHE II score was 23, the Mannheim peritonitis index score was 38 and the Björck score was 3. The patient was referred to our clinic with ileus, an open abdomen and a fistula of the ileal conduit. The patient was treated with intra-conduit negative pressure, abdominal VAC therapy and ABRA.
Discussion: Management of a urine fistula, like EAF, in the OA may be extremely challenging. Three treatment modalities for EAF in particular are established in the recent literature: isolation of the enteric effluent from the OA, sealing of the EAF with fibrin glue or a skin flap, and resection of the intestine including the EAF with re-anastomosis. None of these approaches was suitable in our case, since the urinary fistula was deeply situated in this patient with generalized peritonitis and ileus.
Conclusion: Application of intra-conduit negative pressure in conjunction with VAC therapy and ABRA is a life-saving strategy for managing an open abdomen with a fistula of the ileal conduit.
abstract_id: PUBMED:22746075
Fascial closure of the abdominal wall by dynamic suture after topical negative pressure laparostomy treatment of severe peritonitis--results of a prospective randomized study Introduction: Severe peritonitis is a frequent condition characterized by high morbidity and mortality rates. Topical negative pressure (TNP) laparostomy could improve the results of the treatment, provided that the adverse events of this method are reduced. The aim of our study was to prove, in a prospective randomized study, that the primary use of TNP laparostomy reduces morbidity and mortality when compared to primary abdominal wall closure after the index surgery for severe peritonitis. The possibility of the abdominal wall fascial closure significantly influencing morbidity was the main topic of this study.
Material And Methods: Between 9/2009 and 9/2011, 57 patients with severe peritonitis were included in the study at the Department of Surgery of the Bulovka Faculty Hospital; 28 of them were randomized to the TNP laparostomy group and 29 to the primary closure group. The two groups did not differ in age, gender, polymorbidity and severity of peritonitis.
Results: The length of hospital stay was similar in both groups: median 22 days (range 10-171) in the intervention group and 23 days (range 3-71) in the control group (p = 0.89). The mortality rate was significantly lower in the TNP laparostomy group in comparison with the primary closure group (3 patients, 11% vs. 12 patients, 41%; p = 0.01). A complete closure of the abdominal wall including fascia and complete abdominal wall healing was achieved in 80% of survivors in the TNP group, compared to 29% in the primary closure group (p = 0.01). No enteral fistula occurred in any surviving patient in either group. The overall time to abdominal wall healing was significantly shorter in the TNP group (median 7 days, range 7-94, versus 30 days, range 7-223; p = 0.04).
Conclusions: Primary TNP laparostomy is an effective and safe method in the treatment of severe peritonitis. Adherence to good clinical practice, especially the use of dynamic sutures as early as the index surgery and timely closure of the laparostomy as soon as the indication disappears (according to relevant criteria), leads to a significantly higher abdominal wall healing rate, including fascial closure, than after peritonitis treatment without laparostomy.
Answer: The use of topical negative pressure (TNP) in managing severe peritonitis appears to be a positive contribution based on the findings from several studies. A four-year prospective analysis of 20 patients requiring TNP following laparotomy for severe peritonitis showed that abdominal closure was achieved in 15 out of 20 patients within an average of 4.53 days, and all 20 patients survived, suggesting that TNP is safe and may be beneficial in managing these difficult cases (PUBMED:19610140). Another study with 16 cases of severe peritonitis treated with laparostomy and intraperitoneal TNP reported a mortality rate of 31.2% and complications related to TNP, but concluded that it is a safe and effective method for managing patients with severe peritonitis (PUBMED:23917343).
A Cochrane review included two studies involving 74 adults with open abdomen associated with various conditions, predominantly severe peritonitis. The review found very low-certainty evidence regarding the benefits of NPWT in primary fascial closure of the abdomen, adverse events, all-cause mortality, or length of hospital stay compared with the Bogota bag, indicating that further research is needed (PUBMED:35514120).
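As a brief, purely illustrative check of how the review's effect estimates relate to the raw event rates it reports, the snippet below recomputes two of the risk ratios quoted in PUBMED:35514120 from the percentages given in that abstract; the simple ratio of event proportions is assumed here, and the figures are those reported for the NPWT and Bogota bag arms.

```python
# Illustrative only: reproduces two risk ratios quoted in the Cochrane review
# (PUBMED:35514120) from the raw event proportions reported in its abstract.
def risk_ratio(rate_npwt: float, rate_bogota: float) -> float:
    """Risk ratio = event proportion under NPWT / event proportion under Bogota bag."""
    return rate_npwt / rate_bogota

print(round(risk_ratio(0.25, 0.35), 2))  # all-cause mortality: 0.71, as reported
print(round(risk_ratio(0.10, 0.05), 2))  # fistula formation: 2.0, as reported
```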
Additional studies have reported on the management of specific complications associated with severe peritonitis and open abdomen, such as enteroatmospheric fistulae (EAF), using TNP therapy. One study reported successful management of EAF with TNP therapy, with no additional fistulae occurring during VAC therapy and three EAF closing spontaneously (PUBMED:21601824). Another study did not support reports suggesting a higher fistulae rate with TNP, considering its use in the open abdomen as safe (PUBMED:19438884).
Moreover, a study on the treatment of severe peritonitis in pigs with negative-pressure therapy with and without ropivacaine instillation found no statistically significant difference in various parameters, suggesting that longer treatments should be tested (PUBMED:31641785). A study with 160 patients treated with negative pressure and dynamic retention sutures reported a high rate of delayed primary fascia closure, indicating that this approach is a suitable treatment option in various abdominal emergencies (PUBMED:22484569). In addition, a prospective randomized study comparing primary TNP laparostomy with primary abdominal wall closure after the index operation for severe peritonitis reported significantly lower mortality (11% vs. 41%) and a higher rate of complete abdominal wall healing, including fascial closure, in the TNP group (PUBMED:22746075).
In conclusion, the evidence suggests that TNP can be a positive contribution to the management of severe peritonitis, with several studies reporting successful outcomes, high rates of abdominal closure, and survival. However, the certainty of the evidence is low, and further research is required to fully establish the benefits and safety of TNP in this context. |
Instruction: Multiple evanescent white dot syndrome: a genetic predisposition?
Abstracts:
abstract_id: PUBMED:8766033
Multiple evanescent white dot syndrome: a genetic predisposition? Background: Multiple evanescent white dot syndrome (MEWDS) is a benign acquired isolated chorioretinal disorder. Symptoms include photopsia, visual blur and scotomas. Ocular examination reveals multiple white dots at the level of the deep retina. A parainfectious disorder was suggested but the exact mechanism of MEWDS is still unknown. Postulating that MEWDS might be an antigen driven inflammatory reaction, we analyzed HLA subtypes in patients with MEWDS.
Patients And Methods: Sixteen patients were diagnosed with MEWDS in Lausanne from 1985 to 1994. Blood was withdrawn in 9/16 patients. HLA-A, -B and -DR were sought.
Results: HLA-B51 was detected in 4/9 patients (44.4%). Other HLA subtypes were detected sporadically.
Conclusions: The frequency of the HLA-B51 haplotype was found to be 3.7 times higher than in a normal Caucasian control group. This suggests the possibility that MEWDS might be a genetically determined disorder, as is the case for other ocular diseases such as Birdshot chorioretinopathy (HLA-A29), Harada's disease (HLA-DRMT3), acute anterior uveitis (HLA-B27) or Behçet's disease (HLA-B51). We have no explanation for the presence of HLA-B51 in both Behçet's disease and MEWDS. The association of HLA-B51 and MEWDS needs confirmation by further testing.
abstract_id: PUBMED:21419509
Multiple evanescent white dot syndrome and multiple sclerosis Purpose: To describe a patient who fulfilled the criteria for both clinically definite multiple evanescent white dot syndrome (MEWDS) and multiple sclerosis.
Methods: We performed a complete ophthalmologic and neurological examination in a 30-year-old woman who was referred to our department for blurred vision in her left eye (LE) with photopsia.
Results: Following a complete ophthalmologic examination, the patient was diagnosed with MEWDS and coincident multiple sclerosis. She underwent therapy with intravenous methylprednisolone (1000 mg/day) for three days, followed by oral prednisone (1 mg/kg per day) for 15 days. Most of the symptoms and signs apparently regressed within one month, despite a still abnormal OCT macular scan, probably due to atrophic post-inflammatory changes in the outer and photoreceptor layers (rods and cones).
Conclusion: This report, showing the clinical features of MEWDS associated with multiple sclerosis, strongly suggests common neuropathological and inflammatory mechanisms between MS and white dot syndromes.
abstract_id: PUBMED:1434376
Multiple evanescent white dot syndrome. A 38-year-old male patient experienced a unilateral visual acuity decrease to 20/60 and showed white dots at the level of the retinal pigment epithelial interface characteristic of multiple evanescent white dot syndrome. Fluorescein angiography demonstrated early hyperfluorescent defects and some late staining. In spite of improvement of the visual acuity and the alterations of the fundus, an enlargement of the blind spot and some sharply demarcated depigmentations of the retinal pigment epithelium remain. This case shows that already at the beginning of symptoms the characteristic white dots may be present. Enlargement of the blind spot and depigmentations of the retinal pigment epithelium may remain as defects after multiple evanescent white dot syndrome.
abstract_id: PUBMED:33123841
Multiple evanescent white dot syndrome and panuveitis: a case report. Aim: To report a patient with multiple evanescent white dot syndrome (MEWDS) complicated by iridocyclitis and vitritis.
Case Description: A 70-year-old woman developed multiple subretinal white dots, iritis, and diffuse vitreous opacity. Angiographic and macular morphological features were consistent with those of MEWDS. Inflammatory findings including the white dots improved following only topical dexamethasone within 1 month after the initial visit. Best-corrected visual acuity recovered to 1.0 with restored photoreceptor structure.
Conclusion: The presence of iridocyclitis and vitritis, atypical to MEWDS, indicates the concurrent development of panuveitis associated with MEWDS. These results suggest that MEWDS is a clinical entity of uveitis.
abstract_id: PUBMED:37220450
Rare Case of Bilateral and Asymmetrical Multiple Evanescent White Dot Syndrome. Bilateral presentation of multiple evanescent white dot syndrome (MEWDS) is a rare occurrence. We report a case of bilateral MEWDS with asymmetrical manifestation in a young female patient. She presented with sudden-onset central blurring of vision and dyschromatopsia in the right eye. Fundus examination, however, showed bilateral multiple grey-white intra-retinal punctate lesions, with asymmetrical manifestation of optic disc swelling and foveal granularity in the right eye. Spectral Domain Optical Coherence Tomography (SD-OCT) showed juxtafoveal subretinal fluid and a disrupted inner segment-outer segment (IS-OS) junction in the right eye. The patient had a spontaneous complete recovery within six weeks.
abstract_id: PUBMED:37009534
Multiple Evanescent White-Dot Syndrome in a 9-Year-Old Girl. Purpose: This work describes a case of multiple evanescent white-dot syndrome (MEWDS) in a 9-year-old girl.
Methods: A case report is presented.
Results: A case of MEWDS in a 9-year-old girl is described.
Conclusions: To our knowledge this is the youngest presentation of MEWDS discussed in the literature. MEWDS should be considered in the differential diagnosis of ocular inflammation in the first decade of life.
abstract_id: PUBMED:37006511
Multiple Evanescent White Dot Syndrome Presenting in a Patient With Punctate Inner Choroidopathy. Purpose: This work reports a case of long-standing punctate inner choroidopathy (PIC) presenting with acute-onset multiple evanescent white dot syndrome.
Methods: A 44-year-old man presented with new onset of flashes and a peripheral spot of blurry vision in the right eye. His ocular history included PIC in both eyes.
Results: Corrected visual acuities and intraocular pressures were normal. Posterior segment examination of the right eye demonstrated old PIC lesions and new, deep-yellow lesions in the posterior pole and midperiphery. Four months later, these lesions had resolved.
Conclusions: Coexistence of PIC and multiple evanescent white dot syndrome has been rarely reported, and more research is warranted to investigate a possible shared etiology.
abstract_id: PUBMED:33250755
Early Progressive Circumpapillary Lesion as Atypical Presentation of Multiple Evanescent White Dot Syndrome: A Case Report. Classical clinical findings of multiple evanescent white dot syndrome (MEWDS) include multiple, small white dots scattered throughout the posterior pole, foveal granularity, posterior vitreous cells, and mild optic disc swelling. We describe the case of a 35-year-old man who was admitted to our department with an unusual presentation of MEWDS at the early onset of the disease. A unilateral circumpapillary retinal white spot was observed. Spectral domain optical coherence tomography demonstrated irregularities of the retinal pigment epithelium and disruptions of the outer retinal layers around the optic nerve without other abnormalities. A few days later, the lesion spread centrifugally from the peripapillary region and along the vascular arcades. This distinctive appearance in an early stage of the disease may suggest a disorder other than MEWDS, which can lead to a misdiagnosis and unnecessary treatment.
abstract_id: PUBMED:35434421
Multiple evanescent white dot syndrome following BNT162b2 mRNA COVID-19 vaccination. Purpose: To report a case with multiple evanescent white dot syndrome (MEWDS) following BNT162b2 mRNA COVID-19 vaccination.
Observations: Case: A 67-year-old Japanese woman presented with a 5-day history of central visual field loss and photopsia in the right eye (OD). She complained of blurred vision with bright spots in OD but denied any ocular symptoms in the left eye (OS). She had received the second dose of the BNT162b2 mRNA COVID-19 vaccine (Pfizer-BioNTech) one day before the onset of visual symptoms; flu-like symptoms such as mild fever and general fatigue developed along with the ocular symptoms of decreased vision and hypersensitivity to light in OD following the second dose. The first dose, administered three weeks earlier, had not been associated with any ocular or systemic symptoms besides mild pain at the injection site. She had not been followed by any ophthalmologist before the initial visit. At the initial visit, best corrected visual acuity (BCVA) in decimal notation was 0.2 in OD and 1.0 in OS. Ophthalmic examination showed multifocal white dots in the posterior retina with moderate vitritis (1+ haze and 2+ cells) in OD. Multimodal imaging in OD showed diffuse disruption of the ellipsoid zone with variable punctate hyperreflective lesions at the macula on optical coherence tomography, punctate hyperfluorescence in a wreath-like pattern with late staining on fluorescein angiography, and multiple hypofluorescent spots of various sizes in the late phases on indocyanine green angiography. Multiple hypofluorescent spots and scattered hyperfluorescent spots corresponding to the white dots in OD were also seen on fundus autofluorescence. Laboratory and systemic evaluations were negative for syphilis, tuberculosis, and toxoplasmosis, as well as for selected autoimmune diseases such as sarcoidosis, Behçet's disease, rheumatoid arthritis, and systemic lupus erythematosus. No active intraocular inflammation or abnormality was seen in OS. One week later, the multifocal white dots in OD had disappeared and were almost invisible on fundus photography. At that time, multifocal electroretinography showed a decreased response with low amplitude density across the entire field in OD. The BCVA in OD spontaneously improved to 0.8 without any treatment. Collectively, the clinical course and findings were suggestive of a diagnosis of MEWDS after mRNA COVID-19 vaccination.
Conclusions And Importance: In the present case, BNT162b2 mRNA COVID-19 vaccination might have been associated with a MEWDS-like entity with vision loss. It is important for physicians to monitor ocular status carefully in patients with visual disturbance after COVID-19 vaccination.
abstract_id: PUBMED:33859903
Multiple Evanescent White Dot Syndrome Presenting After Initiation of Empagliflozin, a Sodium-Glucose Cotransporter-2 Inhibitor. Multiple evanescent white dot syndrome (MEWDS) is an inflammatory condition of the retina that typically presents unilaterally with multiple gray-white spots in the outer retina or retinal pigmented epithelium and granular changes within the fovea. We report a case of new-onset MEWDS in a patient shortly after the initiation of empagliflozin, a sodium-glucose cotransporter-2 inhibitor prescribed for his type 2 diabetes mellitus.
Answer: The possibility of a genetic predisposition to Multiple Evanescent White Dot Syndrome (MEWDS) has been suggested by the detection of the HLA-B51 haplotype at a significantly higher frequency among patients with MEWDS than in a normal Caucasian control group. In one study, HLA-B51 was found in 44.4% of patients with MEWDS, a frequency 3.7 times higher than in the control group. This finding indicates that MEWDS might be a genetically determined disorder, similar to other ocular diseases such as Birdshot chorioretinopathy, Harada's disease, acute anterior uveitis, or Behçet's disease, which are also associated with specific HLA subtypes (PUBMED:8766033). However, the association of HLA-B51 and MEWDS requires confirmation through further testing.
Instruction: Is there still a role for computed tomography and bone scintigraphy in prostate cancer staging?
Abstracts:
abstract_id: PUBMED:26276152
Is there still a role for computed tomography and bone scintigraphy in prostate cancer staging? An analysis from the EUREKA-1 database. Purpose: According to current guidelines, computed tomography (CT) and bone scintigraphy (BS) are optional in intermediate-risk and recommended in high-risk prostate cancer (PCa). We assessed whether it is time for these examinations to be dismissed by evaluating their staging accuracy in a large cohort of radical prostatectomy (RP) patients.
Methods: To evaluate the ability of CT to predict lymph node involvement (LNI), we included 1091 patients treated with RP and pelvic lymph node dissection, previously staged with abdomino-pelvic CT. As for bone metastases, we included 1145 PCa patients deemed fit for surgery, previously staged with Tc-99m methylene diphosphonate planar BS.
Results: CT scan showed a sensitivity and specificity for predicting LNI of 8.8% and 98%; subgroup analysis disclosed a significant association only for the high-risk subgroup of 334 patients (P = 0.009), with a sensitivity of 11.8% and a positive predictive value (PPV) of 44.4%. However, logistic multivariate regression analysis including preoperative risk factors excluded any additional predictive ability of CT even in the high-risk group (P = 0.40). These data are confirmed by ROC curve analysis, showing a low AUC of 54% for CT, compared with 69% for the Partin tables and 80% for the Briganti nomogram. BS showed some positivity in 74 cases, only four of whom progressed, while 49 patients with negative BS progressed during their follow-up, six of them immediately after surgery.
Conclusions: In our opinion, the role of CT and BS should be restricted to selected high-risk patients, while clinical predictive nomograms should be adopted for surgical planning.
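The sensitivity, specificity and positive predictive value figures quoted in this and the following abstracts all come from a 2x2 cross-tabulation of the imaging result against the pathological (or follow-up) reference standard. The sketch below restates those definitions; the counts are hypothetical, chosen only to mimic the low-sensitivity/high-specificity pattern reported for CT, and are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 confusion-matrix metrics for a staging test."""
    sensitivity = tp / (tp + fn)   # share of truly node-positive patients flagged by imaging
    specificity = tn / (tn + fp)   # share of truly node-negative patients correctly cleared
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")   # how often a positive scan is right
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return sensitivity, specificity, ppv, npv

# Hypothetical counts only, chosen to mirror the reported pattern.
sens, spec, ppv, npv = diagnostic_metrics(tp=9, fp=11, fn=93, tn=987)
print(f"sensitivity={sens:.1%}  specificity={spec:.1%}  PPV={ppv:.1%}  NPV={npv:.1%}")
```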
abstract_id: PUBMED:31688499
Comparison of bone scintigraphy and Ga-68 prostate-specific membrane antigen positron emission tomography/computed tomography in the detection of bone metastases of prostate carcinoma. Aim: This study aims to assess the diagnostic performance of Ga-68 prostate-specific membrane antigen PET/computed tomography in comparison with planar bone scintigraphy for the detection of bone metastases. A further aim is to define the additional benefit of bone scintigraphy performed after prostate-specific membrane antigen PET/computed tomography and the role of prostate-specific membrane antigen PET/computed tomography in treatment planning.
Material And Method: Forty-six patients with a median interval of 19 (range: 3-90) days between prostate-specific membrane antigen PET/computed tomography and bone scintigraphy were included in the analysis. The diagnostic performance of both modalities was calculated and compared.
Results: Prostate-specific membrane antigen PET/computed tomography and bone scintigraphy were performed for initial staging in 25 (54%), for evaluation of biochemical recurrence in 11 (24%) and for metastatic castration-resistant prostate carcinoma in 10 (22%) patients. In the patient-based analysis, the sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of bone scintigraphy for the detection of bone metastases were calculated as 50%, 19-29%, 32-39%, 32-39%, and 33-39%, respectively, depending on whether equivocal findings were classified as positive or negative. For prostate-specific membrane antigen PET/computed tomography, these values were significantly higher, at 100%, 95-100%, 98-100%, 96-100%, and 100%, respectively. The diagnostic performance of bone scintigraphy and PET/computed tomography was also analyzed in clinical subgroups; prostate-specific membrane antigen PET/computed tomography was superior to bone scintigraphy in three groups.
Conclusion: In this retrospective study, prostate-specific membrane antigen PET/computed tomography was found to be superior to planar bone scintigraphy in the detection of bone metastases. Additional bone scintigraphy seems to be unnecessary in patients who have undergone prostate-specific membrane antigen PET/computed tomography within a three-month period without additional treatment.
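Several results in these abstracts are reported as ranges because equivocal lesions can be counted either as positive or as negative before the metrics are computed. A minimal sketch of how that single choice produces an 'optimistic'/'pessimistic' pair of values, using toy lesion labels rather than any study data:

```python
# Each lesion: (truth, reading); truth is "met" or "benign",
# reading is "pos", "neg" or "equivocal". Toy labels only.
lesions = [("met", "pos"), ("met", "equivocal"), ("met", "neg"),
           ("benign", "neg"), ("benign", "equivocal"), ("benign", "neg")]

def sens_spec(lesions, equivocal_as):
    """Sensitivity/specificity after forcing equivocal readings into one class."""
    resolve = lambda r: equivocal_as if r == "equivocal" else r
    tp = sum(1 for t, r in lesions if t == "met" and resolve(r) == "pos")
    fn = sum(1 for t, r in lesions if t == "met" and resolve(r) == "neg")
    tn = sum(1 for t, r in lesions if t == "benign" and resolve(r) == "neg")
    fp = sum(1 for t, r in lesions if t == "benign" and resolve(r) == "pos")
    return tp / (tp + fn), tn / (tn + fp)

for mode in ("pos", "neg"):
    sens, spec = sens_spec(lesions, equivocal_as=mode)
    print(f"equivocal counted as {mode}: sensitivity={sens:.0%}, specificity={spec:.0%}")
```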
abstract_id: PUBMED:31040743
Semiquantitative assessment of osteoblastic, osteolytic, and mixed lytic-sclerotic bone lesions on fluorodeoxyglucose positron emission tomography/computed tomography and bone scintigraphy. Bone scintigraphy is widely used to detect bone metastases, particularly osteoblastic ones, and F-18 fluorodeoxyglucose (FDG) positron emission tomography (PET) is useful in detecting lytic bone metastases. In routine studies, images are assessed visually. In this retrospective study, we aimed to assess osteoblastic, osteolytic, and mixed lytic-sclerotic bone lesions semiquantitatively by measuring the maximum standardized uptake value (SUVmax) on FDG PET/computed tomography (CT), the maximum lesion-to-normal-bone count ratio (ROImax) on bone scintigraphy, and the Hounsfield unit (HU) on CT. Bone scintigraphy and FDG PET/CT images of 33 patients with various solid tumors were evaluated. Osteoblastic, osteolytic, and mixed lesions were identified on CT, and the SUVmax, ROImax, and HU values of these lesions were measured. Statistical analysis was performed to determine whether SUVmax, ROImax, and HU values differed between osteoblastic, osteolytic, and mixed lesions and whether these values were correlated. Patients had various solid tumors, mainly lung, breast, and prostate cancers. There were 145 bone lesions (22.8% osteoblastic, 53.1% osteolytic, and 24.1% mixed) on CT. Osteoblastic lesions had a significantly higher CT HU value than osteolytic and mixed lesions (P < 0.01). There was no significant difference in the mean ROImax and mean SUVmax values of osteolytic and osteoblastic bone lesions. There was no correlation between SUVmax and ROImax, SUVmax and HU, or ROImax and HU values in osteolytic, osteoblastic, and mixed lesions (P > 0.05). The absence of a significant difference in the SUVmax and ROImax values of osteoblastic, osteolytic, and mixed lesions, and the lack of correlation between SUVmax, ROImax, and HU values, could be due to the treatment status of the bone lesions, lesion size, nonmetastatic lesions, erroneous measurement of SUVmax and ROImax, or varying metabolism in bone metastases originating from various malignancies.
abstract_id: PUBMED:26753880
(18)F-fluoride positron emission tomography/computed tomography and bone scintigraphy for diagnosis of bone metastases in newly diagnosed, high-risk prostate cancer patients: study protocol for a multicentre, diagnostic test accuracy study. Background: For decades, planar bone scintigraphy has been the standard practice for detection of bone metastases in prostate cancer and has been endorsed by recent oncology/urology guidelines. It is a sensitive method with modest specificity. (18)F-fluoride positron emission tomography/computed tomography has shown improved sensitivity and specificity over bone scintigraphy, but because of methodological issues such as retrospective design and verification bias, the existing level of evidence with (18)F-fluoride positron emission tomography/computed tomography is limited. The primary objective is to compare the diagnostic properties of (18)F-fluoride positron emission tomography/computed tomography versus bone scintigraphy on an individual patient basis.
Methods/design: One hundred forty consecutive, high-risk prostate cancer patients will be recruited from several hospitals in Denmark. Sample size was calculated using Hayen's method for diagnostic comparative studies. This study will be conducted in accordance with recommendations of standards for reporting diagnostic accuracy studies. Eligibility criteria comprise the following: 1) biopsy-proven prostate cancer, 2) PSA ≥ 50 ng/ml (equals a prevalence of bone metastasis of ≈ 50% in the study population on bone scintigraphy), 3) patients must be eligible for androgen deprivation therapy, 4) no current or prior cancer (within the past 5 years), 5) ability to comply with imaging procedures, and 6) patients must not receive any investigational drugs. Planar bone scintigraphy and (18)F-fluoride positron emission tomography/computed tomography will be performed within a window of 14 days at baseline. All scans will be repeated after 26 weeks of androgen deprivation therapy, and response of individual lesions will be used for diagnostic classification of the lesions on baseline imaging among responding patients. A response is defined as PSA normalisation or ≥ 80% reduction compared with baseline levels, testosterone below castration levels, no skeletal related events, and no clinical signs of progression. Images are read by blinded nuclear medicine physicians. The protocol is currently recruiting.
Discussion: To the best of our knowledge, this is one of the largest prospective studies comparing (18)F-fluoride positron emission tomography/computed tomography and bone scintigraphy. It is conducted in full accordance with recommendations for diagnostic accuracy trials. It is intended to provide valid documentation for the use of (18)F-fluoride positron emission tomography/computed tomography for examination of bone metastasis in the staging of prostate cancer.
abstract_id: PUBMED:27290607
Comparison of bone scintigraphy and 68Ga-PSMA PET for skeletal staging in prostate cancer. Purpose: The aim of our study was to compare the diagnostic performance of 68Ga-PSMA PET and 99mTc bone scintigraphy (BS) for the detection of bone metastases in prostate cancer (PC) patients.
Methods: One hundred twenty-six patients who received planar BS and PSMA PET within three months and without change of therapy were extracted from our database. Bone lesions were categorized into benign, metastatic, or equivocal by two experienced observers. A best valuable comparator (BVC) was defined based on BS, PET, additional imaging, and follow-up data. The cohort was further divided into clinical subgroups (primary staging, biochemical recurrence, and metastatic castration-resistant prostate cancer [mCRPC]). Additionally, subgroups of patients with less than 30 days delay between the two imaging procedures and with additional single-photon emission computed tomography (SPECT) were analyzed.
Results: A total of 75 of 126 patients were diagnosed with bone metastases. Sensitivities and specificities regarding overall bone involvement were 98.7-100 % and 88.2-100 % for PET, and 86.7-89.3 % and 60.8-96.1 % (p < 0.001) for BS, with ranges representing results for 'optimistic' or 'pessimistic' classification of equivocal lesions. Out of 1115 examined bone regions, 410 showed metastases. Region-based analysis revealed a sensitivity and specificity of 98.8-99.0 % and 98.9-100 % for PET, and 82.4-86.6 % and 91.6-97.9 % (p < 0.001) for BS, respectively. PSMA PET also performed better in all subgroups, except patient-based analysis in mCRPC.
Conclusion: 68Ga-PSMA PET outperforms planar BS for the detection of affected bone regions as well as for determination of overall bone involvement in PC patients. Our results indicate that BS in patients who have received PSMA PET for staging only rarely offers additional information; however, prospective studies, including a standardized integrated single-photon emission computed tomography/computed tomography (SPECT/CT) protocol, should be performed in order to confirm the presented results.
abstract_id: PUBMED:34351430
Positron emission tomography with computed tomography/magnetic resonance imaging for primary staging of prostate cancer Clinical/methodological Issue: Prostate cancer is the most common malignancy and the second leading cause of cancer-related death in men. Accurate imaging diagnosis and staging are crucial for patient management and treatment. The role of nuclear medicine in the diagnosis of prostate cancer has evolved rapidly in recent years due to the availability of hybrid imaging with radiopharmaceuticals targeting the prostate-specific membrane antigen (PSMA).
Standard Radiological Procedures: Hybrid imaging provides higher diagnostic accuracy compared to conventional imaging and has a significant impact on clinical management. Numerous radiotracers have been used in clinical applications, with PSMA ligands being the most commonly used.
Methodological Innovations: Hybrid imaging provides higher diagnostic accuracy for lymph node and bone metastases compared to conventional imaging and has a significant impact on clinical management.
Performance: The high accuracy for primary staging in high-risk prostate cancer using PSMA ligands has led to the inclusion of PSMA positron emission tomography (PET)/computed tomography (CT) in the new German S3 guideline for primary staging of prostate cancer.
Purpose: The aim of this article is to provide an overview of the use of PET imaging in the primary diagnosis of prostate cancer, to present the most commonly used radiotracers, and to highlight the results of recent studies.
abstract_id: PUBMED:34321968
F-18 fluorocholine positron emission tomography-computed tomography in initial staging and recurrence evaluation of prostate carcinoma: A prospective comparative study with diffusion-weighted magnetic resonance imaging and whole-body skeletal scintigraphy. Prostate cancer (PCa) is one of the major causes of death due to cancer in men. Conventional imaging modalities such as magnetic resonance imaging (MRI) provide locoregional status but fall short in identifying distant metastasis. C-11 choline and F-18 fluorocholine (F-18 FCH) have been shown to be useful in imaging of PCa. The present prospective study evaluates and compares the role of F-18 FCH positron emission tomography-computed tomography (PET-CT) with locoregional MRI and whole-body bone scintigraphy in PCa patients for initial staging and recurrence evaluation. This study included a total of 50 patients. Tc-99m skeletal scintigraphy, F-18 FCH PET-CT, and diffusion-weighted MRI of the pelvic region were performed within a span of 2-3 weeks of each other, in random order. For the primary site, core biopsy findings of the lesion were considered the gold standard. The kappa test was used to measure agreement between bone scintigraphy, F-18 FCH, and MRI. For comparing Tc-99m bone scintigraphy, F-18 FCH, and MRI, McNemar's test was applied. F-18 FCH PET-CT and MRI were able to detect the primary lesion in all initial staging patients. The sensitivity and specificity of F-18 FCH PET-CT versus MRI were 92.8% versus 89.2% and 100% versus 80%, respectively, for recurrence at the primary site. A total of 55 bony lesions at distant sites were detected on F-18 FCH PET-CT, compared with 43 bone lesions on whole-body bone scintigraphy. F-18 FCH PET-CT also detected additional lung lesions in 2 patients and abdominal lymph nodes in 12 patients. F-18 FCH PET-CT could detect primary lesions, local metastasis, bone metastasis, and distant metastasis in a single study and is also a useful modality for recurrence evaluation in PCa patients.
abstract_id: PUBMED:27919970
11C-Acetate-PET/CT Compared to 99mTc-HDP Bone Scintigraphy in Primary Staging of High-risk Prostate Cancer. Aim: The aim of this study was to evaluate the detection rate of bone metastases and the added value of 11C-acetate (ACE) positron-emission tomography/computed tomography (PET/CT) compared to bone scintigraphy (BS) in high-risk prostate cancer (PC).
Materials And Methods: A total of 66 untreated patients with high-risk PC with ACE-PET/CT and planar BS findings within 3 months of each other were retrospectively enrolled. Findings were compared and verified with follow-up data after an average of 26 months.
Results: The rate of detection of bone metastases was superior with ACE-PET/CT compared to BS (p<0.01). Agreement between the methods and between BS and follow-up was moderate (Cohen's kappa coefficient of 0.64 and 0.66, respectively). Agreement between ACE-PET/CT and follow-up was excellent (kappa coefficient of 0.95). Therapy was changed in 11% of patients due to ACE-PET/CT results.
Conclusion: ACE-PET/CT performed better than planar BS in detection of bone metastases in high-risk PC. ACE-PET/CT findings influenced clinical management.
abstract_id: PUBMED:2645558
The role of bone scintigraphy and its value in the staging and follow-up of prostatic carcinoma. Bone scintigraphy was performed in 100 patients with prostatic cancer at the time of histological diagnosis and, in a predefined manner, during 6 years of follow-up. The sensitivity of the method was supported by the results of early M0-1 staging (the rate of skeletal metastasis was 41% at first presentation). During follow-up, only 8 further cases of skeletal metastasis were observed. In most cases the skeletal metastases were already disseminated at first presentation, with hot spots located mainly in the pelvis and spine. In six cases a "super bone scan" was observed. Conventional hormone therapy resulted in regression in 7 of the patients with skeletal metastases. In the authors' experience, bone scintigraphy is indispensable for correct staging at diagnosis and useful in patient follow-up.
abstract_id: PUBMED:34295537
Combined bone scintigraphy and fluorocholine PET/computed tomography predicts response to radium-223 therapy in patients with prostate cancer. Aim: To assess the value of bone scintigraphy and 18F-fluorocholine PET/computed tomography (CT) in predicting outcome in patients with prostate cancer and bone metastases treated with radium-223.
Materials & Methods: Retrospective analysis of 48 patients who underwent radium-223 therapy. End points were pain relief and overall survival.
Results: After therapy, pain relief was observed in 27 patients. Patients without pain relief had more bone lesions on PET/CT than on bone scintigraphy (pretherapy imaging mismatch). In the 39 patients who completed the treatment protocol, post-therapy alkaline phosphatase and pretherapy imaging mismatch were independent predictors of poor overall survival.
Conclusion: Patients with more lesions on 18F-fluorocholine PET/CT than on bone scintigraphy had a poor prognosis. The combined imaging approach could be useful to predict outcome after radium-223 therapy.
Answer: Computed tomography (CT) and bone scintigraphy (BS) have traditionally been used in the staging of prostate cancer, but their roles are being reevaluated in light of newer imaging modalities. According to the analysis from the EUREKA-1 database, CT showed very low sensitivity (8.8%) in predicting lymph node involvement (LNI) despite a high specificity of 98%. The study suggested that the role of CT and BS should be restricted to selected high-risk patients, while clinical predictive nomograms should be adopted for surgical planning (PUBMED:26276152).
Comparatively, Ga-68 prostate-specific membrane antigen positron emission tomography/computed tomography (PSMA PET/CT) has been found to be superior to planar bone scintigraphy in the detection of bone metastases, with significantly higher sensitivity, specificity, accuracy, positive predictive value, and negative predictive value. The study concluded that additional bone scintigraphy seems to be unnecessary in patients who underwent PSMA PET/CT within a three-month period without additional treatment (PUBMED:31688499).
Furthermore, 68Ga-PSMA PET has been shown to outperform planar BS for the detection of affected bone regions as well as determination of overall bone involvement in prostate cancer patients. The results indicate that BS in patients who have received PSMA PET for staging rarely offers additional information (PUBMED:27290607).
The use of PET/CT with PSMA ligands for primary staging in high-risk prostate cancer has demonstrated high accuracy, leading to its inclusion in the new German S3 guideline for primary staging of prostate cancer (PUBMED:34351430).
In summary, while CT and BS have been standard in prostate cancer staging, their roles are becoming more limited due to the emergence of more accurate imaging techniques such as PSMA PET/CT. These newer modalities offer higher diagnostic accuracy and have a significant impact on clinical management, suggesting that the role of traditional CT and BS should be reconsidered and possibly restricted to specific high-risk cases. |
Instruction: Are Birth Certificate and Hospital Discharge Linkages Performed in 52 Jurisdictions in the United States?
Abstracts:
abstract_id: PUBMED:26140836
Are Birth Certificate and Hospital Discharge Linkages Performed in 52 Jurisdictions in the United States? Objectives: The purpose of this study was to determine the number and characteristics of US State Registrars of Vital Statistics (Vital Registrars) and State Systems Development Initiative (SSDI) Coordinators that link birth certificate and hospital discharge data, as well as the linkage processes used.
Methods: Vital Registrars and SSDI Coordinators in all 52 vital records jurisdictions (50 states, District of Columbia, and New York City) were asked to complete a 41-question survey. We examined frequency distributions among completed surveys using SAS 9.3.
Results: The response rate was 100% (N = 52) for Vital Registrars and 96% (N = 50) for SSDI Coordinators. Nearly half of Vital Registrars (n = 22) and SSDI Coordinators (n = 23) reported that their jurisdiction linked birth certificate and hospital discharge records at least once in the last 4 years. Among those who link, the majority of Vital Registrars (77.3%) and SSDI Coordinators (82.6%) link both maternal and infant hospital discharge records to the birth certificate. Of those who do not link, 43% of the Vital Registrars and 55% of SSDI Coordinators reported an interest in linking birth certificate and hospital discharge data. Reasons for not linking included lack of staff time, inability to access raw data, high cost, and unavailability of personal identifiers to link the two sources.
Conclusions: Results of our analysis provide a national perspective on data linkage practices in the US. Our findings can be used to promote further data linkages, facilitate sharing of data and linkage methodologies, and identify uses of the resulting linked data.
abstract_id: PUBMED:26114767
Interpregnancy Intervals in the United States: Data From the Birth Certificate and the National Survey of Family Growth. Objective: To describe data on interpregnancy intervals (IPI), defined as the timing between a live birth and conception of a subsequent live birth, from a subset of jurisdictions that adopted the 2003 revised birth certificate. Because this information is available among revised jurisdictions only, the national representativeness of IPI and related patterns to the entire United States were assessed using the 2006-2010 National Survey of Family Growth (NSFG).
Methods: Birth certificate data are based on 100% of births registered in 36 states and the District of Columbia that adopted the 2003 revised birth certificate in 2011 (83% of 2011 U.S. births). The "Date of last live birth" item on the birth certificate was used to calculate months between the birth occurring in 2011 and the previous birth. These data were compared with pregnancy data from a nationally representative sample of women from the 2006-2010 NSFG.
Results: Jurisdiction-specific median IPI ranged from 25 months (Idaho, Montana, North Dakota, South Dakota, Utah, and Wisconsin) to 32 months (California) using birth certificate data. Overall, the distribution of IPI from the birth certificate was similar to NSFG for IPI less than 18 months (30% and 29%), 18 to 59 months (50% and 52%), and 60 months or more (21% and 18%). Consistent patterns in IPI distribution by data source were seen by age at delivery, marital status, education, number of previous live births, and Hispanic origin and race, with the exception of differences in IPI of 60 months or more among non-Hispanic black women and women with a bachelor's degree or higher.
abstract_id: PUBMED:18766434
Reviewing performance of birth certificate and hospital discharge data to identify births complicated by maternal diabetes. Objectives: Public health surveillance of diabetes during pregnancy is needed. Birth certificate and hospital discharge data are population-based, routinely available and economical to obtain and analyze, but their quality has been criticized. It is important to understand the usefulness and limitations of these data sources for surveillance of diabetes during pregnancy.
Methods: We conducted a comprehensive literature review to summarize the validity of birth certificate and hospital discharge data for identifying diabetes-complicated births.
Results: Sensitivities for birth certificate data identifying prepregnancy diabetes mellitus (PDM) ranged from 47% to 52%, median 50% (kappas: min = 0.210, med = 0.497, max = 0.523). Sensitivities for birth certificate data identifying gestational diabetes mellitus (GDM) ranged from 46% to 83%, median 65% (kappas: min = 0.545, med = 0.667, max = 0.828). Sensitivities for the two studies using hospital discharge data for identifying PDM were 78% and 95% (kappas: 0.839 and 0.964), and for GDM were 71% and 81% (kappas: 0.584 and 0.840). Specificities were consistently above 98% for both data sources.
Conclusions: Overall, hospital discharge data performed better than birth certificates, marginally so for identifying GDM but substantially so for identifying PDM. Reports based on either source alone should focus on trends and disparities and include the caveat that results under represent the problem. Linking the two data sources may improve identification of both GDM and PDM cases.
abstract_id: PUBMED:29150251
Validity of Birth Certificate and Hospital Discharge Data Reporting of Labor Induction. Purpose: To examine the concordance of labor induction measures derived from birth certificate and hospital discharge data with each other and with maternal report.
Methods: Birth certificate data were linked with hospital discharge data and structured interviews of 2,851 mothers conducted 1 month after first childbirth. Those who reported that a doctor or nurse tried to cause their labor to begin, and were not in labor before that event, were classified as undergoing labor induction. The mothers were aged 18 to 35 years at study entry and delivered at 78 hospitals (76 in Pennsylvania and 2 out of state) from 2009 to 2011.
Results: The labor induction rate was 34.3% measured by maternal report, 29.4% by birth certificate data, and 26.2% by hospital discharge data. More than one-third of the women who reported labor induction were not reported as having been induced in the birth certificate data (33.6%), with similar results for the hospital discharge data (36.5%). The rate of underreporting of labor induction in the birth certificate data was higher for inductions occurring before 39 weeks of gestation (43.9%) than for inductions at 39 weeks or later (29.9%; p < .0001). Agreement between birth certificate and hospital discharge data was relatively low (kappa = 0.56), as was agreement between maternal report and birth certificate data (kappa = 0.58), and maternal report and hospital discharge data (kappa = 0.60).
Conclusions: Both the birth certificate and hospital discharge data exhibit relatively poor agreement with maternal report of labor induction and seem to miss a substantial portion of labor inductions.
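The agreement statistics in this abstract (kappa = 0.56-0.60) are Cohen's kappa, which discounts the agreement expected by chance from the raw percent agreement. A minimal sketch of the calculation is shown below; the 2x2 counts are rough reconstructions from the reported rates (n of about 2,851, 34.3% maternal report, 29.4% birth certificate, 33.6% underreporting) and are for illustration only, not the study's actual table.

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both sources record induction, d = neither does,
    b and c = the two kinds of disagreement."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement from the marginal proportions of each source
    p_chance = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    return (p_observed - p_chance) / (1 - p_chance)

# Roughly reconstructed counts for maternal report vs birth certificate.
print(round(cohens_kappa(a=649, b=329, c=189, d=1684), 2))  # ~0.58
```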
abstract_id: PUBMED:8700303
Neonatal seizures in the United States: results of the National Hospital Discharge Survey, 1980-1991. We present nationally representative estimates of neonatal seizure risk by gender, race and geographic region of the United States. National Hospital Discharge Survey data were analyzed for the period 1980-1991. Birth-weight-adjusted risks of neonatal seizures were calculated by the direct method for each gender or race group and for each census region by 4-year intervals. The overall risk of neonatal seizures was 2.84 per 1,000 live births. Risk estimates were consistently higher in low-birth-weight infants (relative risk 3.9). Unadjusted risks were similar across race and gender groups; birth weight adjustment had very little effect. No clear temporal trend was apparent over the 12-year study period. National Hospital Discharge Survey data provide reasonable, although conservative, estimates of neonatal seizure risks nationwide. Underascertainment of neonatal seizures, particularly among sick low-birth-weight infants, is likely due to data collection limitations of the National Hospital Discharge Survey.
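The 'direct method' of birth-weight adjustment mentioned above weights each group's stratum-specific seizure risks by a common standard birth-weight distribution, so that groups with different proportions of low-birth-weight infants can be compared on an equal footing. A minimal sketch with hypothetical weights and risks, not the survey's actual figures:

```python
# Standard-population birth-weight distribution and stratum-specific risks
# per 1,000 live births. All figures are hypothetical illustrations.
standard_weights = {"low_birth_weight": 0.07, "normal_birth_weight": 0.93}

def direct_adjusted_risk(stratum_risks, weights):
    """Directly standardized risk: weight each stratum-specific risk by the
    standard population's birth-weight distribution and sum."""
    return sum(weights[stratum] * risk for stratum, risk in stratum_risks.items())

group_risks = {
    "group A": {"low_birth_weight": 9.0, "normal_birth_weight": 2.3},
    "group B": {"low_birth_weight": 8.5, "normal_birth_weight": 2.4},
}

for name, risks in group_risks.items():
    print(name, round(direct_adjusted_risk(risks, standard_weights), 2), "per 1,000")
```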
abstract_id: PUBMED:26668053
Recording of Neonatal Seizures in Birth Certificates, Maternal Interviews, and Hospital Discharge Abstracts in a Cerebral Palsy Case-Control Study in Michigan. We evaluated the recording of neonatal seizures in birth certificates, hospital discharge abstracts, and maternal interviews in 372 children, 198 of them with cerebral palsy, born in Michigan hospitals from 1993 to 2010. In birth certificates, we examined checkbox items "seizures" or "seizure or serious neurologic dysfunction"; in hospital discharge abstracts ICD-9-CM codes 779.0, 345.X, and 780.3; and in maternal interviews a history of seizures or convulsions on day 1 of life recalled 2-16 years later. In 27 neonates, 38 neonatal seizures were recorded in 1 or more sources, 17 in discharge abstracts, 20 in maternal interviews, but just 1 on a birth certificate. The kappa coefficient (κ) between interviews and discharge abstracts was moderate (κ = 0.55), and substantial (κ = 0.63) if mothers noted use of antiepileptics. Agreement was higher (κ = 0.71 vs κ = 0.29) in term births than in preterm births. Birth certificates significantly underreported neonatal seizures.
abstract_id: PUBMED:1658322
The United States Standard Certificate of Live Birth. A critical commentary. The 1989 revision of the United States Standard Certificate of Live Birth was evaluated critically. The major changes in content and format were analyzed. The birth certificate has potential research uses and could be improved with some specific modifications.
abstract_id: PUBMED:8497369
Where babies are born and who attends their births: findings from the revised 1989 United States Standard Certificate of Live Birth. Objective: To examine the results of changes in the birth certificate with regard to characteristics of the mothers and the birth weights of their infants. The United States Standard Certificate of Live Birth was revised in 1989 to include specific designations for the place of births out of hospital and the presence of a nurse-midwife or other midwife at the birth.
Methods: All results are based on data from the Natality, Marriage and Divorce Statistics Branch of the National Center for Health Statistics, Centers for Disease Control. In all cases reported here, the data represent at least 91% of all United States births in 1989.
Results: Different patterns of birth attendance emerged in different settings. In residential births, other midwives and "others" attended 66% of all births, whereas in freestanding birth centers, physicians and certified nurse-midwives attended 75.1% of all births. The characteristics of the mothers differed substantially according to who attended their births in these settings. Substantial interstate variations in place and attendant were also documented.
Conclusion: The positive outcomes achieved in certain settings indicate a need for further research into the factors that influence birth outcomes.
abstract_id: PUBMED:8568572
Home birth in the United States, 1989-1992. A longitudinal descriptive report of national birth certificate data. This study was conducted to profile home birth in the United States from 1989 to 1992 using two birth certificate data sources from the Natality Branch of the National Center for Health Statistics (NCHS). Analysis included published and unpublished descriptive tables about all U.S. home births from 1989 to 1992, and a subset of the 82,210 U.S. home births from 1989 to 1991 that were drawn from NCHS national birth certificate data tapes. Results indicated that less than one-third of reported home births were attended by nurse-midwives or physicians. Distinct regional patterns in the frequency of home births were observed, with higher concentrations in the southwestern and western states. When compared with the average childbearing woman in the United States, mothers who gave birth at home were more likely to be older, have fewer years of education, be married, and be white; they were also more likely to be of higher parity and to receive less prenatal care. Home birth mothers were less likely than average to smoke or drink alcohol prenatally, to have a prenatal medical risk condition or an obstetric complication, or to receive certain prenatal tests. The outcomes of newborns born at home compared favorably to the national average during the same period. Several findings varied considerably by race or ethnicity of the mother.
abstract_id: PUBMED:16021070
The reporting of pre-existing maternal medical conditions and complications of pregnancy on birth certificates and in hospital discharge data. Objective: The purpose of this study was to determine the accuracy of live-birth certificates and hospital discharge data that reported of pre-existing maternal medical conditions and complications of pregnancy.
Study Design: We conducted a population-based validation study in 19 non-federal short-stay hospitals in Washington state with a stratified random sample of 4541 women who had live births between January 1, 2000, and December 31, 2000. True- and false-positive fractions were calculated.
Results: Birth certificate and hospital discharge data combined had substantially higher true-positive fractions than did birth certificate data alone for cardiac disease (54% vs 29%), acute or chronic lung disease (24% vs 10%), gestational diabetes mellitus (93% vs 64%), established diabetes mellitus (97% vs 52%), active genital herpes (77% vs 38%), chronic hypertension (70% vs 47%), pregnancy-induced hypertension (74% vs 49%), renal disease (13% vs 2%), and placenta previa (70% vs 33%). For the 2 medical risk factors that are available only on birth certificates, true-positive fractions were 37% for established genital herpes and 68% for being seropositive for hepatitis B surface antigen.
Conclusion: In Washington, most medical conditions and complications of pregnancy that affect mothers are substantially underreported on birth certificates, but hospital discharge data are accurate in the reporting of gestational and established diabetes mellitus and placenta previa. Together, birth certificate and hospital discharge data are much superior to birth certificates alone in the reporting of gestational diabetes mellitus, active genital herpes, and chronic hypertension.
Answer: Birth certificate and hospital discharge linkages are performed in many, but not all, of the 52 vital records jurisdictions in the United States. Nearly half of Vital Registrars (n = 22) and SSDI Coordinators (n = 23) reported that their jurisdiction had linked birth certificate and hospital discharge records at least once in the last 4 years. Among those who link, the majority of Vital Registrars (77.3%) and SSDI Coordinators (82.6%) link both maternal and infant hospital discharge records to the birth certificate (PUBMED:26140836).
Instruction: Premedication before intubation in UK neonatal units: a decade of change?
Abstracts:
abstract_id: PUBMED:37343322
Variability of neonatal premedication practices for endotracheal intubation and LISA in the UK (NeoPRINT survey). Objective: The NeoPRINT Survey was designed to assess premedication practices throughout UK NHS Trusts for both neonatal endotracheal intubation and less invasive surfactant administration (LISA).
Design: An online survey consisting of multiple choice and open answer questions covering preferences of premedication for endotracheal intubation and LISA was distributed over a 67-day period. Responses were then analysed using STATA IC 16.0.
Setting: Online survey distributed to all UK Neonatal Units (NNUs).
Participants: The survey evaluated premedication practices for endotracheal intubation and LISA in neonates requiring these procedures.
Main Outcome Measures: The use of different premedication categories as well as individual medications within each category was analysed to create a picture of typical clinical practice across the UK.
Results: The response rate for the survey was 40.8% (78/191). Premedication was used in all hospitals for endotracheal intubation, but overall 50% (39/78) of the responding units used premedication for LISA. Individual clinician preference had an impact on premedication practices within each NNU.
Conclusion: The wide variability in first-line premedication for endotracheal intubation noted in this survey could be overcome by applying the best available evidence through consensus guidance driven by organisations such as the British Association of Perinatal Medicine (BAPM). Secondly, the divisive view around LISA premedication practices noted in this survey requires an answer through a randomised controlled trial.
abstract_id: PUBMED:19221400
Premedication before intubation in UK neonatal units: a decade of change? Aims: To ascertain the prevalence of premedication before intubation and the choice of drugs used in UK neonatal units in 2007 and assess changes in practice since 1998.
Methods: A structured telephone survey of 221 eligible units was performed. 214 of the units surveyed completed the telephone questionnaire. The units were subdivided into those that routinely intubated and ventilated neonates (routine group) and those that intubated neonates prior to transfer to a regional unit (transfer group). A similar study was performed by one of the authors in 1998. The same telephone methodology was used in both studies.
Results: Premedication for newborn intubations was provided by 93% (198/214) of all UK units and 76% (162/214) had a written policy or guideline concerning premedication prior to elective intubation. Of those 198 units which premedicate, morphine was the most widely used sedative for newborn intubations, with 80% (158/198) using either morphine alone or in combination with other drugs. The most widely used combination was morphine and suxamethonium ± atropine, which was used by 21% (41/198) of all units. 78% (154/198) of all units administered a paralytic agent.
Conclusions: There has been substantial growth over the last decade in the number of UK neonatal units that provide some premedication for non-emergent newborn intubation, increasing from 37% in 1998 to 93% in 2007. This includes a concomitant increase in the use of paralytic drugs from 22% to 78%. However, the variety of drugs used merits further research.
abstract_id: PUBMED:33479006
Premedication for Nonemergent Intubation in the NICU: A Call for Standardized Practice. This paper discusses neonatal endotracheal intubation and the need for standardization in practice regarding the use of premedication. Intubation is common in the NICU because of resuscitation, surfactant administration, congenital anomalies, apnea, and sedation for procedures or surgery. Intubation is both painful and stressful. Unmedicated intubation is associated with several adverse outcomes including repeat and prolonged attempts, airway trauma, bradycardia, severe desaturation, and need for resuscitation. Most providers believe intubation is painful and that premedication should be provided; however, there is still resistance to provide premedication and inconsistency in doing so. Reasons for not providing premedication include concerns about medication side effects such as chest wall rigidity or prolonged respiratory depression inhibiting immediate extubation after surfactant administration. Premedication should include an opioid analgesic for pain, a benzodiazepine for an adjuvant sedation, a vagolytic to decrease bradycardia, and the optional use of a muscle relaxant for paralysis.
abstract_id: PUBMED:10634840
Premedication before intubation in UK neonatal units. Aims: To establish the extent and type of premedication used before intubation in neonatal units in the United Kingdom.
Methods: A structured telephone survey was conducted of 241 eligible units. Units were subdivided into those that routinely intubated and ventilated babies (routine group) and those that transferred intubated and ventilated babies (transfer group).
Results: Of the units contacted, 239 (99%) participated. Only 88/239 (37%) gave any sedation before intubating on the unit and only 34/239 (14%) had a written policy covering this. Morphine was used most commonly (66%), with other opioids and benzodiazepines used less frequently. Of the 88 units using sedation, 19 (22%) also used paralysis. Suxamethonium was given by 10/19 (53%) but only half of these combined it with atropine. Drug doses varied by factors of up to 200, even for commonly used drugs.
Conclusion: Most UK neonatal units do not sedate babies before intubating, despite evidence of physiological and practical benefits. Only a minority have written guidelines, which prohibits auditing of practice.
abstract_id: PUBMED:24577435
Impact of premedication on neonatal intubations by pediatric and neonatal trainees. Objective: To determine if premedication and training level affect the success rates of neonatal intubations.
Study Design: We retrospectively reviewed a hospital-approved neonatal intubation database from 2003 to 2010. Intubation success rate was defined as the number of successful intubations divided by the total number of attempts, and then compared by trainee's experience level and the use of premedication. Premedication regimen included anticholinergic, analgesic and muscle relaxant agents.
Result: There were 169 trainees who completed 1071 successful intubations with 2694 attempts. The median success rate was 36% by all trainees, and improved with training level from 29% for pediatric trainees to 50% for neonatal trainees (P<0.001). Premedication was used in 58% of intubation attempts. The median success rate was double with premedication (43% versus 22%, P<0.001).
Conclusion: Neonatal endotracheal intubation is a challenge for trainees. Intubation success rates progressively improve with experience. Premedication is associated with improved success rates for all training levels.
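The success rate in this study is simply successful intubations divided by total attempts, tabulated within premedication and trainee-level subgroups. A small sketch of that tabulation on hypothetical attempt records; the field names and values are illustrative and not the hospital database schema:

```python
# Each attempt record: (trainee_level, premedicated, success). Toy data only.
attempts = [
    ("pediatric", True, True), ("pediatric", True, False),
    ("pediatric", False, False), ("neonatal", True, True),
    ("neonatal", False, True), ("neonatal", False, False),
]

def success_rate(records):
    """Successful attempts divided by total attempts."""
    return sum(1 for *_, ok in records if ok) / len(records) if records else float("nan")

for premed in (True, False):
    subset = [r for r in attempts if r[1] == premed]
    print(f"premedication={premed}: success rate {success_rate(subset):.0%} "
          f"over {len(subset)} attempts")
```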
abstract_id: PUBMED:21089721
Premedication for non-emergency intubation in the neonate. Endotracheal intubation is frequently performed in neonatal intensive care. This procedure is extremely distressing and painful, and it has the potential for causing laryngospasm, hemodynamic changes, a rise in intracranial pressure and a risk of hemorrhage and airway injury. These adverse changes can be attenuated by using premedication with analgesic, sedative and muscle-relaxant drugs. Premedication is standard practice for pediatric and adult intubation, but in neonates the use of supportive pharmacological measures is still hotly debated, mainly in terms of the risks and benefits of using sedatives in unstable and premature newborns. In a recent UK survey, 90% of tertiary neonatal units reported the routine use of sedation prior to intubation with a combination of atropine plus an opioid (morphine or fentanyl), while 82% of such units routinely use a muscle relaxant. In Italy, a recent survey (in press) showed that the majority of NICUs (Neonatal Intensive Care Units) use the same association of drugs for analgesia and sedation before tracheal intubation, but "not always" in more than half of these units. There is clearly a persistent concern about using such drugs in preterm and newborn infants, despite recent evidence showing that premedication for elective neonatal intubation is safer and more effective than intubation when the infant is awake. Here we review the effects of using analgesic and sedative drugs on intubation conditions (good jaw relaxation, open and immobile vocal cords, suppression of pharyngeal and laryngeal reflexes), on the time it takes to complete the procedure successfully, on pain control, and on the potentially adverse effects of using combinations of drugs for sedation.
abstract_id: PUBMED:19490437
Use of premedication for intubation in tertiary neonatal units in the United Kingdom. Background: Endotracheal intubation and laryngoscopy are frequently performed procedures in neonatal intensive care. These procedures represent profoundly painful stimuli and have been associated with laryngospasm, bronchospasm, hemodynamic changes, raised intracranial pressure and an increased risk of intracranial hemorrhage. These adverse changes can cause significant neonatal morbidity but may be attenuated by the use of suitable premedication.
Aims: To evaluate current practices for premedication use prior to elective intubation in UK tertiary neonatal units.
Methods: Telephone questionnaire survey of all 50 tertiary neonatal units in the UK.
Results: Ninety percent of units report the routine use of sedation prior to intubation and 82% of units routinely use a muscle relaxant. Morphine was the most commonly used sedative and suxamethonium was the most commonly used muscle relaxant. Approximately half of the units also used atropine during intubation. Seventy seven percent of units had a written policy for premedication. Ten percent of the units did not routinely use any sedatives or muscle relaxants for elective intubation.
Conclusions: In comparison with data from a 1998 survey, our study demonstrated an increase in the number of units that have adopted a written policy for premedication use, and in the number routinely using premedication drugs for elective intubation. There remains little consensus as to which drugs should be used and in what dose.
abstract_id: PUBMED:31036701
Haemodynamic effects of premedication for neonatal intubation: an observational study. Objective: To examine changes in blood pressure (BP), cardiac output (CO) and cerebral regional oxygen saturation (rScO2) with administration of premedication for neonatal intubation.
Design: Pilot, prospective, observational study. Oxygen saturation, heart rate, CO, rScO2 and BP data were collected. Monitoring began 5 min prior to premedication and continued until spontaneous movement.
Setting: Single-centre, level 3 neonatal intensive care unit. Patients: 35 infants, all gestational ages; 81 eligible infants: 66 consented, 15 refused.
Interventions: Intravenous atropine, fentanyl or morphine, ± cisatracurium. Main Outcome Measures: BP, CO, rScO2. Results: n=37 intubations. Mean gestational age and median birth weight were 31 4/7 weeks and 1511 g. After premedication, 10 episodes resulted in a BP increase from baseline and 27 in a BP decrease. Of those whose BP decreased, 17 had <20% decrease and 10 had ≥20% decrease. Those with <20% BP decrease took an average of 2.5 min to return to baseline while those with a ≥20% BP decline took an average of 15.2 min. Three did not return to baseline by 35 min. Following intubation, further declines in BP (21%-51%) were observed in eight additional cases. One infant required a bolus for persistently low BPs. CO and rScO2 changes were statistically similar between the two groups.
Conclusion: About 30% of infants dropped their BP by ≥20% after premedication for elective intubation. These BP changes were not associated with any significant change in rScO2 or CO. More data are needed to better characterise the immediate haemodynamic changes and clinical outcomes associated with premedication.
abstract_id: PUBMED:23493980
Premedication for neonatal intubation: Current practice in Saudi Arabia. Background: Despite strong evidence of the benefits of rapid sequence intubation in neonates, it is still infrequently utilized in neonatal intensive care units (NICU), contributing to avoidable pain and secondary procedure-related physiological disturbances.
Objectives: The primary objective of this cross-sectional survey was to assess the practice of premedication and regimens commonly used before elective endotracheal intubation in NICUs in Saudi Arabia. The secondary aim was to explore neonatal physicians' attitudes regarding this intervention in institutions across Saudi Arabia.
Methods: A web-based, structured questionnaire was distributed by the Department of Pediatrics, Umm Al Qura University, Mecca, to neonatal physicians and consultants of 10 NICUs across the country by E-mail. Responses were tabulated and descriptive statistics were conducted on the variables extracted.
Results: 85% responded to the survey. Although 70% believed it was essential to routinely use premedication for all elective intubations, only 41% implemented this strategy. 60% cited fear of potential side effects for avoiding premedication and 40% indicated that the procedure could be executed more rapidly without drug therapy. Treatment regimens varied widely among respondents.
Conclusion: Rates of premedication use prior to non-emergent neonatal intubation are suboptimal. Flawed information and lack of unified unit policies hampered effective implementation. Evidence-based guidelines may influence country-wide adoption of this practice.
abstract_id: PUBMED:26042264
Newborns should be receiving premedication before elective intubation. Background: Intubation is a common neonatal procedure. Premedication is accepted as a standard of care, but its use is not universal and wide variations exist in practice.
Objective: To evaluate current practices for premedication use prior to elective neonatal intubation in South Africa (SA).
Method: We invited 481 clinicians to participate in a cross-sectional web-based survey.
Results: We received responses from 28.3% of the clinicians surveyed; 54.1% were from the private sector and 45.9% from the state sector. Most respondents worked in medium-sized neonatal units with six to ten beds. Most paediatricians (76.0%) worked in the private sector, and 78.6% of neonatologists in the state sector. Premedication was practised by 71.9% of the respondents, but only 38.5% of neonatal units had a written policy. Sedatives were used for premedication by 63.2% of the respondents. Midazolam (41.5%), morphine (34.0%) and ketamine (20.8%) were most commonly used. Muscle relaxants and atropine were not routinely administered. Suxamethonium was the muscle relaxant of choice. Varied combinations of agents or single agents were used. Midazolam used alone was the preferred option.
Conclusion: This first survey of premedication for neonatal intubation in SA revealed variations in practice, with a minority of clinicians following a written policy. The findings can be used to benchmark practice and inform the design of local collaborative trials aimed at determining optimal premedication prior to neonatal intubation. The survey demonstrates clinicians' reluctance to participate in surveys, suggesting a need for a national collaborative network to obtain representative data.
Answer: The practice of premedication before intubation in UK neonatal units has indeed undergone significant changes over a decade. In 1998, only 37% of UK neonatal units provided any sedation before intubating neonates, and a mere 14% had a written policy on premedication (PUBMED:10634840). By 2007, there was a substantial increase, with 93% of units providing some form of premedication for non-emergent newborn intubation, and 76% had a written policy or guideline concerning premedication prior to elective intubation (PUBMED:19221400). This growth also included a rise in the use of paralytic drugs, from 22% to 78% (PUBMED:19221400).
In 2007, morphine was the most widely used sedative for newborn intubations, with 80% of units using it alone or in combination with other drugs. The most common combination was morphine and suxamethonium, possibly with atropine, used by 21% of all units (PUBMED:19221400). By contrast, a survey conducted in 2009 showed that 90% of tertiary neonatal units reported the routine use of sedation prior to intubation, with a combination of atropine plus an opioid (morphine or fentanyl), and 82% of such units routinely used a muscle relaxant (PUBMED:19490437).
Despite the increase in the use of premedication, there was still a lack of consensus on which drugs should be used and in what doses (PUBMED:19490437). The variability in premedication practices was also highlighted in a more recent survey (NeoPRINT survey), which found that while premedication was used in all hospitals for endotracheal intubation, there was a wide variability in first-line premedication, and individual clinician preference had an impact on practices within each neonatal unit (PUBMED:37343322).
Overall, the trend over the decade indicates a move towards more standardized use of premedication in UK neonatal units, with an increase in the adoption of written policies and the routine use of sedation and muscle relaxants prior to elective intubation. However, the variability in drug choice and dosing suggests that further research and consensus guidance are needed to optimize premedication practices (PUBMED:37343322, PUBMED:19221400, PUBMED:19490437). |
Instruction: Pancreatic abnormalities detected by endoscopic ultrasound (EUS) in patients without clinical signs of pancreatic disease: any difference between standard and Rosemont classification scoring?
Abstracts:
abstract_id: PUBMED:24854620
Pancreatic abnormalities detected by endoscopic ultrasound (EUS) in patients without clinical signs of pancreatic disease: any difference between standard and Rosemont classification scoring? Background: The prevalence of nine EUS features of chronic pancreatitis (CP) according to the standard Wiersema classification has been investigated in 489 patients undergoing EUS for an indication not related to pancreatico-biliary disease. We showed that 82 subjects (16.8%) had at least one ductular or parenchymal abnormality. Among them, 18 (3.7% of study population) had ≥3 Wiersema criteria suggestive of CP. Recently, a new classification (Rosemont) of EUS findings consistent, suggestive or indeterminate for CP has been proposed.
Aim: To stratify healthy subjects into different subgroups on the basis of EUS features of CP according to the Wiersema and Rosemont classifications and to evaluate the agreement in the diagnosis of CP between the two scoring systems. The weighted kappa statistic was computed to evaluate the strength of agreement between the two scoring systems. Univariate and multivariate analyses of the association between EUS abnormalities and habits were performed.
Results: Eighty-two EUS videos were reviewed. Using the Wiersema classification, 18 subjects showed ≥3 EUS features suggestive of CP. According to the Rosemont classification, the EUS diagnosis of CP was considered consistent in only one of these 18 subjects. The weighted kappa statistic was 0.34, indicating that the strength of agreement was 'fair'. Alcohol use and smoking were identified as risk factors for having pancreatic abnormalities on EUS.
Conclusions: The prevalence of EUS features consistent or suggestive of CP in healthy subjects according to the Rosemont classification is lower than that assessed by Wiersema criteria. In that regard the Rosemont classification seems to be more accurate in excluding clinically relevant CP. Overall agreement between the two classifications is fair.
abstract_id: PUBMED:32414753
Endoscopic ultrasound (EUS) and the management of pancreatic cancer. Pancreatic cancer is one of the leading causes of cancer-related mortality in western countries. Early diagnosis of pancreatic cancers plays a key role in the management by identification of patients who are surgical candidates. The advancement in the radiological imaging and interventional endoscopy (including endoscopic ultrasound (EUS), endoscopic retrograde cholangiopancreatography and endoscopic enteral stenting techniques) has a significant impact in the diagnostic evaluation, staging and treatment of pancreatic cancer. The multidisciplinary involvement of radiology, gastroenterology, medical oncology and surgical oncology is central to the management of patients with pancreatic cancers. This review aims to highlight the diagnostic and therapeutic role of EUS in the management of patients with pancreatic malignancy, especially pancreatic ductal adenocarcinoma.
abstract_id: PUBMED:22687386
Conventional versus Rosemont endoscopic ultrasound criteria for chronic pancreatitis: interobserver agreement in same day back-to-back procedures. Background And Study Aims: Endoscopic ultrasound (EUS) is a commonly used and fairly sensitive method of assessing changes of chronic pancreatitis (CP) when routine noninvasive imaging has not shown overt features of CP. The aim of this study is to evaluate the interobserver agreement (IOA) for the classic (MSC) and the Rosemont (RC) criteria for the diagnosis of chronic pancreatitis on the basis of clinical practice.
Patients And Methods: Two experienced endosonographers evaluated patients referred for EUS on the same day, in a blinded fashion. Data on the sonographic criteria of both the MSC and the RC were collected. Agreement was calculated using kappa statistics.
Results: A total of 69 patients were evaluated. The study population included mainly patients without pancreatic diseases, resulting in a low number of sonographic findings. Agreement for the final diagnosis was moderate for both classification systems of chronic pancreatitis (k = 0.53 for conventional and k = 0.46 for Rosemont).
Conclusions: The IOA of EUS in the diagnosis of CP is moderate. The concordance values obtained in clinical practice are similar to those obtained in multicenter studies. The RC does not seem to improve the IOA of MSC.
abstract_id: PUBMED:36159010
Exosomal glypican-1 is elevated in pancreatic cancer precursors and can signal genetic predisposition in the absence of endoscopic ultrasound abnormalities. Background: Individuals within specific risk groups for pancreatic ductal adenocarcinoma (PDAC) [mucinous cystic lesions (MCLs), hereditary risk (HR), and new-late onset diabetes mellitus (NLOD)] represent an opportunity for early cancer detection. Endoscopic ultrasound (EUS) is a premium image modality for PDAC screening and precursor lesion characterization. While no specific biomarker is currently clinically available for this purpose, glypican-1 (GPC1) is overexpressed in the circulating exosomes (crExos) of patients with PDAC compared with healthy subjects or those harboring benign pancreatic diseases.
Aim: To evaluate the capacity of GPC1+ crExos to identify individuals at higher risk within these specific groups, all characterized by EUS.
Methods: This cross-sectional study with a prospective unicentric cohort included 88 subjects: 40 patients with MCL, 20 individuals with HR, and 20 patients with NLOD. A control group (CG) underwent EUS for reasons other than pancreatic pathology, with a normal pancreas and absence of hereditary risk factors (n = 8). The inclusion period was between October 2016 and January 2019, and the study was approved by the Ethics Committee of Centro Hospitalar Universitário de São João, Porto, Portugal. All patients provided written informed consent. EUS and blood tests for quantification of GPC1+ crExos by flow cytometry and carbohydrate antigen 19-9 (CA 19-9) levels by ELISA were performed in all subjects. EUS-guided tissue acquisition was performed whenever necessary. SPSS® version 27.0 (IBM Corp., Armonk, NY, United States) was used for statistical analysis. All graphs were created using GraphPad Prism 7.00 (GraphPad Software, San Diego, CA, United States).
Results: Half of MCLs harbored worrisome features (WF) or high-risk stigmata (HRS). Pancreatic abnormalities were detected by EUS in 10.0% and 35.0% in HR and NLOD individuals, respectively, all considered non-malignant and "harmless." Median levels of GPC1+ crExos were statistically different: MCL [99.4%, interquartile range (IQR): 94.9%-99.8%], HR (82.0%, IQR: 28.9%-98.2%), NLOD (12.6%, IQR: 5.2%-63.4%), and CG (16.2%, IQR: 6.6%-20.1%) (P < 0.0001). Median levels of CA 19-9 were within the normal range in all groups (standard clinical cut-off of 37 U/mL). Within HR, individuals with a positive history of cancer had higher median levels of GPC1+ crExos (97.9%; IQR: 61.7%-99.5%), compared to those without (59.7%; IQR: 26.3%-96.4%), despite no statistical significance (P = 0.21). Pancreatic cysts with WF/HRS were statistically associated with higher median levels of GPC1+ crExos (99.6%; IQR: 97.6%-99.8%) compared to those without (96.5%; IQR: 81.3%-99.5%) (P = 0.011), presenting an area under the receiver operating characteristic curve value of 0.723 (sensitivity 75.0% and specificity 67.7%, using a cut-off of 98.5%; P = 0.012).
Conclusion: GPC1+ crExos may act as biomarker to support the diagnosis and stratification of PDAC precursor lesions, and in signaling individuals with genetic predisposition in the absence of EUS abnormalities.
abstract_id: PUBMED:27076870
What are the current and potential future roles for endoscopic ultrasound in the treatment of pancreatic cancer? Pancreatic adenocarcinoma is the fourth leading cause of cancer-related death in the United States. Due to the aggressive tumor biology and late manifestations of the disease, long-term survival is extremely uncommon and the current 5-year survival rate is 7%. Over the last two decades, endoscopic ultrasound (EUS) has evolved from a diagnostic modality to a minimally invasive therapeutic alternative to radiologic procedures and surgery for pancreatic diseases. EUS-guided celiac plexus intervention is a useful adjunct to conventional analgesia for patients with pancreatic cancer. EUS-guided biliary drainage has emerged as a viable option in patients who have failed endoscopic retrograde cholangiopancreatography. Recently, the use of lumen-apposing metal stent to create gastrojejunal anastomosis under EUS and fluoroscopic guidance in patients with malignant gastric outlet obstruction has been reported. On the other hand, anti-tumor therapies delivered by EUS, such as the injection of anti-tumor agents, brachytherapy and ablations are still in the experimental stage without clear survival benefit. In this article, we provide updates on well-established EUS-guided interventions as well as novel techniques relevant to pancreatic cancer.
abstract_id: PUBMED:24837987
Impact of inconclusive endoscopic ultrasound-guided fine-needle aspiration results in the management and outcome of patients with solid pancreatic masses. Background And Aim: Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) can be inconclusive in diagnosing solid pancreatic masses. The aim of the present study was to evaluate the impact of an inconclusive EUS-FNA in the management of patients with solid pancreatic masses.
Methods: This is a retrospective analysis of a prospective database of patients with solid pancreatic masses referred for EUS-FNA between December 2011 and December 2013. Consecutive patients with inconclusive initial EUS-FNA were included. Demographic, clinical, procedural and outcome data were analyzed.
Results: Over the study period, 387 patients underwent EUS-FNA of solid pancreatic masses, of whom 38 had inconclusive cytology. Of these 38 patients, 18 were categorized as atypical, two as indeterminate or suspicious for malignancy, and 18 as a benign process. Subsequently, 24 (63.2%) patients achieved a cytopathological diagnosis by repeat EUS-FNA (8), transcutaneous FNA (4) or surgery (12). Repeat EUS-FNA was performed a median of 13 days after the index examination and resulted in a conclusive diagnosis in 72.7% of cases. Five patients undergoing surgery were confirmed to have benign lesions. Of the 14 (36.8%) patients not receiving a positive cytopathological diagnosis, 11 were eventually confirmed to be malignant based on clinical outcome and imaging evidence.
Conclusions: Inconclusive EUS-FNA could lead to unnecessary surgical procedures in patients with resectable solid pancreatic masses if no cytopathological diagnosis is obtained through either repeat or alternative diagnostic modalities. Repeat EUS-FNA provided a conclusive diagnosis in a majority of cases, and therefore should be strongly considered ahead of other modalities.
abstract_id: PUBMED:33390342
Endoscopic ultrasound elastography for small solid pancreatic lesions with or without main pancreatic duct dilatation. Background/Objectives: Endoscopic ultrasound elastography (EUS-EG) is useful for diagnosis of small solid pancreatic lesions (SPLs), particularly in excluding pancreatic cancer (PC), but its dependence on main pancreatic duct dilatation (MPDD) has not been examined. We aimed to investigate EUS-EG for diagnosis of small SPLs with and without MPDD.
Methods: Patients with pathologically diagnosed SPLs of ≤20 mm were included and retrospectively analyzed. Using the blue:green ratio, an EUS-EG image was classified as blue-dominant, equivalent, or green-dominant. Using multiple EUS-EG images per patient, a lesion with a greater number of blue-dominant than green-dominant images was classified as stiff, and the others as soft. EUS-EG images in random order were judged by three raters. Considering stiff SPLs as PC, diagnostic performance of EUS-EG was examined for SPLs with and without MPDD.
Results: Of 126 cases analyzed, 65 (52%) were diagnosed as PC, and 63 (50%) had MPDD. A total of 1077 EUS-EG images were examined (kappa coefficient = 0.783). Lesions were classified as stiff in 91 cases and soft in 35 (kappa coefficient = 0.932). The ratio of stiff to soft lesions was significantly higher in PC than in non-PC (62:3 vs. 29:32, P < 0.001). The sensitivity, specificity, and negative predictive value of a stiff lesion with vs. without MPDD for diagnosis of PC were 94%, 23%, and 50% vs. 100%, 60%, and 100%, respectively.
Conclusions: Using the EUS-EG stiffness classification for small SPLs, PC can be excluded with high confidence and concordance for a soft lesion without MPDD.
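As a technical aside, diagnostic indices such as the sensitivity, specificity and negative predictive value quoted above are all derived from a simple 2x2 cross-tabulation of the test result against the final diagnosis. The short Python sketch below is illustrative only: the counts are hypothetical placeholders, not the subgroup data of this study, which are not fully reported in the abstract.

# Illustrative only: sensitivity, specificity and NPV from a 2x2 table.
# The counts are hypothetical placeholders, not data from PUBMED:33390342.

def diagnostic_indices(tp, fp, fn, tn):
    """Return sensitivity, specificity and negative predictive value."""
    sensitivity = tp / (tp + fn)   # diseased cases correctly called positive
    specificity = tn / (tn + fp)   # non-diseased cases correctly called negative
    npv = tn / (tn + fn) if (tn + fn) else float("nan")  # true negatives among all test negatives
    return sensitivity, specificity, npv

# Example: a 'stiff' elastography reading treated as a positive test for pancreatic cancer.
sens, spec, npv = diagnostic_indices(tp=30, fp=12, fn=0, tn=18)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, NPV={npv:.2f}")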
abstract_id: PUBMED:34663214
Evaluation of preoperative diagnostic methods for resectable pancreatic cancer: a diagnostic capability and impact on the prognosis of endoscopic ultrasound-guided fine needle aspiration. Background: A pathological diagnosis of pancreatic cancer should be obtained whenever possible to determine the appropriate treatment strategy, but priorities and algorithms for diagnostic methods have not yet been established. In recent years, endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) has become the primary method of collecting tissue in pancreatic disease, but the effect of EUS-FNA on surgical results and prognosis has not been clarified.
Aims: To evaluate the diagnostic ability of EUS-FNA and its effect on the preoperative diagnosis, surgical outcome, and prognosis of pancreatic cancer.
Methods: Between January 2005 and June 2017, 293 patients who underwent surgical resection for pancreatic cancer were retrospectively evaluated. The outcomes of interest were the diagnostic ability of EUS-FNA and its influence on the surgical results and prognosis.
Results: The diagnostic sensitivity of EUS-FNA was 94.4%, which was significantly higher than that of endoscopic retrograde cholangiopancreatography (ERCP) (45.5%) (p < 0.001). The adverse event rate of ERCP was 10.2%, which was significantly higher than that of EUS-FNA (1.3%) (p = 0.001). Patients were divided into an FNA group (N = 160) and a non-FNA group (N = 133) according to the preoperative diagnostic method. There was no significant difference in surgical curability (R0 resection) between the FNA group (65.0% [104/160]) and the non-FNA group (64.7% [86/133]; p = 1.000). In the prognostic analysis of the 256 patients with curative R0 or R1 resection, the recurrence rate was 54.3% (70/129) in the FNA group and 57.4% (73/127) in the non-FNA group. Peritoneal dissemination occurred in 34.3% (24/70) in the FNA group and 21.9% (16/73) in the non-FNA group; neither difference was statistically significant. The median survival times of the FNA and non-FNA groups were 955 days and 799 days, respectively, with no significant difference between the two groups (log-rank p = 0.735). In the Cox proportional hazards model, staging, curability, and adjuvant chemotherapy were the dominant prognostic factors, whereas the preoperative diagnostic method (EUS-FNA) itself was not.
Conclusions: EUS-FNA is a safe procedure with high diagnostic ability for the preoperative examination of pancreatic cancer. It can be considered the first-choice method, as it did not influence surgical curability, postoperative recurrence, peritoneal dissemination or prognosis.
abstract_id: PUBMED:34120342
Accuracy of cytopathology evaluation for resected benign and malignant pancreatic disease. Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) is the preferred method for diagnosing pancreatic masses. While the diagnostic success of EUS-FNA is widely accepted, the actual performance of EUS-FNA is not known. This study sought to define the accuracy of EUS-FNA compared with the gold standard, surgically resected specimens. The study was a single-institution, retrospective chart review of patients with surgically resected pancreatic specimens from 2005 to 2015 with a preoperative EUS-FNA or biliary brushing. Cytological reports were organized from least concerning (i.e., low chance of malignancy) to most concerning (high chance of malignancy) into eight cytologic categories. We identified 741 cytologic cases: 530 EUS-FNA and 211 endoscopic brushings. For EUS-FNA samples, 62.5% of "benign" samples proved to be "benign" on surgical pathology. Cytologic diagnoses of "suspicious for malignancy" and "positive for malignancy" were concordant with a cancer diagnosis on surgical pathology in 93.3% and 98.0% of cases, respectively. EUS-FNA proved to be highly reliable at diagnosing malignancy for cytologic samples that were "suspicious" or "positive" for malignancy. Paired with supportive clinical data, these interpretations may be used to justify cancer treatment.
abstract_id: PUBMED:23872485
Endoscopic approach to the diagnosis and treatment of pancreatic disease. Purpose Of Review: The purpose of this review is to examine the recent developments in the use of endoscopic retrograde cholangio-pancreatography (ERCP) and endoscopic ultrasound (EUS) in the management of patients with pancreatic-biliary disease.
Recent Findings: The use of ERCP to guide selective placement of pancreatic sphincterotomes, stone extraction balloons, and stents enables clinicians to treat pancreatic sphincteric and ductal disorders. Pancreatic stones are a remediable cause of recurrent pancreatitis and small calculi can be easily removed. The gold standard for the diagnosis of pancreas divisum remains ERCP and sphincterotomy is highly effective in the treatment of relapsing pancreatitis. Intraductal papillary mucinous neoplasms are the most common pancreatic malignancy and ERCP, as well as EUS can identify and sample the solid and cystic lesions. Mural nodules can be detected and sampled effectively by EUS-fine needle aspiration (FNA). The sensitivity of EUS-FNA for pancreatic adenocarcinoma is excellent (more than 85%). Although cyst fluid carcinoembryonic antigen is a very good marker for the presence of a mucinous cystic lesion, it is not an indicator of malignancy.
Summary: In summary, ERCP and EUS are important tools for the management of benign and malignant lesions of the pancreas.
Answer: Yes, there is a difference between standard and Rosemont classification scoring when detecting pancreatic abnormalities using endoscopic ultrasound (EUS) in patients without clinical signs of pancreatic disease. According to a study that investigated the prevalence of nine EUS features of chronic pancreatitis (CP) according to the standard Wiersema classification in 489 patients, 82 subjects (16.8%) had at least one ductular or parenchymal abnormality, and 18 (3.7% of the study population) had ≥3 Wiersema criteria suggestive of CP. However, when these cases were reviewed using the Rosemont classification, only one patient was considered to have a consistent diagnosis of CP. The weighted kappa statistic was 0.34, indicating that the strength of agreement between the two scoring systems was 'fair'. The study concluded that the prevalence of EUS features consistent or suggestive of CP in healthy subjects according to the Rosemont classification is lower than that assessed by Wiersema criteria, suggesting that the Rosemont classification may be more accurate in excluding clinically relevant CP (PUBMED:24854620).
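For readers unfamiliar with the statistic, the 'fair' agreement (weighted kappa = 0.34) referred to above is a chance-corrected measure of agreement between two scoring systems. The sketch below shows how such a value can be computed with scikit-learn; the paired classifications are hypothetical examples, not the original study data.

# Minimal sketch: chance-corrected agreement between two classification systems.
# The paired labels are hypothetical, not the data from PUBMED:24854620.
from sklearn.metrics import cohen_kappa_score

# Per-subject category under each classification (0 = normal,
# 1 = indeterminate/suggestive, 2 = consistent with chronic pancreatitis).
wiersema = [0, 0, 1, 2, 2, 1, 0, 2, 1, 0]
rosemont = [0, 0, 1, 1, 2, 0, 0, 1, 1, 0]

# Linear weights penalise disagreements more the further apart the categories are,
# the usual choice for ordered scales such as these.
kappa = cohen_kappa_score(wiersema, rosemont, weights="linear")
print(f"weighted kappa = {kappa:.2f}")  # values of 0.21-0.40 are conventionally read as 'fair'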
Instruction: Are elective surgical operations cancelled due to increasing medical admissions?
Abstracts:
abstract_id: PUBMED:15693380
Are elective surgical operations cancelled due to increasing medical admissions? Background: Cancellation of operations increases theatre costs and decreases efficiency. We examined the causes of theatre cancellations in general surgery.
Methods: The Beaumont hospital database (ORSUS system) and theatre records were examined retrospectively between April 1997 and March 2002. The number and causes of theatre cancellations, the number of emergency admissions and their length of hospital stay were studied.
Results: The numbers of elective operations cancelled in April 1997-March 1998 and April 2001-March 2002 were 368 and 427, respectively. 'No bed' was the reason for theatre cancellation in 114 (31.0%) cases between April 1997-March 1998, and this increased to 267 (62.5%) cases between April 2001-March 2002. Between April 1997-March 1998 and April 2001-March 2002, general surgical emergency admissions decreased by 6.74% (3,116 to 2,906), and emergency surgical admissions across the specialties decreased by 2.02% (4,002 to 3,921). In the same time interval, general medical emergency admissions rose from 4,195 to 5,386 (a 28.39% increase), and emergency medical admissions across the specialties rose from 5,401 to 6,689 (a 23.84% increase). General surgical bed days for emergency admissions fell between April 1997-March 1998 and April 2001-March 2002 from 28,839 to 26,698 (7.4% decrease). There was a similar decrease from 38,188 to 36,004 (5.7% decrease) for all surgical specialties. Total bed days necessitated by general medical emergency admissions increased from 53,226 to 61,623 (15.8%). Across the medical specialties, an increase from 71,590 to 82,180 bed days (14.79%) was seen.
Conclusions: Elective surgery cancellation is a significant problem with far-reaching consequences. While multifactorial in aetiology, increased bed usage by medical specialties is one important factor. This study has implications for doctors, training, administrators and patients.
abstract_id: PUBMED:18444594
Cancelled elective general surgical operations in Ayub Teaching Hospital. Background: Cancellation of operations in hospitals is a significant problem with far reaching consequences. This study was planned to evaluate reasons for cancellation of elective surgical operation on the day of surgery in Ayub Teaching Hospital, Abbottabad.
Methods: From July 2006 to June 2007 the medical records of all the patients who had their operations cancelled on the day of surgery in all the three General Surgical units of Ayub Teaching Hospital, Abbottabad were audited prospectively. The number of operation cancelled and reasons for cancellations were documented.
Results: 3756 patients were scheduled for surgery during the study period and 2820 (75%) were operated upon. 936 (25%) operations were cancelled, of which 338 (36%) were cancelled due to shortage of time, 296 (31.6%) due to medical reasons, 152 (16.2%) due to shortage of beds, and 55 (5.8%) due to shortage of anaesthetists. Three operation lists were lost completely. Anaesthetists cancelled 43% of operations, surgeons cancelled 39%, and 18% were cancelled due to organizational reasons.
Conclusion: Cancellations of patients on operation lists account for a substantial proportion (25%) of cases. The majority of cancellations were due to reasons other than patients' medical conditions. Better management could have avoided most of these cancellations.
abstract_id: PUBMED:36540699
Elective operations delay and emergency department visits and inpatient admissions during COVID-19. Introduction: At the beginning of the COVID-19 pandemic, many hospitals postponed elective operations for a 12-week period in early 2020. During this time, there was concern that the delay would lead to worse health outcomes. The objective of this study is to analyze the effect of delaying operations during this period on ED (Emergency Department) visits and/or urgent IP (Inpatient) admissions.
Methods: Electronic Health Record (EHR) data on canceled elective operations between 3/17/20 and 6/8/20 were extracted, and a descriptive analysis was performed examining patient demographics, delay time (days), procedure type, and procedure for rescheduled, completed elective operations with and without a related ED visit and/or IP admission during the delay period.
Results: Only 4 out of 197 (2.0%) operations among 4 patients out of 186 patients (2.0%) had an ED visit or IP admission diagnosis related to the postponed operation. When comparing the two groups, the 4 patients were older and had a longer median delay time compared to the 186 patients without an ED visit or IP admission.
Conclusion: Postponement of certain elective operations may be done with minimal risk to the patient during times of crisis. However, this minimal risk may be due to the study site's selection of elective operations to postpone. For example, none of the elective operations canceled or postponed were cardiovascular operations, which have worse health outcomes when delayed.
abstract_id: PUBMED:38274927
Introduction of a New Protocol to Limit the Number of Cancelled Elective Orthopaedic Operations Due to Asymptomatic Bacteriuria. Background: Asymptomatic bacteriuria (ASB) poses a significant diagnostic dilemma for medical professionals. Current hospital screening protocol determines the likelihood of a positive diagnosis of a urinary tract infection (UTI) based on the results of a bedside urinalysis. ASB, defined as a positive urine culture in the absence of symptoms, can contribute to unnecessary cancellations, poor utilisation of theatre time, and delayed patient care. We present a two-cycle audit proposing a new pathway to addressing ASB in patients awaiting elective orthopaedic surgery, aiming to optimise surgical yield. Our objectives are to identify areas for improvement in our departmental practices with respect to asymptomatic bacteriuria compared to the published literature. We propose a new protocol targeted to improve our current practices to minimise patient cancellations and optimise theatre utilisation. Methodology: A total of 78 patients who had an elective orthopaedic procedure cancelled at a large district general hospital offering tertiary orthopaedic services, between two study periods spanning March 2018 to April 2019 and May 2019 to March 2020, were identified from electronic hospital records and theatre management systems. Demographics, procedure details, and reasons for cancellations, including the result of urinalysis and the presence of UTI symptoms, were assessed. Our pathway was introduced after the first study period and subsequently re-audited to assess adherence to the new protocol and its effect on cancellations. Results: We identified 78 patients, with a 50:50 male:female split and an average age of 63 (range = 9-90). Of the 33 patients in the first cohort, seven (21.2%) were cancelled due to UTI risk based on positive urinalysis. Of these seven cancellations, one (14.3%) patient reported symptoms of a UTI. The second cohort comprised 45 patients, two (4.4%) of whom were cancelled due to UTI risk based on symptom questionnaire results. These two symptomatic patients along with another two asymptomatic patients (8.8% in total) were found to have positive urinalyses; however, the two asymptomatic patients had their operations cancelled for unrelated reasons. Conclusions: The study has shown that previously, among patients awaiting elective orthopaedic operations whose procedures were cancelled for UTI risk, 85.7% were cancelled due to ASB. After the introduction of a new protocol focussing on symptoms rather than urinalysis, we estimate that the number of cancelled elective orthopaedic operations has reduced by 71.4%, thereby greatly improving the utilisation of theatre time.
abstract_id: PUBMED:15693381
Impact of emergency admissions on elective surgical workload. Background: Day case surgery is the most cost-effective approach for all minor, most intermediate and some major surgery.
Aims: To examine the effect of the current 'escalation' policy of opening the surgical day ward to A&E admissions at the expense of planned surgery.
Patients And Methods: A retrospective study was carried out on all elective general surgical operations planned for January through March 2003. The number of cases cancelled and the reasons for cancellation were documented.
Results: The total number of patients booked for surgery was 836, of whom 66.6% were day cases (557 patients). Overall, 338 patients, accounting for 40.4% of all planned cases, were cancelled. Day case cancellations accounted for 68.9% of all cancellations (233 patients). Bed unavailability, due to the overflow of A&E admissions, was the main reason, accounting for 92% of cancelled patients and 73.8% of day ward cancellations.
Conclusions: The cancellation of surgery creates untold hardship for patients who plan their working and family lives around the proposed operation date. Most are cancelled at less than 24 hours notice. The cost implications to the community are immense but have not been calculated. The separation of emergency and planned surgery is essential through adequate observation ward access.
abstract_id: PUBMED:22275936
Incidence, causes and pattern of cancellation of elective surgical operations in a university teaching hospital in the Lake Zone, Tanzania. Background: Cancellation of elective surgical operations is recognized as a major cause of emotional trauma to patients as well as their families. This study was carried out to assess the incidence, causes and pattern of cancellation of elective surgical operations in our setting and to find the appropriate solutions for better patient management.
Methods: This was a prospective hospital-based study which was conducted in a teaching hospital at Bugando medical Centre from March 2009 to February 2010.
Results: A total of 3,064 patients were scheduled for elective surgical operations. Of these, 644 (21.0%) patients' operations were cancelled. General surgery had the highest rate of cancellations (31.5%), followed by orthopaedic surgery (25.5%). Lack of theatre space and lack of theatre facilities were the most common causes of cancellations, in 53.0% and 28.4% of cases respectively. The majority of cancellations (82.0%) were attributable to hospital administration, and most (93.8%) were preventable. The mean hospital stay was 28.46 days and was significantly related to the number of cancellations (p < 0.001).
Conclusion: Cancellation of elective surgical operations is a significant problem in our hospital. To prevent unnecessary cancellations, efforts should be made to enhance cost effectiveness through careful planning and efficient utilization of the few available hospital resources.
abstract_id: PUBMED:32969377
Cancellation of Elective General Surgical Operations on the Day of Surgery. Background: Unanticipated cancellation of scheduled elective operations decreases theatre efficiency and is inconvenient to patients, their families and the medical teams. It creates a logistic and financial burden associated with extended hospital stays and repetition of pre-operative preparations. The aim of this study was to determine the incidence and causes of cancellation of surgical operations in our centre and to make recommendations to reduce them.
Methods: This was a prospective cross-sectional study carried out over a period of one year in Manipal Teaching Hospital, Pokhara, from July 2017 to June 2018. A consecutive sampling method was used. All patients booked for elective surgical procedures were enrolled in the study. The age, gender, diagnosis, proposed surgery and reasons for cancellation were documented and analysed.
Results: A total of 794 patients were scheduled for elective surgical operations during the study period and 86 (10.83%) patients' operations were cancelled. There were 54 (62.79%) males and 32 (37.20%) females. A recent change in the medical status of the patient (n=18; 20.9%) was the main reason for cancellation, followed by overbooking (n=11; 12.7%) and a change in the plan of management (n=9; 10.4%).
Conclusions: Avoidable factors are mainly responsible for the cancellation of surgeries. Efficient management, pre-operative assessment, better utilization of the available hospital resources, and improved communication between medical teams and patients would reduce the rate of cancellation of booked surgical procedures.
abstract_id: PUBMED:17455812
Cancelled elective operations: an observational study from a district general hospital. Purpose: Cancelled operations are a major drain on health resources: 8 per cent of scheduled elective operations are cancelled nationally, within 24 hours of surgery. The aim of this study is to define the extent of this problem in one Trust, and suggest strategies to reduce the cancellation rate.
Design/methodology/approach: A prospective survey was conducted over a 12-month period to identify cancelled day case and in-patient elective operations. A dedicated nurse practitioner was employed for this purpose, ensuring that the reasons for cancellation and the timing in relation to surgery were identified. The reasons for cancellation were grouped into patient-related reasons, hospital clinical reasons and hospital non-clinical reasons.
Findings: In total, 13,455 operations were undertaken during the research period and 1,916 (14 per cent) cancellations were recorded, of which 615 were day cases and 1,301 in-patients: 45 per cent (n = 867) of cancellations were within 24 hours of surgery; 51 per cent of cancellations were due to patient-related reasons; 34 per cent were cancelled for non-clinical reasons; and 15 per cent for clinical reasons. The common reasons for cancellation were inconvenient appointment (18.5 per cent), list over-running (16 per cent), the patients thought that they were unfit for surgery (12.2 per cent) and emergencies and trauma (9.4 per cent).
Practical Implications: This study demonstrates that 14 per cent of elective operations are cancelled, nearly half of which are within 24 hours of surgery. The cancellation rates could be significantly improved by directing resources to address patient-related causes and hospital non-clinical causes.
Originality/value: This paper demonstrates that most cancellations of elective operations are due to patient-related causes and suggests several changes to limit the impact of these cancellations on elective operating lists.
abstract_id: PUBMED:32395848
Elective surgery cancellations due to the COVID-19 pandemic: global predictive modelling to inform surgical recovery plans. Background: The COVID-19 pandemic has disrupted routine hospital services globally. This study estimated the total number of adult elective operations that would be cancelled worldwide during the 12 weeks of peak disruption due to COVID-19.
Methods: A global expert response study was conducted to elicit projections for the proportion of elective surgery that would be cancelled or postponed during the 12 weeks of peak disruption. A Bayesian β-regression model was used to estimate 12-week cancellation rates for 190 countries. Elective surgical case-mix data, stratified by specialty and indication (surgery for cancer versus benign disease), were determined. This case mix was applied to country-level surgical volumes. The 12-week cancellation rates were then applied to these figures to calculate the total number of cancelled operations.
Results: The best estimate was that 28 404 603 operations would be cancelled or postponed during the peak 12 weeks of disruption due to COVID-19 (2 367 050 operations per week). Most would be operations for benign disease (90·2 per cent, 25 638 922 of 28 404 603). The overall 12-week cancellation rate would be 72·3 per cent. Globally, 81·7 per cent of operations for benign conditions (25 638 922 of 31 378 062), 37·7 per cent of cancer operations (2 324 070 of 6 162 311) and 25·4 per cent of elective caesarean sections (441 611 of 1 735 483) would be cancelled or postponed. If countries increased their normal surgical volume by 20 per cent after the pandemic, it would take a median of 45 weeks to clear the backlog of operations resulting from COVID-19 disruption.
Conclusion: A very large number of operations will be cancelled or postponed owing to disruption caused by COVID-19. Governments should mitigate against this major burden on patients by developing recovery plans and implementing strategies to restore surgical activity safely.
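The 45-week backlog estimate quoted above can be loosely reproduced with back-of-the-envelope arithmetic from the aggregate figures in the abstract. The sketch below is not the paper's country-level Bayesian β-regression model; it simply combines the quoted totals under the stated assumption of a 20% post-pandemic increase in surgical volume.

# Back-of-the-envelope check of the backlog-clearance figure in PUBMED:32395848.
# This is NOT the paper's country-level Bayesian model; it only reuses the
# aggregate numbers quoted in the abstract above.

cancelled_total = 28_404_603      # operations cancelled or postponed over 12 weeks
cancelled_per_week = 2_367_050    # quoted weekly cancellation figure
cancellation_rate = 0.723         # overall 12-week cancellation rate

# Implied normal weekly elective volume and the extra capacity from a 20% uplift.
normal_weekly_volume = cancelled_per_week / cancellation_rate
extra_capacity_per_week = 0.20 * normal_weekly_volume

weeks_to_clear = cancelled_total / extra_capacity_per_week
print(f"~{weeks_to_clear:.0f} weeks to clear the backlog")  # close to the reported median of 45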
abstract_id: PUBMED:11394338
Separating elective and emergency surgical care (the emergency team). The purpose of this study was to evaluate the influence on general surgical activity of separating elective from emergency surgical care in one large teaching hospital. A prospective audit of elective and emergency general surgical activity between 1994 and 1999 inclusive was carried out. Elective and emergency surgical activity was separated in January 1996, with a dedicated 'Emergency Team' of one consultant for one week, two registrars, two senior house officers and four house officers for two weeks, in addition to a 20-bed acute admission ward and a 24-hour emergency theatre. The consultant cancelled the majority of his/her elective work during the on-call week. A prospective collection was made of all elective and emergency operations carried out between 1994 and 1999 using the Lothian Surgical Audit system. Out-of-hours operative activity was analysed retrospectively from data collected using the Operating Room Schedule of Surgery (ORSOS), and outpatient clinic and day case activity was collected from the Hospital Administration System. Comparisons were made between the years 1994-1995 and 1996-1999. Emergency surgical admissions rose by 86%, from 1,973 patients in 1994 to 3,675 in 1999. During the same period, elective in-patient activity remained fairly steady, but there was an increase in day surgery from 469 to 2,089 cases per annum. Despite the on-call consultant cancelling his/her outpatient clinics, overall outpatient activity also increased from 9,911 to 12,335. However, a proportion of this reflects the appointment of two new consultants in April 1998. Emergency operations increased from 941 in 1994 to 1,351 in 1999, with a two-fold reduction in operations carried out between 0000 and 0800 hours, from 16% in 1994 to 7.9% in 1999. A separate and dedicated 'Emergency Team' is an efficient method of managing acute general surgical admissions. It permits elective work to carry on uninterrupted, reduces the number of operations performed after midnight, and provides a better environment for teaching and training. This scenario might also be applicable to other medical specialties that have a large emergency commitment.
Answer: Yes, elective surgical operations are often cancelled due to increasing medical admissions. The study conducted at Beaumont hospital showed that the number of elective operations cancelled increased over time, with 'no bed' being a significant reason for theatre cancellation, rising from 31.0% to 62.5% of cases. This increase in cancellations was associated with a rise in general medical emergency admissions by 28.39% and emergency medical admissions across the specialties by 23.84%, which led to increased bed usage by medical specialties (PUBMED:15693380).
Similarly, the study from Ayub Teaching Hospital found that 25% of scheduled surgeries were cancelled, with 16.2% being due to a shortage of beds (PUBMED:18444594). The study from a district general hospital also highlighted that bed unavailability due to the overflow of A&E admissions was the main reason for cancelled surgeries, accounting for 92% of cancelled patients (PUBMED:15693381).
In Tanzania, lack of theatre space and theatre facilities were the most common causes of cancellations, with hospital administration being responsible for the majority of these cancellations (PUBMED:22275936). The study from Manipal Teaching Hospital also reported that avoidable factors, such as recent changes in the medical status of the patient and overbooking, were mainly responsible for cancellations (PUBMED:32969377).
The introduction of a new protocol to limit the number of cancelled elective orthopaedic operations due to asymptomatic bacteriuria showed that focusing on symptoms rather than urinalysis could reduce cancellations and improve theatre utilization (PUBMED:38274927).
Overall, these studies indicate that elective surgical operations are indeed cancelled due to increasing medical admissions, among other factors, and that better management and efficient utilization of resources could mitigate some of these cancellations. |
Instruction: Do patients with osteoporosis have an increased prevalence of periodontal disease?
Abstracts:
abstract_id: PUBMED:23340948
Do patients with osteoporosis have an increased prevalence of periodontal disease? A cross-sectional study. Unlabelled: The study examined if women with osteoporosis were at increased risk of periodontal disease. Three hundred eighty females aged 45-65 years with recent dual-energy X-ray absorptiometry (DXA) scans of the spine and proximal femur agreed to a dental examination. No association was established between the presence of severe periodontal disease and osteoporosis.
Introduction: The purpose of this study is to determine whether patients with osteoporosis have an increased severity and extent of periodontal disease, taking full account of confounding factors.
Methods: Volunteer dentate women (45-65 years), who had undergone recent DXA of the femur and lumbar spine, received a clinical examination of their periodontal tissues by a single trained operator who was blind to the subject's osteoporosis status. Clinical examinations were performed within 6 months of the DXA. Basic Periodontal Examination score, gingival bleeding score, periodontal pocket depth, recession and calculus were the periodontal outcome measures. Potential confounding factors were recorded. Logistic regression was performed for the dichotomous outcome measure of severe periodontal disease (present or absent) with osteoporotic status, adjusting for confounding factors.
Results: There were 380 dentate participants for whom DXA data were available. Of these, 98 had osteoporosis. When compared with osteoporotic subjects, those with normal bone mineral density were significantly younger (p = 0.01), had a higher body mass index (p = 0.03) and had more teeth (p = 0.01). The prevalence of severe periodontal disease in the sample was 39 %. The unadjusted odds ratio for the association between osteoporosis and severe periodontal disease was 1.21 (0.76 to 1.93). The adjusted odds ratio analysis including other covariates (age, smoking, hormone replacement therapy, alcohol) was 0.99 (0.61 to 1.61).
Conclusion: No association was established between the presence of severe periodontal disease and osteoporosis.
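The unadjusted and adjusted odds ratios reported above come from logistic regression with severe periodontal disease (present or absent) as the outcome. The sketch below illustrates that type of analysis with statsmodels; the data are simulated placeholders and the variable names merely mirror the covariates listed in the abstract.

# Minimal sketch of an adjusted odds-ratio analysis of the kind described in
# PUBMED:23340948. The data are simulated placeholders, not the study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 380
df = pd.DataFrame({
    "osteoporosis": rng.integers(0, 2, n),
    "age": rng.uniform(45, 65, n),
    "smoker": rng.integers(0, 2, n),
    "hrt": rng.integers(0, 2, n),
    "alcohol": rng.integers(0, 2, n),
})
df["severe_perio"] = rng.integers(0, 2, n)  # simulated binary outcome

# Adjusted model: outcome regressed on osteoporosis status plus the covariates.
X = sm.add_constant(df[["osteoporosis", "age", "smoker", "hrt", "alcohol"]])
fit = sm.Logit(df["severe_perio"], X).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios with 95% confidence intervals.
adjusted_or = np.exp(fit.params["osteoporosis"])
ci_low, ci_high = np.exp(fit.conf_int().loc["osteoporosis"])
print(f"adjusted OR = {adjusted_or:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")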
abstract_id: PUBMED:33720410
The correlation between history of periodontitis according to staging and grading and the prevalence/severity of peri-implantitis in patients enrolled in maintenance therapy. Background: The aim of this study was to determine if a previous history of periodontitis according to the preset definitions of the 2017 World Workshop is correlated with increased implant failure, and occurrence and severity of peri-implantitis (PI).
Methods: A retrospective analysis of patients with a history of periodontitis who received nonsurgical and, if indicated, surgical corrective therapy prior to implant placement was performed. Periodontitis stage and grade were determined for each included patient based on data from the time of initiation of active periodontal therapy. Cox Proportional Hazard Frailty models were built to analyze the correlation between stage and grade of periodontitis at baseline with implant failure, as well as occurrence and severity of PI.
Results: Ninety-nine patients with a history of periodontitis receiving 221 implants were followed for a mean duration of 10.6 ± 4.5 years after implant placement. Six implants (2.7%) failed and a higher rate of implant failure due to PI was found for Grade C patients (P < 0.05), whereas only an increased trend was seen for Stages III and IV compared with I and II. Grading significantly influenced the risk of marginal bone loss (MBL) >25% of the implant length (P = 0.022) in PI-affected implants. However, a direct correlation between higher-level stage and grade and PI prevalence was not recorded.
Conclusion: No statistically significant association between periodontitis stage or grade and the prevalence of PI was found. However, when PI was diagnosed, there was a relationship between periodontitis grade and severity of PI or the occurrence of implant failure.
abstract_id: PUBMED:37767616
Longitudinal assessment of peri-implant diseases in patients with and without history of periodontitis: A 20-year follow-up study. Purpose: To longitudinally assess the prevalence of peri-implant health, peri-implant mucositis and peri-implantitis in a cohort of patients with and without history of periodontitis over a 20-year period.
Materials And Methods: Eighty-four patients who attended a specialist private periodontal practice were evaluated prospectively 10 and 20 years after prosthesis delivery. Following successful completion of periodontal/implant therapy, patients (172 implants) were enrolled on an individualised supportive periodontal care programme. Clinical and radiographic parameters were collected to assess the prevalence of peri-implant health and diseases. Prevalence of peri-implantitis and peri-implant mucositis was calculated based on the case definition set out in 2018. A multilevel logistic regression analysis was conducted to assess potential risk or protective factors.
Results: The analysis was performed on 22 periodontally healthy and 62 periodontally compromised patients rehabilitated with 39 and 130 implants, respectively. The 10-year prevalence of peri-implant health, peri-implant mucositis and peri-implantitis was 21.4%, 67.9% and 10.6%, respectively, whereas the 20-year prevalence was 29.8%, 47.6% and 33.3%, respectively. Non-compliant periodontally compromised patients showed a statistically significantly increased risk at 20 years of both peri-implant mucositis (odds ratio 11.1; 95% confidence interval 1.8-68.6) and peri-implantitis (bone loss and probing depth) (odds ratio 14.3; 95% confidence interval 1.8-32.9). High full-mouth plaque and bleeding scores were associated with higher odds of both peri-implant mucositis and peri-implantitis.
Conclusions: Peri-implant diseases were prevalent in patients rehabilitated with dental implants and followed up for a period of 20 years. History of periodontal disease and a lack of compliance with a tailored supportive periodontal care programme were identified as risk factors for peri-implant diseases.
abstract_id: PUBMED:34101228
Prevalence of periodontitis based on the 2017 classification in a Norwegian population: The HUNT study. Aim: This cross-sectional study assesses the prevalence of periodontitis in a large Norwegian population, based on the 2017 World Workshop on the Classification of Periodontal and Peri-implant Diseases and Conditions. The prevalence of periodontitis was determined by bone loss recorded on radiographs (orthopantomogram [OPG] and bitewing [BW]) and by clinical examination.
Materials And Methods: As part of a large population health study (The HUNT Study), 7347 participants aged 19 years and older were invited to the HUNT4 Oral Health Study. Radiographic bone loss (RBL) and periodontal stage and grade were assessed in 4863 participants.
Results: Periodontal examination was performed in 4863 participants. RBL and clinical registrations corresponding to periodontitis as defined were observed in 72.4%. The prevalence of periodontitis increased after 40 years of age, with severe forms occurring primarily after 60 years of age. Stage I was observed in 13.8%, Stage II in 41.1%, Stage III in 15.3%, and Stage IV in 2.3% of the population. Grade A, B, and C was observed in 5.7%, 60.2%, and 6.2%, respectively.
Conclusion: Periodontitis was frequently observed in the investigated population. Stage III and Stage IV periodontitis combined were observed in 17.6% of the study population.
abstract_id: PUBMED:26522602
Prevalence and Possible Risk Factors of Peri-implantitis: A Concept Review. Aim: The purpose of this review is to estimate the prevalence of peri-implantitis, as well as to determine possible risk factors associated with its development in patients treated with oral implants.
Background: Although implant therapy has been identified as a successful and predictable treatment for partially and fully edentulous patients, complications and failures can occur. Peri-implantitis is considered a biologic complication that results in bone loss around implants and may lead to implant treatment failure.
Results: Great variation has been observed in the literature regarding the prevalence of peri-implantitis, depending on the diagnostic criteria used to define it. The prevalence ranges from 4.7 to 43% at the implant level, and from 8.9 to >56% at the patient level. Many risk factors that may lead to the establishment and progression of peri-implantitis have been suggested. There is strong evidence that the presence and history of periodontitis are potential risk factors for peri-implantitis. Cigarette smoking has not yet been conclusively established as a risk factor for peri-implantitis, although extra care should be taken with dental implants in smokers. Other risk factors, such as diabetes, genetic traits, implant surface roughness and presence of keratinized mucosa, still require further investigation.
Conclusion: Peri-implantitis is not an uncommon complication following implant therapy. A higher prevalence of peri-implantitis has been identified for patients with present or past periodontal disease and for smokers. Until now, a true risk factor for peri-implantitis has not been established. A supportive maintenance program is essential for the long-term success of treatments with oral implants.
Clinical Significance: Knowledge of the real impact of peri-implantitis on the outcome of treatments with oral implants, as well as identification of the risk factors associated with this inflammatory condition, is essential for the development of supportive maintenance programs and the establishment of prevention protocols.
abstract_id: PUBMED:8180278
Overhanging interproximal silver amalgam restorations. Prevalence and side-effects. This study was undertaken to determine the prevalence of overhanging Cl. II silver amalgam restorations amongst patients visiting Pb. Govt. Dental College and Hospital, Amritsar and Govt. Dental College Hospital, Patiala. Two parameters, viz. pocket depth and extent of bone loss, were evaluated to study the after-effects of the overhanging restorations. The findings of this investigation showed an alarming prevalence of overhanging restorations (64.12%) and clearly indicated a relationship between overhangs and periodontal disease. Periodontal breakdown was more evident alongside overhanging restorations as compared with unrestored contralateral teeth. The mean pocket depth in restored surfaces was 3.75 mm compared with 3.46 mm in unrestored ones, an 8.38% increase. The mean extent of bone loss in restored tooth surfaces was 1.64 mm compared with 1.50 mm in unrestored ones, an increase of 9.33%.
abstract_id: PUBMED:35448049
Prevalence of Moderate to Severe Periodontitis in an 18-19th Century Sample-St. Bride's Lower Churchyard (London, UK). The aim of the study was to determine the prevalence of moderate to severe periodontitis in 18-19th century skulls in the St Bride's Lower Churchyard in London, UK.
Materials And Methods: A total of 105 adult skulls (66 male, 39 female) from the Museum of London collection were examined for evidence of dental disease. The primary outcome measured was the presence of moderate to severe periodontitis. Other dental pathologies, such as tooth wear, calculus, and caries, were also recorded.
Results: Overall, the prevalence of moderate to severe periodontitis in the sample was 21-24%. Males were observed to be more susceptible to periodontal disease than females. The severity of bone loss in the skull collection also increased with age. There was no significant difference in the amount of calculus deposition when comparing either age or sex. A total of 14% of the individuals in the sample showed signs of smoking.
Conclusion: The results of the study indicated that the prevalence of moderate to severe periodontitis in an 18-19th century skull sample was 21-24%, which was higher than in previous studies. This may be due to the lack of basic personal mouth care and professional dental treatment as well as known risk factors such as smoking, stress, low socioeconomic status, and malnutrition.
abstract_id: PUBMED:31752793
Prevalence of periodontitis and alveolar bone loss in a patient population at Harvard School of Dental Medicine. Background: Although several studies assessed the prevalence of alveolar bone loss, the association with several risk factors has not been fully investigated. The aim of this article is to measure the prevalence of periodontitis by calculating the mean alveolar bone loss/level of posterior teeth using bitewing radiographs among the patients enrolled in the clinics at Harvard School of Dental Medicine and address risk factors associated with the disease.
Methods: One thousand one hundred thirty-one patients were selected for radiographic analysis to calculate the mean alveolar bone loss/level by measuring the distance between the cementoenamel junction and the alveolar bone crest on the mesial and distal surfaces of posterior teeth. Linear regression with Multi-level mixed-effect model was used for statistical analysis adjusting for age, sex, race, median household income, and other variables.
Results: Mean alveolar bone level of the whole sample was 1.30 mm (±0.006). Overall periodontitis prevalence for the sample was 55.5% (±1.4%). Moderate periodontitis prevalence was 20.7% (±1.2%), while 2.8% (±0.5%) of the whole sample had severe periodontitis. Adjusted mean alveolar bone loss was higher in older age groups, males, Asian race group, ever smokers, and patients with low median household income.
Conclusion: The effect of high household income on the amount of bone loss can be powerful to the degree that high household income can influence outcomes even for individuals who had higher risks of developing the disease. Public health professionals and clinicians need to collaborate with policy makers to achieve and sustain high quality of healthcare for everyone.
abstract_id: PUBMED:25738181
Cross-sectional study on the prevalence and risk indicators of peri-implant diseases. Purpose: To evaluate the prevalence of peri-implant diseases in a university patient sample and to analyse possible risk variables associated with their occurrence.
Materials And Methods: One hundred and eighty-six patients with 597 implants were examined clinically and radiographically. The mean period of function was 5.5 years (range 1 to 16.5 years). A subgroup analysis was performed for implants with a minimum function time of 5 years. Outcome measures were implant failures, prevalence and risk indicators of peri-implant diseases. In order to identify statistically significant risk indicators of peri-implant mucositis and peri-implantitis multi-level logistic regression models were constructed.
Results: The prevalence of peri-implantitis and peri-implant mucositis on patient levels were 12.9% (13.3% for ≥ 5 years) and 64.5% (64.4% for ≥ 5 years), respectively. Multi-level analysis showed that a high plaque score (OR = 1.365; 95% CI: 1.18 to 1.57, P < 0.001) was a risk indicator for periimplant mucositis, while augmentation of the hard or soft tissue at implant sites had a protective effect (OR = 0.878 95% CI: 0.79 to 0.97, P = 0.01). It was also shown that the odds ratio for having peri-implant mucositis increased with the increase of plaque score in a dose-dependent manner. With respect to peri-implantitis, loss of the last tooth due to periodontitis (OR = 1.063; 95% CI: 1.00 to 1.12, P = 0.03) and location of the implants in the maxilla (OR = 1.052, 95% CI: 1.00 to 1.09, P = 0.02) were identified as statistically significant risk indicators.
Conclusions: Within the limitations of this study, the history of periodontal disease was the most significant risk indicator for peri-implantitis and the level of oral hygiene was significantly associated with peri-implant mucositis.
abstract_id: PUBMED:34595906
Prevalence of endo-perio lesions according to the 2017 World Workshop on the Classification of Periodontal and Peri-Implant Disease in a university hospital. Objectives: Teeth with combined endodontic-periodontal lesions (EPLs) have favorable to hopeless prognoses. The new classification system was developed by the World Workshop on the Classification of Periodontal and Peri-Implant Disease in 2017 and suitable epidemiologic data related to this new system are currently lacking. This study aims to contribute data about the prevalence of EPLs according to the new system.
Method And Materials: A total of 1,008 panoramic views taken in 2019 were analyzed, recording the presence of an EPL and other periodontic parameters. Radiographs of bad quality and of the same person were excluded. Additionally, the EPLs' radiographic patterns were rated by two raters according to their shape (j-shaped vs cone-shaped). Descriptive statistical methods as well as t tests for continuous and chi-squared tests for categorical variables were used.
Results: Overall, 866 patients (with 18,963 teeth) were included. Prevalence of EPLs was 4.9% (n = 43) (patient-related)/0.4% (n = 71) (tooth-related). Mean age (62.3 years vs 51.5 years), mean maximal percentage of bone loss (60% vs 30%), and mean age-adjusted bone-loss index (1.0 vs 0.6) were considerably higher compared to patients without EPL. A total of 67 EPLs were found in patients with stage III/IV periodontitis and 4 in patients with stage II periodontitis.
Conclusions: This is the first study showing prevalence of EPLs (4.9%/0.4%) according to the 2017 World Workshop on the Classification of Periodontal and Peri-Implant Disease. Patients with EPLs have a substantially higher maximal percentage of bone loss and a higher age-adjusted bone-loss index at residual teeth, excluding teeth with EPLs. All patients have at least stage II periodontitis.
Answer: Based on the cross-sectional study described in PUBMED:23340948, there is no established association between the presence of severe periodontal disease and osteoporosis. The study involved 380 female participants aged 45-65 years who had undergone recent dual-energy X-ray absorptiometry (DXA) scans and a clinical examination of their periodontal tissues. The prevalence of severe periodontal disease in the sample was 39%, and after adjusting for confounding factors such as age, smoking, hormone replacement therapy, and alcohol, the adjusted odds ratio analysis showed no significant association between osteoporosis and severe periodontal disease (adjusted odds ratio 0.99, 95% confidence interval 0.61 to 1.61). Therefore, based on this study, patients with osteoporosis do not have an increased prevalence of periodontal disease. |
Instruction: Molecular markers for patients with thymic malignancies: not feasible at present?
Abstracts:
abstract_id: PUBMED:34539550
Correlation Between Tumor Molecular Markers and Perioperative Epilepsy in Patients With Glioma: A Systematic Review and Meta-Analysis. Purpose: Tumors derived from the neuroepithelium are collectively termed gliomas and are the most common malignant primary brain tumor. Epilepsy is a common clinical symptom in patients with glioma, which can impair neurocognitive function and quality of life. Currently, the pathogenesis of glioma-related epilepsy is not fully described. Therefore, it is necessary to further understand the mechanism of seizures in patients with glioma. In this study, a comprehensive meta-analysis was conducted to investigate the relationship between five commonly used tumor molecular markers and the incidence of perioperative epilepsy in patients with glioma. Methods: PubMed, EMBASE, and Cochrane Library databases were searched for related research studies. Odds ratio and the corresponding 95% confidence interval were used as the main indicators to evaluate the correlation between tumor molecular markers and the incidence of perioperative epilepsy in patients with glioma. Results: A total of 12 studies were included in this meta-analysis. The results showed that isocitrate dehydrogenase 1 (IDH1) mutation was significantly correlated with the incidence of perioperative epilepsy. A subgroup analysis showed that IDH1 was significantly correlated with the incidence of preoperative epilepsy, but not with intraoperative and postoperative epilepsy. There was no correlation between O6-methylguanine-DNA methyltransferase methylation and 1p/19q deletion and the incidence of perioperative epilepsy. Tumor protein p53 and epidermal growth factor receptor could not be analyzed because of the limited availability of relevant literature. There was no significant heterogeneity or publication bias observed among the included studies. Conclusion: The present meta-analysis confirms the relationship between tumor molecular markers and the incidence of perioperative epilepsy in patients with glioma. The present results provide more comprehensive evidence for the study of the pathogenesis of glioma-related epilepsy. Our research may offer a new method for the treatment of perioperative seizures in patients with glioma.
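The pooled odds ratios above are the product of an inverse-variance meta-analysis. As a minimal illustration only — the study-level numbers below are hypothetical, not the data or code of the cited meta-analysis — fixed-effect pooling of odds ratios with a Cochran's Q heterogeneity check can be sketched in Python:

```python
import numpy as np

# Hypothetical study-level odds ratios and 95% CIs (not data from the cited meta-analysis)
odds_ratios = np.array([1.8, 2.4, 1.3, 2.0])
ci_lower    = np.array([1.1, 1.5, 0.8, 1.2])
ci_upper    = np.array([2.9, 3.8, 2.1, 3.3])

# Work on the log scale; SE recovered from the CI width (CI = logOR +/- 1.96*SE)
log_or = np.log(odds_ratios)
se     = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)

# Inverse-variance (fixed-effect) weights
w = 1 / se**2
pooled_log_or = np.sum(w * log_or) / np.sum(w)
pooled_se     = np.sqrt(1 / np.sum(w))

# Cochran's Q statistic for between-study heterogeneity
q = np.sum(w * (log_or - pooled_log_or) ** 2)

print(f"Pooled OR = {np.exp(pooled_log_or):.2f} "
      f"(95% CI {np.exp(pooled_log_or - 1.96 * pooled_se):.2f}"
      f"-{np.exp(pooled_log_or + 1.96 * pooled_se):.2f}), Q = {q:.2f}")
```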
abstract_id: PUBMED:25510798
Molecular markers: an important tool in the diagnosis, treatment and epidemiology of invasive aspergillosis. The increase in the incidence of invasive aspergillosis has posed a difficult problem for the management of patients with this infection, owing to its high mortality rate and limited knowledge concerning its diagnosis and therapeutic practice. The difficulty in managing patients with aspergillosis begins with detection of the fungus in specimens from immunosuppressed patients infected with Aspergillus fumigatus; further difficulties arise from the development of resistance to antifungals as a consequence of their indiscriminate prophylactic and therapeutic use, and from gaps in the epidemiological data on aspergillosis. With the aim of resolving these problems, molecular markers are currently employed, with specific and accurate results. However, in Mexico the use of molecular markers has not yet been implemented in routine hospital laboratories; although these markers have been widely reported in the literature, they still need to be validated and standardized to ensure that the results obtained in any laboratory are reliable and comparable. In the present review, we provide an update on the usefulness of molecular markers for the accurate identification of A. fumigatus, the detection of resistance to antifungal triazoles, and epidemiological studies aimed at establishing the measures necessary for the prevention and control of aspergillosis.
abstract_id: PUBMED:35431565
The High Prevalence of Short-Term Elevation of Tumor Markers Due to Hyperglycemia in Diabetic Patients. Introduction: The relationship between diabetes and cancer is uncertain; however, tumor markers in diabetic patients are significantly elevated. The prevalence of elevated tumor markers among diabetic inpatients and its relationship to blood glucose needs to be studied.
Methods: A total of 102 diabetic inpatients were included in this study. We collected information from diabetic inpatients and tested tumor markers. Patients with elevation of tumor markers were rechecked.
Results: We found that up to 73.3% of diabetic inpatients had one or more tumor markers elevated. The proportion of diabetic inpatients with higher than normal cytokeratin 19 fragment (CYFRA 21-1) was 54.5%. Most of them did not return to normal after controlling the blood glucose. A short-term elevation of carcinoembryonic antigen (CEA) was present in 15.8% of diabetic inpatients, and 19.8% of diabetic inpatients had a short-term elevation of carbohydrate antigen. CEA and carbohydrate antigen including CA19-9, CA72-4, CA125 and CA15-3 returned to normal or became significantly reduced within 2 weeks after good control of blood glucose.
Conclusion: Our study showed that elevation of tumor markers was common in diabetic inpatients, especially those with poor blood glucose control. This indicates that re-checking tumor markers after controlling blood glucose might be preferable to conducting large-scale testing for cancer.
abstract_id: PUBMED:35782984
Role of hormone receptors and HER2 as prospective molecular markers for breast cancer: An update. This review provides an updated account of the current methods, principles, and mechanisms of action of therapies relating to the detection of molecular markers of therapeutic importance in the prognosis of breast cancer progression and recurrence, which include the estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2). The hormone receptors ER and PR and the proto-oncogene HER2 are the basic molecular markers recognized as established prognostic factors and predictors of response in therapeutic practice. These markers can be detected using immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH), which are established, fast, and cost-effective detection methods. These molecular markers, along with clinicopathological prognostic parameters, give the best prediction of the prognosis of cancer recurrence and progression. Finally, hormone receptors and HER2 as molecular markers are of prime therapeutic importance and have the capability to contribute to future drug development.
abstract_id: PUBMED:28233083
Molecular markers in glioma. Gliomas are the most malignant and aggressive form of brain tumors, and account for the majority of brain cancer related deaths. Malignant gliomas, including glioblastoma are treated with radiation and temozolomide, with only a minor benefit in survival time. A number of advances have been made in understanding glioma biology, including the discovery of cancer stem cells, termed glioma stem cells (GSC). Some of these advances include the delineation of molecular heterogeneity both between tumors from different patients as well as within tumors from the same patient. Such research highlights the importance of identifying and validating molecular markers in glioma. This review, intended as a practical resource for both clinical and basic investigators, summarizes some of the more well-known molecular markers (MGMT, 1p/19q, IDH, EGFR, p53, PI3K, Rb, and RAF), discusses how they are identified, and what, if any, clinical relevance they may have, in addition to discussing some of the specific biology for these markers. Additionally, we discuss identification methods for studying putative GSC's (CD133, CD15, A2B5, nestin, ALDH1, proteasome activity, ABC transporters, and label-retention). While much research has been done on these markers, there is still a significant amount that we do not yet understand, which may account for some conflicting reports in the literature. Furthermore, it is unlikely that the investigator will be able to utilize one single marker to prospectively identify and isolate GSC from all, or possibly, any gliomas.
abstract_id: PUBMED:29707982
Molecular markers in disease detection and follow-up of patients with non-muscle invasive bladder cancer. Introduction: Diagnosis and surveillance of non-muscle invasive bladder cancer (NMIBC) is mainly based on endoscopic bladder evaluation and urine cytology. Several assays for determining additional molecular markers (urine-, tissue- or blood-based) have been developed in recent years but have not been included in clinical guidelines so far. Areas covered: This review gives an update on different molecular markers in the urine and evaluates their role in patients with NMIBC in disease detection and surveillance. Moreover, the potential of recent approaches such as DNA methylation assays, multi-panel RNA gene expression assays and cell-free DNA analysis is assessed. Expert commentary: Most studies on various molecular urine markers have mainly focused on a potential replacement of cystoscopy. New developments in high throughput technologies and urine markers may offer further advantages as they may represent a non-invasive approach for molecular characterization of the disease. This opens new options for individualized surveillance strategies and may help to choose the best therapeutic option. The implementation of these technologies in well-designed clinical trials is essential to further promote the use of urine diagnostics in the management of patients with NMIBC.
abstract_id: PUBMED:36187350
The impact of heme biosynthesis regulation on glioma aggressiveness: Correlations with diagnostic molecular markers. Background: The prognosis of diffusely infiltrating glioma patients is dismal but varies greatly between individuals. While characterization of gliomas primarily relied on histopathological features, molecular markers have increasingly gained importance and play a key role in the recently published 5th edition of the World Health Organization (WHO) classification. Heme biosynthesis represents a crucial pathway due to its paramount importance in oxygen transport, energy production and drug metabolism. Recently, we described a "heme biosynthesis mRNA expression signature" that correlates with histopathological glioma grade and survival. The aim of the current study was to correlate this heme biosynthesis mRNA expression signature with diagnostic molecular markers and investigate its continued prognostic relevance.
Materials And Methods: In this study, patient data were derived from the "The Cancer Genome Atlas" (TCGA) lower-grade glioma and glioblastoma cohorts. We identified diffusely infiltrating gliomas correlating molecular tumor diagnosis according to the most recent WHO classification with heme biosynthesis mRNA expression. The following molecular markers were analyzed: EGFR amplification, TERT promoter mutation, CDKN2A/B homozygous loss, chromosome 7 + /10- aneuploidy, MGMT methylation, IDH mutation, ATRX loss, p53 mutation and 1p19q codeletion. Subsequently, we calculated the heme biosynthesis mRNA expression signature for correlation with distinct molecular glioma markers/molecular subgroups and performed survival analyses.
Results: A total of 649 patients with available data on up-to-date molecular markers and heme biosynthesis mRNA expression were included. According to analysis of individual molecular markers, we found a significantly higher heme biosynthesis mRNA expression signature in gliomas with IDH wildtype (p < 0.0005), without 1p19q codeletion (p < 0.0005), with homozygous CDKN2A/B loss (p < 0.0005) and with EGFR amplification (p = 0.001). Furthermore, we observed that the heme biosynthesis mRNA expression signature increased with molecular subgroup aggressiveness (p < 0.0005), being lowest in WHO grade 2 oligodendrogliomas and highest in WHO grade 4 glioblastomas. Finally, the heme biosynthesis mRNA expression signature was a statistically significant survival predictor after multivariate correction for all molecular markers (p < 0.0005).
Conclusion: Our data demonstrate a significant correlation between heme biosynthesis regulation and diagnostic molecular markers and a prognostic relevance independent of these established markers. Consequently, heme biosynthesis expression is a promising biomarker for glioma aggressiveness and might constitute a potential target for novel therapeutic approaches.
abstract_id: PUBMED:34295567
Effects of molecular markers on the treatment decision and prognosis of colorectal cancer: a narrative review. Objective: To summarize the effects of molecular markers on the treatment decision and prognosis of colorectal cancer.
Background: Colorectal cancer is a highly heterogeneous disease. Even colorectal cancers of the same pathological type and clinical stage may have significant differences in treatment efficacy and prognosis. There are three main molecular mechanisms for the occurrence and development of colorectal cancer: the chromosomal instability (CIN) pathway, microsatellite instability (MSI), and the CpG island methylator phenotype (CIMP). Multiple molecular markers are distributed along each pathway.
Methods: We performed a literature search on the PubMed database for studies published in English (from the date of initiation of the database to the year of 2020) using the following subject terms: "colon cancer", "rectal cancer", "colorectal cancer", "molecular markers", "biomarkers", "treatment strategies", and "prognosis".
Conclusions: The different expression states of molecular markers have a significant impact on the treatment decision and prognosis of colorectal cancer. Main colorectal cancer molecular markers include MSI and some important genes. Individualized treatments for tumors with different molecular phenotypes have improved the treatment effectiveness for colorectal cancer. The rational use of molecular markers is valuable for treatment decision-making and the prognosis of patients with colorectal cancer.
abstract_id: PUBMED:29081696
Implication of Gastric Cancer Molecular Genetic Markers in Surgical Practice. Introduction: We have investigated aberrant methylation of genes CDH1, RASSF1A, MLH1, N33, DAPK, expression of genes hTERT, MMP7, MMP9, BIRC5 (survivin), PTGS2, and activity of telomerase of 106 gastric tumor samples obtained intra-operatively and 53 gastric tumor samples from the same group of patients obtained endoscopically before surgery. Biopsy specimens obtained from 50 patients with chronic calculous cholecystitis were used as a control group. Together with tissue samples obtained from different sites remote to tumors, a total of 727 samples have been studied. The selected parameters comprise a system of molecular markers that can be used in both diagnostics of gastric cancer and in dynamic monitoring of patients after surgery. Special attention was paid to the use of molecular markers for the diagnostics of malignant process in the material obtained endoscopically since the efficacy of morphological diagnostics in biopsies is compromised by intratumoral heterogeneity, which may prevent reliable identification of tumor cells in the sampling. Our data indicated that certain molecular genetic events provided more sensitive yet specific markers of the tumor.
Conclusion: We demonstrated that molecular profiles detected in preoperative biopsies were confirmed by the material obtained intra-operatively. The use of endoscopic material facilitates gastric tumors pre-operative diagnostics, improving early detection of gastric cancer and potential effective treatment strategies.
abstract_id: PUBMED:33585189
Molecular Pathological Markers Correlated With the Recurrence Patterns of Glioma. Purpose: Glioma is one of the most common tumors of the central nervous system, and many patients suffer from recurrence even after standard comprehensive treatment. However, little is known about the molecular markers that predict the recurrence patterns of glioma. This study aimed to demonstrate the correlations between molecular markers and glioma recurrence patterns, which included local/nonlocal recurrence and paraventricular/nonparaventricular recurrence.
Methods: Immunohistochemical techniques were used to assess the molecular markers of 88 glioma tissues following surgical resection. The recurrence patterns were divided into local recurrence, marginal recurrence, distant recurrence, multirecurrence, and subarachnoid recurrence, with the last four recurrence patterns being collectively called nonlocal recurrence. According to whether the recurrence invaded the ventricles, the nonlocal recurrence patterns were divided into paraventricular and nonparaventricular recurrence. Then, we compared the different recurrence patterns and their clinical characteristics, focusing on the expression of molecular markers.
Results: More patients in the nonlocal recurrence group received combined radiotherapy and chemotherapy than patients in the local recurrence group (p=0.019). Sex, age, extent of surgery, time to recurrence, tumor location, size, and WHO grade did not differ between the defined groups (P>0.05). Recurrent tumor volume and WHO grade were significantly different between the paraventricular and nonparaventricular recurrence groups (p=0.046 and 0.033). The expression of Ki-67, P53, and PCNA in the nonlocal recurrence group was significantly higher than that in the local recurrence group (p=0.015, 0.009, and 0.037), while the expression of S-100 in the nonlocal recurrence group was significantly lower than that in the local recurrence group (p=0.015). Cox regression indicated that the hazard ratio (HR) for a high expression level of PCNA associated with nonlocal recurrence was 3.43 (95% CI, 1.15 to 10.24), and the HR for a high expression level of MGMT associated with paraventricular recurrence was 2.64 (95% CI, 1.15 to 6.08).
Conclusions: Ki-67, P53, PCNA, and MGMT might be important clinical markers for nonlocal recurrence and paraventricular recurrence.
Answer: The provided abstracts do not specifically address the feasibility of using molecular markers for patients with thymic malignancies. The abstracts focus on various other types of cancers and diseases, such as gliomas (PUBMED:34539550, PUBMED:28233083, PUBMED:36187350, PUBMED:33585189), invasive aspergillosis (PUBMED:25510798), diabetes and its relationship with cancer (PUBMED:35431565), breast cancer (PUBMED:35782984), non-muscle invasive bladder cancer (PUBMED:29707982), colorectal cancer (PUBMED:34295567), and gastric cancer (PUBMED:29081696). They discuss the correlation between molecular markers and various aspects of these diseases, including diagnosis, prognosis, treatment decisions, and recurrence patterns.
However, none of the abstracts mention thymic malignancies or the use of molecular markers in the context of thymic cancer. Therefore, based on the information provided in these abstracts, it is not possible to determine the feasibility of using molecular markers for patients with thymic malignancies. Additional research and literature specific to thymic malignancies would be required to answer this question accurately. |
Instruction: Do anger and aggression affect carotid atherosclerosis?
Abstracts:
abstract_id: PUBMED:8322399
Do anger and aggression affect carotid atherosclerosis? Background And Purpose: Although a number of metabolic and psychosocial factors have been identified as coronary risk factors, no studies have evaluated the relation between personality and cerebrovascular disease. The purpose of the present study was to elucidate the relation between the characteristics of anger or aggression and the severity of carotid atherosclerosis on the basis of the findings of B-mode ultrasonography.
Methods: The Cornell Medical Index was used to measure anger in 34 patients with signs of atherosclerosis or at least one of four recognized risk factors for atherosclerosis (hypertension, hypercholesterolemia, diabetes mellitus, and cigarette smoking). The Rosenzweig Picture Frustration Study and Yatabe-Guilford Personality Test were used to evaluate aggression. High-resolution B-mode ultrasonography was performed, and the severity of carotid atherosclerosis was determined by plaque score. The occurrence of risk factors for carotid atherosclerosis was compared among the patients.
Results: The correlation of plaque score with one item that endorses anger was r = .65 (P < .01) and with "extrapersistive" in aggression was r = .50 (P < .01). Multivariate analysis identified significant correlations between plaque score and age, hypercholesterolemia, and anger.
Conclusions: Our results suggest that anger and, perhaps, aggression may be risk factors for cerebrovascular disease.
abstract_id: PUBMED:8236356
Do anger and aggression affect carotid atherosclerosis? N/A
abstract_id: PUBMED:17070423
Suppressed anger is associated with increased carotid arterial stiffness in older adults. Background: Anger and hostility have been implicated in the pathogenesis of heart disease, but the extent to which the large conduit arteries play an intermediate role in this relationship remains to be clarified. The present study investigated associations of anger frequency and expression style with carotid artery intima-media thickness (IMT) and stiffness in healthy adults older than 50 years.
Methods: Two hundred participants (95 men) in the Baltimore Longitudinal Study of Aging completed the Spielberger Anger Expression Inventory, which assesses anger frequency (trait anger), anger expression (anger-out), and anger suppression (anger-in). The carotid artery IMT was assessed by ultrasonography. Carotid stiffness was determined from the log of systolic over diastolic blood pressure (BP) as a function of carotid distensibility.
Results: In univariate correlational analysis, a significant positive association of anger-in with stiffness was observed (P < .01), together with a less significant association of anger-in with carotid artery IMT (P < .05). Neither anger-out nor trait anger was significantly associated with carotid artery IMT or stiffness. Moreover, none of the anger measures was significantly associated with resting BP in this normotensive sample. As expected, carotid artery IMT, stiffness, and systolic BP were all positively associated. In multivariate analysis, anger-in remained a determinant of stiffness independent of BP, and a marginally significant determinant of carotid artery IMT.
Conclusions: This is the first known finding that high anger-in is a significant independent determinant of carotid artery stiffness. These results suggest that high anger-in can potentiate the effects of age on stiffening of the central arteries.
abstract_id: PUBMED:15385684
Anger-related personality traits and carotid artery atherosclerosis in untreated hypertensive men. Objective: To determine whether anger-related personality traits are associated with carotid artery atherosclerosis in untreated hypertensive patients.
Methods: Study participants were 237 men with elevated blood pressure (systolic 140-180 mm Hg and/or diastolic 90-110 mm Hg) but untreated for hypertension. Average age was 56 years; 80% of subjects were white. Eighty-six percent had no history of antihypertensive treatment, and the remainder reported median lifetime treatment exposure of 4 months. Subjects were administered the Spielberger State-Trait Anger Expression Inventory, which measures tendencies to experience anger (Trait Anger) and modes of anger expression (Anger-In, Anger-Out, Anger-Control). Mean and maximum intima-medial thickness (IMT) and plaque occurrence in the extracranial carotid arteries were measured by B-mode ultrasonography.
Results: Trait Anger was marginally (p =.065) related to mean and significantly (p <.05) related to maximum IMT, independent of standard risk factors (age, race, body mass index, education, smoking, fasting glucose, total:high-density lipoprotein cholesterol ratio). A component of Trait Anger, Angry Temperament, similarly predicted mean (p =.062) and maximum IMT (p <.05) and plaque occurrence (p <.05). Anger-Out predicted both mean and maximum IMT (p values <.01).
Conclusions: An antagonistic disposition (Trait Anger), particularly a tendency to experience anger on minimal provocation (Angry Temperament) and a propensity to express anger outwardly (Anger-Out), are associated with heightened carotid atherosclerosis. These findings suggest that recently reported prospective associations between these anger dimensions and incident cerebrovascular disease may be mediated, in part, by increased atherosclerotic disease.
abstract_id: PUBMED:17363362
Race-gender differences in the association of trait anger with subclinical carotid artery atherosclerosis: the Atherosclerosis Risk in Communities Study. This paper examines the association between trait anger and subclinical carotid artery atherosclerosis among 14,098 Black or White men and women, aged 48-67 years, in the Atherosclerosis Risk in Communities Study cohort, 1990-1992. Trait anger was assessed using the 10-item Spielberger Trait Anger Scale. Carotid atherosclerosis was determined by an averaged measure of the wall intimal-medial thickness (IMT) of the carotid bifurcation and of the internal and common carotids, measured by high-resolution B-mode ultrasound. In the full study cohort, trait anger and carotid IMT were significantly and positively associated (p = 0.04). In race-gender stratified analysis, the association was strongest and independent only in Black men, among whom a significant trait anger-carotid IMT relation was observed for both the overall trait anger measure (p = 0.004) and the anger reaction dimension (p = 0.001). In Black men, carotid IMT levels increased across categories of overall trait anger and anger reaction, resulting in clinically significant differences (67 microm (95% confidence interval: 23, 110) and 82 microm (95% confidence interval: 40, 125), respectively) from low to high anger. Sociodemographic, lifestyle, anthropometric, and biologic cardiovascular disease risk factors appear to mediate the relation in Black women, White men, and White women. In conclusion, these findings document disparate race-gender patterns in the association of trait anger with subclinical carotid artery atherosclerosis.
abstract_id: PUBMED:16407698
Trait anger and arterial stiffness: results from the Atherosclerosis Risk in Communities (ARIC) study. The cross-sectional association between trait anger and stiffness of the left common carotid artery was examined in 10,285 black or white men or women, 48-67 years of age, from the Atherosclerosis Risk in Communities (ARIC) study cohort. Trait anger was assessed using the 10-item Spielberger Trait Anger Scale. Arterial stiffness was assessed by pulsatile arterial diameter change (PADC) derived from echo-tracking ultrasound methods; the smaller the PADC, the stiffer the common carotid artery. In men, trait anger was significantly associated with PADC, independent of the established cardiovascular disease risk factors (p=0.04). PADC decreased from the first (lowest anger group) to the second quintile of anger, but there was no progressive decrease thereafter. Also observed was a 13-microm (95% confidence interval [CI], 1-25) difference in the magnitude of PADC from the lowest to the uppermost quintile of anger (PADC [standard error], 421 [4] microm vs. 408 [5] microm). In women, the association was marginally significant (p=0.07). The low-high difference in the magnitude of PADC (PADC [standard error], 397 [3] microm vs. 406 [4] microm) was inverse (-9 microm 95% CI, -19 to 2). Conclusions indicate that very high trait anger is associated with arterial stiffness in men.
abstract_id: PUBMED:36715099
Independent Associations Between Trait-Anger, Depressive Symptoms and Preclinical Atherosclerotic Progression. Background: Previous research from our group found that recent depressive symptoms were associated with 3-year change in carotid intima-media thickness (CA-IMT), a biomarker of cardiovascular disease risk, in an initially healthy sample of older adults. Trait measures of anxiety, anger, and hostility did not predict 3-year CA-IMT progression in that report.
Purpose: The current study sought to reexamine these associations at a 6-year follow-up point.
Methods: Two-hundred seventy-eight participants (151 males, mean age = 60.68 years) from the original sample completed an additional IMT reading 6 years following the initial baseline assessment.
Results: Though not significant at 3-years, trait-anger emerged as a predictor of IMT progression at the 6-year point. When examined in separate regression models, both depression and trait-anger (but not anxiety or hostility) predicted 6-year IMT change (b = .017, p = .002; b = .029, p = .01, respectively). When examined concurrently, both depression and anger were independently associated with 6-year IMT progression (b = .016, p = .010, b = .028, p = .022, respectively). Exploratory analyses suggest that the relative contributions of anger and depression may differ for males and females.
Conclusions: The use of sequential follow-ups is relatively unique in this literature, and our results suggest a need for further research on the timing and duration of psychosocial risk exposures in early stages of cardiovascular disease.
abstract_id: PUBMED:15564356
Trait anger and the metabolic syndrome predict progression of carotid atherosclerosis in healthy middle-aged women. Objective: Hostility may predict coronary heart disease morbidity and mortality, as well as the metabolic syndrome. We tested to see if high levels of the attitudinal and emotional aspects of hostility lead to progression of carotid atherosclerosis in women and if the metabolic syndrome is a mediator of the association.
Methods: Two hundred nine healthy women were followed during the perimenopausal and postmenopausal periods. Carotid artery ultrasound scans measured intima-media thickness (IMT) an average 7.4 (SD = 0.9, range 4.2-10.8) and 10.5 years (SD = 1.1, range = 6.9-13.0) after baseline. Hostility was measured at baseline and at the first carotid scan with Spielberger Trait Anger (being angry frequently) and Anger In (suppressing angry feelings) scales, and the Cook-Medley Hostility Inventory (hostile, cynical attitudes toward others). Metabolic syndrome was measured at the study entry and through the second carotid scan.
Results: Baseline Trait Anger scores predicted an increase in IMT across 3 years (p < .05) and predicted the risk for developing the metabolic syndrome (p < .05). The risk for developing the metabolic syndrome, in turn, predicted an increase in IMT across 3 years (p < .05). Anger suppression and cynical attitudes were not associated with progression of carotid atherosclerosis.
Conclusion: Anger predicts progression of carotid atherosclerosis, and the metabolic syndrome may mediate this association. Women who experience angry feelings frequently may benefit from interventions aimed at reducing anger and reducing the metabolic syndrome components early in the natural history of atherosclerosis.
abstract_id: PUBMED:22511725
Associations of anger, anxiety, and depressive symptoms with carotid arterial wall thickness: the multi-ethnic study of atherosclerosis. Objective: Carotid arterial wall thickness, measured as intima-media thickness (IMT), is an early subclinical indicator of cardiovascular disease. Few studies have investigated the association of psychological factors with IMT across multiple ethnic groups and by sex.
Methods: We included 6561 men and women (2541 whites, 1790 African Americans, 1436 Hispanics, and 794 Chinese) aged 45 to 84 years who took part in the first examination of the Multi-Ethnic Study of Atherosclerosis. Associations of trait anger, trait anxiety, and depressive symptoms with mean values of common carotid artery (CCA) and internal carotid artery (ICA) IMTs were investigated using multivariable regression and logistic models.
Results: In age-, sex-, and race/ethnicity-adjusted analyses, the trait anger score was positively associated with CCA and ICA IMTs (mean differences per 1-standard deviation increment of trait anger score were 0.014 [95% confidence interval {CI} = 0.003-0.025, p = .01] and 0.054 [95% CI = 0.017-0.090, p = .004] for CCA and ICA IMTs, respectively). Anger was also associated with the presence of carotid plaque (age-, sex-, and race/ethnicity-adjusted odds ratio per 1-standard deviation increase in trait anger = 1.27 [95% CI = 1.06-1.52]). The associations of the anger score with thicker IMT were attenuated after adjustment for covariates but remained statistically significant. Associations were stronger in men than in women and in whites than in other race/ethnic groups, but heterogeneity was only marginally statistically significant by race/ethnicity. There was no association of depressive symptoms or trait anxiety with IMT.
Conclusions: Only one of the three measures examined was associated with IMT, and the patterns seemed to be heterogeneous across race/ethnic groups.
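The mean IMT differences per 1-standard-deviation increment of trait anger reported above are coefficients from covariate-adjusted regression models. A minimal sketch of that kind of analysis — on simulated data with illustrative variable names, not the MESA dataset — might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated stand-in for cohort data (illustrative only; not MESA data)
df = pd.DataFrame({
    "cca_imt": rng.normal(0.85, 0.15, n),   # common carotid IMT, mm
    "trait_anger": rng.normal(15, 4, n),    # Spielberger trait anger score
    "age": rng.integers(45, 85, n),
    "male": rng.integers(0, 2, n),
})

# Standardize anger so the coefficient is the IMT difference per 1-SD increment
df["anger_z"] = (df["trait_anger"] - df["trait_anger"].mean()) / df["trait_anger"].std()

model = smf.ols("cca_imt ~ anger_z + age + male", data=df).fit()
coef = model.params["anger_z"]
lo, hi = model.conf_int().loc["anger_z"]
print(f"Mean IMT difference per 1-SD anger: {coef:.3f} mm (95% CI {lo:.3f} to {hi:.3f})")
```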
abstract_id: PUBMED:17283290
Negative emotions and 3-year progression of subclinical atherosclerosis. Context: Although depression, anxiety, and hostility/anger have each been associated with an increased risk of coronary artery disease, these overlapping negative emotions have not been simultaneously examined as predictors of the progression of subclinical atherosclerosis.
Objective: To evaluate the relative importance of depressive symptoms, anxiety symptoms, and hostility/anger in predicting subclinical atherosclerotic progression over a 3-year period.
Design/Setting: The Pittsburgh Healthy Heart Project, an ongoing prospective cohort study of healthy, older men and women from the general community. At baseline, questionnaires were administered to assess depressive symptoms, anxiety symptoms, hostility, anger experience, and anger expression. Mean carotid intima-media thickness was assessed by B-mode ultrasonography during the baseline and 3-year follow-up visits.
Participants: Of the 464 adults enrolled in the project, 324 (69.8%) were included in this report because they had complete baseline and follow-up data. Main Outcome Measure: Three-year change in mean carotid intima-media thickness.
Results: Regression analyses indicated that higher depressive symptoms at baseline were associated with greater 3-year change in carotid intima-media thickness (DeltaR(2) = 0.026, P = .002), even after taking into account demographic factors, cardiovascular risk factors, medication use, medical conditions, and other correlated negative emotions. Measures of anxiety symptoms, hostility, anger experience, and anger expression were each unrelated to intima-media thickness change. Post hoc analyses examining depressive symptom clusters showed that the somatic-vegetative symptoms of depression (DeltaR(2) = 0.027, P = .002), but not the cognitive-affective symptoms, were positively associated with intima-media thickness change.
Conclusion: Our findings suggest that the somatic-vegetative features of depression, but perhaps not anxiety and hostility/anger, may play an important role in the earlier stages of the development of coronary artery disease.
Answer: Yes, the available evidence indicates that anger and aggression are associated with carotid atherosclerosis. Several studies have found significant associations between anger, aggression, and the severity of carotid atherosclerosis. One study found that anger and aggression may be risk factors for cerebrovascular disease, with significant correlations between plaque score and anger, as well as aggression (PUBMED:8322399). Another study reported that suppressed anger is associated with increased carotid arterial stiffness in older adults, suggesting that high anger-in can potentiate the effects of age on stiffening of the central arteries (PUBMED:17070423). Furthermore, anger-related personality traits, such as trait anger and anger-out, have been associated with heightened carotid atherosclerosis in untreated hypertensive men (PUBMED:15385684).
Race-gender differences have also been observed, with trait anger and carotid IMT being significantly and positively associated, particularly in Black men (PUBMED:17363362). Additionally, very high trait anger has been associated with arterial stiffness in men (PUBMED:16407698). In a longitudinal study, trait-anger emerged as a predictor of IMT progression at a 6-year follow-up, with both depression and trait-anger independently associated with 6-year IMT progression (PUBMED:36715099).
Trait anger has also been shown to predict progression of carotid atherosclerosis in healthy middle-aged women, with the metabolic syndrome potentially mediating this association (PUBMED:15564356). Moreover, trait anger was found to be associated with carotid arterial wall thickness, with stronger associations in men than in women and in whites than in other race/ethnic groups (PUBMED:22511725). Lastly, one prospective study found that depressive symptoms, but not hostility or anger measures, predicted greater 3-year change in carotid intima-media thickness, indicating that the contribution of anger to early atherosclerotic progression is not uniform across studies, even though negative emotions more broadly appear relevant to early coronary artery disease (PUBMED:17283290).
Instruction: Is exposure to silica associated with lung cancer in the absence of silicosis?
Abstracts:
abstract_id: PUBMED:36511262
Risk Assessment of Silicosis and Lung Cancer Mortality associated with Occupational Exposure to Crystalline Silica in Iran. Background: Exposure to crystalline silica has long been identified to be associated with lung diseases. Therefore, the present study aimed to assess the risk of silicosis and lung cancer associated with occupational exposure to crystalline silica in Iran.
Study Design: It is a systematic review study.
Methods: Different databases were searched, and the Cochrane method was used for the systematic review. Thereafter, cumulative exposure to crystalline silica (mg/m3-y) was calculated for every industry. The relative risk of death from silicosis was estimated using Mannetje's method. Based on the geometric mean of exposure, the lung cancer risk associated with exposure to crystalline silica was also calculated.
Results: Workers' exposure to silica ranged over geometric means of 0.0212 to 0.2689 mg/m3, generally higher than the occupational exposure limits recommended by the National Institute for Occupational Safety and Health (NIOSH) and the American Conference of Governmental Industrial Hygienists (ACGIH; recommended standard 0.025 mg/m3). The relative risk of silicosis was in the range of 1 to 14 per 1000 people, and the risk of lung cancer in workers ranged from 13 to 137 per 1000 people.
Conclusion: Since workers are at considerable risk of cancer due to exposure to silica in Iran, exposure control programs need to be implemented in workplaces to decrease the concentration of silica.
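The exposure metrics used in this risk assessment (geometric mean concentration and cumulative exposure in mg/m3-years) are straightforward to compute from shift measurements; the Mannetje silicosis-mortality model and the lung cancer risk model themselves are not reproduced here. A minimal sketch with hypothetical measurements:

```python
import numpy as np

# Hypothetical full-shift respirable crystalline silica measurements (mg/m3)
shift_concentrations = np.array([0.012, 0.045, 0.090, 0.030, 0.150, 0.025])

# Geometric mean, the exposure metric quoted in the abstract
gm = np.exp(np.mean(np.log(shift_concentrations)))

# Cumulative exposure in mg/m3-years for a worker at this GM over a 30-year career
years_exposed = 30
cumulative_exposure = gm * years_exposed

acgih_tlv = 0.025  # mg/m3, 8-h TWA recommended by ACGIH
print(f"GM exposure: {gm:.4f} mg/m3 ({gm / acgih_tlv:.1f}x the ACGIH TLV)")
print(f"Cumulative exposure: {cumulative_exposure:.2f} mg/m3-years")
```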
abstract_id: PUBMED:26858767
Assessment of Occupational Exposure to Dust and Crystalline Silica in Foundries. Background: The term "crystalline silica" refers to the crystallized form of SiO2, of which quartz is the most abundant form in the earth's crust; inhalation of large doses in the course of occupational exposure can cause silicosis and lung cancer. The aim of this study was to assess occupational exposure to dust and crystalline silica in foundries in Pakdasht, Iran.
Materials And Methods: In this study, airborne dust samples were collected on PVC filters (37 mm diameter, 0.8 µm pore size) using a sampling pump and an open-face cyclone at a flow rate of 2.2 l/min for a maximum volume of 800 liters. Crystalline silica was determined by spectrophotometry according to National Institute for Occupational Safety and Health (NIOSH) method No. 7601.
Results: Crystalline silica concentrations were higher than the limit recommended by NIOSH and the American Conference of Governmental Industrial Hygienists (ACGIH) (0.025 mg/m(3)). The concentration of crystalline silica was 0.02-0.1 mg/m(3). The average total dust concentration was higher than the Permissible Exposure Limit (PEL) of the Occupational Safety and Health Administration (OSHA).
Conclusion: Given that 50% of workers are exposed to concentrations above the permissible limit, it is essential to take measures to control crystalline silica dust.
abstract_id: PUBMED:27630796
Risk Assessment of Exposure to Silica Dust in Building Demolition Sites. Background: Building demolition can lead to emission of dust into the environment. Exposure to silica dust may be considered as an important hazard in these sites. The objectives of this research were to determine the amount of workers' exposure to crystalline silica dust and assess the relative risk of silicosis and the excess lifetime risk of mortality from lung cancer in demolition workers.
Methods: Four sites in the Tehran megacity region were selected. Silica dust was collected using the National Institute for Occupational Safety and Health method 7601 and determined spectrophotometrically. The Mannetje et al and Rice et al models were chosen to examine the rate of silicosis-related mortality and the excess lifetime risk of mortality from lung cancer, respectively.
Results: Demolition workers' exposure was in the range of 0.085-0.185 mg/m(3). The relative risk of silicosis-related mortality increased from 1 in the workers with the lowest exposure level to 22.64/1,000 in the employees with the highest exposure level. The excess lifetime risk of mortality from lung cancer was in the range of 32-60/1,000 exposed workers.
Conclusion: The geometric and arithmetic means of exposure were higher than the threshold limit value for silica dust at all demolition sites. The risk of silicosis mortality for many demolition workers was higher than 1/1,000 (an unacceptable level of risk). Estimates of lifetime lung cancer mortality showed a higher risk of death from lung cancer among building demolition workers.
abstract_id: PUBMED:32330394
Respirable Crystalline Silica Exposure, Smoking, and Lung Cancer Subtype Risks. A Pooled Analysis of Case-Control Studies. Rationale: Millions of workers around the world are exposed to respirable crystalline silica. Although silica is a confirmed human lung carcinogen, little is known regarding the disease risks associated with low levels of exposure or about risks by cancer subtype. Objectives: We aimed to address current knowledge gaps in lung cancer risks associated with low levels of occupational silica exposure and the joint effects of smoking and silica exposure on lung cancer risks. Methods: Subjects from 14 case-control studies from Europe and Canada with detailed smoking and occupational histories were pooled. A quantitative job-exposure matrix was used to estimate silica exposure by occupation, time period, and geographical region. Logistic regression models were used to estimate exposure-disease associations and the joint effects of silica exposure and smoking on risk of lung cancer. Stratified analyses by smoking history and cancer subtypes were also performed. Measurements and Main Results: Our study included 16,901 cases and 20,965 control subjects. Lung cancer odds ratios ranged from 1.15 (95% confidence interval, 1.04-1.27) to 1.45 (95% confidence interval, 1.31-1.60) for groups with the lowest and highest cumulative exposure, respectively. Increasing cumulative silica exposure was associated (P trend < 0.01) with increasing lung cancer risks in nonsilicotics and in current, former, and never-smokers. Increasing exposure was also associated (P trend ≤ 0.01) with increasing risks of lung adenocarcinoma, squamous cell carcinoma, and small cell carcinoma. A supermultiplicative interaction of silica exposure and smoking was observed on overall lung cancer risks; superadditive effects were observed for risks of lung cancer and all three included subtypes. Conclusions: Silica exposure is associated with lung cancer at low exposure levels. The exposure-response relationship was robust and present regardless of smoking, silicosis status, and cancer subtype.
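The pooled analysis above estimates odds ratios by quartile of cumulative exposure using logistic regression adjusted for smoking. A minimal sketch of that modelling approach on simulated case-control data — the variable names, effect sizes, and data are illustrative, not the study's job-exposure matrix — could look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

# Simulated case-control data: cumulative silica exposure (mg/m3-years) plus
# smoking status and age as confounders (all values hypothetical)
df = pd.DataFrame({
    "cum_exposure": rng.gamma(shape=1.5, scale=1.0, size=n),
    "ever_smoker": rng.integers(0, 2, n),
    "age": rng.integers(40, 75, n),
})
logit_p = -3 + 0.15 * df["cum_exposure"] + 1.2 * df["ever_smoker"] + 0.02 * df["age"]
df["case"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Exposure quartiles, mirroring the categorical analysis described in the abstract
df["exp_q"] = pd.qcut(df["cum_exposure"], 4, labels=["Q1", "Q2", "Q3", "Q4"]).astype(str)

model = smf.logit("case ~ C(exp_q, Treatment('Q1')) + ever_smoker + age", data=df).fit(disp=0)
print(np.exp(model.params))  # odds ratios relative to the lowest exposure quartile
```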
abstract_id: PUBMED:24479465
Occupational exposure to crystalline silica at Alberta work sites. Although crystalline silica has been recognized as a health hazard for many years, it is still encountered in many work environments. Numerous studies have revealed an association between exposure to respirable crystalline silica and the development of silicosis and other lung diseases including lung cancer. Alberta Jobs, Skills, Training and Labour conducted a project to evaluate exposure to crystalline silica at a total of 40 work sites across 13 industries. Total airborne respirable dust and respirable crystalline silica concentrations were quite variable, but there was a potential to exceed the Alberta Occupational Exposure Limit (OEL) of 0.025 mg/m(3) for respirable crystalline silica at many of the work sites evaluated. The highest potentials for overexposure occurred in sand and mineral processing (GM 0.090 mg/m(3)), followed by new commercial building construction (GM 0.055 mg/m(3)), aggregate mining and crushing (GM 0.048 mg/m(3)), abrasive blasting (GM 0.027 mg/m(3)), and demolition (GM 0.027 mg/m(3)). For worker occupations, geometric mean exposure ranged from 0.105 mg/m(3) (brick layer/mason/concrete cutting) to 0.008 mg/m(3) (dispatcher/shipping, administration). Potential for GM exposure exceeding the OEL was identified in a number of occupations where it was not expected, such as electricians, carpenters and painters. These exposures were generally related to the specific task the worker was doing, or arose from incidental exposure from other activities at the work site. The results indicate that where there is a potential for activities producing airborne respirable crystalline silica, it is critical that the employer include all worker occupations at the work site in their hazard assessment. There appears to be a relationship between airborne total respirable dust concentrations and respirable crystalline silica concentrations, but further study is required to fully characterize this relationship. If this relationship holds true, it may provide a useful hazard assessment tool for employers by which the potential for exposure to airborne respirable silica at the work site can be more easily estimated.
abstract_id: PUBMED:23997236
Determinants of respirable crystalline silica exposure among stoneworkers involved in stone restoration work. Objectives: Crystalline silica occurs as a significant component of many traditional materials used in restoration stonework, and stoneworkers who work with these materials are potentially exposed to stone dust containing respirable crystalline silica (RCS). Exposure to RCS can result in the development of a range of adverse health effects, including silicosis and lung cancer. An understanding of the determinants of RCS exposure is important for selecting appropriate exposure controls and in preventing occupational diseases. The objectives of this study were to quantify the RCS exposure of stoneworkers involved in the restoration and maintenance of heritage properties and to identify the main determinants of RCS exposure among this occupational group.
Methods: An exposure assessment was carried out over a 3-year period amongst a group of stonemasons and stone cutters involved in the restoration and maintenance of heritage buildings in Ireland. Personal air samples (n = 103) with corresponding contextual information were collected. Exposure data were analysed using mixed-effects modelling to investigate determinants of RCS exposure and their contribution to the individual's mean exposure. Between-depot, between-worker, and within-worker variance components were also investigated.
Results: The geometric mean (GM) RCS exposure concentrations for all tasks measured ranged from <0.02 to 0.70mg m(-3). GM RCS exposure concentrations for work involving limestone and lime mortar were <0.02-0.01mg m(-3), tasks involving granite were 0.01-0.06mg m(-3), and tasks involving sandstone were <0.02-0.70mg m(-3). Sixty-seven percent of the 8-h time-weighted average (TWA) exposure measurements for tasks involving sandstone exceeded the Scientific Committee on Occupational Exposure Limits recommended occupational exposure limit value of 0.05mg m(-3). Highest RCS exposure values were recorded for the tasks of grinding (GM = 0.70mg m(-3)) and cutting (GM = 0.70mg m(-3)) sandstone. In the mixed-effects analyses, task was found to be significantly associated with RCS exposure, with the tasks of grinding and cutting resulting in average exposures of between 32 and 70 times the exposures recorded for the task of stone decorating. The between-depot, between-worker, and within-worker variance components were reduced by 46, 89, and 49%, respectively, after including task in the mixed effects model.
Conclusions: Restoration stoneworkers are regularly overexposed (compared with 0.1 and 0.05mg m(-3) 8-h TWA) to RCS dust when working with sandstone. The results indicate that the tasks of cutting and grinding sandstone are predictors of increased exposure to RCS dust. In order to decrease exposure to RCS, efforts should be focused on developing and implementing interventions which focus on these high-risk tasks.
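The determinants analysis described above fits mixed-effects models with between-depot, between-worker, and within-worker variance components. A simplified sketch with a single random intercept per worker, fitted to simulated log-transformed exposures (task names, effect sizes, and data are illustrative, and the study's full multi-level structure is not reproduced), might look like this with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated repeated personal samples: log-exposure built from a fixed task
# effect plus a random intercept per worker and residual noise
workers = np.repeat([f"w{i}" for i in range(15)], 6)
tasks = rng.choice(["cutting", "grinding", "decorating"], size=workers.size)
task_effect = {"cutting": 1.8, "grinding": 1.9, "decorating": -1.5}
worker_effect = dict(zip(np.unique(workers), rng.normal(0, 0.4, 15)))

log_rcs = (np.array([task_effect[t] for t in tasks])
           + np.array([worker_effect[w] for w in workers])
           + rng.normal(0, 0.6, workers.size)
           + np.log(0.02))                       # baseline around 0.02 mg/m3

df = pd.DataFrame({"log_rcs": log_rcs, "task": tasks, "worker": workers})

# Random-intercept model: task as fixed effect, worker as grouping factor
model = smf.mixedlm("log_rcs ~ task", data=df, groups=df["worker"]).fit()
print(model.summary())
```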
abstract_id: PUBMED:24043436
Exposure-response analysis and risk assessment for lung cancer in relationship to silica exposure: a 44-year cohort study of 34,018 workers. Crystalline silica has been classified as a human carcinogen by the International Agency for Research on Cancer (Lyon, France); however, few previous studies have provided quantitative data on silica exposure, silicosis, and/or smoking. We investigated a cohort in China (in 1960-2003) of 34,018 workers without exposure to carcinogenic confounders. Cumulative silica exposure was estimated by linking a job-exposure matrix to work history. Cox proportional hazards model was used to conduct exposure-response analysis and risk assessment. During a mean 34.5-year follow-up, 546 lung cancer deaths were identified. Categorical analyses by quartiles of cumulative silica exposure (using a 25-year lag) yielded hazard ratios of 1.26, 1.54, 1.68, and 1.70, respectively, compared with the unexposed group. Monotonic exposure-response trends were observed among nonsilicotics (P for trend < 0.001). Analyses using splines showed similar trends. The joint effect of silica and smoking was more than additive and close to multiplicative. For workers exposed from ages 20 to 65 years at 0.1 mg/m(3) of silica exposure, the estimated excess lifetime risk (through age 75 years) was 0.51%. These findings confirm silica as a human carcinogen and suggest that current exposure limits in many countries might be insufficient to protect workers from lung cancer. They also indicate that smoking cessation could help reduce lung cancer risk for silica-exposed individuals.
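The exposure-response analysis above relies on a Cox proportional hazards model with lagged cumulative exposure. A minimal sketch on simulated cohort data is shown below; the lifelines library and all variable names are assumptions for illustration, and the 25-year lag is represented only as a pre-computed covariate rather than being derived from work histories.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 1000

# Simulated cohort: lagged cumulative silica exposure (mg/m3-years) and smoking
# status as covariates; follow-up time and event indicator generated below
df = pd.DataFrame({
    "cum_silica_lag25": rng.gamma(shape=2.0, scale=1.5, size=n),
    "ever_smoker": rng.integers(0, 2, n),
})
hazard = 0.01 * np.exp(0.10 * df["cum_silica_lag25"] + 0.7 * df["ever_smoker"])
event_time = rng.exponential(1 / hazard)
censor_time = rng.uniform(10, 45, n)
df["time"] = np.minimum(event_time, censor_time)
df["event"] = (event_time <= censor_time).astype(int)

# Fit the proportional hazards model; remaining columns are treated as covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios per unit of cumulative exposure and for smoking
```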
abstract_id: PUBMED:35769775
Case Report: Exposure to Respirable Crystalline Silica and Respiratory Health Among Australian Mine Workers. Occupational exposure to respirable crystalline silica (RCS) is common in a range of industries, including mining, and has been associated with adverse health effects such as silicosis, lung cancer, and non-malignant respiratory diseases. This study used a large population database of 6,563 mine workers from Western Australia who were examined for personal exposure to RCS between 2001 and 2012. A standardized respiratory questionnaire was also administered to collect information related to their respiratory health. Logistic regression analyses were performed to ascertain the association between RCS concentrations and the prevalence of respiratory symptoms among mine workers. The estimated exposure levels of RCS (geometric mean 0.008 mg/m³, GSD 4.151) declined over the study period (p < 0.001) and were below the exposure standard of 0.05 mg/m³. Miners exposed to RCS had a significantly higher prevalence of phlegm (p = 0.017) and any respiratory symptom (p = 0.013), even at concentrations within the exposure limit. Miners are susceptible to adverse respiratory health effects at low levels of RCS exposure. More stringent prevention strategies are therefore recommended to protect mine workers from RCS exposures.
abstract_id: PUBMED:19066933
Is exposure to silica associated with lung cancer in the absence of silicosis? A meta-analytical approach to an important public health question. Objective: This report investigates epidemiologically whether exposure to silica is associated with lung cancer risks in individuals without silicosis.
Methods: We searched the PubMed reference database from 1966 through 1/2007 for reports of lung cancer in silica-exposed persons without and with silicosis. To explore heterogeneity between studies, a multi-stage strategy was employed. First, fixed-effect summaries (FES) and corresponding 95% confidence intervals (CI) for various combinations of studies were calculated, weighting individual results by their precision. The homogeneity of the contributing results was examined using χ² statistics. Where there was evidence of substantial heterogeneity, the CI around the FES was increased to take account of the between-study variability. Random-effect summaries and their CI for identical combinations of studies were also computed. Meta-regression was used to explore interactions with covariates. To draw comparisons, parallel analyses were performed for non-silicotics and for silicotics.
Results: The persistence of a significant link between silicosis and lung cancer since the characterisation in 1997 of silica as a human carcinogen [our estimates of lung cancer relative risks (RR) exceeded unity in each of 38 eligible studies of silicotics published until 1/2007, averaging 2.1 in analyses based on both fixed- and random-effect models (95% CI = (2.0-2.3) and (1.9-2.3), respectively)] does not resolve our study question, namely whether exposure to silica at levels below those required to induce silicosis is carcinogenic. Importantly, our detailed examination of 11 studies of lung cancer in silica-exposed individuals without silicosis included only three with data allowing adjustment for smoking habits. They yielded a pooled RR estimate of 1.0 [95% CI = (0.8-1.3)]. The other eight studies, with no adjustment for smoking habits, suggested a marginally elevated risk of lung cancer [RR = 1.2; 95% CI (1.1-1.4)], but with significant heterogeneity between studies (P ≈ 0.05).
Conclusions: Necessary further research should concentrate on silica exposures both above and below those that induce silicosis, so that the shape of the exposure-response relationship may be identified, with adjustments for likely confounding factors including silicosis. Time-dependent information on silicosis and on silica dust is required as well as the application of methods like G-estimation to answer the important public health question: Is silicosis a necessary condition for the elevation of silica-associated lung cancer risks?
abstract_id: PUBMED:26888888
The Australian Work Exposures Study: Prevalence of Occupational Exposure to Respirable Crystalline Silica. Background: Respirable crystalline silica (RCS) is a biologically active dust that can accumulate in the lung and induce silicosis and lung cancer. Despite occupational exposure being the predominant source, no study has described current occupational RCS exposure on a national scale in Australia. The aim of this study is to estimate the characteristics of those exposed and the circumstances of RCS exposure in Australian workplaces.
Methods: A cross-sectional survey of the Australian working population (18-65 years old) was conducted. Information about the respondents' current job and their demographic characteristics was collected in a telephone interview. Occupational exposure to RCS was determined based on preprogrammed decision rules regarding potential levels of exposure associated with self-reported tasks.
Results: Overall, 6.4% of respondents were deemed exposed to RCS at work in 2012 (3.3% were exposed at a high level). The exposure varied with sex, state of residence, and socioeconomic status. Miners and construction workers were most likely to be highly exposed to RCS when performing tasks with concrete or cement or working near crushers that create RCS-containing dusts. When extrapolated to the entire Australian working population, 6.6% of Australian workers were exposed to RCS and 3.7% were highly exposed when carrying out tasks at work.
Conclusion: This is the first study investigating occupational RCS exposure in an entire national working population. The information about occupational tasks that lead to high level RCS exposure provided by this study will inform the direction of occupational interventions and policies.
Answer: The association between exposure to silica and lung cancer in the absence of silicosis has been a subject of investigation. A meta-analytical approach to this public health question found that while there is a persistent significant link between silicosis and lung cancer, with relative risk estimates exceeding unity in studies of silicotics, the question of whether exposure to silica at levels below those required to induce silicosis is carcinogenic remains unresolved (PUBMED:19066933). The meta-analysis included 11 studies of lung cancer in silica-exposed individuals without silicosis, but only three of these allowed for adjustment for smoking habits, yielding a pooled relative risk estimate of 1.0, suggesting no increased risk when smoking is accounted for. The other eight studies, without adjustment for smoking habits, suggested a marginally elevated risk of lung cancer, but with significant heterogeneity between studies (PUBMED:19066933).
Furthermore, a pooled analysis of case-control studies found that exposure to respirable crystalline silica is associated with lung cancer at low exposure levels, with an exposure-response relationship present regardless of smoking, silicosis status, and cancer subtype (PUBMED:32330394). This indicates that even low levels of occupational silica exposure are associated with increased lung cancer risks.
In summary, while there is evidence to suggest that exposure to silica is associated with lung cancer risks even in the absence of silicosis, the relationship is complex and may be influenced by factors such as smoking habits and the level of exposure. Further research is needed to clarify the exposure-response relationship and to determine the carcinogenic potential of silica at levels below those that induce silicosis. |
Instruction: Does energy intake underreporting involve all kinds of food or only specific food items?
Abstracts:
abstract_id: PUBMED:11126348
Does energy intake underreporting involve all kinds of food or only specific food items? Results from the Fleurbaix Laventie Ville Santé (FLVS) study. Objective: To determine if energy intake underreporting concerns all major food groups or if it occurs for specific food groups only.
Design: Cross-sectional study on dietary habits and food consumption.
Subjects: Five hundred and four women and 529 men, aged between 25 and 55 y, participating in the Fleurbaix Laventie Ville Santé study.
Measurements: A nutritional survey was conducted between March and June 1993 using a 3-day food record. Reported weight and height were used to estimate body mass index and basal metabolic rate. Underreporters were defined as subjects whose ratio of mean energy intake to basal metabolic rate was lower than 1.05. Food consumption was compared between underreporters and non-underreporters.
Results: Energy percentage of fat and carbohydrate were lower in underreporters than in non-underreporters in contrast to the energy percentage of protein. This was due to the fact that food items rich in fat and/or carbohydrates (such as butter, French fries, sugars and confectionery, cakes and pastries) were reported to be less frequently eaten and/or in smaller quantities in underreporters compared to non-underreporters.
Conclusion: Although this study presents some limitations, such as the use of reported weight and a standard value for physical activity, it shows that reported foods differed, quantitatively and qualitatively, between severe underreporters and non-underreporters. Underreporting of food intake does not result from a systematic underestimation of portion sizes for all food items, but seems to concern specific food items which are generally considered 'bad for health'.
abstract_id: PUBMED:35205013
Misreporting of Energy Intake Is Related to Specific Food Items in Low-Middle Income Chilean Adolescents. Background: Misreporting of energy intake (EI) in self-reported dietary assessment is inevitable, and even less is known about which food items are misreported by low-middle income adolescents. We evaluated the prevalence of misreporting of energy intake and its relationship with nutrients and food intake.
Methods: We analyzed 24 h dietary recalls collected from 576 adolescents (52.08% boys) from southeastern Santiago. Anthropometrics measurements and information about sociodemographic characteristics were obtained during clinical visits. The method proposed by McCrory et al. was used to identify under-reporters (UnRs), over-reporters (OvRs), or plausible reporters (PRs). Food items were collapsed into 28 categories and every food item was expressed as a percentage of total EI. Logistic regression models were performed to investigate the factors associated with misreporting, and a two-part model was used to estimate the difference in the percentage of EI between UnRs versus PRs, and OvRs versus PRs in each food item.
Results: Half of the participants were classified as UnRs and 9% were OvRs. UnR was higher among boys (62%) and adolescents with overweight and obesity (72%). OvR was higher among adolescents with normal weight. UnRs had a lower intake of energy from cookies/cake, chocolate/confectionery, and a higher intake of vegetables and eggs than PRs. OvRs had a higher intake of cookies/cake, chocolate/confectionery, and a lower intake of fruit, white milk, and yogurt than PRs.
Conclusions: A high frequency of UnR among boys and participants with excess weight was found in this study. Healthy and unhealthy foods are reported differently between UnRs and OvRs of energy intake, indicating that bias is specific for some food items that adolescents commonly eat.
abstract_id: PUBMED:24724773
Comparative analysis of approaches for assessing energy intake underreporting by female bariatric surgery candidates. Objective: To test six variations in the Goldberg equation for evaluating the underreporting of energy intake (EI) among obese women on the waiting list for bariatric surgery, considering variations in resting metabolic rate (RMR), physical activity, and food intake levels in group and individual approaches.
Methods: One hundred obese women aged 20 to 45 years (33.3 ± 6.08) recruited from a bariatric surgery waiting list participated in the study. Underreporting was assessed by testing whether the ratio of reported energy intake to resting metabolic rate measured by indirect calorimetry (rEI:RMR) was compatible with the predicted physical activity level (PAL). Six approaches were used for defining the cutoff points. The approaches took into account variances in the components of the rEI:RMR = PAL equation as a function of the assumed PAL, sample size (n), and measured or estimated RMR.
Results: The underreporting percentage varied from 55% to 97%, depending on the approach used for generating the cutoff points. The ratio rEI:RMR and estimated PAL of the sample were significantly different (p = 0.001). Sixty-one percent of the women reported an EI lower than their RMR. The PAL variable significantly affected the cutoff point, leading to different proportions of underreporting. The RMR measured or estimated in the equation did not result in differences in the proportion of underreporting. The individual approach was less sensitive than the group approach.
Conclusion: RMR did not interfere in underreporting estimates. However, PAL variations were responsible for significant differences in cutoff point. Thus, PAL should be considered when estimating underreporting, and even though the individual approach is less sensitive than the group approach, it may be a useful tool for clinical practice.
abstract_id: PUBMED:36350182
Underreporting of energy intake is not associated with the reported consumption of NOVA-classified food groups in socially vulnerable women. Few studies have investigated which types of food are least reported among underreporters of energy intake (EI). This study assessed the association between the underreporting of EI and the consumption report of food groups according to NOVA classification in women in social vulnerability. EI was measured through three 24-h dietary recalls administered by the research team. Total energy expenditure (TEE) was evaluated using the doubly labelled water method. The percentage of EI arising from each NOVA group food classification (unprocessed/minimally processed foods, culinary ingredients, processed foods and ultra-processed foods [UPF]) was calculated. The agreement between the EI and the TEE was assessed using the ratio EI:TEE. Associations were assessed with Pearson's correlation and multivariable linear regression, adjusted for age, education and body fat. The sample (63 women, age: 30.8 years, Body Mass Index: 27.6 kg/m2 ) reported an EI of 1849 kcal and a TEE of 2223 kcal, with a mean EI:TEE of 0.85. There were no significant correlations between the EI:TEE and the reported food intake according to NOVA classifications. Multivariable linear regression also did not show any significant associations (UPF: 8.47, 95% CI: [-3.65; 20.60] %kcal; Processed: -6.85, [-19.21; 7.71] %kcal; Culinary ingredients: 1.30 [-5.10; 7.71] %kcal; Unprocessed/minimally processed: -2.92 [-10.98; 5.13] %kcal). In conclusion, socially vulnerable women that underreport their EI do not report a lower intake of any specific group of foods according to NOVA classification.
abstract_id: PUBMED:20368944
Underreporting of dietary intake by body mass index in premenopausal women participating in the Healthy Women Study. Underreporting patterns by the level of obesity have not been fully assessed yet. The purpose of this study was to examine the differential underreporting patterns on cardiovascular risk factor, macronutrient, and food group intakes by the level of Body Mass Index (BMI). We analyzed cross-sectional baseline nutritional survey data from the population-based longitudinal study, the Healthy Women Study (HWS) cohort. Study subjects included 538 healthy premenopausal women participating in the HWS. Nutrient and food group intakes were assessed by the one-day 24-hour dietary recall and a semi-quantitative food frequency questionnaire, respectively. The ratio of reported energy intake (EI) to estimated basal metabolic rate (BMR) was used as a measure of relative energy reporting status and categorized into tertiles. Overweight group (BMI ≥ 25 kg/m²) had a higher ratio of EI to BMR (EI/BMR) than normal weight group (BMI < 25 kg/m²). Normal weight and overweight groups showed similar patterns in cardiovascular risk factors, nutrient intake, and food group intake by the EI/BMR. Fat and saturated fat intakes as a nutrient density were positively associated with the EI/BMR. Proportion of women who reported higher consumption (≥4 times/wk) of sugar/candy, cream and red meat groups was greater in higher tertiles of the EI/BMR in both BMI groups. Our findings suggest similar patterns of underreporting of cardiovascular risk factors, and macronutrient and food group intakes in both normal and overweight women.
abstract_id: PUBMED:8793423
Energy intake adaptation of food intake to extreme energy densities of food by obese and non-obese women. Objective: Examination of energy intake in relation to energy density of food in obese and non-obese women. Assessment of energy and macronutrient intake over a day.
Design: Controlled food intake diaries of two weekdays and one weekend day.
Setting: Daily life, with visits to the department of Human Biology, State University of Limburg.
Subjects: 96 women; 68 subjects (34 obese and 34 non-obese) were matched for age (20-50 y) and were selected based on completing the food intake diaries accurately, i.e. underreporting < 10% of their estimated energy intake.
Results: The obese women showed a food intake distribution of 24 en% (0-7.5 kJ/g), 52 en% (7.5-15 kJ/g) and 24 en% (15-22.5 kJ/g), with a macronutrient composition of C/P/F: 39/17/44 en%. (Significantly different from the values of non-obese (P = 0.007) and of the Dutch food guidelines values (P = 0.008)). Non-obese women showed a food intake distribution of 38 en% (0-7.5 kJ/g), 49 en% (7.5-15 kJ/g), 13 en% (15-22.5 kJ/g), with a macronutrient composition of C/P/F: 46/17/37 en%. Energy intake per meal increased from 1.2 or 1.3 MJ to 4.1 or 4.5 MJ over a day.
Conclusions: In obese women food intake was adapted to extreme energy densities of the food and in non-obese women food intake was overadapted to extreme energy densities. Energy intake per meal increased during the day.
abstract_id: PUBMED:10617957
Undereating and underrecording of habitual food intake in obese men: selective underreporting of fat intake. Background: Underreporting of food intake is common in obese subjects.
Objective: One aim of this study was to assess to what extent underreporting by obese men is explained by underrecording (failure to record in a food diary everything that is consumed) or undereating. Another aim of the study was to find out whether there was an indication for selective underreporting.
Design: Subjects were 30 obese men with a mean (±SD) body mass index (in kg/m²) of 34 ± 4. Total food intake was measured over 1 wk. Energy expenditure (EE) was measured with the doubly labeled water method, and water loss was estimated with deuterium-labeled water. Energy balance was checked for by measuring body weight at the start and end of the food-recording week and 1 wk after the recording week.
Results: Mean energy intake and EE were 10.4 ± 2.5 and 16.7 ± 2.4 MJ/d, respectively; underreporting was 37 ± 16%. The mean body mass loss of 1.0 ± 1.3 kg over the recording week was significantly different (P < 0.05) from the change in body mass over the nonrecording week, and indicated 26% undereating. Water intake (reported + metabolic water) and water loss were significantly different from each other and indicated 12% underrecording. The reported percentage of energy from fat was a function of the level of underreporting: percentage of energy from fat = 46 - 0.2 × percentage of underreporting (r² = 0.28, P = 0.003).
Conclusions: Total underreporting by the obese men was explained by underrecording and undereating. The obese men selectively underreported fat intake.
abstract_id: PUBMED:29189903
Comparison of food consumption and nutrient intake assessed with three dietary assessment methods: results of the German National Nutrition Survey II. Purpose: Comparison of food consumption, nutrient intake and underreporting of diet history interviews, 24-h recalls and weighed food records to gain further insight into specific strength and limitations of each method and to support the choice of the adequate dietary assessment method.
Methods: For 677 participants (14-80 years) of the German National Nutrition Survey II, confidence intervals for food consumption and nutrient intake were calculated on the basis of bootstrapping samples, Cohen's d for the relevance of differences, and intraclass correlation coefficients for the degree of agreement of dietary assessment methods. Low energy reporters were identified with Goldberg cut-offs.
Results: In 7 of 18 food groups diet history interviews showed higher consumption means than 24-h recalls and weighed food records. Especially mean values of food groups perceived as socially desirable, such as fruit and vegetables, were highest for diet history interviews. For "raw" and "cooked vegetables", the diet history interviews showed a mean consumption of 144 and 109 g/day in comparison with 68 and 70 g/day in 24-h recalls and 76 and 75 g/day in weighed food records, respectively. For "fruit", diet history interviews showed a mean consumption of 256 g/day in comparison with 164 g/day in 24-h recalls and 147 g/day in weighed food records. No major differences regarding underreporting of energy intake were found between dietary assessment methods.
Conclusions: With regard to estimating food consumption and nutrient intake, 24-h recalls and weighed food records showed smaller differences and better agreement than pairwise comparisons with diet history interviews.
abstract_id: PUBMED:36901000
Nutritional Content of Popular Menu Items from Online Food Delivery Applications in Bangkok, Thailand: Are They Healthy? The rise in online food delivery (OFD) applications has increased access to a myriad of ready-to-eat options, which may lead to unhealthier food choices. Our objective was to assess the nutritional profile of popular menu items available through OFD applications in Bangkok, Thailand. We selected the top 40 popular menu items from three of the most commonly used OFD applications in 2021. Each menu item was collected from the top 15 restaurants in Bangkok for a total of 600 items. Nutritional contents were analysed by a professional food laboratory in Bangkok. Descriptive statistics were employed to describe the nutritional content of each menu item, including energy, fat, sodium, and sugar content. We also compared nutritional content to the World Health Organization's recommended daily intake values. The majority of menu items were considered unhealthy, with 23 of the 25 ready-to-eat menu items containing more than the recommended sodium intake for adults. Eighty percent of all sweets contained approximately 1.5 times more sugar than the daily recommendation. Displaying nutrition facts in the OFD applications for menu items and providing consumers with filters for healthier options are required to reduce overconsumption and improve consumer food choice.
abstract_id: PUBMED:25261733
A photographic method to measure food item intake. Validation in geriatric institutions. From both a clinical and research perspective, measuring food intake is an important issue in geriatric institutions. However, weighing food in this context can be complex, particularly when the items remaining on a plate (side dish, meat or fish and sauce) need to be weighed separately following consumption. A method based on photography that involves taking photographs after a meal to determine food intake consequently seems to be a good alternative. This method enables the storage of raw data so that unhurried analyses can be performed to distinguish the food items present in the images. Therefore, the aim of this paper was to validate a photographic method to measure food intake in terms of differentiating food item intake in the context of a geriatric institution. Sixty-six elderly residents took part in this study, which was performed in four French nursing homes. Four dishes of standardized portions were offered to the residents during 16 different lunchtimes. Three non-trained assessors then independently estimated both the total and specific food item intakes of the participants using images of their plates taken after the meal (photographic method) and a reference image of one plate taken before the meal. Total food intakes were also recorded by weighing the food. To test the reliability of the photographic method, agreements between different assessors and agreements among various estimates made by the same assessor were evaluated. To test the accuracy and specificity of this method, food intake estimates for the four dishes were compared with the food intakes determined using the weighed food method. To illustrate the added value of the photographic method, food consumption differences between the dishes were explained by investigating the intakes of specific food items. Although they were not specifically trained for this purpose, the results demonstrated that the assessor estimates agreed between assessors and among various estimates made by the same assessor. The results also revealed that the accuracy of this method was not dependent on the type of food studied, thus, the photographic method was not specific to a particular food type. Finally, the photographic method was able to provide more detailed data because it allowed differentiation between food item intakes. These findings clearly suggest that the photographic method is a valid and useful method to measure food intake in geriatric institutions.
Answer: Energy intake underreporting does not involve all kinds of food equally but is more prevalent with specific food items. Studies have shown that underreporting is often associated with foods that are generally considered 'bad for health' or less socially desirable. For instance, in the Fleurbaix Laventie Ville Santé (FLVS) study, it was found that underreporters reported consuming less frequently and in smaller quantities food items rich in fat and/or carbohydrates, such as butter, French fries, sugars and confectionery, cakes, and pastries, compared to non-underreporters (PUBMED:11126348). Similarly, in a study involving Chilean adolescents, underreporters had a lower intake of energy from cookies/cake, chocolate/confectionery, and a higher intake of vegetables and eggs than plausible reporters (PUBMED:35205013).
The underreporting of energy intake can vary depending on the approach used to assess it. For example, in a study with female bariatric surgery candidates, the underreporting percentage varied from 55% to 97% depending on the approach used for generating the cutoff points (PUBMED:24724773). However, in a study with socially vulnerable women, underreporting of energy intake was not associated with the reported consumption of NOVA-classified food groups, indicating that these women did not report a lower intake of any specific group of foods according to NOVA classification (PUBMED:36350182).
In premenopausal women, underreporting patterns did not differ significantly by the level of obesity, and similar patterns were observed in cardiovascular risk factors, nutrient intake, and food group intakes in both normal and overweight women (PUBMED:20368944). In contrast, a study on obese men indicated that they selectively underreported fat intake (PUBMED:10617957).
Overall, these findings suggest that underreporting of energy intake is more common with certain types of food, particularly those perceived as unhealthy or less socially acceptable, rather than being a uniform underestimation across all food items. |
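To make the cutoff logic behind these estimates concrete, the following is a minimal, hypothetical sketch (not code from any of the cited studies) of a Goldberg-type screen for underreporting: reported energy intake (EI) is divided by basal or resting metabolic rate, and subjects whose ratio falls below a chosen cutoff, such as the 1.05 used in the FLVS study (PUBMED:11126348), are flagged as underreporters. The function name, field names, default cutoff, and example values are illustrative assumptions only.

```python
# Minimal sketch of a Goldberg-type underreporting screen (illustrative only).

def flag_underreporters(records, cutoff=1.05):
    """Return records whose EI/BMR ratio falls below `cutoff`.

    Each record is a dict with reported energy intake ('ei_kcal') and
    basal metabolic rate ('bmr_kcal'), both in kcal/day.
    """
    flagged = []
    for r in records:
        ratio = r["ei_kcal"] / r["bmr_kcal"]
        if ratio < cutoff:
            flagged.append({**r, "ei_bmr_ratio": round(ratio, 2)})
    return flagged


if __name__ == "__main__":
    # Hypothetical subjects, not data from the cited studies.
    subjects = [
        {"id": 1, "ei_kcal": 1400, "bmr_kcal": 1500},  # ratio 0.93 -> flagged
        {"id": 2, "ei_kcal": 2300, "bmr_kcal": 1450},  # ratio 1.59 -> plausible
    ]
    print(flag_underreporters(subjects))
```

Varying the cutoff, for example by deriving it from an assumed physical activity level as in the Goldberg approach, is one reason the underreporting prevalences reported above span such a wide range.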
Instruction: Can health insurance improve employee health outcome and reduce cost?
Abstracts:
abstract_id: PUBMED:24202243
Can health insurance improve employee health outcome and reduce cost? An evaluation of Geisinger's employee health and wellness program. Objective: To evaluate the impact of a health plan-driven employee health and wellness program (known as MyHealth Rewards) on health outcomes (stroke and myocardial infarction) and cost of care.
Methods: A cohort of Geisinger Health Plan members who were Geisinger Health System (GHS) employees throughout the study period (2007 to 2011) was compared with a comparison group consisting of Geisinger Health Plan members who were non-GHS employees.
Result: The GHS employee cohort experienced a stroke or myocardial infarction later than the non-GHS comparison group (hazard ratios of 0.73 and 0.56; P < 0.01). There was also a 10% to 13% cost reduction (P < 0.05) during the second and third years of the program. The cumulative return on investment was approximately 1.6.
Conclusion: Health plan-driven employee health and wellness programs similarly designed as MyHealth Rewards can potentially have a desirable impact on employee health and cost.
abstract_id: PUBMED:25514813
Reducing state employee health insurance costs. (1) States and their employees spent $30.7 billion on health insurance premiums for state employees in 2013. (2) State employee health plan cost-sharing arrangements and premiums vary widely by state. (3) Across all sectors, employer-provided health insurance costs doubled from 1992 to 2012.
abstract_id: PUBMED:34902812
Healthcare Cost Reduction and Health Insurance Policy Improvement. Objectives: Reducing healthcare costs is a constant endeavor of all healthcare organizations, governments, policy makers, and individuals. A comparative study of available healthcare policies from the patient's perspective is not available. Furthermore, an analysis of how the various components of these policies affect the healthcare cost of a patient is required.
Methods: Data were collected from 150 hospitalized patients in India regarding their views on 7 healthcare cost categories covering 22 cost components. These are statistically analyzed under 4 commonly used health insurance policies (2 government insurance schemes: ex-servicemen contributory health scheme and employee state insurance; private insurance schemes; and self-financing-ie, no insurance) to assess which healthcare cost component is more important under which policy option.
Results: Under 7 healthcare cost categories, 22 cost components were studied, and out of these 22, 16 were found statistically significant. Results revealed that the treatment of all 16 significant cost components under the 4 health insurance policy options was statistically different.
Conclusions: Patients covered under government sector health insurance policies were found to be less concerned about healthcare costs, whereas those covered under private health insurance policies were found to be more cost-conscious. Access to healthcare or transportation costs to the healthcare facility is a key concern area for self-financed patients.
abstract_id: PUBMED:7108173
Employer-based health insurance. Employer-based health insurance (insurance that is purchased by employers for their employees and financed through employer or joint employer-employee contributions) is currently subsidized in part by the federal government through tax exclusions for employer contributions to employee health insurance plans. This subsidization costs the federal government close to 10 billion dollars a year in lost revenues. Many proposed national health insurance plans assign a key role to employer-based health insurance as a vehicle for financing health care. Federal subsidization of employer-based health insurance and plans that assign employers a key role in the administration of a national health insurance plan both assume that private industry acts to realize federal health policy goals, particularly cost containment, in administering health insurance plans. Little is known, however, about how employers go about selecting the plans they offer their employees or about the incentives and disincentives regarding cost of care that are created by employer-based health insurance. Existing evidence suggests that rather than helping to contain health care costs, employer-based health insurance may be partly responsible for their present escalation. In addition, employer-based health insurance may not be the most equitable way to implement a national health insurance plan.
abstract_id: PUBMED:7356092
Selection of health insurance by an employee group in Northern California. Enrollment trends for a large employee group were analyzed to determine the extent to which consumers chose Blue Cross or Health Maintenance Organization (HMO) health insurance under various premium differentials. Data were collected from employment records of six University of California campuses for the period 1967 to 1978. Enrollment in the Kaiser Foundation Health Plan (an HMO) more than doubled during this period while enrollment in Blue Cross remained relatively stable. This increased preference for Kaiser coverage was associated with a concurrent relative rise in costs to employees of Blue Cross coverage. These data suggest that consumers are sensitive to insurance costs, and that given the opportunity HMOs can compete effectively with traditional health insurance.
abstract_id: PUBMED:17678512
Enhancing employee capacity to prioritize health insurance benefits. Objective: To demonstrate that employees can gain understanding of the financial constraints involved in designing health insurance benefits.
Background: While employees who receive their health insurance through the workplace have much at stake as the cost of health insurance rises, they are not necessarily prepared to constructively participate in prioritizing their health insurance benefits in order to limit cost.
Design: Structured group exercises.
Setting And Participants: Employees of 41 public and private organizations in Northern California.
Intervention: Administration of the CHAT (Choosing Healthplans All Together) exercise in which participants engage in deliberation to design health insurance benefits under financial constraints.
Main Outcome Measures: Change in priorities and attitudes about the need to exercise insurance cost constraints.
Results: Participants (N = 744) became significantly more cognizant of the need to limit insurance benefits for the sake of affordability and capable of prioritizing benefit options. Those agreeing that it is reasonable to limit health insurance coverage given the cost increased from 47% to 72%.
Conclusion: It is both possible and valuable to involve employees in priority setting regarding health insurance benefits through the use of structured decision tools.
abstract_id: PUBMED:14606255
Employee input and health care cost-containment strategies. Health insurance premiums have risen steadily in recent years, and many employers are coping by increasing employee premium contributions. The danger with cost shifting is that a substantial number of employees will refuse offered insurance because of the escalating contribution required of them. The authors surveyed employees regarding what aspects of their insurance benefits they would be willing to give up if their policies were to be substantially trimmed. The responses were varied and influenced by income, education, current contribution to premium, and health status. Interestingly, few employees outside of unions strategize with their employers about how best to structure health insurance benefits to keep them affordable.
abstract_id: PUBMED:11845926
Employee demand for health insurance and employer health plan choices. Although most private health insurance in the US is employment-based, little is known about how employers choose health plans for their employees. In this paper, I examine the relationship between employee preferences for health insurance and the health plans offered by employers. I find evidence that employee characteristics affect the generosity of the health plans offered by employers and the likelihood that employers offer a choice of plans. Although the results suggest that employers do respond to employee preferences in choosing health benefits, the effects of worker characteristics on plan offerings are quantitatively small.
abstract_id: PUBMED:37376911
Legitimacy of Cost Concern for Health Insurance Coverage of Gender-Affirming Surgeries: Comparison of the Interest in Keeping Insurance Cost Down versus the Cost-Effectiveness of Including Gender-Affirming Surgeries in Employer Health Insurance Plans. This RCD discusses the recent development in Lange v Houston County. In this case, the United States District Court for the Middle District of Georgia, Macon Division, found that an Exclusion Policy, prohibiting health insurance coverage of gender-affirming surgery for an employee, Anna Lange, violated Title VII of the Civil Rights Act. On appeal, the Defendants argued that the District Court erred in its decision and relied on the cost burden of gender-affirming surgery as one of their defenses. This RCD highlights that cost is a common defense tactic used by defendants in these cases. However, the author argues that these concerns are unfounded and meritless given the cost-effectiveness of including gender-affirming surgeries in health insurance plans, as highlighted in the RCD.
abstract_id: PUBMED:30907688
Self-Insured Employer Health Benefits Strategy Established a Negative Cost Trend While Improving Performance. This case study describes the collaboration between a self-insured employee benefits team and a national health insurance provider to control costs while maintaining program quality and promoting population health. In 2015, Quest Diagnostics well exceeded the full-year expense target for their ∼60,000-life Group Health Insurance (GHI) program. Through proactive changes, physician executive leadership, health plan collaboration, disease-specific population health initiatives, and plan design, Quest GHI annual employer health care cost trend subsequently improved from a year-over-year trend of 5.7% for 2014 to 2015, to 4.6% for 2015 to 2016, to -1.0% for 2016 to 2017, and most recently, 0.3% for 2017 to 2018. The actuarial value of the GHI plan did not decline, and employee cost share also remained unchanged in 2017 and 2018 versus 2016 for the high-performance network option. There was a 3% premium increase for the Preferred Provider Organization option in 2018. A third-party analysis for full year 2017 showed Quest GHI to be 11% more efficient than the mean GHI for programs with a comparable benefit and employee contribution. Early results in 2018 show improvements in the health status of the health plan membership. This article describes an approach for self-insured employers to proactively collaborate with a health plan and pharmacy benefits manager to practice the Triple Aim of improving the patient health care experience and population health while reducing per capita health care spending.
Answer: Yes, health insurance can improve employee health outcomes and reduce costs. An evaluation of Geisinger's employee health and wellness program, known as MyHealth Rewards, showed that the Geisinger Health System (GHS) employee cohort experienced stroke or myocardial infarction later than the non-GHS comparison group, with hazard ratios of 0.73 and 0.56, respectively. Additionally, there was a 10% to 13% cost reduction during the second and third years of the program, with a cumulative return on investment of approximately 1.6 (PUBMED:24202243). This suggests that health plan-driven employee health and wellness programs can have a desirable impact on employee health and cost.
Moreover, a case study of a collaboration between a self-insured employee benefits team and a national health insurance provider demonstrated that it is possible to control costs while maintaining program quality and promoting population health. Quest Diagnostics' Group Health Insurance program showed an improved annual employer health care cost trend from a year-over-year trend of 5.7% for 2014 to 2015, to -1.0% for 2016 to 2017, without a decline in the actuarial value of the GHI plan or an increase in employee cost share for the high-performance network option (PUBMED:30907688).
These findings indicate that with well-designed health insurance programs and proactive strategies, employers can not only improve the health outcomes of their employees but also achieve cost savings. |
Instruction: Is selective embolization of uterine arteries a safe alternative to hysterectomy in patients with postpartum hemorrhage?
Abstracts:
abstract_id: PUBMED:11418416
Is selective embolization of uterine arteries a safe alternative to hysterectomy in patients with postpartum hemorrhage? Objective: The purpose of this study was to evaluate the efficacy and safety of selective arterial embolization to control severe postpartum hemorrhage.
Materials And Methods: Twenty-five women with intractable postpartum hemorrhage underwent uterine embolization in our institution during a 6-year period.
Results: Angiography revealed arterial extravasation in 13 patients (52%). Sixty-nine arteries were embolized. External bleeding resolved immediately or was markedly decreased in 24 women. In one patient, embolization failed to control the bleeding, and surgical treatment was required. No major complication of embolization therapy was observed. Ten women were followed up for an average of 2 years. Menstruation resumed in all patients, and one woman became pregnant.
Conclusion: Embolization of acute postpartum hemorrhage is a safe and effective alternative to hysterectomy.
abstract_id: PUBMED:29065701
Postpartum hemorrhage from non-uterine arteries: clinical importance of their detection and the results of selective embolization. Background: Identification of the source of postpartum hemorrhage (PPH) is important for embolization because PPH frequently originates from non-uterine arteries. Purpose: To evaluate the clinical importance of identifying the non-uterine arteries causing the PPH and the results of their selective embolization. Material and Methods: This retrospective study enrolled 59 patients who underwent embolization for PPH from June 2009 to July 2016. Angiographic findings and medical records were reviewed to determine whether non-uterine arteries contributed to PPH. Arteries showing extravasation or hypertrophy accompanying uterine hypervascular staining were regarded as sources of the PPH. The results of their embolization were analyzed. Results: Of 59 patients, 19 (32.2%) underwent embolization of non-uterine arteries. These arteries were ovarian (n = 7), vaginal (n = 5), round ligament (n = 5), inferior epigastric (n = 3), cervical (n = 2), internal pudendal (n = 2), vesical (n = 1), and rectal (n = 1) arteries. The embolic materials used included n-butyl cyanoacrylate (n = 9), gelatin sponge particles (n = 8), gelatin sponge particles with microcoils (n = 1), and polyvinyl alcohol particles (n = 1). In 13 patients, bilateral uterine arterial embolization was performed. Re-embolization was performed in two patients with persistent bleeding. Hemostasis was achieved in 17 (89.5%) patients. Two patients underwent immediate hysterectomy due to persistent bleeding. One patient experienced a major complication due to pelvic organ ischemia. One patient underwent delayed hysterectomy for uterine infarction four months later. Conclusion: Non-uterine arteries are major sources of PPH. Detection and selective embolization are important for successful hemostasis.
abstract_id: PUBMED:28756581
Predelivery uterine arteries embolization in patients affected by placental implant anomalies. Purpose: The aim of this study is to report on a single center experience of managing patients affected by placenta previa major and/or accretism by embolizing uterine arteries immediately before the cesarean delivery to reduce blood loss and secondary the rate of hysterectomies.
Materials And Methods: Sixty-nine patients have been prospectively enrolled. Inclusion criteria were radiological diagnosis of placenta anomalies and risk factors for peri/postpartum hemorrhage. The delivery was electively scheduled between the 35th week and the 36th week of pregnancy. The embolization procedure was performed in the gynecological operating room with a mobile C-arm by injecting calibrated microparticles 500-700 μm. A contrast-enhanced MRI was acquired in a subgroup of 10 patients 6 months after the delivery to evaluate the uterine wall status.
Results: Hysterectomy had been performed in 43.5% of patients; 52.2% did not require blood transfusions; a mean of 1.2 blood units per patient had been transfused. The mean fluoroscopy beam-on time was 195 s per patient. The mean uterine dose was 26.75 mGy. No pH anomalies were measured from the umbilical cord blood; the Apgar score at 5 min was ≥8. The analysis of the neuro-developmental milestones showed normal cognitive development in all children at 6 months. The uterine wall enhancement evaluated with contrast-enhanced MRI 6 months after the embolization procedure showed preserved myometrial perfusion without areas of necrosis.
Conclusions: In this series of patients, the predelivery uterine arteries' embolization was a safe and effective procedure; this may represent a technical alternative that interventional radiologists can consider when facing this challenging scenario.
abstract_id: PUBMED:11604185
Abnormal placentation and selective embolization of the uterine arteries. Objective: Abnormal placentation accounts for more than 50% of uterine artery embolization failure. The authors report their experience in this situation.
Study Design: Seven women presented with abnormal placentation. Uterine artery embolization was carried out in emergency or prophylactic control of postpartum bleeding.
Results: In five patients, control of postpartum hemorrhage was obtained without hysterectomy. In two cases with no placental removal and prophylactic procedures, hysterectomy and blood transfusion were not necessary. The manual removal of the placenta was achieved secondarily, respectively on the 25th and the 12th day.
Conclusions: The success rate of uterine artery embolization for postpartum bleeding appears to be lower with abnormal placentation. In none of the cases with the placenta present was it possible to leave the residual placenta in place. However, embolization may permit a safe waiting period and spontaneous migration of the placenta. When the diagnosis is made before delivery, prophylactic uterine artery embolization without placental removal should be considered to reduce blood transfusion and preserve fertility.
abstract_id: PUBMED:10468062
Selective arterial embolization of the uterine arteries in the management of intractable post-partum hemorrhage. Background: To evaluate the efficacy and safety of selective arterial embolization in the management of intractable post-partum hemorrhage.
Methods: Thirty-five consecutive women with severe post-partum hemorrhage (primary, n=25; secondary, n=10) were treated by selective embolization of the uterine arteries. The main cause of immediate post-partum hemorrhage was atonic uterus. Retained placental fragments with endometritis was the main cause of delayed hemorrhage. In all cases, hemostatic embolization was performed because of intractable hemorrhage. Hysterectomy had been performed in two cases before embolization but it had also failed to stop the bleeding.
Results: Angiography revealed extravasation in ten cases, spasm of the internal iliac artery in four cases, false aneurysm in two cases and arteriovenous fistula in one case. After embolization, immediate cessation or dramatic diminution of bleeding was observed in all cases. Two patients required repeated embolization the following day. No major complication related to embolization was found. In one patient with placenta accreta, delayed hysterectomy was necessary. Normal menstruation resumed in all women but two who had hysterectomy. One woman became pregnant after embolization.
Conclusion: Selective emergency arterial embolization is an effective means of controlling severe post-partum hemorrhage. This procedure avoids high risk surgery and maintains reproductive ability.
abstract_id: PUBMED:12932867
Place of embolization of the uterine arteries in the management of post-partum haemorrhage: a study of 12 cases. Objective: To assess the current place of embolization of the uterine arteries in the treatment of severe post-partum haemorrhages.
Materials And Methods: A retrospective study of 13,160 deliveries in a level III maternity unit between January 1996 and December 2001. Five hundred and forty-nine post-partum haemorrhages were diagnosed. Seventeen (0.13%) patients had a haemorrhage which did not respond to treatment using obstetric manoeuvres and uterotonic drugs. Twelve patients aged between 19 and 34 years underwent embolization of the uterine arteries. Nine patients had delivered by Caesarean section and three vaginally. The aetiologies found were uterine atony (n=8), placenta praevia (n=1), placenta accreta (n=1), abruptio placentae (n=1) and uterine myomas (n=1).
Results: The success rate of embolization was 91.6%. One failure, resulting from cardiovascular shock during the procedure, led to the patient being transferred as an emergency to the operating theatre for a haemostasis hysterectomy. It was due to placenta increta. No maternal deaths were reported. No complications because of the technique used were noted. One patient successfully delivered, following a normal pregnancy, one year after embolization.
Conclusion: Embolization of the uterine arteries is indicated in severe post-partum haemorrhage, irrespective of the aetiology or the type of delivery. It should be offered as soon as primary management measures undertaken for haemorrhage are judged as ineffective. Its place in the treatment strategy, is in all cases before embarking on surgery, which is the final recourse in the case of failure. It is a fairly uninvasive procedure, which preserves the potential for future pregnancies.
abstract_id: PUBMED:22342505
Uterine necrosis after arterial embolization for postpartum hemorrhage. Radiologic embolization of the uterine arteries is increasingly used to treat severe postpartum hemorrhage, as an alternative to surgical procedures. Guidelines have been published in order to standardize the indications as well as the technique. An important objective was to limit severe complications such as uterine necrosis. We report a case of uterine necrosis after arterial embolization for severe postpartum hemorrhage due to uterine atony in a uterus with fibroids. This complication occurred despite the use of the recommended technique.
abstract_id: PUBMED:12188066
A case of uterine artery pseudoaneurysms. A uterine artery pseudoaneurysm is a rare but potentially life-threatening cause of haemorrhage and can occur after common gynaecological operations such as a Caesarean section or a hysterectomy. A 33-year-old woman who developed secondary postpartum haemorrhage after a Caesarean section was diagnosed with uterine artery pseudoaneurysms on ultrasound scan. She was treated with bilateral uterine artery embolisation via selective catheterisation of the uterine arteries. A good outcome was obtained, with the aneurysms remaining obliterated. Angiographic embolisation is a safe and effective method of treating postpartum haemorrhage in haemodynamically stable patients and should be an option before resorting to surgery in appropriately selected cases.
abstract_id: PUBMED:17317035
Uterine and ovarian necrosis following embolisation of the uterine arteries for postpartum haemorrhage. We report here the case of a young woman who presented with pain, fever and apparent cutaneous sub-ischaemia following embolisation of the uterine arteries for postpartum haemorrhage. This embolisation was carried out by bilateral selective catheterisation of the uterine arteries using 45 to 150 micron polyvinyl alcohol particles. Investigative laparotomy was decided upon in view of the persistence of the symptoms, and the patient underwent hysterectomy with removal of the right adnexa to treat uterine necrosis and adnexal atrophy. On the basis of the information currently available, this complication appears to be linked to the use of small polyvinyl alcohol particles, whose migration may be responsible for the obliteration of a large number of distal vessels. In line with the recommendations for good practice published in 2004, these embolisation agents must be reserved for exceptional cases in order to minimise the complications of embolisation.
abstract_id: PUBMED:12423843
Pregnancy after uterine artery embolization. Background: Uterine artery embolization is an increasingly popular alternative to hysterectomy and myomectomy as a treatment for uterine leiomyoma. Whether this procedure is safe for women desiring future fertility is controversial.
Cases: A primigravida who had previously undergone uterine artery embolization had premature rupture of membranes at 24 weeks. She had a cesarean delivery at 28 weeks, which was followed by uterine atony requiring hysterectomy. A primigravida who had previously undergone uterine artery embolization delivered appropriately grown dichorionic twins at 36 weeks. An analysis of the 50 published cases of pregnancy after uterine artery embolization revealed the following complications: malpresentation (17%), small for gestational age (7%), premature delivery (28%), cesarean delivery (58%), and postpartum hemorrhage (13%).
Conclusion: Women who become pregnant after uterine artery embolization are at risk for malpresentation, preterm birth, cesarean delivery, and postpartum hemorrhage.
Answer: Selective embolization of uterine arteries is considered a safe and effective alternative to hysterectomy for the management of severe postpartum hemorrhage (PPH). Studies have shown that this procedure can control bleeding while preserving the uterus and maintaining reproductive potential. In a study involving 25 women with intractable postpartum hemorrhage, embolization resolved or markedly decreased external bleeding in 24 patients, with no major complications observed, and menstruation resumed in all patients who were followed up, with one woman becoming pregnant (PUBMED:11418416). Another study reported that non-uterine arteries are significant sources of PPH, and their detection and selective embolization are crucial for successful hemostasis, achieving control in 89.5% of patients (PUBMED:29065701).
Furthermore, predelivery uterine artery embolization in patients with placental implant anomalies has been shown to be safe and effective, reducing blood loss and the rate of hysterectomies (PUBMED:28756581). Even in cases of abnormal placentation, which accounts for more than 50% of uterine artery embolization failure, the procedure has been successful in controlling postpartum hemorrhage without hysterectomy in some cases (PUBMED:11604185).
However, it is important to note that while embolization is generally safe, there are risks associated with the procedure. For instance, there have been reports of uterine necrosis following arterial embolization for PPH (PUBMED:22342505), and complications such as pelvic organ ischemia and uterine infarction have been observed (PUBMED:29065701). Additionally, pregnancies following uterine artery embolization may be at risk for complications such as malpresentation, preterm birth, cesarean delivery, and postpartum hemorrhage (PUBMED:12423843).
In conclusion, selective embolization of uterine arteries is a viable and generally safe alternative to hysterectomy for patients with postpartum hemorrhage, offering the advantage of fertility preservation. However, it is not without risks, and careful patient selection and management are essential to minimize complications. |
Instruction: Evaluation of a predoctoral implant curriculum: does such a program influence graduates' practice patterns?
Abstracts:
abstract_id: PUBMED:12182297
Evaluation of a predoctoral implant curriculum: does such a program influence graduates' practice patterns? Purpose: Didactic predoctoral dental implant education is part of the curriculum in most US dental schools. However, fewer than half offer laboratory instruction, and only a few allow dental students to place and restore dental implants. The additional time necessary for laboratory and clinical experience encroaches on an already crowded curriculum. Is the additional time necessary in the curriculum for laboratory and clinical experience by dental students reflected by the practice patterns of graduates who have completed such a program over the past 10 years?
Materials And Methods: A survey was designed to determine the implant practice patterns of graduates of the Creighton School of Dentistry, Omaha, Nebraska, for the 10-year period 1988 to 1997. These graduates had all participated in a formal undergraduate didactic and laboratory curriculum in implant dentistry. Approximately half also had the opportunity to place and/or restore dental implants while students. The survey was also sent to graduates (also 1988 to 1997) from a midwestern dental school without a formal laboratory or clinical component (used as a control group). The data were analyzed statistically.
Results: In comparison to the control group (56% versus 23%), more than twice as many Creighton graduates restore dental implants as a part of their general practice, surgically place more dental implants, refer more implant patients to surgical specialists, and seek more continuing education hours related to implant dentistry. These conclusions were all supported by statistical analysis of the data.
Discussion: Student clinical experience with implant dentistry appears to significantly increase the incorporation of implant dentistry into future dental practices. Even if clinical experience was not an option, a school curriculum which included both didactic and laboratory participation still significantly increased the number of graduates who included implant dentistry in their practices.
Conclusion: The inclusion of laboratory and clinical experience in implant dentistry in the CUSD undergraduate curriculum resulted in significantly greater participation in implant dentistry at the general practice level.
abstract_id: PUBMED:21642517
Perceptions of predoctoral dental education and practice patterns in special care dentistry. The objective of this research project was to compare alumni perceptions of predoctoral dental education in the care and management of patients with complex needs to alumni practice patterns. Alumni from the University of the Pacific Arthur A. Dugoni School of Dentistry who graduated from 1997 to 2007 were surveyed regarding perceptions of their predoctoral education in the care of patients categorized and defined as medically compromised, frail elders, and developmentally disabled, as well as their practice patterns. Perceptions were rated on a Likert scale. Regression analyses were utilized. Three primary relationships were identified: 1) positive relationships emerged between perceptions of educational value, as students and practitioners, of the training they received compared to percentages of medically compromised patients they currently treat (p≤0.05); 2) after practice experience, 2003-07 graduates reported significantly higher value of their education in this area compared to 1997-2002 graduates; and 3) alumni who reported treating more patients with complex needs during school reported treating significantly more of these patients in practice (p≤0.05). We conclude that alumni who reported educational experiences as more valuable treat more patients with complex needs compared to those who valued them less. Alumni who reported having more opportunities to treat patients with complex needs as students treat a higher percentage of those patients than those reporting fewer. Even positive perceptions may underestimate the value of educational experiences as they relate to future practice.
abstract_id: PUBMED:35881827
Digital Implant Dentistry Predoctoral Program at University of Kentucky. This report describes the predoctoral comprehensive digital implant dentistry program at the University of Kentucky, College of Dentistry (UKCD). UKCD has implemented a digital dentistry workflow in the dental curriculum for predoctoral and graduate programs since 2018. Digital implant dentistry education involves using cone beam computed tomography (CBCT) for diagnosis and treatment planning, intraoral scanner for digital impression, and treatment planning software to plan for single implant-supported restorations and implant-retained mandibular overdenture cases. The laboratory components include virtual designing of a surgical guide and using three-dimensional printing to fabricate a fully guided surgical template for implant placement procedures for the patient. In the last 3 years, including the COVID year, a total of 294 implants have been placed by dental students. Unfortunately, 6 implants failed in the early healing time due to infection, with an overall success rate of 98%. These treatment outcomes are very favorable compared with published literature.
abstract_id: PUBMED:12237801
Implant dentistry in predoctoral education: the elective approach. In response to interest by dental students and patient needs, an elective program in implant dentistry was started at the University of Detroit Mercy School of Dentistry (UDM) in the summer of 1994. The 1-year program is offered to a group of 10 senior students out of a class of 72. Implant treatment is provided to selected edentulous and partially edentulous patients. Predoctoral students participate in diagnosis and treatment planning, assist in surgical placement, and perform the prosthodontic procedures. A survey was sent to 160 UDM graduates, and 90 responded. Out of the 90 respondents, 35% had participated in the elective implant program and 65% had not. A Pearson correlation matrix was used to analyze their responses. A stronger positive correlation with offering and restoring implants was seen in graduates who had completed the elective program in implant dentistry.
abstract_id: PUBMED:17872850
Characteristics and practice patterns of international medical graduates: how different are they from those of Canadian-trained physicians? Objective: To investigate the personal characteristics and practice patterns of international medical graduates (IMGs) practising in southwestern Ontario and to compare them with the personal characteristics and practice patterns of Canadian-trained family physicians practising in the same region.
Design: Cross-sectional analysis of data gathered from a census of family physicians.
Setting: Southwestern Ontario.
Participants: A total of 685 family physicians.
Main Outcome Measures: Characteristics and practice patterns of IMG physicians and Canadian-trained physicians.
Results: Among all family physicians practising in southwestern Ontario, 15.3% were IMGs. The IMGs were more likely than Canadian-trained medical graduates to be older and to have been in practice longer, and less likely to have completed a family medicine residency or to have been involved in undergraduate or postgraduate teaching. The IMGs were more likely to have practised longer in their current locations and to be in solo practice and accepting new patients, but were less likely to be providing maternity and newborn care. They were also more likely than Canadian-trained medical graduates were to be serving in small towns and rural and isolated communities.
Conclusion: The personal and practice characteristics of IMG physicians vary somewhat from those of their Canadian-trained colleagues. Policy efforts aimed at increasing and integrating IMG family physicians into the work force need to recognize these differences. Further research is needed before our results can be generalized to physicians practising beyond southwestern Ontario.
abstract_id: PUBMED:24789837
Advanced predoctoral implant program at UIC: description and qualitative analysis. Dental implant education has increasingly become an integral part of predoctoral dental curricula. However, the majority of implant education emphasizes the restorative aspect as opposed to the surgical. The University of Illinois at Chicago College of Dentistry has developed an Advanced Predoctoral Implant Program (APIP) that provides a select group of students the opportunity to place implants for single-tooth restorations and mandibular overdentures. This article describes the rationale, logistics, experiences, and perspectives of an innovative approach to provide additional learning experiences in the care of patients with partial and complete edentulism using implant-supported therapies. Student and faculty perspectives on the APIP were ascertained via focus group discussions and a student survey. The qualitative analysis of this study suggests that the select predoctoral dental students highly benefited from this experience and intend to increase their knowledge and skills in implant dentistry through formal education following graduation. Furthermore, the survey indicates that the APIP has had a positive influence on the students' interest in surgically placing implants in their future dental practice and their confidence level in restoring and surgically placing implants.
abstract_id: PUBMED:8855604
Lafayette's family practice residency program: practice patterns of graduates. The Lafayette Family Practice Residency Program graduated 25 physicians prior to 1995. This project was undertaken to support our assumption that graduates establish their practices in communities near their residency programs. Further we surveyed the graduates to determine graduate satisfaction and practice characteristics. The vast majority (88%) of these physicians were practicing in Louisiana at the time of this survey. Over half the graduates were practicing in Acadiana. The results suggest that these physicians are indeed satisfied in their careers as family physicians.
abstract_id: PUBMED:27398047
The training paths and practice patterns of Canadian paediatric residency graduates, 2004-2010. Background: The Paediatric Chairs of Canada have been proactive in workforce planning, anticipating paediatric job opportunities in academic centres. To complement this, it is important to characterize the practice profiles of paediatricians exiting training, including those working outside of tertiary care centres.
Objective: To describe the training paths and the practice patterns of Canadian paediatric residency graduates.
Methods: A survey was completed in 2010 to 2011 by Canadian program directors regarding residents completing core paediatrics training between 2004 and 2010. Data collection included training path after completing core paediatrics training and practice type after graduation.
Results: Of 699 residents completing their core training in paediatrics, training path data were available for 685 (98%). Overall, 430 (63%) residents completed subspecialty training while 255 (37%) completed general paediatrics training only. There was a significant increase in subspecialty training, from 59% in earlier graduates (2004 to 2007) to 67% in later graduates (2008 to 2010) (P=0.037). Practice pattern data after completion of training were available for 245 general paediatricians and 205 subspecialists. Sixty-nine percent of general paediatricians were community based while 85% of subspecialists were hospital based in tertiary or quaternary centres. Of all residents currently in practice, only 36 (8%) were working in rural, remote or underserviced areas.
Conclusions: Almost two-thirds of recent Canadian paediatric graduates pursued subspecialty training. There was a significant increase in the frequency of subspecialty training among later-year graduates. Few graduates are practicing in rural or underserviced areas. Further studies are needed to determine whether these trends continue and their impact on the future paediatric workforce in Canada.
abstract_id: PUBMED:32819675
Influence of fellowship educational experience on practice patterns for adrenalectomy: A survey of recent AAES fellowship graduates. Background: Current practice patterns for adrenalectomy among endocrine surgeons is a limited area of study. Here we survey relatively junior endocrine surgeons regarding educational experiences in adrenalectomy and correlate these with current practice.
Methods: An electronic survey was sent to recent AAES-accredited fellowships graduates (2014-2019), querying adrenalectomy volume and approaches during fellowship and current practice patterns.
Results: Most graduates (63.2%) performed >20 adrenalectomies in fellowship. Exposure was greatest to open (94.1%) and laparoscopic transabdominal (92.6%) adrenalectomy, followed by retroperitoneoscopic (86.7%). The majority (73.5%) of respondents stated their current practice patterns are the same as their exposure during training. Preoperative diagnosis, side of lesion, and patient comorbidities were all ranked as significant predictors affecting choice of approach (p < 0.001).
Conclusion: The large majority of AAES fellowship graduates receive high-volume adrenalectomy experience in several approaches. The technique to which a trainee was exposed to most frequently was the greatest predictor for preferential approach in current practice.
abstract_id: PUBMED:16809656
Changing practice patterns of family medicine graduates: a comparison of alumni surveys from 1998 to 2004. Background And Objective: The University of Missouri family medicine residency has 297 family physician graduates. We suspected that the practice patterns of graduates were changing.
Methods: All graduates of the residency were surveyed in 1998, 2001, and 2004, asking about practice patterns. To characterize current practice characteristics and scope, we used the latest survey returned by each respondent. We analyzed data for persons who returned all 3 surveys to examine trends across surveys.
Results: Annual response rates ranged from 58% to 78%. Of graduates who responded to all 3 surveys, fewer graduates care for patients in the hospital (71.3%, 1998; 56.5%, 2004), practice obstetrics (40.7%, 1998; 23.2%, 2004), or provide primary care for their patients in the emergency department (25.9%, 1998; 13.0%, 2004). Fewer recent graduates perform flexible sigmoidoscopy or exercise electrocardiograms. Graduates who are practicing obstetrics are more likely to be rural or to have graduated since 1994. Those performing flexible sigmoidoscopy are more likely to be male or to have graduated before 1994. The perceived need for more training in practice management is higher for more recent graduates (14.9% for 1975 to 1983 graduates; 31.9% for 1994 to 2003 graduates).
Conclusions: Across the 3 surveys, there was a decline in the proportion of graduates of this family medicine residency program performing procedures, obstetrics, intensive care unit care, or hospital medicine. This study highlights how the practices of family medicine residency graduates may change over time. Data regarding residency graduate practice profiles may help predict the knowledge and skills residency graduates will need in their future practices and evaluate the impact of the Future of Family Medicine recommendations.
Answer: The evaluation of predoctoral implant curricula suggests that such programs do influence graduates' practice patterns. Graduates from the Creighton School of Dentistry, who participated in a formal undergraduate didactic and laboratory curriculum in implant dentistry, were found to restore dental implants more frequently, surgically place more dental implants, refer more implant patients to surgical specialists, and seek more continuing education hours related to implant dentistry compared to graduates from a control group without formal laboratory or clinical components in their education (PUBMED:12182297). This indicates that student clinical experience with implant dentistry significantly increases the incorporation of implant dentistry into future dental practices.
Similarly, the University of Detroit Mercy School of Dentistry found that graduates who had completed an elective program in implant dentistry showed a stronger positive correlation with offering and restoring implants in their practices (PUBMED:12237801). The University of Illinois at Chicago College of Dentistry's Advanced Predoctoral Implant Program (APIP) also had a positive influence on students' interest in surgically placing implants in their future dental practice and their confidence level in restoring and surgically placing implants (PUBMED:24789837).
Moreover, the University of Kentucky College of Dentistry implemented a comprehensive digital implant dentistry program in their predoctoral curriculum, which resulted in a high success rate of implant placements by dental students, suggesting favorable treatment outcomes and potential influence on future practice patterns (PUBMED:35881827).
These findings collectively suggest that predoctoral implant education, particularly when it includes hands-on clinical experience, positively influences graduates' practice patterns by increasing their likelihood to incorporate implant dentistry into their general practice.
Instruction: Do pacifiers reduce the risk of sudden infant death syndrome?
Abstracts:
abstract_id: PUBMED:16216900
Do pacifiers reduce the risk of sudden infant death syndrome? A meta-analysis. Objective: Pacifier use has been reported to be associated with a reduced risk of sudden infant death syndrome (SIDS), but most countries around the world, including the United States, have been reluctant to recommend the use of pacifiers because of concerns about possible adverse effects. This meta-analysis was undertaken to quantify and evaluate the protective effect of pacifiers against SIDS and to make a recommendation on the use of pacifiers to prevent SIDS.
Methods: We searched the Medline database (January 1966 to May 2004) to collect data on pacifier use and its association with SIDS, morbidity, or other adverse effects. The search strategy included published articles in English with the Medical Subject Headings terms "sudden infant death syndrome" and "pacifier" and the keywords "dummy" and "soother." Combining searches resulted in 384 abstracts, which were all read and evaluated for inclusion. For the meta-analysis, articles with data on the relationship between pacifier use and SIDS risk were limited to published original case-control studies, because no prospective observational reports were found; 9 articles met these criteria. Two independent reviewers evaluated each study on the basis of the 6 criteria developed by the American Academy of Pediatrics Task Force on Infant Positioning and SIDS; in cases of disagreement, a third reviewer evaluated the study, and a consensus opinion was reached. We developed a script to calculate the summary odds ratio (SOR) by using the reported ORs and respective confidence intervals (CI) to weight the ORs. We then pooled them together to compute the SOR. We performed the Breslow-Day test for homogeneity of ORs, Cochran-Mantel-Haenszel test for the null hypothesis of no effect (OR = 1), and the Mantel-Haenszel common OR estimate. The consistency of findings was evaluated and the overall potential benefits of pacifier use were weighed against the potential risks. Our recommendation is based on the taxonomy of the 5-point (A-E) scale adopted by the US Preventive Services Task Force.
Results: Seven studies were included in the meta-analysis. The SOR calculated for usual pacifier use (with univariate ORs) is 0.90 (95% confidence interval [CI]: 0.79-1.03) and 0.71 (95% CI: 0.59-0.85) with multivariate ORs. For pacifier use during last sleep, the SORs calculated using univariate and multivariate ORs are 0.47 (95% CI: 0.40-0.55) and 0.39 (95% CI: 0.31-0.50), respectively.
Conclusions: Published case-control studies demonstrate a significant reduced risk of SIDS with pacifier use, particularly when placed for sleep. Encouraging pacifier use is likely to be beneficial on a population-wide basis: 1 SIDS death could be prevented for every 2733 (95% CI: 2416-3334) infants who use a pacifier when placed for sleep (number needed to treat), based on the US SIDS rate and the last-sleep multivariate SOR resulting from this analysis. Therefore, we recommend that pacifiers be offered to infants as a potential method to reduce the risk of SIDS. The pacifier should be offered to the infant when being placed for all sleep episodes, including daytime naps and nighttime sleeps. This is a US Preventive Services Task Force level B strength of recommendation based on the consistency of findings and the likelihood that the beneficial effects will outweigh any potential negative effects. In consideration of potential adverse effects, we recommend pacifier use for infants up to 1 year of age, which includes the peak ages for SIDS risk and the period in which the infant's need for sucking is highest. For breastfed infants, pacifiers should be introduced after breastfeeding has been well established.
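As an illustrative aside: the Methods above describe a script that pools the reported odds ratios, weighting each by its confidence interval, into a summary OR. The sketch below shows one conventional way to do this, fixed-effect inverse-variance pooling on the log-odds scale; the study-level values are placeholders for illustration, not the actual estimates from the included case-control studies, and the original authors' exact weighting scheme may differ.

```python
import math

# Placeholder study-level odds ratios and 95% CIs (illustrative values only;
# not the actual estimates pooled in the meta-analysis).
studies = [
    {"or": 0.30, "ci": (0.17, 0.52)},
    {"or": 0.44, "ci": (0.29, 0.68)},
    {"or": 0.41, "ci": (0.22, 0.77)},
]

def pooled_or(studies):
    """Fixed-effect (inverse-variance) pooling of odds ratios on the log scale."""
    num = den = 0.0
    for s in studies:
        log_or = math.log(s["or"])
        lo, hi = s["ci"]
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
        w = 1.0 / se ** 2                                # inverse-variance weight
        num += w * log_or
        den += w
    log_sor = num / den
    se_sor = math.sqrt(1.0 / den)
    return (math.exp(log_sor),
            math.exp(log_sor - 1.96 * se_sor),
            math.exp(log_sor + 1.96 * se_sor))

sor, lo, hi = pooled_or(studies)
print(f"Summary OR = {sor:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Studies with narrower confidence intervals receive larger weights under this scheme, which is why a single large case-control study can dominate the summary estimate.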
abstract_id: PUBMED:16651334
Should pacifiers be recommended to prevent sudden infant death syndrome? Objectives: Our aim was to review the evidence for a reduction in the risk of sudden infant death syndrome (SIDS) with pacifier ("dummy" or "soother") use, to discuss possible mechanisms for the reduction in SIDS risk, and to review other possible health effects of pacifiers.
Results: There is a remarkably consistent reduction of SIDS with pacifier use. The mechanism by which pacifiers might reduce the risk of SIDS is unknown, but several mechanisms have been postulated. Pacifiers might reduce breastfeeding duration, but the studies are conflicting.
Conclusions: It seems appropriate to stop discouraging the use of pacifiers. Whether it is appropriate to recommend pacifier use in infants is open to debate.
abstract_id: PUBMED:19405412
Risks and benefits of pacifiers. Physicians are often asked for guidance about pacifier use in children, especially regarding the benefits and risks, and when to appropriately wean a child. The benefits of pacifier use include analgesic effects, shorter hospital stays for preterm infants, and a reduction in the risk of sudden infant death syndrome. Pacifiers have been studied and recommended for pain relief in newborns and infants undergoing common, minor procedures in the emergency department (e.g., heel sticks, immunizations, venipuncture). The American Academy of Pediatrics recommends that parents consider offering pacifiers to infants one month and older at the onset of sleep to reduce the risk of sudden infant death syndrome. Potential complications of pacifier use, particularly with prolonged use, include a negative effect on breastfeeding, dental malocclusion, and otitis media. Adverse dental effects can be evident after two years of age, but mainly after four years. The American Academy of Family Physicians recommends that mothers be educated about pacifier use in the immediate postpartum period to avoid difficulties with breastfeeding. The American Academy of Pediatrics and the American Academy of Family Physicians recommend weaning children from pacifiers in the second six months of life to prevent otitis media. Pacifier use should not be actively discouraged and may be especially beneficial in the first six months of life.
abstract_id: PUBMED:17956375
Pacifiers: an update on use and misuse. Purpose: The use of pacifiers is a controversial topic; this article looks at the subject from both a historical and cultural perspective, with a review of current research.
Conclusions: The use of pacifiers in infants older than 1 month is currently recommended by multiple researchers to prevent sudden infant death syndrome, and is associated with other benefits for premature infants. However, pacifier use has also been associated with higher risk of otitis media.
Practice Implications: Knowledge of the most recent evidence will enable providers to communicate appropriate guidelines on pacifier use to families.
abstract_id: PUBMED:28378502
Infant pacifiers for reduction in risk of sudden infant death syndrome. Background: Sudden infant death syndrome (SIDS) has been most recently defined as the sudden unexpected death of an infant less than one year of age, with onset of the fatal episode apparently occurring during sleep, that remains unexplained after a thorough investigation, including the performance of a complete autopsy and a review of the circumstances of death and clinical history. Despite the success of several prevention campaigns, SIDS remains a leading cause of infant mortality. In 1994, a 'triple risk model' for SIDS was proposed that described SIDS as an event that results from the intersection of three factors: a vulnerable infant; a critical development period in homeostatic control (age related); and an exogenous stressor. The association between pacifier (dummy) use and reduced incidence of SIDS has been shown in epidemiological studies since the early 1990s. Pacifier use, given its low cost, might be a cost-effective intervention for SIDS prevention if it is confirmed effective in randomised controlled trials.
Objectives: To determine whether the use of pacifiers during sleep versus no pacifier during sleep reduces the risk of SIDS.
Search Methods: We used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL 2016, Issue 2), MEDLINE via PubMed, Embase, and CINAHL to 16 March 2016. We also searched clinical trials databases, conference proceedings, and the reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials.
Selection Criteria: Published and unpublished controlled trials using random and quasi-random allocations of infants born at term and preterm (less than 37 weeks' gestation) or with low birth weight (< 2500 g). Infants must have been randomised by one month's postmenstrual age. We planned to include studies reported only as abstracts, and cluster and cross-over randomised trials.
Data Collection And Analysis: Two review authors independently reviewed studies from searches. We found no eligible studies.
Main Results: We identified no randomised controlled trials examining infant pacifiers for reduction in risk of SIDS.
Authors' Conclusions: We found no randomised control trial evidence on which to support or refute the use of pacifiers for the prevention of SIDS.
abstract_id: PUBMED:17081147
Pacifiers: a microbial reservoir. The permanent contact between the nipple part of pacifiers and the oral microflora offers ideal conditions for the development of biofilms. This study assessed the microbial contamination on the surface of 25 used pacifier nipples provided by day-care centers. Nine were made of silicone and 16 were made of latex. The biofilm was quantified using direct staining and microscopic observations followed by scraping and microorganism counting. The presence of a biofilm was confirmed on 80% of the pacifier nipples studied. This biofilm was mature for 36% of them. Latex pacifier nipples were more contaminated than silicone ones. The two main genera isolated were Staphylococcus and Candida. Our results confirm that nipples can be seen as potential reservoirs of infections. However, pacifiers do have some advantages; in particular, the potential protection they afford against sudden infant death syndrome. Strict rules of hygiene and an efficient antibiofilm cleaning protocol should be established to answer the worries of parents concerning the safety of pacifiers.
abstract_id: PUBMED:17883819
Safe sleep: can pacifiers reduce SIDS risk? N/A
abstract_id: PUBMED:28449646
When is the use of pacifiers justifiable in the baby-friendly hospital initiative context? A clinician's guide. Background: The use of pacifiers is an ancient practice, but often becomes a point of debate when parents and professionals aim to protect and promote breastfeeding as most appropriately for nurturing infants. We discuss the current literature available on pacifier use to enable critical decision-making regarding justifiable use of pacifiers, especially in the Baby-Friendly Hospital Initiative context, and we provide practical guidelines for clinicians.
Discussion: Suck-swallow-breathe coordination is an important skill that every newborn must acquire for feeding success. In most cases the development and maintenance of the sucking reflex is not a problem, but sometimes the skill may be compromised due to factors such as mother-infant separation or medical conditions. In such situations the use of pacifiers can be considered therapeutic and even provide medical benefits to infants, including reducing the risk of sudden infant death syndrome. The argument opposing pacifier use, however, is based on potential risks such as nipple confusion and early cessation of breastfeeding. The Ten Steps to Successful Breastfeeding as embedded in the Baby-Friendly Hospital Initiative initially prohibited the use of pacifiers in a breastfeeding friendly environment to prevent potential associated risks. This article provides a summary of the evidence on the benefits of non-nutritive sucking, risks associated with pacifier use, an identification of the implications regarded as 'justifiable' in the clinical use of pacifiers and a comprehensive discussion to support the recommendations for safe pacifier use in healthy, full-term, and ill and preterm infants. The use of pacifiers is justifiable in certain situations and will support breastfeeding rather than interfere with it. Justifiable conditions have been identified as: low-birth weight and premature infants; infants at risk for hypoglyceamia; infants in need of oral stimulation to develop, maintain and mature the sucking reflex in preterm infants; and the achievement of neurobehavioural organisation. Medical benefits associated with the use of pacifiers include providing comfort, contributing towards neurobehavioural organisation, and reducing the risk of sudden infant death syndrome. Guidelines are presented for assessing and guiding safe pacifier use, for specific design to ensure safety, and for cessation of use to ensure normal childhood development.
abstract_id: PUBMED:33951169
Evaluation of Effect of Orthodontic Pacifiers in Prevention of Sudden Infant Death Syndrome: A Finite Element Method and Questionnaire Based Study. Introduction: Considering the morbidity associated with Sudden Infant Death Syndrome (SIDS) and limitations of absence of such syndrome in animals, a retrospective survey based human study and prospective Finite Element Method (FEM) study was planned to evaluate the effect of orthodontic pacifier in prevention of SIDS.
Study Design: Two groups, Group I (case) consisting of 48 people, who had lost their infant due to SIDS in past, and Group II (control) consisting of 200 participants with infant in the family, were established. The study was conducted in two parts. An online questionnaire-based survey consisting of 20 multiple choice questions was conducted to establish the correlation of pacifiers in families affected with SIDS. Thereafter, FEM evaluation was carried out in two age groups (up to six months, and between seven to 12 months) with two different pacifiers i.e. conventional and orthodontic, and one human nipples.
Results: 12 participants from case group and 170 in control group gave history of using pacifier for their infants between 2 to 6 months. The frequency and duration of use of pacifiers in case group generally increased while infant cried as high as 66 percent in frequency and 75 percent in duration in comparison to 90 percent in control group. FEM analysis showed significant stresses incurred with conventional pacifiers in relation to oral cavity and tongue. Orthodontic pacifiers exhibited human nipple like effect with more pronounced effects on posterior oral cavity and lesser strain on soft and hard tissues.
Conclusion: Promising results obtained with survey and positive correlation of FEM data with orthodontic pacifiers indicates the superiority and advantages of orthodontic pacifiers in prevention of SIDS.
abstract_id: PUBMED:20822076
Do pacifiers reduce the risk of sudden infant death syndrome? N/A
Answer: Yes, pacifiers have been reported to reduce the risk of sudden infant death syndrome (SIDS). A meta-analysis that included seven studies found a significant reduction in the risk of SIDS with pacifier use, particularly when the pacifier was used during sleep. The summary odds ratio (SOR) for pacifier use during the last sleep was 0.39 (95% CI: 0.31-0.50) with multivariate odds ratios, indicating a strong protective effect. The authors recommended offering pacifiers to infants as a potential method to reduce the risk of SIDS, with a level B strength of recommendation from the US Preventive Services Task Force (PUBMED:16216900).
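To make the number-needed-to-treat arithmetic concrete, the sketch below derives an NNT from a baseline event rate and a protective odds ratio. The baseline SIDS rate used is an assumption for illustration; the abstract does not restate the exact US rate it applied, so the output only approximates the published figure of 2733.

```python
# Assumed baseline rate and the multivariate last-sleep summary OR from the meta-analysis.
baseline_sids_rate = 0.00057   # illustrative: ~0.57 SIDS deaths per 1,000 live births
or_last_sleep = 0.39

# For a rare outcome the odds ratio approximates the relative risk.
risk_with_pacifier = baseline_sids_rate * or_last_sleep
absolute_risk_reduction = baseline_sids_rate - risk_with_pacifier
nnt = 1.0 / absolute_risk_reduction
print(f"NNT ~ {nnt:.0f} infants offered a pacifier at sleep per SIDS death prevented")
```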
The consistent reduction of SIDS with pacifier use has been noted in the literature, although the exact mechanism by which pacifiers might reduce the risk is unknown. Despite concerns about possible adverse effects such as reduced breastfeeding duration, the evidence suggests that it may be appropriate to stop discouraging the use of pacifiers (PUBMED:16651334).
The American Academy of Pediatrics recommends considering offering pacifiers to infants one month and older at the onset of sleep to reduce the risk of SIDS. However, potential complications of pacifier use, such as negative effects on breastfeeding, dental malocclusion, and otitis media, should be considered, especially with prolonged use (PUBMED:19405412).
While no randomised controlled trials have been identified to support or refute the use of pacifiers for the prevention of SIDS, the association between pacifier use and reduced incidence of SIDS has been shown in epidemiological studies (PUBMED:28378502).
In summary, the use of pacifiers is associated with a reduced risk of SIDS, particularly when used during sleep, and is recommended by various health organizations as a preventive measure, taking into account the appropriate timing and potential risks associated with pacifier use (PUBMED:16216900; PUBMED:16651334; PUBMED:19405412; PUBMED:28378502).
Instruction: Does the Sabra hypertension-prone rat represent a model of salt or mineralocorticoid sensitivity?
Abstracts:
abstract_id: PUBMED:9814619
Does the Sabra hypertension-prone rat represent a model of salt or mineralocorticoid sensitivity? Objectives: Since the Sabra experimental model of hypertension was developed, it has been known as a model of salt-susceptible hypertension. Because the hypertensive response of the Sabra hypertension-prone strain (SBH/y) is classically elicited by salt loading with a combination of deoxycorticosterone acetate (DOCA) and salt, doubt has now been cast on whether the hypertensive response is due to sensitivity to salt or to mineralocorticoids. The present study was designed to resolve this question.
Materials And Methods: We studied the blood pressure response of SBH/y to various modes of salt loading. Animals were salt-loaded by administration of: 1% NaCl in drinking water and subcutaneous implantation of a 25 mg DOCA pellet (DOCA-salt); DOCA alone; 1% NaCl in drinking water alone; or 8% NaCl in chow alone. Blood pressure was determined by the tail-cuff method in awake and undisturbed animals.
Results: Within 4 weeks, the DOCA-salt treatment elicited the full hypertensive response previously reported in the SBH/y strain. Salt loading with 8% NaCl in chow reproduced the full hypertensive response observed with DOCA-salt, except that it occurred only after 7 weeks of treatment. Salt loading with DOCA alone raised blood pressure moderately and to a maximal level within 3 weeks; the magnitude of the blood pressure response was, however, significantly smaller than that observed with DOCA-salt or 8% NaCl in chow. Administration of 1% NaCl in water alone elicited no hypertensive response.
Conclusions: The hypertensive response to salt loading in the Sabra experimental model of hypertension is an expression primarily of salt sensitivity, as it can be fully reproduced with salt alone, but not with DOCA alone. The use of the DOCA-salt mode of salt loading in this model, as opposed to salt loading with 8% salt in chow, is a useful way of accelerating the development of salt-sensitive hypertension in SBH/y, which shortens, and therefore facilitates, phenotyping.
abstract_id: PUBMED:9607181
Genetic basis of salt-susceptibility in the Sabra rat model of hypertension. The Sabra salt-sensitive SBH/y and salt-resistant SBN/y rats constitute a unique experimental model of hypertension in which salt-susceptibility is genetically determined and expressed only after salt-loading, without the development of spontaneous hypertension. To determine the genetic basis of salt-susceptibility in the Sabra rats, the candidate gene and total genome screen approaches were adopted. The likely candidate genes in this model incorporate salt-related physiological mechanisms such as the nitric oxide system, the arginine vasopressin axis and the epithelial sodium channel. In the random genome search scheme for culprit genes, SBH/y and SBN/y were cross-bred. A highly unusual and composite mode of transmission of salt-susceptibility was found in this cross, emphasizing the complexity of the genetic basis of salt-susceptibility. Linkage analysis of the entire rat genome with a large number of widely distributed microsatellite markers identified three putative gene loci on chromosomes 1 and 17 that contribute importantly to salt-sensitivity and/or resistance, and uncovered sex specificity in the role that salt-susceptibility genes fulfill in the development of hypertension.
abstract_id: PUBMED:6120496
The Sabra hypertension prone (H) and hypertension resistant (N) rat strain. By selective inbreeding of the Hebrew University Sabra rat, we have obtained a hypertension prone (H) and a hypertension resistant (N) substrain. The criteria for selection was the blood pressure response to DOCA-salt. The outstanding element of our model is the N rat with its remarkable resistance to hypertension. When compared to H, the N rat presents the following characteristics: 1. The blood pressure of experimentally naive N rats is significantly lower at comparable ages, in both sexes. 2. N rats are resistant to both DOCA-salt and renal clip hypertension. 3. In the medulla oblongata (MO) of N rats, the noradrenaline (NA) content is significantly higher and the activity of tyrosine hydroxylase is significantly lower. 4. In the MO of N rats, the sensitivity of the NA dependent cAMP generating system is significantly decreased. 5. In the atrium of N rats, the NA content is significantly higher, and is unaffected by DOCA-salt treatment. The results suggest that genetic differences in catecholamine metabolism may account for the disparate susceptibility to hypertension of the two strains.
abstract_id: PUBMED:9931114
Role of chromosome X in the Sabra rat model of salt-sensitive hypertension. We carried out a total genome screen in the Sabra rat model of hypertension to detect salt-susceptibility genes. We previously reported in male animals the presence of 2 major quantitative trait loci (QTLs) on chromosome 1 that together accounted for most of the difference in the blood pressure (BP) response to salt loading between Sabra hypertension-prone rats (SBH/y) and Sabra hypertension-resistant rats (SBN/y). In females, we reported on 2 major QTLs on chromosomes 1 and 17 that together accounted for only two thirds of the difference in the BP response between the strains. On the basis of phenotypic patterns of inheritance in reciprocal F2 crosses, we proposed a role of the X chromosome. We therefore continued the search for the missing QTL in females that would account for the remaining difference in the BP response between the 2 strains using newly developed microsatellite markers and focusing on chromosome X. We screened an F2 cross, consisting of 371 females and 336 males, using 19 polymorphic chromosome X microsatellite markers. We analyzed the averages of BP by genotype using ANOVA and the individual data using MAPMAKER/QTL. In female F2 progeny, we identified a segment on chromosome X that spans over 33.4 cM and shows significant cosegregation (P<0.001) of 14 microsatellite markers (demarcated by DXRat4 and DXMgh10) with systolic BP after salt loading. This segment has 2 apparent peaks at DXRat4 and DXRat13, with a BP effect of 14 mm Hg for each. Multipoint linkage analysis with a free model detected 3 peaks (logarithm of the odds ratio [LOD] score >4.3) within the same chromosomal segment: One between DXMgh9 and DXMit4 (LOD 4.9; 6.1% of variance), a second between DXMgh12 and DXRat8 (LOD 5.2; 7.2% of variance), and a third between DXRat2 and DXRat4 (LOD 5.8; 7.5% of variance). On the basis of these findings and until congenic strains become available, our working assumption is that within chromosome X, 1 to 3 genetic loci contribute importantly to the BP response of female Sabra rats to salt. In male F2 progeny, we detected no significant cosegregation of any region on chromosome X with the BP response to salt loading. We conclude that in the female rat, salt susceptibility is mediated by 3 to 5 gene loci on chromosomes 1, 17, and X, whereas in the male rat, the X chromosome does not affect the BP response to salt.
abstract_id: PUBMED:24111570
Aberrant Rac1-mineralocorticoid receptor pathways in salt-sensitive hypertension. According to Guyton's model, impaired renal sodium excretion plays a key role in the increased salt sensitivity of blood pressure (BP). Several factors contribute to impaired renal sodium excretion, including the sympathetic nervous system, the renin-angiotensin system and aldosterone. Accumulating evidence suggests that abnormalities in aldosterone and its receptor (i.e. the mineralocorticoid receptor (MR)) are involved in the development of salt-sensitive (SS) hypertension. Patients with metabolic syndrome often exhibit hyperaldosteronism and are susceptible to SS hypertension. Aldosterone secretion from the adrenal glands is not suppressed in obese hypertensive rats fed a high-salt diet because of the abundant production of adipocyte-derived aldosterone-releasing factors, which are independent of the negative feedback regulation of aldosterone secretion by the renin-angiotensin-aldosterone system. Increased plasma aldosterone levels lead to SS hypertension via MR activation in the kidney. Renal MR activity is increased in Dahl salt-sensitive rats fed a high-salt diet, despite the appropriate suppression of plasma aldosterone levels. In this rat strain, activation of MR in the distal nephron causes salt-induced hypertension. This paradoxical response of the MR to salt loading can be attributed to activation of Rac1, a small GTPase. In the presence of aldosterone, activated Rac1 synergistically and directly activates MR in a ligand-independent manner. Thus, Rac1 activation in the kidney determines the salt sensitivity of BP. Together, the available evidence suggests that the aberrant Rac1-MR pathway plays a key role in the development of SS hypertension.
abstract_id: PUBMED:9449402
Salt susceptibility maps to chromosomes 1 and 17 with sex specificity in the Sabra rat model of hypertension. Random genome screening was initiated in the Sabra rat model of hypertension in search of genes that account for salt sensitivity or salt resistance in terms of the development of hypertension. Female salt-sensitive Sabra hypertension-prone (SBH/y) rats were crossed with male salt-resistant Sabra hypertension-resistant (SBN/y) rats, resulting in an F2 cohort consisting of 100 males and 132 females. Systolic blood pressure (BP) was measured in rats at 6 weeks of age under basal conditions and after 4 weeks of salt loading. Genotypes for 24 polymorphic microsatellite markers localized to chromosome 1 and for 8 markers localized to chromosome 17 were determined in F2 and cosegregation with BP was evaluated by ANOVA and multipoint linkage analysis. Basal BP did not cosegregate with any locus on chromosomes 1 or 17. In contrast, BP after salt loading showed significant cosegregation with three QTLs, two on chromosome 1 and one on chromosome 17, designated SS1a, SS1b, and SS17, respectively; the maximal logarithm of the odds (LOD) scores were 4.71, 4.91, and 3.43, respectively. Further analysis revealed sexual dimorphism. In male F2, BP response to salt loading cosegregated with one QTL (LOD score 4.52) and a second QTL (LOD score 2.98), both on chromosome 1 and coinciding with SS1a and SS1b, respectively. In female rats, BP response cosegregated with one QTL on chromosome 1 (LOD score 3.08) coinciding with SS1b, and with a second QTL on chromosome 17 (LOD score 3.66) coinciding with SS17. In males, the additive effects of the two QTLs on chromosome 1 accounted for most of the BP variance to salt loading, whereas in females the additive effects of the QTLs on chromosomes 1 and 17 accounted for over two thirds of the variance. These results identify three putative gene loci on chromosomes 1 and 17 that contribute importantly to salt sensitivity and/or resistance and uncover sex specificity in the role that salt susceptibility genes fulfill in the development of hypertension.
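For readers unfamiliar with how cosegregation of a marker with blood pressure is quantified, the following sketch runs a single-marker regression on simulated F2 data and converts the fit to an approximate LOD score. The data and effect size are invented for illustration; the study itself used ANOVA and multipoint linkage analysis rather than this simplified single-point calculation.

```python
import numpy as np

# Simulated F2 cross: genotype at one marker coded as 0, 1 or 2 copies of the
# SBH/y allele (1:2:1 segregation), with an assumed additive effect on post-salt
# systolic blood pressure. These are not the Sabra data.
rng = np.random.default_rng(0)
n = 300
genotype = rng.binomial(2, 0.5, size=n)            # Mendelian 1:2:1 genotype ratios
bp = 140 + 7.0 * genotype + rng.normal(0, 15, n)   # ~7 mmHg per allele, plus noise

X = np.column_stack([np.ones(n), genotype])
beta, rss1, *_ = np.linalg.lstsq(X, bp, rcond=None)
rss0 = np.sum((bp - bp.mean()) ** 2)               # null model: no marker effect
r2 = 1 - rss1[0] / rss0
lod = (n / 2) * np.log10(rss0 / rss1[0])           # single-marker regression LOD

print(f"Additive effect ~ {beta[1]:.1f} mmHg/allele, R^2 = {r2:.3f}, LOD = {lod:.1f}")
```

Because a LOD score is a base-10 logarithm of an odds ratio for linkage, the LOD values of roughly 3 to 5 reported above correspond to odds of about 1,000:1 to 100,000:1 in favor of linkage over no linkage.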
abstract_id: PUBMED:8156736
Platelet membrane microviscosity in Sabra rats with early salt hypertension. 1. To investigate the possibility that arterial hypertension is associated with changes in the physicochemical properties of cell membranes, we have studied the effects of dietary salt loading on platelet membrane microviscosity in hypertension-prone and -resistant Sabra rats. 2. Sixteen hypertension-prone and 14 hypertension-resistant Sabra rats were submitted to either a low-salt (0.25% NaCl) or a high-salt (4% NaCl) diet for 3-4 weeks. Platelet membrane anisotropy was determined, in the presence and absence of extracellular Na+, using two fluorescent probes, diphenylhexatriene and trimethylamino-diphenylhexatriene, inserted in different areas of the cell membranes. 3. A decrease in diphenylhexatriene anisotropy was demonstrated when platelets of hypertension-prone (but not hypertension-resistant) Sabra rats were suspended in a Na(+)-free medium. This alteration in membrane dynamic properties is localized within the hydrophobic core of the platelet membranes and is independent of salt intake. It reflects an abnormal fluidizing effect of extracellular Na+ removal. 4. Platelets of hypertension-prone and hypertension-resistant Sabra rats did not differ significantly in trimethylamino-diphenylhexatriene fluorescence anisotropy, irrespective of the incubation media used. Extracellular Na+ removal caused an increase in trimethylamino-diphenylhexatriene fluorescence anisotropy in all groups, the change being greatest in salt-loaded rats. 5. This study indicates that platelet membrane microviscosity is specifically altered in the hypertension-prone Sabra rat irrespective of salt intake. This raises the question of the relation of this inherited defect with the susceptibility of this strain to dietary salt loading.
abstract_id: PUBMED:8906515
Development, genotype and phenotype of a new colony of the Sabra hypertension prone (SBH/y) and resistant (SBN/y) rat model of salt sensitivity and resistance. Objectives: Variations in the blood pressure response to salt-loading, the lack of quality control measures, and the need to prepare the strains for genetic studies led to renewed secondary inbreeding of the original colony of Sabra hypertension prone (SBH) and resistant (SBN) rats in order to regain genotypic and phenotypic homogeneity of the substrains.
Methods: Animals from the original breeding colony were selectively inbred for basal normotension and for susceptibility or resistance to the development of hypertension following salt-loading with deoxycorticosterone acetate (DOCA)-salt. Efficacy of inbreeding was tested by genome screening with 416 microsatellite primer sets. Phenotyping was based on measurements of systolic blood pressure by the tail-cuff methodology in awake, undisturbed animals maintained on standard diet and after salt-loading with DOCA-salt. Telemetric measurements of blood pressure were performed in a small number of animals to validate tail-cuff measurements.
Results: Animals from the new colony were designated SBH/y and SBN/y to differentiate from the original colony. Fourteen generations have been inbred over the past 4 years. Of the 402 microsatellites that amplified, 183 (45.5%) were polymorphic between the two substrains, and not a single locus was found to be heterozygous in either substrain. Phenotypic characteristics are provided for SBH/ y and SBN/y rats with respect to tail-cuff systolic blood pressure. The values obtained, which were validated by telemetry, demonstrate classical features of salt sensitivity or resistance, respectively.
Conclusions: The genetic homogeneity found in SBH/y and SBN/y, the phenotype demonstrating salt-sensitivity or salt-resistance in terms of development of hypertension, and the relatively high frequency of informative genetic markers identify this Sabra rat model as highly suited for studies concerning the molecular genetics of gene-environment interactions affecting blood pressure regulation.
abstract_id: PUBMED:12045297
Proteinuria and glomerulosclerosis in the Sabra genetic rat model of salt susceptibility. In search of an experimental model that would simulate the association between proteinuria and salt sensitivity in humans, we studied protein excretion in the Sabra rat model of salt susceptibility. Monthly measurements of urinary protein excretion in animals fed standard rat chow revealed that normotensive salt-sensitive SBH/y developed proteinuria that averaged 65 +/- 7 mg/day (n = 10) at 9 mo, whereas proteinuria in normotensive salt-resistant SBN/y was 39 +/- 4 mg/day (n = 10) (P < 0.01). Histopathological evaluation revealed focal and segmental glomerulosclerosis (FSGS) lesions grade 2 in SBH/y and normal histology in SBN/y. To amplify the differences between the strains, uninephrectomy was performed. At 9 mo, proteinuria in SBH/y with one kidney (SBH/y-1K) was 195 +/- 12 mg/day (n = 10) and in SBN/y was 128 +/- 10 mg/day (n = 10) (P < 0.001); histopathology revealed FSGS grade 3 in SBH/y-1K and grade 1-2 in SBN/y-1K. To determine the effect of salt loading, animals were provided with 8% NaCl in chow, causing hypertension in SBH/y but not in SBN/y. Proteinuria markedly increased in both SBH/y with two kidneys (SBH/y-2K) and SBH/y-1K, but not in SBN/y; histopathology revealed FSGS grade 1-2 in SBH/y-2K, grade 2 in SBH/y-1K, no lesions in SBN/y-2K, and grade 0-1 in SBN/y-1K. We concluded that the SBH/y strain is more susceptible to develop proteinuria and glomerulosclerosis than SBN/y. In search for the genetic basis of this phenomenon, we investigated the role of candidate proteinuric gene loci. Consomic strains were constructed by introgressing chromosome 1 (which harbors the rf-1 and rf-2 proteinuric loci) or chromosome 17 (which harbors rf-5) from SBH/y onto the SBN/y genomic background. The resulting consomic strains developed marked proteinuria that was severalfold higher than in SBN/y-1K; histopathological evaluation, however, revealed FSGS lesions grade 1-2, similar to those found in SBN/y-1K and less severe than in SBH/y-1K. These results suggest a functional role of gene systems located on chromosomes 1 and 17 in inducing proteinuria in the salt-susceptible Sabra rat strain. These genetic loci do not appear to harbor major genes for glomerulosclerosis.
abstract_id: PUBMED:11057426
The lack of a modulating effect of non-genetic factors (age, gonads and maternal environment) on the phenotypic expression of the salt-susceptibility genes in the Sabra rat model of hypertension. Objective: This study was designed to test the hypothesis that non-genetic factors such as age, gonads and maternal environment modulate the expression of the salt-susceptibility genes and affect the blood pressure response to salt-loading (salt-sensitivity and salt-resistance) in the Sabra rat model of hypertension.
Methods: The blood pressure response to salt-loading was studied in Sabra hypertension prone (SBH/y) and Sabra hypertension resistant (SBN/y) rats of both sexes: (1) at 1, 3, 6, 9 and 12 months of age, (2) in adult rats after orchiectomy or oophorectomy, and (3) in animals that had been raised and nourished from birth to weaning by a foster mother from the contrasting strain. In each of the study protocols, systolic blood pressure was measured at baseline by the tail cuff method, animals were salt-loaded with deoxycorticosterone acetate, and blood pressure was measured again after 4 weeks.
Results: Basal blood pressure at all the ages studied and in both sexes was on average 10-15 mmHg higher in SBH/y than in SBN/y. Salt-loading in SBN/y of both sexes aged 1-12 months did not induce any significant increment in blood pressure. Salt-loading in SBH/y, in contrast, caused a highly significant rise in systolic blood pressure, of 40 mmHg or more at all the ages studied. There was no age difference or sex dependence in the magnitude of the blood pressure response to salt. Oophorectomy or orchiectomy did not affect the levels of basal blood pressure nor prevent the hypertensive response to salt-loading in SBH/y or the lack of a hypertensive response in SBN/y rats. Gonadectomy did not affect blood pressure in salt-loaded hypertensive SBH/y nor in salt-loaded normotensive SBN/y. The basal blood pressure and the blood pressure responses of SBH/y and SBN/y of both sexes raised by foster mothers of the contrasting strains from birth to weaning were not different from those observed when raised by their natural mothers.
Conclusions: This study indicates that salt-sensitivity in SBH/y and salt-resistance in SBN/y are not age-dependent phenomena; that the magnitude of the BP response to salt-loading is not sex-dependent; and that neither gonadectomy nor the maternal environment affect the blood pressure response to salt-loading in the adult animal of either strain. These non-genetic factors thus do not modulate expression of the salt-susceptibility genes in the Sabra genetic model of salt-sensitive hypertension.
Answer: The Sabra hypertension-prone rat (SBH/y) represents a model of salt sensitivity rather than mineralocorticoid sensitivity. This conclusion is drawn from a study that investigated the blood pressure response of SBH/y rats to various modes of salt loading. The study found that the full hypertensive response in SBH/y rats could be elicited by salt loading with 8% NaCl in chow alone, which occurred after 7 weeks of treatment. In contrast, administration of deoxycorticosterone acetate (DOCA) alone raised blood pressure moderately and to a maximal level within 3 weeks, but the magnitude of the blood pressure response was significantly smaller than that observed with DOCA-salt or 8% NaCl in chow. Furthermore, administration of 1% NaCl in water alone did not elicit a hypertensive response. These results indicate that the hypertensive response in the Sabra experimental model of hypertension is primarily an expression of salt sensitivity, as it can be fully reproduced with salt alone, but not with DOCA alone. The use of the DOCA-salt mode of salt loading in this model is a useful way of accelerating the development of salt-sensitive hypertension in SBH/y, which shortens and facilitates phenotyping (PUBMED:9814619).
Instruction: Can we predict immediate outcome after laparoscopic rectal surgery?
Abstracts:
abstract_id: PUBMED:34312719
Improving postoperative outcome in rectal cancer surgery: Enhanced Recovery After Surgery in an era of increasing laparoscopic resection. Purpose: The Enhanced Recovery After Surgery (ERAS) protocol reduces complications and length of stay (LOS) in colon cancer, but implementation in rectal cancer is different because of neo-adjuvant therapy and surgical differences. Laparoscopic resection may further improve outcome. The aim of this study was to evaluate the effects of introducing ERAS on postoperative outcome after rectal cancer resection in an era of increasing laparoscopic resections.
Materials And Methods: Patients who underwent elective rectal cancer surgery from 2009 till 2015 were included in this observational cohort study. In 2010, ERAS was introduced and adherence to the protocol was registered. Open and laparoscopic resections were compared. With regression analysis, predictive factors for postoperative outcome and LOS were identified.
Results: A total of 499 patients were included. The LOS decreased from 12.3 days in 2009 to 5.7 days in 2015 (p = 0.000). Surgical site infections were reduced from 24% in 2009 to 5% in 2015 (p = 0.013) and postoperative ileus from 39% in 2009 to 6% in 2015 (p = 0.000). Only postoperative ERAS items and laparoscopic surgery were associated with an improved postoperative outcome and shorter LOS.
Conclusions: ERAS proved to be feasible, safe, and contributed to improving short-term outcome in rectal cancer resections. The benefits of laparoscopic surgery may in part be explained by reaching better ERAS adherence rates. However, the laparoscopic approach was also associated with anastomotic leakage. Despite the potential of bias, this study provides an insight in effects of ERAS and laparoscopic surgery in a non-randomized real-time setting.
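As a rough sketch of the kind of regression analysis the methods mention, the code below models length of stay as a function of ERAS adherence and surgical approach on simulated data; the variable names and coefficients are assumptions for illustration, not the study's actual model or estimates.

```python
import numpy as np

# Simulated cohort: LOS assumed to fall with better ERAS adherence and with a
# laparoscopic approach. Coefficients are illustrative only.
rng = np.random.default_rng(1)
n = 500
eras_adherence = rng.uniform(0.4, 1.0, n)          # fraction of ERAS items achieved
laparoscopic = rng.integers(0, 2, n)               # 1 = laparoscopic, 0 = open
los = (14 - 6 * eras_adherence - 2 * laparoscopic
       + rng.normal(0, 2, n)).clip(min=2)          # length of stay in days

X = np.column_stack([np.ones(n), eras_adherence, laparoscopic])
coef, *_ = np.linalg.lstsq(X, los, rcond=None)
for name, b in zip(["intercept", "ERAS adherence", "laparoscopic approach"], coef):
    print(f"{name}: {b:+.2f} days")
```

In practice the study's analysis would also adjust for confounders such as age, tumor stage, and neo-adjuvant therapy, which this toy model omits.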
abstract_id: PUBMED:25214259
Outcome of laparoscopic versus open resection for rectal cancer in elderly patients. Background: Laparoscopic colorectal resection has been gaining popularity over the past two decades. However, studies about laparoscopic rectal surgery in elderly patients with long-term oncologic outcomes are limited. In this study, we evaluated the short-term and long-term outcomes of laparoscopic and open resection in patients with rectal cancer aged ≥ 70 y.
Methods: From 2007-2012, a total of 294 consecutive patients with rectal cancer from a single institution were included, 112 patients undergoing laparoscopic rectal resection were compared with 182 patients undergoing open rectal resection.
Results: Seven (6.3%) patients in the laparoscopic group required conversion to open surgery. The two groups were well balanced in terms of age, gender, body mass index, American Society of Anesthesiologists scores, and site and stage of the tumors. Compared with open surgery, laparoscopic surgery was associated with a significantly longer median operating time (220 versus 200 min; P = 0.005), less estimated blood loss (100 versus 150 mL; P < 0.001), a shorter postoperative hospital stay (8 versus 11 d), a lower overall postoperative complication rate (15.2% versus 26.4%; P = 0.025), a lower wound-related complication rate (7.14% versus 17.03%; P = 0.015), less need for blood transfusion (8.04% versus 16.5%; P = 0.038), and less need for surgical intensive care unit admission after surgery (12.5% versus 22.0%; P = 0.042). Mortality, quality of the surgical specimen, lymph nodes harvested, and positive distal and circumferential margin rates were not significantly different between the two groups. The estimated 3-y survival rates were similar between the two groups.
Conclusions: Laparoscopic rectal surgery is safe and feasible in patients >70 y and is associated with better short-term outcomes when compared with open surgery.
abstract_id: PUBMED:18362627
Can we predict immediate outcome after laparoscopic rectal surgery? Multivariate analysis of clinical, anatomic, and pathologic features after 3-dimensional reconstruction of the pelvic anatomy. Objectives: The laparoscopic approach for colon resection is widely accepted but its definitive role in rectal tumors is controversial due to the technical difficulties associated with this procedure. Tumor size and volume, and pelvic dimensions may influence intraoperative and/or immediate outcome. This study aimed to evaluate the predictive value of anatomic and pathologic features on immediate outcome after laparoscopic rectal resection.
Material And Methods: The study included a prospective series of 60 patients submitted to laparoscopic resection for rectal tumors. A preoperative computed tomography scan was performed in all patients. Three-dimensional reconstruction of the pelvis, rectal tumor, and prostate was computed. Tumor and prostate volumes and diameters were calculated, as were the main pelvic diameters (subsacrum-retropubic, coccyx pubis, and promontorium coccyx) and the lateral diameters at the tumor level (3D Doctor Software package). Age, sex, body mass index (BMI), tumor height, previous radiotherapy treatment, and type of procedure (anterior resection, low anterior resection, and abdominoperineal resection) were recorded. Immediate outcome (morbidity, mortality, and stay) was also collected. Dependent variables were operative time, intraoperative difficulty, conversion, and postoperative morbidity. Univariate and multivariate analyses were performed (SPSS package).
Results: The series included 36 men and 24 women, with a mean age of 72 years (range, 38-87). Surgical procedures were 10 anterior resections, 31 low anterior resections, and 19 abdominoperineal resections. Conversion rate was 9 of 60 (15%), operative time 172 minutes (range, 90-360), morbidity 31%, and stay 9 days (range, 6-43). Multivariate analysis showed tumor craniocaudal length was an independent predictive factor for conversion (P < 0.04; odds ratio [OR]: 1.5; 95% confidence interval [CI]: 1-2.2). Pubic coccyx axis (P < 0.005) and sex (P < 0.009) showed independent values for operative time, and BMI (P < 0.02; OR: 1.2; 95% CI: 1-1.5) was related to postoperative morbidity. When a subanalysis was performed in relation to sex, independent factors differed between males and females, with a predominance of anatomic and tumor measures in men.
Conclusion: Local anatomy and pathologic features directly affect surgical outcome in the laparoscopic approach to the rectum. Sex, BMI, lower pelvis diameter, and tumor size are independent predictors for conversion, operative time, and morbidity. These data should be taken into account when planning this kind of procedure.
abstract_id: PUBMED:34145964
The clinical impact of robot-assisted laparoscopic rectal cancer surgery associated with robot-assisted radical prostatectomy. Introduction: Robot-assisted laparoscopic surgery has been performed in various fields, especially in the pelvic cavity. However, little is known about the utility of robot-assisted laparoscopic rectal cancer surgery associated with robot-assisted radical prostatectomy (RARP). We herein report the clinical impact of robot-assisted laparoscopic rectal cancer surgery associated with RARP.
Methods: We experienced five cases of robot-assisted laparoscopic rectal cancer surgery associated with RARP. One involved robot-assisted laparoscopic abdominoperineal resection with en bloc prostatectomy for T4b rectal cancer, and one involved robot-assisted laparoscopic intersphincteric resection combined with RARP for synchronous rectal and prostate cancer. The remaining three involved robot-assisted laparoscopic low anterior resection (RaLAR) after RARP. For robot-assisted laparoscopic rectal cancer surgery, the da Vinci Xi surgical system was used.
Results: We could perform planned robotic rectal cancer surgery in all cases. The median operation time was 529 min (373-793 min), and the median blood loss was 307 ml (32-1191 ml). No patients required any transfusion in the intra-operative or immediate peri-operative period. The circumferential resection margin was negative in all cases. There were no complications of grade ≥III according to the Clavien-Dindo classification and no conversions to conventional laparoscopic or open surgery.
Conclusion: Robot-assisted laparoscopic surgery associated with RARP is feasible in patients with rectal cancer. The long-term surgical outcomes remain to be further evaluated.
abstract_id: PUBMED:34338870
Can MRI pelvimetry predict the technical difficulty of laparoscopic rectal cancer surgery? Purpose: Selection of an open or minimally invasive approach to total mesorectal excision (TME) is generally based on surgeon preference and an intuitive assessment of patient characteristics, but there is no consensus on criteria to predict surgical difficulty. Pelvimetry has been used to predict the difficult surgical pelvis, typically using only bony landmarks. This study aimed to assess the relationship between pelvic soft tissue measurements on preoperative MRI and surgical difficulty.
Methods: Preoperative MRIs for patients undergoing laparoscopic rectal resection in the Australasian Laparoscopic Cancer of the Rectum Trial (ALaCaRT) were retrospectively reviewed by two blinded surgeons and pelvimetric variables measured. Pelvimetric variables were analyzed for predictors of successful resection of the rectal cancer, defined by clear circumferential and distal resection margins and completeness of TME.
Results: There was no association between successful surgery and any measurement of distance, area, or ratio. However, there was a strong association between the primary outcome and the estimated total pelvic volume on adjusted logistic regression analysis (OR = 0.99, P = 0.01). For each cubic centimeter increase in the pelvic volume, there was a 1% decrease in the odds of successful laparoscopic rectal cancer surgery. Intuitive prediction of unsuccessful surgery was correct in 43% of cases, and correlation between surgeons was poor (ICC = 0.18).
Conclusions: A surgeon's intuitive assessment of the difficult pelvis, based on visible MRI assessment, is not a reliable predictor of successful laparoscopic surgery. Further assessment of pelvic volume may provide an objective method of defining the difficult surgical pelvis.
abstract_id: PUBMED:24355022
Laparoscopic surgery for rectal cancer: current status and future perspective. Although laparoscopic surgery for colon cancer is accepted in the treatment guidelines, the laparoscopic approach for rectal cancer is recommended only in clinical trials. Thus far, several trials have shown favorable short-term results such as early recovery and short hospital stay, but long-term results remain a critical concern for laparoscopic rectal cancer surgery. To date, no randomized control trials have shown an increased local recurrence after laparoscopic surgery for rectal cancer. Additionally, according to previous studies, open conversion, which is more frequent in laparoscopic rectal surgery than in laparoscopic colon surgery, may affect short-term and long-term survival. The evidence on male sexual function has been contradictory. Long-term results from ongoing multicenter trials will be available within several years. Based on accumulated evidence from well-organized clinical trials, laparoscopic surgery will likely be accepted as a treatment choice for rectal cancer. In the future, extended laparoscopic rectal surgery might be feasible for additional procedures such as laparoscopic lateral pelvic lymph node dissection and laparoscopic total pelvic exenteration for rectal cancer invading the adjacent pelvic organ.
abstract_id: PUBMED:27648031
Short term outcome of laparoscopic ventral rectopexy for rectal prolapse. Objective: To determine the short-term effectiveness and safety of laparoscopic ventral rectopexy for rectal prolapse.
Methods: This was a descriptive case series of 31 consecutive patients with rectal prolapse treated in the Colorectal division of Ward 2, Department of General Surgery, Jinnah Post Graduate Medical Center, Karachi, from November 2009 to November 2015. These patients were admitted through the outpatient department with complaints of something coming out of the anus, constipation, and per rectal bleeding. All patients were clinically examined and baseline investigations were done. All patients underwent laparoscopic repair with ventral mesh placement on the rectum.
Results: Among the 31 patients, the mean age was 45 years (range 20-72); 14 (45%) were female and 17 (55%) male. We observed a variety of presentations, including solitary rectal ulcers (n=4) and rectocele (n=3), but full-thickness rectal prolapse was predominant (n=24). All patients had laparoscopic repair with mesh placement. Average hospital stay was three days. Out of 31 patients, there was one (3.2%) recurrence. Minor port-site infection occurred in 3 (9.7%) patients, conversion to an open approach was required in two (6.4%), and postoperative ileus was observed in two (6.4%) patients. One (3.2%) patient developed intractable back pain and the mesh was removed six weeks after the operation. One (4.8%) patient complained of intermittent abdominal pain postoperatively. No patient developed de novo or worsening constipation, while constipation improved in 21 patients (67%). Sexual dysfunction, such as dyspareunia in females and impotence in males, was not detected at follow-up.
Conclusions: This study provides limited evidence that nerve-sparing laparoscopic ventral rectopexy is a safe and effective treatment for external and symptomatic internal rectal prolapse. It offers the cosmetic and functional advantages of minimal access with a comparable recurrence rate.
abstract_id: PUBMED:29402298
Laparoscopic low anterior resection for rectal cancer with rectal prolapse: a case report. Background: Rectal cancer with rectal prolapse is rare, described by only a few case reports. Recently, laparoscopic surgery has become standard procedure for either rectal cancer or rectal prolapse. However, the use of laparoscopic low anterior resection for rectal cancer with rectal prolapse has not been reported.
Case Presentation: A 63-year-old Japanese woman suffered from rectal prolapse, with a mass and rectal bleeding for 2 years. An examination revealed complete rectal prolapse and the presence of a soft tumor, 7 cm in diameter; the distance from the anal verge to the tumor was 5 cm. Colonoscopy demonstrated a large villous tumor in the lower rectum, which was diagnosed as adenocarcinoma on biopsy. We performed laparoscopic low anterior resection using the prolapsing technique without rectopexy. The distal surgical margin was more than 1.5 cm from the tumor. There were no major perioperative complications. Twelve months after surgery, our patient is doing well with no evidence of recurrence of either the rectal prolapse or the cancer, and she has not suffered from either fecal incontinence or constipation.
Conclusions: Laparoscopic low anterior resection without rectopexy can be an appropriate surgical procedure for rectal cancer with rectal prolapse. The prolapsing technique is useful in selected patients.
abstract_id: PUBMED:12907898
Outcome of laparoscopic surgery for rectal cancer in 101 patients. Purpose: This study was conducted to investigate the feasibility of laparoscopic resection of rectal cancer and to compare early outcome data with the results of the conventional technique.
Methods: From January 1996 to March 2002, 435 patients with primary rectal cancer were operated on at our institution. Low-risk, small rectal tumors treatable by local excision, rectal cancer recurrences, and emergency cases were excluded from the analysis. Three hundred thirty-four patients were operated on by the conventional open approach. One hundred one selected patients underwent surgery by the laparoscopic technique.
Results: Because of the selection process, significantly more patients with early tumor stages were operated on by laparoscopy. There were no differences in mean operation time, morbidity, mortality, or the anastomotic leakage rate; however, the need for intraoperative transfusion, mean stay in the intensive care unit, and length of hospital stay were reduced significantly.
Conclusions: In terms of the intraoperative and early postoperative course, the laparoscopic resection of rectal cancer in a selected cohort of patients compares favorably with the open technique. Because follow-up time is limited to date, only very preliminary information can be given on tumor-related outcome data. However, these preliminary data appear to suggest that rectal cancer resection can be performed by laparoscopy in accordance with established principles of cancer therapy and that port-site metastases are not a relevant clinical problem. Prospective, randomized trials are required to determine whether the laparoscopic approach will play a significant role in the treatment of rectal cancer in the future.
abstract_id: PUBMED:29761276
Does robotic rectal cancer surgery improve the results of experienced laparoscopic surgeons? An observational single institution study comparing 168 robotic assisted with 184 laparoscopic rectal resections. Background: The role of robotic assistance in colorectal cancer surgery has not been established yet. We compared the results of robotic assisted with those of laparoscopic rectal resections done by two surgeons experienced in laparoscopic as well as in robotic rectal cancer surgery.
Methods: Two surgeons who were already experienced laparoscopic colorectal surgeons in 2005 started robotic surgery with the daVinci SI system in 2012. All their rectal cancer resections between 2005 and 2015 were retrieved from a prospectively recorded colorectal database of routinely collected patient data. Multi-organ resections were excluded. Patient data, diagnostic data, data on preceding operations and neoadjuvant treatment, perioperative and operative data, logistic data, and short-term outcomes were gathered. Multivariable analyses (multiple linear and logistic regression) were used to assess differences in several outcomes between the two resection methods while adjusting for all potential confounders we could identify. Results are presented as adjusted mean differences for continuous outcome variables or as adjusted odds ratios (OR) for dichotomous outcome variables.
Results: Three hundred and fifty-two patients with rectal cancers were identified: 168 robotic and 184 conventional laparoscopic cases, 178 operated by surgeon A and 174 operated by surgeon B. Adjusted mean operation time was 215 min in the robotic group, which was 40 min (95% CI 24-56; p < 0.0005) longer than the 175 min in the laparoscopic group. Robotic treatment had significantly fewer conversions (OR 0.09 (0.03-0.32); p < 0.0005) and other complications (SSI and anastomotic leakage excluded) (OR 0.32 (0.15-0.69); p = 0.004), adjusted for potential confounders.
Conclusions: Our study suggests that robotic surgery in the hands of experienced laparoscopic rectal cancer surgeons improves the conversion rate and complication rate drastically compared to conventional laparoscopic surgery, but operation time is longer.
Answer: The immediate outcome after laparoscopic rectal surgery can be influenced by several factors, and some studies have attempted to identify predictors for these outcomes. According to one study, tumor craniocaudal length was an independent predictive factor for conversion from laparoscopic to open surgery, while the pubic coccyx axis and sex showed independent values for operative time, and BMI was related to postoperative morbidity (PUBMED:18362627). This suggests that certain anatomical and pathological features, as well as patient characteristics like sex and BMI, can impact the surgical outcome in laparoscopic approaches to rectal cancer.
Another study found that a surgeon's intuitive assessment of the difficult pelvis, based on visible MRI assessment, is not a reliable predictor of successful laparoscopic surgery. However, they noted a strong association between successful surgery and the estimated total pelvic volume, with a 1% decrease in the odds of successful laparoscopic rectal cancer surgery for each cubic centimeter increase in pelvic volume (PUBMED:34338870).
In terms of the technical aspects, robotic assistance in laparoscopic rectal cancer surgery has been shown to improve the conversion rate and complication rate drastically compared to conventional laparoscopic surgery, although operation time is longer (PUBMED:29761276). This indicates that the use of robotic systems might be a factor in predicting better immediate outcomes in terms of reduced conversion and complication rates.
Overall, while it is challenging to predict immediate outcomes with certainty, certain anatomical features, patient characteristics, and the use of advanced surgical techniques like robotic assistance can provide some predictive value for the outcomes of laparoscopic rectal surgery. |
Instruction: Do we agree on the technical quality of our chest X-rays?
Abstracts:
abstract_id: PUBMED:33958315
Inter-rater reliability in quality assurance (QA) of pediatric chest X-rays. Purpose: The goal of the study is to determine the inter-rater agreement on multiple factors that were utilized to evaluate the quality of pediatric chest X-ray exams from different levels of healthcare provision in an African setting.
Methods: The image quality of pediatric chest X-rays from 3 South African medical centers of varying levels of healthcare service was retrospectively assessed by 3 raters for 12 quality factors: (1) absent body parts; (2) under inspiration; (3) patient rotation; (4) scapula in the way; (5) patient kyphosis/lordosis; (6) artefact/foreign body; (7) central vessel visualization; (8) peripheral vessel visualization; (9) poor collimation; (10) trachea and bronchi visualization; (11) post-cardiac vessel visualization; and (12) absent or wrong image orientation. Analysis was performed using the Brennan-Prediger coefficient of agreement for inter-rater reliability, and Cochran's Q statistic and McNemar's test for inter-rater bias.
Results: 1077 X-rays were reviewed. The least difference between observers in the frequency of the errors was noticed for factors (1) absent body parts and (12) absent or wrong image orientation with almost perfect agreement between raters. κ score for these two factors among all raters and between each pair of raters was more than 0.95 with no significant inter-rater bias. Conversely, there was poor agreement for the remaining factors with the least agreed on being factor (3) patient rotation with a κ score of 0.23. This was followed by factors (2) under inspiration (κ score of 0.32) and factors (4) scapula in the way (κ score of 0.35) respectively. There was significant inter-rater bias for all these three factors.
Conclusion: Many of the factors used to assess the quality of a chest X-ray in children demonstrate poor reliability despite mitigation against variations in training, standard quality definitions and level of healthcare service provision. New definitions, objective measures and recording tools for assessing pediatric chest radiographic quality are required.
abstract_id: PUBMED:38440036
Technical Quality and Diagnostic Impact of Chest X-rays in Tuberculosis Screening: Insights From a Saudi Teleradiology Cohort. Objectives: To assess the standard of chest X-ray techniques in tuberculosis (TB) screening within Saudi Arabian healthcare facilities and evaluate the impact of technical quality on radiological interpretation. Materials and methods: Analysis of 250 posteroanterior chest radiographs sourced from a network of five clinics was conducted. These images were scrutinized for technical quality by a radiologist. Results: Of the radiographs analyzed, 57% exhibited technical issues, with overexposure and clothing artifacts being the most commonly encountered. Notably, only 14% of these radiographs were deemed to have compromised diagnostic ability. Conclusion: The presence of technical issues in most chest X-rays for TB screening highlights a significant area for improvement. However, the relatively low percentage of radiographs impacting diagnostic quality indicates that most issues do not critically hinder the radiologist's interpretative capability. This underscores the importance of balanced quality control measures in radiographic practices for effective TB detection in the region.
abstract_id: PUBMED:36507112
Call to Action: Creating Resources for Radiology Technologists to Capture Higher Quality Portable Chest X-rays. Background: Patient rotation, foreign body overlying anatomy, and anatomy out of field of view can have detrimental impacts on the diagnostic quality of portable chest x-rays (PCXRs), especially as the number of PCXR examinations increases due to the coronavirus disease 2019 (COVID-19) pandemic. Although preventable, these "quality failures" are common and may lead to interpretative and diagnostic errors for the radiologist. Aims: In this study, we present a baseline quality failure rate of PCXR imaging as observed at our institution. We also conduct a focus group highlighting the key issues that lead to the problematic images and discuss potential interventions targeting technologists that can be implemented to address the imaging quality failure rate. Materials and methods: A total of 500 PCXRs for adult patients admitted to a large university hospital between July 12, 2021, and July 25, 2021, were obtained for evaluation of quality. The PCXRs were evaluated by radiology residents for failures in technical image quality. The images were categorized into various metrics including the degree of rotation and obstruction of anatomical structures. After collecting the data, a focus group involving six managers of the technologist department at our university hospital was conducted to further illuminate the key barriers to quality PCXRs faced at our institution. Results: Out of the 500 PCXRs evaluated, 231 were problematic (46.2%). 43.5% of the problematic films with a repeat PCXR within one week showed that there was a technical problem impacting the ability to detect pathology. Most problematic films also occurred during the night shift (48%). Key issues that led to poor image quality included improper patient positioning, foreign objects covering anatomy, and variances in technologists' training. Three interventions were proposed to optimize technologist performance that can lower quality failure rates of PCXRs. These include a longitudinal educational curriculum involving didactic sessions, adding nursing support to assist technologists, and adding an extra layer of verification by internal medicine residents before sending the films to the radiologist. The rationale for these interventions is discussed in detail so that a modified version can be implemented in other hospital systems. Conclusion: This study illustrates the high baseline error rate in image quality of PCXRs at our institution and demonstrates the need to improve on image quality. Poor image quality negatively impacts the interpretive accuracy of radiologists and therefore leads to wrong diagnoses. Increasing educational resources and support for technologists can lead to higher image quality and radiologist accuracy.
abstract_id: PUBMED:37244797
Patient rotation chest X-rays and the consequences of misinterpretation in paediatric radiology. Purpose: We aimed to demonstrate the consequences of rotation on neonatal chest radiographs and how it affects diagnosis. In addition, we demonstrate methods for determining the presence and direction of rotation.
Background: Patient rotation is common in chest X-rays of neonates. Rotation is present in over half of chest X-rays from the ICU, contributed to by unwillingness of technologists to reposition new-borns for fear of dislodging lines and tubes. There are six main effects of rotation on supine paediatric chest X-rays: 1) unilateral hyperlucency of the side that the patient is rotated towards; 2) the side 'up' appears larger; 3) apparent deviation of the cardiomediastinal shadow in the direction that the chest is rotated towards; 4) apparent cardiomegaly; 5) distorted cardio-mediastinal configuration; and 6) reversed position of the tips of the umbilical artery and vein catheters with rotation to the left. These effects can cause diagnostic errors due to misinterpretation, including air-trapping, atelectasis, cardiomegaly, and pleural effusions, or disease may be masked. We demonstrate the methods of evaluating rotation with examples, including a 3D model of the bony thorax as a guide. In addition, multiple examples of the effects of rotation are provided including examples where disease was misinterpreted, underestimated or masked.
Conclusion: Rotation is often unavoidable in neonatal chest X-rays, especially in the ICU. It is therefore important for physicians to recognise rotation and its effects, and to be aware that it can mimic or mask disease.
abstract_id: PUBMED:35870303
Semantic segmentation of bone structures in chest X-rays including unhealthy radiographs: A robust and accurate approach. The chest X-ray is a widely used medical imaging technique for the diagnosis of several lung diseases. Some nodules or other pathologies present in the lungs are difficult to visualize on chest X-rays because they are obscured by overlying bone shadows. Segmenting bone structures and suppressing them assists medical professionals in reliable diagnosis and organ morphometry. However, segmentation of bone structures is challenging due to fuzzy organ boundaries and inconsistent organ shape and size caused by health issues, age, and gender. The existing bone segmentation methods do not report their performance on abnormal chest X-rays, where it is even more critical to segment the bones. This work presents a robust encoder-decoder network for semantic segmentation of bone structures on normal as well as abnormal chest X-rays. The novelty here lies in combining techniques from two existing networks (Deeplabv3+ and U-net) to achieve robust and superior performance. The fully connected layers of the pre-trained ResNet50 network have been replaced by an Atrous spatial pyramid pooling block to improve the quality of the embedding in the encoder module. The decoder module includes four times upsampling blocks to connect both low-level and high-level feature information, enabling us to retain both the edges and the detail information of the objects. At each level, the up-sampled decoder features are concatenated with the encoder features at a similar level and further fine-tuned to refine the segmentation output. We construct a diverse chest X-ray dataset with ground-truth binary masks of the anterior ribs, posterior ribs, and clavicle bone for experimentation. The dataset includes 100 chest X-ray samples from healthy individuals and confirmed patients of lung diseases to maintain diversity and test the robustness of our method. We test our method using multiple standard metrics, and the experimental results indicate excellent performance on both normal and abnormal chest X-rays.
abstract_id: PUBMED:34173055
Routine chest X-rays after pigtail chest tube removal rarely change management in children. Background: The need for chest X-rays (CXR) following large-bore chest tube removal has been questioned; however, the utility of CXRs following removal of small-bore pigtail chest tubes is unknown. We hypothesized that CXRs obtained following removal of pigtail chest tubes would not change management.
Methods: Patients < 18 years old with pigtail chest tubes placed 2014-2019 at a tertiary children's hospital were reviewed. Exclusion criteria were age < 1 month, death or transfer with a chest tube in place, or pigtail chest tube replacement by large-bore chest tube. The primary outcome was chest tube reinsertion.
Results: 111 patients underwent 123 pigtail chest tube insertions; 12 patients had bilateral chest tubes. The median age was 5.8 years old. Indications were pneumothorax (n = 53), pleural effusion (n = 54), chylothorax (n = 6), empyema (n = 5), and hemothorax (n = 3). Post-pull CXRs were obtained in 121/123 cases (98.4%). The two children without post-pull CXRs did not require chest tube reinsertion. Two patients required chest tube reinsertion (1.6%), both for re-accumulation of their chylothorax.
Conclusions: Post-pull chest X-rays are done nearly universally following pigtail chest tube removal but rarely change management. Providers should obtain post-pull imaging based on symptoms and underlying diagnosis, with higher suspicion for recurrence in children with chylothorax.
abstract_id: PUBMED:34801180
Usefulness of chest X-rays for evaluating prognosis in patients with COVID-19. Background And Aims: The pandemia caused by SARS-CoV-2 (COVID-19) has been a diagnostic challenge in which chest X-rays have had a key role. This study aimed to determine whether the Radiological Scale for Evaluating Hospital Admission (RSEHA) applied to chest X-rays of patients with COVID-19 when they present at the emergency department is related with the severity of COVID-19 in terms of the need for admission to the hospital, the need for admission to the intensive care unit (ICU), and/or mortality.
Material And Methods: This retrospective study included 292 patients with COVID-19 who presented at the emergency department between March 16, 2020 and April 30, 2020. To standardize the radiologic patterns, we used the RSEHA, categorizing the radiologic pattern as mild, moderate, or severe. We analyzed the relationship between radiologic severity according to the RSEHA with the need for admission to the hospital, admission to the ICU, and mortality.
Results: Hospital admission was necessary in 91.4% of the patients. The RSEHA was significantly associated with the need for hospital admission (p = 0.03) and with the need for ICU admission (p < 0.001). A total of 51 (17.5%) patients died; of these, 57% had the severe pattern on the RSEHA. When we analyzed mortality by grouping patients according to their results on the RSEHA and their age range, the percentage of patients who died increased after age 70 years in patients classified as moderate or severe on the RSEHA.
Conclusions: Chest X-rays in patients with COVID-19 obtained in the emergency department are useful for determining the prognosis in terms of admission to the hospital, admission to the ICU, and mortality; radiologic patterns categorized as severe on the RSEHA are associated with greater mortality and admission to the ICU.
abstract_id: PUBMED:31446493
Evaluation of a computer-aided method for measuring the Cobb angle on chest X-rays. Objectives: To automatically measure the Cobb angle and diagnose scoliosis on chest X-rays, a computer-aided method was proposed and the reliability and accuracy were evaluated.
Methods: Two Mask R-CNN models as the core of a computer-aided method were used to separately detect and segment the spine and all vertebral bodies on chest X-rays, and the Cobb angle of the spinal curve was measured from the output of the Mask R-CNN models. To evaluate the reliability and accuracy of the computer-aided method, the Cobb angles on 248 chest X-rays from lung cancer screening were measured automatically using a computer-aided method, and two experienced radiologists used a manual method to separately measure Cobb angles on the aforementioned chest X-rays.
Results: For manual measurement of the Cobb angle on chest X-rays, the intraclass correlation coefficients (ICC) of intra- and inter-observer reliability analysis were 0.941 and 0.887, respectively, and the mean absolute differences were < 3.5°. The ICC between the computer-aided and manual methods for Cobb angle measurement was 0.854, and the mean absolute difference was 3.32°. These results indicated that the computer-aided method had good reliability for Cobb angle measurement on chest X-rays. Using the mean value of Cobb angles in manual measurements > 10° as a reference standard for scoliosis, the computer-aided method achieved a high level of sensitivity (89.59%) and a relatively low level of specificity (70.37%) for diagnosing scoliosis on chest X-rays.
Conclusion: The computer-aided method has potential for automatic Cobb angle measurement and scoliosis diagnosis on chest X-rays. These slides can be retrieved under Electronic Supplementary Material.
abstract_id: PUBMED:35181935
Chest X-rays are less sensitive than multiple breath washout examinations when it comes to detecting early cystic fibrosis lung disease. Aim: Annual chest X-ray is recommended as routine surveillance to track cystic fibrosis (CF) lung disease. The aim of this study was to investigate the clinical utility of chest X-rays to track CF lung disease.
Methods: Children at Gothenburg's CF centre who underwent chest X-rays, multiple breath washouts and chest computed tomography examinations between 1996 and 2016 were included in the study. Chest X-rays were interpreted with Northern Score (NS). We compared NS to lung clearance index (LCI) and structural lung damage measured by computed tomography using a logistic regression model.
Results: A total of 75 children were included over a median period of 13 years (range: 3.0-18.0 years). The proportion of children with abnormal NS was significantly lower than the proportion of abnormal LCI up to the age of 4 years (p < 0.05). A normal NS and a normal LCI at age 6 years were associated with a median (10-90th percentile) total airway disease of 1.8% (0.4-4.7%) and bronchiectasis of 0.2% (0.0-1.5%).
Conclusion: Chest X-rays were less sensitive than multiple breath washout examinations to detect early CF lung disease. The combined results from both methods can be used as an indicator to perform chest computed tomography less frequently.
abstract_id: PUBMED:33235710
Classification of COVID-19 chest X-rays with deep learning: new models or fine tuning? Background And Objectives: Chest X-ray data have been found to be very promising for assessing COVID-19 patients, especially for resolving emergency-department and urgent-care-center overcapacity. Deep-learning (DL) methods in artificial intelligence (AI) play a dominant role as high-performance classifiers in the detection of the disease using chest X-rays. Given many new DL models have been being developed for this purpose, the objective of this study is to investigate the fine tuning of pretrained convolutional neural networks (CNNs) for the classification of COVID-19 using chest X-rays. If fine-tuned pre-trained CNNs can provide equivalent or better classification results than other more sophisticated CNNs, then the deployment of AI-based tools for detecting COVID-19 using chest X-ray data can be more rapid and cost-effective.
Methods: Three pretrained CNNs, which are AlexNet, GoogleNet, and SqueezeNet, were selected and fine-tuned without data augmentation to carry out 2-class and 3-class classification tasks using 3 public chest X-ray databases.
Results: In comparison with other recently developed DL models, the 3 pretrained CNNs achieved very high classification results in terms of accuracy, sensitivity, specificity, precision, F1 score, and area under the receiver-operating-characteristic curve.
Conclusion: AlexNet, GoogleNet, and SqueezeNet require the least training time among pretrained DL models, but with suitable selection of training parameters, excellent classification results can be achieved without data augmentation by these networks. The findings contribute to the urgent need for harnessing the pandemic by facilitating the deployment of AI tools that are fully automated and readily available in the public domain for rapid implementation.
Answer: The technical quality of chest X-rays appears to be a subject of variability and concern, as evidenced by the studies provided. Inter-rater reliability in assessing the quality of pediatric chest X-rays shows significant variability, with poor agreement on many factors used to assess quality. The least difference between observers was noted for factors such as absent body parts and absent or wrong image orientation, which had almost perfect agreement. However, there was poor agreement for factors such as patient rotation, under inspiration, and scapula in the way, with significant inter-rater bias for these factors (PUBMED:33958315).
In a study of chest X-rays for tuberculosis screening within Saudi Arabian healthcare facilities, 57% of the radiographs exhibited technical issues, with overexposure and clothing artifacts being the most common problems. Despite these issues, only 14% of the radiographs were deemed to have compromised diagnostic ability, suggesting that while technical quality is a concern, it may not always critically hinder diagnostic interpretation (PUBMED:38440036).
Another study highlighted that patient rotation, foreign body overlying anatomy, and anatomy out of the field of view can have detrimental impacts on the diagnostic quality of portable chest X-rays. The study found a high baseline error rate in image quality and suggested that increasing educational resources and support for technologists can lead to higher image quality and improved radiologist accuracy (PUBMED:36507112).
In summary, while there is agreement that technical quality issues are present in chest X-rays, there is variability in the degree to which these issues are identified and agreed upon by different raters. Efforts to improve the technical quality of chest X-rays are necessary, and interventions such as education and support for technologists, as well as quality control measures, are recommended to enhance the reliability and diagnostic utility of these images. |
Instruction: Is the transvenous extraction of cardioverter-defibrillator leads more hazardous than that of pacemaker leads?
Abstracts:
abstract_id: PUBMED:27432391
Pacemaker and defibrillator lead extraction techniques - literature review and own results. In the past, the application of continuous traction used to be the alternative to open surgical removal of impacted pacemaker leads. Today's state-of-the-art methods for lead extraction follow the principles of traction (by locking stylets) and countertraction (by outer sheaths). Technical advances with respect to outer sheath design - including the use of lasers or bipolar electrocautery - led to a higher success rate, particularly as far as the removal of endocardial defibrillator leads is concerned. From 1997 to 1999, we treated 31 patients (pts) who required lead extraction more than 6 months after lead implantation. In 16 pts pacemaker leads and in 15 pts endocardial defibrillator leads had to be removed. All but one infected lead could be extracted using the "Cook-Byrd-Method" described here. Incompletely extracted leads were more common in the patient group without infection. This may be the result of different levels of "aggressiveness" when removing leads in infected and non-infected cases, and a reflection of the different risks. We report on the technical principles of lead removal. Published methods and results are reviewed and compared. The laser sheath, recently favored by some authors, is not necessarily quicker, better, or safer. New electrosurgical dissection sheaths seem promising in one study with just a small sample size. The results of the EXCL study (Electrosurgical Extraction of Cardiac Leads) will provide us with new data. Complete lead removal is mandatory, especially in systemically infected pacemaker systems, while it remains most important to prevent harm to the individual patient. The "aggressiveness" of each procedure should be related to the potential risk. However, the costs associated with each method should not be neglected.
abstract_id: PUBMED:20730717
Is the transvenous extraction of cardioverter-defibrillator leads more hazardous than that of pacemaker leads? Background: Leads used for low-voltage and high-voltage therapy delivered by implantable cardioverter-defibrillators (ICD) differ from low-voltage pacemaker (PM) leads in their diameter and complexity of structure. Although there are reports showing that the extraction of ICD leads may be hazardous, owing to firm adhesions of the high-voltage therapy coils to the vascular and chamber walls, clinical evidence suggests that such procedures are safe.
Aim: To compare the efficacy and safety of transvenous extraction of ICD and PM leads in patients enrolled in a single tertiary centre.
Methods: We compared the results of lead extraction procedures in 345 patients with PM leads and in 79 patients submitted for lead removal that included at least one ICD lead. We analysed ingrown leads, i.e. PM leads older than 12 months and ICD leads older than 6 months, which were removed using Cook's device.
Results: Patients in the two groups differed significantly in age and gender. The ICD systems were significantly younger, less complex (fewer leads per patient), and presented higher efficacy of extraction and fewer technical difficulties. The number of major complications was similar to that encountered during extraction of PM leads. However, minor complications were significantly more frequent in the ICD group.
Conclusions: 1. Extraction of ICD and PM leads is associated with a similar risk for developing major complications, however minor complications are more often during extraction of ICD leads. 2. A larger number of double coil leads may be the cause of complications despite a shorter time period elapsing from ICD implantation. 3. A probable cause of complications during ICD lead extraction is the pronounced growth of the connective tissue around the coils. However, further studies are required to clarify this phenomenon. 4. The success rate of ICD leads extraction using our own surgical technique is similar to that reported by other investigators using laser systems.
abstract_id: PUBMED:32681177
Safety and efficacy of transvenous mechanical lead extraction in patients with abandoned leads. Aims: Optimal management of redundant or malfunctioning leads is controversial. We aimed to assess safety and efficacy of mechanical transvenous lead extraction (TLE) in patients with abandoned leads.
Methods And Results: Consecutive TLE procedures performed in our centre from January 2009 to December 2017 were considered. We evaluated the safety and efficacy of mechanical TLE in patients with abandoned (Group 1) compared to non-abandoned (Group 2) leads. We analysed 1210 consecutive patients that required transvenous removal of 2343 leads. Group 1 accounted for 250 patients (21%) with a total of 617 abandoned leads (26%). Group 2 comprised 960 patients (79%) with 1726 leads (74%). The total number of leads (3.0 vs. 2.0), dwelling time of the oldest lead (108.00 months vs. 60.00 months) and infectious indications for TLE were higher in Group 1. Clinical success was achieved in 1168 patients (96.5%) with a lower rate in Group 1 (90.4% vs. 98.1%; P < 0.001). Major complications occurred in only 9 patients (0.7%), without significant differences among the two groups. The presence of one or more abandoned leads [odds ratio (OR) 3.47; 95% confidence interval (CI) 1.07-11.19; P = 0.037] and dwelling time of the oldest lead (OR 1.01 for a month; 95% CI 1.01-1.02; P < 0.001) were associated with a higher risk of clinical failure.
Conclusion: Transvenous mechanical lead extraction is a safe procedure even in high-risk settings, such as patients with abandoned leads. The success rate was somewhat lower, especially in the presence of abandoned leads with a long implantation time.
abstract_id: PUBMED:38000893
Extraction outcomes of implantable cardioverter-defibrillator leads vary by manufacturer and model family. Aims: Transvenous lead extraction (TLE) of implantable cardioverter-defibrillator (ICD) leads is considered challenging. The structure of ICD leads varies between manufacturers and model families. The net impact of lead family on the safety and effectiveness of TLE is poorly characterized. We assessed the safety and efficacy of ICD TLE and the impact of manufacturer ICD model family on the outcomes.
Methods And Results: The study cohort included all consecutive patients with ICD who underwent TLE between 2013 and 2022 and were enrolled in the Cleveland Clinic Prospective TLE Registry. A total of 885 ICD leads (median implant duration 8 years) in 810 patients were included. Complete ICD TLE success was achieved in 97.2% of the leads (n = 860) and in 98.0% of the patients (n = 794). Major complications occurred in 22 patients (2.7%). Complete procedural success rate varied by manufacturer and lead family: Medtronic 98.9%, Abbott 95.9%, Boston Scientific 95.0%, and Biotronik 91.2% (P = 0.03); Linox family leads had the lowest rate, 89.7% (P = 0.02). Multivariable predictors of incomplete ICD lead removal included ICD lead age > 10 years and Linox family lead. Multivariable predictors of major complications included ICD lead age > 15 years and longer lead extraction time, and predictors of all-cause mortality within 30 days included lead extraction for infection, end-stage renal disease, and higher New York Heart Association functional class.
Conclusion: The rate of complete and safe ICD lead removal by TLE is extremely high but varies by manufacturer and lead family. Linox family leads and lead age > 10 years were independent predictors of incomplete lead removal.
abstract_id: PUBMED:30466846
Transvenous Extraction of Pacemaker and Defibrillator Leads and the Risk of Tricuspid Valve Regurgitation. Objectives: The aims of this study were to detect and quantify acute increases in tricuspid regurgitation (TR) severity following transvenous lead extraction (TLE) and to evaluate the associated risk factors.
Background: Although established as a safe and effective method for lead removal, TLE is sometimes complicated by TR.
Methods: In 208 consecutive patients undergoing TLE, acute changes in TR severity were assessed by transesophageal echocardiography. A significant acute TR increase (TRI) was defined as a ≥1 grade increase in TR severity and post-extraction TR severity that was moderate or greater.
Results: Overall, 266 ventricular leads (mean lead age, 11.8 ± 7.3 years) were extracted from the 208 patients. A significant acute TRI was observed in 24 (11.5%) of these patients. Acute TRI was associated with longer lead implant duration, extraction of pacemaker rather than defibrillator leads, anatomic injury to the tricuspid valve (TV), and longer post-extraction hospital stays. Multivariate analysis yielded only lead implant duration as an independent predictor of TLE-related acute TRI (odds ratio: 1.05; 95% confidence interval: 1.01 to 1.11; p = 0.046). When the patients were divided into 4 subgroups according to quartiles of lead age, there was a graded elevation in the rates of acute TRI (p trend = 0.048) and TV injury (p trend = 0.009) with lead implant duration.
Conclusions: Following TLE, TV damage and acute TRI were commonly detected by transesophageal echocardiography, particularly in patients with advanced lead age. Lead abandonment strategies, which prolong implantation duration of future leads requiring extraction, should consider the potential long-term deleterious impact on TV function.
abstract_id: PUBMED:28840589
Occurrence and extraction of implantable cardioverter-defibrillator leads with conductor externalization. Background: The increasing number of patients with implantable cardioverter-defibrillators (ICD) contributes to the rising number of patients qualifying for a transvenous lead extraction (TLE) due to infection, vascular or lead failure related indications. The purpose of this study was to perform a retrospective analysis of the occurrence of conductor externalization in TLE patients and to assess the success rate in the extraction of these leads.
Methods: TLE of 428 electrodes was performed in 259 patients between 2012 and 2014. Of these, 143 (33.4%) leads in 138 (52.9%) patients were ICD leads. The indications for TLE in ICD patients were infection in 37 patients, lead failure in 84 patients, and other indications in 17 patients. Conductor externalization was observed in 8 ICD leads (5.6%) in 8 (5.8%) patients. The mean dwelling time for externalized leads was 87.9 (55 to 132) months compared to 60.1 (3 to 246) months for the remaining 135 ICD leads (p = 0.0329). All externalized leads were successfully and completely extracted using device traction, mechanical telescopic sheaths and/or autorotational cutting sheaths. No complications of lead extraction procedures were observed in the 8 patients with externalization.
Results: Patients with lead externalization were often in a better New York Heart Association functional class (I or II) compared to those in the rest of the study group (p = 0.0212).
Conclusions: Conductor externalization is a rare finding in patients undergoing TLE. It occurs with different manufacturers and lead types. In this complication, transvenous lead extraction with mechanical extraction tools can be performed safely.
abstract_id: PUBMED:24971363
Percutaneous extraction of transvenous permanent pacemaker/defibrillator leads. Background: Widespread use of cardiovascular implantable electronic devices has inevitably increased the need for lead revision/replacement. We report our experience in percutaneous extraction of transvenous permanent pacemaker/defibrillator leads.
Methods: Thirty-six patients admitted to our centre from September 2005 through October 2012 for percutaneous lead extraction were included. Lead removal was attempted using Spectranetics traction-type system (Spectranetics Corp., Colorado, CO, USA) and VascoExtor countertraction-type system (Vascomed GmbH, Weil am Rhein, Germany).
Results: Lead extraction was attempted in 59 leads from 36 patients (27 men), mean ± SD age 61 ± 5 years, with permanent pacemaker (n = 25), defibrillator (n = 8), or cardiac resynchronisation therapy (n = 3) with a mean ± SD implant duration of 50 ± 23 months. The indications for lead removal included pocket infection (n = 23), endocarditis (n = 2), and ventricular (n = 10) and atrial lead dysfunction (n = 1). Traction device was used for 33 leads and countertraction device for 26 leads. Mean ± SD fluoroscopy time was 4 ± 2 minutes/lead for leads implanted <48 months (n = 38) and 7 ± 3 minutes/lead for leads implanted >48 months (n = 21), P = 0.03. Complete procedural success rate was 91.7% and clinical procedural success rate was 100%, while lead procedural success rate was 95%.
Conclusions: Percutaneous extraction of transvenous permanent pacemaker/defibrillator leads using dedicated removal tools is both feasible and safe.
abstract_id: PUBMED:32447372
Long-term follow-up of abandoned transvenous defibrillator leads: a nationwide cohort study. Aims: Commonly, a dysfunctional defibrillator lead is abandoned and a new lead is implanted. Long-term follow-up data on abandoned leads are sparse. We aimed to investigate the incidence and reasons for extraction of abandoned defibrillator leads in a nationwide cohort and to describe extraction procedure-related complications.
Methods And Results: All abandoned transvenous defibrillator leads were identified in the Danish Pacemaker and ICD Register from 1991 to 2019. The event-free survival of abandoned defibrillator leads was studied, and medical records of patients with interventions on abandoned defibrillator leads were audited for procedure-related data. We identified 740 abandoned defibrillator leads. Mean time from implantation to abandonment was 7.2 ± 3.8 years, with a mean patient age at abandonment of 66.5 ± 13.7 years. During a mean follow-up after abandonment of 4.4 ± 3.1 years, 65 (8.8%) abandoned defibrillator leads were extracted. The most frequent reason for extraction was infection (pocket and systemic), in 41 (63%) patients. Procedural outcome after lead extraction was clinical success in 63 (97%) patients. Minor complications occurred in 3 (5%) patients, and major complications in 1 (2%) patient. No patient died from a complication of the procedure during 30-day follow-up after extraction.
Conclusion: More than 90% of abandoned defibrillator leads do not need to be extracted during long-term follow-up. The most common indication for extraction is infection. Abandoned defibrillator leads can be extracted with high clinical success rate and low risk of major complications at high-volume centres.
abstract_id: PUBMED:23816440
Transvenous extraction of implantable cardioverter-defibrillator leads under advisory--a comparison of Riata, Sprint Fidelis, and non-recalled implantable cardioverter-defibrillator leads. Background: Comparative safety and efficacy associated with transvenous lead extraction (TLE) of recalled and non-recalled implantable cardioverter-defibrillator (ICD) leads has not been well characterized.
Objectives: To compare the indications, techniques, and procedural outcomes of recalled vs non-recalled ICD lead extraction procedures.
Methods: TLE procedures performed at our institution from June 2002 to June 2012 in which Riata, Sprint Fidelis, or non-recalled ICD leads were extracted were included in the analysis.
Results: ICD lead extraction procedures were performed in 1079 patients, including 430 patients with recalled leads (121 Riata, 308 Sprint Fidelis, and 1 Riata and Sprint Fidelis) and 649 patients with non-recalled ICD leads. A total of 2056 chronic endovascular leads were extracted, of which 1215 (59.1%) were ICD leads. Overall, there was 96.8% complete procedural success, 99.1% clinical success, and 0.9% failure, with 3.9% minor complications and 1.5% major complications. Procedural outcomes for Riata and Sprint Fidelis TLE procedures were no different. Lead implant duration was significantly less in recalled than in non-recalled ICD lead TLE procedures. Complete procedural success was higher in recalled (424 of 430 [98.6%]) than in non-recalled (621 of 649 [95.7%]; P = .007) ICD lead TLE procedures. Minor complications were lower in recalled (10 of 430 [2.3%]) than in non-recalled (32 of 649 [5.0%]; P = .030) ICD lead TLE procedures. Rates of clinical success, failure, and major complications were no different in the recalled and non-recalled ICD lead TLE procedures.
Conclusion: In our experience, recalled ICD leads were extracted with safety and efficacy comparable to that of non-recalled ICD leads.
abstract_id: PUBMED:28340095
Non-laser percutaneous extraction of pacemaker and defibrillation leads: a decade of progress. Aims: Non-laser-based methods are safe in lead extraction but in the past have been less effective than laser methods. In the past decade, new equipment has been introduced including the Evolution® Mechanical Dilator Sheath and the Evolution® RL. We sought to determine the impact of new equipment on outcome in mechanical lead extraction.
Methods And Results: We considered 288 consecutive patients (age 66 ± 18 years) who underwent transvenous lead extraction (TLE) of 522 leads in the decade to the end of 2014. Three groups were identified: Group 1 (pre-Evolution® period, 76 patients, 133 leads), Group 2 (original Evolution® period, 115 patients, 221 leads), and Group 3 (Evolution® RL period, 97 patients, 168 leads). The age of leads was significantly greater in Groups 2 and 3 (6.2 ± 4.4 and 6.1 ± 5.4 years vs. 4.7 ± 4.5, P < 0.05) as was the proportion of implantable cardioverter defibrillator leads (27.2 and 28.9 vs. 14.3%, P < 0.05). The groups were similar in the number of leads extracted per patient. Despite the increasing complexity of the systems extracted, complete extraction was achieved in a progressively greater proportion of leads (88.0% in Group 1, 95.5% in Group 2, and 97.6% in Group 3, P < 0.05), and procedure duration was similar. The proportion of leads for which femoral access was required was greater in Group 3 (11%, 18/164) compared with Group 2 (3%, 7/211), P = 0.006. The only major complications were a post-procedure subacute tamponade in Group 1 and an oesophageal injury related to transoesophageal echocardiography in Group 3.
Conclusion: With current equipment, mechanical extraction provides a good combination of efficacy and safety.
Answer: The transvenous extraction of implantable cardioverter-defibrillator (ICD) leads is generally considered more challenging than that of pacemaker (PM) leads due to differences in lead diameter and structural complexity, as well as the presence of high-voltage therapy coils, which may form firm adhesions to vascular and chamber walls (PUBMED:20730717). However, studies have shown that the extraction of ICD leads can be performed with a similar risk of major complications as PM lead extraction, although minor complications may be more frequent with ICD leads (PUBMED:20730717).
The success rate of ICD lead extraction using various surgical techniques, including mechanical extraction tools, has been reported to be similar to that achieved with laser systems (PUBMED:20730717). Additionally, the presence of abandoned leads and longer dwelling time of the oldest lead have been associated with a higher risk of clinical failure during transvenous mechanical lead extraction (TLE), but the procedure remains safe even in high-risk settings (PUBMED:32681177).
Extraction outcomes for ICD leads also vary by manufacturer and model family, with some leads, such as the Linox family, being associated with lower complete procedural success rates (PUBMED:38000893). Moreover, longer lead age has been identified as a predictor of incomplete ICD lead removal (PUBMED:38000893).
While TLE is effective, it can sometimes be complicated by tricuspid regurgitation (TR), particularly in patients with longer lead implant durations (PUBMED:30466846). This suggests that strategies for lead abandonment, which may prolong the implantation duration of future leads requiring extraction, should consider the potential impact on tricuspid valve function.
Overall, the extraction of ICD leads is associated with a similar risk for developing major complications compared to PM leads, but with a higher incidence of minor complications. The procedure can be performed safely and effectively, with success rates varying depending on the specific lead type and manufacturer (PUBMED:20730717; PUBMED:32681177; PUBMED:38000893; PUBMED:30466846). |
Instruction: Do implementation intentions help to turn good intentions into higher fruit intakes?
Abstracts:
abstract_id: PUBMED:16595275
Do implementation intentions help to turn good intentions into higher fruit intakes? Objective: The present study examined (1) whether respondents who were encouraged to make implementation intentions to eat more fruit increased their fruit intakes, as measured by three measures of fruit intake; (2) whether the effects of implementation intentions on fruit intake were dependent on positive goal intentions at baseline; and (3) the respondents' commitment to perform their implementation intentions.
Design: Dutch adults (n = 535) were randomly assigned to either receive implementation intention instructions or not. Two questionnaires were completed with a 1.5-week time interval. Respondents in the implementation intention condition were asked to form implementation intentions to eat an extra serving of fruit per day during one week.
Results: Respondents in the implementation intention group reported a high frequency of eating an extra serving of fruit per day. The implementation intention effect on frequency of extra fruit did not depend on goal intention at baseline. The more committed respondents were to carrying out their implementation intention, the more likely they were to increase their fruit intake.
Conclusion: These results provide some indications that implementation intentions could be a useful strategy to induce a short-term increase in fruit intake.
abstract_id: PUBMED:22956683
Brief report: effect of dietary restraint on fruit and vegetable intake following implementation intentions. This study explored whether the effects of implementation intentions on increasing fruit and vegetable intake were moderated by dietary restraint. In total, 208 participants were randomly allocated to control or implementation intention conditions where they were asked to write down when, where and how they would increase their fruit and vegetable intake. Implementation intentions increased fruit and vegetable intake but only in participants scoring low (not high) on rigid dietary restraint. Motives underlying fruit and vegetable consumption may be different for restrained and unrestrained eaters. Efforts to increase their intake may need to be tailored, for example, through motivational rather than situational cues.
abstract_id: PUBMED:29029681
Does Situation-Specificity Affect the Operation of Implementation Intentions? Interventions that encourage people to link critical situations with appropriate responses (i.e., "implementation intentions") show promise in increasing physical activity. The study tested whether implementation intentions designed to deal with generic situations are more effective than implementation intentions designed to respond to specific situations. One hundred thirty-three participants either: (a) formed implementation intentions using a volitional help sheet with 10 critical situations (i.e., standard volitional help sheet); (b) formed implementation intentions using a volitional help sheet with one generic situation (i.e., single situation volitional help sheet); or (c) did not form implementation intentions (i.e., control condition). Participants who formed implementation intentions reported more physical activity and greater self-regulation than those in the control condition. There were no differences between participants who were provided with one generic critical situation and those who were provided with 10 specific critical situations. Implementation intentions successfully increased self-reported physical activity irrespective of critical situation specificity. The implication is that implementation intention-based interventions are robust and require minimal tailoring.
abstract_id: PUBMED:25463964
Evidence that implementation intentions reduce drivers' speeding behavior: testing a new intervention to change driver behavior. Implementation intentions have the potential to break unwanted habits and help individuals behave in line with their goal intentions. We tested the effects of implementation intentions in the context of drivers' speeding behavior. A randomized controlled design was used. Speeding behavior, goal intentions and theoretically derived motivational pre-cursors of goal intentions were measured at both baseline and follow-up (one month later) using self-report questionnaires. Immediately following the baseline questionnaire, the experimental (intervention) group (N=117) specified implementation intentions using a volitional help sheet, which required the participants to link critical situations in which they were tempted to speed with goal-directed responses to resist the temptation. The control group (N=126) instead received general information about the risks of speeding. In support of the hypotheses, the experimental group reported exceeding the speed limit significantly less often at follow-up than did the control group. This effect was specific to 'inclined abstainers' (i.e., participants who reported speeding more than they intended to at baseline and were therefore motivated to reduce their speeding) and could not be attributed to any changes in goal intentions to speed or any other measured motivational construct. Also in line with the hypotheses, implementation intentions attenuated the past-subsequent speeding behavior relationship and augmented the goal intention - subsequent speeding behavior relationship. The findings imply that implementation intentions are effective at reducing speeding and that they do so by weakening the effect of habit, thereby helping drivers to behave in accordance with their existing goal intentions. The volitional help sheet used in this study is an effective tool for promoting implementation intentions to reduce speeding.
abstract_id: PUBMED:36438341
How can implementation intentions be used to modify gambling behavior? Problem gambling can cause significant harm, yet rates of gambling continue to increase. Many individuals have the motivation to stop gambling but are unable to transfer these positive intentions into successful behavior change. Implementation intentions, which are goal-directed plans linking cues to behavioral responses, can help bridge the gap between intention and many health behaviors. However, despite the strategy demonstrating popularity in the field of health psychology, its use in the area of gambling research has been limited. This mini review illustrates how implementation intentions can be used to facilitate change in gambling behavior. Adopting the strategy could help reduce the number of people with gambling problems.
abstract_id: PUBMED:26236214
Promoting the translation of intentions into action by implementation intentions: behavioral effects and physiological correlates. The present review addresses the physiological correlates of planning effects on behavior. Although intentions to act qualify as predictors of behavior, accumulated evidence indicates that there is a substantial gap between even strong intentions and subsequent action. One effective strategy to reduce this intention-behavior gap is the formation of implementation intentions that specify when, where, and how to act on a given goal in an if-then format ("If I encounter situation Y, then I will initiate action Z!"). It has been proposed that implementation intentions render the mental representation of the situation highly accessible and establish a strong associative link between the mental representations of the situation and the action. These process assumptions have been examined in behavioral research, and in physiological research, a field that has begun to investigate the temporal dynamics of and brain areas involved in implementation intention effects. In the present review, we first summarize studies on the cognitive processes that are central to the strategic automation of action control by implementation intentions. We then examine studies involving critical samples with impaired self-regulation. Lastly, we review studies that have applied physiological measures such as heart rate, cortisol level, and eye movement, as well as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) studies on the neural correlates of implementation intention effects. In support of the assumed processes, implementation intentions increased goal attainment in studies on cognitive processes and in critical samples, modulated brain waves related to perceptual and decision processes, and generated less activity in brain areas associated with effortful action control. In our discussion, we reflect on the status quo of physiological research on implementation intentions, methodological and conceptual issues, related research, and propose future directions.
abstract_id: PUBMED:36504401
How to support learning with multimedia instruction: Implementation intentions help even when load is high. There is ample evidence that multimedia learning is challenging, and learners often underutilize appropriate cognitive processes. Previous research has applied prompts to promote the use of helpful cognitive processing. However, prompts still require learners to regulate their learning, which may interfere with learning, especially in situations where cognitive demands are already high. As an alternative, implementation intentions (i.e. if-then plans) are expected to help regulate behaviour automatically due to their specific wording, thereby offloading demands. Accordingly, this study aimed at investigating whether implementation intentions compared with prompts improve learning performance, especially under high cognitive load. Students (N = 120) learned either in a control condition without instructional support, with prompts, or with implementation intentions. Within each condition, half of the participants studied the multimedia instruction under conditions of either high or low cognitive load, which was experimentally manipulated by instructing them to perform one of two secondary tasks. In line with our hypotheses, the results showed that under low cognitive load, both prompts and implementation intentions led to better learning than the control condition. By contrast, under high cognitive load, only implementation intentions promoted learning. Thus, implementation intentions are an efficient means to promote learning even under challenging circumstances.
abstract_id: PUBMED:36173203
The effect of forming implementation intentions on alcohol consumption: A systematic review and meta-analysis. Issues: Meta-analysis was used to estimate the effect of forming implementation intentions (i.e., if-then plans) on weekly alcohol consumption and heavy episodic drinking (HED). Sample type, mode of delivery, intervention format and timeframe were tested as moderator variables.
Approach: Cochrane, EThOS, Google Scholar, PsychArticles, PubMed and Web of Science were searched for relevant publications to 31 March 2021. Random-effects meta-analysis was used to estimate the effect size difference (d) between individuals forming versus not forming implementation intentions on weekly consumption and HED.
Key Findings: Sixteen studies were included in meta-analyses. The effect size difference for forming implementation intentions on weekly alcohol consumption was d+ = -0.14 confidence interval (CI) [-0.24; -0.03]. Moderator analyses highlighted stronger effects for: (i) community (d+ = -0.38, CI [-0.58; -0.18]) versus university (d+ = -0.04, CI [-0.13; 0.05]) samples; (ii) paper (d+ = -0.26, CI [-0.43; -0.09]) versus online (d+ = -0.04, CI [-0.14; 0.06]) mode of delivery; and (iii) volitional help sheet (d+ = -0.34, CI [-0.60; -0.07]) versus implementation intention format (d+ = -0.07, CI [-0.16; 0.02]). In addition, effects diminished over time (B = 0.02, SE = 0.01, CI [0.03; 0.01]). Forming implementation intentions had a null effect on HED, d+ = -0.01 CI [-0.10; 0.08].
Implications: Forming implementation intentions reduces weekly consumption but has no effect on HED.
Conclusion: This review identifies boundary conditions on the effectiveness of implementation intentions to reduce alcohol consumption. Future research should focus on increasing the effectiveness of online-delivered interventions and integrating implementation intention and motivational interventions.
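For readers who want to see how such pooled estimates arise, the sketch below illustrates a standard DerSimonian-Laird random-effects pooling of per-study effect sizes in Python. It is a generic illustration under assumed inputs (the function name and the example effect sizes and variances are hypothetical), not the analysis code of the cited review.

import math

def random_effects_pool(effects, variances):
    # Illustrative DerSimonian-Laird random-effects pooling (hypothetical helper,
    # not taken from the cited meta-analysis).
    k = len(effects)
    w = [1.0 / v for v in variances]                                   # fixed-effect weights
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))    # heterogeneity statistic Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                                 # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]                     # random-effects weights
    d_pooled = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return d_pooled, (d_pooled - 1.96 * se, d_pooled + 1.96 * se)

# Hypothetical per-study standardized mean differences (d) and their variances:
pooled, ci = random_effects_pool([-0.30, -0.05, -0.20, 0.02], [0.02, 0.01, 0.03, 0.02])
print(pooled, ci)

Under this model, study weights become more equal as the between-study variance grows, so very precise studies dominate the pooled estimate less than they would under a fixed-effect model.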
abstract_id: PUBMED:31258926
The effect of implementation intentions on use of dental chewing gum. This study examined the effect of implementation intentions on use of dental chewing gum. A total of 80 participants reported intentions to chew gum, read information about the benefits of dental gum, reported intentions again, and formed implementation intentions relating to gum use (experimental group) or solved word puzzles (control group). Seven days later, they reported the amount chewed. Results showed that among those motivated to chew gum, implementation intentions significantly increased the total amount chewed. Time 1 intentions were more highly correlated with behaviour than time 2 intentions. Further research is needed to establish the effectiveness of implementation intentions in dental settings.
abstract_id: PUBMED:26164120
The effectiveness of an implementation intentions intervention for fruit and vegetable consumption as moderated by self-schema status. Two experiments were conducted to determine whether self-schema status moderates the effectiveness of an implementation intentions intervention on nutrition behavior among university students not meeting relevant dietary guidelines. In Experiment 1, students were asked to eat at least 2 servings of fruit and 3 of vegetables daily for a week. Implementation intention condition participants listed what fruits and vegetables they would eat and when and where they would eat them; control condition participants did not. Among those who did not initially meet vegetable targets (n = 108), implementation intentions increased the vegetable consumption of healthy eater schematics, but not of nonschematics. There were no significant effects for fruit consumption among those initially not meeting fruit targets (n = 83). Experiment 2 replicated the moderating effect of healthy eater self-schema status in regard to the effectiveness of an implementation intentions intervention for vegetable consumption among undergraduates who were not initially eating at least 3 servings of vegetables daily (n = 62). Findings are discussed in regard to promoting healthy eating among university students, as well as the implementation intention, self-schema, and self-concordance literatures.
Answer: Yes, implementation intentions do help to turn good intentions into higher fruit intakes. A study involving Dutch adults found that those who were encouraged to make implementation intentions to eat more fruit increased their fruit intakes, as measured by three measures of fruit intake. The implementation intention effect on the frequency of extra fruit did not depend on goal intention at baseline, and the more committed respondents were to carrying out their implementation intention, the more likely they were to increase their fruit intake. This suggests that implementation intentions could be a useful strategy to induce a short-term increase in fruit intake (PUBMED:16595275). Additionally, another study demonstrated that implementation intentions increased vegetable consumption among healthy eater schematics, but not among nonschematics, indicating that self-schema status may moderate the effectiveness of implementation intentions on nutrition behavior (PUBMED:26164120). |
Instruction: Does pregnancy alter the long-term course of multiple sclerosis?
Abstracts:
abstract_id: PUBMED:24935463
Does pregnancy alter the long-term course of multiple sclerosis? Purpose: The purpose was to examine the impact of pregnancy on the rates of relapses, progression to irreversible disability, and transition to secondary progressive multiple sclerosis (SPMS) in patients with relapsing-remitting multiple sclerosis (RRMS).
Methods: We retrospectively followed two subcohorts of women with RRMS: pregnant (n = 254) and nonpregnant (n = 423). We obtained data on demographic, lifestyle, and clinical characteristics from patient records. Poisson and logistic regressions estimated the rate ratios associated with pregnancy as a function of time. Confounding was controlled by propensity-score adjustment, and postbaseline selection bias was controlled by inverse probability weighting.
Results: In the pregnant and nonpregnant subcohorts, respectively, 300 and 787 relapses, 15 and 27 transitions to SPMS, and 11 and 34 progressions to irreversible disability were documented. Adjusted rate ratios (95% confidence intervals) shortly after baseline were 0.67 (0.49; 0.92) for relapses, 0.16 (0.03; 0.79) for irreversible disability, and 1.25 (0.39; 3.96) for SPMS. The corresponding estimates at 5 and 10 years were, respectively, 1.04 (0.72; 1.52), 0.82 (0.36; 1.88), and 2.33 (1.03; 5.26) and 1.62 (0.84; 3.14), 4.14 (0.89; 19.22), and 4.33 (1.10; 16.99).
Conclusions: Pregnancy likely ameliorates the short-term course of RRMS in terms of the rates of relapses and progression to irreversible disability. Over the long term, it appears to have no material impact on these outcomes, and might in fact accelerate the rate of transition to SPMS.
abstract_id: PUBMED:27006700
Fingolimod in the treatment of relapsing-remitting multiple sclerosis: long-term experience and an update on the clinical evidence. Since the approval in 2010 of fingolimod 0.5 mg (Gilenya; Novartis Pharma AG, Basel, Switzerland) in the USA as an oral therapy for relapsing forms of multiple sclerosis, long-term clinical experience with this therapy has been increasing. This review provides a summary of the cumulative dataset from clinical trials and their extensions, plus postmarketing studies that contribute to characterizing the efficacy and safety profile of fingolimod in patients with relapsing forms of multiple sclerosis. Data from the controlled, phase III, pivotal studies [FREEDOMS (FTY720 Research Evaluating Effects of Daily Oral therapy in Multiple Sclerosis), FREEDOMS II and TRANSFORMS (Trial Assessing Injectable Interferon versus FTY720 Oral in Relapsing-Remitting Multiple Sclerosis)] in relapsing-remitting multiple sclerosis have shown that fingolimod has a robust effect on clinical and magnetic resonance imaging outcomes. The respective study extensions show that effects on annualized relapse rates are sustained with continued fingolimod treatment. Consistent, significant reductions in magnetic resonance imaging lesion counts and brain volume loss have also been sustained with long-term treatment. The safety profile of fingolimod is also examined, particularly in light of its long-term use. A summary of the adverse events of interest that are associated with fingolimod treatment and associated label guidelines are also considered, which include cardiac effects following first-dose administration, infections, lymphopenia, macular edema and pregnancy. Historic hurdles to the prescription of fingolimod and how these challenges are being met are also discussed.
abstract_id: PUBMED:36564207
Investigating the Long-term Effect of Pregnancy on the Course of Multiple Sclerosis Using Causal Inference. Background And Objectives: The question of the long-term safety of pregnancy is a major concern in patients with multiple sclerosis (MS), but its study is biased by reverse causation (women with higher disability are less likely to experience pregnancy). Using a causal inference approach, we aimed to estimate the unbiased long-term effects of pregnancy on disability and relapse risk in patients with MS and secondarily the short-term effects (during the perpartum and postpartum years) and delayed effects (occurring beyond 1 year after delivery).
Methods: We conducted an observational cohort study with data from patients with MS followed in the Observatoire Français de la Sclérose en Plaques registry between 1990 and 2020. We included female patients with MS aged 18-45 years at MS onset, clinically followed up for more than 2 years, and with ≥3 Expanded Disease Status Scale (EDSS) measurements. Outcomes were the mean EDSS score at the end of follow-up and the annual probability of relapse during follow-up. Counterfactual outcomes were predicted using the longitudinal targeted maximum likelihood estimator in the entire study population. The patients exposed to at least 1 pregnancy during their follow-up were compared with the counterfactual situation in which, contrary to what was observed, they would not have been exposed to any pregnancy. Short-term and delayed effects were analyzed from the first pregnancy of early-exposed patients (who experienced it during their first 3 years of follow-up).
Results: We included 9,100 patients, with a median follow-up duration of 7.8 years, of whom 2,125 (23.4%) patients were exposed to at least 1 pregnancy. Pregnancy had no significant long-term causal effect on the mean EDSS score at 9 years (causal mean difference [95% CI] = 0.00 [-0.16 to 0.15]) or on the annual probability of relapse (causal risk ratio [95% CI] = 0.95 [0.93-1.38]). For the 1,253 early-exposed patients, pregnancy significantly decreased the probability of relapse during the perpartum year and significantly increased it during the postpartum year, but no significant delayed effect was found on the EDSS and relapse rate.
Discussion: Using a causal inference approach, we found no evidence of significantly deleterious or beneficial long-term effects of pregnancy on disability. The beneficial effects found in other studies were probably related to a reverse causation bias.
abstract_id: PUBMED:34714518
Effects of Pregnancy and Breastfeeding on Clinical Outcomes and MRI Measurements of Women with Multiple Sclerosis: An Exploratory Real-World Cohort Study. Introduction: Pregnancy represents an important event for women with multiple sclerosis (MS) and is often accompanied by post-partum disease reactivation. To date, the influence of this reproductive phase on long-term MS outcomes is still largely unexplored. The objective of the study was to characterise a large real-world cohort of women with MS in order to evaluate the effects of pregnancy and breastfeeding on short- and long-term clinical and magnetic resonance imaging (MRI) outcomes, while exploring the relationships with MRI measurements of brain atrophy.
Methods: MS patients with and without pregnancy were recruited. Clinical relapses and MRI activity of the year before conception versus the year after delivery were evaluated. Regression models were performed to investigate the relationships between long-term MS outcomes (EDSS score and MRI brain measurements obtained by SIENAX software) and pregnancy and breastfeeding duration.
Results: Two hundred ten women with MS were enrolled; of them, 129 (61.4%) had at least one pregnancy. Of all pregnancies (n = 212), those that occurred after MS onset (90 [42.4%]) were examined to evaluate the short-term outcomes. A higher annualised relapse rate in the post-partum year versus the pre-conception year (0.54 ± 0.84 vs. 0.45 ± 0.71; p = 0.04) was observed. A regression analysis showed that clinical activity after delivery was associated with clinical activity of the year before conception (p = 0.001) as well as duration of breastfeeding (p = 0.022). Similarly, post-partum MRI activity was associated with pre-conception MRI activity (p = 0.026) and shorter breastfeeding duration (p = 0.013). Regarding long-term outcomes, having had at least one pregnancy during MS was associated with a lower EDSS score (p = 0.021), while no relationships were reported with MRI measurements. Conversely, a breastfeeding duration > 6 months was associated with lower white matter volume (p = 0.008).
Conclusions: Our study underlines the importance of considering the effects of pregnancy and breastfeeding on short- and long-term MS outcomes. In the current therapeutic landscape, pregnancy planning and treatment optimisation in the post-partum period, in particular for women who choose to breastfeed, are fundamental for the management of these biological phases so central in a woman's life.
abstract_id: PUBMED:36969978
Maternal Multiple Sclerosis and Health Outcomes Among the Children: A Systematic Review. Objective: To summarize the available literature and provide an overview of in utero exposure to maternal multiple sclerosis (MS) and the influence on offspring health outcomes.
Methods: We conducted a systematic review by searching Embase, Medline and PubMed.gov databases, and we used covidence.org to conduct a thorough sorting of the articles into three groups; 1) women with MS and the influence on birth outcomes; 2) women with MS treated with disease-modifying therapy (DMT) during pregnancy and the influence on birth outcomes; and 3) women with MS and the influence on long-term health outcomes in the children.
Results: In total, 22 cohort studies were identified. Ten studies reported on MS without DMT and compared with a control group without MS, and nine studies on women with MS and DMT prior to or during pregnancy met the criteria. We found only four studies reporting on long-term child health outcomes. One study had results belonging to more than one group.
Conclusion: The studies pointed towards an increased risk of preterm birth and small for gestational age among women with MS. In terms of women with MS treated with DMT prior to or during pregnancy, no clear conclusions could be reached. The few studies on long-term child outcomes all had different outcomes within the areas of neurodevelopment and psychiatric impairment. In this systematic review, we have highlighted the research gaps on the impact of maternal MS on offspring health.
abstract_id: PUBMED:19939856
Long-term effects of childbirth in MS. Background: The uncertainty about long-term effects of childbirth presents MS patients with dilemmas.
Methods: Based on clinical data of 330 female MS patients, the long-term effects of childbirth were analysed, using a cross-sectional study design. Four groups of patients were distinguished: (1) without children (n = 80), (2) with children born before MS onset (n = 170), (3) with children born after MS onset (n = 61) and (4) with children born before and after MS onset (n = 19). A time-to-event analysis and Cox proportional hazard regression were performed with time from onset to EDSS 6 and age at EDSS 6 as outcome measure.
Results: After a mean disease duration of 18 years, 55% had reached EDSS 6. Survival curves show a distinct shift in the time to EDSS 6 between patients with no children after MS onset and patients with children after MS onset, in favour of the latter. Cox regression analysis correcting for age at onset shows that patients with children only after MS onset had a reduced risk compared with patients without children (HR 0.61; 95% CI 0.37 to 0.99, p = 0.049). Also, patients who gave birth at any point in time had a reduced risk compared with patients without children (HR 0.66; 95% CI 0.47 to 0.95, p = 0.023). A similar pattern was seen for age at EDSS 6 (HR 0.57, p = 0.027 and HR 0.68, p = 0.032, respectively).
Conclusion: Although a bias cannot fully be excluded, these results seem to support a possible favourable long-term effect of childbirth on the course of MS.
abstract_id: PUBMED:24507525
The clinical course of multiple sclerosis. Knowledge of the epidemiology and natural history of multiple sclerosis (MS) is essential for practitioners and patients to make informed decisions about their care. This knowledge, in turn, depends upon the findings from reliable studies (i.e., those which adhere to the highest methodological standards). For a clinically variable disease such as MS, these standards include case ascertainment using a population-based design; a large-sized sample of patients, who are followed for a long time-period in order to provide adequate statistical power; a regular assessment of patients that is prospective, frequent, and standardized; and the application of rigorous statistical techniques, taking into account confounding factors such as the use of disease modifying therapy or the age at clinical onset. In this chapter we review the available epidemiologic and natural history data as it relates clinical issues such as the likelihood of incomplete recovery from a first attack; the likelihood and time course of a second attack; the likelihood and time course of disease progression and the accumulation of irreversible disability; the disease prognosis based both upon the clinical nature and presentation of the first episode and upon the initial disease course; and the impact of disease on mortality. In addition, these studies provide insight to the pathophysiologic mechanisms underlying the course and prognosis of MS. Studies of the Lyon cohort have been particularly helpful in this regard and observations from this cohort have led to the hypothesis that, in large part, the accumulation of disability in MS is an age-related process, which is independent of the clinical subtype of MS (i.e., relapsing-remitting, primary progressive, secondary progressive, or relapsing progressive). And finally, we consider briefly the impact of various life events (e.g., pregnancy, infection, vaccination, trauma, and stress) on the clinical course of disease.
abstract_id: PUBMED:20558576
Long-term expression of tissue-inhibitor of matrix metalloproteinase-1 in the murine central nervous system does not alter the morphological and behavioral phenotype but alleviates the course of experimental allergic encephalomyelitis. Tissue inhibitors of metalloproteinases (TIMPs) are a family of closely related proteins that inhibit matrix metalloproteinases (MMPs). In the central nervous system (CNS), TIMPs 2, 3, and 4 are constitutively expressed at high levels, whereas TIMP1 can be induced by various stimuli. Here, we studied the effects of constitutive expression of TIMP1 in the CNS in transgenic mice. Transgene expression started prenatally and persisted throughout lifetime at high levels. Since MMP activity has been implicated in CNS development, in proper function of the adult CNS, and in inflammatory disorders, we investigated Timp1-induced CNS alterations. Despite sufficient MMP inhibition, high expressor transgenic mice had a normal phenotype. The absence of compensatory up-regulation of MMP genes in the CNS of Timp1 transgenic mice indicates that development, learning, and memory functions do not require the entire MMP arsenal. To elucidate the effects of strong Timp1 expression in CNS inflammation, we induced experimental allergic encephalomyelitis. We observed a Timp1 dose-dependent mitigation of both experimental allergic encephalomyelitis symptoms and histological lesions in the CNS of transgenic mice. All in all, our data demonstrate that (1) long-term CNS expression of TIMP1 with complete suppression of gelatinolytic activity does not interfere with physiological brain function and (2) TIMP1 might constitute a promising candidate for long-term therapeutic treatment of inflammatory CNS diseases such as multiple sclerosis.
abstract_id: PUBMED:29112665
Long-term Risk of a Seizure Disorder After Eclampsia. Objective: To evaluate the incidence rate and relative risk of a seizure disorder after eclampsia.
Methods: We evaluated 1,565,733 births in a retrospective data linkage cohort study in Ontario, Canada, from April 1, 2002, to March 31, 2014. We included females aged 15-50 years and excluded patients with epilepsy, conditions predisposing to seizure, and those who died within 30 days of the delivery discharge date. The exposure was defined as a hypertensive disorder of pregnancy, namely 1) eclampsia, 2) preeclampsia, or 3) gestational hypertension. The referent was an unaffected pregnancy. The primary outcome was the risk of seizure disorder starting 31 days after a hospital birth discharge. Risk was expressed as an incidence rate and a hazard ratio (HR) with 95% CI. The predefined study hypothesis was that women with eclampsia would have an increased risk of future seizure disorder.
Results: There were 1,615 (0.10%) pregnancies exclusively affected by eclampsia, 17,264 (1.1%) with preeclampsia, 60,863 (3.9%) with gestational hypertension, and 1,485,991 (94.9%) unaffected. A future seizure disorder was significantly more likely after a pregnancy with eclampsia (4.58/10,000 person-years) than a pregnancy without a hypertensive disorder of pregnancy (0.72/10,000 person-years; crude HR 6.09, 95% CI 2.73-13.60). The adjusted HR was minimally attenuated from 6.09 to 5.42 (95% CI 2.42-12.12) after multivariable adjustment for confounders at the index birth as well as adjusting for traumatic brain injury, stroke, cerebral tumor, aneurysm or hemorrhage, and multiple sclerosis. The risk of seizure disorder was doubled in pregnancies affected by preeclampsia (adjusted HR 1.96, 95% CI 1.21-3.17), but not by gestational hypertension (adjusted HR 1.01, 95% CI 0.71-1.43).
Conclusion: Women with eclampsia should be reassured that, although the relative risk of a seizure disorder is higher than unaffected women, the absolute risk is extremely low (approximately one seizure/2,200 person-years).
abstract_id: PUBMED:34791952
Pregnancy in women with MS: Impact on long-term disability accrual in a nationwide Danish Cohort. Background: Pregnancy is considered to influence the disease course in women with multiple sclerosis (MS).
Objective: The aim of this study was to investigate the effect of pregnancy on long-term disability accrual in women with MS.
Methods: The Danish Multiple Sclerosis Registry (DMSR) was used to identify women diagnosed with clinically isolated syndrome or relapsing-remitting MS. Cox models with pregnancy as a time-dependent exposure and propensity score (PS) models were used to evaluate time to reach confirmed Expanded Disability Status Scale (EDSS) score of 4 and 6.
Results: A total of 425 women became parous and 840 remained nulliparous. When including pregnancy as a time-dependent exposure, a non-significant association with time to reach EDSS 4 (hazard ratio (HR) 0.86, 95% confidence interval (CI) 0.61-1.20) and EDSS 6 (HR 0.70, 95% CI 0.40-1.20) was found. Correspondingly, the PS model showed no association with pregnancy on time to reach EDSS 4 (HR 0.85, 95% CI 0.56-1.28).
Conclusion: This study concludes that pregnancy does not affect long-term disability accumulation.
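Several of these cohorts handle pregnancy as a time-dependent exposure in Cox regression (e.g., PUBMED:34791952), so that follow-up time before the first pregnancy counts as unexposed and time after it as exposed. As a rough illustration of that idea only, the sketch below fits such a model with the lifelines library on a small, entirely hypothetical long-format table; it is not the registry analysis itself, and the column names and data are invented.

import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical long-format data: one row per follow-up interval per patient.
# 'pregnant' switches from 0 to 1 at the first pregnancy, so it enters the model
# as a time-dependent exposure rather than a baseline covariate. Toy data, for structure only.
long_df = pd.DataFrame({
    "id":       [1, 1, 2, 3, 3, 4],
    "start":    [0.0, 3.0, 0.0, 0.0, 4.0, 0.0],
    "stop":     [3.0, 7.0, 5.0, 4.0, 10.0, 9.0],
    "pregnant": [0, 1, 0, 0, 1, 0],
    "event":    [0, 1, 1, 0, 0, 0],   # 1 = confirmed EDSS 4 reached in this interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # the hazard ratio for 'pregnant' estimates the exposure effect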
Answer: The impact of pregnancy on the long-term course of multiple sclerosis (MS) is a subject of ongoing research, and the evidence appears to be mixed. Some studies suggest that pregnancy may have a short-term ameliorative effect on the course of relapsing-remitting multiple sclerosis (RRMS), with reduced rates of relapses and progression to irreversible disability shortly after pregnancy. However, over the long term, pregnancy seems to have no significant impact on these outcomes and might even accelerate the rate of transition to secondary progressive multiple sclerosis (SPMS) (PUBMED:24935463).
A study using a causal inference approach found no evidence of significantly deleterious or beneficial long-term effects of pregnancy on disability in MS patients. It suggested that the beneficial effects found in other studies could be related to reverse causation bias, where women with higher disability are less likely to become pregnant (PUBMED:36564207).
Another study found that having at least one pregnancy during the course of MS was associated with a lower Expanded Disability Status Scale (EDSS) score, suggesting a potential favorable long-term effect of childbirth on the course of MS (PUBMED:19939856). However, a Danish cohort study concluded that pregnancy does not affect long-term disability accumulation in women with MS (PUBMED:34791952).
An exploratory real-world cohort study observed a higher annualized relapse rate in the post-partum year compared to the pre-conception year, indicating a post-partum disease reactivation. However, long-term outcomes showed that having had at least one pregnancy during MS was associated with a lower EDSS score, although no relationships were reported with MRI measurements of brain atrophy (PUBMED:34714518).
In summary, while pregnancy may have some short-term effects on the course of MS, the long-term impact remains unclear, with studies showing varying results. Some research indicates no significant long-term effects, while others suggest a potential favorable influence on disability progression. Further research is needed to fully understand the long-term implications of pregnancy on MS. |
Instruction: Atrial fibrillation and the risk of ischemic stroke: does it still matter in patients with a CHA2DS2-VASc score of 0 or 1?
Abstracts:
abstract_id: PUBMED:25039724
Refinement of ischemic stroke risk in patients with atrial fibrillation and CHA2DS2-VASc score of 1. Background: Patients with atrial fibrillation (AF) with a CHA2DS2-VASc score of 1 (where CHA2DS2-VASc is CHA2DS2-Vascular disease, Age 65-74 years, Sex category) are recommended to receive antithrombotic therapy. Nonetheless, it remains unclear whether the individual components that constitute the CHA2DS2-VASc score contribute equally to the ischemic stroke risk, particularly in patients with a CHA2DS2-VASc score of 1. The objective was to describe and compare the ischemic stroke risk associated with the six individual components constituting the CHA2DS2-VASc score among AF patients with a CHA2DS2-VASc score of 1.
Methods And Results: We studied all patients with a CHA2DS2-VASc score of 1 and no antithrombotic therapy from our cohort of 9,727 Chinese AF patients. A total of 548 patients were studied: 190 patients with a CHA2DS2-VASc score of 0 and 358 patients with a CHA2DS2-VASc score of 1. Of those with a baseline CHA2DS2-VASc score of 1, 51.1% were aged 65-75; 29.3% were female; 12.0% had hypertension; 4.5% had heart failure; 2.5% had diabetes; and 0.6% had vascular disease. After 1,758 patient-years of follow-up, the annual incidence of stroke was 2.4% and 6.6% for patients with a CHA2DS2-VASc score of 0 and 1, respectively. Compared with patients with a CHA2DS2-VASc score of 0, patients with hypertension leading to a CHA2DS2-VASc score of 1 were at the highest risk of stroke (hazard ratio [HR]: 9.8, 95% confidence interval [CI]: 2.7-35.6), followed by patients aged 65-74 (HR: 3.9, 95% CI: 2.3-6.6) and female patients (HR: 2.3, 95% CI: 1.1-4.8). Heart failure, diabetes mellitus, and vascular disease were not associated with stroke.
Conclusion: In AF patients with a CHA2DS2-VASc score of 1, hypertension confers the highest risk for stroke among the risk factors comprising the score. A more aggressive thromboprophylaxis strategy may be justified among AF patients with a CHA2DS2-VASc score of 1 due to hypertension.
abstract_id: PUBMED:22871677
Atrial fibrillation and the risk of ischemic stroke: does it still matter in patients with a CHA2DS2-VASc score of 0 or 1? Background And Purpose: Atrial fibrillation (AF) is an independent risk factor for stroke. Recent studies have demonstrated that the CHA(2)DS(2)-VASc scheme is useful for selecting patients who are truly at low risk. The goal of the present study was to compare the risk of ischemic stroke among AF patients with a CHA(2)DS(2)-VASc score of 0 (male) or 1 (female) with those without AF.
Methods: The study enrolled 509 males (CHA(2)DS(2)-VASc score=0) and 320 females (CHA(2)DS(2)-VASc score=1) with AF who did not receive any antithrombotic therapy. Patients were selected from the National Health Insurance Research Database in Taiwan. For each study patient, 10 age-matched and sex-matched subjects without AF and without any comorbidity from the CHA(2)DS(2)-VASc scheme were selected as controls. The clinical end point was the occurrence of ischemic stroke.
Results: During a follow-up of 57.4 ± 35.7 months, 128 patients (1.4%) experienced ischemic stroke. The event rate did not differ between groups with and without AF for male patients (1.6% vs 1.6%; P=0.920). In contrast, AF was a significant risk factor for ischemic stroke among females (hazard ratio, 7.77), with event rates of 4.4% and 0.7% for female patients with and without AF (P<0.001).
Conclusions: AF males with a CHA(2)DS(2)-VASc score of 0 were at true low risk for stroke, which was similar to that of non-AF patients. However, AF females with a score of 1 were still at higher risk for ischemic events than non-AF patients.
abstract_id: PUBMED:27860070
CHA2DS2VASc score predicts unsuccessful electrical cardioversion in patients with persistent atrial fibrillation. Background: Atrial fibrillation (AF) is the most common arrhythmia, occurring in 2% of the population. It is known that AF increases morbidity and limits quality of life. The CHA2DS2VASc score (congestive heart failure/left ventricular dysfunction, hypertension, age ≥75 (doubled), diabetes, stroke (doubled), vascular disease, age 65-74 and sex category (female)) is widely used to assess the risk of thrombotic complications. The CHA2DS2VASc score has not previously been used to predict the effectiveness of electrical cardioversion.
Aim: To assess the value of the CHA2DS2VASc score in predicting unsuccessful electrical cardioversion.
Methods: We analysed 258 consecutive patients with persistent AF who underwent electrical cardioversion between January 2012 and April 2016 in a Cardiology University Centre in Poland.
Results: Out of 3500 hospitalised patients with AF, 258 (mean age 64 ± 11 years, 64% men) underwent electrical cardioversion. The CHA2DS2VASc score in the analysed population (258 patients) was 2.5 ± 1.7 (range 0-8), and the HAS-BLED score (hypertension, abnormal liver or renal function, stroke, bleeding, labile international normalised ratio, elderly, drugs or alcohol) was 1 ± 0.9 (range 0-4). Electrical cardioversion was unsuccessful in 12%. Factors associated with unsuccessful cardioversion were age (P = 0.0005), history of ischaemic stroke (P = 0.04), male gender (P = 0.01) and CHA2DS2VASc score (P = 0.002). The CHA2DS2VASc score in patients who had unsuccessful cardioversion was higher than in patients who had successful cardioversion (3.5 vs. 2.4; P = 0.001). In the logistic regression model, if the CHA2DS2VASc score increases by 1, the odds of unsuccessful cardioversion increase by 39% (odds ratio (OR) 1.39; confidence interval (CI): 1.12-1.71; P = 0.002). The odds of unsuccessful cardioversion are three times higher in patients with a CHA2DS2VASc score ≥ 2 than in patients with a CHA2DS2VASc score of 0 or 1 (OR 3.06; CI: 1.03-9.09; P = 0.044).
Conclusion: The CHA2DS2VASc score, routinely used in thromboembolic risk assessment, may be a simple, easy, and reliable scoring system for predicting unsuccessful electrical cardioversion.
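To make the scoring rule spelled out in the abstracts above concrete, the short Python sketch below computes a CHA2DS2-VASc total from its eight components, with age ≥75 years and prior stroke/TIA counting double. It is an illustrative calculator only; the function and argument names are hypothetical and the snippet does not come from any of the cited studies.

def cha2ds2_vasc(age, female, chf, hypertension, diabetes, stroke_or_tia, vascular_disease):
    # Illustrative CHA2DS2-VASc calculator (hypothetical helper, not from the cited papers).
    score = 0
    score += 1 if chf else 0               # C:  congestive heart failure / LV dysfunction
    score += 1 if hypertension else 0      # H:  hypertension
    score += 2 if age >= 75 else 0         # A2: age >= 75 years (doubled)
    score += 1 if diabetes else 0          # D:  diabetes mellitus
    score += 2 if stroke_or_tia else 0     # S2: prior stroke / TIA / thromboembolism (doubled)
    score += 1 if vascular_disease else 0  # V:  vascular disease
    score += 1 if 65 <= age <= 74 else 0   # A:  age 65-74 years
    score += 1 if female else 0            # Sc: sex category (female)
    return score

# Example: a 68-year-old woman with hypertension only scores 3 (age band + sex + hypertension).
print(cha2ds2_vasc(age=68, female=True, chf=False, hypertension=True,
                   diabetes=False, stroke_or_tia=False, vascular_disease=False))

Read this way, the low-risk groups discussed in this section (a score of 0 in men or 1 in women) are simply patients whose only point, if any, comes from the sex category.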
abstract_id: PUBMED:38423377
Both HFpEF and HFmrEF Should be Included in Calculating CHA2DS2-VASc score: a Taiwanese Longitudinal Cohort. Background: Congestive heart failure (CHF) as a risk factor for stroke in AF patients has mainly referred to patients with left ventricular systolic dysfunction (HFrEF). Whether this should also include patients with preserved ejection fraction (HFpEF) is debatable.
Objective: To investigate the variation in stroke risk between atrial fibrillation (AF) patients with HFpEF, HFmrEF, and HFrEF for enhancing risk assessment and subsequent management strategies.
Methods: In a longitudinal study utilizing the National Taiwan University Hospital Integrated Medical Database (iMED), 8358 patients with AF were followed for 10 years (mean follow-up 3.76 years). The study evaluated the risk of ischemic stroke in patients with differing ejection fractions (EF) and CHA2DS2-VASc score, further using Cox models adjusted for risk factors of AF-related stroke.
Results: HFpEF and HFmrEF patients had a higher mean CHA2DS2-VASc score compared to HFrEF patients (4.30±1.729 vs. 4.15±1.736 vs 3.73±1.712, p<0.001) and higher risk of stroke during follow-up (HR 1.40 (1.161-1.688), p<0.001 for HFmrEF; HR 1.184 (1.075-1.303), p=0.001) after multivariable adjustment. In patients with lower CHA2DS2-VASc score (0-4), presence of any type of CHF increased ischemic stroke risk (HFrEF HR 1.568 (1.189-2.068), p=0.001; HFmrEF HR 1.890 (1.372-2.603), p<0.001; HFpEF HR 1.800 (1.526-2.123), p<0.001).
Conclusion: After multivariate adjustment, HFpEF and HFmrEF showed a similar risk of stroke in AF patients. Therefore, it is important to extend the criteria for "C" in the CHA2DS2-VASc score to include HFpEF patients. In patients with less concomitant stroke risk factors, the presence of any subtype of CHF increases risk for ischemic stroke.
abstract_id: PUBMED:29436191
CHA₂DS₂-VASc Score in the Prediction of Ischemic Stroke in Patients after Radiofrequency Catheter Ablation of Typical Atrial Flutter. Purpose: Despite undergoing successful catheter ablation of typical atrial flutter (AFL), patients remain at increased risk for ischemic stroke. However, data on risk prediction tools for the development of stroke after AFL ablation are lacking. This study investigates whether CHA₂DS₂-VASc score is useful for predicting ischemic stroke after successful ablation of typical AFL.
Materials And Methods: A total of 293 patients (236 men, mean age 56.1±13.5 years) who underwent successful radiofrequency catheter ablation for typical AFL were included in this study. The clinical end point was occurrence of ischemic stroke during follow-up after AFL ablation.
Results: During the follow-up period (60.8±45.9 months), ischemic stroke occurred in 18 (6%) patients at a median of 34 months (interquartile range, 13-65 months). CHA₂DS₂-VASc score [hazard ratio 2.104; 95% confidence interval (CI), 1.624-2.726; p<0.001] was an independent predictor for the occurrence of stroke after AFL ablation. The area under the receiver operating characteristic curve for CHA₂DS₂-VASc score was 0.798 (95% CI, 0.691-0.904). The CHA₂DS₂-VASc score could be used to stratify patients into two groups with different incidences of ischemic stroke (1.6% vs. 14.4%, p<0.001) at a cutoff value of 2.
Conclusion: CHA₂DS₂-VASc score is useful in a prediction model for the risk of stroke after catheter ablation of typical AFL.
abstract_id: PUBMED:29064076
CHA2DS2-VASc score predicts short- and long-term outcomes in patients with acute ischemic stroke treated with intravenous thrombolysis. The CHA2DS2-VASc score is a validated tool to assess the thromboembolic risk in patients with atrial fibrillation. The pre-stroke CHA2DS2-VASc score may predict outcome in patients with acute ischemic stroke (AIS) without atrial fibrillation. The aim of this study was to investigate whether the pre-stroke CHA2DS2-VASc score is able to predict short- and long-term outcomes in AIS patients treated with intravenous thrombolysis (IVT). The study group consisted of 256 consecutive patients admitted to the Udine University Hospital with AIS who underwent IVT between January 2015 and March 2017. The pre-stroke CHA2DS2-VASc score for each patient was calculated from the collected baseline data. Patients were classified into three groups according to their pre-stroke CHA2DS2-VASc score: a score of 0 or 1, a score of 2 or 3, and a score above 3. Primary outcome measures were the rate of favorable outcome at 90 days and at 1 year, and mortality at 90 days and at 1 year. Data on functional outcome and mortality 1 year after stroke were collected in 165 patients (65% of the entire sample). Favorable outcome was defined as a modified Rankin Scale score ≤ 2. Compared with the CHA2DS2-VASc score 0-1 group, patients with higher CHA2DS2-VASc scores had a worse outcome and higher mortality 3 months and 1 year after stroke. The diagnostic performance of the CHA2DS2-VASc score, as judged by the AUC-ROC, was 0.70 (95% CI, 0.64-0.76; p < 0.001) for favorable outcome at 90 days, 0.78 (95% CI, 0.71-0.85; p < 0.001) for favorable outcome at 1 year, 0.71 (95% CI 0.61-0.79) for mortality at 90 days, and 0.73 (95% CI 0.64-0.80; p < 0.001) for mortality at 1 year. The pre-stroke CHA2DS2-VASc score represents a good predictor of short- and long-term outcomes in AIS patients treated with IVT.
abstract_id: PUBMED:27702803
Is an Oral Anticoagulant Necessary for Young Atrial Fibrillation Patients With a CHA2DS2-VASc Score of 1 (Men) or 2 (Women)? Background: Recent studies demonstrated that oral anticoagulants (OACs) should be considered for patients with atrial fibrillation and 1 risk factor in addition to sex. Because age is an important determinant of ischemic stroke, the strategy for stroke prevention may be different for these patients in different age strata. The aim of this study was to investigate whether OACs should be considered for patients aged 20 to 49 years with atrial fibrillation and a CHA2DS2-VASc score of 1 (men) or 2 (women).
Methods And Results: Using the Taiwan National Health Insurance Research Database, 7374 male patients with atrial fibrillation and a CHA2DS2-VASc score of 1 and 4461 female patients with atrial fibrillation and a CHA2DS2-VASc score of 2 and all without antithrombotic therapies were identified and stratified into 3 groups by age. The threshold for the initiation of OACs for stroke prevention was set at a stroke rate of 1.7% per year for warfarin and 0.9% per year for non-vitamin K antagonist OACs. Among male patients aged 20 to 49 years with a CHA2DS2-VASc score of 1, the risk of ischemic stroke was 1.30% per year and ranged from 0.94% per year for those with hypertension to 1.71% for those with congestive heart failure. Among female patients aged 20 to 49 years with a CHA2DS2-VASc score of 2, the risk of ischemic stroke was 1.40% per year and ranged from 1.11% per year for those with hypertension to 1.67% for those with congestive heart failure.
Conclusions: For atrial fibrillation patients aged 20 to 49 years with 1 risk factor in addition to sex, non-vitamin K antagonist OACs should be considered for stroke prevention to minimize the risk of a potentially fatal or disabling event.
abstract_id: PUBMED:35211517
Ischemic Stroke in Non-Gender-Related CHA2DS2-VA Score 0~1 Is Associated With H2FPEF Score Among the Patients With Atrial Fibrillation. Background: Ischemic strokes (ISs) can appear even in non-gender-related CHA2DS2-VA scores 0~1 patients with atrial fibrillation (AF). We explored the determinants associated with IS development among the patients with non-gender-related CHA2DS2-VA score 0~1 AF.
Methods And Results: In this single-center retrospective registry of AF catheter ablation (AFCA) data, we included 1,353 patients with AF (24.7% female, median age 56 years, and paroxysmal AF 72.6%) who had a non-gender-related CHA2DS2-VA score of 0~1, normal left ventricular (LV) systolic function, and an available H2FPEF score. Among those patients, 113 experienced IS despite a non-gender-related CHA2DS2-VA score of 0~1. All included patients underwent AFCA, and we evaluated the factors associated with IS in non-gender-related CHA2DS2-VA score 0~1 AF. Patients with ISs in this study had a lower estimated glomerular filtration rate (eGFR) (p < 0.001) and LV ejection fraction (LVEF; p = 0.017), larger LA diameter (p < 0.001), reduced LA appendage peak velocity (p < 0.001), and a higher baseline H2FPEF score (p = 0.018) relative to those without ISs. Age [odds ratio (OR) 1.11 (1.07-1.17), p < 0.001, Model 1] and H2FPEF score as a continuous variable [OR 1.31 (1.03-1.67), p = 0.028, Model 2] were independently associated with ISs by multivariate analysis. Moreover, the eGFR was independently associated with IS at low CHA2DS2-VA scores in both Models 1 and 2. AF recurrence was significantly higher in patients with IS (log-rank p < 0.001) but not in those with high H2FPEF scores (log-rank p = 0.079).
Conclusions: Among the patients with normal LVEF and non-gender-related CHA2DS2-VA score 0~1 AF, the high H2FPEF score, and increasing age were independently associated with IS development (ClinicalTrials.gov Identifier: NCT02138695).
abstract_id: PUBMED:32294240
Association of CHA2DS2-VASc Score with Stroke, Thromboembolism, and Death in Hip Fracture Patients. Objectives: Patients undergoing hip fracture surgery have a 10 times increased risk of stroke compared with the general population. We aimed to evaluate the association between the CHA2DS2-VASc (congestive heart failure, hypertension, age ≥75 years, diabetes, previous stroke/TIA [transient ischemic attack]/systemic embolism (2 points), vascular disease, age 65-74 years, and female sex) score and the risk of stroke, thromboembolism, and all-cause mortality in patients with hip fracture with or without atrial fibrillation (AF).
Design: Nationwide prospective cohort study.
Setting: Danish hospitals.
Participants: Subjects were all incident hip fracture patients in Denmark age 65 years and older with surgical repair procedures between 2004 and 2016 (n = 78,096). Participants were identified using the Danish Multidisciplinary Hip Fracture Registry.
Measurements: We calculated incidence rates, cumulative incidences, and hazard ratios (HRs) with 95% confidence intervals (CIs) by CHA2 DS2 -VASc score, stratified on AF history.
Results: The cumulative incidence of ischemic stroke 1 year after hip fracture increased with ascending CHA2 DS2 -VASc score, and it was 1.9% for patients with a score of 1 and 8.6% for patients with a score above 5 in the AF group. Corresponding incidences in the non-AF group were 1.6% and 7.6%. Compared with a CHA2 DS2 -VASc score of 1, adjusted HRs were 5.53 (95% CI = 1.37-22.24) among AF patients and 4.91 (95% CI = 3.40-7.10) among non-AF patients with a score above 5. A dose-response-like association was observed for all cardiovascular outcomes. All-cause mortality risks and HRs were substantially higher for all CHA2 DS2 -VASc scores above 1 in both the AF group and the non-AF group.
Conclusion: Among patients with hip fracture, a higher CHA2DS2-VASc score was associated with increased risk of stroke, thromboembolism, and death. This finding applied both to patients with and without AF. Patients with high CHA2DS2-VASc scores had similar absolute risks for cardiovascular outcomes, irrespective of AF. J Am Geriatr Soc 68:1698-1705, 2020.
abstract_id: PUBMED:30304802
The CHA₂DS₂-VASc Score Predicts Major Bleeding in Non-Valvular Atrial Fibrillation Patients Who Take Oral Anticoagulants. Background: Patients with atrial fibrillation (AF) are at a substantial risk of ischemic stroke. The CHA₂DS₂-VASc score predicts the risk of thromboembolism, but its role in predicting major bleeding in patients taking oral anticoagulants is unclear.
Methods: We used the National Health Insurance Research Database (NHIRD) of Taiwan to identify patients with AF from 2010 to 2016. They were divided into four groups according to the oral anticoagulants. The outcomes were ischemic stroke/systemic thromboembolism, and major bleeding.
Results: A total of 279,776 patients were identified. Ischemic stroke or systemic embolism events were observed in 1.73%, 3.62%, 4.36%, and 5.02% of the patients in the apixaban, rivaroxaban, dabigatran, and warfarin groups, respectively. Major bleeding was recorded in 1.18%, 2.66%, 3.23%, and 4.70% of the patients in the apixaban, rivaroxaban, dabigatran, and warfarin groups, respectively. The highest rates for both ischemic stroke and bleeding events occurred in the patients with a CHA₂DS₂-VASc score of five or more.
Conclusion: Non-valvular AF patients with high CHA₂DS₂-VASc scores are susceptible to both systemic thromboembolism and major bleeding. The trend was consistently observed in patients who took non-vitamin K oral anticoagulants (NOACs) or warfarin. NOACs might be potentially more effective in reducing overall events.
Answer: Atrial fibrillation (AF) is a well-known independent risk factor for ischemic stroke, and the CHA2DS2-VASc score is used to assess the risk of thromboembolic events in patients with AF. The relevance of AF in patients with a CHA2DS2-VASc score of 0 or 1 has been investigated in several studies.
For male patients with AF and a CHA2DS2-VASc score of 0, the risk of ischemic stroke was found to be similar to that of non-AF patients, suggesting that these male patients are at a true low risk for stroke (PUBMED:22871677). However, for female patients with AF and a CHA2DS2-VASc score of 1, the risk of ischemic stroke was significantly higher compared to non-AF patients, indicating that AF is still a significant risk factor for ischemic stroke in these females (PUBMED:22871677).
In another study, it was found that among AF patients with a CHA2DS2-VASc score of 1, hypertension conferred the highest risk for stroke among other risk factors comprising the score (PUBMED:25039724). This suggests that even within a CHA2DS2-VASc score of 1, the individual components of the score do not contribute equally to the risk of ischemic stroke, and a more aggressive thromboprophylaxis strategy may be justified for AF patients with hypertension.
Furthermore, ischemic strokes can occur even in patients with non-gender-related CHA2DS2-VA scores of 0~1, and factors such as the H2FPEF score and age have been associated with the development of ischemic stroke in these patients (PUBMED:35211517).
In conclusion, the risk of ischemic stroke in patients with AF and a CHA2DS2-VASc score of 0 or 1 is not uniform and depends on gender and the presence of specific risk factors such as hypertension. While male patients with a score of 0 may be at low risk, female patients with a score of 1 and patients with certain risk factors may still be at a significant risk for ischemic stroke, warranting consideration for antithrombotic therapy. |
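For readers less familiar with the score discussed above, the minimal Python sketch below tallies the CHA2DS2-VASc components as listed in PUBMED:32294240. The 2-point weighting for prior stroke/TIA/systemic embolism is stated in that abstract; assigning 2 points to age ≥75 years and 1 point to each remaining component follows the conventional definition of the score and is an assumption not spelled out in these abstracts. The "non-gender-related CHA2DS2-VA" variant used in the first abstract simply omits the female-sex point. This is an illustrative aid, not clinical software.

def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_tia_te, vascular_disease):
    """Tally the CHA2DS2-VASc score from its individual components.

    Point weights follow the conventional definition: 2 points each for
    age >= 75 and prior stroke/TIA/thromboembolism, 1 point for every
    other component (an assumption where the abstracts do not spell it out).
    """
    score = 0
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 2 if prior_stroke_tia_te else 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score


# Example: a 68-year-old woman with hypertension and no other risk factors
# scores 1 (age 65-74) + 1 (hypertension) + 1 (female sex) = 3.
print(cha2ds2_vasc(age=68, female=True, chf=False, hypertension=True,
                   diabetes=False, prior_stroke_tia_te=False,
                   vascular_disease=False))  # -> 3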
Instruction: Is intrauterine exchange transfusion a safe procedure for management of fetal anaemia?
Abstracts:
abstract_id: PUBMED:24965985
Is intrauterine exchange transfusion a safe procedure for management of fetal anaemia? Objective: To study modalities and complications of intrauterine exchange transfusion (IUET) for the management of severe fetal anaemia.
Study Design: Retrospective study of all IUET procedures performed between January 1999 and January 2012 at a regional centre. Characteristics of each procedure were studied to identify risk factors for complications. Survival rates according to the different aetiologies of anaemia were evaluated.
Results: In total, 225 IUET procedures were performed in 96 fetuses. Major indications were feto-maternal erythrocyte alloimmunization (n=80/96, 83.3%) and parvovirus B19 infection (n=13/96, 13.5%). Twenty-six percent of the fetuses (25/96) had hydrops fetalis before the first IUET. Intrauterine fetal death occurred after 2.7% (6/225) of procedures, premature rupture of the membranes occurred after 0.9% (2/225) of procedures, and emergency caesarean section was required after 3.6% (8/225) of procedures. Fetal bradycardia [odds ratio (OR) 37, 95% confidence interval (CI) 8.3-170; p<0.01] and gestational age up to 32 weeks [OR 3.67; 95% CI 1.07-12.58; p=0.038] were significantly associated with complications after IUET. Complications occurred in 17.7% of pregnancies (17/96) and 7.5% of IUET procedures (17/225). The overall survival rate in the study cohort was 87.5% (84/96): 90% (72/80) in the alloimmunization group and 76.9% (10/13) in the parvovirus-infected group (NS).
Conclusion: IUET has a higher complication rate than simple intrauterine transfusion, and should be performed by well-trained specialists.
abstract_id: PUBMED:28277805
Intrauterine transfusion and non-invasive treatment options for hemolytic disease of the fetus and newborn - review on current management and outcome. Introduction: Hemolytic disease of the fetus and newborn (HDFN) remains a serious pregnancy complication which can lead to severe fetal anemia, hydrops and perinatal death. Areas covered: This review focusses on the current prenatal management, treatment with intrauterine transfusion (IUT) and promising non-invasive treatment options for HDFN. Expert commentary: IUTs are the cornerstone in prenatal management of HDFN and have significantly improved perinatal outcome in the past decades. IUT is now a relatively safe procedure; however, the risk of complications is still high when performed early in the second trimester. Non-invasive management using intravenous immunoglobulin may be a safe alternative and requires further investigation.
abstract_id: PUBMED:2506755
Two hundred intrauterine exchange transfusions were performed under local anesthesia in 107 cases of blood incompatibilities (60 fetuses with severe anemia and 47 with hydrops). Under sonographic guidance, depending on fetal and placental position, an optimal puncturing site was selected along the umbilical vein: placental insertion, fetal insertion, or fetal intraabdominal segment. Tests were immediately performed to confirm fetal origin of blood obtained and estimate hemoglobin level. Blood used for exchange transfusion was compatible with maternal blood and had a hematocrit value of 75%. Exchange transfusion was continued until a hemoglobin level of 16 gm/dl was reached. This procedure was first associated with intraperitoneal transfusions and was subsequently used independently once a month to maintain an adequate hemoglobin level. Of the 47 fetuses with hydrops, antenatal regression of this sign was observed in 33 cases (70.2%). The overall outcome of the 107 fetuses after exchanges was 84 living neonates (78.5%), 15 deaths in utero, and eight neonatal deaths. The survival rate was 91.6% for fetuses without hydrops and 61.7% for those with hydrops. The advantage of exchange transfusion appears to be rapid and efficient correction of anemia with elimination of incompatible fetal red blood cells.
abstract_id: PUBMED:22356474
Early procedure-related complications of fetal blood sampling and intrauterine transfusion for fetal anemia. Objective: To review the procedure-related complication rates following fetal blood sampling and intrauterine red cell transfusion for anaemic fetuses at a single tertiary center.
Design: A retrospective study of 114 intrauterine transfusions.
Setting: A single tertiary referral fetal medicine center at Queen Charlotte's and Chelsea Hospital, Imperial College London, London, UK.
Sample: All cases (114) undergoing fetal blood sampling and intrauterine transfusion between January 2003 and May 2010.
Methods: Early procedure-related complications (severe fetal bradycardia requiring either abandonment of the procedure or emergency delivery, fetal death, preterm labor or rupture of membranes) were investigated by review of computerized records and individual chart review.
Main Outcome Measures: Live birth rate, perinatal mortality, procedure-related fetal bradycardia, preterm labor and procedure-related spontaneous rupture of membranes.
Results: The majority of cases (77.8%) were due to red cell alloimmunization, with anti-D being the commonest cause. The live birth rate was 93.5%, with a procedure-related fetal death rate of 0.9%. The preterm labor rate (<37 weeks' gestation) was 3.5%, occurring only in patients undergoing multiple (>3) fetal transfusions. Complications in this series did not appear to increase with earlier gestational age at the time of the first transfusion.
Conclusions: Despite a reduction in the number of cases requiring intrauterine therapy for fetal anemia, contemporary outcomes appear to be good if not improving. It is important that the experience required to manage these cases should be concentrated in fewer centers to maximize good perinatal outcome.
abstract_id: PUBMED:10204205
Intrauterine management of fetal parvovirus B19 infection. Objectives: The aim of our study was to determine the outcome of pregnancies after intrauterine management of fetal parvovirus B19 infection.
Design: Retrospective study.
Subjects: A total of 37 cases of maternofetal parvovirus B19 infection, 35 of which were associated with hydrops fetalis, were referred to our tertiary level center between 1989 and 1996. With regard to fetal hydrops, no apparent cause other than parvovirus B19 infection was found in any patient.
Methods: In all patients, cordocentesis was performed to assess the degree of fetal anemia. When anemia was present, cordocentesis was followed by intrauterine transfusion with packed red cells into the umbilical vein. Further management depended on the degree of fetal anemia and gestational age and included follow-up fetal blood sampling/transfusion as well as ultrasound examinations as deemed appropriate.
Results: Packed red cell transfusion was performed in 30 patients with significant fetal anemia (Z-score 1.6-7.8 below the mean for gestational age). The fetal hemoglobin values ranged from 2.1 to 9.6 g/dl. Serum levels of platelets in the transfusion group were 9-228 × 10⁹/l with Z-scores in the range of < 1 to 3.8 below the mean. During treatment and follow-up, there were five intrauterine deaths (13.5%), one neonatal death (2.7%) and 31 live births (83.8%).
Conclusions: Fetal parvovirus infection can lead to marked anemia and hydrops formation. Cordocentesis allows precise assessment of fetal anemia which can then be corrected by intravenous transfusion. Under this regimen, the outcome proved favorable in the majority of fetuses, even those that were severely anemic.
abstract_id: PUBMED:17706475
Intrauterine fetal transfusions in the management of fetal anemia and fetal thrombocytopenia. During the past 40 years, rhesus alloimmunization has gone from being one of the major causes of perinatal mortality to an almost eradicated disease. The unraveling of the pathophysiology, the development of reliable diagnostic tools, a very effective prophylaxis program, and for those (nowadays rare) cases slipping through the prevention system the availability of treatment by intrauterine blood transfusions, together constitute one of the great triumphs in modern medicine. Although Rh-D alloimmunization remains the most common indication for fetal blood transfusion therapy, an increasing percentage of these procedures is used to treat other causes of fetal anemia such as Kell alloimmunization and parvovirus B19 infection. Apart from transfusing blood, the same technique can be used to transfuse platelets to thrombocytopenic fetuses. This chapter describes the technique of fetal transfusion, and reviews the current management of fetal anemia and fetal thrombocytopenia.
abstract_id: PUBMED:26493554
Perinatal survival and procedure-related complications after intrauterine transfusion for red cell alloimmunization. Objectives: To study the perinatal survival and procedure-related (PR) complications after intrauterine transfusions in red cell alloimmunization.
Methods: Prospective data of 102 women with Rh-alloimmunized pregnancy undergoing intrauterine intravascular transfusion for fetal anemia, from January 2011 to October 2014 were analyzed. Main outcome measures were perinatal survival and procedure-related (PR) complications.
Results: A total of 303 intrauterine transfusions were performed in 102 women. Of 102 fetuses, 22 were hydropic at first transfusion. The mean period of gestation and hematocrit at first transfusion was 26.9 ± 3.3 weeks (range 19.7-33.8 weeks) and 17 ± 7.82 % (range 5.7-30 %), respectively. Average number of transfusions was 2.97 (range 1-7) per patient. Overall survival was 93 % and mean period of gestation at delivery was 34.5 ± 1.94 (range 28.3-37.4) weeks. Mean hematocrit at delivery was 36.9 ± 8.77 % (range 10-66 %). Fetal death occurred in four cases (3PR), neonatal death occurred in three cases (2PR). Emergency cesarean delivery after transfusion was performed in four pregnancies. The total PR complication rate was 2.97 %, resulting in overall PR loss in 1.65 % per procedure.
Conclusion: Our results compare favorably with other studies published in the literature. Intravascular transfusion is a safe procedure improving perinatal survival in fetuses with anemia due to Rh-alloimmunization.
abstract_id: PUBMED:36387622
Successful outcome after timely management of severe fetal anemia with intrauterine transfusion in a woman with a bad obstetric history. Severe fetal anemia develops from red cell destruction in intrauterine life, most commonly in hemolytic disease of the fetus or newborn. Untreated cases progress to hydrops and even death of the newborn. We report a case of severe fetal anemia managed successfully with intrauterine transfusion. A 28-year-old woman with a bad obstetric history (G10 P3600) presented to our fetal unit at 23 + 4 weeks of gestation. The middle cerebral artery peak systolic velocity (MCA PSV) was 2.2 MoM before the first intrauterine procedure. Subsequent intrauterine sessions were planned at 1-2 week intervals. After the third intrauterine transfusion, the MCA PSV was 0.8 MoM, and the baby was delivered at 32 + 1 weeks via lower segment cesarean section. Intervention at the appropriate time, with an appropriate volume of the selected unit and an appropriate rate of transfusion, improves perinatal outcome.
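The case report above expresses MCA peak systolic velocity as multiples of the median (MoM), i.e., the measured PSV divided by the expected median PSV for that gestational age. The minimal Python sketch below shows that conversion; the gestational-age-specific median must be looked up from a published reference table (not reproduced in these abstracts), the example median of 38 cm/s is purely hypothetical, and the commonly cited 1.5 MoM action threshold for suspected moderate-to-severe fetal anemia is an assumption added here for illustration rather than a value taken from the abstract.

def mca_psv_mom(measured_psv_cm_s, median_psv_cm_s_for_ga):
    """Convert a measured MCA peak systolic velocity to multiples of the median.

    median_psv_cm_s_for_ga must come from a published gestational-age reference
    table; it is a required input, not something this sketch derives.
    """
    return measured_psv_cm_s / median_psv_cm_s_for_ga


# Hypothetical illustration: a measured PSV of 83 cm/s against an assumed
# gestational-age median of 38 cm/s gives roughly 2.2 MoM, the level reported
# before the first transfusion in the case above.
mom = mca_psv_mom(83.0, 38.0)
flag = " - above the commonly used 1.5 MoM threshold" if mom > 1.5 else ""
print(f"{mom:.1f} MoM{flag}")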
abstract_id: PUBMED:3136619
Intrauterine exchange transfusion of the fetus under ultrasound guidance: first successful report. The authors report the first successful case of fetal exchange transfusion performed in Rh disease. The patient was a gravida 3, para 1; a first ultrasound-guided intra-funicular transfusion was carried out at 28 weeks of gestation (Coombs test: 256, Liley's chart: zone III, fetal hemoglobin: 5.7 g/dl). One week later a fetal exchange transfusion was decided upon because of the appearance of ascites, a sinusoidal heart rate pattern, and a Manning's score of 4. At 29 weeks of gestation an ultrasound-guided umbilical cord puncture was performed with a 16-gauge Tuohy needle, and a catheter for epidural analgesia (Perifix) was inserted through the trocar. A total exchange of +126/-96 cm³ of 50% hematocrit (Hct), concentrated, irradiated, leukocyte- and platelet-depleted blood was performed, increasing the fetal hemoglobin concentration from 3.9 to 11.9 g/dl. A 2,040-gram girl was delivered by cesarean section at 33 weeks of gestation with Apgar scores of 7/9/9, and she was given 4 total exchange transfusions in the Neonatal Intensive Care Unit. Two years later her development is normal. This new procedure appears easier than fetoscopy for achieving umbilical cord puncture and avoids excess blood volume while allowing exact correction of anemia.
abstract_id: PUBMED:27706858
Complications of intrauterine intravascular blood transfusion: lessons learned after 1678 procedures. Objective: Maternal alloimmunization to fetal red-blood-cell antigens is a major cause of fetal anemia, which can lead to hydrops and perinatal death if untreated. The cornerstone of management during pregnancy is intrauterine intravascular blood transfusion (IUT). Although this procedure is considered relatively safe, complications continue to occur. The aim of this study was to evaluate rates of procedure-related complications and perinatal loss following IUT, and their change over time, in order to identify factors leading to improved outcome.
Methods: This was a retrospective analysis of all IUTs for red-cell alloimmunization performed at the national referral center for fetal therapy in The Netherlands, from 1988 to 2015. Differences in complication rates and their associations with alterations in transfusion technique after 2001 were assessed.
Results: Between 1988 and 2015, 1678 IUTs were performed in 589 fetuses. For IUTs performed in 2001 and onwards, there was significant improvement in survival (88.6% vs 97.0%, P < 0.001) and a decline in procedure-related complications per fetus (9.8% vs 3.3%, P = 0.001) and per procedure (3.4% vs 1.2%, P = 0.003) compared with those performed before 2001. Procedure-related perinatal loss declined from 4.7% to 1.8% per fetus (P = 0.053). Beneficial changes in transfusion technique were routine use of fetal paralysis, increased use of intrahepatic transfusion and avoidance of arterial puncture.
Conclusions: IUT has become an increasingly safe procedure in recent years when performed by experienced hands. The chosen technique should be fine-tuned according to the patient's individual situation. The declining complication rates are most likely related to center volume: this rare procedure is best performed in experienced fetal therapy centers.
Answer: Intrauterine exchange transfusion (IUET) is a procedure used for the management of severe fetal anemia, and while it can be effective, it is not without risks. The safety of IUET is contingent upon several factors, including the expertise of the specialists performing the procedure and the gestational age at which it is performed.
A retrospective study of 225 IUET procedures performed on 96 fetuses found that the overall survival rate was 87.5%, with a higher survival rate in cases of feto-maternal erythrocyte alloimmunization compared to parvovirus B19 infection. However, complications occurred in 17.7% of pregnancies and 7.5% of IUET procedures, with fetal bradycardia and gestational age up to 32 weeks being significantly associated with complications. The study concluded that IUET has a higher complication rate than simple intrauterine transfusion and should be performed by well-trained specialists (PUBMED:24965985).
Another study reported that intrauterine transfusion (IUT) is the cornerstone in prenatal management of hemolytic disease of the fetus and newborn (HDFN) and has significantly improved perinatal outcome over the past decades. However, the risk of complications remains high when performed early in the second trimester (PUBMED:28277805).
A retrospective study of 114 intrauterine transfusions at a single tertiary center found a live birth rate of 93.5% and a procedure-related fetal death rate of 0.9%. The study suggested that contemporary outcomes appear to be good, and it is important that the experience required to manage these cases should be concentrated in fewer centers to maximize good perinatal outcome (PUBMED:22356474).
A prospective study of 102 women undergoing intrauterine intravascular transfusion for fetal anemia due to Rh-alloimmunization reported an overall survival of 93% and a procedure-related complication rate of 2.97%, with an overall procedure-related loss of 1.65% per procedure. The study concluded that intravascular transfusion is a safe procedure that improves perinatal survival in fetuses with anemia due to Rh-alloimmunization (PUBMED:26493554).
In summary, IUET can be a safe procedure for the management of fetal anemia when performed by experienced specialists, particularly in specialized centers. However, it carries a higher risk of complications compared to simpler transfusion methods, and these risks must be carefully weighed against the benefits in each individual case. |
Instruction: Does oncotype DX recurrence score affect the management of patients with early-stage breast cancer?
Abstracts:
abstract_id: PUBMED:30498424
The Warwick Experience of the Oncotype DX® Breast Recurrence Score® Assay as a Predictor of Chemotherapy Administration. Introduction: Oncotype DX® analyses the expression of 21 genes within tumour tissue to determine a Recurrence Score® (RS). RS is a marker of risk for distant recurrence in oestrogen receptor-positive early breast cancer, allowing patient-specific benefit of chemotherapy to be evaluated. Our aim was to determine whether the introduction of Oncotype DX led to a net reduction in chemotherapy use.
Methods: Consecutive patients that underwent Oncotype DX at Warwick Hospital were reviewed. Patients were anonymised and re-discussed at a multidisciplinary team meeting (MDM; without RS), and treatment recommendations were recorded. This was compared to the original MDM outcome (recommendations made with RS). Differences were analysed using Wilcoxon signed-rank test.
Results: 67 patients were identified. Proportions of high, intermediate and low risk were 28, 33 and 39% (n = 19/22/26), respectively. Without RS, 56 (84%) patients were recommended for chemotherapy and 3 were not. The remaining 8 patients were deemed borderline for requiring chemotherapy and referred for discussion with an oncologist. With availability of RS, 34 (50%) patients were recommended for chemotherapy, and 24 (43%) patients were spared chemotherapy (p < 0.0005). The net reduction in chemotherapy was 33%.
Conclusion: There has been a significant reduction in chemotherapy usage in patients at Warwick since the introduction of Oncotype DX.
abstract_id: PUBMED:24453493
Oncotype dx results in multiple primary breast cancers. Purpose: To determine whether multiple primary breast cancers have similar genetic profiles, specifically Oncotype Dx Recurrence Scores, and whether obtaining Oncotype Dx on each primary breast cancer affects chemotherapy recommendations.
Methods: A database of patients with hormone receptor-positive, lymph node-negative, breast cancer was created for those tumors that were sent for Oncotype Dx testing from the University of Michigan Health System from 1/24/2005 to 2/25/2013. Retrospective chart review abstracted details of tumor location, histopathology, distance between tumors, Oncotype Dx results, and chemotherapy recommendations.
Results: Six hundred and sixty-six patients for whom Oncotype Dx testing was sent were identified, with 22 patients having multiple breast tumor specimens sent. Of the 22 patients who had multiple samples sent for analysis, chemotherapy recommendations were changed in 6 of 22 patients (27%) based on significant differences in Oncotype Dx Recurrence Scores. Qualitatively, there seems to be a greater difference in genetic profile in tumors appearing simultaneously on different breasts when compared to multiple tumors on the same breast. There was no association between distance between tumors and difference in Oncotype Dx scores for tumors on the same breast.
Conclusions: Oncotype Dx testing on multiple primary breast cancers altered management in regards to chemotherapy recommendations and should be considered for multiple primary breast cancers.
abstract_id: PUBMED:29691722
Applying new Magee equations for predicting the Oncotype Dx recurrence score. Background: Breast cancer is one of the most prevalent cancers in women. Oncotype Dx is a multi-gene assay frequently used to predict the recurrence risk for estrogen receptor-positive early breast cancer, with values < 18 considered low risk; ≥ 18 and ≤ 30, intermediate risk; and > 30, high risk. Patients at a high risk for recurrence are more likely to benefit from chemotherapy treatment.
Methods: In this study, clinicopathological parameters for 37 cases of early breast cancer with available Oncotype Dx results were used to estimate the recurrence score using the three new Magee equations. Correlation studies with Oncotype Dx results were performed. Applying the same cutoff points as Oncotype Dx, patients were categorized into low-, intermediate- and high-risk groups according to their estimated recurrence scores.
Results: Pearson correlation coefficient (R) values between estimated and actual recurrence score were 0.73, 0.66, and 0.70 for Magee equations 1, 2 and 3, respectively. The concordance values between actual and estimated recurrence scores were 57.6%, 52.9%, and 57.6% for Magee equations 1, 2 and 3, respectively. Using standard pathologic measures and immunohistochemistry scores in these three linear Magee equations, most low and high recurrence risk cases can be predicted with a strong positive correlation coefficient, high concordance and negligible two-step discordance.
Conclusions: Magee equations are user-friendly and can be used to predict the recurrence score in early breast cancer cases.
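The Magee-equation studies in this set (PUBMED:29691722 above and PUBMED:28449064 below) compare an estimated recurrence score against the actual Oncotype DX result using the cutoffs stated earlier (< 18 low, 18-30 intermediate, > 30 high), a Pearson correlation coefficient, and the proportion of cases placed in the same risk category (concordance). The following minimal Python sketch reproduces those three computations on hypothetical paired scores; the numbers are illustrative only and are not taken from either study.

import numpy as np

def rs_category(score):
    # Cutoffs as given in PUBMED:29691722: <18 low, 18-30 intermediate, >30 high.
    if score < 18:
        return "low"
    if score <= 30:
        return "intermediate"
    return "high"

# Hypothetical paired scores (estimated via a Magee-style equation vs. actual assay).
estimated = np.array([10.0, 16.5, 22.0, 27.0, 35.0, 12.0])
actual    = np.array([ 8.0, 19.0, 24.0, 31.0, 40.0, 11.0])

pearson_r = np.corrcoef(estimated, actual)[0, 1]
concordance = np.mean([rs_category(e) == rs_category(a)
                       for e, a in zip(estimated, actual)])

print(f"Pearson r = {pearson_r:.2f}, category concordance = {concordance:.1%}")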
abstract_id: PUBMED:37448522
Survival results according to Oncotype Dx recurrence score in patients with hormone receptor positive HER-2 negative early-stage breast cancer: first multicenter Oncotype Dx recurrence score survival data of Turkey. Background: The Oncotype Dx recurrence score (ODx-RS) guides the adjuvant chemotherapy decision-making process for patients with early-stage hormone receptor-positive, HER-2 receptor-negative breast cancer. This study aimed to evaluate survival and its correlation with ODx-RS in pT1-2, N0-N1mic patients treated with adjuvant therapy based on tumor board decisions.
Patients And Methods: Estrogen-positive HER-2 negative early-stage breast cancer patients (pT1-2 N0, N1mic) with known ODx-RS, operated on between 2010 and 2014, were included in this study. The primary aim was to evaluate 5-year disease-free survival (DFS) rates according to ODX-RS.
Results: A total of 203 eligible patients were included in the study, with a median age of 48 (range 26-75) and median follow-up of 84 (range 23-138) months. ROC curve analysis for all patients revealed a recurrence cut-off age of 45 years, prompting evaluation by grouping patients as ≤45 years vs. >45 years. No significant difference in five-year DFS rates was observed between the endocrine-only (ET) and chemo-endocrine (CE) groups. However, among the ET group, DFS was higher in patients over 45 years compared to those aged ≤45 years. When stratifying by ODx-RS as 0-17 and ≥18, DFS was significantly higher in the former group within the ET group. However, such differences were not seen in the CE group. In the ET group, an ODx-RS ≥18 and menopausal status were identified as independent factors affecting survival, with only an ODx-RS ≥18 impacting DFS in patients aged ≤45 years. The ROC curve analysis for this subgroup found the ODx-RS cut-off to be 18.
Conclusion: This first multicenter Oncotype Dx survival analysis in Turkey demonstrates the importance of the Oncotype Dx recurrence score and age in determining treatment strategies for early-stage breast cancer patients. In a departure from the existing literature, our findings suggest that the addition of chemotherapy to endocrine therapy in young patients (≤45 years) with Oncotype Dx recurrence scores of ≥18 improves DFS.
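The Turkish multicenter study above derives its cutoffs (age 45 years, ODx-RS 18) from ROC curve analysis. The abstract does not state which optimality criterion was used; the minimal Python sketch below shows one common choice, maximizing Youden's J (sensitivity + specificity - 1), on synthetic data, purely to illustrate the mechanics rather than to reproduce the study's method.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Synthetic illustration: a continuous marker (e.g., a recurrence score) that is
# shifted upward in subjects who later relapse.
relapsed = rng.integers(0, 2, size=200)
marker = rng.normal(loc=15 + 8 * relapsed, scale=6, size=200)

fpr, tpr, thresholds = roc_curve(relapsed, marker)
youden_j = tpr - fpr
best_cutoff = thresholds[np.argmax(youden_j)]

print(f"Cutoff maximizing Youden's J: {best_cutoff:.1f}")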
abstract_id: PUBMED:36631400
Evaluation oncotype DX® 21-gene recurrence score and clinicopathological parameters: a single institutional experience. Aims: Oncotype DX recurrence score (RS) is a clinically validated assay, which predicts the likelihood of disease recurrence in oestrogen receptor-positive/HER2-negative (ER+/HER2-) breast cancer (BC). In this study we aimed to compare the performance of Oncotype DX against the conventional clinicopathological parameters using a large BC cohort diagnosed in a single institution.
Methods And Results: A cohort (n = 430) of ER+/HER2- BC patients who were diagnosed at the Nottingham University Hospitals NHS Trust and had Oncotype DX testing was included. Correlation with the clinicopathological and other biomarkers, including the proliferation index, was analysed. The median Oncotype DX RS was 17.5 (range = 0-69). There was a significant association between high RS and grade 3 tumours. No grade 1 BC or grade 2 tumours with mitosis score 1 showed high RS. Low RS was significantly associated with special tumour types where none of the patients with classical lobular or tubular carcinomas had a high RS. There was an inverse association between RS and levels of ER and progesterone receptor (PR) expression and a positive linear correlation with Ki67 labelling index. Notably, six patients who developed recurrence had an intermediate RS; however, four of these six cases (67%) were identified as high-risk disease when the conventional clinical and molecular parameters were considered.
Conclusion: Oncotype DX RS is correlated strongly with the conventional clinicopathological parameters in BC. Some tumour features such as tumour grade, type, PR status and Ki67 index can be used as surrogate markers in certain scenarios.
abstract_id: PUBMED:29503586
Spotlight on the utility of the Oncotype DX® breast cancer assay. The Oncotype DX® assay was developed to address the need for optimizing the selection of adjuvant systemic therapy for patients with estrogen receptor (ER)-positive, lymph node-negative breast cancer. It has ushered in the era of genomic-based personalized cancer care for ER-positive primary breast cancer and is now widely utilized in various parts of the world. Together with several other genomic assays, Oncotype DX has been incorporated into clinical practice guidelines on biomarker use to guide treatment decisions. The Oncotype DX result is presented as the recurrence score which is a continuous score that predicts the risk of distant disease recurrence. The assay, which provides information on clinicopathological factors, has been validated for use in the prognostication and prediction of degree of adjuvant chemotherapy benefit in both lymph node-positive and lymph node-negative early breast cancers. Clinical studies have consistently shown that the Oncotype DX has a significant impact on decision making in adjuvant therapy recommendations and appears to be cost-effective in diverse health care settings. In this article, we provide an overview of the validation and clinical impact studies for the Oncotype DX assay. We also discuss its potential use in the neoadjuvant setting, as well as the more recent prospective validation trials, and the economic and utility implications of studies that use a lower cutoff score to define low-risk disease.
abstract_id: PUBMED:28449064
Using the Modified Magee Equation to Identify Patients Unlikely to Benefit From the 21-Gene Recurrence Score Assay (Oncotype DX Assay). Objectives: This study aimed to compare a modified Magee equation with Oncotype DX (Genomic Health, Redwood City, CA) recurrence score (RS) and identify patients who are unlikely to benefit from Oncotype DX.
Methods: Magee equation RS was calculated in 438 cases and correlated with Oncotype DX RS.
Results: The Pearson correlation coefficient (r) for the Magee equation and Oncotype DX RS was 0.6645 (P < .00001), and the overall agreement was 66.4%. All cases with a Magee equation RS greater than 30 or of 11 or less (11.6% of cases) were correctly predicted to have a high or low Oncotype DX RS, respectively.
Conclusions: The modified Magee equation is able to identify up to 12% patients who are unlikely to benefit from Oncotype DX testing. Using the modified Magee equation RS on these patients would be an alternative to Oncotype DX, leading to cost savings.
abstract_id: PUBMED:27729940
The analytical validation of the Oncotype DX Recurrence Score assay. In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0-100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time.
abstract_id: PUBMED:30230117
Breast cancer histopathology is predictive of low-risk Oncotype Dx recurrence score. Background: Oncotype Dx is a genetic test that has been incorporated into the 2017 AJCC breast cancer staging system for ER positive, HER2-negative, lymph node-negative patients to predict the risk of recurrence. Recent data suggest that immunohistochemistry (ER, PR, HER2, and Ki-67) and histologic subtype may identify patients that will not benefit from Oncotype Dx testing.
Methods: A total of 371 patients underwent Oncotype Dx testing at our institution from 2012 to 2016. Oncotype recurrence score was categorized as low- (ORS = 0-10), intermediate- (11-25), or high risk (26-100). Invasive carcinomas were categorized based on histologic subtype as "favorable" (mucinous, tubular, cribriform, tubulolobular, and lobular) and "unfavorable" (ductal, mixed ductal and lobular, and micropapillary carcinoma). All cases were estrogen receptor positive and HER2-negative. Clinical and histologic predictors of low-risk ORS were assessed in univariate and multivariate logistic regression.
Results: A total of 371 patients were categorized by ORS as low risk (n = 85, 22.9%), intermediate risk (n = 244, 65.8%), and high risk (n = 42, 11.3%). The histologic subtypes with the highest percentage of high-risk ORS were invasive micropapillary (n = 4/17, 23.5%), pleomorphic lobular (n = 2/10, 20%), and ductal carcinoma (n = 28/235, 11.9%). Low-grade invasive carcinomas with favorable histology rarely had a high-risk ORS (n = 1/97, 1%). In a simple multivariable model, favorable histologic subtype (OR = 2.39, 95% CI: 1.10 to 5.15, P = 0.026), and histologic grade (OR = 1.76, 95% CI: 1.07 to 2.90, P = 0.025) were the only significant predictors of an ORS less than 11 in estrogen receptor positive, HER2-negative, and lymph node-negative patients.
Conclusion: We question the utility of performing Oncotype Dx in subtypes of invasive carcinoma that are associated with excellent prognosis. We propose that immunohistochemistry for ER, PR, and HER2 is sufficient for patients with low-grade invasive carcinomas and can be used as a surrogate for Oncotype Dx.
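The histopathology study above reports odds ratios from a multivariable logistic regression predicting a low-risk ORS from favorable histologic subtype and grade. For readers who want to see where such odds ratios come from, here is a minimal Python sketch using statsmodels on fabricated data; the coefficients it produces are illustrative only and bear no relation to the study's results.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300

# Fabricated predictors: favorable histology (0/1) and low grade (0/1).
favorable = rng.integers(0, 2, size=n)
low_grade = rng.integers(0, 2, size=n)

# Fabricated outcome: probability of a low-risk ORS rises with both predictors.
linear_predictor = -1.0 + 0.9 * favorable + 0.6 * low_grade
p = 1 / (1 + np.exp(-linear_predictor))
low_risk_ors = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([favorable, low_grade]))
fit = sm.Logit(low_risk_ors, X).fit(disp=0)

# Odds ratios are the exponentiated coefficients, with 95% CIs from conf_int().
print(np.exp(fit.params))      # [intercept OR, favorable-histology OR, low-grade OR]
print(np.exp(fit.conf_int()))  # matching 95% confidence intervals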
abstract_id: PUBMED:31404265
Comparison of GenesWell BCT Score With Oncotype DX Recurrence Score for Risk Classification in Asian Women With Hormone Receptor-Positive, HER2-Negative Early Breast Cancer. Introduction: The GenesWell Breast Cancer Test (BCT) is a recently developed multigene assay that predicts the risk of distant recurrence in patients with early breast cancer. Here, we analyzed the concordance of the BCT score with the Oncotype DX recurrence score (RS) for risk stratification in Asian patients with pN0-N1, hormone receptor-positive, human epidermal growth factor receptor 2 (HER2)-negative breast cancer. Methods: Formalin-fixed, paraffin-embedded breast cancer tissues previously analyzed using the Oncotype DX test were assessed using the GenesWell BCT test. The risk stratification by the two tests was then compared. Results: A total of 771 patients from five institutions in Korea were analyzed. According to the BCT score, 527 (68.4%) patients were classified as low risk, and 244 (31.6%) as high risk. Meanwhile, 134 (17.4%), 516 (66.9%), and 121 (15.7%) patients were categorized into the low-, intermediate-, and high-risk groups, respectively, according to the RS ranges used in the TAILORx. The BCT high-risk group was significantly associated with advanced lymph node status, whereas no association between RS risk groups and nodal status was observed. The concordance between the two risk stratification methods in the overall population was 71.9% when the RS low-risk, and intermediate-risk groups were combined into one group. However, poor concordance was observed in patients aged ≤50 years and in those with lymph node-positive breast cancer. Conclusions: The concordance between the BCT score and RS was low in women aged ≤50 years or with lymph node-positive breast cancer. Further studies are necessary to identify more accurate tests for predicting prognosis and chemotherapy benefit in this subpopulation.
Answer: Yes, the Oncotype DX Recurrence Score (RS) significantly affects the management of patients with early-stage breast cancer. The RS is a genomic test that analyzes the expression of 21 genes within tumor tissue to determine the risk of distant recurrence in estrogen receptor-positive early breast cancer. This information helps clinicians evaluate the potential benefit of chemotherapy for individual patients.
Studies have shown that the introduction of Oncotype DX has led to a significant reduction in chemotherapy usage. For instance, at Warwick Hospital, the use of Oncotype DX resulted in a net reduction of chemotherapy recommendations by 33%, sparing 43% of patients from chemotherapy who would have otherwise received it based on traditional clinical assessments alone (PUBMED:30498424).
Additionally, Oncotype DX testing on multiple primary breast cancers has been shown to alter management in regards to chemotherapy recommendations in 27% of patients with multiple breast tumors, suggesting that the genetic profiles of multiple primary tumors can differ significantly and should be individually assessed (PUBMED:24453493).
The Oncotype DX RS also correlates with conventional clinicopathological parameters, such as tumor grade, type, progesterone receptor status, and Ki67 index, which can be used as surrogate markers in certain scenarios (PUBMED:36631400). Furthermore, the RS has been incorporated into clinical practice guidelines to guide treatment decisions and has been validated for use in both lymph node-positive and lymph node-negative early breast cancers (PUBMED:29503586).
In Turkey, a study found that the Oncotype DX RS and age are important factors in determining treatment strategies for early-stage breast cancer patients, suggesting that chemotherapy added to endocrine therapy improves disease-free survival in young patients (≤45 years) with RS ≥18 (PUBMED:37448522).
Moreover, alternative methods like the Magee equations have been developed to predict the recurrence score using standard pathologic measures and immunohistochemistry scores, potentially sparing some patients from the need for Oncotype DX testing (PUBMED:29691722, PUBMED:28449064).
In summary, the Oncotype DX RS plays a crucial role in the personalized management of early-stage breast cancer, influencing decisions regarding the administration of adjuvant chemotherapy and allowing for more tailored treatment approaches based on the genetic profile of the tumor. |
Instruction: Clinical inertia in response to inadequate glycemic control: do specialists differ from primary care physicians?
Abstracts:
abstract_id: PUBMED:35457303
Predictors of Clinical Inertia and Type 2 Diabetes: Assessment of Primary Care Physicians and Their Patients. With the growing prevalence and complex pathophysiology of type 2 diabetes, many patients fail to achieve treatment goals despite guidelines and possibilities for treatment individualization. One of the identified root causes of this failure is clinical inertia. We explored this phenomenon, its possible predictors, and groups of patients affected the most, together with offering potential paths for intervention. Our research was a cross-sectional study conducted during 2021 involving 52 physicians and 543 patients of primary healthcare institutions in Belgrade, Serbia. The research instruments were questionnaires based on similar studies, used to collect information related to the factors that contribute to developing clinical inertia originating in both physicians and patients. In 224 patients (41.3%), clinical inertia was identified in patients with poor overall health condition, long diabetes duration, and comorbidities. Studying the changes made to the treatment, most patients (53%) had their treatment adjustment more than a year ago, with 19.3% of patients changing over the previous six months. Moreover, we found significant inertia in the treatment of patients using modern insulin analogues. Referral to secondary healthcare institutions reduced the emergence of inertia. This assessment of primary care physicians and their patients pointed to the high presence of clinical inertia, with an overall health condition, comorbidities, diabetes duration, current treatment, last treatment change, glycosylated hemoglobin and fasting glucose measuring frequency, BMI, patient referral, diet adjustment, and physician education being significant predictors.
abstract_id: PUBMED:15735195
Clinical inertia in response to inadequate glycemic control: do specialists differ from primary care physicians? Objective: Diabetic patients with inadequate glycemic control ought to have their management intensified. Failure to do so can be termed "clinical inertia." Because data suggest that specialist care results in better control than primary care, we evaluated whether specialists demonstrated less clinical inertia than primary care physicians.
Research Design And Methods: Using administrative data, we studied all non-insulin-requiring diabetic patients in eastern Ontario aged 65 or older who had A1c results >8% between September 1999 and August 2000. Drug intensification was measured by comparing glucose-lowering drug regimens in 4-month blocks before and after the elevated A1c test and was defined as 1) the addition of a new oral drug, 2) a dose increase of an existing oral drug, or 3) the initiation of insulin. Propensity score-based matching was used to control for confounding between groups.
Results: There were 591 patients with specialist care and 1,911 with exclusively primary care. In the matched cohorts, 45.1% of patients with specialist care versus 37.4% with primary care had drug intensification (P = 0.009). Most of this difference was attributed to specialists' more frequent initiation of insulin in response to elevated A1c.
Conclusions: Fewer than one-half of patients with high A1c levels had intensification of their medications, regardless of specialty of their physician. Specialists were more aggressive with insulin initiation than primary care physicians, which may contribute to the lower A1c levels seen with specialist care. Interventions assisting patients and physicians to recognize and overcome clinical inertia should improve diabetes care in the population.
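The study above (PUBMED:15735195) operationalizes "drug intensification" as any of three events between the 4-month block before and the 4-month block after the elevated A1c result: a new oral drug added, a dose increase of an existing oral drug, or insulin initiation. The minimal Python sketch below applies that classification to hypothetical regimen records; the drug names and doses are placeholders, not values from the source.

def classify_intensification(before, after):
    """Return the reasons a regimen change counts as intensification.

    before/after: dicts mapping oral drug name -> daily dose, plus an
    'insulin' boolean. Mirrors the three criteria in PUBMED:15735195.
    """
    reasons = []
    if not before.get("insulin", False) and after.get("insulin", False):
        reasons.append("insulin initiated")
    before_oral = {k: v for k, v in before.items() if k != "insulin"}
    after_oral = {k: v for k, v in after.items() if k != "insulin"}
    for drug, dose in after_oral.items():
        if drug not in before_oral:
            reasons.append(f"new oral drug added: {drug}")
        elif dose > before_oral[drug]:
            reasons.append(f"dose increased: {drug}")
    return reasons


# Hypothetical example: metformin dose raised and a sulfonylurea added.
before = {"metformin": 1000, "insulin": False}
after = {"metformin": 2000, "gliclazide": 80, "insulin": False}
print(classify_intensification(before, after) or "no intensification")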
abstract_id: PUBMED:36554673
Barriers and Attitudes of Primary Healthcare Physicians to Insulin Initiation and Intensification in Saudi Arabia. Saudi Arabia is a country with high prevalence of diabetes, uncontrolled diabetes, and diabetes-related complications. Poor glycemic control is multifactorial and could be explained in part by physician and patient reluctance toward insulin or insulin inertia. This study aimed to address physician barriers toward insulin therapy in primary care settings. It included 288 physicians from 168 primary healthcare centers (PHC) in the Jazan region of Saudi Arabia. Participants responded to questionnaire investigating physicians' attitude and barriers to insulin initiation and intensification in PHCs. In physician opinion, the most common barriers among their patients were fear of injection, lack of patient education, fear of hypoglycemia, and difficult administration. Physicians were reluctant to initiate insulin for T2D patients mostly due to patient non-adherence to blood sugar measurement, non-adherence to appointment or treatment, elderly patients, or due to patient refusal. Physicians' fear of hypoglycemia, lack of staff for patient education, and lack of updated knowledge were the primary clinician-related barriers. Exaggerated fears of insulin side effects, patient non-adherence, limited staff for patient's education, patient refusal, and inadequate consultation time were the main barriers to insulin acceptance and prescription.
abstract_id: PUBMED:35221245
Family Physician Clinical Inertia in Managing Hypoglycemia. Aims: Clinical inertia behaviour affects family physicians managing chronic disease such as diabetes. Literature addressing clinical inertia in the management of hypoglycemia is scarce. The objectives of this study were to create a measurement for physician clinical inertia in managing hypoglycemia (ClinInert_InHypoDM), and to determine physicians' characteristics associated with clinical inertia.
Methods: The study was a secondary analysis of data provided by family physicians from the InHypo-DM Study, applying exploratory factor analysis. Principal axis factoring with an Oblimin rotation was employed to detect underlying factors associated with physician behaviors. Multiple linear regression was used to determine association between the ClinInert_InHypoDM scores and physician characteristics.
Results: Factor analysis identified a statistically sound 12-item one-factor scale for clinical inertia behavior. No statistically significant differences in clinical inertia score for the studied independent variables were found.
Conclusions: This study provides a scale for assessing clinical inertia in the management of hypoglycemia. Further testing this scale in other family physician populations will provide deeper understanding about the characteristics and factors that influence clinical inertia. The knowledge derived from better understanding clinical inertia in primary care has potential to improve outcomes for patients with diabetes.
abstract_id: PUBMED:32548871
Assessment of clinical inertia in people with diabetes within primary care. Rationale, Aims And Objectives: Clinical inertia, defined as a delay in treatment intensification, is prevalent in people with diabetes. Treatment intensification rates are as low as 37.1% in people with haemoglobin A1c (HbA1c) values >7%. Intensification by addition of medication therapy may take 1.6 to more than 7 years. Clinical inertia increases the risk of cardiovascular events. The primary objective was to evaluate rates of clinical inertia in people whose diabetes is managed by both pharmacists and primary care providers (PCPs). Secondary objectives included characterizing types of treatment intensification, HbA1c reduction, and time between treatment intensifications.
Method: Retrospective chart review of persons with diabetes managed by pharmacists at an academic, safety-net institution. Eligible subjects were referred to a pharmacist-managed cardiovascular risk reduction clinic while continuing to see their PCP between October 1, 2016 and June 30, 2018. All progress notes were evaluated for treatment intensification, HbA1c value, and type of medication intensification.
Results: Three hundred sixty-three eligible patients were identified; median (interquartile range [IQR]) baseline HbA1c was 9.6% (7.9, 11.6). A total of 1,192 pharmacist visits and 1,739 PCP visits were included in the data analysis. Therapy was intensified at 60.5% (n = 721) of pharmacist visits and 39.3% (n = 684) of PCP visits (P < .001). The median (IQR) time between interventions was 49 (28, 92) days for pharmacists and 105 (38, 182) days for PCPs (P < .001). Pharmacists more frequently intensified treatment with glucagon-like peptide-1 agonists and sodium glucose cotransporter-2 inhibitors.
Conclusion: Pharmacist involvement in diabetes management may reduce the clinical inertia patients may otherwise experience in the primary care setting.
abstract_id: PUBMED:36121937
Conquering diabetes therapeutic inertia: practical tips for primary care. Diabetes is a complex condition that is largely self-managed. Decades of scientific evidence has proved that early glycemic control leads to improved microvascular and macrovascular outcomes in people with diabetes mellitus. Despite well-established management guidelines, only about half of the patients with diabetes achieve glycemic targets, and only one in five patients achieve metabolic control (blood pressure, lipid, and glucose targets), and both patients and physicians find themselves stuck in a rut called therapeutic inertia (TI). The authors present several practical strategies that can be tailored to different practice settings and facilitate reducing TI.
abstract_id: PUBMED:31023106
Clinical inertia in hypertension: a new holistic and practical concept within the cardiovascular continuum and clinical care process. Purpose: Recognition of clinical inertia is essential to improve the control of chronic diseases. Although it is very intuitive, a better interpretation of the concept of clinical inertia is lacking, likely due to its high complexity. Materials and Methods: After a review of the published articles, we propose a practical vision of inertia, contextualized within the clinical process of hypertension care. Results: This new vision enables the integration of previous terms and definitions of clinical inertia, as well as proposing specific strategies for its reduction. Conclusion: Although some concepts should be considered as 'justified inertia' or 'investigator inertia', the idea that inertia may be present throughout the continuum of care gives physicians a holistic view of the problem that is easily applicable to their clinical practice. Measures to overcome inertia are complicated because of the intrinsic complexity of the concept.
abstract_id: PUBMED:35783866
Characterizing Diagnostic Inertia in Arterial Hypertension With a Gender Perspective in Primary Care. Background And Objectives: Substantial evidence shows that diagnostic inertia leads to failure to achieve screening and diagnosis objectives for arterial hypertension (AHT). In addition, different studies suggest that the results may differ between men and women. This study aimed to evaluate the differences in diagnostic inertia in women and men attending public primary care centers, to identify potential gender biases in the clinical management of AHT.
Study Design/materials And Methods: Cross-sectional descriptive and analytical estimates were obtained, nested within an epidemiological ambispective cohort study of patients aged ≥30 years who attended public primary care centers in a Spanish region in the period 2008-2012, belonging to the ESCARVAL-RISK cohort. We applied a consistent operational definition of diagnostic inertia to a registry-reflected population group of 44,221 patients with diagnosed hypertension or meeting the criteria for diagnosis (51.2% women), with a mean age of 63.4 years (62.4 years in men and 64.4 years in women).
Results: Of the total population, 95.5% had a diagnosis of hypertension registered in their electronic health record. Another 1,968 patients met the inclusion criteria for diagnostic inertia of hypertension, representing 4.5% of the total population (5% of men and 3.9% of women). The factors significantly associated with inertia were younger age, normal body mass index, elevated total cholesterol, coexistence of diabetes and dyslipidemia, and treatment with oral antidiabetic drugs. Lower inertia was associated with age over 50 years, higher body mass index, normal total cholesterol, no diabetes or dyslipidemia, and treatment with lipid-lowering, antiplatelet, and anticoagulant drugs. The only gender difference in the association of factors with diagnostic inertia was found in waist circumference.
Conclusion: In the ESCARVAL-RISK study population presenting registered AHT or meeting the functional diagnostic criteria for AHT, diagnostic inertia appears to be greater in men than in women.
abstract_id: PUBMED:35473579
Physicians' misperceived cardiovascular risk and therapeutic inertia as determinants of low LDL-cholesterol targets achievement in diabetes. Background: Greater efforts are needed to overcome the worldwide reported low achievement of LDL-c targets. This survey aimed to dissect whether and how the physician-based evaluation of patients with diabetes is associated with the achievement of LDL-c targets.
Methods: This cross-sectional self-reported survey interviewed physicians working in 67 outpatient services in Italy, collecting records on 2844 patients with diabetes. Each physician reported a median of 47 records (IQR 42-49) and, for each of them, the physician specified its perceived cardiovascular risk, LDL-c targets, and the suggested refinement in lipid-lowering-treatment (LLT). These physician-based evaluations were then compared to recommendations from EAS/EASD guidelines.
Results: Collected records were mostly from patients with type 2 diabetes (94%), at very high (72%) or high (27%) cardiovascular risk. Physician-based assessments of cardiovascular risk and of LDL-c targets, as compared with guideline recommendations, were misclassified in 34.7% of the records. Misperceived assessment was significantly more frequent among females and those on primary prevention and was associated with 67% lower odds of achieving guideline-recommended LDL-c targets (OR 0.33, p < 0.0001). Peripheral artery disease, target organ damage, and LLT initiated by primary care physicians were all factors associated with therapeutic inertia (i.e., a lower than expected probability of receiving high-intensity LLT). Physician-suggested LLT refinement was inadequate in 24% of records overall, rising to 38% among subjects on primary prevention with misclassified cardiovascular risk.
Conclusions: This survey highlights the need to improve the physicians' misperceived cardiovascular risk and therapeutic inertia in patients with diabetes to successfully implement guidelines recommendations into everyday clinical practice.
abstract_id: PUBMED:30291589
Clinical inertia, reverse clinical inertia, and medication non-adherence in type 2 diabetes. Purpose: Clinical inertia and medication non-adherence are thought to contribute largely to the suboptimal glycemic control in many patients with type 2 diabetes. The present review explores the relations between A1C targets, clinical inertia and medication non-adherence in type 2 diabetes.
Methods: We searched PubMed for English-language studies published from 2001 through June 1, 2018. We also manually searched the references of selected articles, reviews, meta-analyses, and practice guidelines. Selected articles were mutually agreed upon by the authors.
Results: Clinical inertia is the failure of clinicians to initiate or intensify therapy when indicated, while medication non-adherence is the failure of patients to start or continue therapy that a clinician has recommended. Although clinical inertia may occur at all stages of diabetes treatment, the longest delays have been reported for the initiation or intensification of insulin. Medication non-adherence to antidiabetic drugs may range from 53 to 65% at 1 year and may be responsible for uncontrolled A1C in about 23% of cases. Reverse clinical inertia can be understood as the failure to reduce or change therapy when it is no longer needed or indicated. Clinical inertia and medication non-adherence are difficult to address: clinician- and patient-targeted educational programs, better-connected communication between clinicians and patients, and the involvement of other health professionals (nurses, pharmacists) have been explored, with mixed results.
Conclusions: Both clinical inertia and medication non-adherence remain significant barriers to optimal glycemic targets in type 2 diabetes. Moreover, part of clinical inertia may be a way through which clinicians face current uncertainty in medicine, including some dissonance among therapeutic guidelines. Scientific associations should find an agreement about how to measure and report clinical inertia in clinical practice and should exhort clinicians to consider reverse clinical inertia as a cause of persisting inappropriate therapy in vulnerable patients.
Answer: Clinical inertia, defined as the failure to initiate or intensify therapy when indicated, is a recognized issue in the management of type 2 diabetes, affecting the achievement of glycemic control. When comparing specialists to primary care physicians in response to inadequate glycemic control, there is evidence suggesting that specialists may demonstrate less clinical inertia than primary care physicians. A study using administrative data found that in matched cohorts, 45.1% of patients with specialist care versus 37.4% with primary care had drug intensification following elevated A1c levels. Specialists were more aggressive with insulin initiation, which may contribute to the lower A1c levels seen with specialist care (PUBMED:15735195).
However, it is important to note that fewer than half of patients with high A1c levels had intensification of their medications, regardless of the specialty of their physician. This indicates that clinical inertia is a widespread issue in diabetes care, and interventions to assist both patients and physicians in recognizing and overcoming clinical inertia should improve diabetes care across the board (PUBMED:15735195).
In addition, other studies have identified various barriers and attitudes that contribute to clinical inertia. For instance, primary healthcare physicians in Saudi Arabia reported barriers to insulin initiation and intensification, such as fear of hypoglycemia, lack of staff for patient education, and inadequate consultation time (PUBMED:36554673). Similarly, a study in Serbia identified significant predictors of clinical inertia, including overall health condition, comorbidities, diabetes duration, and the frequency of glycosylated hemoglobin and fasting glucose measuring (PUBMED:35457303).
Overall, while specialists may show less clinical inertia than primary care physicians, particularly in the initiation of insulin, the problem of clinical inertia is complex and multifactorial, affecting both specialists and primary care providers. Addressing clinical inertia requires a comprehensive approach that includes education, system-level interventions, and support for both patients and healthcare providers. |
Instruction: Is it necessary to cover the macular hole with the inverted internal limiting membrane flap in macular hole surgery?
Abstracts:
abstract_id: PUBMED:33937583
Partial detachment of internal limiting membrane flap and spontaneous re-covering of macular hole by flap. Purpose: To report a case in which an internal limiting membrane (ILM) flap that was used to cover an idiopathic macular hole (MH) during pars plana vitrectomy (PPV) with the inverted internal limiting membrane flap technique partially detached from the retina. Most interestingly, the flap fell back spontaneously to re-cover the MH.
Observations: A 70-year-old woman presented with a full-thickness MH, and her vision was 20/400. She underwent PPV with an inverted ILM flap and air tamponade. When the intraocular gas was absorbed, the ILM flap had detached but remained anchored to the retina where the ILM had not been peeled, and the MH was open. Her visual acuity at this time was 20/400. The patient did not want further treatment and was followed by observation alone. At three months after the initial surgery, the ILM flap was noted to have spontaneously re-covered the MH, and her visual acuity improved to 20/200. At 6 months after the re-covering, the flap remained over the MH and the visual acuity remained at 20/200.
Conclusions And Importance: Surgeons should be aware that it is possible for an ILM flap created by the inverted ILM flap technique to partially detach from the retina after the tamponade gas is resorbed. Most importantly, the flap can return to re-cover the MH spontaneously.
abstract_id: PUBMED:37089043
Flap-Related Complications Following Temporal Inverted Internal Limiting Membrane Flap for Macular Hole Repair. Here we report three cases of flap-related complications following the temporal inverted internal limiting membrane (ILM) flap technique for the repair of macular holes (MH). The first case showed a flap closure pattern in which the MH closed completely and spontaneously at 2 months. The second case showed early anatomical and functional improvement provided by immediate closure of the MH but developed flap contracture and a nasally located epiretinal membrane (ERM) at 18 months postoperatively. There was no functional deterioration, so no further intervention was required. In the third case, early postoperative flap dislocation was observed and an additional surgery to reposition the flap was needed. The flap closure pattern observed with inverted ILM flap techniques may represent the ongoing healing process of large MHs and may be related to delayed spontaneous anatomical closure. ILM flap contracture and ERM formation may be a harmless long-term complication. Dislocation of the ILM flap is an unexpected early postoperative complication that may necessitate a second surgery for flap repositioning.
abstract_id: PUBMED:37484614
Temporal and double inverted internal limiting membrane flap for bilateral choroidal ruptures complicated by bilateral macular holes. Choroidal ruptures occur in 5% to 10% of closed-globe injuries, with wide variation in visual prognosis depending on the visual acuity at presentation, the location of the rupture, and other associated ocular injuries. We report a case of bilateral traumatic choroidal rupture with a large macular hole. In the right eye we performed microincisional vitrectomy, a temporal inverted internal limiting membrane (ILM) flap, and C3F8 tamponade; in the left eye we performed microincisional vitrectomy, fibrotic scar removal, a double inverted ILM flap, and C3F8 tamponade. After surgery, she achieved good anatomical results and visual acuity improvement in the right eye, but limited visual acuity improvement in the left eye due to subfoveal choroidal scar formation.
abstract_id: PUBMED:31118552
Inverted internal limiting membrane (ILM) flap technique for macular hole closure: patient selection and special considerations. This paper reviews the current status of the newer inverted internal limiting membrane flap technique for macular hole surgery. It gives an overview of the importance of patient selection and special considerations along with variations in technique. It discusses the pathophysiology and how the technique has been an important addition in the armamentarium of vitreoretinal surgeons to attain better anatomical as well as functional results in challenging situations.
abstract_id: PUBMED:36588234
Comparative study of conventional internal limiting membrane peeling versus temporal inverted internal limiting membrane flap for large macular hole treatment. Purpose: To compare the anatomical, morphological, and functional outcomes of the conventional internal limiting membrane (ILM) peeling versus temporal inverted ILM flap technique for large full-thickness macular holes (FTMHs).
Methods: Sixty eyes of 60 patients with a minimum base diameter >600 μm were included in this retrospective interventional study. Patients were divided into conventional ILM peeling (Group 1) and temporal inverted ILM flap (Group 2) groups. The hole closure rate, best-corrected visual acuity (BCVA), ellipsoid zone (EZ), and external limiting membrane (ELM) defects were analyzed at baseline and 6 months after surgery.
Results: Hole closure was achieved in 24/32 (75.0%) cases of Group 1 and 27/28 (96.4%) cases of Group 2 (P = 0.029). The mean BCVA (logMAR) changed from 1.23 ± 0.47 to 0.70 ± 0.29 logMAR in Group 1 and from 1.03 ± 0.36 to 0.49 ± 0.24 logMAR in Group 2 at 6 months (P < 0.001 in both cases). U-shaped closure was observed in 5 (15.6%) eyes in Group 1 and 19 (67.9%) eyes in Group 2 (P < 0.001). The total restoration rates of ELM and EZ were significantly higher in the temporal inverted ILM flap group (P = 0.002, P = 0.001, respectively).
Conclusion: The study results suggested that the FTMH closure rate, the recovery of the outer retinal layers, and, consequently, the post-operative BCVA were better with the temporal inverted ILM flap technique than with conventional ILM peeling for macular holes larger than 600 μm.
abstract_id: PUBMED:29179705
Vitrectomy with internal limiting membrane peeling versus inverted internal limiting membrane flap technique for macular hole-induced retinal detachment: a systematic review of literature and meta-analysis. Background: To evaluate the effects of vitrectomy with internal limiting membrane (ILM) peeling versus vitrectomy with the inverted internal limiting membrane flap technique for macular hole-induced retinal detachment (MHRD).
Methods: Pubmed, Cochrane Library, and Embase were systematically searched for studies that compared ILM peeling with inverted ILM flap technique for macular hole-induced retinal detachment. The primary outcomes are the rate of retinal reattachment and the rate of macular hole closure 6 months later after initial surgery, the secondary outcome is the postoperative best-corrected visual acuity (BCVA) 6 months later after initial surgery.
Results: Four studies that included 98 eyes were selected. All the included studies were retrospective comparative studies. The preoperative best-corrected visual acuity was comparable between the ILM peeling and inverted ILM flap technique groups. The rate of retinal reattachment (odds ratio (OR) = 0.14, 95% confidence interval (CI): 0.03 to 0.69; P = 0.02) and of macular hole closure (OR = 0.06, 95% CI: 0.02 to 0.19; P < 0.00001) after initial surgery was higher in the group treated with vitrectomy and the inverted ILM flap technique than in the group treated with vitrectomy and ILM peeling. However, there was no statistically significant difference in postoperative best-corrected visual acuity (mean difference (MD) 0.18 logarithm of the minimum angle of resolution; 95% CI -0.06 to 0.43; P = 0.14) between the two surgery groups.
Conclusion: Compared with ILM peeling, vitrectomy with the inverted ILM flap technique resulted in significantly higher rates of retinal reattachment and macular hole closure, but did not appear to improve postoperative best-corrected visual acuity.
abstract_id: PUBMED:32676272
Inverted temporal internal limiting membrane flap technique for chronic large traumatic macular hole. Various modifications of surgical techniques and surgical adjuncts are adopted with standard pars plana vitrectomy (PPV) to improve the outcome of traumatic macular hole (TMH) surgery. We describe the successful closure of a chronic large TMH of three years' duration with the inverted temporal internal limiting membrane (ILM) flap technique. A 36-year-old male patient had an optical coherence tomography (OCT)-documented chronic macular hole (MH) for three years following blunt trauma. Fundus examination also showed a choroidal rupture scar temporal to the fovea. On OCT, the minimum MH diameter was 769 µm and the basal diameter 1431 µm. Standard PPV with an inverted temporal ILM flap and gas tamponade was performed. The postoperative period was uneventful. The best corrected visual acuity improved from 6/60 preoperatively to 6/18 six months postoperatively, and OCT showed a closed MH with anatomical type 1 closure. This case highlights that the inverted temporal ILM flap technique is a safe and effective technique for patients with even a chronic and large TMH.
abstract_id: PUBMED:31920283
Comparison of Three Different Techniques of Inverted Internal Limiting Membrane Flap in Treatment of Large Idiopathic Full-Thickness Macular Hole. Purpose: To evaluate and compare three different techniques of inverted internal limiting membrane (ILM) flap in the treatment of large idiopathic full-thickness macular hole.
Methods: In a comparative interventional case series, 72 eyes from 72 patients with large (> 400 µm) full-thickness macular hole were randomly enrolled into three different groups: group A - hemicircular ILM peel with temporally hinged inverted flap; group B - circular ILM peel with temporally hinged inverted flap; and group C - circular ILM peel with superior inverted flap. Best-corrected visual acuity (BCVA), anatomical closure rate, and ellipsoid zone (EZ) or external limiting membrane (ELM) defects were evaluated preoperatively, at week 1, and months 1, 3 and 6 after surgery.
Results: There were 24 eyes in group A, 23 in group B, and 25 in group C. In all three groups, a larger macular hole diameter was associated with worse preoperative visual acuity (r=0.625, P<0.001). Mean BCVA improved significantly in all three groups 6 months after surgery (0.91 vs 0.55, p<0.001): from 0.91 logMAR to 0.52±0.06 in group A, from 0.90 to 0.53±0.06 in group B, and from 0.91 to 0.55±0.11 in group C. The improvement of BCVA was 0.380±0.04 vs. 0.383±0.04 vs. 0.368±0.11 logMAR in groups A, B, and C, respectively, with no statistically significant difference between groups (P=0.660). The rate of successful hole closure was 87.5% vs. 91.3% vs. 100%. Although the closure rate was 100% in group C (circular ILM peel with superiorly hinged inverted flap), this difference was not statistically significant (P=0.115).
Conclusion: ILM peel with an inverted flap is a highly effective procedure for the treatment of large, full-thickness macular hole. Different flap techniques have comparable results, indicating that the technique can be chosen based on surgeon preference.
abstract_id: PUBMED:34703743
Changes in retinal sensitivity following inverted internal limiting membrane flap technique for large macular holes. Purpose: The aim of this study was to evaluate the effect of inverted internal limiting membrane (ILM) flap technique and measure the retinal sensitivity using microperimetry-1 (MP-1) test in patients with large macular hole (MH).
Materials And Methods: We enrolled patients undergoing surgery for idiopathic MHs from January 2016 to October 2019. Only patients having a minimum diameter of idiopathic MH exceeding 500 μm were included in this study. All patients underwent complete preoperative ophthalmologic examinations, optical coherence tomography (OCT), and best-corrected visual acuity (BCVA) measurements. Postoperative OCT and BCVA were evaluated at least 3 months postoperatively. In addition, these patients also received MP-1 pre- and postoperatively for retinal sensitivity measurement.
Results: In total, ten patients (ten eyes) were included in the analysis. The mean retinal sensitivity within the central 12° and 40° was statistically improved after surgery (P < 0.05). The number of absolute or relative scotomas (stimulus values ≤4 dB) within the central 4° showed a significant reduction postoperatively. There was also a significant increase in visual acuity postoperatively.
Conclusion: Patients with large MHs have a high success rate with the inverted ILM flap technique. In our study, the MHs of all ten eyes were closed postoperatively. The results also demonstrated that the ILM flap technique improves both functional and anatomic outcomes.
abstract_id: PUBMED:29610221
Comparative analysis of large macular hole surgery using an internal limiting membrane insertion versus inverted flap technique. Background: To determine whether the internal limiting membrane (ILM) insertion technique is as effective as the inverted ILM flap technique for the initial surgical treatment of eyes with large idiopathic macular holes (MHs).
Methods: This retrospective, non-randomised, comparative clinical study included 41 eyes with large MHs (minimum diameter >500 µm) that were treated using the ILM insertion technique or the inverted ILM flap technique. The hole closure rate, postoperative best corrected visual acuity (BCVA) and swept source optical coherence tomography findings were analysed at 6 months after surgery.
Results: There were 15 and 26 eyes in the insertion and inverted flap groups, respectively. Hole closure was achieved in all eyes. The mean final BCVA was better in the inverted flap group than in the insertion group (0.527 vs 0.773, p=0.006), although significant postoperative improvements were observed in both groups (p<0.001). Postoperative foveal discolouration was more common in the insertion group than in the inverted flap group (86.7% vs 7.7%, p<0.001). Complete resolution of ellipsoid zone and external limiting membrane defects was observed in 7 and 18 eyes, respectively, in the inverted flap group; in contrast, complete resolution was not observed in any of the eyes in the insertion group (p=0.035 and p<0.001, respectively).
Conclusion: The ILM insertion technique may be as effective as the inverted ILM flap technique for the closure of large MHs. However, the latter technique results in better recovery of photoreceptor layers and, consequently, better postoperative visual acuity.
Answer: Yes, covering the macular hole with the inverted internal limiting membrane (ILM) flap is a technique used in macular hole surgery. The inverted ILM flap technique has been shown to be effective in improving anatomical and functional outcomes in macular hole repair, particularly in cases of large full-thickness macular holes (FTMHs) and macular hole-induced retinal detachment (MHRD) (PUBMED:36588234, PUBMED:29179705).
The technique involves creating a flap from the ILM and positioning it over the macular hole to promote closure. Studies have demonstrated that the inverted ILM flap technique can result in a higher rate of macular hole closure and retinal reattachment compared to ILM peeling alone (PUBMED:29179705). Additionally, the inverted ILM flap technique has been associated with better recovery of the outer retinal layers, which can lead to improved postoperative best-corrected visual acuity (BCVA) (PUBMED:36588234, PUBMED:29610221).
Moreover, the inverted ILM flap technique has been adapted and modified in various ways to treat different types of macular holes, including large idiopathic FTMHs and traumatic macular holes (PUBMED:31920283, PUBMED:32676272). The technique has been shown to be highly effective for the treatment of large FTMHs, with different flap techniques yielding comparable results, indicating that the choice of technique can be based on surgeon preference (PUBMED:31920283).
However, it is important to note that there can be complications related to the flap, such as partial detachment and the need for repositioning, contracture, and epiretinal membrane formation (PUBMED:37089043). In some cases, the ILM flap may spontaneously reattach and re-cover the macular hole, as observed in a case where the flap detached after the tamponade gas was absorbed but later spontaneously re-covered the hole, leading to improved visual acuity (PUBMED:33937583).
In conclusion, the inverted ILM flap technique is a necessary and effective approach in macular hole surgery, particularly for large and challenging cases, and it has been shown to improve both anatomical and functional outcomes. |
Instruction: Does cesarean section prevent mortality and cerebral ultrasound abnormalities in preterm newborns?
Abstracts:
abstract_id: PUBMED:17437214
Does cesarean section prevent mortality and cerebral ultrasound abnormalities in preterm newborns? Objective: Despite the increased use of cesarean section (CS), the rates of cerebral palsy, a frequent consequence of brain damage, have remained stable over the last decades. Whether an actual decrease in cerebral palsy has been masked by the increased survival of infants delivered by CS remains undefined. To investigate the role of CS, we compared the risks of mortality and brain damage, as defined by ultrasound (US) abnormalities, in preterm newborns by mode of delivery.
Methods: Information on fetal, maternal, and neonatal risk factors was collected from the paired clinical records of preterm newborns and mothers. Crude and adjusted odds ratios (OR) of mortality and ultrasound abnormalities, according to mode of delivery (i.e., vaginal, elective CS, and emergency CS) were calculated. All the analyses were controlled for possible confounding by indication.
Results: In newborns of gestational age <32 weeks, no effect of CS on cerebral US abnormalities was found (OR 0.71 and 0.73 for emergency CS and elective CS, respectively). None of the maternal and neonatal factors were associated with both cerebral US abnormalities and mode of delivery. Among newborns of gestational age ≥32 weeks, after controlling for known and potential confounders in a multivariate model, the adjusted ORs remained close to one for both elective CS and emergency CS.
Conclusions: CS does not reduce overall mortality in preterm newborns. No protective effect of CS on US abnormalities was found after stratifying by gestational age and controlling for possible confounding. These results do not encourage the widespread use of CS in preterm labor.
abstract_id: PUBMED:20486542
Mode of delivery and mortality among preterm newborns. Objective: The purpose of our study was to analyze the frequency of preterm deliveries in Obstetrics & Gynecology Clinic, University Clinical Centre of Kosovo, Prishtina (Republic of Kosovo), as well as to assess the survival advantage of premature newborns according to mode of delivery (cesarean section vs. vaginal).
Material And Methods: A cohort of 12,466 deliveries from the year 2002 was studied retrospectively and preterm deliveries were analyzed. Survival advantage until 28 days of life associated with cesarean and vaginal delivery was assessed with regard to birth weights (500-999 g, 1000-1499 g, 1500-1999 g, and 2000-2499 g).
Results: There were 1,135 preterm deliveries which resulted in 1,189 preterm infants (including multiples). The overall cesarean delivery rate in this group was 32.2%. Among preterm newborns with a birth weight of 500-999 g, 68 children were delivered vaginally and 5 by cesarean section (5.7% and 0.4% of all preterm babies, respectively). None of these infants survived. The percentage of children delivered by cesarean section in the other groups was higher: 3.2% for preterm infants with a birth weight of 1000-1499 g, 8.8% for 1500-1999 g, and 19.8% for 2000-2499 g. A survival advantage associated with cesarean section was observed in neonates with a birth weight of 1000-1499 g (p < 0.01).
Conclusions: On the basis of our study it can be concluded that cesarean delivery is associated with a decreased neonatal mortality risk in preterm neonates but only in those with birth weight of 1000-1499 g.
abstract_id: PUBMED:32350375
Risk Factors for Neonatal Mortality in Preterm Newborns in The Extreme South of Brazil. Neonatal mortality still remains a complex challenge to be addressed. In Brazil, 60% of neonatal deaths occur among preterm infants with a gestational age of 32 weeks or less (≤32w). The aim of this study was to evaluate the factors involved in the high mortality rates among newborns with a gestational age ≤32w in a socioeconomically developed southern city in Brazil. Retrospective data on births and deaths (2000-2014) were analyzed from two official Brazilian national databases. The risk of neonatal death for all independent variables (mother's age and schooling, prenatal visits, birth hospital, delivery method, gestational age, and the newborn's sex, age, birth year, gemelarity (twin birth), congenital anomalies, and birthplace) was assessed with univariable and multivariable Cox semiparametric proportional hazards regression models (p < 0.05). Data of 288,904 newborns were included, 4,514 of whom had a gestational age ≤32w. The proportion of these early newborns remained stable among all births, while the neonatal mortality rate for this group tended to decrease (p < 0.001). The adjusted risk was significantly lower for lower birthweight infants (mean 659.13 g) born by Caesarean section (HR 0.58 [95% CI 0.47-0.71]), but it was significantly higher for heavier birth weight infants (mean 2,087.79 g) also born via Caesarean section (HR 3.71 [95% CI 1.5-9.15]). Newborns with lower weight seemed to benefit most from Cesarean deliveries. Efforts towards reducing unacceptably high rates of surgical delivery must take into account cases in which the operation may be lifesaving for the mother and/or the baby.
abstract_id: PUBMED:2025032
Trends in preterm survival and incidence of cerebral haemorrhage 1980-9. The annual survival rates and incidence of cerebral haemorrhage in 2618 preterm infants of 34 weeks' gestation or less were examined in one referral centre over a 10 year period from January 1980 to December 1989. Survival was independently related to weight, gestation, sex, and inborn delivery. When these variables had been taken into account, survival was 56% greater at the end of the decade compared with 1980. The incidence of cerebral haemorrhage (diagnosed by cranial ultrasound scanning) was related to birth weight, gestation, sex, inborn delivery, and caesarean section, but there was no significant trend in the incidence with time. Rates of caesarean section in this group increased from 31% in 1980 to over 50% more recently. Haemorrhage affecting the brain parenchyma was related to gestation and inborn delivery, and showed a small but significant decline over time. The lack of association between changes in survival rates and rates of cerebral haemorrhage may indicate that factors associated with both neonatal mortality and the incidence of cerebral haemorrhage may not be causally related as previously assumed.
abstract_id: PUBMED:26835280
Epidemiological survey on newborns born at the obstetric departments in hospitals in mid-southern region of China in 2005. Objective: To investigate the conditions at birth of newborns in the mid-southern region of China by performing a survey of newborns born at urban hospitals.
Methods: A total of 23 hospitals in the mid-southern region of China were selected to participate in this survey. The data of 15,582 newborns who were born at the obstetric departments from January 1, 2005 to December 31, 2005 were retrospectively investigated.
Results: The male to female ratio among newborns was 1.16:1. The incidence of preterm birth was 8.11%, while very low birth weight (VLBW) infants accounted for 0.73%. The rates of spontaneous vaginal delivery and cesarean section were 57.52% and 40.82%, respectively, while other delivery modes accounted for 1.66%. The incidence of neonatal asphyxia was 3.78% (0.75% for severe asphyxia). The mortality of newborns was 0.55% (5.56% for preterm infants).
Conclusions: The incidences of preterm birth and neonatal asphyxia are relatively high in the mid-southern region of China. The rate of cesarean section is abnormally high, which is due mainly to social factors.
abstract_id: PUBMED:19757335
Neonatal morbidity and mortality of late-preterm babies. Objective: To analyze neonatal morbidity and mortality rates of late-preterms and to compare them with their term counterparts in a tertiary care unit in Turkey.
Study Design: The study included 252 late-preterm newborns (34 0/7 to 36 6/7 weeks' gestational age) admitted to the Neonatal Intensive Care Unit in the first 24 h of life between January 2005 and June 2007, and 252 term newborns born in the same hospital during the same period. Babies with major congenital and/or chromosomal abnormalities were excluded.
Results: The mortality rate was 2.3% in late-preterms. None of the term newborns died. Compared to terms, late-preterms were 11 times more likely to develop respiratory distress, 14 times more likely to have feeding problems, 11 times more likely to exhibit hypoglycemia, 3 times more likely to be readmitted and 2.5 times more likely to be rehospitalized. Late-prematurity, being large for gestational age, male gender, and cesarean delivery were significant risk factors for respiratory distress.
Conclusion: Late-preterms have significantly higher risk of morbidity and mortality compared with term newborns. Greater concern and attention is required for the care of this ignored, at-risk population.
abstract_id: PUBMED:36013486
Changes Overtime in Perinatal Management and Outcomes of Extremely Preterm Infants in One Tertiary Care Romanian Center. Background and Objectives: Extremely preterm infants are at increased risk of mortality and morbidity. The purpose of this study was to: (1) examine changes over time in perinatal management, mortality, and major neonatal morbidities among infants born at 25 0/7 to 28 6/7 weeks' gestational age and cared for at one Romanian tertiary care unit and (2) compare the differences with available international data. Material and Methods: This study consisted of infants born at 25 0/7 to 28 6/7 weeks' gestation in one tertiary neonatal academic center in Romania during two 4-year periods (2007-2010 and 2015-2018). Major morbidities were defined as any of the following: severe intraventricular hemorrhage (IVH), severe retinopathy of prematurity (ROP), necrotizing enterocolitis (NEC), and bronchopulmonary dysplasia (BPD). Adjusted logistic regression models examined the association between the mortality and morbidity outcomes and the study period. Results: The two cohorts differed with respect to antenatal antibiotics and rates of cesarean birth but had similar exposure to antenatal steroids and newborn referral to the tertiary care center. In logistic regression analyses, infants in the newer compared to the older cohort had a lower incidence of death (OR: 0.19; 95% CI: 0.11-0.35), a lower incidence of IVH (OR: 0.26; 95% CI: 0.15-0.46), and an increased incidence of NEC (OR: 19.37; 95% CI: 2.41-155.11). Conclusions: Changes over time included higher use of antenatal antibiotics and cesarean delivery and no change in antenatal steroid administration. Overall mortality was lower in the newer cohort, especially for infants at 25 0/7 to 26 6/7 weeks' gestation; NEC was higher, while BPD and ROP were not different.
abstract_id: PUBMED:22682965
Histological chorioamnionitis is associated with cerebral palsy in preterm neonates. Objective: To determine the interaction between histological chorioamnionitis and unexplained neonatal cerebral palsy among low birth weight infants.
Study Design: We studied 141 preterm infants below 1500 g delivered between 2000 and 2010. Clinical data, neonatal neuroimaging, laboratory results, the histopathological features of the placenta and gastric smear within the first hour of delivery, were evaluated.
Results: Cerebral palsy was detected in 11 of 141 preterm newborns (7.8%). The incidence of silent histological chorioamnionitis was 33.6% (43 of 128 cases). Chorioamnionitis was significantly associated with the risk of unexplained cerebral palsy (p=0.024). There were also significant correlations between maternal genital infections and chorioamnionitis (p=0.005), and between maternal infections and a positive smear of neonatal gastric aspirates (p=0.000). The rate of cesarean section was 67.4% (95 of 141 deliveries), and elective cesarean section was performed in 68 cases.
Conclusion: Intrauterine exposure to maternal infection was associated with a marked increase in the risk of cerebral palsy in preterm infants.
abstract_id: PUBMED:22840720
Neonatal mortality by attempted route of delivery in early preterm birth. Objective: We sought to study neonatal outcomes in early preterm births by delivery route.
Study Design: Delivery precursors were analyzed in 4352 singleton deliveries, 24 0/7 to 31 6/7 weeks' gestation. In a subset (n = 2906) eligible for a trial of labor, neonatal mortality in attempted vaginal delivery (VD) was compared to planned cesarean delivery stratified by presentation.
Results: Delivery precursors were classified as maternal or fetal conditions (45.7%), preterm premature rupture of membranes (37.7%), and preterm labor (16.6%). For vertex presentation, 79% attempted VD and 84% were successful. There was no difference in neonatal mortality. For breech presentation, at 24 0/7 to 27 6/7 weeks' gestation, 31.7% attempted VD and 27.6% were successful; neonatal mortality was increased (25.2% vs 13.2%, P = .003). At 28 0/7 to 31 6/7 weeks' gestation, 30.5% attempted VD and 17.2% were successful; neonatal mortality was increased (6.0% vs 1.5%, P = .016).
Conclusion: Attempted VD for vertex presentation has a high success rate with no difference in neonatal mortality unlike breech presentation.
abstract_id: PUBMED:26015815
Birth weight independently affects morbidity and mortality of extremely preterm neonates. Background: Neonates born between 24 + 0 and 27 + 6 gestational weeks, widely known as extremely preterm neonates, represent a category characterized by increased neonatal mortality and morbidity. The main objective of the present study is to analyze the effect of various epidemiological and pregnancy-related parameters on unfavorable neonatal mortality and morbidity outcomes.
Methods: A retrospective study was performed enrolling cases delivered during 2003 - 2008 in our department. Pathological Apgar scores (≤ 4 in the first and ≤ 7 in the fifth minute of life), need for emergency resuscitation, respiratory distress syndrome (RDS), neonatal asphyxia, intraventricular hemorrhage (IVH), and neonatal death were recorded for the neonates in our analysis. A multivariate regression model was used to correlate these outcomes with gestational week at delivery, maternal age, parity, kind of gestation (singleton or multiple), intrauterine growth restriction (IUGR), birth weight (BW), preterm premature rupture of membranes (PPROM), mode of delivery (vaginal delivery or cesarean section), and antenatal use of corticosteroids.
Results: Out of 5,070 pregnancies delivered, 57 extremely preterm neonates were born (1.1%). Mean BW was 780.35 ± 176.0 g, RDS was observed in 93.0% (n = 53), resuscitation was needed in 54.4% (n = 31), and the overall mortality rate was 52.6% (n = 30). BW was independently associated with neonatal death (P = 0.004), pathological Apgar score in the first (P = 0.05) and fifth minute of life (P = 0.04), as well as neonatal sepsis (P = 0.05).
Conclusion: BW at delivery independently affects neonatal mortality and morbidity parameters in extremely preterm neonates.
Answer: The evidence from the provided abstracts suggests that the impact of cesarean section (CS) on mortality and cerebral ultrasound abnormalities in preterm newborns is not straightforward and may vary depending on gestational age and birth weight.
One study found that CS does not reduce overall mortality in preterm newborns and did not show a protective effect on cerebral ultrasound abnormalities after controlling for confounding factors (PUBMED:17437214). Another study concluded that CS is associated with a decreased neonatal mortality risk in preterm neonates but only in those with a birth weight of 1000-1499 g (PUBMED:20486542). Similarly, a study from Brazil indicated that lower birth weight infants born from Caesarean had a significantly lower adjusted risk of neonatal death, while heavier birth weight infants born via Caesarean section had a significantly higher risk (PUBMED:32350375).
An analysis of trends in preterm survival and incidence of cerebral haemorrhage over a decade showed that while survival rates improved, there was no significant trend in the incidence of cerebral haemorrhage, despite increasing rates of CS (PUBMED:2025032). In contrast, a study on extremely preterm infants in Romania reported lower overall mortality in the newer cohort, which had higher rates of cesarean birth (PUBMED:36013486).
In the context of early preterm birth, one study found that attempted vaginal delivery for vertex presentation had a high success rate with no difference in neonatal mortality compared to planned cesarean delivery, while for breech presentation, neonatal mortality was increased with attempted vaginal delivery (PUBMED:22840720).
Overall, the data suggest that while CS may offer a survival advantage in certain groups of preterm newborns, particularly those with a birth weight of 1000-1499 g, it does not universally prevent mortality or cerebral ultrasound abnormalities in preterm newborns. The decision to perform a CS should be based on individual clinical circumstances rather than as a blanket approach for all preterm labor cases. |
Instruction: Can certain benign lesions of the proximal femur be treated without surgery?
Abstracts:
abstract_id: PUBMED:30373674
Treatment of the benign lytic lesions of the proximal femur with synthetic bone graft. Background: Benign bone tumors and tumor-like conditions are commonly located in the proximal femur. The main indications for surgical treatment are impending or actual pathological fractures and aggressive or recurrent lesions. However, patients complaining of persistent pain, limping, or abnormal gait patterns are also considered for surgical treatment. In this study, we describe the outcomes of the surgical treatment of benign lytic lesions of the proximal femur by curettage followed by implantation of synthetic bone graft.
Methods: This retrospective study included 27 patients (22 females and 5 males) with benign lytic lesions of the proximal femur. The average age was 25.5 years (6-65 years), and the mean follow-up period was 54.5 months (9-145 months). The histopathological diagnoses were fibrous dysplasia (8 patients), simple bone cyst (8 patients), chondroblastoma (7 patients), giant cell tumor (3 patients), and eosinophilic granuloma (1 patient). These lesions were managed with curettage followed by filling of the bone defects with alpha tricalcium phosphate in 14 patients, beta tricalcium phosphate granules in 11 patients, hydroxyapatite granules in 1 patient, and combined beta tricalcium phosphate and hydroxyapatite granules in 1 patient. Internal fixation was performed in three patients.
Results: The mean operative time was 143 min (80-245 min). Patients regained normal, unrestricted activity without pain at the operative site. Patients treated with beta tricalcium phosphate achieved radiographic consolidation of the bone defects within 1 year after the surgery, and those treated with alpha tricalcium phosphate or hydroxyapatite experienced no progression or recurrence of the lesions. Local tumor recurrence was observed in one patient with a giant cell tumor 5 years after the surgery. A post-operative pathological fracture occurred in one patient with a simple bone cyst of the subtrochanteric region 1 month after surgery. No post-operative infection was observed.
Conclusion: We concluded that the treatment of benign lytic lesions of the proximal femur, either primary or recurrent, using synthetic bone graft is a safe and satisfactory method and the addition of internal fixation should be carefully planned.
abstract_id: PUBMED:23670674
Can certain benign lesions of the proximal femur be treated without surgery? Background: Benign lesions in the proximal femur can cause pathologic fractures. To avoid fracture, benign tumors and tumor-like lesions in this region often are treated surgically, yet there have been few reports regarding the decision-making processes or protocols for nonsurgical treatment of these lesions.
Questions/purposes: In this study, we asked (1) whether some benign lesions of the proximal femur can be managed safely using a conservative protocol, and (2) if observed according to such a protocol, what are the outcomes of such lesions at this anatomic site?
Methods: Fifty-four consecutive patients who had been followed for at least 12 months were enrolled in this study. The mean age of the patients at first visit was 38 years (range, 13-70 years), and the minimum followup was 12 months (mean, 25 months; range, 12-59 months). After ruling out malignancy, lesions were categorized as aggressive benign tumors or nonaggressive benign lesions using a standardized approach. We used conservative treatment for most patients with nonaggressive, benign lesions. Surgery was performed only for patients with nonaggressive lesions who met our fracture risk criteria: pain on initiating hip movement, progressively worsening pain, cortical thinning, and the absence of a sclerotic margin.
Results: Of the 47 patients with a nonaggressive, benign lesion without fracture at presentation, 83% were treated conservatively and only 10% of these patients had progression of the lesion. No new pathologic fractures developed during followup. In 88% of patients who presented with pain that was managed conservatively, pain improved either partially or completely at final followup.
Conclusions: Most nonaggressive, benign lesions in the proximal femur can be treated conservatively, and our protocol appears to be a useful outpatient guideline.
Level Of Evidence: Level IV, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence.
abstract_id: PUBMED:27317344
A treatment strategy for proximal femoral benign bone lesions in children and recommended surgical procedures: retrospective analysis of 62 patients. Purpose: We aimed to develop a surgical treatment strategy for benign bone lesions of the proximal femur based upon retrospective review of our data in 62 children.
Methods: Sixty-two children [38 male, 24 female; median age 9 years (range 5-18 years)] with proximal femoral benign bone lesions were surgically treated between 2005 and 2013. Histopathological diagnoses were simple (31) or aneurysmal (27) bone cysts, and nonossifying fibromas (4). The pathological fracture rate was 77.4%. The surgical treatment was determined according to four criteria: the patient's skeletal maturity, the localization of the lesion, the initial diagnosis, and the amount of bone loss in the femoral neck and lateral proximal femur. The surgical procedure consisted of biopsy, curettage, bone grafting, and internal fixation when required. The median follow-up was 45 months (range 25-89 months).
Results: Complete clinical recovery was achieved in 56 (90.3%) patients between 4 and 8 months postoperatively; full weight-bearing and mobilization, without pain or limping, were possible. The median preoperative and last follow-up Musculoskeletal Tumor Society (MSTS) scores were 13.3% (range 10-23.3%) and 96.6% (range 90-100%), respectively (p < 0.0001). The pathological fractures healed in 10 weeks on average (range 8-12 weeks). Fifty-seven (92%) patients demonstrated complete or significant partial radiographic healing between 5 and 7 months that was maintained throughout follow-up. Local recurrence was not observed, and only 1 (1.6%) patient required reoperation for partial cyst healing. There were 5 (8%) complications, 1 (1.6%) of which required reoperation.
Conclusions: This treatment strategy can provide good local control and excellent functional and radiological results in the management of benign bone lesions of the proximal femur in children.
abstract_id: PUBMED:27163071
Treatment of the benign bone tumors including femoral neck lesion using compression hip screw and synthetic bone graft. Purpose: The proximal femur is one of the most common locations for benign bone tumors and tumor-like conditions. We describe the clinical outcomes of the surgical treatment of benign lesions of the proximal femur, including the femoral neck, using a compression hip screw and synthetic bone graft.
Methods: Thirteen patients with benign bone tumors or tumor-like conditions of the proximal femur, including the femoral neck, were surgically treated. Their average age at the time of presentation was 35 years, and the average follow-up time was 76 months.
Results: The average intraoperative blood loss was 1088 mL, and intraoperative blood transfusion was required in eight patients. The average operative time was 167 minutes. All patients required between one and 12 weeks after surgery before full weight-bearing was allowed. All patients had regained full physical function without pain by the final follow-up. No patient sustained a pathological fracture of the femur following the procedure. All patients achieved partial or complete radiographic consolidation of the lesion within one year, except one patient who developed a local tumor recurrence at 11 months. Post-operative superficial wound infection was observed in one patient and resolved with intravenous antibiotics. Chronic hip pain was observed in one patient due to irritation of the tensor fascia lata muscle by the tube plate.
Conclusion: We suggest that the treatment of benign bone lesion of the proximal femur using compression hip screw and synthetic bone graft is a safe and effective method.
abstract_id: PUBMED:28300344
Intramedullary Nailing Combined with Bone Grafting for Benign Lesions of the Proximal Femur. Objective: To evaluate the effectiveness of intramedullary nailing for benign lesions of the proximal femur.
Method: A retrospective analysis was carried out on 68 cases of benign lesions in the proximal femur at our hospital from April 2002 to April 2013 (38 men and 30 women). Mean age at surgery was 35.5 years (range, 22-56 years). The cases were divided into two groups: curettage of the lesion with bone grafting only as the grafting group (32 cases) and internal fixation after removal of the lesion as the fixation group (36 cases). For the grafting group, lesions were scraped out, deactivated and washed thoroughly with normal saline, then the allogeneic bone was implanted. For the fixation group, after the lesions were scraped, the intramedullary nails were implanted, and allogeneic bone was implanted into the scraped cavity with compaction.
Results: Pathological examination showed that 24 of 68 cases (35.3%) were simple bone cysts (with pathological fracture in 2 cases); 21 (30.9%) fibrous dysplasia; 18 (26.5%) aneurysmal bone cysts; and 3 (4.4%) chondroblastoma, 2 (2.9%) of which were combined with aneurysmal bone cysts. All patients were followed up for 12-96 months (mean, 56 months). In the grafting group, 4 patients had postoperative complications (1 pathological fracture and 3 deep vein thromboses), compared with only 1 patient (deep vein thrombosis) in the fixation group (P < 0.05). The average bedridden time after surgery was 11.4 ± 7.6 days for the grafting group and 7.5 ± 5.4 days for the fixation group (P < 0.05).
Conclusion: Both treatments are effective for benign lesions in the proximal femur, but the addition of internal fixation facilitated the functional recovery of patients and reduced postoperative complications.
abstract_id: PUBMED:29806361
Treatment of benign bone lesions of proximal femur using dynamic hip screw and intralesional curettage via Watson-Jones approach. Objective: To explore the effectiveness of dynamic hip screw (DHS) and intralesional curettage via the Watson-Jones approach in the treatment of benign bone lesions of the proximal femur.
Methods: Between March 2012 and December 2016, 20 patients (21 lesions) with benign bone tumors or tumor-like conditions of the proximal femur were treated with DHS and intralesional curettage via the Watson-Jones approach. Their average age was 27.8 years (range, 11-51 years); there were 13 males and 7 females. The pathological diagnoses were fibrous dysplasia in 11 cases, simple bone cyst in 2 cases, aneurysmal bone cyst in 2 cases, benign fibrous histiocytoma in 2 cases, giant cell tumor in 2 cases, and chondroblastoma in 1 case, including 3 pathological fractures. According to the Enneking staging system, 18 patients were in stage S1 and 3 patients with pathological fractures were in stage S2. There was no varus or valgus deformity. The operation time, intraoperative blood loss, and time to full weight-bearing were recorded. X-ray films and CT were used to observe bone graft fusion and the location of the DHS. Complications were recorded. Visual analogue scale (VAS) and Musculoskeletal Tumor Society (MSTS) scores were used to evaluate the function of the lower limbs.
Results: The average operation time was 177.1 minutes (range, 110-265 minutes). The average intraoperative blood loss was 828.6 mL (range, 200-2,300 mL). There was superficial incision infection in 1 case, deep incision infection in 1 case, and hip discomfort in 1 case. All patients were followed up for 6-63 months (mean, 27.4 months). The time to full weight-bearing was 2 days in the 2 patients with giant cell tumor and 2 to 13 weeks (average, 7.2 weeks) in the other patients. At last follow-up, the VAS and MSTS scores were 0.19±0.51 and 29.62±0.97, respectively, showing significant differences compared with the preoperative values (3.52±2.62 and 23.71±8.77) (t=5.565, P=0.000; t=-3.020, P=0.007). X-ray films showed fusion of all bone grafts at a mean time of 8.2 months (range, 5-12 months). There was no pathological fracture of the femur, local tumor recurrence, chronic hip pain, dislocation, or femoral head necrosis during follow-up.
Conclusion: The treatment of benign bone lesion of the proximal femur using DHS and intralesional curettage via Watson-Jones approach is a safe and effective method.
abstract_id: PUBMED:30129314
Treatment of proximal femoral benign lesions by proximal femoral nail anti-rotation combined with curettage and bone graft through the Watson-Jones approach. Objective: To evaluate the feasibility and effectiveness of proximal femoral nail anti-rotation (PFNA) combined with curettage and bone graft through the Watson-Jones approach in the treatment of benign tumors and tumor-like lesions of the proximal femur.
Methods: The clinical data of 38 patients with benign tumors and tumor-like lesions in the proximal femur who were treated through the Watson-Jones approach with PFNA combined with curettage and bone graft between January 2008 and January 2015 were retrospectively analyzed. There were 24 males and 14 females with an average age of 28 years (range, 15-57 years). Pathological types included 20 cases of fibrous dysplasia, 7 cases of bone cyst, 5 cases of aneurysmal bone cyst, 3 cases of giant cell tumor of bone, 2 cases of enchondroma, and 1 case of non-ossifying fibroma. Before operation, hip pain occurred in 19 patients, pathological fracture occurred in 12 patients, limb shortening and coxa vara deformity were found in 4 patients, and 3 patients had undergone surgery for local recurrence. The operation time, intraoperative blood loss, and full weight-bearing time after operation were recorded. Patients were followed up to observe union of the bone graft and the position of the internal fixator on X-ray films and CT images. The visual analogue scale (VAS) score was used to evaluate the level of pain. The Musculoskeletal Tumor Society (MSTS93) score was used to evaluate lower limb function. The Harris hip score was used to evaluate hip joint function.
Results: The operation time was 130-280 minutes (mean, 182 minutes) and the intraoperative blood loss was 300-1,500 mL (mean, 764 mL). After operation, 3 cases of incisional fat liquefaction healed successfully with careful dressing, and the remaining incisions healed by first intention. All patients started partial weight-bearing exercise at 2-4 weeks after operation. The time to full weight-bearing was 3-6 months (mean, 4.2 months). All the patients were followed up for 24-108 months (median, 60 months). Imaging examination showed that the bone grafts fused, with a fusion time of 8-18 months (mean, 11.4 months). During the follow-up period, there were no complications such as pathological fracture, femoral head ischemic necrosis, hip joint dislocation, or internal fixation loosening or breakage, and no tumor recurrence or distant metastasis occurred. At last follow-up, the VAS score, MSTS93 score, and Harris score were significantly improved compared with the preoperative values (P<0.05).
Conclusion: The treatment of proximal femoral benign lesions by PFNA combined with curettage and bone graft through the Watson-Jones approach is safe and effective, with the advantages of better mechanical stability, less residual tumor, and fewer postoperative complications.
abstract_id: PUBMED:30828197
Comparison of complications and functional results of unstable intertrochanteric fractures of femur treated with proximal femur nails and cemented hemiarthroplasty. A prospective, comparative study was done over a period of 3 years to compare the complications and functional results of two treatment modalities for unstable intertrochanteric fractures of the femur in the elderly, i.e., closed reduction and internal fixation (CRIF) with a proximal femur nail (PFN), and primary cemented hemireplacement arthroplasty (HRA) with a bipolar prosthesis. 100 elderly patients with unstable intertrochanteric fractures of the femur were studied over a period of 3 years. 50 patients underwent CRIF with PFN and 50 patients were treated with primary cemented hemireplacement arthroplasty with a bipolar prosthesis. Harris Hip Score analysis revealed that the difference between the patients treated with cemented hemiarthroplasty and those treated with proximal femoral nailing was statistically significant in favour of the hemiarthroplasty group within the first 3 months. However, this difference diminished at the 6-month time point and reversed at the 12-month time point, indicating a better functional outcome of the proximal femur nail in the long term. Although cemented hemireplacement arthroplasty allows early pain-free mobilization and has a good short-term outcome, over time it is associated with a variety of complications which significantly affect the quality of life of patients. On the other hand, although patients treated with PFN had delayed postoperative mobilization, they had better results when followed up at 1 year after surgery.
abstract_id: PUBMED:31840957
Surgical treatment of proximal femur metastases. Aging of the population results in an increase in the incidence of cancer and bone metastases. The proximal femur is one of the most frequent locations of bone metastases. A pathological fracture has a major impact on quality of life and potentially on survival. In cases of impending fracture, prophylactic fixation is therefore strongly recommended. The management of metastases of the proximal femur depends on multiple parameters, life expectancy and fracture risk being the most important ones. If survival is estimated to be less than 6 weeks, surgery is generally not indicated. Beyond 6 weeks, the surgical indication depends essentially on the location of the metastases in the proximal femur and the presence of a fracture.
abstract_id: PUBMED:29954245
Review of 16 cases of aneurysmal bone cyst in the proximal femur treated by extended curettage and cryosurgery with reconstruction using autogenous nonvascularized fibula graft. Background: Aneurysmal bone cyst (ABC) is an aggressive benign lesion that may sometimes be difficult to treat. Lesions that occur in the proximal femur need to be addressed aggressively because of the high rate of local recurrence and the risk of fracture. Few articles report experience with the management of ABC in the proximal femur. This location presents a surgical challenge due to its anatomical and biomechanical peculiarities.
Methods: We retrospectively reviewed 16 cases with ABC in the proximal femur that were treated in our institution between 2005 and 2014, utilizing extended curettage and liquid nitrogen as adjuvant therapy and reconstruction using nonvascularized autogenous fibular strut graft. The mean follow-up period was 50.5 months. Five cases presented as recurrent cases, while four cases presented with pathological fractures.
Results: The functional score ranged from 70 to 90 on the modified Musculoskeletal Tumor Society scale, with a mean score of 81.25, which is considered excellent function. Time to graft incorporation ranged from 10 to 12 weeks. One case developed local recurrence, and one case developed an early surgical site infection.
Conclusions: Extended curettage and cryotherapy represent a recommended approach for management of ABC in the proximal femur with favorable results.
Answer: Yes, certain benign lesions of the proximal femur can be managed safely using a conservative protocol. A study asked whether some benign lesions of the proximal femur could be managed safely without surgery and what the outcomes of such lesions would be when observed according to a conservative protocol. The study involved 54 consecutive patients who had been followed for at least 12 months. After ruling out malignancy, lesions were categorized as aggressive benign tumors or nonaggressive benign lesions. Conservative treatment was used for most patients with nonaggressive, benign lesions. Surgery was performed only for patients with nonaggressive lesions who met specific fracture risk criteria. The results showed that 83% of the patients with a nonaggressive, benign lesion without fracture at presentation were treated conservatively, and only 10% of these patients had progression of the lesion. No new pathologic fractures developed during follow-up, and in 88% of patients who presented with pain that was managed conservatively, pain improved either partially or completely at final follow-up. The study concluded that most nonaggressive, benign lesions in the proximal femur can be treated conservatively, and the protocol used appears to be a useful outpatient guideline (PUBMED:23670674). |
Instruction: Effect of intraperitoneal and extraperitoneal insertion of mesh on bacterial translocation: does it make a difference?
Abstracts:
abstract_id: PUBMED:10365841
Effect of intraperitoneal and extraperitoneal insertion of mesh on bacterial translocation: does it make a difference? Objective: To assess the effect of insertion of mesh, with or without contact with the peritoneum, on the induction of bacterial translocation.
Design: Open experimental study.
Setting: Surgical research laboratory, Turkey.
Subjects: 158 Swiss albino mice.
Interventions: A defect in the abdominal wall was created. In the control group, the defect was closed primarily. In the extraperitoneal group, polypropylene mesh was sutured over the abdominal wall after primary closure of the peritoneum and in the intraperitoneal group, polypropylene mesh was sutured to close the created defect so that it was in contact with the intestines.
Main Outcome Measures: Bacterial translocation at 4, 24 and 48 hours.
Results: Insertion of mesh in contact with the peritoneum led to increased bacterial translocation to mesenteric lymph nodes at 4 (p = 0.02) and 48 (p = 0.03) hours compared with insertion without contact.
Conclusion: Contact between a foreign body and the peritoneum is required to induce bacterial translocation.
abstract_id: PUBMED:34800188
Intraperitoneal versus extraperitoneal mesh in minimally invasive ventral hernia repair: a systematic review and meta-analysis. Purpose: The ideal location for mesh placement in minimally invasive ventral hernia repair (VHR) is still up for debate. We undertook a systematic review and meta-analysis (SRMA) to evaluate the outcomes of patients who received intraperitoneal mesh versus those that received extraperitoneal mesh in minimally invasive VHR.
Methods: We searched PubMed, EMBASE, Cochrane, and Scopus from inception to May 3, 2021. We selected studies comparing intraperitoneal mesh versus extraperitoneal mesh placement in minimally invasive VHR. A meta-analysis was done for the outcomes of surgical site infection (SSI), seroma, hematoma, readmission, and recurrence. A subgroup analysis was conducted for a subset of studies comparing patients who have undergone intraperitoneal onlay mesh (IPOM) versus extended totally extraperitoneal approach (e-TEP).
Results: A total of 11 studies (2320 patients) were identified. We found no statistically significant difference between patients who received intraperitoneal versus extraperitoneal mesh for outcomes of SSI, seroma, hematoma, readmission, and recurrence [(RR 1.60, 95% CI 0.60-4.27), (RR 1.39, 95% CI 0.68-2.81), (RR 1.29, 95% CI 0.45-3.72), (RR 1.40, 95% CI 0.69-2.86), and (RR 1.22, 95% CI 0.22-6.63), respectively]. The subgroup analysis had findings similar to the overall analysis.
Conclusion: Based on short-term results, extraperitoneal mesh does not appear to be superior to intraperitoneal mesh in minimally invasive ventral hernia repair. The choice of mesh location should be based on the current evidence, surgeon, and center experience as well as individualized to each patient.
abstract_id: PUBMED:9486886
Adhesion formation after intraperitoneal and extraperitoneal implantation of polypropylene mesh. Polypropylene mesh is commonly used in open and laparoscopic hernia repairs. We tested the hypothesis that intra-abdominal adhesion formation secondary to polypropylene mesh is greater when mesh is placed in an intraperitoneal versus an extraperitoneal position. Fifty adult male rats underwent midline laparotomy with or without implantation of a nonabsorbable mesh. There were ten rats in each of the following five groups: EP-M, creation of an extraperitoneal pocket without mesh placement; EP+M, mesh placement in an extraperitoneal pocket; IP+M, intraperitoneal mesh; IT-M, creation of an abdominal wall ischemic defect without mesh placement; IT+M, ischemic defect plus mesh. Adhesion formation was graded on a scale of 0 to 5, 2 weeks after operation. All groups formed adhesions. Tissue injury or the placement of a mesh in an intraperitoneal position resulted in significantly more adhesions. An entirely extraperitoneal approach to mesh placement is needed to minimize adhesions after laparoscopic hernia repair.
abstract_id: PUBMED:37470632
A randomised control trial study of early post-operative pain and intraoperative surgeon workload following laparoscopic mesh repair of midline ventral hernia by enhanced-view totally extraperitoneal and intraperitoneal onlay mesh plus approach. Introduction: The aim of this study was to compare the peri-operative outcomes, especially intraoperative surgeon workload and early post-operative pain, following midline ventral hernia repair by laparoscopic enhanced-view totally extraperitoneal (eTEP) approach and laparoscopic intraperitoneal onlay mesh plus (IPOM plus) approach.
Patients And Methods: This single-centre randomised control trial study was conducted from January 2020 to June 2022. A total of 60 adult patients undergoing elective ventral hernia surgery with small- and medium-sized midline defects were included. Intraoperative surgeon workload and early post-operative pain were systematically recorded and analysed for each procedure.
Results: Out of the 30 patients assigned to each group, 29 patients underwent eTEP mesh repair and 27 patients underwent successful IPOM plus repair. The intraoperative surgeon's workload, especially mental demand, physical demand, task complexity and degree of difficulty as reported by the operating surgeon, was significantly higher in the eTEP mesh repair group than in the IPOM plus group (P < 0.001), with comparable operating room distractions (P = 0.039). The mean overall post-operative pain score on post-operative day 1 was slightly lower in the eTEP mesh repair group (4.28 ± 1.12) than in the IPOM plus group (4.93 ± 1.17), a difference that was not statistically significant (P = 0.042). The eTEP group had a significantly longer operative time and length of hospital stay compared to the IPOM plus group.
Conclusion: Our study revealed significantly longer operative time, higher surgical workload and increased length of hospital stay in the eTEP group with comparable early post-operative pain in both groups, thus making eTEP mesh repair a more difficult and challenging procedure.
abstract_id: PUBMED:30524617
Comparison of slit mesh versus nonslit mesh in laparoscopic extraperitoneal hernia repair. Introduction: Endoscopic hernia repair integrates the advantages of tension-free preperitoneal mesh support of the groin with the advantages of minimally invasive surgical procedures.
Aim: To compare outcomes between slit mesh (SM) and nonslit mesh (NSM) placement in laparoscopic totally extraperitoneal (TEP) inguinal hernia repair.
Material And Methods: This is a retrospective study of 353 patients who underwent TEP inguinal hernia repair between January 2010 and December 2011. One hundred forty-nine and 154 hernias were operated on in the SM and NSM groups, respectively. Postoperative complications, recurrence, early postoperative pain, and chronic pain levels were evaluated.
Results: In a total of 303 patients, hernia repair was performed for 395 direct and indirect hernias. The procedure was converted from TEP to transabdominal preperitoneal patch plasty (TAPP) in 4 patients in the nonslit mesh group and in 6 patients in the slit mesh group. The average operation time of the SM group was significantly longer than that of the NSM group (p < 0.001). In the evaluation of early postoperative pain, VAS levels in the NSM group were statistically significantly lower than those in the SM group at all evaluations (p = 0.001). The rate of chronic pain at 3 months was significantly higher in the SM group than in the NSM group (p = 0.004). There was no difference in recurrence rate, chronic pain at 6 months, wound infection or wound hematoma.
Conclusions: The use of SM and NSM in TEP operations is not different in terms of recurrence and complications. However, the use of NSM gives better results in terms of early postoperative pain and chronic pain.
abstract_id: PUBMED:29359032
Utility of single-incision totally extraperitoneal inguinal hernia repair with intraperitoneal inspection. Aim: To study the utility of single-incision totally extraperitoneal inguinal hernia repair with intraperitoneal inspection.
Methods: A 2 cm transverse skin incision was made in the umbilicus, extending into the peritoneal cavity. Carbon dioxide was insufflated, followed by insertion of a laparoscope to observe the intraperitoneal cavity. The type of hernia was diagnosed, and the presence or absence of intestinal incarceration was confirmed. When intestinal incarceration in the hernia sac was found, forceps were inserted through the incision site and the intestine was returned to the intraperitoneal cavity without increasing the number of trocars. Once the peritoneum was closed, totally extraperitoneal inguinal hernia repair was performed, and finally, intraperitoneal observation was performed to reconfirm the repair.
Results: Of the 75 hernias treated, 58 were on one side, 17 were on both sides, and 10 were recurrences. The respective median operation times for these 3 groups of patients were 100 min (range, 66 to 168), 136 min (range, 114 to 165), and 125 min (range, 108 to 156), with median bleeding amounts of 5 g (range, 1 to 26), 3 g (range, 1 to 52), and 5 g (range, 1 to 26), respectively. Intraperitoneal observation showed a hernia on the opposite side in 2 cases, intestinal incarceration in 3 cases, omental adhesion into the hernia sac in 2 cases, severe postoperative intraperitoneal adhesions in 2 cases, and bladder protrusion in 1 case. There was only 1 case of recurrence.
Conclusion: Single-incision totally extraperitoneal inguinal hernia repair with intraperitoneal inspection makes hernia repair safer and reduces postoperative complications. The technique also has excellent cosmetic outcomes.
abstract_id: PUBMED:31973843
Robotic intraperitoneal onlay versus totally extraperitoneal (TEP) retromuscular mesh ventral hernia repair: A propensity score matching analysis of short-term outcomes. Background: Short-term outcomes of robotic intraperitoneal onlay mesh (rIPOM) versus robotic totally extraperitoneal retromuscular mesh (rTEP-RM) ventral hernia repair were compared.
Methods: A retrospective review of prospectively collected patient data was conducted. A one-to-one propensity score matching (PSM) analysis was performed to achieve two well-balanced groups in terms of preoperative variables. Univariate and multivariate analyses were conducted to determine factors influencing postoperative outcomes.
Results: Of 291 rIPOM and rTEP-RM procedures, 68 patients were assigned to each group after PSM. Operative times were longer for the rTEP-RM group. Adhesiolysis was more frequently required in rIPOM. The rTEP-RM approach allowed for a greater mesh-to-defect ratio. The rates of overall perioperative complications, Clavien-Dindo grades, and surgical site events were higher in the rIPOM group than in the rTEP-RM group. The Comprehensive Complication Index® morbidity scores were lower in the rTEP-RM group. Adhesiolysis, rIPOM, and craniocaudal defect size were predictors of postoperative complications.
Conclusion: Robotic TEP-RM repair has better early postoperative outcomes for ventral hernias, suggesting that it may be preferable over robotic IPOM repair. Further studies with longer follow-up are needed.
abstract_id: PUBMED:15984719
Simultaneous extraperitoneal laparoscopic radical prostatectomy and intraperitoneal inguinal hernia repair with mesh. Objective: This report depicts the feasibility of the concomitant repair of a large direct inguinal hernia with mesh by using the intraperitoneal onlay approach after extraperitoneal laparoscopic radical prostatectomy.
Methods: A 66-year-old man with localized adenocarcinoma of the prostate was referred for laparoscopic radical prostatectomy. The patient also had a 4-cm right direct inguinal hernia found on physical examination. To minimize the risk of mesh infection, an extraperitoneal laparoscopic prostatectomy was performed in the standard fashion, after which transperitoneal access was obtained for the hernia repair. The hernia repair was completed by reduction of the hernia sac, followed by prosthetic mesh onlay. In this fashion, the peritoneum separated the prostatectomy space from the mesh. A single preoperative and postoperative dose of cefazolin was administered.
Results: The procedure was completed with no difficulty. Total operative time was 4.5 hours with an estimated blood loss of 450 mL. The final pathology revealed pT2cN0M0 prostate cancer with negative margins. No infectious or bowel complications occurred. At 10-month follow-up, no evidence existed of recurrence of prostate cancer or the hernia.
Conclusion: Concomitant intraperitoneal laparoscopic mesh hernia repair and extraperitoneal laparoscopic prostatectomy are feasible. This approach can decrease the risk of infectious complications by separating the mesh from the space of Retzius, where the prostatectomy is performed and the lower urinary tract is opened.
abstract_id: PUBMED:31307534
Single-incision totally extraperitoneal hernia repair with intraperitoneal inspection of strangulated femoral hernia at risk for intestinal ischemia after repositioning: a case report. Background: Totally extraperitoneal hernia repair and the transabdominal preperitoneal approach have advantages and disadvantages. We used the advantages of totally extraperitoneal hernia repair and the transabdominal preperitoneal approach and performed single-incision totally extraperitoneal hernia repair with intraperitoneal inspection for the treatment of strangulated femoral hernia in a patient at risk for intestinal ischemia.
Case Presentation: We report a case of a 75-year-old Japanese woman who presented with black vomiting of 5 days' duration. Physical examination revealed a right inguinal bulge and sharp pain. Computed tomography revealed a right strangulated femoral hernia without intestinal ischemia. We were able to reposition the hernia; however, because the onset was 5 days earlier, we performed the operation with the possibility of intestinal ischemia from incarceration of the intestine in mind. Intraperitoneal observation revealed a right femoral hernia and confirmed that the intestinal tract was not ischemic. However, the intestinal tract was dilated because of ileus, and securing a clear field of vision was difficult. Hence, we switched to totally extraperitoneal hernia repair through the same incision and performed single-incision totally extraperitoneal hernia repair with intraperitoneal inspection. The hernia sac was observed at the femoral rings and obturator foramen. The mesh was inserted through the incision, and after it was positioned to cover the Hesselbach triangle, femoral rings, inguinal ring, and obturator foramen, it was fixed with SECURESTRAP®. Observation of the abdominal cavity confirmed complete repair. After the operation, there was no recurrence or other complications.
Conclusion: We report a case of successful single-incision totally extraperitoneal hernia repair with intraperitoneal inspection for the treatment of strangulated femoral hernia in a patient at risk for intestinal ischemia after repositioning.
abstract_id: PUBMED:37629383
Primary Ventral Hernia Repair and the Risk of Postoperative Small Bowel Obstruction: Intra Versus Extraperitoneal Mesh. Objective: The aim of this study was to compare the likelihood of bowel obstruction according to the placement of the mesh (either intraperitoneal or extraperitoneal) in ventral hernia repairs.
Materials And Methods: Patients were divided into two groups, an intraperitoneal (IP) group (mesh placed by laparoscopy or with an open approach) and an extraperitoneal (EP) group, all operated on in the Digestive Surgery Department at the Dijon University Hospital. The primary outcome was the occurrence of an episode of bowel obstruction requiring hospitalization and confirmed by abdominal CT scan.
Results: Between March 2008 and July 2021, 318 patients were included, with 99 patients in the EP group (71 meshes placed preperitoneally and 28 placed retromuscularly) and 219 patients in the IP group (175 patients operated on laparoscopically versus 44 patients by a direct approach). Three patients presented with an episode of acute intestinal obstruction, with no difference between the two groups (p = 0.245), although all bowel obstructions occurred in the IP group and with the laparoscopic approach (1.7% of patients operated on by laparoscopy). The occlusive events occurred at 1 month, 2 years, and 3 years. There was no difference in terms of recurrence or postoperative chronic pain. There were more seromas and mesh infections in the EP group (p < 0.05). Two patients operated on by laparoscopy had undetected bowel injuries, prompting emergent surgery for peritonitis.
Conclusions: No statistically significant difference was found in terms of bowel obstruction between the intraperitoneal and the extraperitoneal position, but all cases of obstruction happened in the intraperitoneal mesh group. Visceral lesions remain a major complication of the laparoscopic approach that should not be neglected.
Answer: The effect of intraperitoneal and extraperitoneal insertion of mesh on bacterial translocation has been studied, and there is evidence to suggest that it does make a difference. An experimental study involving Swiss albino mice found that the insertion of mesh in contact with the peritoneum led to increased bacterial translocation to mesenteric lymph nodes at 4 and 48 hours compared with insertion without contact (PUBMED:10365841). This suggests that contact between a foreign body, such as a mesh, and the peritoneum is required to induce bacterial translocation. Therefore, the placement of mesh in relation to the peritoneum appears to have a significant impact on the risk of bacterial translocation. |